Pravin Barton · Dec 5, 2018

Hi Arun,

The simplest solution is to use the CSP session with a custom login page. That way you can use the built-in CSP authentication. Sergey's answer to this post has a good example: https://community.intersystems.com/post/authentication-options-cach%C3%…. The downside is that it's not truly stateless, and it requires you to serve your web files through CSP.

If your web application isn't connected to CSP, I recommend using OAuth 2.0. This is a little more work since it involves setting up an authorization server. There's an excellent series of tutorials here:

https://community.intersystems.com/post/intersystems-iris-open-authoriz….

Pravin Barton · Jan 2, 2019

You can add table-level privileges on all tables in the namespace by running "GRANT SELECT ON * TO Test", where Test is the name of the user or role. But if a new table is added later, that user or role won't automatically get access to it. I don't know of a way to do that with SQL privileges short of granting %All.
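
A minimal sketch of running that grant programmatically through dynamic SQL (Test is just the example role from above):

set result = ##class(%SQL.Statement).%ExecDirect(, "GRANT SELECT ON * TO Test")
if result.%SQLCODE < 0 { write "Error: ", result.%Message }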

Pravin Barton · May 13, 2019

For the first part of your question, here's a sample method that gets a bearer token from the request:

ClassMethod GetAccessTokenFromRequest(pRequest As %CSP.Request = {%request}) As %String
{
	Set accessToken=""
	// The Authorization header has the form "Bearer <token>"
	Set authorizationHeader=pRequest.GetCgiEnv("HTTP_AUTHORIZATION")
	If $zcvt($piece(authorizationHeader," ",1),"U")="BEARER" {
		If $length(authorizationHeader," ")=2 {
			Set accessToken=$piece(authorizationHeader," ",2)
		}
	}
	Return accessToken
}

EDIT: And here is a full sample of a REST handler that retrieves a bearer token and reuses it to make a request against another REST service.

Class API.DemoBearerToken Extends %CSP.REST
{

Parameter APIHOST = "localhost";

Parameter APIPORT = 52773;

Parameter APIPATH = "/some/other/path";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/example" Method="GET" Call="example"/>
</Routes>
}

ClassMethod example() As %Status
{
	set accessToken = ..GetAccessTokenFromRequest(%request)
	set req = ##class(%Net.HttpRequest).%New()
	set req.Https = 1
	set req.SSLConfiguration = "some ssl config"
	set req.Server = ..#APIHOST
	set req.Port = ..#APIPORT
	set req.Authorization = "Bearer "_accessToken
	$$$ThrowOnError(req.Get(..#APIPATH))
	set %response.Status = req.HttpResponse.StatusCode
	set %response.ContentType = req.HttpResponse.ContentType
	if req.HttpResponse.StatusCode < 400 { // not an error response
		set jsonData = {}.%FromJSON(req.HttpResponse.Data)
		write jsonData.%ToJSON()
	}
	return $$$OK
}

ClassMethod GetAccessTokenFromRequest(pRequest As %CSP.Request = {%request}) As %String
{
	Set accessToken=""
	Set authorizationHeader=pRequest.GetCgiEnv("HTTP_AUTHORIZATION")
	If $zcvt($piece(authorizationHeader," ",1),"U")="BEARER" {
		If $length(authorizationHeader," ")=2 {
			Set accessToken=$piece(authorizationHeader," ",2)
		}
	}
	return accessToken
}

}

Pravin Barton · Aug 13, 2019

Hi Stephen, I think you're on the right track by using custom claims for access control. This is what I've done in the past. Scopes in OAuth are intended to be granted by the user, which is not quite what you want here.

As far as I know there's no way to customize the token response. Your best option is to add the custom claims to the userinfo response. This means adding logic to your ValidateUser() method that will set the claim values and also add them to the list of user info claims.

Set tClaim = ##class(%OAuth2.Server.Claim).%New()
Do properties.UserinfoClaims.SetAt(tClaim,"MyCustomNamespace/MyCustomClaim")
Do properties.SetClaimValue("MyCustomNamespace/MyCustomClaim","something based on the user")

Then when the "resource server" part of your app validates the access token, you can call the userinfo endpoint to get this claim and determine the user's permissions.

$$$ThrowOnError(##class(%SYS.OAuth2.AccessToken).GetUserinfo(appName,accessToken,,.pUserInfo))
set myCustomClaim = pUserInfo."MyCustomNamespace/MyCustomClaim"

If you'd rather not make a separate call to userinfo each time, your other option is to add the claims to the body of the JWT. That would look similar in the ValidateUser() method, but with properties.JWTClaims instead of UserinfoClaims. Then your resource server can validate the signature on the JWT and get the claim from the token body using the ##class(%SYS.OAuth2.Validation).ValidateJWT() method. This is a little more complicated because you have to enforce that the JWT actually has signing enabled (unfortunately, the ValidateJWT() method will accept a token with no signature).
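
A minimal sketch of that resource-server check, reusing appName and accessToken from the userinfo example above (the securityParameters subscript is from memory, so verify it against the %SYS.OAuth2.Validation class reference for your version):

set valid = ##class(%SYS.OAuth2.Validation).ValidateJWT(appName, accessToken, "", "", .jsonObject, .securityParameters, .sc)
$$$ThrowOnError(sc)
// Reject unsigned tokens explicitly, since ValidateJWT alone won't
if 'valid || ($get(securityParameters("sigalg")) = "") {
	$$$ThrowStatus($$$ERROR($$$GeneralError, "JWT is not signed"))
}
set myCustomClaim = jsonObject."MyCustomNamespace/MyCustomClaim"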

Pravin Barton · Jul 8, 2020

Thank you for the zbreak example and linking the tutorial. I've always wanted a way to set conditional breakpoints for debugging but never looked deep enough into the documentation to find it.
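
For anyone else who was looking for the syntax, a conditional breakpoint looks something like this (the routine, label, and condition here are made up):

// Break at MyLabel in ^MYROUTINE only when x > 100
zbreak MyLabel^MYROUTINE:"B":"x>100"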

Pravin Barton · Feb 19, 2021

Hi Neil,

Using OAuth2 in a mirrored environment would require some additional scripting to keep the configuration in sync between mirror members since, as you note, it's stored in %SYS.

The Server Configuration on the auth server won't be changing much over time so I'd recommend writing an installation script that sets up all relevant configuration. Below are some snippets from an installation class I'm using on a Caché authorization server:

ClassMethod CreateServerConfiguration(pOrigNamespace As %String = "%SYS", Output interval As %Integer, Output server) As %Status
{
	Set server = ##class(OAuth2.Server.Configuration).Open(.tSC)
 	If $IsObject(server) {
	 	Set tSC = server.Delete()
	 	If $$$ISERR(tSC) Quit tSC
 	}
	
	Set interval = 3600

	Set server = ##class(OAuth2.Server.Configuration).%New()
	Set server.Description = "Single Sign-On"
	Set issuer = ##class(OAuth2.Endpoint).%New()
	Set issuer.Host = ..#IssuerHost
	Set issuer.Prefix = ..#IssuerPrefix
	Set server.IssuerEndpoint = issuer
	
	Set scopes = ##class(%ArrayOfDataTypes).%New()
	Do scopes.SetAt("OpenID Connect","openid")
	Do scopes.SetAt("E-mail Address","email")
	Do scopes.SetAt("User Profile","profile")
	// Add whatever other custom scopes you need
	Set server.SupportedScopes = scopes
	
	Set server.AllowUnsupportedScope = 0
	Set server.SupportedGrantTypes = "APCI"
	Set server.ReturnRefreshToken = ""
	Set server.AudRequired = 0
	
	Set server.CustomizationRoles = "%DB_CACHESYS,%Manager"
	Set server.CustomizationNamespace = pOrigNamespace
	Set server.AuthenticateClass = ..#CustomAuthenticateClassName
	Set server.ValidateUserClass = ..#CustomValidateClassName
	Set server.GenerateTokenClass = "%OAuth2.Server.JWT"
	
	Set server.AccessTokenInterval = interval
	Set server.RefreshTokenInterval = 3*interval
	Set server.AuthorizationCodeInterval = 120
	Set server.ServerCredentials = ..#ServerX509Name
	Set server.SigningAlgorithm = "RS512"
	Set server.KeyAlgorithm = ""
	Set server.EncryptionAlgorithm = ""
	Set server.SSLConfiguration = ..#SSLConfig
	
	Quit server.Save()
}

ClassMethod CreateServerDefinition(Output server) As %Status
{
	Set tIssuer = ..#EndpointRoot
	
	Set server = ##class(OAuth2.ServerDefinition).%OpenId("singleton")
	Set:'$IsObject(server) server = ##class(OAuth2.ServerDefinition).%New()
	Set server.IssuerEndpoint = tIssuer
	Set server.AuthorizationEndpoint = tIssuer_"/authorize"
	Set server.TokenEndpoint = tIssuer_"/token"
	Set server.UserinfoEndpoint = tIssuer_"/userinfo"
	Set server.IntrospectionEndpoint = tIssuer_"/introspection"
	Set server.RevocationEndpoint = tIssuer_"/revocation"
	Set server.ServerCredentials = ..#ServerX509Name
	Quit server.%Save()
}

The client descriptions are likely to change over time as new clients are registered. I think to keep these in sync between mirror members you'll need to regularly export the relevant globals directly from the primary, transport them to the secondary, and import them into the %SYS namespace. Below are some methods that do the export and import:

ClassMethod ExportClientConfiguration(pDestFile As %String) As %Status
{
	new $namespace
	set $namespace = "%SYS"
	for type = "D","I" {
		set tList("OAuth2.Server.Client"_type_".GBL") = ""
		set tList("OAuth2.Client.Metadata"_type_".GBL") = ""
	}
	set tSC = ##class(%File).CreateDirectoryChain(##class(%File).GetDirectory(pDestFile))
	return:$$$ISERR(tSC) tSC
	return $System.OBJ.Export(.tList,pDestFile,,.errorlog)
}

ClassMethod ImportClientConfiguration(pSourceFile As %String) As %Status
{
	new $namespace
	set $namespace = "%SYS"
	return $System.OBJ.Load(.pSourceFile,,.errorlog)
}

You could use a task to do this regularly on a short schedule.
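
A rough sketch of what that task class could look like (the class name, file path, and MyInstaller helper are hypothetical; OnTask() is the one method %SYS.Task.Definition requires):

Class Demo.OAuthClientSync Extends %SYS.Task.Definition
{

Parameter TaskName = "Export OAuth2 client configuration";

/// Where to write the export on the primary (hypothetical default)
Property DestFile As %String [ InitialExpression = "/tmp/oauth-clients.xml" ];

Method OnTask() As %Status
{
	// MyInstaller is a stand-in for wherever ExportClientConfiguration lives;
	// a matching task on the backup member would call ImportClientConfiguration
	Quit ##class(MyInstaller).ExportClientConfiguration(..DestFile)
}

}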

Pravin Barton · Aug 12, 2021

Hello Martin,

Using "DROP COLUMN" deletes the property from the class definition and modifies the storage definition by removing the property name. The storage definition will still have a "Value" item for the data, but it no longer includes the name of the property.

If you have the class definition in source control, the easiest way to truly delete the data is to revert to the previous version. Then you can run DROP COLUMN again with the %DELDATA option to remove the column and delete the data.

If this is not possible, I would look at the storage definition and find the Value item whose property name is now empty. That item's "name" attribute is the $list position where the orphaned data lives. You could then iterate through the global where the data is stored and do something like set $list(value, slot) = "" to delete the data. I would recommend contacting Support before doing this to see if they have better suggestions.
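
If you do go that route, the loop would be roughly as follows. This is only a sketch: the global name and slot position are invented, and you should have a verified backup (and ideally Support's blessing) before touching storage directly.

// Assumes default storage in ^Sample.PersonD, with slot 3 holding the dropped column's data
set id = ""
for {
	set id = $order(^Sample.PersonD(id))
	quit:id=""
	set value = ^Sample.PersonD(id)
	set $list(value, 3) = ""  // clear the orphaned slot
	set ^Sample.PersonD(id) = value
}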

Pravin Barton · Aug 12, 2021

Here are a couple of ways to avoid <STORE> errors by increasing the per-process memory available to IRIS processes:

  • Increase the 'bbsiz' parameter, either by editing the CPF file or in the System Management Portal under System Administration > Configuration > System Configuration > Memory and Startup.
  • In code, in the specific process throwing the <STORE> error, set the $zstorage special variable to increase the memory available to that process (see the sketch below).
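
A quick sketch of the second option ($zstorage is read and set in kilobytes; 262144 here is an arbitrary 256 MB):

// Check the current per-process limit
write $zstorage
// Raise it for this process only
set $zstorage = 262144
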
Pravin Barton · Aug 12, 2021

The SSO system we use for this Developer Community has a "forgot password" implementation. Unfortunately it is down right now, but under normal circumstances you would be able to try it out here: https://login.intersystems.com/login/SSO.UI.PasswordReset.cls

It works as follows:

  • The user enters their email address into a form. They are then taken to another form with an input for a token.
  • If the email address exists in the system, they are sent an email with a secure random token to input. Otherwise they are sent an email with instructions on how to register for an account.
  • Once the user inputs the token from their email to the page, they are taken to another form to set their new password.

It's important to avoid user enumeration by not revealing in the UI whether or not a user with the provided username or email address exists in the system. You should also hash the password reset tokens before storing them in a database, give them a short lifetime before they expire, and invalidate the token after it's used once.
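
A rough sketch of the token handling in ObjectScript (the 32-byte length and variable names are illustrative):

// Generate a cryptographically secure random token to email to the user
set token = $SYSTEM.Encryption.Base64Encode($SYSTEM.Encryption.GenCryptRand(32))
// Store only a hash of the token, never the token itself
set hash = $SYSTEM.Encryption.SHAHash(256, $zconvert(token, "O", "UTF8"))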

I highly recommend OWASP for more resources on how to do this securely: https://cheatsheetseries.owasp.org/cheatsheets/Forgot_Password_Cheat_Sh…

Pravin Barton · Sep 14, 2021

Hi Joseph, I agree on using Client Credentials for this use case. As far as I know this is the only OAuth 2.0 grant type that authorizes server-to-server communication without the context of a user agent logging in. You can implement this in InterSystems IRIS by overriding the ValidateClient() method of the OAuth validation class: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic…

One thing to keep in mind is that by default anybody can register a new client with your authorization server by using the dynamic client registration endpoint. So the presence of a valid client isn't enough to authorize the API call. You will need some additional authorization logic.
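
To sketch what that additional logic could look like (the allow-list global and claim are hypothetical, and you should double-check the ValidateClient() signature against the class reference for your version):

Class Demo.Validate Extends %OAuth2.Server.Validate
{

ClassMethod ValidateClient(clientId As %String, clientSecret As %String, scope As %ArrayOfDataTypes, Output properties As %OAuth2.Server.Properties, Output sc As %Status) As %Boolean
{
	set sc = $$$OK
	// Only clients we registered ourselves are authorized, regardless of
	// whether dynamic registration produced a valid client record
	if '$data(^MyApp.AllowedClients(clientId)) quit 0
	// Optionally stamp a claim the resource server can check later
	do properties.SetClaimValue("client_type", "trusted-server")
	quit 1
}

}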

Pravin Barton · Dec 2, 2021

Hi Prashanth,

Based on the error status, this looks like an invalid SSL certificate on the REST endpoint. The certificate name is "*.docmansandpit.com" but the server name is "api.ss1.docmansandpit.com", and a wildcard only matches a single label, so the certificate would need "*.ss1.docmansandpit.com" to cover that domain. I would try calling the same endpoint from another test client (or even a web browser) to see if it gives you the same certificate error. If so, you would have to get in touch with the owner of that API to fix their certificate.

If you're only able to replicate the issue in your Caché instance I would contact InterSystems Support.

As a last resort, you can probably disable checking the server certificate by doing:

set httpRequest.SSLCheckServerIdentity = 0

This is not recommended because it's insecure, but it might be useful as a debugging tool.

Pravin Barton · Apr 7, 2022

Note: if you are on a later IRIS version and you're not finding anything in ^%ISCLOG, that's because the log entries have been moved from ^%ISCLOG into the ^ISCLOG global, and they are only accessible in the %SYS namespace. The commands to use it look like this:

set ^%ISCLOG = 2
// do something that will generate logs
set ^%ISCLOG = 0
set $namespace = "%SYS"
zwrite ^ISCLOG

The first version with this new log location is IRIS 2018.1.0.

Pravin Barton · Apr 22, 2022

Good question - it looks like the VS Code plugins only support password authentication for now. I'd encourage opening an issue against the InterSystems Server Manager GitHub project if you have a need for it. In theory this would be possible with some implementation work on the VS Code plugin. You would also need to enable delegated authentication on the /api/atelier web application in IRIS with a custom ZAUTHENTICATE routine to support OAuth.

Pravin Barton · May 12, 2022

If the character stream has JSON-format contents and you'd like to read it into a dynamic entity in ObjectScript, you can simply pass the stream into the %FromJSON method:

set obj = ##class(%DynamicAbstractObject).%FromJSON(stream)

See the documentation for dynamic entity methods here.

Pravin Barton · May 18, 2022

Hi Michael, great questions. A lot of this will depend on your own practices for source code management and deployment. In my team's case we ended up overriding a lot of the %UnitTest behavior to provide reasonable defaults for our process. Hopefully this sparks some more discussion; I'm interested in how other people's answers will differ.

> Are all your unit tests added to .gitignore so they don't get wound up in source code?

No - we want source code for our unit tests to be in source control, for the same reason all other code is in source control. We make sure that unit tests don't end up on production systems by maintaining different branches for test and production. Unit tests are in a separate directory that we don't merge from the test branch to the production branch. This is using Perforce. There might be a different workflow recommended for Git that would give you the same results.

> Why does the normal RunTestCase() method automatically delete the extracted unit test class files from the folder?  Why is that the default?

If I had to guess, this is a good default for a deployment process where you compile everything, run tests, and then copy over the code database to production. In that case you would always want test classes to delete themselves after running. In our case we have a different way of deploying code, so we override the RunTest methods to use the "/nodelete" flag by default.
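
For anyone looking for the flag itself, a plain run with it looks like this ("MyTests" is a placeholder suite name):

do ##class(%UnitTest.Manager).RunTest("MyTests", "/nodelete")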

> When it comes to automated testing (Jenkins specifically) what is the lifecycle?

We use a very similar lifecycle for Jenkins automated testing to the one you describe.

  1. Jenkins pulls all code from the remote repo
  2. Run an %Installer class on the build instance that overwrites the code database so we start from scratch
  3. Load and compile all code from the local workspace into the build instance, including tests. Report any compilation error as a build failure.
  4. Run all tests.
  5. Generate JUnit-format test reports and Cobertura test coverage reports.
Pravin Barton · Jul 14, 2022

It's possible to do this by using a trigger generator. Then you can run GetColumns at class compile time and use the result to write out lines of code using the {fieldName*C} syntax. Just as a warning, generators can be tricky because they add a layer of indirection to your code. The best way to debug is to use the "View Other" command in Studio or VS Code and look directly at the generated code.

Here is some sample code for a trigger generator:

Trigger TestTrigger [ CodeMode = objectgenerator, Event = INSERT/UPDATE, Foreach = row/object ]
{
    set tableName = %compiledclass.SQLQualifiedNameQ
    set st = ##class(%SYSTEM.SQL).GetColumns(tableName,.byname,.bynum,1)
    $$$ThrowOnError(st)
    set i = 1
    while $d(bynum(i)) {
        set xColName = bynum(i)
        do %code.WriteLine(" set ^test("""_xColName_" changed"") = {"_xColName_"*C}")
        set i = i + 1
    }
}
Pravin Barton · Aug 11, 2022

Thank you Alex! $SYSTEM.Process.ClientIPAddress($J) does it; that gives me the IP of the SOAP client.

Pravin Barton · Feb 16, 2023

Very helpful article, thank you for posting. I'm curious if you see any benefit to using columnar storage in a scenario that is also using InterSystems IRIS Business Intelligence (f.k.a. DeepSee) cubes. Columnar storage lets you run analytical queries with aggregates very efficiently in pure SQL. On the other hand IRIS BI pre-computes the aggregates in cubes, which you then must query in MDX. I might be totally off base but they sound like alternatives to each other.

Pravin Barton · Apr 18, 2023

Here's a simple example I wrote up and tested based on documentation.

A web service class on the server:

/// Sample.MyService
Class Sample.MyService Extends %SOAP.WebService
{

/// Name of the WebService.
Parameter SERVICENAME = "MyService";

/// TODO: change this to actual SOAP namespace.
/// SOAP Namespace for the WebService
Parameter NAMESPACE = "http://tempuri.org";

/// Namespaces of referenced classes will be used in the WSDL.
Parameter USECLASSNAMESPACES = 1;

Method ReceiveFile(attachment As %GlobalBinaryStream) [ WebMethod ]
{
	set filestream = ##class(%Stream.FileBinary).%New()
	$$$ThrowOnError(filestream.LinkToFile("C:\temp\file_"_$username_$zts_".out"))
	do filestream.CopyFrom(attachment)
	$$$ThrowOnError(filestream.%Save())
}

}

A web client class on the client. This was generated with the SOAP wizard in Studio. Only the datatype of the attachment argument to ReceiveFile has been modified.

Class MyService.Client.MyServiceSoap Extends %SOAP.WebClient [ ProcedureBlock ]
{

/// This is the URL used to access the web service.
Parameter LOCATION = "http://localhost:52773/csp/user/Sample.MyService.cls";

/// This is the namespace used by the Service
Parameter NAMESPACE = "http://tempuri.org";

/// Use xsi:type attribute for literal types.
Parameter OUTPUTTYPEATTRIBUTE = 1;

/// Determines handling of Security header.
Parameter SECURITYIN = "ALLOW";

/// This is the name of the Service
Parameter SERVICENAME = "MyService";

/// This is the SOAP version supported by the service.
Parameter SOAPVERSION = 1.1;

Method ReceiveFile(attachment As %GlobalBinaryStream) [ Final, ProcedureBlock = 1, SoapBindingStyle = document, SoapBodyUse = literal, WebMethod ]
{
 Do (..WebMethod("ReceiveFile")).Invoke($this,"http://tempuri.org/Sample.MyService.ReceiveFile",.attachment)
}

}

And some sample code for the client to use this class to send a file:

/// get the file
set filestream = ##class(%Stream.FileBinary).%New()
$$$ThrowOnError(filestream.LinkToFile(pFileName))

/// create the attachment
set attachment = ##class(%GlobalBinaryStream).%New()
do attachment.CopyFrom(filestream)

/// create the client and send the file
set client = ##class(MyService.Client.MyServiceSoap).%New()
set client.Username = "redacted"
set client.Password = "redacted"
do client.ReceiveFile(attachment)

This will include the entire base-64-encoded file in the body of the SOAP message. An even better way would be to use MTOM attachments for the file. See the documentation here for more details about how to do that: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSOAP_mtom
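
If you do go the MTOM route, I believe the client side is mostly a property flip on the same generated class (MTOMRequired is a %SOAP.WebClient property):

// Send the attachment as an MTOM attachment instead of inline base-64
set client = ##class(MyService.Client.MyServiceSoap).%New()
set client.MTOMRequired = 1
do client.ReceiveFile(attachment)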

Pravin Barton · May 16, 2023

Very helpful tool! This is much better than testing everything manually. I'll be adding it into the build pipeline for my system using IRIS BI.

Pravin Barton · Sep 12, 2023

If your VS Code workspace has an active connection to an ObjectScript server, you can do this:

  • Open the local XML file in VS Code
  • Right click and choose "Preview XML as UDL"
  • In the preview window, right click and choose "Import and Compile Current File"
Pravin Barton · Sep 13, 2023

This can happen if the routine contains a control character that cannot be represented as text in XML, which forces the export to fall back to base64. Here is an example from a routine I created:

> set routine = ##class(%Routine).%OpenId("pbarton.test.MAC")
> zw routine.Read()
"pbarton"_$c(10)_" write ""hello"_$c(7)_""""

You can see the routine contains $c(7), which is a non-printable ASCII character. When I export the routine it looks like this:

<?xml version="1.0" encoding="UTF-8"?><Exportgenerator="IRIS"version="26"zv="IRIS for Windows (x86-64) 2023.2.0L (Build 159U)"ts="2023-09-13 11:20:00"><RoutineBase64name="pbarton.test"type="MAC"languagemode="0"timestamp="66730,40577.6434199">cGJhcnRvbgogd3JpdGUgImhlbGxvByI=
</RoutineBase64></Export>
Pravin Barton · Nov 3, 2023

Hello Alan, we are lacking documentation that explains what each of those menu items does. I logged a GitHub issue here to add that: https://github.com/intersystems/git-source-control/issues/296

You mention having an existing application with a lot of code already in source control that you would like to migrate to Git. What I might do for this situation is initialize a new Git repository and copy all the files from your older source control system into the repo. You can then configure git-source-control to use this new repository for source control. The "Import All" option will import the files from this new repository into IRIS.

In the meantime, here's a quick and dirty explanation of the options you mention:

  • Status: outputs the results of "git status" to the source control output
  • Settings: opens a web page where you can configure git-source-control settings
  • Launch Git UI: opens a web page where you can perform basic Git commands graphically
  • Push to remote branch: equivalent of "git push"
  • Fetch from remote: equivalent of "git fetch"
  • Pull changes from remote branch: equivalent of "git pull", plus a call to the pull event handler
  • Export All: exports all newly changed items in IRIS to the Git repository
  • Export All (Force): exports all items in IRIS to the Git repository, including those with older timestamps
  • Import All: imports all items in the Git repository to IRIS if the version in IRIS is outdated
  • Import All (Force): imports all items in the Git repository to IRIS
Pravin Barton · Feb 26, 2024

This is a very delayed answer to an old question, but there is now a $zconvert mode in IRIS that will do this for you:

> write $zconvert("Árvore", "A")

Arvore
Pravin Barton · Apr 2, 2024

In older versions, trying to run "do $i(a)" would throw a <SYNTAX> error; you could instead use "if $i(a)" or "set a = $i(a)" to do the same thing. Support for "do $i(a)" was added in IRIS 2019.1 because it's a nicer-looking syntax. So you can treat them as identical, and the only reason to care either way is if you want to write code that is backwards-compatible with older Caché / IRIS versions.
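
To spell out the equivalence:

// All three increment a; the first two work on any version
set a = $increment(a)
if $increment(a)  // note: this form also sets $test
do $increment(a)  // IRIS 2019.1 and later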