Nigel Salm · Oct 1, 2020 go to post

Hi

Just to pad out that example:

set gbl="^%SYS",x="",file="c:\temp\myfile.txt"

open file:("WNS"):0

else  w !,"Unable to open file" quit

for  set x=$o(@gbl@(x)) q:x=""  use file zw @$ZR  w !

close file

for multiple globals:

set file="c:\temp\text.txt"

open file:("WNS"):0

for gbl="^ABC","^XYZ","^PQTY","^%Nigel"  set x="" for  set x=$o(@gbl@(x)) use file w:x="" !! q:x=""  zw @$ZR

close file

Nigel

Nigel Salm · Oct 1, 2020 go to post

Hi

The way that I have down this in the past is as follows:

1) Export the classes/DTL's/CSP Pages etc into an Export XML file.

2) Create an array of the strings you want to identify

 e.g.

set array({string1})="",array({string2})=""....array({stringN})=""

ClassMethod FindStrings(pFile As %String = "", ByRef pList As %String) As %Status
{
    set tSC=$$$OK
    try {
        open pFile:("R"):0
        if '$test set tSC=$$$ERROR(5001,"Unable to open file: "_pFile) quit
        for {
            use pFile read line
            set x="" for {
                set x=$o(pList(x)) q:x=""
                if line[x {
                    // Do whatever you want to do when you find a line that contains one of the string values you are searching for
                }
            }
        }
    }
    catch ex {
        if $ZE["<ENDOFFILE>" {set tSC=$$$OK}
        else {set tSC=ex.AsStatus()}
    }
    close pFile
    quit tSC
}

call the method as follows:

set sc=##class(MyClass).FindStrings({File_Name},.array)

Yours

Nigel

Nigel Salm · Oct 1, 2020 go to post

To add to the above reply: %Status is not just a Boolean TRUE (1) or FALSE (0). It is a complex structure: if the status is false it is represented as the character 0 followed by a $LIST containing all of the error information.

If you want to display the error text in SQL then create a calculated field alongside the field that stores the %Status value. Suppose that field is called MyClass.Status: create a new field MyClass.StatusErrorMessage and flag it as 'Calculated'.

Then implement the property's Get method so that the error text is always derived from the stored status:

Method StatusErrorMessageGet() As %String
{
     quit $system.Status.GetErrorText(..Status)
}
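Putting the pieces together, a minimal sketch of such a class might look like this (the class and property names are my own, for illustration only; adding SqlComputed makes the error text directly queryable from SQL):

```objectscript
Class MyApp.MyClass Extends %Persistent
{

/// Stores the %Status returned by some operation
Property Status As %Status;

/// Calculated error text derived from Status; never stored, always computed
Property StatusErrorMessage As %String(MAXLEN = 500) [ Calculated, SqlComputed, SqlComputeCode = {set {*}=$system.Status.GetErrorText({Status})} ];

}
```

With this definition, SELECT StatusErrorMessage FROM MyApp.MyClass returns the human-readable error text without any trigger or OnBeforeSave logic.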

Nigel

Nigel Salm · Oct 16, 2020 go to post

Hi

The type of content most appreciated by members of the community is articles on some aspect of the technology (Cache, Ensemble, IRIS), or on a specific usage of the technology that you have worked with, understand well, and maybe have learnt a few tips and tricks about that are not covered by the core product documentation. This is especially true when you happen to have used one of the more obscure features of the technology in a development project, battled to get it to work, found the documentation lacking, found few if any posts on the subject in past community threads, and found no examples in the samples supplied in the Cache/Ensemble/IRIS "Samples" namespace or in the ../intersystems/..../dev directory.

To give you an example: some years back I was working on a project where we were building a prototype robot. I was writing the Ensemble component that would interface with the outside world and translate those instructions into calls to a Java code base that controlled the motors, sensors and other mechanical components of the robot. The developer I was working with knew all of the Java stuff and I knew all the Ensemble stuff, and to make the two technologies talk to each other we had to make use of the Java Gateway. We read the documentation. It seemed straightforward enough. I had had a lot of experience working with most of the Ensemble Adapters so I was expecting things to go smoothly.

But they didn't. We re-read the documentation. We looked at the examples, we asked questions in the Developer Community, we contacted WRC, but still we could not get it to work. Eventually my colleague found a combination of bits and pieces of the Java Gateway that he merged together, and we finally got the interface working.

To this day I still don't understand why the gateway did not work the way the documentation said it should, nor exactly why the solution we eventually put in place did work.

At the time we were still experimenting with the Java Gateway and realised that the documentation only took us so far. It would have been great to find an article in the Developer Community written by someone who had used the Gateway, had found the specific things that need to be set up correctly for it to work, and had included some code snippets. If I had found such an article and it had helped us get our Gateway working (we struggled with it for 2 months; it should have taken 2 days), I would have sent that person a bottle of our famous South African artisan gin and a large packet of the South African delicacy "biltong" (dried beef, kudu, springbok or ostrich meat) as a thank you.

These days the focus is on IRIS and IRIS for Health. There is huge interest in  FHIR and the various interoperability options for IHE, PIX, PDQ,  HL7, Dicom, CDA and so on. 

I have been quite active on the DC for many years, and since the Covid-19 lockdown I have had more time to answer questions. I too am thinking of articles, code solutions and the like that I can write up or package for the Open Exchange applications page. I have even got to the point where I have invested in some lighting gear and a tool called Doodly, which allows you to create animated videos to explain concepts or processes to achieve a desired solution or outcome. I hope to start publishing some of these articles in the near future.

So I hope that these observations will encourage you to find good subject material to write up and publish.

Nigel

Nigel Salm · Oct 16, 2020 go to post

Hi

Even though LabTrak is not really covered in this group, let me give you a little insight into how LabTrak works:

The LabTrak data is stored in a number of globals. There are a couple of key globals that you need to be aware of:

^TEPI

^TDEB

^THOS

^TEPI contains all of the LabTrak episodes; within each episode there is a sub-array of Test Sets, and within that a sub-array of Test Items.

The global structure was designed in pre-Cache Objects days, so the globals use delimiters to separate one field from another. Once you can navigate the global structure you will find that the fields either contain data (String, Integer, Boolean etc.) or codes that point to one of about 50 code tables.

All of the logic of LabTrak is written in Cache ObjectScript routines.

Most of the routines are hidden in the sense that the source code is not installed, just the compiled code. However, there are some callable entry points into the key areas of the application. Depending on what you want to do, there are appropriate labels that can be called to retrieve, insert, update or delete data. You really don't want to play with these routines unless you have been given training by Trak or the InterSystems Trak Sales Engineers.

The data structures (globals) have been mapped to classes, so there are classes for all of the different logical components of the database. You can run SQL queries against these tables, but you absolutely do not want to use INSERT, UPDATE or DELETE SQL statements: all updates to the database are controlled through the entry points I mentioned before.

For basic retrieval of data your best bet is to use the table definitions. You need to create an InterSystems ODBC DSN to connect to the LabTrak database; once connected you can view the various schemas and the tables within them, and write SQL SELECT statements to get the data you are looking for. I would recommend that you run such queries against the Disaster Recovery mirror of the LabTrak database rather than production. The DR database(s) are real-time copies of the production database and most reports are run against the DR data. LabTrak is a very extensive and complicated application, finely tuned by the Trak experts to run at optimal efficiency and with proper data integrity, and large SQL queries could affect the performance of the production system. The system operators would need to give you access to the appropriate servers, and bear in mind that you are working with sensitive, confidential patient data that must be respected at all times.
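As a sketch, once you know the real table names from the ODBC schema browser, a read-only query from ObjectScript could look like this (the schema/table name EPI.Episode and the column names below are placeholders of my own, NOT actual LabTrak names):

```objectscript
// Minimal read-only query sketch using %SQL.Statement.
// EPI.Episode, EpisodeNumber and PatientName are hypothetical names.
set stmt=##class(%SQL.Statement).%New()
set sc=stmt.%Prepare("SELECT TOP 10 EpisodeNumber, PatientName FROM EPI.Episode")
if sc {
    set rs=stmt.%Execute()
    while rs.%Next() {
        write !,rs.%Get("EpisodeNumber")," ",rs.%Get("PatientName")
    }
}
```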

Every LabTrak site has customer-specific routines that can be edited. These routines contain 'insert', 'update' and 'delete' labels, and you can write code in these labels to pull data from the database and use it, for example, to create HL7 messages or to create entries in a queue that can be processed by an Ensemble Production.

The application supports HL7 interoperability that can be used to pass HL7 data into LabTrak and to generate HL7 result messages to send out from LabTrak, but again you would need the appropriate training to understand how that functionality works.

As has been mentioned in one of the other replies, your best bet is to connect with your LabTrak Project Manager or Sales Engineer to find out more and to find out what training material and documentation exists.

Good Luck

Nigel

Nigel Salm · Oct 31, 2020 go to post

Hi

The way that I have dealt with this in the past is as follows:

1) Create a DTL that accepts the incoming HL7 message as its source and a target class of EnsLib.HL7.Message with a message structure of 2.5.1:ACK. I am using HL7 2.5.1 but it will work with any version of HL7 from 2.3 upwards (probably earlier versions as well, but I have not worked with versions earlier than 2.3).

2) When you invoke the Transform() method of a DTL you will notice that there are three parameters:

  1. pRequest (which is your source message)
  2. pResponse (which is the generated target message)
  3. aux

If you read the documentation: if the transform is invoked from a Business Rule then aux is an object containing information about the Rule that invoked the transform and a couple of other properties. However, if you are invoking the transform from Cache ObjectScript then aux can be an instance of a class you create.

The way that I use 'aux' is as a mechanism for getting information into the DTL that is not present in either the source or target objects. In this example I want to send in the ACKCode and the ACKMessage.

So my DTL looks like this. (I apologise for having to paste in the class code, but I have never found a way to attach classes to a Community reply.)

My DTL Class reads as follows:

Class Example.Transformations.CreateNACKDTL Extends Ens.DataTransformDTL [ DependsOn = EnsLib.HL7.Message ]
{

Parameter IGNOREMISSINGSOURCE = 1;

Parameter REPORTERRORS = 1;

Parameter TREATEMPTYREPEATINGFIELDASNULL = 0;

XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
{
<transform sourceClass='EnsLib.HL7.Message' targetClass='EnsLib.HL7.Message' sourceDocType='2.5.1:ADT_A01' targetDocType='2.5.1:ACK' create='new' language='objectscript' >
<assign value='source.{MSH:FieldSeparator}' property='target.{MSH:FieldSeparator}' action='set' />
<assign value='source.{MSH:EncodingCharacters}' property='target.{MSH:EncodingCharacters}' action='set' />
<assign value='source.{MSH:SendingApplication.NamespaceID}' property='target.{MSH:SendingApplication.NamespaceID}' action='set' />
<assign value='source.{MSH:SendingApplication.UniversalID}' property='target.{MSH:SendingApplication.UniversalID}' action='set' />
<assign value='source.{MSH:SendingApplication.UniversalIDType}' property='target.{MSH:SendingApplication.UniversalIDType}' action='set' />
<assign value='source.{MSH:SendingFacility.NamespaceID}' property='target.{MSH:SendingFacility.NamespaceID}' action='set' />
<assign value='source.{MSH:SendingFacility.UniversalID}' property='target.{MSH:SendingFacility.UniversalID}' action='set' />
<assign value='source.{MSH:SendingFacility.UniversalIDType}' property='target.{MSH:SendingFacility.UniversalIDType}' action='set' />
<assign value='source.{MSH:ReceivingApplication.NamespaceID}' property='target.{MSH:ReceivingApplication.NamespaceID}' action='set' />
<assign value='source.{MSH:ReceivingApplication.UniversalID}' property='target.{MSH:ReceivingApplication.UniversalID}' action='set' />
<assign value='source.{MSH:ReceivingApplication.UniversalIDType}' property='target.{MSH:ReceivingApplication.UniversalIDType}' action='set' />
<assign value='source.{MSH:ReceivingFacility.NamespaceID}' property='target.{MSH:ReceivingFacility.NamespaceID}' action='set' />
<assign value='source.{MSH:ReceivingFacility.UniversalID}' property='target.{MSH:ReceivingFacility.UniversalID}' action='set' />
<assign value='source.{MSH:ReceivingFacility.UniversalIDType}' property='target.{MSH:ReceivingFacility.UniversalIDType}' action='set' />
<assign value='$tr($zdt($h,3),",: ","")' property='target.{MSH:DateTimeOfMessage}' action='set' />
<assign value='source.{MSH:Security}' property='target.{MSH:Security}' action='set' />
<assign value='source.{MSH:MessageControlID}' property='target.{MSH:MessageControlID}' action='set' />
<assign value='"ACK"' property='target.{MSH:MessageType.MessageCode}' action='set' />
<assign value='source.{MSH:MessageType.TriggerEvent}' property='target.{MSH:MessageType.TriggerEvent}' action='set' />
<assign value='"ACK"' property='target.{MSH:MessageType.MessageStructure}' action='set' />
<assign value='source.{MSH:ProcessingID}' property='target.{MSH:ProcessingID}' action='set' />
<assign value='source.{MSH:VersionID}' property='target.{MSH:VersionID}' action='set' />
<assign value='source.{MSH:SequenceNumber}' property='target.{MSH:SequenceNumber}' action='set' />
<assign value='source.{MSH:ContinuationPointer}' property='target.{MSH:ContinuationPointer}' action='set' />
<assign value='source.{MSH:AcceptAcknowledgmentType}' property='target.{MSH:AcceptAcknowledgmentType}' action='set' />
<assign value='source.{MSH:ApplicationAcknowledgmentTyp}' property='target.{MSH:ApplicationAcknowledgmentTyp}' action='set' />
<assign value='source.{MSH:CountryCode}' property='target.{MSH:CountryCode}' action='set' />
<assign value='source.{MSH:PrincipalLanguageOfMessage}' property='target.{MSH:PrincipalLanguageOfMessage}' action='set' />
<assign value='source.{MSH:AltCharsetHandlingScheme}' property='target.{MSH:AltCharsetHandlingScheme}' action='set' />
<assign value='aux.ACKCode' property='target.{MSA:AcknowledgmentCode}' action='set' />
<assign value='aux.ACKMessage' property='target.{MSA:TextMessage}' action='set' />
</transform>
}

}

My AUX class definition looks like this:

Class Example.Transformations.CreateNACKDTL.AUX Extends %Persistent
{

Property ACKCode As %String;

Property ACKMessage As %String(MAXLEN = 200);

}

To generate the HL7 ACK my code reads:

ClassMethod GenerateACKMessage(pRequest As EnsLib.HL7.Message, ByRef pResponse As EnsLib.HL7.Message, pACKCode As %String(VALUELIST=",CA,CE,CR,AA,AE,AR") = "AA", pACKMessage As %String(MAXLEN=500) = "") As %Status
{
    set tSC=$$$OK
    try {
        set aux=##class(Example.Transformations.CreateNACKDTL.AUX).%New()
        set aux.ACKCode=pACKCode,aux.ACKMessage=pACKMessage
        set tSC=##class(Example.Transformations.CreateNACKDTL).Transform(pRequest,.pResponse,.aux) if 'tSC quit
    }
    catch ex {set tSC=ex.AsStatus()}
    quit tSC
}

Notice that the DTL takes the SendingFacility, SendingApplication, ReceivingFacility and ReceivingApplication and swaps them around in the DTL.

You should keep the MessageControlID the same as the incoming HL7 Message MessageControlId so that the ACK can be linked to the original HL7 request. 

The MessageTimeStamp can be updated to the current date/time.

The 'aux' mechanism is very useful. Unfortunately the documentation has one line of comment that says "if the transform is called from ObjectScript aux can contain anything you want" or words to that effect.

So I tested it in a simple example like the one above and it does indeed work, and I now use it in the 30+ DTLs I am working with at the moment.

Nigel

Nigel Salm · Nov 5, 2020 go to post

Hi

If you have a simple INSERT SQL statement in the form:

INSERT into {Table} VALUES ('abc','zyx',123,'',NULL)

then Cache SQL stores '' (the SQL empty string) as $c(0) in the global record, and NULL as "" (the ObjectScript empty string).

NULL is a valid keyword in SQL not only for testing conditions such as "WHERE {column} IS NOT NULL" but also when being used in the context of values in my INSERT example and also in Class Methods or Class Queries that are projected as SQL Stored Procedures. 

So if you have a Class Method projected as a Stored Procedure then if you call the stored procedure as follows:

call MyTable_MyProcedure('abc','xyz',123,'',NULL) 

then the values for the parameters will be

p1 = "abc"

p2 = "xyz"

p3 = 123

p4 = $c(0)

p5 = ""
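So, in the class method behind such a stored procedure, you can distinguish the two cases explicitly. A minimal sketch using the parameter names from the example above (the method body is my own illustration, not from any shipped class):

```objectscript
ClassMethod MyProcedure(p1 As %String, p2 As %String, p3 As %Integer, p4 As %String, p5 As %String) As %String [ SqlProc ]
{
    // A SQL '' literal arrives as $c(0); a SQL NULL arrives as the ObjectScript empty string ""
    if p4=$c(0) quit "p4 was the SQL empty string ''"
    if p5="" quit "p5 was SQL NULL"
    quit "no empty values"
}
```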

Yours

Nigel

Nigel Salm · Nov 6, 2020 go to post

What Access Rights do you have as a Cache/Ensemble/IRIS user? You will need to have the appropriate Role to be able to perform this task

Nigel

Nigel Salm · Dec 6, 2020 go to post

and to add to my colleagues' replies....

You can create Ensemble productions where Business Services create request messages and send them asynchronously to one or more Business Processes. If the Pool Size on those Business Processes is greater than 1 then you will end up with multiple Business Process instances simultaneously processing the request messages in the queue for that Business Process. Likewise, Business Services can have a pool size > 0 and the same paradigm works: Ensemble will evaluate the number of messages in the queue for that Service and if need be it will add more instances of the service to process the queue. The only thing you need to worry about is that when a message is picked up by a service off the queue it should be locked so that another instance of the service or process won't pick up the same message.

Apart from that, Cache has been optimised over the last 20 years to offer the best transactional processing power of any mainstream database. Concepts such as threading in other technologies require extra work by developers to implement, whereas in Cache you can take it for granted that all of that low-level stuff is intrinsically taken care of for you by the Cache kernel.

Nigel Salm · Dec 18, 2020 go to post

Hi Yuri

If you could give me some idea of what you are trying to do I might be able to either find some examples or create an example for you

Yours

Nigel

Nigel Salm · Dec 18, 2020 go to post

Hi

I have written about 20 integration Productions over the last 10 years and have worked for and with InterSystems for nearly 30 years. The InterSystems documentation is very good; however, it takes a certain amount of experience to get the most out of it.

What I would suggest is that you look in the SAMPLES namespace as there are a number of example productions and other features of Ensemble and IRIS.

If you are working in the Cache Studio then use the "Project" -> "Add Item" to add all of the examples into your workspace.

The second thing I would advise is that you take a look at Microsoft VS Code, which has a Cache ObjectScript extension as well as a few others such as a lint extension, support for multiple connections and so on. The advantage of working with VS Code is that it is quite possible that you will be working with other technologies such as web frameworks like Angular, or Python. VS Code also integrates with Git (Git for Windows, Tortoise).

I have to confess that I am definitely more at home in a Windows environment than in Linux, and I am waiting for a hard drive upgrade to my laptop in order to install and work with containers. There is of course a lot of emphasis on Linux/Unix and containers in everything you read in the documentation and in the product distribution kits. But once you have your O/S, your containers and your cloud environment in place, the first thing you'll notice is that Cache, Ensemble and IRIS look and behave identically on all of these platforms and technologies. So anything I say about memory management or Interoperability Adapters or coding standards is true across all platforms.

You happen to have asked a question that I am very passionate about: getting the most out of Cache ObjectScript, coding standards, designing your database classes/tables; how to get the most out of Cache Objects, Cache SQL and Cache Direct; how InterSystems implements Object Orientation as well as relational SQL without having to duplicate, replicate or manipulate the underlying data in order to provide a seamless integration between the two; and how to access the database directly, bypassing objects and SQL. Why would I want to do this? What are the benefits? What are the pitfalls?

Then there is the whole topic of designing great Integration Productions. There are many concepts to consider here:

1) How is data getting into your production. How to design and configure your inbound and outbound adapters.

1.1) How to transform data using the powerful string manipulation functions that are intrinsically at the heart of Cache.

2) How to create Request and Response Messages to send salient information from your Business Services to Business Processes and/or Business Operations (Pushing data out of the Production, interacting with external third party applications and/or databases), Interacting with .Net and Java.

3) Build your UI: What are your options? Cache Server Pages, ZEN, Bootstrap, Angular, Python?

4) Synchronous vs Asynchronous Calls

5) BPL: Pro's and Con's

6) Data Transformations (DTL's)

7) HL7 and FHIR (I am deeply involved in FHIR at the moment) and other standards DICOM, LDAP, CDA, CCDA, HTTP, SOAP/Web Services, Java, .Net, TCP, ASTM, JSON, XML and the 43 other variations thereof

8) Reusability of code

9) Debug Logging

10) House Keeping

11) Alerting, What does IRIS offer in the way of creating Alerts and monitoring of those Alerts, assigning alert issues to specific Users. Alternatively making use of a custom Alerting system (I have one of those which I am willing to share)

12) How to configure your servers. Disk allocation, memory allocation, process memory

13) Understanding how IRIS manages data: retrieving data from disk and managing that data in memory. What happens when data is modified in memory? Do other users see that modified data? When a page of data in memory needs to be written back to disk, how does that happen? What happens if the server crashes in the middle of that physical update to disk, and how does it recover?

14) Why is IRIS so fast? How are Globals (the underlying physical storage structures that are projected through object classes or relational tables) structured?

14.1) Indexing: Conventional Indexing, Bitmap and BitSlice Indexing, iFind Indexing. What are the differences? When to use them, the power of iFind Indexing on unstructured but important data such as Names and Addresses

15) How to optimise the I/O of data from your disk system: databases, journalling, write image journalling, archive databases

16) Security: Auditing, Access Control, Encryption

17) InterSystems Learning Services

18) The Developer Community

19) InterSystems Open Exchange

So, as you may have gathered, I could write anything from 1 page to an entire chapter based on my personal experience on every one of these topics, so let me know which topics are most critical to you and I will do my best to answer each of them with documentation and in some cases code examples.

One of the tricks is absolutely the issue of standards, naming conventions, templates for classes, methods, property definitions, foreign keys, and the difference between how IRIS manages objects vs. relational tables. What are the implications when you want to write functions that can be called by both paradigms? For example, using the %OnNew, %OnBeforeSave and %OnAfterSave callbacks vs. SQL INSERT, UPDATE and DELETE triggers.

Methods and class queries projected as stored procedures can be invoked by external relational environments yet behave as object functions within IRIS.

What would help is if you could give me some idea of your background. What technologies did you work with before you finally reached "The One True Database and Development Technology for Dedicated, Enthusiastic and Passionate System Architects and Developers"?

Is your experience primarily in the relational world of Microsoft and Oracle, Java, web development and so on? Knowing this will help me draw parallels between what you already know and how best to bridge the gap into Ensemble and IRIS.

Looking forward to hearing from you

Nigel Salm

Nigel Salm · Dec 18, 2020 go to post

I wrote my first reply in rather a rush. What would help me is to understand how you came to the world of InterSystems and where it fits in your future development plans.

What I can say is that it is one of the most consistent environments in which to develop applications. Once you have understood the basics of the language, the concepts of classes and tables, and the common approach to the multitude of adapters supported for interoperability, you will find that no matter what you want to do within the technology, everything is built upon a very simple yet powerful paradigm: Keep It Simple. There is no fluff in IRIS. There are no masses of libraries that have to be included and compiled into each component of your application. Object Orientation (of which I am a dedicated advocate) is so powerful that once you have written some core classes and methods, you can wrapper them so that they can be seen as Java methods or relational stored procedures just by adding a couple of keywords and possibly running them through a wizard that creates the projection wrapper to expose them to the outside world. You will come to realise that the symbiosis of data and logic is unlike anything you will have experienced in other technologies.

If you give me a relatively experienced developer with a passion for developing software, I can train that developer within 2 weeks to become an IRIS developer capable of creating classes (and implicitly tables), knowing when to use SQL rather than objects, and creating a basic Production that consumes data from an external source, manipulates and transforms that data, and either stores it within IRIS or sends it on to some other 3rd party application. Give me another 2 weeks spread over 2 months and I will help that developer tackle specific issues relating to any project that you start developing in IRIS. So give it some thought and let me know how best I can get you up and running in what are probably the most innovative and enjoyable technologies on the market today.

Nigel Salm · Dec 18, 2020 go to post

Hi

I was going to offer to write such a function but it appears that other developers already have solutions for your request. Pity, it's a nice challenge.

Nigel

Nigel Salm · Dec 18, 2020 go to post

Lol, I was so busy reading all the other posts otherwise I would have sat down and written it.

Would you like to share your code with me (*** please ***)

Nigel

Nigel Salm · Dec 20, 2020 go to post

That's really cool. I had a need for exactly this functionality a while back but had to find a different solution. I wish I had this then.

Nigel Salm · Jan 11, 2021 go to post

It almost reminds me of a language called APL that I learnt while I was training as an Actuary. It was an entirely symbolic language designed to manipulate 2- or 3-dimensional tables for actuarial life tables. I loved it. You could write an entire program in about 10 minutes and then spend 2 weeks explaining what the code meant afterwards.

Nigel

Nigel Salm · Jan 18, 2021 go to post

Hi

In one of my Zen applications I built a custom login screen, and in the Web Application definition I call that login CSP page. In my case I maintain my own User/Role tables, and when I add or modify a user I use the Ensemble security method calls to create or update the user in Ensemble.

The Zen Custom Login forms are designed to validate the user against the Ensemble Users Table.

Given that in your login page the user and password fields can be accessed through the DOM, you can get the values from those fields by specifying an "onchange" or equivalent JS event handler, and you can then write the values entered by the user into your own "Login Audit Table". Likewise you can detect when the Zen session has closed, so you could trap that and update your "Login Audit Table" to indicate when the user logged out.

In the Zen documentation they give an example of a simple custom login form:

<page xmlns="http://www.intersystems.com/zen" title="">
  <loginForm id="loginForm" >
    <text name="CacheUserName" label="User:" />
    <password name="CachePassword" label="Password:" />
    <submit caption="Login" />
  </loginForm>
</page>

So you could trap the user name and password values (the password value will have been encrypted, so if you needed to see the actual password value you would have to decrypt it).

The only problem I foresee is that once you have submitted the form you won't know whether the user has logged in successfully until your home page displays, so you would have to store those values in a temp global; then, if you reach your home screen, you know they logged in successfully and you can create your login audit record.
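As a sketch of the audit side of that idea (the global ^LoginAudit, its structure and the method name are my own invention, purely for illustration):

```objectscript
/// Hypothetical helper: record a successful login once the home page is reached.
ClassMethod RecordLogin(pUser As %String) As %Status
{
    set tSC=$$$OK
    try {
        // store the login date/time ($h) under the user name
        set ^LoginAudit(pUser,$h)="login"
    }
    catch ex {set tSC=ex.AsStatus()}
    quit tSC
}
```

A matching entry (e.g. value "logout") could be written when you detect that the Zen session has closed.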

Given the way that the Cache/Ensemble/IRIS documentation is structured, you may well find a section elsewhere in the Zen documentation that tells you how to trap the outcome of the 'submit', but I have not attempted to see if my theory returns any more useful information.

Nigel

Nigel Salm · Jan 18, 2021 go to post

Hi

Before I  start playing around with this functionality would it be possible for you to include a screen shot of the data retrieved by the SQL query?

Nigel

Nigel Salm · Jan 18, 2021 go to post

Hi Olga

Does this challenge apply to people who live outside of the USA?

I would happily write a review but I would also like the $25 prize unless the prize could be converted into Global Master points which I could then redeem for the current items in the Rewards list.

By the way, when I had accumulated enough points to redeem some of them for one of the prizes I set my heart on the InterSystems Back Pack but by the time I tried to redeem my points the back packs had all gone and so I settled on the JBL speaker (which I absolutely love. It's very good and looks great), a pair of socks and some funny attachment for my phone which I am still trying to work out how it works.

Is there any possibility that you will be getting more InterSystems Backpacks and if so please will you put my name on one and let me know so that I can send in my application to redeem some points for the back pack.

I don't suppose that ISC marketing have any back packs hidden in their Marketing offices that you could possibly bribe Maureen to let you have one for me? :-)

Yours

Nigel

Nigel Salm · Jan 18, 2021 go to post

The best way of doing this is to create a class with a single binary stream property (in current versions the stream class is %Stream.GlobalBinary):

Class MyData Extends %Persistent
{

Property Data As %Stream.GlobalBinary;

}

Then, I assume you have a class that inherits from %CSP.REST (assume it is called MyRestClass); in its XData UrlMap block define a route:

<Route Url="/myapp" Method="POST" Call="SaveData" />

where /myapp is a Web Application defined in Management Portal -> System Administration -> Security -> Applications -> Web Applications. When you define the application, specify the application URL ("/myapp"), the namespace where your data is going to be saved, and in the field "Dispatch Class" specify "MyRestClass"; then save the Web Application definition.

In your REST class (MyRestClass), define a class method:

ClassMethod SaveData() As %Status

{

    set tSC=$$$OK

    try {

        set obj=##class(MyData).%New()

        set tSC=obj.Data.CopyFrom(%request.Content) if 'tSC quit

        set tSC=obj.%Save() if 'tSC quit

    }

    catch ex {set tSC=ex.AsStatus()} 

    quit tSC

}

This is just one approach. As Robert suggests below, there are a number of ways of manipulating images in CSP, but this example is one that I have used before.
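To show how a caller might exercise such an endpoint, here is an illustrative Python sketch (not InterSystems code). The /myapp path matches the route above, but the host, port and payload are assumptions; a local stub server stands in for the Caché/IRIS web application so the example is self-contained.

```python
# Hypothetical client-side sketch: POSTing raw binary data to a /myapp-style
# REST endpoint. The stub server below plays the role of the web application;
# the bytes it reads from the request body are what %request.Content would
# expose on the Caché/IRIS side.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)          # the binary payload
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(len(body)).encode())  # echo how many bytes arrived

    def log_message(self, *args):               # keep the demo quiet
        pass

def post_binary(host, port, path, payload):
    """POST raw bytes and return the server's response body."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("POST", path, body=payload,
                 headers={"Content-Type": "application/octet-stream"})
    return conn.getresponse().read()

server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = post_binary("127.0.0.1", server.server_port, "/myapp",
                    b"\x00\x01binary image bytes")
server.shutdown()
```

Against a real Caché/IRIS instance, the same POST (with appropriate authentication) would land in SaveData(), where CopyFrom(%request.Content) stores the body into the stream property.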

Nigel Salm · Jan 20, 2021 go to post

Hi

The reason that you are getting these errors is that there is a mismatch between the following attributes of the client and server adapters. Jeffrey mentions Framing, which is essentially how Ensemble detects the beginning and end of each HL7 message. Within the message itself, your message consists of one or more segments, and in order to determine the end of a segment Ensemble needs to know what terminator to expect, which can be <LF> or <CR,LF>. To detect these characters, Ensemble needs to know what character set the 3rd-party adapter is using, so you need to confirm that the Charset (Character Set) and Encoding attributes match on both sides. The default values for these should be 'Latin-1' and 'UTF-8'.

On the subject of Framing, your choice depends on whether you are using the HL7 TCP Inbound Adapter (in a Business Service) or the HL7 TCP Outbound Adapter (in a Business Operation). If you are the server, you can ensure that your framing matches the 3rd-party client's framing, but you also have the option of 'Flexible', which essentially tells the Ensemble adapter to interpret the characters being streamed to the server and, based on pattern-matching rules, 'auto-detect' the framing. When you are working with the HL7 TCP Outbound Adapter, however, you don't have the 'Flexible' option: you need to confirm that you and the 3rd party are using compatible framing formats, and if you can find out that they support Flexible framing on their side, then you can set your outbound adapter's framing to a value of your choice.
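As an illustration of what framing means on the wire, here is a Python sketch (not InterSystems code) of MLLP, the most common HL7-over-TCP framing: each message is wrapped in <VT> ... <FS><CR>, and segments inside the message are terminated separately. A mismatch between what the sender emits and what the receiver expects here is exactly what produces these errors.

```python
# Minimal sketch of MLLP framing: <VT> message <FS><CR>.
VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

def mllp_wrap(message: bytes) -> bytes:
    """Frame an HL7 message for transmission over TCP."""
    return VT + message + FS + CR

def mllp_unwrap(frame: bytes) -> bytes:
    """Strip the framing; raise if the frame is malformed (what a receiver
    with mismatched framing settings would effectively run into)."""
    if not (frame.startswith(VT) and frame.endswith(FS + CR)):
        raise ValueError("not a valid MLLP frame")
    return frame[len(VT):-len(FS + CR)]

# A tiny two-segment message; field values here are made up.
hl7 = b"MSH|^~\\&|SND|FAC|RCV|FAC|202101230000||ADT^A01|1|P|2.3\rPID|1||123"
frame = mllp_wrap(hl7)
segments = mllp_unwrap(frame).split(b"\r")   # split on the segment terminator
```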

Nigel

Nigel Salm · Jan 23, 2021 go to post

To add to Vic's response: I suspect that the access violation is not necessarily about the SQL access rights to the event tables, but rather the access rights to the UI component that displays the Event Log, plus the right to query the database using SQL. You probably need to assign appropriate role(s) that give the user access to the UI component and the ability to run SQL. You can test this last aspect in the Management Portal (System Explorer -> SQL) by attempting to execute a simple query, "select * from {table}", where {table} is the event table you want to access. I don't have access to HS 2018.1 at the time of writing this response.

The answer of course will lie in the documentation, as suggested by Vic. The trick is to ensure that the user has enough privilege to do what they need to do, but not so much access that they could potentially reach areas of the Management Portal and data that they should not have access to. It can be a bit tricky to get it perfect. If many users need this specific access and it turns out that you require a combination of certain roles, tables, etc., then create a generic user with those assigned attributes, and when creating new users, use the "Copy From" option on the "Create New User" page and specify the generic user you have created.

Nigel

Nigel Salm · Jan 23, 2021 go to post

Hi

On your Business Service's Settings tab, under 'Additional Settings', there is an option "NACK Error Code" whose definition reads as follows:

Controls the error code in MSA:1 of the NACK message this service generates when there is an error processing the incoming message. The default 'ContentE' behavior is to return code 'E' for errors with the message content, and code 'R' for system errors encountered within Ensemble while attempting to process the message. This distinction is important because system errors are expected to be solved so that if the remote client tries again at a later time they may not occur, while message content & validation errors are expected to require correction at the source and not to be worth retrying in the same form. However some client systems may expect or require a different error behavior. Therefore 3 additional behaviors are offered. The 4 behaviors are as follows:

  • ContentE = Use MSA error code 'E' to report errors in message content and code 'R' to reject due to (retryable) system errors.
  • ContentR = Return 'R' for content errors, 'E' for system errors.
  • AllE = Return 'E' for all content and system errors.
  • AllR = Return 'R' for all content and system errors.

Earlier versions of Ensemble used the behavior 'ContentR' exclusively.

As the documentation points out, earlier versions of HL7 supported AA and AE; later versions added support for CA, CE and CR.

The other attribute is "Use ACK Commit codes" and the documentation reads:

If 'true' and the HL7 message VersionID is 2.3 or higher, use the 'enhanced-mode' ACK 'Commit' codes ('Cx') in MSA:1 ('AcknowledgmentCode'). (Otherwise use the legacy-mode 'Ax' codes.) 

So to get the AA, AE and AR codes, make sure that "Use ACK Commit Codes" is unchecked (false) and then select "ContentE" for "NACK Error Code".
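The four behaviors quoted above boil down to a simple lookup: given whether the failure was a content error or a system error, which MSA:1 error code does each setting produce? Here is an illustrative Python sketch of that table (my restatement, not InterSystems code; with "Use ACK Commit Codes" checked, the same letters appear with a 'C' prefix instead of 'A').

```python
# Which MSA:1 error code each "NACK Error Code" behavior yields, per the
# setting's documentation quoted above.
def nack_code(behavior: str, error_kind: str) -> str:
    """behavior: one of ContentE/ContentR/AllE/AllR;
    error_kind: 'content' or 'system'."""
    table = {
        "ContentE": {"content": "E", "system": "R"},
        "ContentR": {"content": "R", "system": "E"},
        "AllE":     {"content": "E", "system": "E"},
        "AllR":     {"content": "R", "system": "R"},
    }
    return table[behavior][error_kind]
```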

Yours

Nigel

Nigel Salm · Jan 23, 2021 go to post

Hi

In which time zone are the times for the presentations? I am in South Africa, which is GMT+2; in GMT terms, what are the times for these presentations?

Is the seminar being held in Germany? In that case I can probably work it out via Google, but it's not clear; the announcement only says that the presentations will be available in English and German, hence my assumption that it's being held in Germany.

Thanks

Nigel 

Nigel Salm · Jan 23, 2021 go to post

Hi

You can run the method in the 'Output' window which you can access from the 'View' menu.

Just a note on the difference between Cache classes and classes in other OO-based languages.

Cache supports two types of method:

Method ABC(param1, param2, ... paramN)

and

ClassMethod XYZ(param1, param2, ... paramN)

A Method acts on an instance of your class, whereas a ClassMethod is independent of any object instance, i.e. you do not need to create a new instance or open an existing instance of the class in order to invoke a class method.
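If you come from another OO language, the distinction maps onto familiar concepts; here is the analogy expressed in Python terms (an illustration, not ObjectScript):

```python
# A Caché Method behaves like a Python instance method; a Caché ClassMethod
# behaves like a @classmethod/@staticmethod -- callable without an instance.
class Person:
    def __init__(self, name):
        self.name = name

    def greet(self):          # like a Caché Method: needs an instance
        return f"Hello, {self.name}"

    @classmethod
    def species(cls):         # like a Caché ClassMethod: no instance needed
        return "human"

p = Person("Nigel")           # analogous to ##class(Person).%New() then p.greet()
```

Person.species() works without ever constructing a Person, just as set x=##class(MyClass).XYZ() works without %New() or %OpenId().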

The second thing is that there are developer hooks that allow you to run code based on the following actions:

%New(): If you write a method %OnNew(), it will be run when %New() is called. It allows you to perform any initialization logic on the new instance that has been created.

%OnOpen(): This method is called after the instance of the class has been opened.

%Save(): There are two methods, %OnBeforeSave() and %OnAfterSave(), that you can write, which are called before or after the save itself.

%Delete(): Likewise, there are two methods, %OnDelete() and %OnAfterDelete(), where you can write code that is executed before or after the delete.

We don't really have the concept of a "Main" method that is automatically invoked when you reference the class other than through the %OnNew() and %OnOpen() methods.
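The callback pattern above can be sketched in Python terms (an analogy, not the %Persistent implementation): the framework's save routine invokes the developer-supplied before/after hooks around the real work, so you never call the hooks yourself.

```python
# Illustrative sketch of framework-invoked callbacks, in the spirit of
# %OnBeforeSave()/%OnAfterSave(). Class and method names are invented.
class PersistentLike:
    def save(self):
        self.on_before_save()       # developer hook, runs first
        self._write_to_storage()    # the framework's actual save step
        self.on_after_save()        # developer hook, runs last

    def _write_to_storage(self):
        self.saved = True

    # default hooks do nothing; subclasses override the ones they need
    def on_before_save(self): pass
    def on_after_save(self): pass

class Patient(PersistentLike):
    def __init__(self):
        self.log = []
    def on_before_save(self):
        self.log.append("before")
    def on_after_save(self):
        self.log.append("after")

p = Patient()
p.save()   # the framework calls both hooks in order
```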

In Ensemble things are slightly different, in that Business Service, Business Process and Business Operation classes have methods such as OnProcessInput() and OnRequest(), and in the case of Business Operations there is an XData MessageMap block where you specify the method to be invoked based on the message type being processed by the Business Operation. You could consider these the 'main' methods of the class, but in reality they are simply where the developer is expected to put the application logic. There are other methods that the Ensemble production will execute before it is ready to run your code, and methods that it will run after your methods have completed.

Cache/Ensemble/IRIS also supports the ability to write application/system-specific code that is executed when Cache starts up or shuts down, and likewise when an Ensemble production starts up or shuts down.

Have a look at the documentation on %ZSTART for more information on what you can do when Cache starts up or shuts down. There is similar documentation in relation to Ensemble Productions.

Nigel

Nigel Salm · Jan 24, 2021 go to post

Hi

There is an application, DBeaver, that will connect to your Cache/Ensemble/IRIS database and generate a visual representation of your schema. I haven't tried creating tables in DBeaver to see whether it will generate a valid Cache table/class definition; I suspect that it will generate classes in the same way that the SQL Server import utility will execute a SQL CREATE TABLE script and generate Cache classes/tables. Likewise, you can use the SQL Gateway to connect to a SQL database: you can either create class definitions in Cache that connect to the tables in SQL Server and leave the data in SQL Server, or you can opt to create the table definitions in Cache, pull the data from SQL Server into Cache, and remove SQL Server altogether.

In the system configuration, and via the $SYSTEM.SQL class, you can control how Cache classes/tables are created from a SQL script or through the SQL Gateway, including how integers are handled and the treatment of NULL vs empty string.

I also found a couple of Android apps, DB Scheme and Database, that support SQL script generation for about 8 common relational database technologies. (ISC is not one of them, but then we don't really push ourselves as a pure relational database, as we are primarily an object-orientated technology with excellent relational functionality and connectivity.) The app allows you to design tables, including relationships; you then select the target database from the list of common relational technologies and it will generate a SQL CREATE TABLE script. It also supports UPDATE and DELETE, and may support other functionality as well, but I basically played with it for 10 minutes, built a simple table, generated the SQL Server script, ran it through Ensemble, and it created a valid Cache table/class.

There is a Cache-based UML application that you can find on the Open Exchange tab. Download the zip file and import the classes; the documentation will tell you how to use the application. I like it because it is based on Cache classes and tables and therefore gives you a far more realistic view of them, including all those nice, weird things like parent-child relationships, streams, and certain Cache data types that we are so accustomed to using but that are just not supported in other relational databases. The only word of caution is that if you select a schema with loads of tables in it, it can take ages to render the display. I have attached a screen shot of the UML Class Explorer.

Nigel

Nigel Salm · Jan 26, 2021 go to post

Hi

Dimitry is correct in his reply that this is a memory issue. Every Cache connection or Ensemble production process, as well as all of the system processes, runs in an individual instance of cache.exe (or iris.exe in IRIS). Each of these is in effect an operating system process (or job), and when a new user process is created, Cache allocates a certain amount of memory to it. That memory is divided into chunks: one where the code being executed is stored, one where system variables and other process information are stored, and one used to store the variables created by the code being executed, whether a simple variable [variable1="123"] or a complex structure such as an object (which is basically a whole load of variables and arrays related together as the attributes of an object instance).

If you are using Cache Objects, then the variables you create and the objects you manipulate in a (class)method are killed when the method quits. Another feature of Cache Objects is that if you open an instance of a very large object with lots of properties, some of which are embedded objects, collections, streams and relationships, Cache does not load the entire object into memory: it loads just the basic object, and only as you reference properties that are serial objects, collections and so on does Cache pull that data into your variable memory area. In normal situations you can, generally speaking, create a lot of variables and open many objects and still have memory left over. However, there are a couple of things that can mess with this memory management:

1) Specifying variables used in a method as PUBLIC, which means that once they are created they remain in memory until you either kill them or use the NEW command on them.

2) Code that gets into a nested loop where each iteration creates more variables and creates or opens more objects; eventually you will run out of memory and a <STORE> error is generated.

I did a quick check to see where %SYS.BINDSRV is referenced, and there is one line of code in the %Library.RegisteredObject class, in a method called %BindExport, which calls a method in %SYS.BINDSRV. The documentation for %BindExport says the following:

/// This method is used by Language Binding Engine to
/// send the whole object and all objects it referes to
/// to the client.

So my guess is that you have a Java, .NET or some other language binding, and when %BindExport is called it is trying to pass the contents of your object (and any directly linked objects) to the client, and that is filling up your variable memory and generating the <STORE> error.

I also see that the %Atelier class references %SYS.BINDSRV.

So, to investigate further: do you use Atelier, and/or are you using class bindings (Java, etc.)?

If you are, then something Atelier or your application is doing is periodically trying to manipulate a lot of objects all at once and exhausting your process memory. You can increase the amount of memory allocated to Cache processes, but bear in mind that if you increase the process memory allocation, that setting will be applied to all Cache processes. It is also possible to raise the limit for a single process by setting the $ZSTORAGE special variable within that process.

It is quite likely that even if you increase the process memory it will not cure the problem, in which case I would suggest that you contact the WRC and log a call with them.

Nigel