Colin Brough · May 14, 2024

Not sure it counts as an answer, but what we did to step around this issue was to move the bulk of the functionality - where the error handling was required - into a new business process, leaving only the most basic "pass the trigger message along" functionality in the business service. This added an extra component to the production, but we can now see errors in the log when they occur, and they are passed appropriately to Ens.Alert.
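To make the split concrete, here's a minimal sketch of what the slimmed-down business service ends up looking like (class, adapter and target names here are hypothetical, not our actual code):

    Class Demo.Svc.TriggerService Extends Ens.BusinessService
    {

    Parameter ADAPTER = "Ens.InboundAdapter";

    /// Only job: forward the trigger message to the process that now
    /// holds all the real work and error handling.
    Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
    {
        Quit ..SendRequestAsync("Demo.Bp.MainProcess", pInput)
    }

    }

Any errors raised inside the business process then show up in the Event Log and can be routed to Ens.Alert in the usual way.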

Colin Brough · Jun 27, 2024

Say you have the same code (Production) running on different servers - for example, a local instance on a developer's own machine, a test server used for system testing, and a production server.

Your code accesses an external web-service. The actual web-service will be different for each system - maybe a mock service for the developer, a test version of the web-service for the test system, and a production version for your production server - so the URL for accessing the web-service will be different for each one.

In your production you have a setting on the business operation that connects to the web-service. The value of this setting can be set from the System Default Settings page, and will contain a different value on each server.

This allows you to separate out settings that will be the same across all servers from settings that will differ between servers: settings that are the same everywhere can be set on the services/processes/operations themselves, while settings that differ are set via System Default Settings.
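As a minimal sketch of what that looks like in code (class and property names are hypothetical), the operation exposes the URL as a setting; the value itself then lives in System Default Settings on each server rather than in the class:

    Class Demo.Ops.WebServiceOperation Extends Ens.BusinessOperation
    {

    /// URL of the external web-service; differs per environment
    Property ServiceURL As %String;

    /// Expose the property on the production configuration page so
    /// System Default Settings can supply a per-server value
    Parameter SETTINGS = "ServiceURL:Basic";

    Method OnMessage(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
    {
        // Use ..ServiceURL here when calling the external service
        Quit $$$OK
    }

    }

(If the operation uses one of the standard outbound adapters, the URL is typically already exposed as an adapter setting, so the same approach applies without needing a custom property.)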

Colin Brough · Jul 3, 2024

7 years after it was written, this comment helped us sort our problem - we are disabling components to prevent further attempts at processing on certain error conditions, and were struggling to get the EnableConfigItem() call to take effect immediately... Sorted now. 
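For anyone landing here later, the shape of the call we ended up with is roughly this (the config item name is a placeholder; the third argument is the one that makes the change apply to the running production immediately):

    // Disable the item and update the running production straight away
    Set tSC = ##class(Ens.Director).EnableConfigItem("My.Failing.Operation", 0, 1)
    If $$$ISERR(tSC) Do $System.Status.DisplayError(tSC)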

Colin Brough · Aug 8, 2024

Sorry Sandeep, no real resolution. As I indicated, it was a development server and we scrubbed it and reinstalled Ensemble - haven't seen the issue since.

Colin Brough · Aug 16, 2024

We have a situation that looks suspiciously similar:

  • a job that runs an external program via $ZF(-100,...) from a business process to perform a task runs perfectly when Pool Size = 1
  • not all of the external tasks complete successfully when Pool Size > 1

More detail:

  • Production takes incoming stream of HL7 ORU_R01 messages, and for each one produces a PDF
  • this is done by converting each HL7 to an XML representation, then calling Apache FOP (the one pre-installed in Ensemble) with a stylesheet and the XML to build the PDF. A business process takes care of this step.
  • with Pool Size = 1 runs correctly
  • with Pool Size = 2 all the XML files are generated (via a call to a class method) but only a small subset of the PDF files are generated - maybe 4 out of 20?
  • no error messages that we've been able to find yet

Here's an illustrative screenshot - the yellow entries are the first HL7->XML->PDF run, the green entries the second. Yellow produces a PDF, green doesn't. As far as we can tell the FOP commands should be independent (no shared files - unless stylesheets can't be opened by multiple processes simultaneously?).


The only thing we've seen in the documentation that gives us pause is this line: "On a Windows system you should never omit both the /ASYNC and /STDIN flags." (from $ZF(-100) | InterSystems IRIS Data Platform 2024.2) - but when only one copy is running it appears to be fine with "" as the flags argument.

Is this a $ZF/Ensemble issue, or is it something about FOP specifically?
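In case it matters for diagnosis, this is roughly what we would try next, following the documentation's advice about the flags - a sketch only, with placeholder paths, and not something we have confirmed fixes the Pool Size > 1 behaviour:

    // Redirect stdin/stdout/stderr so /STDIN is not omitted on Windows,
    // and capture any FOP output per call for later inspection
    Set tFlags = "/STDIN=NUL /STDOUT=C:\temp\fop-out.log /STDERR=C:\temp\fop-err.log"
    Set tRC = $ZF(-100, tFlags, "C:\fop\fop.bat", "-xml", tXmlFile, "-xsl", tXslFile, "-pdf", tPdfFile)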

Colin Brough · Sep 2, 2024

The Business Operation is one generated by the SOAP Wizard. It is being fed by a custom Business Process that runs in response to a scheduled task - the BP queries a database table and extracts a set of documents to send. At certain points in the day we want to query the table like this:

      SELECT * from TABLE

while at other points in the day we want to query the table like this:

      SELECT TOP NN * from TABLE

Then the documents selected by the query are sent, in turn, to the Business Operation for onward transmission.

Colin Brough · Sep 9, 2024

Thanks Deepak, that's a neat trick - hadn't thought to go sideways like that. You could even have a more elaborate lookup table arrangement with times as well as document send limits encoded in the lookup, so that the schedule lives in the lookup table values rather than being embedded in your code.
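A rough sketch of what that might look like (the lookup table name and key scheme are made up for illustration - keying on the current hour, with the value being the TOP limit for that hour, and no entry meaning "send everything"):

    // Current hour as a two-digit 24-hour value, eg "09" or "17"
    Set tHour = $Piece($ZTime($Piece($Horolog, ",", 2), 1), ":", 1)
    // Look up the send limit for this hour; "" means no limit configured
    Set tLimit = ##class(Ens.Util.FunctionSet).Lookup("DocSendLimits", tHour, "")
    Set tQuery = $Select(tLimit = "": "SELECT * FROM TABLE", 1: "SELECT TOP " _ tLimit _ " * FROM TABLE")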

Colin Brough · Sep 18, 2024

For the sake of closing off this old question, and answering it myself in light of more experience and some testing...

  • side effects of the transformation could, in theory, change the behaviour - but it'd have to be a transformation that had side-effects (eg kept some kind of state across executions, whether in globals or on the filesystem or in some other way)
  • performance could be affected, since transformation is called twice rather than once, but in most cases the difference is likely to be negligible.

Colin Brough · Sep 18, 2024

Closing off an old question for completeness, we never did get Zen working. In the end we used Apache FOP directly:

  • HL7 -> XML as described in the original question
  • call Apache FOP using $ZF(-100, "", $$$fopbat, "-xml", XMLfilename, "-xsl", StyleSheetFilename, "-pdf", PDFfilename)

This puts the output PDF onto the filesystem from where, in our solution, it is later picked up for onward transmission to a downstream system.
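For reference, the call in context looks roughly like this (a sketch - $$$fopbat is our own macro holding the path to the FOP batch file, and the method wrapper is illustrative rather than our exact code):

    Method GeneratePDF(XMLfilename As %String, StyleSheetFilename As %String, PDFfilename As %String) As %Status
    {
        // Synchronous call: $ZF(-100) returns the external program's exit code
        Set tRC = $ZF(-100, "", $$$fopbat, "-xml", XMLfilename, "-xsl", StyleSheetFilename, "-pdf", PDFfilename)
        If tRC '= 0 Quit $$$ERROR($$$GeneralError, "FOP exited with code " _ tRC)
        Quit $$$OK
    }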

Colin Brough · Sep 27, 2024

Auto-adjust / design question 4: we'd find this useful, especially if it handles bulk renames - a bunch of classes implementing a data type, all being moved in one go from one place in the class hierarchy to another and all being consistently renamed.

So files X/Y/A.cls, X/Y/B.cls and X/Y/C.cls, containing classes X.Y.A, X.Y.B and X.Y.C, being moved to Q/P/A.cls and Q.P.A etc. Especially if properties defined in X.Y.A as "Property pp As X.Y.B" become "Property pp As Q.P.B" when renamed.

Colin Brough · Sep 30, 2024

Thanks Ben, that confirms what we suspected and gives us a tool to try and remedy the situation if (as we suspect) the supplier of the "interfering" system isn't very interested in fixing something affecting a tiny number of developers!

Colin Brough · Oct 28, 2024

Never mind, our support people had updated the Java version, despite what the logs said! And because they hadn't restarted Ensemble/the server, it still had the old JAVA_HOME in its environment. We are currently banging their heads against the nearest wall for them! 🙄

Colin Brough · Nov 20, 2024

We've got the same issue, but with an incoming HL7 feed with embedded, encoded characters - it would be nice to be able to detect what's coming in, but I take it from this discussion that's not (reliably) possible. We don't really want to scan the whole text of every incoming message to heuristically look for possible encodings. Upstream say/think they are sending UTF-8, but we seem to be getting Windows-1252, at least for the characters we've seen in our (limited) testing. Who knows what will come through the feed once it goes live!

Colin Brough · Nov 22, 2024

If only MSH-18 were set... 🙄 Up-stream system isn't setting it! And until yesterday supplier of upstream system was claiming they were sending UTF-8 when we thought the feed looked awfully like Windows-1252. Yesterday they admitted/confirmed they are sending Windows-1252, so at least now we know!!

Additional info: if we change the order of <allergies> and <patientNotes> in the schema, it does not change the order of elements produced in the XML.

It's a virtual document (EnsLib.EDI.XML.Document), with the applied XML schema having been loaded into Ensemble via Ensemble -> Interoperate -> XML -> XML Schema Structures.

For the sake of other people being able to find an answer: we took this up with WRC. The suspicion is that this is a bug/aberrant behaviour in Ensemble (and they suspect recent versions of IRIS too). WRC were going to check with development, and we are still waiting to hear back from them.

With help from WRC, it looks like this is something of our own doing - we were using sub-transforms in ways that caused us problems. Our sub-transforms were outputting whole new XML documents, when they should have been taking the partially produced XML passed in from the top-level transform and adding to that existing document. We "solved" the node-ordering issue by moving everything into a single transform, so there was no mismatch between the top-level and sub-transforms.

I was going to respond and say something like, "I'm sure that does answer the question, but I'm not sure I understand the answer!" So thanks for the executive summary! And thanks @Robert Cemper for the background information.

Thanks @Eduard Lebedyuk, that was helpful. As you describe, headers are not re-used from a previous call. On closer inspection, our confusion arose from the fact that if you provide credentials as a setting on an operation that uses EnsLib.HTTP.OutboundAdapter, the adapter creates the Basic Authentication headers for you from those credentials, even if you don't set them explicitly yourself. We didn't realise this, so when we removed our own explicit setting of the headers (for testing) we were confused that the headers were still arriving in the downstream system!
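For context, this is roughly what our explicit header-setting had looked like before we removed it (a sketch with placeholder variables) - with a Credentials setting configured on the adapter it is redundant, since the adapter adds the equivalent header itself:

    // Build the Basic auth header by hand on the outgoing request
    Set tRequest = ##class(%Net.HttpRequest).%New()
    Set tAuth = $System.Encryption.Base64Encode(tUser _ ":" _ tPassword)
    Do tRequest.SetHeader("Authorization", "Basic " _ tAuth)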

Thanks, that's helpful. Our end goal was the perennial XML-to-JSON conversion conundrum (or HL7-to-JSON), and as we're currently stuck on Ensemble 2018 we don't have access to %JSON.Adaptor, so we're... limited.

Our immediate goal was, given a particular message/document, to visit all the nodes holding data within the document (traverse the tree) and do some processing for each node - eg output a JSON representation of the data held at that node, so that a JSON serialisation of the whole document can be produced. But the general case, not just the JSON case, is of interest.
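As a very rough sketch of the generic "visit every data node" idea - using %XML.TextReader on the serialised XML rather than the virtual-document API, and purely illustrative (the variable names are ours, not part of any existing code):

    // Walk the serialised XML and process each element that carries character data
    Set tSC = ##class(%XML.TextReader).ParseString(tXmlString, .tReader)
    If $$$ISERR(tSC) Quit tSC
    Set tCurrent = ""
    While tReader.Read() {
        If tReader.NodeType = "element" {
            Set tCurrent = tReader.Name
        } ElseIf (tReader.NodeType = "chars") && ($ZStrip(tReader.Value, "<>W") '= "") {
            // Do the per-node processing here, eg append to a JSON string
            Write tCurrent, ": ", tReader.Value, !
        }
    }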

Our current application takes an HL7 feed and sends (a subset of) the data in the HL7 to be consumed by a downstream system via a REST web-service. The downstream system in this case is being developed by another team within our organisation, and they can take XML - we've built a transformation to extract the relevant data into an EnsLib.EDI.XML.Document, and we've developed an XML schema for the data so we can validate it before sending it on. But it would have been easier for them to consume JSON, so we were exploring how to do that - and we are conscious that future requirements might include sending JSON rather than XML...

Thanks for the link, but as we are stuck on Ensemble (not IRIS), we can't make use of this.

Thanks for looking. I realised that I gave the version number for the ObjectScript extension (3.0.1), but not for the Language Server (2.7.2) - and as you've highlighted, it's the Language Server which is giving me the error.

I'm also seeing the Language Server output disappearing from the drop-down now! @Brett Saviano is responding more fully on the GitHub forum linked above.

An update in 2 parts:

  1. Some of the code in our repo is non-canonical ObjectScript (eg it has '//ABC' comments, with no space, which get canonicalised to '// ABC'). So when the code is imported into Ensemble via the VS Code extensions it is "canonicalised", and subsequently shows as changed relative to what is in the repo. This is our problem, not a bug, though we've no idea why it occurred in the first place!
  2. In the course of investigating it we stumbled on the ObjectScript Language Server error above, and I've subsequently been able to isolate it at least to the point of making it reproducible. There is further discussion on the GitHub discussion forum, including instructions for how to reproduce.

As a result I'm going to mark this as answered, because further exploration will be done from the GitHub forum thread.