Julian Matthews · Nov 24, 2017

Thank you - I will try this the next time I'm running a big job and see what details I get.

Julian Matthews · Feb 15, 2018

Hi Jeff, thank you for your reply.

I did end up raising a call for this and, as I was unable to replicate the issue at the point of raising the call, it was decided that the call would be closed and a new one raised if it reoccurred.

Julian Matthews · Feb 27, 2018

Hi Robert.

I think you have hit the nail on the head - the method return type is a string.

Time to do a bit of rework in the Dev environment...

Julian Matthews · Mar 14, 2018

Hi Mark, interesting stuff as usual.

Does your implementation go on to take advantage of the notification priorities? I'm just thinking of what benefits could be gained for alerting on-call staff to issues.

Julian Matthews · Mar 16, 2018

Hi Mark.

After throwing in an if statement for the Priority variation and some other local tweaks, I have this working perfectly, so thank you for sharing.

I also added the Token and User Key as settings that can be configured on the Operation within Ensemble.

It would be good to catch up outside of the InterSystems forums sometime soon.

Cheers!


For anyone interested in adding the Token and User Key:

So I included the following before the method:

Property Token As %String;

Property User As %String;

Parameter SETTINGS = "Token, User";

And then the http request parameter became:

        Do httprequest.SetParam("token",..Token)
        
        Do httprequest.SetParam("user",..User)

This leaves the token and user key to be configured within Ensemble via the Management Portal.
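
For anyone wanting to see how these pieces might hang together, here is a rough sketch of a custom operation (the class name, endpoint, path and message handling are illustrative, not the code from the original article):

Class Demo.NotificationOperation Extends Ens.BusinessOperation
{

Property Token As %String;

Property User As %String;

Parameter SETTINGS = "Token,User";

Method SendNotification(pRequest As Ens.StringRequest, Output pResponse As Ens.Response) As %Status
{
    // Token and User come from the settings configured in the Management Portal
    Set pResponse = ##class(Ens.Response).%New()
    Set httprequest = ##class(%Net.HttpRequest).%New()
    Set httprequest.Server = "api.example.com"  // hypothetical notification endpoint
    Do httprequest.SetParam("token",..Token)
    Do httprequest.SetParam("user",..User)
    Do httprequest.SetParam("message",pRequest.StringValue)
    // HTTPS/SSL configuration omitted for brevity
    Quit httprequest.Post("/notify")  // illustrative path
}

XData MessageMap
{
<MapItems>
    <MapItem MessageType="Ens.StringRequest">
        <Method>SendNotification</Method>
    </MapItem>
</MapItems>
}

}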

Julian Matthews · Apr 4, 2018

Hi Bob.

These will be picked up either by the Purge task running the "all" option, or by selecting "Messages" for TypesToPurge in either one.

The MessageBodyS entries should be purged when the "BodiesToo" tick box is selected in the task. The description for the BodiesToo option is: "Delete message bodies whenever their message header is deleted. This is off by default because some Productions may use message objects that are part of a larger environment and not transitory." So it may be that your task was left with the defaults and this has built up.
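
As an aside, if you ever want to kick off a one-off purge with bodies included from a terminal session, a rough sketch using the same Ens.Util.Tasks.Purge task class would look like the below (the retention value and TypesToPurge option are illustrative, so double check them against your own setup and version before running):

    Set task = ##class(Ens.Util.Tasks.Purge).%New()
    Set task.TypesToPurge = "messages"      // or "all" to cover events, logs, etc.
    Set task.NumberOfDaysToKeep = 30        // illustrative retention
    Set task.BodiesToo = 1                  // delete bodies along with their headers
    Set task.KeepIntegrity = 1
    Set tSC = task.OnTask()
    Write $SYSTEM.Status.GetErrorText(tSC)  // blank output means it ran OK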

Julian Matthews · Apr 4, 2018

Hi Bob. Do you have KeepIntegrity selected?

I ask because the only thing I can see which might point in the right direction is that the selection of message headers when KeepIntegrity is selected does a "select top 100000000", and your Ens.MessageBodyS is a digit greater (10 digits vs 9), so it could be that the items are somehow being missed. If this is the case, running the purge without KeepIntegrity selected might work?
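
If you want a quick way of confirming whether you really are past that limit, something along these lines (standard Ens.MessageHeader columns) will show the largest header ID and the total row count:

SELECT MAX(ID) AS MaxHeaderID, COUNT(*) AS TotalHeaders
FROM Ens.MessageHeader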

Also, I assume you are getting a success status from the task running?

Julian Matthews · Apr 4, 2018

Hi Bob. Please can you try running the task without KeepIntegrity selected?

If this doesn't resolve your problem, I'm all out of ideas.

Julian Matthews · Apr 11, 2018

Hi Izak.

What task would Ensemble be completing to link the two applications together?

Julian Matthews · May 17, 2018

I tried this as a way of moving everything we had so far into our source control system, and the performance impact on Eclipse/Atelier was soul-destroying.

Julian Matthews · May 18, 2018

Thanks Joyce, I made contact with them instead of support, and after a WebEx the solution was found!

It turns out the performance issues I had been getting when adding the entire namespace were because I had included all of the system folders (Ens, EnsLib, EnsPortal, etc.). So on each launch, Eclipse was reindexing the entirety of the core files in each namespace.

Julian Matthews · May 29, 2018

Hi Gadi.

Glad to hear you have something in place now.

I guess a task is better when you need the event to run at an exact time rather than every x seconds, as the timing could drift out of sync if the service were restarted at any point.

For the setup I have, because I want to check the drive's free space (and I also check a specific folder's files to alert if any of them have existed for more than 5 minutes), it makes sense to just let the CallInterval run every x seconds.
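
In case it helps anyone else, a stripped-down sketch of that sort of polling service might look like this (the class name, settings and thresholds are illustrative, and the drive free space check is left out to keep it short):

Class Demo.FolderAgeMonitor Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

/// Folder to watch for files that have been sitting around too long
Property WatchFolder As %String(MAXLEN = 255);

/// Maximum acceptable file age in minutes (illustrative default)
Property MaxAgeMinutes As %Integer [ InitialExpression = 5 ];

Parameter SETTINGS = "WatchFolder,MaxAgeMinutes";

/// Called by the adapter every CallInterval seconds
Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    Set rs = ##class(%ResultSet).%New("%File:FileSet")
    Set tSC = rs.Execute(..WatchFolder,"*")
    Quit:$$$ISERR(tSC) tSC
    While rs.Next() {
        Continue:rs.Get("Type")'="F"    // skip sub-directories
        Set file = rs.Get("Name")
        // GetFileDateModified returns the timestamp in $HOROLOG format
        Set modified = ##class(%File).GetFileDateModified(file)
        Set now = $HOROLOG
        Set ageSecs = (($PIECE(now,",",1)-$PIECE(modified,",",1))*86400)+($PIECE(now,",",2)-$PIECE(modified,",",2))
        If (ageSecs/60) > ..MaxAgeMinutes {
            $$$LOGALERT("File "_file_" has been waiting for more than "_..MaxAgeMinutes_" minutes")
        }
    }
    Quit $$$OK
}

}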

Julian Matthews · May 29, 2018

The accepted answer would probably be your best shot.

Say, for example, you wanted a count of all messages that have come from a service called "TEST Inbound" - you could use the SQL query option (System Explorer > SQL) to run the following:

SELECT COUNT(*)
FROM Ens.MessageHeader
WHERE SourceConfigName = 'TEST Inbound'

If you wanted to put a date range in as well (which is advisable if what you're searching is a high throughput system and your retention is large):

SELECT COUNT(*) Total
FROM Ens.MessageHeader
WHERE SourceConfigName = 'TEST Inbound' AND TimeCreated >= '2018-04-30 00:00:00' AND TimeCreated <= '2018-04-30 23:59:59'
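
And if you wanted the same kind of count broken down per source, rather than for one named service, a grouped variant of the above would be:

SELECT SourceConfigName, COUNT(*) Total
FROM Ens.MessageHeader
WHERE TimeCreated >= '2018-04-30 00:00:00' AND TimeCreated <= '2018-04-30 23:59:59'
GROUP BY SourceConfigName
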
Julian Matthews · Jun 4, 2018

Thanks for the response.

It sounds like I should be fine with the machine I'm running (Win7, 120GB SSD, 8GB RAM, i5 CPU (dual core)).

My biggest hits are at startup, but once it's up and running it's pretty snappy. I should probably try to be more patient!

Julian Matthews · Jun 15, 2018

Hi Guilherme.

I think your best starting point will be providing your system specifications, the OS you're running Studio on, and the version of Studio/Caché you're using.

Depending on the issue, it could be any number of things causing your problems.

Julian Matthews · Jul 17, 2018

Is the Business Process custom? If so, it's possible there's a bit of bad code that returns an error state but then carries on processing the message as expected.

It might help if you provide some more detail on the BP itself.

Julian Matthews · Oct 18, 2018

Sorry John, I hadn't had my coffee when I read your post.

When you look at the first message's header info within the Trace, does its Time Processed come before or after the Time Created of Message 2?
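
If it's easier than eyeballing the Trace, the same timestamps can also be pulled with a quick query (the header IDs here are just placeholders for your two messages):

SELECT ID, SourceConfigName, TimeCreated, TimeProcessed
FROM Ens.MessageHeader
WHERE ID IN (1001, 1002)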

Julian Matthews · Nov 29, 2018

Hi Stephen.

Are you able to select the specific queue from the Queues page and press the abort all button, or does it return an error?

Julian Matthews · Dec 24, 2018

If you go to the Management Portal for the "down" mirror, are there any errors that might point to the issue?

I recently saw this happen where the mirror had run out of space to store the journal files, so the mirror stopped functioning and was showing as "down".

Julian Matthews · Dec 24, 2018

Hi Eric.

My first check would be looking at the console log for that instance to see if there's anything wobbling in the background, specifically checking for any entries around the time the monitor thinks it has gone down.

Failing that, it's probably worth going to WRC. The last thing I think you need this close to Christmas is the Primary dropping and you needing the Mirror to be working.

Julian Matthews · Jan 9, 2019

What type of business service are you using? If you are using a single job on the inbound, I guess you're hitting a limit on how fast the adapter can work on handling each message (in your case, you're getting around 15ms per message).

You could look at increasing the pool size and jobs per connection if you're not worried about the order in which the messages are received into your process.

Julian Matthews · Jan 10, 2019

You might be hitting a hard limit on the performance of the hardware you're using. Are you able to add more resource and try again?

Julian Matthews · Jan 25, 2019

Just to add to this - I have had a play with this new function within the 2019 Preview release and it works really well.

Julian Matthews · Feb 20, 2019

Hi Scott.

I have just taken a look, and it doesn't seem to appear in anything I have (the highest being v2.7.1). From looking around online, it seems ORU^R40 was introduced in HL7 v2.8.

Julian Matthews · May 1, 2019

I haven't come across anything built in as standard that would do this on its own, but I guess it's something you could create within your environment.

I have something a bit similar, but the index is populated by a CSV received daily from another organisation, and the process then checks the HL7 messages against that index and only sends them on if the patient is present in the table.
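
As a rough illustration of that comparison step (the lookup table name, HL7 field path and target name are made up for the example), the check in a custom process boils down to something like:

    // Only pass the message on if the patient's identifier appears in a
    // lookup table that is refreshed from the daily CSV ("PatientIndex" is illustrative)
    Set mrn = pRequest.GetValueAt("PID:3(1).1")
    If ##class(Ens.Util.FunctionSet).Lookup("PatientIndex",mrn) '= "" {
        Set tSC = ..SendRequestAsync("HL7.Outbound",pRequest)
    }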