David.Satorres6134 · Jun 14, 2018

Yes, I am. This is the class definition:
Class *****.BO.RestOperation Extends EnsLib.REST.Operation {

}

David.Satorres6134 · Jun 15, 2018

Hi,

Yes, we have. Actually, that piece of code worked just fine before the authentication was put in place:

#dim callResponse As %Net.HttpResponse = ""
set st = ..Adapter.GetURL(tURL, .callResponse)
If $IsObject(callResponse) {
    // ... handle the response here ...
}

Now, callResponse is not an object anymore :'(
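
For anyone hitting the same thing, here is a small sketch of how the returned status can be inspected when the response object goes missing (same variable names as in the snippet above; $$$LOGERROR and $$$TRACE are the standard Ensemble logging macros):

    #dim callResponse As %Net.HttpResponse
    set st = ..Adapter.GetURL(tURL, .callResponse)
    if $$$ISERR(st) {
        // an authentication failure typically surfaces here as an error status
        $$$LOGERROR($system.Status.GetErrorText(st))
    }
    if $IsObject(callResponse) {
        $$$TRACE("HTTP status code: "_callResponse.StatusCode)
    }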

David.Satorres6134 · Jul 25, 2018

Hi Steve,

Thanks for your help. I finally got it working but forgot to come back and close the topic. The error was not in Ensemble but in the proxy server.

Thanks anyway.

Thanks David!

But the problem is that what takes a long time is getting the list values, not iterating over them.

Yes, sorry, my mistake. Actually the line should be:

        set dat=$g(^TestD(id))    //dat=$lb("a","b","c","d","e")
 

compared to: 

        set dat=$g(^TEST2(id))   //dat = "a#b#c#d#e"
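
For context, a quick sketch of how each kind of node is written and read back (the values are just the ones from the comments above):

    // $LIST-encoded node
    set ^TestD(id) = $listbuild("a","b","c","d","e")
    set dat = $get(^TestD(id))
    write $list(dat, 3)          // -> "c"

    // delimited-string node
    set ^TEST2(id) = "a#b#c#d#e"
    set dat = $get(^TEST2(id))
    write $piece(dat, "#", 3)    // -> "c"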
 

Thanks to Julius' suggestion, I've run the %SYS.MONLBL analysis tool, and clearly something is going wrong when reading the data from the list global:

Routine  Line  GloRef  DataBlkRd  UpntBlkBuf  BpntBlkBuf  DataBlkBuf  RtnLine  Time       TotalTime  Code
Test.1   78    16823   9128       14129       14129       7742        16823    66.282935  66.282935  set dat2=$get(^ListGlobal(id))
Test.1   79    16823   0          1849        1849        16904       16823    0.062076   0.062076   set dat=$get(^StringGlobal(id))

Finally, I ran some tests. I duplicated the list global, changing its values to strings, so I could compare two different globals with the same data stored differently. The results show that accessing a list is much slower.

Routine  Line  GloRef  UpntBlkRd  BpntBlkRd  DataBlkRd  DirBlkBuf  UpntBlkBuf  BpntBlkBuf  DataBlkBuf  RtnLine  Time      TotalTime  Code
Test.1   78    171053  1          40         10399      1          14109       14070       165867      171053   43.32538  43.32538   set dat=$g(^ListData(id))
Test.1   78    171053  14110      14110      176266     0          0           0           0           171053   0.265694  0.265694   set dat=$g(^ListData(id))
Test.1   79    171053  1          23         5853       1          11607       11585       166642      171053   20.5958   20.5958    set dat=$g(^StringData(id))
Test.1   79    171053  11608      11608      172495     0          0           0           0           171053   0.237311  0.237311   set dat=$g(^StringData(id))

But finally, after reading a bit of the documentation, I found that I could improve the performance by changing the database block size from 8 KB to 64 KB. And it really worked:

Routine  Line  GloRef  UpntBlkRd  BpntBlkRd  DataBlkRd  DirBlkBuf  UpntBlkBuf  BpntBlkBuf  DataBlkBuf  RtnLine  Time      TotalTime  Code
Test.1 78 171053 0 1 1861 1 0 6642 169402 171053 7.234114 7.234114 set dat=$g(^ListData(id))
Test.1 78 171053           6643 171263 171053 0.225354 0.225354 set dat=$g(^ListData(id))
Test.1 79 171053     1808 1   6534 169420 171053 2.12363 2.12363 set dat=$g(^StringData(id))

So, with 64 KB blocks both reads are dramatically faster, although the $LIST global is still slower than the delimited string.
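
For reference, this is roughly how the block size of a mounted database can be checked from the %SYS namespace (a sketch; the directory is an example path, and 64 KB global buffers also have to be allocated in the memory settings before a 64 KB database can be used):

    // run in %SYS; SYS.Database is keyed by the database directory
    set db = ##class(SYS.Database).%OpenId("/data/testdb/")
    if $isobject(db) write "Block size: ", db.BlockSize, " bytes", !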

David.Satorres6134 · Sep 11, 2018

I just installed 1.3 and it's still not there :O And InterSystems announced there won't be any enhancements from now on.

So I guess this is all we'll get.

David.Satorres6134 · Oct 22, 2018

Hi Dmitry,

Thanks for your help, I've been able to compile the class and start the production.

We are using it in a legacy class meant to help with JSON handling. Anyway, it's working now :-)

David.Satorres6134 · Nov 16, 2018

Hi!

Yes, that solution is good. But we would need to stop the production to be able to copy the files, and we don't want to do that. We need a way to transfer the data to IRIS without stopping the current Ensemble production.

David.Satorres6134 · Jan 10, 2019

I'm using a business service that reads from a global using $order; it's not called by anything external. If I remove the call to the BusinessProcess it performs a huge number of operations, but enqueueing each call takes several milliseconds. Actually, if I clone the BS and run both at the same time writing to the same queue (even if the queue is Ens.Actor), I reach the same number of messages as with a single one.

I mean, even working in parallel I cannot increase the number of messages put in the queue. I'm not able to write more than 60-70 per second.

The adapter is Ens.InboundAdapter, but I'm not really using it.
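
To illustrate the pattern (just a sketch, not my real code; the global and target names are made up):

Class Demo.GlobalFeedService Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    set tSC = $$$OK, id = ""
    for {
        // walk the staging global with $order
        set id = $order(^StagingData(id), 1, data)
        quit:id=""
        set req = ##class(Ens.StringRequest).%New()
        set req.StringValue = data
        // asynchronous call: only the enqueueing happens here, and this is
        // the step that takes a few milliseconds per message
        set tSC = ..SendRequestAsync("My.BusinessProcess", req)
        quit:$$$ISERR(tSC)
    }
    quit tSC
}

}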

David.Satorres6134 · Jan 14, 2019

Finally we identified a couple of bottlenecks using the MONLBL utility... none of them was the message enqueueing. So we redid the production and it is now feeding more messages into the queue.

Thank you all for your help :-)

David.Satorres6134 · Sep 12, 2019

5.

I've seen weird numbers that make no sense. For example, for the last hour three components have high values:

That's why I'd like to know how it is calculated :)

David.Satorres6134 · Sep 13, 2019

Pool size is 5 :-)

I've been looking for an example of one item that takes a little long, as most of them complete in less than 0.3 seconds. This is a full trace:

I can see the component that takes long, but it doesn't take nearly as long as the average shown in the analytics :)

David.Satorres6134 · Sep 13, 2019

I agree that the time of all other processes is included. But how do you decide the [16] value?
Anyway, analytics is now showing 500, so I'd like to know how the calculations are done. I guess it's best if I go to the WRC.

David.Satorres6134 · Apr 11, 2019

I can't remove the question, so I'll answer it myself. The problem was in the routine called by the queue: a "lock" instruction seemed to bring everything down.

So, removing the line solved the problem :-) Now I just need to find out why locking a global messes the whole thing up :-O
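
For what it's worth, using a timeout on the lock (a sketch, with a made-up global name) at least keeps the job from hanging forever when another process holds the lock:

    lock +^QueueData(id):2       // wait at most 2 seconds for the lock
    if '$test {
        // lock not obtained: skip or retry instead of blocking the job
        quit
    }
    // ... work with ^QueueData(id) ...
    lock -^QueueData(id)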

My mistake... the ODBC driver name was wrong :'( I can connect now :)

But I see that I have to rewrite the functions, because the prepare, execute, fetch, etc. attributes don't exist in pyODBC :'(

UPDATE: changing just a few lines allowed me to work with pyODBC.

David.Satorres6134 · Feb 20, 2020

Hi Mikhail,

Where did you get the information about the meaning of the values returned by $system.ECP.GetProperty("ClientStats")?
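
A trivial way to at least see how the raw value is encoded (working out what each field means is exactly the question):

    // dump the raw ClientStats value to inspect its structure
    set stats = $system.ECP.GetProperty("ClientStats")
    zwrite stats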

Nice job, anyway ;-)

David.Satorres6134 · Feb 20, 2020

Thanks Alex.

But this other class also gives the number of bytes transferred and received ($p19 and $p20), along with a bunch of other numbers that I'd like to understand :-)

David.Satorres6134 · Jun 16, 2020

Hi Matthew,

Thanks for the answer. Can it be downloaded from the VS Code marketplace? I actually don't work with Docker :'(

David.Satorres6134 · Jun 18, 2020

Hi Dimitriy,

I'm struggling to create a valid multi.code-workspace with several servers connected, but I keep failing. Do you have an example somewhere?

David.Satorres6134 · Jun 20, 2020

I'll answer myself, in case someone runs into the same issue.

WRC response was:
The short answer is that unfortunately there is no stand-alone kit for Atelier, as it is distributed only as a plug-in for Eclipse, and as such it follows the official Eclipse distribution mechanism.

But they gave me a few hints. In the end, I downloaded the whole Atelier package using wget from another computer, zipped it, copied it over, and installed it as a local zip package. Worked like a charm :-)

David.Satorres6134 · Jun 20, 2020

Hi, I just managed to set it up and it's working.

One thing I miss: the ability to synchronise the code with the server. If somebody else has changed the code on the server, I don't see any alert or message, so if I compile a class I'll be overwriting it with old code. Is there any way to achieve this, like I can in Eclipse+Atelier?

Thanks!

David.Satorres6134 · Jun 24, 2020

Ok, thanks! I'll continue my integration when this update is available.

Very good job, by the way! :-)

David.Satorres6134 · Aug 10, 2020

Hi Dimitriy,

I saw you just released version 0.8.8 a few days ago, but if I'm not wrong the sync ability is still not there, is it?

David.Satorres6134 · Aug 11, 2020

Hi Robert,

Maybe I didn't make myself clear enough... customers can't reach the port:

$ telnet xxxxxxx.com 53773
Trying 172.23.2.84...
telnet: connect to address 172.23.2.84: Connection refused
telnet: Unable to connect to remote host: Connection refused

When I ask my Systems & Network department, they say that IRIS is only "listening" on localhost (127.0.0.1), and that's the reason we cannot reach the port.

My understanding is that IRIS is bound to the lo (loopback) interface instead of eth0. Am I completely wrong here?

David.Satorres6134 · Aug 11, 2020

Hi,

Yes, no problem with the SMP. Do I understand from your message that even if the JDBC gateway server is set up on port 53773, users need to point their JDBC connections to 51773 (the superserver port)?