I hope that the APIs (e.g. /api/monitor/metrics) will still work, so metrics can still be embedded into any monitoring tool that we are using, like Datadog.
Is it correct to assume this?
Could you please give additional information on how the data is being pulled?
You say "tables", so I assume you run SQL: is this done by a SELECT, or do you run a stored procedure (SP)?
Is this a local task on the server (running COS code), or is it done externally over an ODBC/JDBC connection?
Using the /LOGCMD flag is very useful for logging the resulting command line into messages.log, so you have easy access to it from the SMP.
Also, I/O redirection can be useful for linking input, output, and errors to files (on both Linux and Windows); a sketch follows below.
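For illustration, a minimal sketch of such a call with $ZF(-100) (the command and file names are just placeholders):

// /LOGCMD writes the resulting command line to messages.log;
// /STDOUT and /STDERR redirect the command's output and errors to files
set rc = $ZF(-100, "/SHELL /LOGCMD /STDOUT=""cmd.out"" /STDERR=""cmd.err""", "dir")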
I have tested the metrics REST API, and it seems that the line terminator being used is $C(10).
We will continue to use SAM for a while. Next year we plan to migrate to the Datadog monitoring tool, which is already being used in our company.
The /api/monitor/metrics API can still be used :-)
As you probably know, a stream is (under the hood) a "collection" of strings. Since $ZCRC doesn't support incremental hashing, you need to choose another hash that does support it: e.g. SHA-256, or MD5 (not recommended due to its security vulnerabilities).
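For example, a minimal sketch, assuming an IRIS version where $SYSTEM.Encryption.SHAHashStream() is available (it consumes the stream chunk by chunk for you):

// Hash a (potentially huge) file stream with SHA-256
set stream = ##class(%Stream.FileBinary).%New()
do stream.LinkToFile("/tmp/big.dat")  // placeholder path
set hash = $SYSTEM.Encryption.SHAHashStream(256, stream, .sc)
write $SYSTEM.Encryption.Base64Encode(hash)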
Very nice article. It's exciting to see that InterSystems developers are taking advantage of the (great) Embedded Python feature in IRIS.
If you want to see a "real life" use case (we have been using it for more than 2 years in our production environment), check this article: IRIS Embedded Python with Azure Service Bus (ASB) use case | InterSystems
(It also won 1st place in an InterSystems article competition in 2022.)
The best practice is to put a token (safely acquired by the sender), rather than a user/password, in the header. The token gives you authentication, authorization, and validity (an expiration date/time or retention period). The recipient can then verify those.
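For illustration, a minimal sketch with %Net.HttpRequest (host and path are placeholders, and the token is assumed to have been acquired already):

// Send a previously acquired token instead of user/password
set req = ##class(%Net.HttpRequest).%New()
set req.Server = "api.example.com"  // placeholder host
do req.SetHeader("Authorization", "Bearer "_token)
set sc = req.Get("/resource")  // placeholder path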
To increase your performance there are many factors to consider, and several tests need to be done in order to choose the best action (and approach). Some of them are:
1. It's highly recommended to identify the bottleneck before taking any action.
2. Hardware and infrastructure: Start by monitoring your infrastructure and hardware (network, memory, CPU) to check if there are any bottlenecks. Use Task Manager (or a similar tool on Linux) to see if one or more disks are exhausted (100% active); splitting the databases and/or other Caché components (e.g. journals, WIJ, IRISTEMP etc.) across different disks might solve that issue. Check if the server has enough memory. How many processes are there? Is the O/S using swap files when memory is low?
3. Check IRIS (or Caché) related issues: memory usage (are you allocating enough global and routine buffers?), heap size, etc. Are there any errors in console.log that might point to potential issues?
4. Some production-related things:
a. Can you run your process in parallel (pool size > 1)? Maybe the bottleneck is there.
b. Code: is your code optimized? Use MONLBL to find the most "overwhelmed" places in your code (see the snippet after this list).
c. Journaling & mirroring might slow things down if a lot of "temporary data" is journaled and mirrored.
This is just the tip of the iceberg... there are many more things that can be done.
If you feel lost, I recommend opening a WRC case to get help specific to your system.
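Regarding item 4b, the line-by-line monitor is an interactive utility; this is just the entry point, run from the %SYS namespace:

// Start the interactive line-by-line monitor (MONLBL)
do ^%SYS.MONLBL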
The @ (indirection) operator is not only used to write to a device (the spool is a special type of device, with device number 2).
You may use @ (indirection) to set any variable, array, list, object property or stream, while keeping it in memory:
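For example, a minimal sketch of name indirection (the names are arbitrary):

// Name indirection: the variable name itself is held in another variable
set name = "myVar"
set @name = 42
write @name,!  // prints 42
// The same works for a global reference with subscripts
set ref = "^MyGlobal(""x"")"
set @ref = "hello"
write @ref,!  // prints hello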
I would try to:
1. Check if there is a header for Content-Length (the client sets this).
2. As the %CSP.Request content is a stream, you might try to check its Size property (see the sketch after this list).
3. Find the global(s) that store this request (not sure which; maybe ^%csp.session or some CacheTemp.csp* global with the session ID). That would be a bit complex, since it's not documented.
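For the first two items, a minimal sketch inside a CSP/REST handler (assuming the standard %request object of class %CSP.Request):

// 1. The Content-Length header as sent by the client
set len = %request.GetCgiEnv("CONTENT_LENGTH")
// 2. If the body was parsed into a stream, check its Size property
if $isobject(%request.Content) write %request.Content.Size,!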
Both MAC routines and class methods are compiled into INT routines, which are then compiled into OBJ (binary) code that is executed. To achieve better performance, try to make your code compact and efficient.
You have these 2 options:
1. Use $ZF(-1) or $ZF(-100) to execute a command line in the OS.
2. Use Embedded Python (if your version of IRIS has it), where you can use os.getpid():
>Write $system.Python.Shell()
Python 3.9.5 (default, Jul 8 2023, 00:24:17) [MSC v.1927 64 bit (AMD64)] on win32
Type quit() or Ctrl-D to exit this shell.
>>> import os
>>> print(os.getpid())
16232
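The same can be done non-interactively from ObjectScript (a minimal sketch, assuming Embedded Python is available on your version):

// Import the Python os module and call getpid() from ObjectScript
set os = ##class(%SYS.Python).Import("os")
write os.getpid(),!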
For simple questions, it's working fine. If I want to refine my question after the answer, I need to add to the original question, and I'm not sure whether sessions are persistent (like in ChatGPT, where I can hold a "conversation").
When a question that is too long and complex is entered, I get:
"This is beyond my current knowledge. Please ask the Developer Community for further assistance."
On the plus side, some (simple) questions get good referrals to the documentation or community pages.
Hi Scott,
My remarks:
1. As already mentioned by @David.Satorres6134, global mapping of specific globals (or subscript-level mapping) to different databases (located on different disks) may give you a solution for space, and may also increase your overall performance.
2. Using ^GBLOCKCOPY is a good idea when there are many small globals. For a very big global it will be very slow (since it uses 1 process per global), so I recommend writing your own code and using the work queue manager to run merges between databases for a single global in parallel; a sketch of this follows below.
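A minimal sketch of that idea (the class, method, and subscript layout are all hypothetical):

// Fan out one big global copy across worker jobs, one per top-level subscript
set wqm = $SYSTEM.WorkMgr.Initialize(,.sc)
for sub = 1:1:10 {
    // MyApp.Copy.CopyRange is a hypothetical method that merges
    // ^BigGlobal(sub) from the source DB into the target DB
    set sc = wqm.Queue("##class(MyApp.Copy).CopyRange", sub)
}
set sc = wqm.WaitForComplete()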
I would go with an (old) approach for pagination:
1. Store only the IDs per page in a temporary table
2. For any specific page, get the IDs and then query the data from the main table
The pagination class, the function to populate it, and the code for specific-page data retrieval:
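A minimal sketch of the whole approach (Demo.Person, Demo.Pagination and all other names are hypothetical placeholders):

Class Demo.Pagination Extends %RegisteredObject
{

/// Run the expensive query once, storing only page -> ID pairs
/// in a process-private global
ClassMethod PopulatePages(pageSize As %Integer = 50)
{
    set rs = ##class(%SQL.Statement).%ExecDirect(,"SELECT ID FROM Demo.Person ORDER BY Name")
    set n = 0
    while rs.%Next() {
        set n = n + 1
        set ^||Pages(((n - 1) \ pageSize) + 1, n) = rs.%Get("ID")
    }
}

/// Fetch only the rows whose IDs belong to the requested page
ClassMethod ShowPage(page As %Integer)
{
    set n = ""
    for {
        set n = $order(^||Pages(page, n), 1, id)
        quit:n=""
        set person = ##class(Demo.Person).%OpenId(id)
        if $isobject(person) write person.Name,!
    }
}

}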
Hello Jignesh,
I guess that you refer to a sync mirror (e.g. a failover pair: primary/backup). People here mentioned unplanned downtime (hardware and OS failures), but there is another advantage: better HA for planned downtime, for:
- IRIS maintenance => move a DB from disk to disk, split/move data from one DB to another
- Hardware maintenance => VM re-size, add/change data disks
- O/S maintenance => O/S patches, updates
Benefits:
1. All those activities are possible with 0 downtime (since the two members are 100% identical)
2. There is no data loss when there is a switch, whether automatic (due to a failure) or manual (due to "planned" maintenance)
3. RTO is usually just a few seconds, depending on the complexity of your application/interoperability
4. A manual switch lets you do the necessary work on the "backup", switch manually (i.e. make the "backup" the "primary"), and then do the same work on the other member.
Thanks Robert. I have been working with InterSystems (and other M technologies) since 1991...
Hi,
You said the issue was on only 1 server, and that you could fail over to the mirror backup server, which could connect to LDAP from within IRIS. I assume you ran d TEST^%SYS.LDAP to check connectivity.
If only 1 server can't connect, I would ask myself (investigate): "what was changed?"
Using REDEBUG could help to see more information about the issue.
In any case, I recommend opening a WRC case if you cannot find the root cause.
The ^IRIS.WorkQueue global is located in (mapped to) the IRISLOCALDATA database, which stores internal IRIS temporary data, including work queue manager data.
This DB is cleaned/purged at IRIS startup, but you may compact and truncate it while IRIS is running (either manually or programmatically).
Hello,
When several instances are connected to a remote DB, all their locks are managed on the IRIS instance where this DB is local (let's call it "the source").
I recommend increasing the "gmheap" parameter on "the source" to have more space for the lock table.
There are utilities to check how much of the lock table is free or whether it is full; see the example below.
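For instance, a hedged example (assuming the SYS.Lock API, available from the %SYS namespace):

// Report lock table space: available, usable, and used
zn "%SYS"
write ##class(SYS.Lock).GetLockSpaceInfo(),!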
According to the Snowflake documentation (https://docs.snowflake.com/en/user-guide/intro-key-concepts), it seems that you may use ODBC and JDBC, so the SQL Gateway can be used (SQL Gateway Connections | InterSystems Programming Tools Index | InterSystems IRIS Data Platform 2019.1).
There are also native connectors (e.g. Python). Embedded Python is not available on IRIS 2019.2; you may consider an upgrade to IRIS 2021.2.
The CSP gateway has a "mirror aware" option that will always point you to the primary of a failover pair. This works most of the time, but in rare cases it keeps a connection disabled after a primary switch.
Another option is to use an external load balancer that has some kind of "health probe". Then you could have a simple REST API endpoint (called by that health probe) that returns 200 for the primary and 404 (or 500) for the backup. This way, going through that LB will always point you to the primary.
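A minimal sketch of such a probe endpoint (the class name and route are hypothetical):

Class Demo.MirrorProbe Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/probe" Method="GET" Call="Probe"/>
</Routes>
}

/// Return 200 only when this instance is the mirror primary,
/// so the load balancer routes traffic here
ClassMethod Probe() As %Status
{
    if $SYSTEM.Mirror.IsPrimary() {
        set %response.Status = ..#HTTP200OK
    } else {
        set %response.Status = ..#HTTP404NOTFOUND
    }
    quit $$$OK
}

}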
You can't directly do that in a BPL, since %DynamicArray doesn't have persistence methods. You may convert your %DynamicArray into an ObjectScript array, or serialize your data into JSON that can be passed to the BPL as a string.
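For example, a minimal sketch of the JSON route (context.MyJson is a hypothetical string property on the BPL context):

// Serialize the dynamic array into the BPL context as a string...
set arr = ["a", "b", "c"]
set context.MyJson = arr.%ToJSON()
// ...and rebuild it later where needed
set arr2 = ##class(%DynamicArray).%FromJSON(context.MyJson)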
If some of those 3000 classes are divided into different packages, I would try to do the load in "segments" (using the work queue manager, to have this done in parallel). This might speed things up a bit.
The SMP portal "About" page has an option to choose the language. However, this persists for the current session only (in the %session object). I would try to go with the solution proposed by @Raj Singh:
to use a browser add-on that can modify HTTP headers (e.g. the HTTP_ACCEPT_LANGUAGE CGI variable).
InterSystems could think of adding a user-defined language, but not on the user profile, since non-local users (e.g. LDAP) are not persistent; so a global like ^ISC.someName(user)=Language could be the "best" way.
We don't want to (or can't) modify the portal classes (some of them don't have source code).
This is a good candidate for "InterSystems Ideas".
You are correct, but what I suggested is a way to also check that interoperability is running on that specific server.
You have 2 other options:
1. Use SQL against the %SYS.Task class/table: delete from %SYS.Task where id = taskID
2. Set sc = ##class(%SYS.Task).%DeleteId(taskID)
Maybe there are dependencies for that class (e.g. the class depends on another class or classes that need to be compiled first)? Try adding the "r" (recursive) flag, so your flags will look like: "ckr".
Does your class have a relationship property to another class?
In that case, you might want to consider using the "CompileAfter" or "DependsOn" class keywords (these help the compiler determine the correct compile order), as in the example below.
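For example (class names hypothetical):

/// Ensure Demo.Parent is compiled before this class
Class Demo.Child Extends %Persistent [ DependsOn = Demo.Parent ]
{
}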