Great news.
The only comment I would add is that the performance chart in this article would be easier to quickly digest if all of the values were in ms.
One aspect that has been lost in the transition is to have the Search Dialog always on the page. If you go to this page as an example http://docs.intersystems.com/beta/csp/docbook/DocBook.UI.FramePage.cls?… in order to initiate a new search it appears I have to
Whereas in the old system I could
I would vote to make the search box visible at all times.
Just checking, as I still see Technical Articles: I assume if you add content to DOCBOOK this additional content will still be part of this new UI?
Given that you guys have opened the hood, while not specifically a DOCBOOK request, I would like to see https://www.chromium.org/tab-to-search supported for DOCBOOK content. As an example, when using Google Chrome I'd like to be able to type
docs.intersystems.com [tab] {SearchTerm}
and have the page respond with a list of results based on {SearchTerm}
I understand there may be issues with what version of DOCBOOK to show but it might be useful to just show results from the latest version.
Thanks, that is one way to do it. Might this be something that is fixed more formally in the future?
While the cost may be high, the real question might be better focused on the performance. Relative cost is just that, relative.
What does the rest of the query look like?
What is the time to first row, time for all of the data?
Does the query plan utilize the index on the StartDate column?
I'm generally looking at the query plans either from the SMP or from the context menu in Studio while writing class queries/embedded sql statements.
One issue that I've seen is that while the query plan output is very good, and in many cases better than what other DBMSs provide, when a sub-query is part of your query statement it's not exactly clear where it is utilized in the query plan. For example, I have this query plan

I cannot tell with 100% certainty where "subquery" is called.
If you can share the Show Plan information for each query, that will probably add some insight. Given that the second query has only one column, it may be that there are additional adjustments to the query that would yield better performance, although this does not directly address your specific question as to why it is much slower. At the same time, a query that is taking well over 30 minutes suggests something is not quite right.
While not specifically an answer to your question I have used Class Projections as a way to detect when a class is compiled or removed. This will not allow you to see the changes between classes but could be useful to see when classes are compiled.
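As a sketch of the projection approach (the class name and logging global here are purely illustrative, and the override signatures are from memory, so check them against %Projection.AbstractProjection):

```
Class App.CompileLogger Extends %Projection.AbstractProjection
{

ClassMethod CreateProjection(classname As %String, ByRef parameters As %String) As %Status
{
    // Record each compile of any class that declares this projection
    Set ^CompileLog(classname, $ZDateTime($Horolog, 3)) = "compiled"
    Quit $$$OK
}

ClassMethod RemoveProjection(classname As %String, ByRef parameters As %String, recompile As %Boolean) As %Status
{
    // Called on delete and also just before a recompile; only log true removals
    If 'recompile Set ^CompileLog(classname, $ZDateTime($Horolog, 3)) = "removed"
    Quit $$$OK
}

}
```

A class then opts in by adding `Projection CompileLog As App.CompileLogger;` to its definition.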
If you have more recent versions of Cache you will likely benefit by using %PARALLEL especially if you have a large number of cores for your environment.
A couple of ideas:
If it's for debugging, I use $System.OBJ.Dump(oRef)
If it's in application code, I ran across something that looked like
Set prop = $$$comMemberNext(sourceClass,$$$cCLASSproperty,"")
While prop '= "" {
    // Use global lookup instead of a %Dictionary.ClassDefinition query so users don't need privileges on that table
    Set tPropClass = $$$comMemberKeyGet(sourceClass,$$$cCLASSproperty,prop,$$$cPROPtype)
    Set prop = $$$comMemberNext(sourceClass,$$$cCLASSproperty,prop)
}
I would be interested in the English version as well.
I've used the Security.Users class in %SYS, as well as Security.Roles. In both cases the documentation suggests you should use the methods to interact with the data, i.e. call the Create, Get, and Delete methods found in the corresponding class.
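For example (run in the %SYS namespace; the user name, role, and exact argument order are from memory and should be double-checked against the Security.Users class reference):

```
 ; switch to %SYS first
 Set $Namespace = "%SYS"
 ; create a user via the documented API rather than direct SQL/object access
 Set tSC = ##class(Security.Users).Create("jsmith", "%Developer", "SomePassword1", "Jane Smith")
 ; fetch a user's properties into a local array
 Set tSC = ##class(Security.Users).Get("jsmith", .props)
 Write props("FullName"),!
 ; remove the user
 Set tSC = ##class(Security.Users).Delete("jsmith")
```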
Depending on what exactly you mean by the same structure, you could consider using a Serial class definition that is embedded in both of your classes. However, if the structure is stored across several global nodes, I do not think you could use this, as the Serial class would define the pieces (whether delimited or $ListBuild pieces) and then the serial property is described in your two classes to occupy a single node.
You could also consider defining a single abstract class that describes the properties and have your two classes inherit from that abstract class.
With regard to having a variable defining the data location, I suspect that it may not be doable; even if it were, I don't see how it improves things in a significant way.
With your updated problem description, where your globals only contain a single node, I would consider the serial/embedded approach. This still doesn't address your request for a variable name for the storage, but it does mean that for the columns/properties you can define them once in the serial class and then embed it in the classes that represent global A and global B. It also means that in the storage map you just have to define the data node as the serial property.
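A sketch of that serial/embedded approach (all class and property names here are made up for illustration):

```
Class App.CommonData Extends %SerialObject
{
Property Name As %String;
Property Amount As %Numeric;
}

/// Stored in global A; with default storage the whole serial object
/// serializes into a single piece of the parent's data node
Class App.RecordA Extends %Persistent
{
Property Data As App.CommonData;
}

/// Stored in global B; reuses the same column definitions
Class App.RecordB Extends %Persistent
{
Property Data As App.CommonData;
}
```

The point of the design is that the column list lives in exactly one place (App.CommonData), while each persistent class keeps its own storage map and global.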
Let's say we could get it to work such that the storage map is based on a variable, I suspect other things wouldn't work. For example, in Documatic when you view a class you can select the Storage checkbox and get a display that looks like
I suspect if somehow you can get the storage map to be dynamic and based on a variable name this display would fail or not show you the value of the variable.
You wrote
BUT the server is 6 times faster if OpenId replaced with simple read of a large global (s p=^someLargeGlobal). Any ideas what makes OpenId so slow only on the server?
While not specifically answering your general question, note that opening an object is very different from
Set p=^SomeLargeGlobal
When you call %OpenId, the object system allocates an object, applies the requested concurrency (locking), and loads the stored data into that object, so
Set p=^SomeLargeGlobal
and %OpenId can be quite different, at least academically.
If your class has relationships and/or serial/embedded classes, I do not believe those properties are fully loaded into memory, but I could be incorrect.
In practice, if I need an object I use objects, if I need only a small part of the object/row I'll use embedded SQL for better performance.
Again this does not specifically answer your general question but I think it is useful to understand what %OpenId does and why it's not the same as
Set p=^SomeLargeGlobal.
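To make the contrast concrete (the class, column, and global names below are illustrative; ^My.PersonD assumes default storage):

```
 ; loads the entire object: lock/concurrency handling, object allocation, all the data
 Set person = ##class(My.Person).%OpenId(id)
 Write person.Name,!

 ; a plain global read just pulls one node into a local variable
 Set p = ^My.PersonD(id)

 ; embedded SQL: fetch only the one column you actually need
 &sql(SELECT Name INTO :name FROM My.Person WHERE %ID = :id)
 If SQLCODE = 0 Write name,!
```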
I'm not sure I completely understand your question but one thing I have had to use recently is found here https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=D2MODEL_prop_for_list
I had a level which was a list, and I too wanted to define properties for the level, where the property applies to each element of the level/list. In my case I defined my level to run off an expression, where my expression returned a list of values. Then for my property definition I used an expression as well; in the expression, I called a method passing %value.
Hope this gives you something to go on.
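The pattern I described looks roughly like this inside the cube class's XData block (the utility class and method names are placeholders):

```
<!-- inside the cube definition's XData Cube block -->
<level name="Codes" list="true"
       sourceExpression='##class(My.CubeUtil).GetCodeList(%source.%ID)'>
  <!-- for a list level, %value is each individual element of the list -->
  <property name="CodeDescription"
            sourceExpression='##class(My.CubeUtil).GetCodeDescription(%value)'/>
</level>
```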
Not to my knowledge. While there is a global node in the storage map that is used to get the next available Id, this would only work on tables/objects based on a single integer id. At the same time, this is the next available Id and does not account for physical deletes that may have occurred, i.e. the next Id might be 101 but you may have fewer than 100 rows/objects, as some may have been deleted. The simplest way to accomplish this would then be to perform a SELECT COUNT(*) FROM TableName. If the table implements bitmap indices this should take milliseconds. If you don't get the performance you want, you might consider adding %PARALLEL to the FROM clause and letting the optimizer decide if it makes sense to split the job.
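That is, something like this (TableName is a placeholder):

```
SELECT COUNT(*) FROM TableName

-- or, letting the optimizer decide whether to split the work across processes:
SELECT COUNT(*) FROM %PARALLEL TableName
```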
You could also use ZEN Reports and the <barcode> element to render a number of barcode types
I used to work at IDX under the division that produced what was called IDX Flowcast. IDX Flowcast is a practice management system for large practice/academic medical centers. Groupcast, on the other hand, was for the small practice environment; I don't recall the exact number of doctors used as the threshold cutoff. I do not believe Groupcast is based on InterSystems Cache, but I could be incorrect. If the system is Flowcast, aka GE Centricity Business, then almost all of the data is exposed via Cache classes that are based on %SQLStorage and hence would be exposed via any SQL client interface connecting to Flowcast/GE Centricity Business using ODBC/JDBC.
Unless things have changed with IRIS, I generally prefer to use triggers over any of the object callback implementations. Properly defined triggers will be executed whether you are doing an object save or a SQL INSERT/UPDATE/DELETE. You may only want to perform the code during an object save, but I figure why not implement the code in a trigger, where you know it will always be executed. Additionally, triggers provide the
{ColumnName*N}
{ColumnName*O}
syntax, which is valuable.
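For example, a hypothetical audit trigger using the old/new value syntax (the Name column and ^Audit global are just for illustration):

```
Trigger AuditNameChange [ Event = UPDATE, Foreach = row/object, Time = AFTER ]
{
    // {Name*O} is the value before the update, {Name*N} the value after
    If {Name*O} '= {Name*N} {
        Set ^Audit($Increment(^Audit)) = $ListBuild({Name*O}, {Name*N}, $Horolog)
    }
}
```

Because Foreach = row/object covers both SQL statements and object saves, the audit entry is written no matter which path changed the row.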
Can you share one of the SQL statements you wrote? Based on the table names I should be able to tell which GE system this is actually against.
Ok, so this is definitely Centricity Business aka Flowcast and not GroupCast.
Generally speaking your query looks correct, but here are some considerations:
The join is incorrect. Following your exact FROM clause, you would consider
FROM Registration.Patient reg
JOIN BAR.Invoice BAR on BAR.GID = Reg.ID
JOIN Dict.Provider prov on prov.Id=BAR.prov
There is an index on bar.invnum so there is no issue with indices defined.
Note that properties/columns are properly typed in these classes so you could make the statement more concise by doing
SELECT Grp,
GID->PatNm As Guarantor,
GID->MRN As MRN,
Prov->Name As Provider,
SetDt
FROM BAR.Invoice
WHERE InvNum BETWEEN 63882965 AND 64306671
Late in replying, but the difference between sourcing data from Cache vs a warehouse/data mart is that Cache can provide you real-time information, whereas a warehouse/data mart can have some degree of staleness, but that's likely obvious. The advantage of a warehouse/data mart is that you could bring in other data and join against it. At the same time, there would be nothing to stop you from bringing external data into the HSPI namespace. We at Ready Computing have extensive experience with reporting on the HSPI data. This includes several ZEN reports, although note that the ZEN reports are just calling SQL stored procedures we wrote. We also have DeepSee cubes defined that provide analysis on both the Patient table and the Classified pairs data. It should be noted that the Classified pairs table has a number of indices defined to support most use cases for SQL queries. Lastly, we've not found issues with the definition of the Patient table as far as performance goes.
I don't think your solution works long term: someone can regenerate the record map, and if your script isn't run then the property would be removed. To answer your last question, I think you would have better success if you define the property like
Property InsertDate As %UTC [ ReadOnly, SqlComputeCode = {set {*}=##class(%UTC).NowUTC()}, SqlComputed, SqlComputeOnChange = %%INSERT ];
I'm not 100% certain but the initial expression may only be executed as part of an object implementation but not part of an SQL statement. If the RecordMap code is actually doing SQL inserts this may produce better results.
Some of the reasons why I focus on utilizing class queries include
Class Queries are really worth investing in IMHO.
The IDX system is oftentimes partitioned by Group (GRP). Additionally, I suspect the 86M records do not represent invoices for a single year. Using %SYSTEM.WorkMgr you could break the job up into smaller jobs by GRP and/or InvCrePd or YEAR(BAR.Invoice.SerDt)
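A rough sketch of the %SYSTEM.WorkMgr approach (the ^GroupIndex global and My.Batch.ProcessGroup classmethod are hypothetical; ProcessGroup would run the query for one GRP):

```
 Set queue = $System.WorkMgr.%New()
 ; queue one unit of work per group
 Set grp = ""
 For {
     Set grp = $Order(^GroupIndex(grp))  Quit:grp=""
     Set tSC = queue.Queue("##class(My.Batch).ProcessGroup", grp)
 }
 ; block until all queued work items have completed
 Set tSC = queue.WaitForComplete()
```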
You can look at the contents of zenutils.js to see the actual details of the zen(id) function.
This article refers to https://github.com/es-comunidad-intersystems/IRIS-in-Astronomy but when I tried going there I got a 404 Page not found. Maybe the repo is not public?