Stephen Canzano · Feb 4, 2016

Great news.

The only comment I would add: I think the performance chart in this article would be easier to digest quickly if all of the values were in ms.

Stephen Canzano · May 17, 2016

One aspect that has been lost in the transition is having the Search Dialog always visible on the page.  If you go to this page as an example, http://docs.intersystems.com/beta/csp/docbook/DocBook.UI.FramePage.cls?…, then in order to initiate a new search it appears I have to:

  1. Select the back button in the top left-hand corner
  2. Scroll the left-hand menu (the list of books) to the very top
  3. Place my cursor in the search field and enter a new search term.

Whereas in the old system I could 

  1. Press the [Home] key to be taken to the top
  2. Enter a new search term.

I would vote to make the search box visible at all times.

Stephen Canzano · May 17, 2016

Just checking: I still see Technical Articles, so I assume that if you add content to DOCBOOK, this additional content will still be part of the new UI?

Stephen Canzano · May 17, 2016

Given that you guys have opened the hood, and while not specifically a DOCBOOK request, I would like to see https://www.chromium.org/tab-to-search supported for DOCBOOK content.  As an example, when using Google Chrome I'd like to be able to type

docs.intersystems.com [tab] {SearchTerm} 

 and have the page respond with a list of results based on {SearchTerm}

I understand there may be issues with what version of DOCBOOK to show but it might be useful to just show results from the latest version.

Stephen Canzano · Jul 14, 2016

Thanks, that is one way to do it.  Might this be something that is fixed more formally in the future?

Stephen Canzano · Dec 26, 2019

While the cost may be high, the real question might be better focused on actual performance.  Relative cost is just that: relative.

What does the rest of the query look like?

What is the time to first row, time for all of the data? 

Does the query plan utilize the index on the StartDate column?

Stephen Canzano · Jan 10, 2020

I'm generally looking at the query plans either from the SMP or from the context menu in Studio while writing class queries/embedded SQL statements.

One issue that I've seen is that while the query plan is very good, and in many cases better than what other DBMSs provide, when a subquery is part of your query statement it's not exactly clear where it is utilized in the plan.  For example, I have a query plan where I cannot tell with 100% certainty where the "subquery" is called.

Stephen Canzano · Apr 28, 2016

If you can share the Show Plan information for each query, that will probably add some insight.  Given that the second query has only one column, it may be that there are additional adjustments to the query that would yield better performance, although this does not directly address your specific question as to why it is much slower.  At the same time, a query that is taking well over 30 minutes suggests something is not quite right.

Stephen Canzano · Sep 13, 2016

While not specifically an answer to your question, I have used class projections as a way to detect when a class is compiled or removed.  This will not allow you to see the changes between classes, but it could be useful for seeing when classes are compiled.
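As a rough sketch of that approach (class, method, and global names here are hypothetical), you define a projection class and then declare it in any class you want to monitor:

/// Hypothetical projection class that logs compile and remove events
Class MyApp.CompileAudit Extends %Projection.AbstractProjection
{

/// Called when a class that declares this projection is compiled
ClassMethod CreateProjection(classname As %String, ByRef parameters As %String) As %Status
{
    Set ^MyApp.CompileLog(classname, $ZDateTime($Horolog, 3)) = "compiled"
    Quit $$$OK
}

/// Called when that class is removed (or is about to be recompiled)
ClassMethod RemoveProjection(classname As %String, ByRef parameters As %String, recompile As %Boolean) As %Status
{
    Set:'recompile ^MyApp.CompileLog(classname, $ZDateTime($Horolog, 3)) = "removed"
    Quit $$$OK
}

}

Any class you want to track then simply declares: Projection Audit As MyApp.CompileAudit;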

Stephen Canzano · Dec 2, 2016

If you have a more recent version of Caché, you will likely benefit from using %PARALLEL, especially if your environment has a large number of cores.
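As a hedged sketch (the table and column names here are hypothetical), %PARALLEL goes at the end of the FROM clause and the optimizer decides whether splitting the work across cores is worthwhile:

    Set sql = "SELECT Account, SUM(Amount) As Total FROM MyApp.Transaction %PARALLEL GROUP BY Account"
    Set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
    While rs.%Next() { Write rs.Account, ": ", rs.Total, ! }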

Stephen Canzano · Dec 13, 2016

A couple of ideas:

If it's for debugging, I use $System.OBJ.Dump(oRef).

If it's in application code, I ran across something that looked like this:

Set prop = $$$comMemberNext(sourceClass, $$$cCLASSproperty, "")
While prop '= "" {
    //  Use global lookup instead of %Dictionary.Class query so users don't need privileges on that table
    Set tPropClass = $$$comMemberKeyGet(sourceClass, $$$cCLASSproperty, prop, $$$cPROPtype)
    Set prop = $$$comMemberNext(sourceClass, $$$cCLASSproperty, prop)
}

Stephen Canzano · Apr 3, 2020

I've used the Security.Users class in %SYS, as well as Security.Roles.  In both cases the documentation suggests you should use the methods to interact with the data, i.e., call the Create, Get, and Delete methods found in the corresponding class.
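For example, a minimal sketch (the username is hypothetical, and the code has to run in the %SYS namespace):

    New $Namespace
    Set $Namespace = "%SYS"
    // Read an existing user's properties into a local array
    Set sc = ##class(Security.Users).Get("jsmith", .props)
    If $System.Status.IsOK(sc) {
        Write "Full name: ", props("FullName"), !
        // Change a property and write it back with Modify()
        Set props("Enabled") = 1
        Set sc = ##class(Security.Users).Modify("jsmith", .props)
    }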

Stephen Canzano · Apr 15, 2020

Depending on what exactly you mean by the same structure, you could consider using a serial class definition that is embedded in both of your classes.  However, if the structure is stored across several global nodes I do not think you could use this, because the serial class would define the pieces (whether delimited or $ListBuild pieces) and the serial property would then be described in your 2 classes as occupying a single node.

You could also consider defining a single abstract class that describes the properties and have your 2 classes inherit from your abstract class.
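A rough sketch of the abstract-class route (class and property names are hypothetical, and each class would live in its own class file); the property definitions live in one place and each concrete class gets its own storage:

/// Common property definitions, no storage of its own
Class MyApp.CommonFields [ Abstract ]
{
Property Name As %String;
Property StartDate As %Date;
}

/// Each concrete class inherits the properties and maps them in its own storage definition
Class MyApp.TableA Extends (%Persistent, MyApp.CommonFields)
{
}

Class MyApp.TableB Extends (%Persistent, MyApp.CommonFields)
{
}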

With regard to having a variable define the data location, I suspect it may not be doable; even if it were, I don't see how it improves things in a significant way.

Stephen Canzano · Apr 15, 2020

With your updated problem description, where your globals only contain a single node, I would consider the serial/embedded approach.  This still doesn't address your request for a variable name for the storage, but it does mean that you can define the columns/properties once in the serial class and then embed it in the classes that represent global A and global B.  It also means that in the storage map you just have to define the data node as the serial property.
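A minimal sketch of that (class and property names are hypothetical); the columns are defined once in the serial class and each persistent class embeds it:

/// Defines the pieces once; serializes to a single $ListBuild value
Class MyApp.CommonData Extends %SerialObject
{
Property Name As %String;
Property StartDate As %Date;
}

/// Persistent wrapper for the first global; in its storage map the data node
/// would be the single Data property (storage definition not shown here)
Class MyApp.GlobalA Extends %Persistent
{
Property Data As MyApp.CommonData;
}

/// Same structure for the second global
Class MyApp.GlobalB Extends %Persistent
{
Property Data As MyApp.CommonData;
}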

Let's say we could get it to work such that the storage map is based on a variable; I suspect other things wouldn't work.  For example, in Documatic, when you view a class you can select the Storage checkbox and get a display of the storage definition.

I suspect if somehow you can get the storage map to be dynamic and based on a variable name this display would fail or not show you the value of the variable.

Stephen Canzano · May 1, 2020

You wrote

BUT the server is 6 times faster if OpenId replaced with simple read of a large global (s p=^someLargeGlobal). Any ideas what makes OpenId so slow only on the server?

While not specifically answering your general question, note that opening an object is very different from

Set p=^SomeLargeGlobal.

When you call %OpenId:

  1. Concurrency is checked.  While not widely used, there is a second parameter to the %OpenId method that controls the concurrency behavior, and you may want to consider providing the desired concurrency option.  In the past, if I was not concerned with concurrency because I just wanted to read the data, I would use option 0.
  2. Each and every property defined in your class is loaded into memory, i.e., the data is read from the global and then the individual properties are set accordingly.  Depending on how complex your class is, the difference between

Set p=^SomeLargeGlobal 

and %OpenId can be quite different, at least academically.

If your class has relationships and/or serial/embedded classes, I do not believe those properties are fully loaded into memory, but I could be incorrect.

In practice, if I need an object I use objects; if I need only a small part of the object/row, I'll use embedded SQL for better performance.
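As an illustration (the class name, property, and id are hypothetical):

    // Open with concurrency 0 (no locking) when the object is only being read
    Set obj = ##class(MyApp.Patient).%OpenId(id, 0, .sc)
    If $IsObject(obj) { Write obj.Name, ! }

    // Versus pulling just the one column that is needed: no object is loaded at all
    &sql(SELECT Name INTO :name FROM MyApp.Patient WHERE %ID = :id)
    If SQLCODE = 0 { Write name, ! }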

Again this does not specifically answer your general question but I think it is useful to understand what %OpenId does and why it's not the same as 

Set p=^SomeLargeGlobal.

Stephen Canzano · May 13, 2020

I'm not sure I completely understand your question, but one thing I have had to use recently is described here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=D2MODEL_prop_for_list

I had a level which was a list, and I too wanted to define properties for the level, where each property value applies to an element of the level/list.  In my case I defined my level to run off an expression that returned a list of values.  Then for my property definition I used an expression as well; in that expression, I called a method, passing %value.
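A rough sketch of what I mean (the cube, class, and method names are hypothetical); the level expression returns a $List, and the property expression is evaluated per element with %value:

Class MyApp.PatientCube Extends %DeepSee.CubeDefinition
{

XData Cube [ XMLNamespace = "http://www.intersystems.com/deepsee" ]
{
<cube name="Patients" sourceClass="MyApp.Patient">
  <dimension name="DiagnosisD">
    <hierarchy name="H1">
      <level name="Diagnosis" list="true"
             sourceExpression='##class(MyApp.CubeUtils).GetDiagnosisCodes(%source.%ID)'>
        <property name="Description"
                  sourceExpression='##class(MyApp.CubeUtils).GetDescription(%value)'/>
      </level>
    </hierarchy>
  </dimension>
</cube>
}

}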

Hope this gives you something to go on.

Stephen Canzano · May 17, 2020

Not to my knowledge.  While there is a global node in the storage map that is used to get the next available Id, this would only work for tables/objects based on a single integer Id.  Moreover, this is the next available Id and does not account for physical deletes that may have occurred, i.e., the next Id might be 101 but you may have fewer than 100 rows/objects because some may have been deleted.  The simplest way to accomplish this would be to perform a SELECT COUNT(*) FROM TableName.  If the table implements bitmap indices this should take milliseconds.  If you don't get the performance you want, you might consider adding %PARALLEL to the FROM clause and letting the optimizer decide if it makes sense to split the job.
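For instance, a hedged sketch with a hypothetical MyApp.Person class whose default storage lives in ^MyApp.PersonD:

    // The Id counter node holds the highest Id handed out so far, not the number of surviving rows
    Write "Id counter: ", $Get(^MyApp.PersonD), !

    // Counting the rows that actually exist; a bitmap index makes this fast,
    // and %PARALLEL lets the optimizer split the work if it decides that helps
    Set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT COUNT(*) As Total FROM MyApp.Person %PARALLEL")
    If rs.%Next() { Write "Row count: ", rs.Total, ! }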

Stephen Canzano · Jul 22, 2020

I used to work at IDX in the division that produced what was called IDX Flowcast.  IDX Flowcast is a practice management system for large practices/academic medical centers.  Groupcast, on the other hand, was for the small-practice environment; I don't recall the exact number of doctors used as the threshold cutoff.  I do not believe Groupcast is based on InterSystems Caché, but I could be incorrect.  If the system is Flowcast, aka GE Centricity Business, then most of the data is exposed via Caché classes that are based on %SQLStorage and hence would be exposed via any SQL client interface connecting to Flowcast/GE Centricity Business using ODBC/JDBC.

Stephen Canzano · Jul 27, 2020

Unless things have changed with IRIS, I generally prefer to use triggers over any of the object implementations.  Properly defined triggers will be executed whether you are doing an object save or a SQL INSERT/UPDATE/DELETE.  You may only want to perform the code during an object save, but I figure: why not implement the code in a trigger, where you know it will always be executed?  Additionally, triggers provide the

{ColumnName*N}

{ColumnName*O}

syntax, which is valuable.
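A rough sketch of how that looks in a class definition (the property and global names are hypothetical); Foreach = row/object is what makes the trigger fire for both object saves and SQL statements:

Trigger NameAudit [ Event = UPDATE, Foreach = row/object, Time = AFTER ]
{
    // {Name*O} is the value before the update, {Name*N} the value after
    If {Name*O} '= {Name*N} {
        Set ^MyApp.AuditLog($Increment(^MyApp.AuditLog)) = $ListBuild({%%ID}, {Name*O}, {Name*N})
    }
}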

Stephen Canzano · Aug 6, 2020

Can you share one of the SQL statements you wrote?  Based on the table names I'd be able to tell which GE system this is actually running against.

Stephen Canzano · Aug 9, 2020

OK, so this is definitely Centricity Business, aka Flowcast, and not Groupcast.

Generally speaking your query looks correct, but here are some considerations.

The join is incorrect.  Following your exact FROM clause, you would consider

FROM Registration.Patient reg
     JOIN BAR.Invoice bar ON bar.GID = reg.ID
     JOIN Dict.Provider prov ON prov.Id = bar.Prov

There is an index on BAR.Invoice.InvNum, so indices are not the issue here.

Note that properties/columns are properly typed in these classes, so you could make the statement more concise by using implicit (arrow) joins:

SELECT Grp,
       GID->PatNm As Guarantor,
       GID->MRN As MRN,
       Prov->Name As Provider,
       SetDt
FROM BAR.Invoice
WHERE InvNum BETWEEN 63882965 AND 64306671

Stephen Canzano · Sep 8, 2020

Late in replying, but the difference between sourcing data from Caché vs. a warehouse/data mart is that Caché will be able to provide you real-time information, whereas a warehouse/data mart can have some degree of staleness; but that's likely obvious.  The advantage of a warehouse/data mart is that you could bring in other data and join with that data.  At the same time, there would be nothing to prevent you from bringing external data into the HSPI namespace.  We at Ready Computing have extensive experience with reporting on the HSPI data.  This includes several ZEN reports, although note that the ZEN reports are just calling SQL stored procedures we wrote.  We also have DeepSee cubes defined that provide analysis on both the Patient table and the Classified pairs data.  It should be noted that the Classified pairs table has a number of indices defined to support most use cases for SQL queries.  Lastly, we've not found issues with the definition of the Patient table as far as performance goes.

Stephen Canzano · Oct 13, 2020

I don't think your solution works long term; someone can regenerate the record map, and if your script isn't run then the property would be removed.  To answer your last question, I think you would have better success if you define the property like this:

Property InsertDate As %UTC [ ReadOnly, SqlComputeCode = {set {*}=##class(%UTC).NowUTC()}, SqlComputed, SqlComputeOnChange = %%INSERT ];

I'm not 100% certain, but the InitialExpression may only be evaluated as part of the object implementation and not as part of an SQL statement.  If the RecordMap code is actually doing SQL INSERTs, this approach may produce better results.

Stephen Canzano · Oct 30, 2020

Some of the reasons why I focus on utilizing class queries:

  • Studio and other editors are much better at providing syntax coloring/checking for a class query than for a strategy of setting a tSQL string variable to some arbitrary SQL statement.
  • As mentioned in the original post, it provides the separation that allows for easy re-use and testing.  If I have a class query I can decide to expose it as a stored procedure, to ZEN reports, ODBC/JDBC, a JSON projection, or %XML.DataSet; I can use it to satisfy a ZEN control that binds to a class query; and so on.  Basically, it provides for great separation.
  • I also like the idea of first writing the statement using SQL as a %SQLQuery.  If for some reason I run into performance issues that I cannot overcome, I can change the implementation to %Query and implement COS code; the calling application sees the improved performance but does not have to understand the internal implementation.  However, I've only done this on extremely rare occasions.
  • I think one of the most important aspects of SQL is to always read the query plan; it's there that you can understand what the optimizer is going to do.  I care a great deal about the query plan and almost always validate what it reports.  Having a class query allows for easy Show Plan examination, whereas that's generally hard to do if you take the approach of setting a tSQL string (sometimes with concatenation over several lines).

Class Queries are really worth investing in IMHO.
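A small sketch of the pattern (the class, query, table, and column names are hypothetical); the SqlProc keyword also projects the query as a stored procedure for ODBC/JDBC clients:

Class MyApp.InvoiceQueries Extends %RegisteredObject
{

/// SQL-based class query; if it ever needed a hand-written implementation,
/// the type could be changed to %Query with Execute/Fetch/Close written in COS
Query RecentInvoices(pStartDate As %Date) As %SQLQuery [ SqlProc ]
{
    SELECT InvNum, Grp, SetDt
    FROM MyApp.Invoice
    WHERE SetDt >= :pStartDate
    ORDER BY SetDt
}

}

One classic way to consume it from ObjectScript:

    Set rs = ##class(%ResultSet).%New("MyApp.InvoiceQueries:RecentInvoices")
    Do rs.Execute(pStartDate)
    While rs.Next() { Write rs.Get("InvNum"), " ", rs.Get("SetDt"), ! }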

Stephen Canzano · Nov 1, 2020

The IDX system is oftentimes partitioned by Group (GRP).  Additionally, I suspect the 86M records do not represent invoices for a single year.  Using %SYSTEM.WorkMgr you could break the job up into smaller jobs by GRP and/or InvCrePd or YEAR(BAR.Invoice.SerDt).
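A rough sketch of that approach (the class, method, and number of groups are hypothetical):

    // Queue one unit of work per group and let the work queue manager spread it across cores
    Set queue = $System.WorkMgr.Initialize("", .sc)
    For grp = 1:1:50 {
        // ProcessGroup(grp) would handle one GRP's slice of the invoices
        Set sc = queue.Queue("##class(MyApp.InvoiceLoad).ProcessGroup", grp)
    }
    // Block until every queued group has finished
    Set sc = queue.WaitForComplete()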

Stephen Canzano · Apr 7, 2021

You can look at the contents of zenutils.js to see the actual details of the zen(id) function.