Dan Pasco · Feb 18

Dynamic SQL doesn't have to build the query implementation every time you run the same statement. Once you initially prepare an SQL statement, the implementation of that statement is cached and reused; the only overhead should be a hopefully very brief cache resolve. That said, it is possible to write a statement in a way that makes the hashing algorithm we use very inefficient. Think of it as a prepare-to-execute ratio of p:n - the larger 'n' is relative to 'p', the better, with 1:1 being the worst case. Since we now use a unified model, even embedded SQL produces a cached implementation; embedded SQL simply reduces the cache-resolve time.
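
To make the ratio concrete, here is a minimal sketch of the prepare-once, execute-many pattern using %SQL.Statement; the table and column names (Sample.Person, Name) are illustrative assumptions.

    set statement = ##class(%SQL.Statement).%New()
    // prepare() throws on error; after this call the cached implementation is reused
    do statement.prepare("SELECT Name FROM Sample.Person WHERE Name %STARTSWITH ?")
    // one prepare, many executes (p:n with a large n)
    for prefix = "A","B","C" {
        set result = statement.execute(prefix)
        while result.%Next() { write result.Name, ! }
    }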

Dan Pasco · Mar 21

I agree - $system.external is the correct interface. You can find out more about it by calling $system.external.Help().
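
For example, from a terminal session (the getJavaGateway() call is my assumption of a typical next step and requires a configured Java External Language Server):

    // print the built-in help for the $system.external interface
    do $system.external.Help()
    // obtain a gateway connection to the Java External Language Server
    set javaGateway = $system.external.getJavaGateway()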

Dan Pasco · Jun 17

Our JSON Path Language implementation, called "ASQ", is not very fast yet (we are working on that), but it works well for small sets of data. In the session below, ":pp" is a terminal alias; the terminal echoes its expansion, the do ##class(%ASQ.SetUtils).pp(...) call, on the following line.

LATEST:USER>:pp obj.apply("$[*].items[*].externalPositions[*]")
do ##class(%ASQ.SetUtils).pp(obj.apply("$[*].items[*].externalPositions[*]"))
[
  {
    "pureId": 0,
    "appointment": {
      "uri": "string",
      "term": {
        "en_GB": "Some text"
      }
    },
    "appointmentString": {
      "en_GB": "Some text"
    },
    "period": {
      "startDate": {
        "year": 0,
        "month": 1,
        "day": 1
      },
      "endDate": {
        "year": 0,
        "month": 1,
        "day": 1
      }
    },
    "externalOrganization": {
      "uuid": "196ab1c9-6e60-4000-8b89-29269178a480",
      "systemName": "string"
    }
  }
]
Dan Pasco · Jun 26

The compute triggers %%INSERT and %%UPDATE are, in my opinion, the best option. They are specifically supported for this purpose and are completely seamless between SQL and Objects. Callbacks and other options may not be seamlessly supported by both filers. A sketch of a property wired up this way follows below.
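
As a minimal sketch (the class and property names are illustrative), a property recomputed on both INSERT and UPDATE regardless of which filer does the work:

Class User.Person Extends %Persistent
{

Property FirstName As %String;

Property LastName As %String;

/// Recomputed on every INSERT and UPDATE, whether the row is filed
/// through SQL or through the object filer
Property FullName As %String [ SQLComputed, SQLComputeCode = { set {FullName} = {FirstName}_" "_{LastName} }, SQLComputeOnChange = (%%INSERT, %%UPDATE) ];

}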

The previous version of my reply described the order in which the data itself is stored. Sorry.

In response to my good friend Enrico's comment about "storage":

SELECT cc.Name as class_name, cs.Name as storage_name, cd.Name as data_node_name,
       cd.structure as structure, cd.subscript as subscript, cd.attribute as attribute,
       cv.name as value_name, cv.value as value
  FROM %Dictionary.CompiledClass cc
    JOIN %Dictionary.CompiledStorage cs on cc.%ID = cs.parent
    JOIN %Dictionary.CompiledStorageData cd on cs.%ID = cd.parent
    LEFT OUTER JOIN %Dictionary.CompiledStorageDataValue cv on cd.%ID = cv.parent
 WHERE cc.name = 'User.Person'

And a general comment about dynamic SQL. In the early days of Caché, someone made the decision to return a status value from methods to indicate success or failure. We didn't have try/catch at the time, so it made some sense. Now we have try/catch (regrettably, no finally), and dynamic SQL follows a pattern I started using many years ago: the traditional %CapitalCase methods return a %Status value, while the non-percent, lower camelCase methods throw an exception. For dynamic SQL, those are prepare() and execute().
The primary benefit of the exception-throwing interface is that you cannot completely ignore errors unless you use a specific pattern to do so.
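
A minimal sketch of that pattern (the variable names are illustrative):

    try {
        do statement.prepare(sql)          // throws on error instead of returning a %Status
        set result = statement.execute()   // likewise throws on error
    }
    catch ex {
        write "SQL error: ", ex.DisplayString(), !
    }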

LATEST:USER>zw sql
sql=6
sql(1)="SELECT cc.Name as class_name,cs.Name as storage_name,cd.Name as data_node_name,cd.structure as structure, cd.subscript as subscript,cd.attribute as attribute, cv.name as value_name, cv.value as value"
sql(2)="  FROM %Dictionary.CompiledClass cc"
sql(3)="    JOIN %Dictionary.CompiledStorage cs on cc.%ID = cs.parent"
sql(4)="    JOIN %Dictionary.CompiledStorageData cd on cs.%ID = cd.parent"
sql(5)="    LEFT OUTER JOIN %Dictionary.CompiledStorageDataValue cv on cd.%ID = cv.parent"
sql(6)=" WHERE cc.name = 'User.Person'"

LATEST:USER>do statement.prepare(.sql)

LATEST:USER>set result = statement.execute()

LATEST:USER>do result.%Display()
class_name    storage_name    data_node_name    structure    subscript    attribute    value_name    value
User.Person    Default    DescriptionNode    node    d    description
User.Person    Default    PersonDefaultData    listnode            1    name
User.Person    Default    PersonDefaultData    listnode            2    dob
User.Person    Default    PersonDefaultData    listnode            3    address
User.Person    Default    PersonDefaultData    listnode            4    counter

5 Rows(s) Affected

Perhaps I should have been clearer. I was responding to Enrico's comment about sequencing properties as they are stored in a global.

Back to your initial question: what is your definition of "sorted by Storage"?

If you need the ($list) position of properties within the global, then your query does not answer your question. I'm writing this in case other community members read this question/answer.

I am somewhat reluctant to show this, as the performance is not great (we are working on that). Also, keep in mind that the expression language is the JSON Path Language, which was standardized as part of the ISO SQL:2016 standard. We have extended it a bit and have also implemented it in a way that allows it to be used outside of SQL, which is not part of the standard.

In this snippet, the local variable 'j' is assigned the value of the JSON posted by the OP. I modified it to make it valid JSON (just added quotes here and there).

do ##class(%ASQ.SetUtils).pp(j.apply("$.items[*].identifiers[*]?(@.typeDiscriminator == 'ClassifiedId' && @.type.term.en_US == 'Scopus Author ID')"))

The result:

[
  {
    "typeDiscriminator": "ClassifiedId",
    "pureId": "xxxxxxxx",
    "id": "xxxxxxxx",
    "type": {
      "uri": "/dk/atira/pure/person/personsources/scopusauthor",
      "term": {
        "en_US": "Scopus Author ID"
      }
    }
  }
]

The performance of apply() can be improved significantly by parsing the expression first and then passing the parsed result to apply(). For one-off executions that isn't helpful, but if you are applying the same expression to different data, the improvement is significant.

LATEST:USER>set ast = ##class(%ASQ.Parser).parse("$.items[*].identifiers[*]?(@.typeDiscriminator == 'ClassifiedId' && @.type.term.en_US == 'Scopus Author ID')")

LATEST:USER>set result = j.apply(ast)

LATEST:USER>do ##class(%ASQ.SetUtils).pp(result)
[
  {
    "typeDiscriminator": "ClassifiedId",
    "pureId": "xxxxxxxx",
    "id": "xxxxxxxx",
    "type": {
      "uri": "/dk/atira/pure/person/personsources/scopusauthor",
      "term": {
        "en_US": "Scopus Author ID"
      }
    }
  }
]

You can also do this in SQL, but it is quite verbose: you have to map the JSON values to columns in the SELECT. I can provide a fuller example if it would be helpful.
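
As a rough, hedged sketch of the shape such a query takes, assuming an IRIS version with JSON_TABLE support (JSON_TABLE is part of SQL:2016; the exact syntax and the column definitions here are my assumptions, not a tested example):

SELECT t.typeDiscriminator, t.id, t.term
  FROM JSON_TABLE(:jsonDoc, '$.items[*].identifiers[*]'
       COLUMNS (
         typeDiscriminator VARCHAR(50)  PATH '$.typeDiscriminator',
         id                VARCHAR(50)  PATH '$.id',
         term              VARCHAR(100) PATH '$.type.term.en_US'
       )) t
 WHERE t.typeDiscriminator = 'ClassifiedId'
   AND t.term = 'Scopus Author ID'

Each JSON value you want back must be mapped to a column in the COLUMNS clause, which is what makes the SQL form verbose compared to apply().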