Sean Connelly · May 31, 2017

Hi Alexander,

Unless I am missing a cool trick, you can't do this directly...

Property DateOfBirth As %Date(JSONNAME = "BirthDate");

You would need to extend %Date with your own data type class and add the JSONNAME parameter to it, which means you end up with...

Property DateOfBirth As Cogs.Lib.Types.Date(JSONNAME = "BirthDate");

Which for me feels much more cumbersome, not to mention that developers are forced to change all of their existing code as well as amend any existing overridden data types that they use.
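For illustration, a minimal sketch of what such a data type could look like - the class name is taken from above, but the body is an assumption rather than the actual Cogs source:

/// Sketch only: a date type that carries an alternative JSON property name
Class Cogs.Lib.Types.Date Extends %Library.Date
{

/// Name to use for this property when projecting to and from JSON
Parameter JSONNAME As STRING;

}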

Unless I am missing another trick, I'm pretty sure you can't add these attributes to complex types, which, if I am right, is a show stopper anyway.

Annotations are just much easier to work with; I need them for methods as well, so it seems more in keeping to do it all this way.

Sean. 

Sean Connelly · May 31, 2017

OK, excellent, thanks for that. I seem to remember hitting a brick wall trying to get this to work many moons ago.

Sean Connelly · May 31, 2017

Excellent, just tried it and the value is accessible via ReturnTypeParams in the %Dictionary.CompiledMethod table.
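For anyone following along, it can be pulled out with something like this - assuming the same ID1 keying as the %Dictionary.CompiledProperty example later in this thread, with Foo.MyClass||MyMethod as a placeholder:

&sql(select ReturnTypeParams into :qParams from %Dictionary.CompiledMethod where ID1='Foo.MyClass||MyMethod')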

Sean Connelly · May 31, 2017

Thanks Rubens and Alexander. I've not even released the code yet and I'm already getting good ideas to improve things - open source at its best.

Given that return types can also be applied to methods, I am now weighing up native parameters vs annotations.

Any preferences?

Sean Connelly · May 31, 2017

Hi Alexy,

You've fished out a property that is of type Cogs.Lib.Types.Json.

In its property state the JSON is stored as a pure string, hence the odd escaping you are seeing.

When it's serialised back out to JSON it will be correctly escaped, which you can see in the JSON dump I posted before it.

This provides the best of both worlds: schema-driven properties alongside one or more non-schema properties for generic data storage.

Btw, Cogs includes JSON classes for serialising and deserialising to and from arrays and globals as well; interestingly, they are only 50 lines of code each, so it will be interesting to compare them.

Sean.

Sean Connelly · May 31, 2017

I ended up writing my own solution.

It's a TCP wire-based solution that uses JSON-RPC messages as the main protocol.

Node starts up a concurrent TCP listener and then Caché jobs off as many client connections as required.
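Roughly, the Caché end of each connection boils down to something like this - the class, host, port and frame handling are all placeholders and simplifications, so treat it as a sketch rather than the actual implementation:

/// Sketch only: one client connection to the Node TCP listener,
/// started with something like: Job ##class(My.RpcClient).Connect("127.0.0.1", 3000)
ClassMethod Connect(host As %String = "127.0.0.1", port As %Integer = 3000)
{
    Set dev = "|TCP|" _ $Job
    Open dev:(host:port):10
    If '$Test Quit
    Use dev
    For {
        // one minified JSON-RPC message per frame
        Read frame:30
        Quit:frame=""
        // a real handler would unpack the JSON, do the database work and build a response;
        // this sketch just echoes the frame back on the same connection
        Write frame, $Char(13,10), *-3
    }
    Close dev
}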

It's surprisingly simple on the Node side: minimal glue to bind HTTP requests to TCP messages with zero blocking.

I did quite a lot of testing on it at the time I wrote it and found that I could get twice as many RPC messages into Caché via Node than I could via CSP. My guess is that the RPC route does not have to deal with all of the HTTP protocol overhead.

I then wrapped the same event emitter used for the HTTP requests with a small promise caller and was able to do some testing of proxy objects inside Node itself. It's a little bit experimental on the Node side, but I am able to run the 30,000 browser unit tests (lots of automated ones in there) over the ORM library and it just works.

I'm not sure I would want to put it into production until it's been kicked around some more.

Sean Connelly · Jun 1, 2017

As requested, here are some snippets of the ORM library, which works for both browser and Node.js. These are from some of the 30,000 unit tests that I built on top of the Northwind database data.

The solution starts with a Caché class that extends the Cogs.Store class; this is just a normal %Persistent class with extra methods.

Class Cogs.CoffeeTable.Tests.Northwind.Customers Extends Cogs.Store
{

Parameter DOMAIN = "northwind";

Property CustomerID As %String;

Property CompanyName As %String;

Property ContactName As %String;

Property ContactTitle As %String;

Property Address As %String;

Property City As %String;

Property Region As %String;

Property PostalCode As %String;

Property Country As %String;

Property Phone As %String;

Property Fax As %String;

Index CustomerIDIndex On CustomerID [ IdKey, PrimaryKey, Unique ];

}

There are then two approaches to develop in JavaScript. The first is to include a client API script that is dynamically created on the fly; this includes a promise polyfill and an HTTP request wrapper. This is a good approach for small to medium projects.

In this instance there will be a global object called northwind that contains a set of database objects, each with a set of CRUD methods.

A basic example of using find...

northwind.customers.find().then( function(data) { console.log(data) } )

The second approach uses TypeScript and Browserify with a modern ES6 style.

A code generator produces a TypeScript Customer schema class...

import {Model} from 'coffeetable/Model';

export class CustomerSchema extends Model {

    static _uri : string = '/northwind/customers';

    static _pk : string = 'CustomerID';

    static  _schema = {
        Address : 'string',
        City : 'string',
        CompanyName : 'string',
        ContactName : 'string',
        ContactTitle : 'string',
        Country : 'string',
        Fax : 'string',
        Phone : 'string',
        PostalCode : 'string',
        Region : 'string',
        CustomerID : 'string'
    };

    CustomerID : string;
    Address : string;
    City : string;
    CompanyName : string;
    ContactName : string;
    ContactTitle : string;
    Country : string;
    Fax : string;
    Phone : string;
    PostalCode : string;
    Region : string;

}

as well as a model class which can then be extended without affecting the generated class...

import {CustomerSchema} from '../schema/Customer';

export class Customer extends CustomerSchema {

    //extend the proxy client class here

}

Now I can develop a large-scale application around these proxy objects and benefit from schema validation and automatic type conversions, as well as object auto-completion inside IDEs such as WebStorm.

Create and save a new object...

import {Customer} from "./model/Customer";

var customer = new Customer();
//Each one of these properties auto completed
customer.CustomerID = record[0];
customer.CompanyName = record[1];
customer.ContactName = record[2];
customer.ContactTitle = record[3];
customer.Address = record[4];
customer.City = record[5];
customer.Region = record[6];
customer.PostalCode = record[7];
customer.Country = record[8];
customer.Phone = record[9];
customer.Fax = record[10];
customer.save().then( (savedCustomer : Customer) => {
    console.log(customer)
}).catch( err => {
    console.log(err)
})

Open it...

Customer.open('ALFKI').then( customer => {
    console.log(customer.CompanyName);    
})

Search...

Customer.find({
    where : "City = 'London' AND ContactTitle = 'Sales Representative'"
}).then( customers => {
    console.log(customers);
});

The last example returns a managed collection of objects. In this instance the second approach includes a more sophisticated client library to work with the collection, such that you can filter and sort the local array without needing to go back to the server.

customers.sort("Country")

This triggers a change event on the customers collection, which would have been scoped to a view; for instance, you might have a React component that subscribes to the change and sets its state when the collection changes.

Motivation

I needed to develop an application that could run on existing customer databases (Ensemble -> Caché, Mirth -> PostgreSQL, as well as MongoDB), such that the database can be swapped in and out without changing a line of client code.

I looked at adapting one of the existing ORM libraries such as Sequelize or Sails, but it was easier to start from scratch and leverage Caché directly without needing lots of duct tape to get it working.

This new solution required a JSON-RPC interface and more JSON functionality from Caché, hence re-engineering some old JSON libs and building out the Cogs library.

Moving forward, the plan is to release CoffeeTable as a separate NPM library, and Cogs will essentially be a server-side adapter to it.

Probably the wrong forum to talk about GT.M, but I have a long-standing internal library that was designed for this eventual abstraction, and it will be one of the databases added to CoffeeTable down the line.

Sean Connelly · Jun 1, 2017

It's a very simple JSON-RPC wire protocol. The JSON is stripped of formatting and then delimited with ASCII 13+10, which are already escaped within the JSON itself. Nothing more complicated than that.
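So one frame on the wire is just the minified message followed by the delimiter; an illustrative (made-up) request would look like:

{"jsonrpc":"2.0","method":"find","params":{"class":"Customers"},"id":1}<CR><LF>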

> How do you deal with license usage? How much does it escalates with a fair amount of users and how do you manage all of that?

I can only refer to benchmarks at the moment, which is why the Node connector is still marked as experimental.

The setup was a single three-year-old commodity desktop machine running a stress tool, Node, Caché and about 10 other open applications.

The stress tool would simulate 50 users sending JSON-RPC requests over HTTP to a Node queue, a single Caché process would collect these requests over TCP, unpack the JSON, perform a couple of database operations, create a response object, serialise it and pass it all the way back.

With a single Caché process consuming a single licence, I recorded an average of 1,260 requests per second.

Sean Connelly · Jun 1, 2017

100,000 per second is a synthetic benchmark; a for loop in a terminal window will only just do 100,000 global sets a second, and that is without any data validation, data loops, referential integrity, etc.
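For example, this is the sort of tight loop I mean - timings will obviously vary by machine, and ^SpeedTest is just a scratch global:

FOO>kill ^SpeedTest  set t=$zhorolog  for i=1:1:100000 { set ^SpeedTest(i)=i }  write $zhorolog-t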

You also don't mention whether this is done via the API or over the network; I would only be interested in the over-the-network benchmarks.

What I would be really interested in are real-world benchmarks that track the number of HTTP requests handled per second - so not some tight benchmark loop, but real end-to-end HTTP requests from the browser, federated through Node, to cache.node and Caché and back again.

Plus, I am not really interested in global access from Node; I want to work with objects everywhere and gain the performance of letting optimised queries run on Caché without shuffling data back and forth unnecessarily.

I know cache.node does handle objects, but it just doesn't fit my needs; I'm not a fan of the API and it is missing some functionality that I need.

Fundamentally, there is a mismatch between the CoffeeTable framework that I have developed and the cache.node API.

Basically, it just didn't seem like a good idea to end up using cache.node as nothing more than a message forwarder with potential overhead that I can't see. What I ended up with is a lean 142 lines of Node code that is practically idling in the benchmarks I have done so far.

I also have concerns over the delays I have read about with cache.node releases keeping up with the latest Node.js versions.

The other thing is its open source home - I looked and couldn't find it. It would have been nice to inspect the code, see how it works and fill in the gaps where the documentation does not go deep enough.

Ultimately, why not have alternatives - different solutions for different needs?

Sean Connelly · Jun 2, 2017

It might sound like a bit of a far-fetched idea...

After recent (minor) database corruptions caused by VM host activities, I did wonder if a future Caché version could be made to self-heal by using its mirror member.

Sean Connelly · Jun 2, 2017

Great answer Rubens.

The class documentation makes no mention of the second parameter and I was not aware that it existed.

Fortunately I've only had to deal with documents under the large string size to date, and I did wonder how I might need to work around that limitation at some point.

Question: the length the XML writer uses is set to 12000. Would this solution work for 12001, or does the size have to be divisible by 3? I'm wondering because every 3 bytes are represented by 4 characters in base64.
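For context on why I'm asking: base64 turns every 3 input bytes into 4 output characters and pads any chunk whose length isn't a multiple of 3, so a non-aligned chunk size would leave padding in the middle of the stream. A quick terminal illustration:

FOO>write $system.Encryption.Base64Encode("abc")
YWJj
FOO>write $system.Encryption.Base64Encode("ab")
YWI=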

Sean.

Sean Connelly · Jun 2, 2017

What about...

&sql(select RuntimeType into :qRuntimeType from %Dictionary.CompiledProperty where ID1='Foo.MyClass||MyProperty')
Sean Connelly · Jun 7, 2017

Agreed, especially as there are a hundred-plus classes pending release.

I have an existing sync tool that will automatically export into folders, but I am in the middle of hacking it to work with UDL and a couple of other new features. Until I know it's production ready I will do these next few releases of Cogs manually into the same folder (in the next week or so).

Sean Connelly · Jun 8, 2017

Thanks Rubens, the port looks really good.

I agree with the extra JSON use case for legacy code.

It will need some thought to get similar functionality and performance. Perhaps a just-in-time code generator that gets cached...

Sean Connelly · Jun 27, 2017

FOO>set msg=##class(EnsLib.HL7.Message).%OpenId(15)
 
FOO>w msg.RawContent
PID|2|2161348462|20809880170|1614614|20809880170^TESTPAT||19760924|M|||^^^^00000
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORYCULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROASRC:PENI|19980727000000||||||20809
OBX|1|ST|008342^UPPER RESPIRATORY||POSITIVE~~~~~~~|

FOO>w !,msg.SetValueAt("Positive","PIDgrpgrp(1).ORCgrp(1).OBXgrp(1).OBX:5")
 
0 <Ens>ErrGeneral: Object is immutable
 
FOO>set msg2=msg.%ConstructClone()
 
FOO>w !,msg2.SetValueAt(msg.GetValueAt("PIDgrpgrp(1).ORCgrp(1).OBXgrp(1).OBX:5.1"),"PIDgrpgrp(1).ORCgrp(1).OBXgrp(1).OBX:5")
 

1
 
FOO>w msg2.RawContent                                                           

PID|2|2161348462|20809880170|1614614|20809880170^TESTPAT||19760924|M|||^^^^00000
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORYCULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROASRC:PENI|19980727000000||||||20809
OBX|1|ST|008342^UPPER RESPIRATORY||POSITIVE|

Sean Connelly · Jul 14, 2017

Can you provide your system's $ZV version?

>Is there a way to transform this $lb (without the need of opening the object itself) to a JSON object with the proper table fields as properties?

Is there a good reason for not wanting to open the object?

If not, there are several ways to spin the object into JSON.

Sean Connelly · Jul 17, 2017

Jeffrey has the right answer.

Murillo, here are the comments for the %SYS.Task.PurgeErrorsAndLogs task that you are currently using; the ^ERRORS global is for Caché-wide errors, not Ensemble errors...

/// This Task will purge errors (in the ^ERRORS global) that are older than the configured value.<br>
/// It also renames the cconsole.log file if it is larger than the configured maximum size.<br>
/// On a MultiValue system it also renames the mv.log file if it grows too large.<br>
/// This Task is normally run nightly.<br>

Sean Connelly · Jul 21, 2017

Hi Kishan,

Can you provide the source code at zFile+15^User.zKQRest.1?

If you are not sure how to get this, open User.zKQRest.1 and press Ctrl+Shift+V; this will open the compiled code. Now press Ctrl+G and paste in zFile+15; the cursor will then be on that line.

Could you also provide the source code for the property...

Sean

Sean Connelly · Jul 21, 2017

There are a few approaches.

The schedule setting on a service can be hijacked to trigger some kind of start job message to an operation. It's not a real scheduler and IMHO a bit of a fudge.

A slightly non-Ensemble solution is to use the Caché Task Manager to trigger an Ensemble service at specific times. The service would be adapterless and would only need to send a simple start message (Ens.StringContainer) to its job target. A custom task class (extending %SYS.Task.Definition) would use the CreateBusinessService() method on Ens.Director to create an instance of this service and call its ProcessInput() method.
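A minimal sketch of that task class - class and service names are placeholders and error handling is trimmed, so adjust to taste:

/// Sketch only: a Task Manager task that pokes an adapterless Ensemble service
Class Demo.Task.TriggerService Extends %SYS.Task.Definition
{

/// Config name of the adapterless business service to invoke
Property ServiceName As %String [ InitialExpression = "Demo.StartService" ];

Method OnTask() As %Status
{
    Set sc = ##class(Ens.Director).CreateBusinessService(..ServiceName, .service)
    If $$$ISERR(sc) Quit sc
    // the service's OnProcessInput would send the simple start message on to its job target
    Quit service.ProcessInput(##class(Ens.StringContainer).%New())
}

}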

The only downside to this is that the schedule configuration now lives outside of the production settings. If you can live with that then this would be an OK approach.

Alternatively, you could write your own custom schedule adapter that uses custom settings for target names and start times. The adapter's OnTask() would get called every n seconds via its call interval setting and would check whether it is time to trigger a ProcessInput message for one of the targets. The service would then send a simple start message to that target.

I prefer this last approach because it's more transparent to an Ensemble developer who is new to the production; also, the settings stay with the production and are automatically mirrored to failover members.

Sean Connelly · Jul 25, 2017
Set arr=##class(%ArrayOfDataTypes).%New()
; place items into the array
Do arr.SetAt("red","color")
Do arr.SetAt("large","size")
Do arr.SetAt("expensive","price")
; iterate over contents of array
Set key=""
For  Set value=arr.GetNext(.key) Quit:key=""  Write key,":",value,!
Sean Connelly · Aug 9, 2017

The short (incorrect) answer that you are looking for is

set ..Count=..Count+1
$$$TRACE(..Count)

BUT, if you do this, you will notice that whilst Count does indeed increase with each message, it will not persist back to your settings.

Settings properties on an operation are just for convenience; they provide read access to the values held in the production XData / the Ens_Config.Item table.

If you want to dynamically update the setting values then you will need to look at the classes Ens.Config.Production and Ens.Config.Item.

But, as stated, it's best not to hijack static settings for a dynamic purpose. Updating the settings with every message will cause a production update event each time, and possibly other side effects.

If you want to track dynamic values at the operation level then have a look at the pre-built services and operations that implement $$$EnsStaticAppData and $$$EnsRuntimeAppData; these macro helpers save things like counts back to a persistent global.

It seems like a lot of hard work when you could just query the count and add a base value to it.
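For example, something along these lines gives you the number of messages an operation has received without touching the production config (the config item name is a placeholder):

&sql(select count(*) into :msgCount from Ens.MessageHeader where TargetConfigName = 'My.Operation')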

Sean Connelly · Aug 21, 2017

>I suspect many users may view this as "WRC-lite" and have a sense of entitlement based on their ongoing financial commitment to ISC.

I came to the same conclusion.

If I were new to DC I could easily think that everyone providing answers is an InterSystems employee and not realise that volunteers are giving up their time and goodwill to help others. Regardless, it's still nice to say thank you.

Perhaps it should be made more obvious who the volunteers and moderators are?

Sean Connelly · Aug 21, 2017

Out of context of the original code, I agree.

The actual implementation is there to stop an infinite loop on objects that reference each other as part of a JSON serialiser; see line 9...

https://github.com/SeanConnelly/Cogs/blob/master/src/Cogs/Lib/Json/Cogs…

I'm not sure the construction of seen($THIS) is so much incorrect as problematic. Both seen($THIS) and seen(""_$THIS) produce an array item with a stringy representation, and both work perfectly fine with the exception of the unwanted side effect.

My assumption was that $THIS used inside the $get was to be avoided for persistent classes, whilst I could continue to use the OREF for non-persistent classes, hence finding that the persistent object's OID was a perfectly good workaround.

However, as it turns out, seen(""_$THIS) prevents the unwanted behaviour and makes the code much simpler to read, so thanks to Timothy for testing a different idea out.
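For anyone skimming, the guard itself is only a couple of lines; illustrative rather than the exact Cogs code:

// skip objects we have already visited, to avoid infinite recursion on circular references
If $Data(seen("" _ oref)) Quit
Set seen("" _ oref) = 1
// ... then serialise the object's properties, recursing into any object-valued ones ...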

Out of interest, I have since discovered that the pattern of seen(+$THIS) is used extensively in Caché / Ensemble library code, where the + operator coerces the OREF to its ordinal integer value. I was tempted to use this approach, but one thing I am not sure of is whether those ordinal values are unique across a mixed collection of objects...