Stefan Wittmann · Apr 7, 2016 go to post

I see. Serializing registered and persistent objects to JSON is a new feature in 2016.2. $toJSON() allows you to serialize these objects using a standard projection algorithm that we provide. There are ways to specify your own projection logic in case the standard logic is insufficient for your needs.

$compose() is a new method that lets you project dynamic entities (%Object and %Array instances) to registered and persistent objects (and vice-versa).

The $compose() functionality uses the same standard projection logic. Later versions will allow you to specify and store projection rules for specific needs.
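As a minimal sketch of both methods, assuming a persistent class Sample.Person with a Name property (the class and property names are just examples):

```
// Assumes a persistent class Sample.Person with a Name property
set person = ##class(Sample.Person).%OpenId(1)

// Serialize the persistent object using the standard projection
write person.$toJSON()

// Or project it to a dynamic entity first via $compose()
set dyn = person.$compose()
write dyn.$getTypeOf("Name")
```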

Stefan Wittmann · Apr 7, 2016 go to post

We have no way to represent these special JSON values in Caché Object Script. When you access these special values they are automatically converted to a friendly Caché Object Script value. Here is some more code to describe this based on the snippets I used before:

USER>write object.$getTypeOf("boolean")
boolean
USER>write object.$getTypeOf("nullValue")
null
USER>write object.$getTypeOf("numeric")
number
USER>write object.boolean
0
USER>write object.anotherBoolean
1

You can see that I can retrieve the type of each property using the $getTypeOf() method. The boolean property returns 0, while the anotherBoolean property returns 1. Both are Caché Object Script-friendly values and can be embedded directly in if-statements.

We would have lost that capability if we had introduced special Caché Object Script values to reference special JSON values. In addition, you have to keep in mind that we plan to introduce more serialization formats in the future, so we may not only be talking about special JSON values here.

Does that make sense?

Stefan Wittmann · Apr 18, 2016 go to post

do a.$toJSON() does not work properly with I/O redirection. This is a known issue and will be fixed in a future release. The workaround is very simple: use write a.$toJSON(), or write something else to the stream first (as you did in your second example).

Personally, I prefer to be explicit in my REST methods and use the write command when I want to output something to the response stream. So this code snippet will work in your REST class:

ClassMethod Test() As %Status
{
    set a = {"test":"value"}
    write a.$toJSON()
    quit $$$OK
}

Stefan Wittmann · Apr 18, 2016 go to post

Well, I think a major question is: what do you use to return runtime information to your caller when you implement your own code? Do you return a %Status object (or something similar), or do you throw exceptions and not return anything?

Most code snippets I have seen here make use of try/catch, but still return a status code themselves.

Personally, I prefer to use try/catch blocks and throw errors when I encounter issues at runtime. The try/catch philosophy is optimized for the case where everything goes well and exceptions are the, well, exception. Handling a status object is not as clean from a code-maintenance perspective (more lines of code within your logic), but it allows you to handle multiple different scenarios at once (it was okay/not okay, and this happened...).

Obviously, this is my personal preference.
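A hedged sketch of the two styles (the class and method names are made up):

```
// Status style: the caller has to check the returned %Status
ClassMethod SaveStatus(obj As %Persistent) As %Status
{
    set sc = obj.%Save()
    quit sc
}

// Exception style: the happy path stays flat, errors unwind via throw
ClassMethod SaveThrow(obj As %Persistent)
{
    set sc = obj.%Save()
    if $$$ISERR(sc) {
        throw ##class(%Exception.StatusException).CreateFromStatus(sc)
    }
}
```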

Stefan Wittmann · Apr 19, 2016 go to post

Well, that depends on where you took the lock.

In your previous example you take a lock right before the try block, so you can release it directly after the try/catch block.

If you take a lock in your try block, you have to put the unlock code both in the catch block and at the end of the try block. I would not place the unlock code outside of the try/catch block. This is a case where a try/catch/finally construct would definitely help, as you could place the unlock code in the finally block. 
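Sketched in Caché Object Script (the global name is just a placeholder):

```
// Variant 1: lock taken before the try block, released once after it
lock +^MyApp("resource"):5
if '$test quit  ; could not acquire the lock in time
try {
    // ... critical section ...
} catch ex {
    // ... handle the error ...
}
lock -^MyApp("resource")

// Variant 2: lock taken inside the try block, so the unlock has to
// appear both at the end of the try block and in the catch block
try {
    lock +^MyApp("resource"):5
    // ... critical section ...
    lock -^MyApp("resource")
} catch ex {
    lock -^MyApp("resource")
    // ... handle the error ...
}
```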

Stefan Wittmann · Apr 19, 2016 go to post

Sure, that is a valid solution for a developer or test system. But we definitely need something better for production usage.

Stefan Wittmann · Apr 19, 2016 go to post

That is correct. If you expect to serve larger JSON content, you should make use of the stream interface, as Dmitry has pointed out. Here is a snippet that copies the content of a dynamic object to a local file on Windows:

ClassMethod WriteObjectToFile(pObject As %Object)
{
    // serialize the dynamic object into a temporary character stream
    set stream = ##class(%Stream.TmpCharacter).%New()
    do pObject.$toJSON(stream)

    // link a file stream to the target file and copy the JSON over
    set filestream = ##class(%Stream.FileCharacter).%New()
    set sc = filestream.LinkToFile("c:\Temp\jsonfile.txt")
    quit:$$$ISERR(sc)

    do filestream.CopyFrom(stream)
    do filestream.%Save()
}
Stefan Wittmann · Apr 21, 2016 go to post

The $toJSONFormat() method provided a way to output a formatted (pretty-printed) JSON string/stream. There is an effort involved in making sure such a method works properly on all supported platforms (which it didn't), and in addition, there are various options that users would ask for (like omitting null values, indent or no indent, indent with spaces, etc.). We had a similar experience with the previous JSON API.

We decided to put our initial efforts into the machinery and not into pretty-printing. For that reason, we do not produce any output that is irrelevant for machine processing, which is the major consumer of this output, as JSON is a data-interchange format.

It is very simple to post-process a JSON file or string and pretty-print or minify it. There are online tools like

http://www.freeformatter.com/json-formatter.html

and there is functionality/plugins available for popular text editors like Sublime 3:

https://github.com/dzhibas/SublimePrettyJson

Also, there are many node.js packages available that pretty-print or minify JSON files, and these can be wired into a grunt/gulp task if you need to automate this step for some reason.

Personally, I just copy/paste my JSON content into Sublime 3 and pretty-print it.

Stefan Wittmann · Apr 27, 2016 go to post

You are on the right path. Zen Mojo 1.1.0 prevented the drop-down menu from closing because we did not bubble up the event. We addressed this issue in Zen Mojo 1.1.1.

Make sure to upgrade to the latest version of Zen Mojo. You have to return false in your onselect event handler to allow the event to bubble up, which will automatically close the drop-down menu after the user has selected an item.

Stefan Wittmann · Apr 27, 2016 go to post

These are the first two options I always enable in all my Studio environments. If you haven't made use of them yet, give them a try.

Stefan Wittmann · Apr 27, 2016 go to post

We are working on a solution for this. We plan to automatically clear the cache of the base URL path and associated subpaths in the CSP Gateway for REST-enabled CSP applications after the application is enabled.

Stefan Wittmann · May 2, 2016 go to post

Looking at your includes, you have to make sure that you include jQuery first and then jQM. The error signals that jQuery is loaded fine, but jQuery Mobile is not.

Stefan Wittmann · May 3, 2016 go to post

Thanks, I did not notice the changes. Probably because my eyes are trained to spot code that should not be there in the first place.

Stefan Wittmann · May 13, 2016 go to post

Let me address your questions. The node.js module files are currently released and shipped with a Caché kit. Our mid-term goal is to make external binding files available via the native package managers of the corresponding environments: npm for node.js, Maven for Java, and NuGet for .NET. That being said, we are not there yet. I will see what we can do in the short term.

Let's talk about support for specific versions.

Support for Node.js 4.2.x was introduced with Caché 2016.2. If you grab a Windows field test kit, you can find the binding file here: <install-dir>\bin\cache421.node.

Support for Node.js 5.x.x is already implemented and is currently being triaged for release with Caché 2016.3. But v5 is not a very important release for production use.

If you take a look at the long term support plan from the node team (https://github.com/nodejs/LTS/), there is a nice picture at the bottom describing that 4.x will be on long-term support (LTS) until April 2017. v6 becomes LTS from October 2016 until April 2018.

The v6 release blog post (https://nodejs.org/en/blog/release/v6.0.0/) recommends staying on 4.x if you require stability, moving to 6.x if you can upgrade, and avoiding 5.x.

We are currently working on implementing support for 6.x.x.

Stefan Wittmann · May 23, 2016 go to post

I would write it like this:

    set array = []

    while (result.Next()) {
        set object = {
            "data": {
                "id": result.Data("ID"),
                "reg": result.Data("Registration"),
                "snNum": result.Data("SatNavVehNumber")
            }
        }
        do array.$push(object)
    }

You can directly embed your values as Caché Object Script expressions, and that makes your code look pretty close to the desired outcome. This approach makes it very simple to build complex JSON structures and still know what you are doing.

Stefan Wittmann · May 27, 2016 go to post

Indeed. Starting with Caché 2016.2, you can call $toJSON() on registered and persistent objects: we convert the object with a standard projection logic into a dynamic entity (%Object or %Array) and then serialize that entity as JSON.

If you want to modify your object before you output it as JSON, you can first call $compose() on your registered object to convert it to a dynamic object, modify it to your needs, and then call $toJSON() on the modified dynamic object.
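A minimal sketch of that workflow (Sample.Person and its properties are assumptions for illustration):

```
// Assumes a persistent class Sample.Person
set person = ##class(Sample.Person).%OpenId(1)

// Convert to a dynamic object using the standard projection logic
set dyn = person.$compose()

// Modify the projection without touching the stored object
do dyn.$remove("SSN")
set dyn.displayName = dyn.Name

// Serialize the modified dynamic object
write dyn.$toJSON()
```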

Later versions will introduce more sophisticated means to influence the standard behavior of $compose. 

The addition of the JSON_OBJECT and JSON_ARRAY SQL functions allows you to easily create JSON from a SQL query as well, as Kenneth pointed out. 
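For example, sketched with %SQL.Statement (Sample.Person and its columns are assumptions, and the exact JSON_OBJECT syntax may vary by version):

```
// Assumes the Sample.Person class; JSON_OBJECT pairs up column
// values with the given keys and returns a JSON string per row
set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("SELECT JSON_OBJECT('id':ID,'name':Name) AS json FROM Sample.Person")
if $$$ISERR(sc) quit
set rs = stmt.%Execute()
while rs.%Next() {
    write rs.%Get("json"),!
}
```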

Stefan Wittmann · May 27, 2016 go to post

The <html> component does not handle localization for you out of the box, as you can inject arbitrary HTML into it.

Most of the other components localize their captions and titles automatically if you set the DOMAIN parameter in your Zen page. A button component, for example, automatically creates a dictionary entry for its caption property:

<button caption="Save"/>
Stefan Wittmann · Jun 1, 2016 go to post

Q2: It is against core REST architecture principles, so I hope not.

That is not entirely true. Filters and action descriptions are commonly exposed as URL parameters as they serve to either specify an action or limit the working set. But you are still operating on the same resource identity.

What you can observe is that REST is just a best practice leveraging HTTP. You will find every possible implementation of REST interfaces out there; some are true to the original spirit, some are not.

Anyway, you can fight endless wars about REST interfaces, but one thing is for sure: URL parameters are commonly used in REST interfaces.

To answer the original question: I don't see a need to add URL parameters to the URL map. They would only add value if we added them as method arguments, but I think that would be very confusing as path variables are passed as arguments at the moment.

You can use the %request.Data property to retrieve the URL parameters.
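For example (the parameter name is made up; %request.Data is subscripted by parameter name and position):

```
// For a request like GET /api/cars?limit=10
set limit = $Get(%request.Data("limit",1))
```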

Stefan Wittmann · Jun 1, 2016 go to post

Yes, it is, but I am not sure whether it is applicable to Dan's use case. The biggest technical difference is that GETs are cached by clients, while POSTs must not be cached.

Stefan Wittmann · Jun 6, 2016 go to post

Ben,

Caché Objects don't come with exactly the same benefits, and I am happy to briefly discuss the differences and similarities. Whenever I talk about Caché Objects here, I mean Caché Persistent Classes.

  • Flexibility

This one is simple. Caché Objects have a fixed schema. It can be changed, for sure, but you potentially have to migrate your data if you still want to access it consistently. The impact depends on the type of schema change, of course. If you just add a property, you are fine. Even some type changes may not require a data migration.

  • Sparseness

Caché Objects are persisted by making use of a storage strategy. By default, each property gets a slot in a $List structure. $List is optimized for sequentially accessing elements, not for random access, which is fine for a fixed schema world. You usually want to load all top-level values most of the time anyway. Therefore, the $List serialization is optimized for dense data.

Assume an object has 100 properties and only properties 1, 10, 25, 50, 75 and 100 are filled. That is sparse data. With the $List serialization, we have to jump through the empty buckets to read the six values we are actually interested in. That is a waste of time. Also, we are storing 94 empty buckets on disk. That is a waste of space; not much, but it can add up if your data is very sparse.

Document stores embrace serialization formats that are optimized for storing sparse data in a compact form and for random access.
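You can see the padding effect directly with the $List functions:

```
// Build a list where only positions 1, 10, 25, 50, 75 and 100 are set
set list = ""
for i=1,10,25,50,75,100 {
    set $List(list,i) = "value"_i
}
// The empty slots still count towards the length of the serialization
write $ListLength(list)  ; 100
write $ListGet(list,25)  ; value25
```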

  • Hierarchical

Caché Objects can either link to instances of other classes (persistent class includes a property where the type points to another persistent class) or they can embed instances of another class (persistent class includes a property where the type points to another serial class).

A document can embed another structure, which is similar to our serial class implementation because the data is actually physically stored together. One physical read of a document can retrieve all the information you are interested in, if it is designed correctly.

You cannot compare embedding with a link to another table/class as the data is stored separately and usually requires access to separate blocks.

  • Dynamic Types

Properties of a Caché Object have a type. I can't take a timestamp and store it in a property with the type Sample.Person. The Object and SQL layer will validate values and ensure type safety for me.

Document keys are not associated with a type at all; only a value is. I can take two documents that have different types for the same key and store them in the same collection. Here is an example of two such documents:

set person1 = {"name":"Stefan Wittmann"}

set person2 = {"name":{"first":"Stefan","last":"Wittmann","middle":null}}

I can't simply model this with classes. person1 would require a %String property while person2 requires a link to a serial class.

I hope this sheds some light on the individual benefits. Obviously, this comes at a price: your application code has to do more validation, as the backend allows you to work without a schema. There is always a cost involved.

Stefan Wittmann · Jun 9, 2016 go to post

Your points are well taken. I would like to add some thoughts:

A parent/child relationship is an interesting concept, but it does not do well with larger volumes of data, as you can't make use of bitmap indices in the child class. Embedded documents and serial classes, on the other hand, fully support bitmap indices, which are important if you are operating on a larger set.

Data type handling can be designed in a very flexible way. Your suggestion of using generic %String properties is one option for dealing with flexible data types in Caché Persistent Objects. But you get no support from the backend for any complex values you store in such a property. You have to write code to serialize/deserialize your complex values and, even more important, you can't index sub-values, as they are not properties. This may be suited for some use cases, but not for others.

To answer your question about documenting schemas: Many developers just document sample JSON documents and explain what their purpose is. We offer no additional tooling for this yet, but we are working on tools that allow you to understand what a collection looks like. This is an area that will improve over time.

Stefan Wittmann · Jun 15, 2016 go to post

You can index any property within a document and by default, we will construct a bitmap index, but all index types supported by Caché Objects are supported by the document data model as well. So yes, we do support indexing a nested path within a document.

I am always happy for a constructive discussion and to learn about different viewpoints. In the end, we all have a better understanding and can build better products and applications. 

Stefan Wittmann · Jun 22, 2016 go to post

These two SQL functions are new in Caché 2016.2. Just grab the field test to take a look at them.

I will update the article to include the version, as I have obviously missed this.

Stefan Wittmann · Jun 22, 2016 go to post

Yes. The SQL page in the System Management Portal just sends the queries to the Caché server for execution. So every query that is supported by the server will run there.

Stefan Wittmann · Jul 4, 2016 go to post

Kevin, the parser in Atelier should indicate no error here. Updating the Studio parser has a low priority as Atelier is the path forward for the IDE. If you see an issue with Atelier, please let us know. Many thanks.

Stefan Wittmann · Aug 3, 2016 go to post

I am aware of the situation, and we are working on publishing the node.js versions via npm, independently of Caché kits. While that may take a while, you can request the latest versions of the node.js module from the WRC.