Herman Slagman · Dec 17, 2015

Bill, Luca,

Have you ever considered Caché as a container as opposed to in a container ?

Secure (sandboxed) small applications (let's call them MicroServices ;-)) easily deployable, replicable and synchronizable across multiple Caché instances.

I still find Docker containers huge overkill (in terms of the needed stack) if all you want to containerize is Caché applications.

Herman Slagman · Dec 19, 2015

Glad to have these kinds of discussions.

I didn't mean 'huge overkill' in the sense of the number of bytes, but rather the stack that is needed to run a simple Caché service (Docker, Linux and a complete Caché instance). And yes, it's a huge improvement compared to VMs.

Now (at least in this phase of containerization) you need to know quite a bit of Linux, and since I'm not a DevOps guy, I don't want to know Linux (or any other OS, for that matter) beyond being able to use it.

 

I've been working on a skunkworks project (Bento) that tries to provide containerization using just Caché.

My thinking is about two kinds of containers: application (code) containers and data containers, both exposing a REST interface.

Applications don't have state, so they can be instantiated, upgraded and replicated quite easily.

The problem is with the data containers: they can be instantiated from an initial state, of course, but the amount of catching up to do in terms of data synchronization could be substantial.

It depends on the granularity of the data, but it makes no sense to instantiate a data container that needs to catch up on terabytes of data.

Bento is far from being published, not even as a demonstration, but I might put together a little presentation that explains its principles.

I don't think we need/want/can rebuild all the stuff that Linux provides, but it should be possible to sandbox a namespace a lot more than is possible right now.

I think both solutions (Caché in a container and Caché as a container) can coexist, each having its own advantages.

Herman Slagman · Jan 26, 2016

Does this mean there won't be a 2016.1?

If not, what is the expected release window for 2016.2? The name suggests Q3 2016?

Herman Slagman · Mar 3, 2016

The same security precautions would apply to generator methods, which can also execute arbitrary code during compilation.

Herman Slagman · Mar 4, 2016

If you're controlling the web service 'on the other side', the best way to handle this is to make it asynchronous, just returning a kind of Ack that the message has been received.

Depending on whether you need information back, the service can also return a status.

Also, since the service seems to provide rather unrelated functionality, it might be a good idea to split it up into separate services.
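
A minimal sketch of that fire-and-acknowledge pattern (the class, route and global names here are my own invention, not from the actual service):

```objectscript
/// Hypothetical asynchronous endpoint (sketch): accept the message,
/// queue it for later processing, and return only an acknowledgement.
Class Demo.AsyncService Extends %CSP.REST
{

XData UrlMap
{
<Routes>
  <Route Url="/message" Method="POST" Call="Receive"/>
</Routes>
}

ClassMethod Receive() As %Status
{
    // Persist the payload so a background process can pick it up later
    Set id = $Increment(^DemoQueue)
    Set ^DemoQueue(id) = %request.Content.Read()
    // Acknowledge receipt only; the actual work happens asynchronously
    Set %response.ContentType = "application/json"
    Set %response.Status = "202 Accepted"
    Write "{""ack"":"_id_"}"
    Quit $$$OK
}

}
```

The caller gets its Ack (with an id it could use to poll for a status later) without waiting for the real processing to finish.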

Herman Slagman · Mar 4, 2016

XSLT in combination with %XML.XSLT.CallbackHandler is an excellent way to extract information from a (large) XML file.

We use it to perform XML shredding and flattening on very large XML files.
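
A minimal sketch of that setup (class and global names are assumptions on my part): the stylesheet calls `isc:evaluate(...)` for each value to shred, and the callback handler receives it server-side.

```objectscript
/// Hypothetical callback handler: evaluate() is invoked for every
/// isc:evaluate(...) call in the stylesheet; here we just store
/// the first argument in a scratch global.
Class Demo.ShredHandler Extends %XML.XSLT.CallbackHandler
{

Method evaluate(Args...) As %String
{
    // Args(1)..Args(Args) are the arguments passed from the XSLT
    Set ^ShredDemo($Increment(^ShredDemo)) = $Get(Args(1))
    Quit ""
}

}
```

An instance of this handler is then passed as the callback-handler argument of the `%XML.XSLT.Transformer` Transform* methods; check the class reference for the exact parameter positions.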

Herman Slagman · Mar 4, 2016

That's the point where you make the decision based on the values of elements or attributes within those elements; XSLT's XPath expressions make it very easy to address and 'extract' those.
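
For example, a hypothetical template that only fires for elements with a particular attribute value (element and attribute names are made up):

```xml
<!-- Hypothetical fragment: only Order elements whose status
     attribute is "shipped" are matched and shredded -->
<xsl:template match="Order[@status='shipped']">
  <xsl:value-of select="Customer/Name"/>
</xsl:template>
```

The predicate in the match pattern does the filtering, so the decision logic lives entirely in the XPath expression.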

Herman Slagman · Mar 5, 2016

Considering myself among 'the older and more established users of InterSystems technologies' ;-): we've always used the setup where each developer has his/her own development machine. What the 'sandbox' namespace is called is not relevant. We use Git to tie the stuff together (more and more automated). It is allowed, or even encouraged, to use different versions of Caché or (in our case) Ensemble. That sometimes results in code that is not compatible with the current production version, but we detect that in our QA phase, and it might trigger a discussion on whether to upgrade production if the incompatible feature used is of great value.

Having a separate development sandbox leaves room for experimenting, which you wouldn't have if every developer worked in the same namespace, where in my opinion 'not stepping on each other's toes' would be much too constraining.

It will probably be the same when Atelier takes off: some of us will use it, while others stick to Studio or even brew their own in VS Code.

Herman Slagman · Mar 5, 2016

That's where I find it a pity ISC chose Eclipse as a base for Atelier (love the name, btw). Besides the fact that I'm not a fan of Eclipse, which I find bloated and non-intuitive, the fact that plugins/extensions need to be written in Java limits the possibilities (at least for me). A JavaScript-based IDE (such as Atom or Brackets) would make it much more accessible. That said, if ISC were able to devise a plugin structure that is COS/server-based (at least partially), that might open up a lot of opportunities for seasoned COS developers.

Herman Slagman · Mar 5, 2016

But that's exactly what I want: a smart code editor. Lean and mean. The workflow bit I want handled by dedicated workflow (build, CI, CD) tools. I'm afraid that Atelier becomes Eclipse (sic): bloatware that tries to be a Swiss Army knife, but isn't successful at any of its features.

Herman Slagman · Mar 8, 2016

I assume your interfaces are all part of a bigger application. AFAIK there is no way to span an Ensemble Production across namespaces, and from an EAI/ESB point of view, having separate (connected) productions running on a single Ensemble instance doesn't make much sense.

Your idea to have individual builds and deployments makes sense, though, but that would be more stuff that a dedicated build/deployment tool should be able to do.

Herman Slagman · Mar 9, 2016

As I understood it, the separate Atelier download was only the 'initial' version; you need to use the update features of Atelier (Eclipse) to get the latest client version.

But I agree that upgrading the Caché/Ensemble 2016.2 FT should include the latest Atelier version.

Herman Slagman · Mar 10, 2016

It would be handy if we could see, and check against, some version or build number.

If I look at the installed plugins, they all seem to have some sort of timestamp: 20151118, which doesn't look very up to date to me. Using the link suggested by Dmitry doesn't work for me: No Updates Found.

Herman Slagman · Mar 10, 2016

John, I just did that, and indeed that did the trick.

I had the initial version, which probably didn't have the version info and update facility.

Herman Slagman · Mar 11, 2016

Even simpler: it could be an option in the mapping settings. Then there would be no need for this pseudo-namespace.

Herman Slagman · Mar 11, 2016

Did you know that when you define a class query, a ClassMethod that executes the query is secretly generated behind the scenes?

For instance, if you have a query:

Query ByName(Name As %String) As %SQLQuery(CONTAINID = 1)
{
SELECT %ID,Address,City,Name FROM ClassQuery
 WHERE (Name %STARTSWITH :Name)
}

There will be a ClassMethod ByNameFunc(Name) that returns a %SQL.StatementResult object.
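
Calling the generated method could look like this (assuming the query lives in a class called ClassQuery, as in the example above):

```objectscript
// The generated ByNameFunc returns a %SQL.StatementResult
Set result = ##class(ClassQuery).ByNameFunc("Sm")
While result.%Next() {
    // %Get() addresses columns from the SELECT list by name
    Write result.%Get("Name"), " - ", result.%Get("City"), !
}
```

So you get the convenience of the SQL statement interface without writing the %SQL.Statement boilerplate yourself.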

Herman Slagman · Mar 14, 2016

We use an XSLT utility to merge the two Productions, where the existing 'Production' Production takes precedence.

Herman Slagman · Mar 15, 2016

I totally agree with you Bill.

ISC is doing a great job of being backwards compatible. Even deprecated features are supported for a long time.

There's plenty of time to phase those out.

Herman Slagman · Mar 18, 2016

I'm sorry, I missed the Ensemble part.
I was referring to the %CSP.REST class.
I don't know which version of Ensemble you use, but at least in 2015.1 it seems that EnsLib.REST.Service doesn't copy the %request.Data values (the query parameter values) into the attributes of the pInput stream.


But they should at least be available to you in the %request.Data array (accessible via %request.Get(var)). Why that isn't the case in your situation I can't see; it has nothing to do with your method, but with something 'upstream'.
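
For example, in a plain %CSP.REST context, for a request like `...?name=Smith&city=Delft` (hypothetical parameter names):

```objectscript
// First (and usually only) value of a query parameter
Set name = %request.Get("name")
// The underlying array: %request.Data(parameter, index);
// index > 1 occurs when the same parameter is passed repeatedly
Set city = $Get(%request.Data("city", 1))
```

Both forms read the same data; %request.Get() is just the convenience wrapper.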

Herman Slagman · Mar 19, 2016

I'm afraid this will have to be a WRC call: how to get at query parameters in EnsLib.REST.Service.

I don't know why the %request doesn't hold the query parameters; maybe it's not even the same request that actually entered the application.

Herman Slagman · May 9, 2016

Interesting article, but where have the pictures gone?

I remember seeing a version that had them; was that another resource?

Herman Slagman · May 9, 2016

Ah, I found it!
It's the CSC proxy that denies access to the pictures.

I must have read it before from home.

Herman Slagman · Aug 26, 2016

I strongly opposed the naming conventions used in the $-system methods, so I should be happy.

But facing such a backwards incompatibility in 2016.2 makes me very sad. We've planned several major new features in our application, all using REST and JSON, to be released on 2016.1 very soon. Now we have to review a lot of code and decide what to do with it in order to stay compatible with future releases. I don't like the suggested macro solution, but it might be the only viable way. We might even switch back to our own JSON implementation, which we have used for years.
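
A sketch of what such an insulation layer could look like (the macro names are my own; the idea is that only one include file has to change per release):

```objectscript
// Hypothetical wrapper macros, kept in a single .inc include file,
// around the 2016.2 dynamic-object API; if the API changes again,
// only these definitions need updating, not the calling code.
#define JSONNew           ##class(%DynamicObject).%New()
#define JSONSet(%o,%k,%v) Do %o.%Set(%k,%v)
#define JSONGet(%o,%k)    %o.%Get(%k)
#define JSONText(%o)      %o.%ToJSON()
```

It's exactly the kind of indirection one shouldn't have to maintain, which is why I don't like it, but it does contain the damage.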

The other important issue I objected to is the interface of the dynamic array: it is not compatible with the existing COS %Collection interface. Is there any chance that will be 'fixed' too?

Herman Slagman · Aug 26, 2016

I think ISC should have implemented the JSON support for existing classes the same way as they did for XML: through a mixin adaptor class and an accompanying property class. That way the projection to JSON is far more controllable.
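
For comparison, this is how the existing XML adaptor steers the projection today (the class itself is a made-up example); the suggestion is an analogous JSON adaptor:

```objectscript
/// Example of the existing XML pattern: mix in %XML.Adaptor and
/// control the projection per property via property parameters.
Class Demo.Person Extends (%RegisteredObject, %XML.Adaptor)
{

Property Name As %String;

/// Excluded from the XML projection altogether
Property SSN As %String(XMLPROJECTION = "NONE");

/// Projected as an attribute instead of an element
Property Id As %Integer(XMLPROJECTION = "ATTRIBUTE");

}
```

A `%JSON.Adaptor` with equivalent per-property parameters would give the same fine-grained control over the JSON projection.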

Herman Slagman · Aug 26, 2016

What would be example business cases that require more than one projection of a class?

If the adaptor provided both a JSON string and a JSON object, you could manipulate the object in order to get a different projection.

Our JSON implementation did just that: 99% of the time the default projection was used, but if we needed some filtering, certain properties could be removed (or added, for that matter) from the object before it was transformed to a JSON string.
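
With the 2016.2 dynamic objects, the same filtering could look like this (the property names are just an example):

```objectscript
// Build the default projection, then remove one property
// for a consumer that shouldn't see it
Set person = ##class(%DynamicObject).%New()
Do person.%Set("Name", "Slagman")
Do person.%Set("SSN", "123-45-6789")
Do person.%Remove("SSN")
Write person.%ToJSON()   // writes {"Name":"Slagman"}
```

So one default projection plus a little object surgery covers most of the "multiple projections" cases.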

Herman Slagman · Aug 26, 2016

It's a little confusing: you want to do integration tests with a unit test framework; that doesn't sound right.

I'm not a big fan of unit testing, but for integration tests you'll need mock services to emulate the 'outside' world. We use SOAPSonar, SoapUI and Caché services to do just that.