Amir Samary · Aug 1, 2016

Code clear and document.

Ask help when you need it.

Refactor, and keep it simple.

InterSystems gives you power,

through simplicity,

the ultimate sophistication.

Amir Samary · Aug 27, 2016

I just realized that CacheGitHubCI will try to load classes from GitHub as XML. As I am using Atelier with the Eclipse GitHub plugin, the source code on GitHub is not stored as XML but as the source we see in Atelier when editing the class/CSP/etc.

I can contribute to CacheGitHubCI, but before I do, I wonder if anyone has implemented a new version of CacheGitHubCI that loads source as exported by Atelier...

Amir Samary · Aug 31, 2016

Wow! Thank you! I was about to try doing that... :)

I will check it out right now! Thank you again!

Amir Samary · Sep 22, 2016

I have not. In fact, I don't want to, because I want to install the CSP Gateway under /aupoldb/cspgateway instead of /opt/cspgateway.

That is because this folder sits on an LVM logical volume, and the plan is to take a snapshot of the aupoldb file system (which is mounted on /aupoldb and includes Caché on /aupoldb/cache and the CSP Gateway on /aupoldb/cspgateway) before any patching, so we can quickly roll back the entire filesystem if anything goes wrong with the patching.

Apache's default installation is in /etc/httpd, which is on the root logical volume. I wasn't planning on taking a snapshot of the root logical volume when patching Caché or the CSP Gateway. At most, I would save httpd.conf beforehand as part of the backup procedure...

Amir Samary · Oct 25, 2016

I wish I could save an article to read later... It looks like the subscription feature is not working. I subscribed to some articles to read them later and they don't show up under "My Content > My Favorites".

Kind regards,

AS

Amir Samary · Mar 23, 2017

Here is my wish list (from the most important to the most desirable):

- More frequent releases. Why wait months for the next one? If there is a small, harmless feature that is good to go, why not publish it in a new release?

- Support for seeing the generated code (the .cls behind a CSP page, the .int behind a CLS or MAC, etc.) and the ability to jump to a relative location inside the INT: zNameOfMethod+10 (for instance).

- Support for JavaScript syntax coloring on CSP pages

- Support for JavaScript files with long lines (such as the minified versions of frameworks like jQuery or Bootstrap). Atelier flags these files with an error even though they are fine.

- Support for opening the Management Portal of a specific server connection

- Support for using the Studio wizards of a specific server connection

- Support for viewing a CSP page (like clicking the globe icon in Studio, which opens the CSP page)

- Faster performance

- Support for the Add Relationship wizard for classes

- Support for the other wizards, such as add index, add property, add method, etc. These wizards are very good for beginners who don't know CDL or COS. I always use them in my sales demos because people then realize that our technology is very easy to use.

- Better testing with common source control plugins like GitHub, especially when developers use a mix of operating systems (Windows, Linux and Mac). There are many problems with CRLF vs. LF and character encoding. When source is uploaded from a Windows machine to GitHub and downloaded on a Mac, it synchronizes with the server, which changes CRLF to LF and makes Atelier think I touched the source when I didn't. Git diff then adds all those files to my pending list, and I have to upload them to GitHub again just to make them disappear. I have only solved this by making all developers use the same combination of encoding (UTF-8) and end-of-line character (LF) on all platforms. My suggestion is that Atelier should force UTF-8 and LF on all platforms to eliminate this problem.

- Support for a plugin to run SQL queries against the database with SQL syntax coloring, code completion, etc., as WinSQL or SQuirreL do. I mean: Atelier could ship with some plugins that any developer would need. The same goes for UML editors that could let us see the application model more clearly.

Amir Samary · Mar 28, 2017

But I really miss some nice ForEach sugar... This:

ForEach MyVar(key) 
{ 
    Write !,key 
} 

is much easier to read than:

Set key=""
For 
{ 
    Set key=$Order(MyVar(key))
    Quit:key=""

    Write !,key 
} 

I mean... we write code like this all the time, right? Locals and globals are so important to us... Why not give them some sugar?

Of course, the ForEach command would only $Order the last subscript, just like $Order works today. So instead of:

Set key1=""
For 
{ 
    Set key1=$Order(MyVar(key1))
    Quit:key1=""
 
    Write !,key1

    Set key2=""
    For 
    {
         Set key2=$Order(MyVar(key1, key2))
         Quit:key2=""

         Write !,$C(9),key2 
     }
} 

We would have:

ForEach MyVar(key1)
{ 
     Write !,key1
     ForEach MyVar(key1, key2) 
     { 
          Write !,$C(9), key2 
     }
} 

Much clearer and nicer to read, don't you think? And if you really want to be fancy:

ForEach MyVar(key: value) 
{ 
    Write !,"The value for key ",key," is: ", $Get(value)
} 

Where value could be <UNDEFINED> if that local/global node ends up having no value defined (hence the $Get on value).
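For the record, the closest idiom available today is $Order's third argument, which fetches the node's value in the same call (MyVar is the same illustrative variable as above). Note that the target argument is only updated when the node actually holds data, hence the reset on each pass:

Set key=""
For
{
    Set value="<UNDEFINED>"
    Set key=$Order(MyVar(key),1,value)
    Quit:key=""

    Write !,"The value for key ",key," is: ",value
}

It covers the common case, but it is still not as readable as a real ForEach would be.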

I know it would be a lot of work but, on a side note, it would be awesome if we could open up our virtual machine and give people the tools to implement other languages for it. We could move our VBScript and TSQL implementations to this new, open framework and have people use them as templates to build their own languages or language improvements. I know building a compiler is not easy, and there are details you end up hard-coding. But it would be an interesting challenge and research project.

Amir Samary · May 16, 2017

I haven't read the article yet but, just to let you know, the first two images are missing.

Amir Samary · May 19, 2017

Sure! I can write about that. It will be good to get some peer review on the choice I have made.

Amir Samary · Jun 10, 2017

You are right Eduard. Column level security would be enough. It is even simpler!

Amir Samary · Jun 19, 2017

There is a GitHub Studio hook already built out there in the wild. I wouldn't write another one if I were you...

On the other hand, I wouldn't use this hook exactly because it generates XML exports of our files and I hate seeing my source code as XML on GitHub.

Instead, I would use Atelier with the EGit plugin connected to my local Caché server. If you don't like Atelier, you can still use Studio. You will spend most of your time working in Studio on your local machine. When you are ready to commit your work to GitHub, open Atelier, synchronize your source code (which exports each class/routine as plain text files with your plain source code instead of XML) and commit the changes to GitHub.

It's like using Atelier as you would use Tortoise, except that Tortoise won't connect to Caché and export/import all your source code for you like Atelier does... ;)

I like Atelier. I am used to it. Try it and maybe you will like it too. I can't wait to see the new release of it! Good luck!

Amir Samary · Jul 28, 2017

Hi Daniel!

I tend to look at a REST service as a SOA web service and, as such, it must have a "contract". Binding this contract to the internal implementation can be problematic. Normally, you would work with a JSON object that is more "natural" to your web application while handling the related CRUD operations on the server. That decouples the client from the server through the contract, letting you change your server implementation while keeping your client untouched.

So, beware that this excessive coupling can indeed increase productivity right now but may become a nightmare in the future...

Kind regards,

AS

Amir Samary · Aug 2, 2017

I understand the power of %SQL.Statement, but as most of my queries are simple, I keep using %ResultSet, since error handling with %Status is more consistent.

It is bad enough that we have to deal with a mix of %Status and Exception handling. I don't like to have to check for %SQLCODE being negative after %Execute() and, if it is, having to transform it to a %Status to keep error handling consistent.

%ResultSet's Execute() method returns a %Status, while the %SQL.Statement interface makes me deal with yet another type of error (%SQLCODE), making error-handling code even uglier...

I like consistency, so I continue using %ResultSet. But when I need more functionality or more speed, I use %SQL.Statement instead.
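To illustrate, here is a sketch of the extra translation step %SQL.Statement forces on me (Sample.Person is just an illustrative table):

Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare("SELECT Name FROM Sample.Person")
Quit:$System.Status.IsError(tSC) tSC

Set tResult = tStatement.%Execute()
If tResult.%SQLCODE < 0
{
    // Manually convert SQLCODE/%Message into a %Status to keep error handling uniform
    Quit ##class(%Exception.SQL).CreateFromSQLCODE(tResult.%SQLCODE, tResult.%Message).AsStatus()
}

While tResult.%Next()
{
    Write !,tResult.%Get("Name")
}

With %ResultSet, the Execute() line would have given me a %Status directly and that middle block would disappear.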

Respectfully,

AS

Amir Samary · Aug 4, 2017

Hi Dan!

I have been using %ResultSet forever and my coding style is as follows:

/// Always return a %Status
ClassMethod SomeMethod() As %Status
{
     Set tSC = $System.Status.OK()
     Try
     {
          Set oRS = ##class(%ResultSet).%New()
          Set tSC = oRS.Prepare("Select ......")
          Quit:$System.Status.IsError(tSC)        

          Set tSC = oRS.Execute()
          Quit:$System.Status.IsError(tSC)

          While oRS.Next()
          {
              //Do something...
          }     
     }
     Catch (oException)
     {
          Set tSC = oException.AsStatus()
     }
     Quit tSC
}

As you can see, it is painful enough to have to deal with both the Try/Catch and %Status ways of handling errors. I use Try/Catch the same way I used to use $ZT back in the day. We must protect the code from unpredictable errors such as <FILEFULL>, <STORE>, etc. On the other hand, most of our APIs return a %Status. So there is no choice but to use a similar structure to handle both ways of reporting errors.

With the new %SQL.Statement interface I am required to check yet another way of reporting errors (SQLCODE) and translate those errors into either a %Status or an exception. That makes my code look ugly and not as object-oriented as I would like. You see, when I am doing demos and coding in front of people, I tend to code the same way I code when building something for real, and vice-versa. Caché/Ensemble is a formidable technology, and one can build things with it that would take months on other technologies. But first impressions are key, and when I am doing demos I want to show beautiful code that is easy to read and understand. That is why I keep using %ResultSet. It's true that %Prepare() returns a %Status, but %Execute() doesn't, and I would have to inspect %SQL.StatementResult for its SQLCODE and transform it into a %Status/exception.

I opened a prodlog for this some time ago (118943), requesting an enhancement for this class to support a %Status property as well as a SQLCODE. 

Kind regards,

AS

Amir Samary · Aug 5, 2017

I started using the $System.Status.* methods about 10 years ago, when I wanted to demo how we could take code from Visual Basic 6, VBA or ASP, copy most of its logic into a Caché class method, and use Language = basic.

If you need to call one of our API methods from that VBScript code, you will probably receive a %Status. As VBScript doesn't support the $$$ macros, the only way to parse the error was with the $System.Status methods. I believe supporting other languages such as VBScript was one of the reasons we put this code in there... but I may be wrong.

So, for consistency, I started using only the $System.Status methods everywhere. I could write some code in COS that parsed an error with $System.Status.IsError(), and I could rewrite the same method in VBScript using the same $System.Status methods, without having to explain to people why, on the same product, we make you deal with errors in different ways. We couldn't avoid "On Error" vs. "Try/Catch", though.

This also helps people notice $System and %SYSTEM package of classes and see what else is in there. Very useful.
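A sketch of the style I mean, using only the $System.Status methods (Some.Class and SomeMethod are hypothetical names):

Set tSC = ##class(Some.Class).SomeMethod()
If $System.Status.IsError(tSC)
{
    // The same calls work from COS, VBScript (Language = basic), etc.
    Write $System.Status.GetErrorText(tSC),!
    Quit tSC
}

No macros involved, so the identical error-handling pattern reads the same in every supported language.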

I understand using the macros results in faster code. I also believe our compiler could optimize $System.Status.IsError() and $System.Status.OK() calls to produce the same byte code as the macros. We probably don't do this but, as a Sales Engineer trying to show people how simple and powerful our technology can be, I prefer consistency and clarity over speed. I would also prefer consistency and clarity over some additional speed in any professional code that must be maintained by someone else in the future...

I have strong feelings about &SQL() too. I would avoid it at all costs, even though I know it is the fastest way to run a query in Caché. I prefer %SQL.Statement or %ResultSet because I hate making my code uglier just to accommodate SQLCODE error handling. Besides, &SQL can't be used in other supported languages such as VBScript (not that this matters much anymore), and it forces you to recompile your classes if you add a new index or make more dramatic changes, such as changing your class storage definition. With %SQL.Statement or %ResultSet you can change your storage definition, add indices, etc. without recompiling, because the cached routines are automatically deleted for you... which is what most people would expect. I like when things look clear, simple and natural... so I also avoid &SQL.

Finally, people tend not to check for errors at all. If you make things complex, most people will produce bad code and blame you for having a complex programming language. Consistency makes people safer.

Kind regards,

AS

Amir Samary · Aug 7, 2017

When you receive a <ZSOAP> or <ZGTW> error, throw it away and take whatever comes into %objlasterror as your final %Status code. For <ZSOAP>, %objlasterror will have the real error description such as the timeout, XML parsing errors, authentication errors, etc. For <ZGTW> errors, %objlasterror will have the complete Java stack trace of the error.
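Here is a sketch of what I mean (the client and method names are hypothetical, and the exact spelling of the exception's Name may vary by version):

Try
{
    Do oSoapClient.SomeWebMethod()
}
Catch (oException)
{
    If (oException.Name [ "ZSOAP") || (oException.Name [ "ZGTW")
    {
        // The generic exception is useless; %objlasterror holds the real %Status
        // (timeout, XML parsing error, Java stack trace, etc.)
        Set tSC = $Get(%objlasterror, oException.AsStatus())
    }
    Else
    {
        Set tSC = oException.AsStatus()
    }
}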

Kind regards,

AS

Amir Samary · Aug 7, 2017

Hi Dan!

I don't really like macros. :) But I love exceptions. It would be awesome if %SQL.Statement simply threw an exception when an error occurs, instead of returning a SQLCODE that must be checked and transformed into either an exception or a %Status... That way, we could keep the number of ways we deal with errors down to two instead of three.

Your explanation is indeed very compelling, and I will start using %SQL.Statement from now on. I was thinking about building a macro named $$$THROWONSQLERROR(result) that receives the result set returned by %Execute(), checks its SQLCODE and, if there is an error, throws it using result.%SQLCODE and %Message, just like CreateFromSQLCODE does. That would let me hide SQLCODE.
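Something along these lines is what I have in mind (a sketch, not tested):

#define THROWONSQLERROR(%result) If (%result.%SQLCODE < 0) { Throw ##class(%Exception.SQL).CreateFromSQLCODE(%result.%SQLCODE, %result.%Message) }

// Usage:
Set tResult = tStatement.%Execute()
$$$THROWONSQLERROR(tResult)

The surrounding Try/Catch then converts the exception to a %Status as usual, so SQLCODE never leaks into the rest of the code.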

Kind regards,

AS

Amir Samary · Aug 7, 2017

Hi Dan!

Yes! I kept reading after I answered and I just noticed that. Thank you for pointing that out!

Kind regards,

AS

Amir Samary · Aug 8, 2017

I believe this recommendation linked in our documentation is outdated and wrong. One must use %objlasterror in several situations. Examples:

Java Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

.NET Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

SOAP Error Handling: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

Caché ActiveX Gateway: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

%New() constructor: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

Using %Dictionary Classes: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

There are many instances where %objlasterror is the ONLY place where you can find out what REALLY happened. Not using this information in production is, IMHO, unwise.

Kind regards,

Amir Samary

Amir Samary · Aug 8, 2017

Agreed. I believe this information should come inside the main exception. Many developers probably have a hard time debugging errors without the real root cause. But then, the documentation explains how to get to the root cause and even gives code snippets showing how to code so that you always have the root cause (which is in %objlasterror).

Amir Samary · Aug 8, 2017

If you need to monitor your productions, try checking out the Ensemble Activity Monitor:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…

It will give you the information you need with minimal performance impact, since the data is stored in a cube and your queries won't affect your runtime system. You can see the counts for the last hour, week, month and year with trending graphs. You also get the same for queuing and wait time. It's great stuff!

Amir Samary · Aug 8, 2017

I use GitHub's issues, which let you create issues of all sorts (bugs, enhancements, tasks, etc.) and assign them to a person or a group of people. The issues are associated with the source code repository, and there is also a projects feature. You create a project and list the tasks needed to accomplish it. Then you can make every task an issue, assign it to people, and drag and drop the tasks from one stage to the next, like Specification > Development > Testing > Production.

GitFlow is a good and flexible workflow. But I don't use Git to deploy to Pre-Live or LIVE. I normally have five environments:

  • Development - Your machine
  • Integration - where you integrate the develop branch from Git with the work of all developers. Downloading the code from GitHub can be done automatically (when a change is merged back into the develop branch) or manually.
  • QA - This environment is where you download code from GitHub's master branch when there is a new release. Your users can test the new release here without being bothered.
  • Pre-Production/Pre-LIVE - This environment is periodically overwritten with a copy of LIVE. It is where you try and test applying your new release.
  • Production

GitFlow's hotfix branch may be used depending on your environment. Depending on the change and on the urgency, it can be a pain to actually test the fix on your development machine. Your local globals may not match the storage definition of what is in production, because you may have been working on a new version of your classes with different global structures. You may need large amounts of data, or specific data, to reproduce the problem on your developer machine, etc. You can do it, but every hotfix will be a different workflow. Depending on the urgency, you may simply not have the time to prepare your development environment with the data and conditions to reproduce the problem, fix it and produce the hotfix. But it can be done.

On the other hand, as pre-production is a copy of LIVE, you can safely fix the problem there manually (forget GitHub), apply the change to LIVE and then incorporate these changes into your next release. I think this is cleaner. Every time you have a problem in LIVE, you can investigate it on PRE-LIVE. If PRE-LIVE is outdated, you can ask Operations for an emergency unscheduled refresh of PRE-LIVE to work on it.

About patching

I recommend always creating your namespaces with two databases: One for CODE and another for DATA.

That allows you to implement patching with a simple copy of the CODE database. You stop the instance, copy the database and start the instance. Simple as that. Every release may have an associated class with code to be run to rebuild indices, fix global structures that may have changed, do other kinds of maintenance, etc.
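As an example of what such a release class could look like (all class and method names here are hypothetical):

Class MyApp.Release.V2017R2 Extends %RegisteredObject
{

/// Run post-patch maintenance for this release
ClassMethod Run() As %Status
{
    Set tSC = $System.Status.OK()
    Try
    {
        // Rebuild the indices that changed in this release
        Set tSC = ##class(MyApp.Data.Patient).%BuildIndices()
        Quit:$System.Status.IsError(tSC)

        // Fix global structures, migrate data, etc.
    }
    Catch (oException)
    {
        Set tSC = oException.AsStatus()
    }
    Quit tSC
}

}

Operations then run one documented method per release, right after copying the CODE database.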

If you are using Ensemble and don't want to stop your instance, you can package your patch as a normal XML export plus a Word document explaining how to apply it. Test this on your Pre-LIVE environment. Fix the document and/or the XML package if necessary, and try again until patching works. Then run it on LIVE.

Before applying new releases to PRE-LIVE or LIVE, take a full snapshot of your servers' virtual machines. If the patching procedure fails for some reason, you may need to roll back to that point in time. This is especially useful on PRE-LIVE, where you are still testing the patching procedure and will most likely break things until you get it right. Being able to quickly go back in time and try again and again will give you the freedom you need to produce a high-quality patching procedure.

If you can afford downtime, use it. Don't push a zero-downtime policy if you don't really need it. It will only make things unnecessarily complex and risky. You can patch Ensemble integrations without downtime with the right procedure, though. A microservices architecture may also help you eliminate downtime, but it is complex and requires a lot of engineering.

Using External Service Registry

I recommend using the External Service Registry so that, when you generate your XML project with the new production definition, no references to end points, folders, etc. are in it. Even if you don't send your entire production class, this will help with the periodic refresh of the pre-live databases from live. The External Service Registry stores the end point configurations outside your databases, and they will be different on LIVE, PRE-LIVE, QA, DEV and on the developer's machine (which may be using local mock services, new versions of services elsewhere, etc.).

Amir Samary · Aug 10, 2017

Hi Mike,

%List is not supposed to be used that way. It doesn't have an SQL projection, so you can't create a property of that type on a persistent class and expect it to work the way a %String or %Integer property does. You could keep it as a private property for some other purpose of your class, but not use it like any other property as you expect.

I think %List was created to be used in the definitions of method parameters and return types, to represent a $List argument. That way one knows such a datatype must be used, and it is also used when projecting those methods to Java, .NET, etc. (there is a $List-like datatype for every language binding we support).

If you need to represent names in a format such as "Surname,Name", try using %String or creating another datatype that inherits from %String and validates that the string is in such format by implementing the IsValid() method of your datatype.
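A sketch of such a datatype (the class name is hypothetical):

Class MyApp.DataType.FullName Extends %String
{

/// Validate that the value is in "Surname,Name" format
ClassMethod IsValid(%val As %CacheString) As %Status
{
    // Exactly one comma, with text on both sides
    If ($Length(%val, ",") '= 2) || ($Piece(%val, ",", 1) = "") || ($Piece(%val, ",", 2) = "")
    {
        Quit $$$ERROR($$$GeneralError, "Name must be in ""Surname,Name"" format: "_%val)
    }
    Quit $$$OK
}

}

Any property declared As MyApp.DataType.FullName then gets this validation automatically on %Save and via SQL.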

Also, don't call the LogicalTo*, ODBCTo* or DisplayTo* methods directly. Those are called by the SQL and object engines as appropriate. For instance, ODBCToLogical is called when storing data coming in through ODBC/JDBC connections. You shouldn't have to call these methods.

Kind regards,

AS