Amir Samary · Aug 10, 2017 go to post

Hi Robert,

You are right. Now I see there are methods on the datatype for LogicalToODBC and ODBCToLogical. 

But I insist that the methods LogicalTo*, ODBCTo* and DisplayTo* should not be called directly. Although they will work, the correct way of dealing with data type conversions is to use normal functions such as $ZDateH() with the proper format code.

If one wants to store names in a format where the surname is separated from the given name by a comma, I would instead simply use %String, or subclass it to create a new datatype that makes sure the comma is there (although I think even that is too much).

Kind regards,

AS

Amir Samary · Aug 29, 2017 go to post

If you have a flag on your database that you set after reading the records, and this flag has an index, you should be fine. Why bother with a schedule? I mean: you could set this to run every hour and process only the records that have the right flag...

But if you really need to run your service on a specific schedule, I would suggest changing your architecture. Try creating a Business Service without an adapter (ADAPTER=Ens.InboundAdapter) so you can call it manually through Ens.Director.CreateBusinessService(). From the Business Service, call a Business Process that will call a Business Operation to grab the data for you. Your Business Process will then receive the data and do whatever needs to be done with it, perhaps calling other Business Operations.

If that works, create a task (a class that inherits from %SYS.Task.Definition) that creates an instance of your Business Service using Ens.Director and calls it, as in the sketch below. Configure the task to run on whatever schedule you need, such as "Once a day at 12PM", "Every hour" or "On Monday, at 12AM", etc.
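Here is a minimal sketch of such a task class (all names are hypothetical and error handling is kept to a minimum):

Class MyApp.Task.RunMyService Extends %SYS.Task.Definition
{

Parameter TaskName = "Run My Scheduled Service";

Method OnTask() As %Status
{
    // Instantiate the adapterless Business Service by its configured name
    // in the production ("MyScheduledService" is an assumed name).
    Set tSC = ##class(Ens.Director).CreateBusinessService("MyScheduledService", .tService)
    Quit:$System.Status.IsError(tSC) tSC
    // Trigger the service manually; its OnProcessInput() does the real work.
    Quit tService.ProcessInput("")
}

}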

Kind regards,

AS

Amir Samary · Oct 11, 2017 go to post

Hi!

I wouldn't recommend putting globals or routines in your %SYS namespace, for a few reasons:

  • The %SYS namespace is shared with all namespaces. That may be what you want today, but may not be what you want in the future.
  • One nice thing about organizing your code, globals and classes in different databases is that it simplifies your patching procedures. Instead of shipping classes, globals and routines from DEV to TEST or LIVE, you can ship the databases themselves. So, if you have one database for all your routines/classes/compiled pages and you have a new release of your application, you can copy the entire CODE database from DEV to TEST or LIVE and "upgrade" that environment. Of course, upgrading an environment may involve other things, like rebuilding some indices or running some patching code. But again, you can back up your databases, move the new databases in to replace the old ones, run your custom code, rebuild your indices (if necessary) and test it. If something fails, you can always revert to the previous set of databases. You can't do that if you have custom code and globals in %SYS, since copying the CACHESYS database from one system to another carries over far more than you would want to take from one system to the other.
  • With the new InterSystems Container Manager (ICM) that comes with InterSystems IRIS, there is the concept of "durable %SYS", where the %SYS namespace is extracted from inside the container and put on an external volume. That is good, because you can replace your container without losing the system configuration stored in %SYS. But IRIS is not meant to carry forward old practices such as keeping custom globals, routines and classes in %SYS. InterSystems may decide to change this behavior and make %SYS less friendly to custom code.

So you can certainly create more databases to organize your libraries and globals, and map them into your namespaces without problems: one database for code tables, one for libraries you share between all your namespaces, one for CODE, another for DATA, etc. That is what namespaces are for: to rule them all.
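For reference, such mappings can also be created programmatically (a sketch assuming a namespace MYAPP and a database CODEDB; both names are hypothetical):

ZN "%SYS"
// Map the global ^MyCodeTable from the CODEDB database into MYAPP
Set tProps("Database") = "CODEDB"
Set tSC = ##class(Config.MapGlobals).Create("MYAPP", "MyCodeTable", .tProps)
If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC)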

Kind regards,

AS

Amir Samary · Oct 25, 2017 go to post

Hi Edouard!

Robert's solution works perfectly if you map the other database, through ECP, to the other host. ECP is very simple to configure. 

If this global belongs to a table, you could configure an ODBC/JDBC connection to the other system and create a linked table on system "TO" that is linked, through ODBC/JDBC, to the real table on system "FROM", then run code similar to Robert's. But instead of an $Order loop, you would use %SQL.Statement and SELECT the records, as in the sketch below.
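For illustration, the read-and-copy loop on system "TO" could look like this (a minimal sketch; the linked table MyLink.Person and the target global are hypothetical):

Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare("select ID, Name from MyLink.Person")
If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC)
Set tResult = tStatement.%Execute()
While tResult.%Next()
{
    // Copy each remote row into the local global, much like Robert's $Order loop
    Set ^MyGlobal(tResult.%Get("ID")) = tResult.%Get("Name")
}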

ECP requires a Multi-Server license; that is why I am suggesting this alternative with the SQL Gateway.

Kind regards,

AS

Amir Samary · Oct 26, 2017 go to post

Hi Eduard!

Without going too deep into your code and trying to enhance it, I can suggest that you:

  1. Put this method on a utility class
  2. Recode it as a method generator. The method would use a variation of your current code to produce a $Case() that simply returns the position of a given property, instead of querying the class definition at runtime.
  3. Make the class in which you want to use it (e.g. Sample.Address) inherit from that utility class.

Of course, if this is a one-time thing, you could simply implement number 2. Or you could forget my suggestion altogether if performance isn't an issue for this scenario. By using a method generator, you eliminate from the runtime any inefficiency in the code used to (pre)compute the positions. Here is a sketch of what I mean:
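A minimal sketch of item 2 (here "position" is just the compiled property's sequence number as an illustration; adapt the loop to whatever your current code computes). The loop below runs at compile time only; the generated method is a single $Case():

Class Util.PropertyPosition [ Abstract ]
{

/// Returns the position of the given property, precomputed at compile time.
ClassMethod GetPropertyPosition(pName As %String) As %Integer [ CodeMode = objectgenerator ]
{
    Set tLine = " Quit $Case(pName"
    For i=1:1:%compiledclass.Properties.Count() {
        Set tProp = %compiledclass.Properties.GetAt(i)
        Set tLine = tLine_","""_tProp.Name_""":"_tProp.SequenceNumber
    }
    // Unknown property names return 0
    Do %code.WriteLine(tLine_",:0)")
    Quit $$$OK
}

}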

Kind regards,

AS

Amir Samary · Oct 27, 2017 go to post

Hi Eduard,

I fail to follow your reasoning. If you are making other classes inherit from a class of yours, you are changing those classes by definition: you are adding a new method to them. Whether this method is a simple method or has its code computed during class compilation is transparent to whoever calls it.

And the resulting method could return the information in any format you choose, including $LB.

Kind regards,

AS

Amir Samary · Dec 7, 2017 go to post

Hi!

If it's unidirectional and the network is protected and under control, I would definitely avoid using %SYNC and would use Asynchronous Mirroring instead.

On the other hand, there is a limit on the number of async mirror members you can have; I think it's currently 16. If this is not enough for you, then you will have to think of another solution, such as %SYNC, or simply a process that periodically exports the globals and sends them everywhere through a secure channel (a SOAP web service, for instance).

I have implemented a %SYNC-over-SOAP toolkit that makes things easier to set up and monitor. I am still finishing some aspects of it. %SYNC is a very good toolkit, but it lacks a good communication and management infrastructure. I have the communication (through a protected SOAP channel) sorted out. I am now working on the operational infrastructure, such as purging the ^OBJ.SYNC global, protecting journal files from being purged if a node is lagging behind, some monitoring, etc.

If Asynchronous Mirroring doesn't work for you (it should be your first choice) and you can build a simple task that periodically exports your globals and sends them through SOAP to your other nodes, I can share my code around %SYNC with you.

Kind regards,

Amir Samary

Amir Samary · Dec 29, 2017 go to post

Hi!

If you don't have a problem with losing the order of messages, just increase your pool size. But you should take a look at your transformation and ask why it is so heavy; a transformation should not slow down your system like this.

On the other hand, if you do care about the order of messages, you could use a message router to split the channel across more processes, based on some criterion that won't affect the order of messages. For instance, at a client, I was receiving an HL7 feed from a single system that was used by many facilities. On this feed I had messages to create a patient, update it, admit it, etc. If a message arrived at its destination before the previous related ones, I would lose updates. The business process transforming them was a bottleneck (and improving it to eliminate the bottleneck would have taken time). So we ended up creating a routing rule to split the HL7 feed by facility and created an instance of that business process for each facility. That allowed us to parallelize the processing while still keeping the order of messages (because a patient couldn't be at more than one facility at the same time).

Of course, we later took the time to improve that slow business process and retrofitted the production back to its fast simplicity. ;)

Kind regards,

AS

Amir Samary · Feb 12, 2018 go to post

IMHO, the Minimal Security option should be completely eliminated from the product.

I saw this behavior of the /api/atelier application being created with only Unauthenticated access on Ensemble installations with Locked Down security. But that was about a year ago, and I thought it was because Atelier was still in beta. Is this happening on current Ensemble and IRIS installations as well? With what security level did you install them?

Amir Samary · Feb 20, 2018 go to post

Hi Antonio!

The examples I have given show how to use the returned data. I show how to do it:

  • Directly, using |CPIPE|, and
  • By using a handy method of the class %Net.Remote.Utility.

Have you seen them above?

AS

Amir Samary · Mar 4, 2018 go to post

Hi!

IMHO, this is not application dependent at all. When we call Freeze on one of the failover members, we don't care about what is running on the instance. Please notice that after you call Freeze, you snapshot everything: not only the filesystems where the database files are, but also the filesystems holding the journal files, the WIJ file and the application files. So, when you restore the backup, you are restoring a point in time where the database file (CACHE.DAT) may or may not be consistent by itself, but the journal and WIJ files restored with it will make it consistent.

It is also important to notice that Freeze/Thaw will switch the journal file for you, and this is how transaction consistency is kept. I mean, Caché/Ensemble/IRIS will split the journal at a point where the entire transaction, and probably what is in the WIJ file, is there and consistent.

After restoring this backup, you must execute a journal restore of all journal files generated after the Freeze/Thaw to bring your instance up to date.
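For reference, the freeze/thaw pair is typically driven by the snapshot script, along these lines (a minimal sketch; the instance name is hypothetical and error handling is omitted):

#!/bin/bash
# Freeze database writes (journaling continues) before taking the snapshot
csession MYINSTANCE -U %SYS "##class(Backup.General).ExternalFreeze()"
# ... trigger the filesystem/VM snapshot of databases, journals and WIJ here ...
# Resume normal database writes
csession MYINSTANCE -U %SYS "##class(Backup.General).ExternalThaw()"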

Unfortunately, I can't answer about doing the backup on the Async member. At first glance, I believe there is no problem with it. You just need to be careful not to forget that the Async member exists, and to keep applying to it the patches and configuration changes you make on the failover members, so that you have a complete backup of your system (application code and configuration included). But I don't know what happens when you execute the Freeze/Thaw procedure on the Async member. Supposedly, freezing writes to the databases and switching to a new journal file would be performed on all cluster members, but the documentation is not clear about what "all cluster members" means; it is not clear whether it includes Async mirror members.

My opinion is that backup on an Async member is not supported and may be problematic. For it to work, it would still have to freeze both failover members to have consistent journal files on all nodes, so there would be no gain in doing it on the Async member. But that is only my opinion. Let's see if someone else can confirm my suspicion.

Kind regards,

AS

Amir Samary · Mar 6, 2018 go to post

Hi again!

I was checking the documentation of ExternalFreeze() here, and there is an option for not switching the journal file. The parameter defaults to 1 (switch the journal file), but you can change it to 0. Maybe that would allow you to do the Freeze/Thaw on an Async mirror member without consequences. Or maybe ExternalFreeze() skips the journal switch on an Async mirror member regardless of what you pass in this parameter; the documentation is not clear, though...

Maybe someone with more knowledge of the internals could clarify? I believe each CACHE.DAT file knows the last journal entry applied to it, so during a restore procedure it could simply start in the middle of a journal file and proceed through the newer journal files created during/after the backup.

I would also like to understand why we switch the journal file at all if, during a Freeze, all new transactions that can't be written to the database (because of the freeze) end up in the current journal file. A new journal file is created after ExternalThaw(), but all the transactions executed during the Freeze will be in the previous journal file. It seems to me that switching the journal file serves no purpose, since we always have to pick up the previous journal file anyway during a restore.

Kind regards,

AS

Amir Samary · May 7, 2019 go to post

Hi! 

I thought of that. But I really want to write custom ObjectScript code instead of relying on %SQL.Statement or %ResultSet, because the data I want to aggregate and return is not easily searchable with a single statement.

But I think I am going to be using %Dictionary.* to generate the code dynamically.

Kind regards,

AS

Amir Samary · May 8, 2019 go to post

Thank you! That helps a lot. The problem I was having is that I was implementing XXXGetInfo() but not XXXGetODBCInfo().

Amir Samary · Apr 2, 2017 go to post

Hi!

You don't actually need to configure a certificate on your Apache or even to encrypt the communication between Apache and the SuperServer with SSL/TLS.

You can create a CSP application that is Unauthenticated and give it the privileges your web services need (Application Roles; more info here). I would also configure "Permitted Classes" with a pattern that only allows your specific web services to be called, and I would block CSP/ZEN and DeepSee on this CSP application.

More info on configuring CSP Applications here.

Then, for each web service you want to publish on this application (and mention in Permitted Classes), you will create a Web Service Security Policy, using the Caché Studio wizard for that (more info here).

The wizard lets you choose from a set of options, with several variations for each, for securing your web service. You may choose "Mutual X.509 Certificates Security" from the combobox. Here is the description of this option:

This policy requires all peers to sign the message body and timestamp, as well as WS-Addressing headers, if included. It also optionally encrypts the message body with the public key of the peer's certificate.

You can configure Caché PKI (Public Key Infrastructure) to have your own CA (Certificate Authority) and generate the certificates that your server and clients will use.

This guarantees that only a client that has a certificate issued by you will be able to authenticate and call this web service. The body of the call will be encrypted.

If you restrict the entry points of this Unauthenticated CSP application using "Permitted Classes", and these permitted classes are web services protected by such policies, you are good to go. Remember to give the application the privileges (Application Roles) your service needs to run properly (privileges on the database resource, SQL tables, etc.).

This doesn't require a username token. If you still want to use a username/password token, you can require that using the same wizard. Here is an additional description that the wizard provides:

Include Encrypted UsernameToken: This policy may optionally require the client to send a Username Token (with username and password). The Username Token must be specified at runtime. To specify the Username Token, set the Username and Password properties or add an instance of %SOAP.Security.UsernameToken to the Security header with the default $$$SOAPWSPasswordText type.
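For reference, on the client side that token can be attached like this (a minimal sketch; the client proxy class and web method names are hypothetical):

Set tClient = ##class(MyApp.MyServiceClient).%New()
// Create the WS-Security UsernameToken and add it to the Security header
Set tToken = ##class(%SOAP.Security.UsernameToken).Create("myuser", "mypassword")
Do tClient.SecurityOut.AddSecurityElement(tToken)
Set tResult = tClient.MyWebMethod()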

If you decide to do that, make sure your CSP application is configured for "Password" authentication and that "Unauthenticated" is not checked.

Also, don't forget to use a real Apache web server. My point is that you don't need to configure Apache, or its connection to the SuperServer, with an SSL certificate for all this to work. Caché will do the work, not Apache. Apache will receive a SOAP call that won't be entirely encrypted, but if you look into it, you will notice that the body is encrypted, the header includes a signed timestamp, the username/password token is encrypted, and so on. So, although this is not HTTPS, the certificates are being used to do all sorts of things in the header and body of the call that give you a lot more protection than plain HTTPS.

But please don't get me wrong: you do need HTTPS if you are building an HTML web application, or if you are using other kinds of web services such as REST that don't have all the alternative enterprise security provided by SOAP. SOAP can stand alone, secure, without HTTPS. Your web application can't.

Amir Samary · May 11, 2017 go to post

Hi Danny!

That is exactly what I want to avoid...

If you have written old-style CSP applications, you will remember that the CSP infrastructure does this translation for you. UTF-8 comes in, but you don't notice it, because by the time you need to store it in the database it has already been converted into the character encoding used by the database. And when you write that data from the database back to the user to compose a new HTML page, it is translated back to UTF-8.

I was expecting the same behavior with %CSP.REST. 

Why do %CSP.REST services go only halfway? I mean:

  • If I do nothing and leave CONVERTINPUTSTREAM=0, data will come in as UTF-8 and I will save it in the database as UTF-8. When I give the data back to the page, it will present itself fine, since it is UTF-8, but the data in the database is not in the right encoding, and that causes other problems. To avoid them, I must do what you suggest and use $ZConvert on everything.
  • If I set CONVERTINPUTSTREAM=1, data will come in as UTF-8 and be translated by the %CSP.REST/%CSP.Page infrastructure, and I will save it in the database using whatever encoding Caché uses for Unicode characters. So I avoid doing the $ZConvert myself; it is done automatically. But then, when I need to use that stored data to show a new page, %CSP.REST won't translate it back to UTF-8, and it will be presented as garbage. So I am required to use $ZConvert to do the output translation myself (see the sketch below), which is absurd and inelegant, since %CSP.REST has already done half of the job for me.
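For reference, this is the manual translation I mean (a minimal sketch; "I"/"O" select the input/output direction of the translation):

// UTF-8 from the request -> internal encoding (what CONVERTINPUTSTREAM=1 automates)
Set tInternal = $ZConvert(tRequestData, "I", "UTF8")
// internal encoding -> UTF-8 for the response (there is no CONVERTOUTPUTSTREAM to automate this)
Write $ZConvert(tInternal, "O", "UTF8")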

So I want to use CONVERTINPUTSTREAM=1 to mimic the typical behavior of CSP pages that you describe and we all love. But it goes only halfway, for some reason, and I wonder what I could do to fix this right now for my client.

Do you realize that CONVERTINPUTSTREAM is a good thing? I am only sorry that we don't have CONVERTOUTPUTSTREAM...

Kind regards,

AS

Amir Samary · May 11, 2017 go to post

Ok... I think I have found how to do it.

The problem was that I use a main dispatcher %CSP.REST class that routes the REST calls to other %CSP.REST classes, which I will call the delegates.

I had the CHARSET parameter on the delegates, but not on the main router class! I just added it to the main router class and it worked!
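For reference, the main dispatcher now looks something like this (a minimal sketch; class and route names are hypothetical):

Class MyApp.REST.Main Extends %CSP.REST
{

/// Translate the incoming UTF-8 request body into the server's internal encoding
Parameter CONVERTINPUTSTREAM = 1;

/// Declare UTF-8 as the charset for both input and output
Parameter CHARSET = "utf-8";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Map Prefix="/tasks" Forward="MyApp.REST.Tasks"/>
</Routes>
}

}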

So, in summary, to avoid doing $ZConvert everywhere in REST applications, make sure you have both parameters: CONVERTINPUTSTREAM=1 and CHARSET="utf-8". It won't hurt to have the CHARSET declarations on your CSP and HTML pages as well, like:

<!DOCTYPE html>
<html>
<head>
    <CSP:PARAMETER Name="CHARSET" Value="utf-8">
    <title>My tasks</title>
    <meta charset="utf-8" />
</head>

Kind regards,

Amir Samary

Amir Samary · May 12, 2017 go to post

Hi Sean!

Thank you for your analysis. But try doing it with a %CSP.REST service instead of a %CSP.Page. %CSP.REST overrides the Page() method with a completely new one, and the behavior is different because it is in the Page() method that the IO translation table is set up.

It looks like my problem was related to the fact that I didn't have the CHARSET parameter declared on my main %CSP.REST dispatcher. I only had it on the %CSP.REST delegates. When I put it on the main dispatcher, it worked perfectly.

But you may be onto something... I would rather specify the charset the way you do (on the JavaScript call), because I may want to use the same %CSP.REST dispatcher to receive a binary file or something other than UTF-8. That is an excellent point; thank you very much for the tip. I will do what you said, try removing the CHARSET parameter from both my main %CSP.REST dispatcher and the delegates, and see what happens. I will let you know!

Kind regards,

Amir Samary

Amir Samary · May 12, 2017 go to post

Why not use REST services? That scales better and is the way things are done nowadays everywhere, right?

Amir Samary · May 19, 2017 go to post

Hi Sean!

Can you please tell me the exact $ZV of the instance you used for your tests?

Kind regards,

AS

Amir Samary · May 23, 2017 go to post

I normally use one Web Application for serving CSP pages and static files and another for the REST calls. I configure one under the other, like:

  • /csp/myapp
  • /csp/myapp/rest

And I configure both with the same Group ID and session cookie, and set the UseSession parameter on the dispatcher class, as sketched below. That way, once the user has logged in through CSP, the REST calls will just work without requiring another login.
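A minimal sketch of the dispatcher side (the class name is hypothetical):

Class MyApp.REST.Dispatcher Extends %CSP.REST
{

/// Reuse the CSP session (and its authentication) established by /csp/myapp
Parameter UseSession As Integer = 1;

}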

Kind regards,

Amir Samary

Amir Samary · May 23, 2017 go to post

Hi Eduard!

Here is a simple way of finding it out:

select top 1 TimeLogged
from Ens_Util.Log
where ConfigName = 'ABC_HL7FileService'
  and SourceMethod = 'Start'
  and Type = '4' --Info
order by %ID desc

Put the logical name of your component in ConfigName. There is a bitmap index on both Type and ConfigName, so this should be blazing fast too! Although, for some reason, the query plan is not using Type:
 
Relative cost = 329.11
    Read bitmap index Ens_Util.Log.ConfigName, using the given %SQLUPPER(ConfigName), and looping on ID.

    For each row:
    - Read master map Ens_Util.Log.IDKEY, using the given idkey value.
    - Output the row.
 
Kind regards,
AS
Amir Samary · May 23, 2017 go to post

Hi!

Assuming you meant "BPL" (Business Process Language) instead of "DTL" (Data Transformation Language):

If you simply want your Business Operation to keep trying until it gets the job done:

  • In the BPL, make a synchronous call, or an asynchronous call with a <sync> activity for it.
  • In the BO, set FailureTimeout=-1. Also, make sure you understand the "Reply Code Actions" setting of your Business Operation: you don't want to retry on every kind of error. You probably want to retry on some errors and fail on others. If you set FailureTimeout=-1 and your Reply Code Actions decide to retry on that kind of error, it will retry forever until it gets it done. If your Reply Code Actions decide to fail on other types of errors, it will return an error to your Business Process.
  • If you know that, for some errors, the BO will return a failure, protect the call you make in your BPL with a <scope> activity so you can catch it and take additional actions.

More about "Reply Code Actions" here.

Kind regards,

Amir Samary

Amir Samary · May 24, 2017 go to post

Hi!

The stream has an attribute called "Filename" that you can read like this in your OnProcessInput():

Set tFileName=pInput.Attributes("Filename")

You can read the same attribute in your Business Process or Business Operation.

This is documented in the adapter documentation here.
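For context, here is a minimal sketch of where that line lives in a file Business Service (class name and trace message are just an illustration):

Class MyApp.FileService Extends Ens.BusinessService
{

Parameter ADAPTER = "EnsLib.File.InboundAdapter";

Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
{
    // The inbound file adapter records the source file name as a stream attribute
    Set tFileName = pInput.Attributes("Filename")
    $$$TRACE("Processing file: "_tFileName)
    // ... send the stream on to a Business Process/Operation here ...
    Quit $$$OK
}

}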

Kind regards,

AS

Amir Samary · May 30, 2017 go to post

Hi!

If you are not using OS single sign-on, this shell script should do it:

#!/bin/bash

csession AUPOLDEVENS <<EOFF
SuperUser
superuserpassword
ZN "%SYS"
Do ^SECURITY
1
3




halt
EOFF

Where:

  • SuperUser - Is your username
  • superuserpassword - Is your SuperUser password

I chose ^SECURITY menu option 1, then option 3. Then I hit ENTER until I exited the ^SECURITY routine, and terminated the session with the halt command.

If you are using OS single sign-on, remove the first two lines (username and password), since Caché won't ask for them.

The blank lines after number 3 are the ENTERs you hit to go back up the menu hierarchy until you exit.

The halt is necessary to avoid an error such as the following:

ERROR: <ENDOFFILE>SYSTEMIMPORTALL+212^SECURITY
%SYS>
<ENDOFFILE>
<ERRTRAP>

You can do more complex stuff with this technique, such as validating errors and returning Unix exit codes to your shell, so that you can know whether the operation was successful:

#!/bin/bash

csession INSTANCENAME <<EOFF
ZN "MYNAMESPACE"

Set tSC = ##class(SomeClass).SomeMethod()
If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC) Do $zu(4,$j,1) ;Failure!

Do $zu(4,$j,0) ;OK!
EOFF

The $zu(4,$j,rc) call will halt the session and return the exit code rc to your shell script. As you can see, the halt command is not necessary when using this $zu function.

I hope that helps!

Kind regards,

AS

Amir Samary · May 30, 2017 go to post

This is quick and dirty code I just wrote that can convert simple JSON strings to XML. Sometimes the JSON will be simple enough for simple code like this... I am not a JSON expert, but maybe this can be a good starting point for something better.

This will work only on Caché 2015.2+.

Call the Test() method of the following class:

Class Util.JSONToXML Extends %RegisteredObject
{

ClassMethod Test()
{
    Set tSC = $System.Status.OK()
    Try
    {
        Set oJSON={"Prop1":"Value1","Prop2":2}
        Set tSC = ..JSONToXML(oJSON.%ToJSON(), "Test1", .tXML1)
        Quit:$System.Status.IsError(tSC)
        Write tXML1
        
        Write !!
        Set oJSON2={"Prop1":"Value1","Prop2":2,"List":["Item1","Item2","Item3"]}
        Set tSC = ..JSONToXML(oJSON2.%ToJSON(), "Test2", .tXML2)
        Quit:$System.Status.IsError(tSC)
        Write tXML2
        
        Write !!
        Set oJSON3={
                "name":"John",
                "age":30,
                "cars": [
                    { "name":"Ford", "models":[ "Fiesta", "Focus", "Mustang" ] },
                    { "name":"BMW", "models":[ "320", "X3", "X5" ] },
                    { "name":"Fiat", "models":[ "500", "Panda" ] }
                ]
             }
        Set tSC = ..JSONToXML(oJSON3.%ToJSON(), "Test3", .tXML3)
        Quit:$System.Status.IsError(tSC)
        Write tXML3

    }
    Catch (oException)
    {
        Set tSC =oException.AsStatus()
    }
    
    Do $System.Status.DisplayError(tSC)
}

ClassMethod JSONToXML(pJSONString As %String, pRootElementName As %String, Output pXMLString As %String) As %Status
{
        Set tSC = $System.Status.OK()
        Try
        {
            Set oJSON = ##class(%Library.DynamicObject).%FromJSON(pJSONString)
            
            Set pXMLString="<?xml version=""1.0"" encoding=""utf-8""?>"_$C(13,10)
            Set pXMLString=pXMLString_"<"_pRootElementName_">"_$C(13,10)
            
            Set tSC = ..ConvertFromJSONObjectToXMLString(oJSON, .pXMLString)
            Quit:$System.Status.IsError(tSC)
            
            Set pXMLString=pXMLString_"</"_pRootElementName_">"_$C(13,10)
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
}

ClassMethod ConvertFromJSONObjectToXMLString(pJSONObject As %Library.DynamicAbstractObject, Output pXMLString As %String) As %Status
{
        Set tSC = $System.Status.OK()
        Try
        {
            Set iterator = pJSONObject.%GetIterator()
            
            While iterator.%GetNext(.key, .value)
            {
                Set tXMLKey=$TR(key," ")
                Set pXMLString=pXMLString_"<"_tXMLKey_">"
                
                If value'=""
                {
                    If '$IsObject(value)
                    {
                        Set pXMLString=pXMLString_value
                    }
                    Else
                    {
                        Set pXMLString=pXMLString_$C(13,10)
                        If value.%ClassName()="%DynamicObject"
                        {
                            Set tSC = ..ConvertFromJSONObjectToXMLString(value, .pXMLString)
                            Quit:$System.Status.IsError(tSC)                            
                        }
                        ElseIf value.%ClassName()="%DynamicArray"
                        {
                            Set arrayIterator = value.%GetIterator()
                                        
                            While arrayIterator.%GetNext(.arrayKey, .arrayValue)
                            {
                                Set pXMLString=pXMLString_"<"_tXMLKey_"Item key="""_arrayKey_""">"
                                If '$IsObject(arrayValue)
                                {
                                    Set pXMLString=pXMLString_arrayValue
                                }
                                Else
                                {                                    
                                    Set tSC = ..ConvertFromJSONObjectToXMLString(arrayValue, .pXMLString)
                                    Quit:$System.Status.IsError(tSC)                            
                                }
                                Set pXMLString=pXMLString_"</"_tXMLKey_"Item>"_$C(13,10)
                            }
                            Quit:$System.Status.IsError(tSC)
                        }
                    }
                }
                
                Set pXMLString=pXMLString_"</"_tXMLKey_">"_$C(13,10)
            } //While
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
}

}

Here is the output:

Do ##class(Util.JSONToXML).Test()
<?xml version="1.0" encoding="utf-8"?>
<Test1>
<Prop1>Value1</Prop1>
<Prop2>2</Prop2>
</Test1>
 
 
<?xml version="1.0" encoding="utf-8"?>
<Test2>
<Prop1>Value1</Prop1>
<Prop2>2</Prop2>
<List>
<ListItem key="0">Item1</ListItem>
<ListItem key="1">Item2</ListItem>
<ListItem key="2">Item3</ListItem>
</List>
</Test2>
 
 
<?xml version="1.0" encoding="utf-8"?>
<Test3>
<name>John</name>
<age>30</age>
<cars>
<carsItem key="0"><name>Ford</name>
<models>
<modelsItem key="0">Fiesta</modelsItem>
<modelsItem key="1">Focus</modelsItem>
<modelsItem key="2">Mustang</modelsItem>
</models>
</carsItem>
<carsItem key="1"><name>BMW</name>
<models>
<modelsItem key="0">320</modelsItem>
<modelsItem key="1">X3</modelsItem>
<modelsItem key="2">X5</modelsItem>
</models>
</carsItem>
<carsItem key="2"><name>Fiat</name>
<models>
<modelsItem key="0">500</modelsItem>
<modelsItem key="1">Panda</modelsItem>
</models>
</carsItem>
</cars>
</Test3>
I hope that helps!
Kind regards,
AS
Amir Samary · Jun 9, 2017 go to post

Hi!

It looks like you are trying to implement security in your class model instead of just configuring it. I think you only need a single class with all the properties. Then you give user A full access to the table by assigning that user a role that grants INSERT, DELETE, UPDATE and SELECT privileges.

User B would be assigned another role that grants SELECT privilege only, as sketched below.
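A minimal sketch in SQL (table and role names are hypothetical):

GRANT INSERT, UPDATE, DELETE, SELECT ON MyApp.Patient TO RoleA
GRANT SELECT ON MyApp.Patient TO RoleB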

And if user B can only see a subset of the rows of your table, configure row-level security using the role information in $Roles. The InterSystems documentation here explains row-level security configuration very clearly.

Amir Samary · Jun 19, 2017 go to post

Hi!

I am not sure if I understood your questions. But here is an explanation that may help you...

If you want to run a SQL query filtering by a date

Let's take the Sample.Person class in the SAMPLES namespace as an example. It has a DOB (date of birth) field of type %Date. This stores dates in Caché's $Horolog format (an integer that counts the number of days since December 31, 1840).

If your date is in the format DD/MM/YYYY (for instance), you can use the TO_DATE() function in your query to convert the date string to the $Horolog number:

select * from Sample.Person where DOB=TO_DATE('27/11/1950','DD/MM/YYYY')

That will work independently of the runtime mode you are in (Display, ODBC or Logical).

On the other hand, if you are running your query with the ODBC Runtime Select Mode, you can reformat your date string to the ODBC format (YYYY-MM-DD) and skip TO_DATE():

select * from Sample.Person where DOB='1950-11-27'

That still converts the string '1950-11-27' to the internal $Horolog number, which is:

USER>w $ZDateH("1950-11-27",3)

40142

If you already have the date in the internal $Horolog format, you can run your query using the Logical Runtime Select Mode:

select * from Sample.Person where DOB=40142

You can try these queries on the management portal. Just remember to change the Runtime Select Mode accordingly.

If you are using dynamic queries with %Library.ResultSet or %SQL.Statement, set the runtime mode (the %SelectMode property on %SQL.Statement) before running your query, for instance:
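A minimal sketch using the ODBC mode with the Sample.Person query above:

Set tStatement = ##class(%SQL.Statement).%New()
Set tStatement.%SelectMode = 1  // 0 = Logical, 1 = ODBC, 2 = Display
Set tSC = tStatement.%Prepare("select Name, DOB from Sample.Person where DOB = ?")
If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC)
Set tResult = tStatement.%Execute("1950-11-27")
While tResult.%Next() { Write tResult.%Get("Name"),": ",tResult.%Get("DOB"),! }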

If you want to find records from a moving window of 30 days

The previous query brought up, on my system, the person "Jafari, Zeke K.", born on 1950-11-27. The following query will bring up all people who were born on '1950-11-27' or in the 30 days before it. I will use the DATEADD() function to compute this window, and the ODBC Runtime Select Mode to run the query:

select Name, DOB from Sample.Person where DOB between DATEADD(dd,-30,'1950-11-27') and '1950-11-27'

Two people appear on my system: Jafari and Quixote. Quixote was born on '1950-11-04', which is inside the window.

Moving window with current_date

You can use current_date to write queries such as "who was born between today and 365 days ago?":

select Name, DOB from Sample.Person where DOB between DATEADD(dd,-365,current_date) and current_date

Using greater than or less than

You can also use >, >=, < or <= with dates, like this:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-365,current_date) 

Just be careful with the Runtime Select Mode. The following works in ODBC mode, but won't work in Display or Logical mode:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-30,'1950-11-27') and DOB<='1950-11-27'

To make this work in Logical mode, you would have to apply TO_DATE() to the dates first:

select Name, DOB from Sample.Person where DOB >= DATEADD(dd,-30,TO_DATE('1950-11-27','YYYY-MM-DD')) and DOB<=TO_DATE('1950-11-27','YYYY-MM-DD')

To make it work in Display mode, format the dates according to your NLS configuration. Mine would be 'DD/MM/YYYY', because I am using a Spanish locale.