Steve Pisani · Jun 6, 2017

Hi,

Is it possible to get a recording or slide deck from this webinar?

Thanks  - Steve

Steve Pisani · Jun 9, 2017

...and I must add: 'No', it is not used to establish a connection without authentication. For that, the connection method would need to be specifically configured to allow unauthenticated access, and that needs to be done from within Cache.

Steve

Steve Pisani · Jul 12, 2017

Hi Carey,

As you probably realised, the intention of the business rule (and rule sets within a rules class) is typically to have one set of rules that is applied to a message, but for which you could have multiple versions based on an effective date/time period.  That is: if the date is X, run rule set #1, but if the date is Y, run rule set #2.

If date ranges overlap, then whichever is the first valid date range, in the order in which the rule sets are arranged, defines which set of rules is executed. This is important to remember for later...

You can create 'Rule Set #2' ahead of time, making sure that the effectiveBegin of rule set #2 is immediately after the effectiveEnd of rule set #1 (as you indicated you wanted).

However, the second rule set would essentially need to be a copy of the first set - except for the individual rule item(s) whose behaviour you want to change.

Admittedly, cloning a whole rule set to another copy is labor intensive via the UI, but it is extremely easy if you open the generated class in Studio - you can just copy/paste the XML elements between the <ruleSet> tags, save and re-compile. After you have your second set, make your edits to rule set #2's effective date range and whatever rule changes are needed.
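For orientation, the generated XData in the rule class is shaped roughly like this (the context, names and dates below are placeholders, not your actual definition):

XData RuleDefinition [ XMLNamespace = "http://www.intersystems.com/rule" ]
{
<ruleDefinition alias="" context="MyPkg.MyRoutingContext">
  <ruleSet name="Set 1" effectiveBegin="" effectiveEnd="2017-12-31T23:59:59">
    <!-- copy everything between these ruleSet tags ... -->
  </ruleSet>
  <ruleSet name="Set 2" effectiveBegin="2018-01-01T00:00:00" effectiveEnd="">
    <!-- ... paste it here, then edit the items you want changed -->
  </ruleSet>
</ruleDefinition>
}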

The Delegate action sends the message to another rule definition altogether, and the Send action sends the message to any other business component (which could itself be a routing rule). Using either of these actions, whatever the condition, would require you to build a new rule definition or component to act as the target - which you said you did not want to do - so they are out.

Now - what you *can* do is write a user-defined function to retrieve a rule definition's second rule set date range, and use the value from this function in a conditional statement that drives the behaviour of any of your first rule set's rule items (see the sketch below). You must ensure, however, that Rule Set #1 is always the only one that gets executed and that the system never falls through to running rule set #2 - so even though you put in a date range for rule set #2, leave the date ranges for rule set #1 blank or wide open, so it will be the first and only one that qualifies for execution every time.
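A rough sketch of the user-defined function idea. Note this is a simplification I have not tested against your rules: rather than reading rule set #2's dates out of the class, it simply compares against a cut-over date you store yourself, and the class and global names are hypothetical:

Class MyPkg.RuleFunctions Extends Ens.Rule.FunctionSet
{

/// Returns 1 once the cut-over date (stored as a +$HOROLOG day count
/// in the hypothetical global ^MyRuleCutover) has been reached.
ClassMethod PastCutover() As %Boolean
{
    Set tCutover = $Get(^MyRuleCutover)
    Quit:(tCutover="") 0
    Quit (+$Horolog >= tCutover)
}

}

Once compiled, PastCutover() becomes available in the rule expression editor and can be used in the condition of any rule item in rule set #1.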

Warning: This is not standard use or best practice, and I'm not recommending you do this - as it will effectively negate the rule-sets-with-effective-periods feature, which you may want to use in the future. I would stick with the approach of cloning Rule Set #1 into Rule Set #2, setting an appropriate effectiveBegin/effectiveEnd ahead of time, and making the individual changes you need to take effect in the future.

Steve

Steve Pisani · Aug 17, 2017

Hi,

The code below returns the maximum number of connections a user can make while consuming one license unit.

Write ##class(%SYSTEM.License).MaxConnections()

It is a global setting, and it is not related to the License Units Authorized value. That value is how many license units (not connections per unit) you can take out.

sincerely,

Steve

Steve Pisani · Aug 17, 2017

John is absolutely right.

You cannot set this value.  In fact, I should have said in my earlier post that the advertised (documented) maximum number of connections per License Unit is and always has been only 12.

The fact that it is 25 is somewhat common knowledge and is a sort of grace limit; however, I always tell my customers to work on the assumption that InterSystems could eventually enforce the advertised number of 12, so that if this happens it does not come as a shock to anyone.

12 connections from a user interacting with Cache, coming in from the same address, should be ample.

Steve

Steve Pisani · Sep 18, 2017

Hi Ahncel,

To help: if it is not already enabled, enable security auditing via the System Management Portal (System Administration > Security > Auditing), then look at the audit log (View Audit Database) after reproducing the error.

It will shed some more light on the user, service, or whatever else may be causing this issue.

Steve

Steve Pisani · Sep 20, 2017

Soufiane,

Following on from Daniel's instructions: once you have created a class containing the suggested code to invoke your service, add it as a task in Task Manager (a sketch of such a task class follows). You can ask Task Manager to invoke it every hour.
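Something along these lines - the class name and what OnTask() calls are placeholders for your own code:

Class MyPkg.HourlyInvoke Extends %SYS.Task.Definition
{

Parameter TaskName = "Invoke my service hourly";

Method OnTask() As %Status
{
    // Placeholder: call whatever class method invokes your service
    Quit ##class(MyPkg.ServiceCaller).Run()
}

}

Then create a new task in the portal (System Operation > Task Manager > New Task), select this task type, and schedule it to run every hour.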

Sincerely,

Steve 

Steve Pisani · Oct 3, 2017

Hi Stephen,

I would avoid using the system database CACHESYS to hold your application globals.

You should create another database that  is specifically intended to hold the globals which would be common to all the productions you are running.

To enable the globals residing in this one common database to be visible from your various production namespaces, you would "map" them using Global Mapping rules defined against your production namespaces.

See the System Management Portal's Namespace configuration screens, and the Global Mappings feature, or look here: 

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_config_namespace_addmap_global  
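If you would rather script the mappings than click through the portal, something along these lines should work from the %SYS namespace (the namespace, global and database names here are placeholders):

// run in the %SYS namespace
Set tProps("Database") = "COMMONDATA"
Set tSC = ##class(Config.MapGlobals).Create("MYPRODNS", "MyAppGlobal", .tProps)
If 'tSC Do $System.Status.DisplayError(tSC)

Repeat the Create() call for each production namespace that needs to see the global.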

Sincerely,

Steve

Steve Pisani · Oct 4, 2017

Hi Randall.

The early implementation of JSON support (which exposed system methods with a '$' prefix, like '$toJSON()', etc.) - the syntax you are using - was deprecated after the 2016.1 releases.

Unfortunately the articles you are referencing were posted at the time the old syntax was supported, hence the <SYNTAX> error.

This post in the community explains the rationale for this change in more  detail.

This link is the online equivalent of the JSON documentation available for 2017.1, and covers all the new syntax you should be using. In there you will find the correct methods to instantiate a dynamic object and how to work with it.
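For a quick flavour of the newer syntax (just an illustration, not specific to your code):

set obj = {}                              // create a dynamic object
set obj.name = "Randall", obj.count = 3
write obj.%ToJSON(),!                     // serialise: {"name":"Randall","count":3}
set obj2 = {}.%FromJSON("{""a"":1}")      // parse a JSON string
write obj2.a                              // 1

%ToJSON() and %FromJSON() replace the older $toJSON()/$fromJSON() methods.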

Sincerely,

Steve

Steve Pisani · Oct 22, 2017

I do not always have a sensor reading on the last day of a month (Jan 31st, Feb 28th, etc).

So similar to the issue you ran into, I get blank results when there is no reading on the last day. Of course, I want to get what essentially would be the last reading, by looking for the latest reading in that month.

Close, but still not a solution... :(

Steve Pisani · Oct 25, 2017

Err... no, I don't want to use averages. The multiple readings on the same last day can be different, and I want the last number.

Steve Pisani · Oct 25, 2017

So it would seem that I need to differentiate, in the DeepSee index tables, which of the two readings on the last day of the month is actually the last one.

I do not have a time component in the data. :)

The data is being loaded from a flat file into the persistent Cache table, which has just Sensor/date/value (prior to the DeepSee indices being built). So the sequence is important, and the row ID in the persistent Cache class becomes the differentiator - and potentially something I can use.

In the absence of a time component, I thought I should create a dimension that is calculated at index build time based on source.%Id(), or a combination of Date_source.%Id(). The ID is an integer that is incremented with every new row in the source table.

I feel certain I can use the source ID to differentiate which is the last reading for the day. I'll be trying an approach along the lines of using the raw ID for help (rough sketch below). Any other ideas welcome...
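For what it's worth, the kind of level I have in mind would look roughly like this inside the cube's XData (untested, and the dimension/level names are placeholders):

<dimension name="ReadingSeqD" hasAll="false">
  <hierarchy name="H1">
    <level name="ReadingSeq" sourceExpression="%source.%Id()" />
  </hierarchy>
</dimension>

At build time %source refers to the source class instance, so the level member key becomes the row ID, preserving load order within a day.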

Steve Pisani · Jan 30, 2018

Hi

I've not validated this suggestion by trying it myself yet, so it's just an idea: I do know that when defining custom settings for an adapter, you are given an OnValidate-style callback to validate user input.

You could put security checks (check whether the user holds a custom resource) in that method.

You may need to do a simple subclass of your adapter to implement this correctly, though.

Steve 

Steve Pisani · Jan 30, 2018

Hi,

I had a closer look, and it is not actually an "OnValidate" method per se that exists for this - but the feature does exist.

You need to create a <setting>IsValid() classmethod, something like this:

Class qmd.CustomAdapter Extends EnsLib.TCP.InboundAdapter [ ProcedureBlock ]
{

Parameter SETTINGS = "MyCustomSetting";

Property MyCustomSetting As %Integer;

ClassMethod MyCustomSettingIsValid(%val) As %Status
{
    If %val'?1.n Quit $$$ERROR(5001,"My Custom Setting should be an integer.")
    Quit $$$OK
}

}

Hope this helps.

Steve

Steve Pisani · Feb 5, 2018

Hi Arun,

If you want a different tooltip (aka the 'title' property) for each item in your dropdown box, then my guess is that you need to trap the onchange event of the datacombo box. That is, when the user selects a new item from your dropdown box and it fires off an event, add code to that event (onchange?) which redefines the title property of that datacombo component based on the selection made.

Of course, somewhere you need to supply the page with a list of title texts to be applied for each option in the datacombo box.

Steve

Steve Pisani · Mar 1, 2018

So I guess that is the question...

Is it possible to identify a point in time at which the async mirror member is logically consistent?

I could shut down the async member before backing it up, then restart it and have it re-join and catch up with the mirror set. I'm seeking confirmation that this process will leave me with CACHE.DAT files that are in a consistent state and which, if restored, could be used like any other backup and accept play-forward journal files.

Would this also apply if I performed an online Cache backup on the async member without shutting it down, or is the mirror de-journaling activity on the async member ignorant of the final passes of the Cache backup process?

It is impossible to determine point-in-time logical consistency for the async member, which is receiving mirror data from 5 busy Ensemble productions. I'm hoping that shutting it down, or taking it 'off-line' in some controlled manner, would leave it in a state that could be backed up for later use.

Steve

Steve Pisani · Mar 5, 2018

Thanks Ray.

I'm still hoping this site will improve the underlying network storage such that snapshots can be taken as  backups, but at least now we are aware of the options/pitfalls with attempting to use an Async member for backups.   

The site should be in a better position to make an informed decision.

Thanks - 

Steve 

Steve Pisani · Mar 28, 2018

Thanks Anzelem.

It's good to hear that backups off the DR member are working for your situation.

I'm concerned that the online backups proved problematic for you (and by online backups, I'm assuming you refer to the ExternalFreeze()/ExternalThaw() process). Be sure that, after ExternalFreeze() is called, you copy not only the folders where the databases exist, but also the folders holding the journal files, the WIJ file and your application files.

A restore procedure would require all of these to be restored (including the WIJ and journal files) to ensure that when Cache starts up after a restore, the databases are in an integral and consistent state as of the time of the backup.

I know of lots of people successfully using the ExternalFreeze/Thaw on production systems.
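For reference, the basic shape of a freeze/thaw backup script is below - only a sketch; the snapshot/copy step and error handling depend entirely on your environment, and these calls are usually driven from an OS-level script via csession (typically in %SYS):

// freeze Cache writes (processes keep running; updates are buffered)
set sc = ##class(Backup.General).ExternalFreeze()
if 'sc { do $System.Status.DisplayError(sc)  quit }

// ... take the snapshot / copy databases, journal files and WIJ here ...

// resume writes
set sc = ##class(Backup.General).ExternalThaw()
if 'sc do $System.Status.DisplayError(sc)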

thanks -  

Steve 

Steve Pisani · Apr 20, 2018

Hi Laura,

I would declare the class property without the underscore as you have tried, given the way the generated code (in the class ending in ...Thread1.cls) interprets this and generates code.

Having said this, I'm interested in knowing how you are going about generating the JSON string. This is probably where you need to focus your efforts and set a JSON element of "status_id".

How are you generating your JSON, and what version of Ensemble are you on? Answers to these questions will help others looking at your post to contribute a solution.

Thanks - Steve

Steve Pisani · Jun 6, 2018

Thanks - I'm going to use COUNT(). (I was aware of the option of introducing a subclassed adapter, but want to keep my design as simple as possible.)

For those interested who might be following the thread - I thought I'd post a more detailed entry of the use case and proposed solutions, just for education purposes.
Problem: I want to group multiple rows of my query into fewer Ensemble messages that get submitted. That is, my query might return rows:

1- A
1- B
1- C
2- A
2 -B
...

I want to send only 2 Ensemble messages: the first with a list property containing 1A, 1B and 1C; and the second with the same list property containing items 2A and 2B.

I had this working by collecting the 'current' first-column value in an instance property of the service and checking for when the first-column value changes (i.e. when it goes from 1 to 2, I need to submit the first message) - but the problem was that message #2 would never get submitted, because nothing follows the last group to trigger it.

Solution #1: Use COUNT() and Business Service Instance properties.

- I'll change the query to return a COUNT() column (thanks for reminding me, Eduardo), which MUST be the number of rows of the query - independent of any state data that Ensemble may be holding on to, like recently processed IDs, etc.
- I'll add a new property on the service called CurrentROW.
- Every invocation of OnProcessInput will increment the property 'CurrentROW'.

I will use the existing logic to fire off Ensemble messages at the correct row intervals, but I will also include a check to see if pInput.Get("ROWCOUNT") = ..CurrentROW - confirming I'm on the last row - and in that case fire off the remaining Ensemble message at the end, then set ..CurrentROW back to 0 (a sketch of this follows).
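Roughly what I have in mind for the service - a simplified sketch only: the request class, target name and the list-building logic are placeholders, and ROWCOUNT is the alias of the COUNT() column in my query:

Property CurrentROW As %Integer [ InitialExpression = 0 ];

Property PendingRequest As MyPkg.GroupedRequest;  // hypothetical request message being built up

Method OnProcessInput(pInput As EnsLib.SQL.Snapshot, pOutput As %RegisteredObject) As %Status
{
    Set ..CurrentROW = ..CurrentROW + 1

    // ... existing logic: append this row to ..PendingRequest, and when the
    //     first-column value changes, send the completed request and start a new one ...

    // On the last row of the batch, send whatever is still pending and reset
    If pInput.Get("ROWCOUNT") = ..CurrentROW {
        Set tSC = ..SendRequestAsync("MyTargetProcess", ..PendingRequest)
        Set ..PendingRequest = ""
        Set ..CurrentROW = 0
        Quit tSC
    }
    Quit $$$OK
}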

Solution #2: (thanks Gilberto) - use the support of a Business Operation

- Modify the query to only return distinct rows on the first column - hence - only 2 rows, leaving my collection properties of the Ensemble Message empty,
- In my current Business process - make a Business Operation call *back* into the database, to get the 'child' rows, (in the case of ID 1, this second query will return A,B,C)
- Add these to the list properties of the BP request message, and continue normal processing.

Thanks for the ideas ...

Steve

Steve Pisani · Jul 22, 2018

In addition to Julian's comment, did you have a look at the error reported in the error log?

Details there  may shed some light on the whole situation. 

Steve Pisani · Jul 22, 2018

Hi,  at this point I would check what the value of 'st' is, as it seems the GetURL call might be failing.

Steve Pisani · Oct 5, 2018

Hi,

Sorry for not getting back to you sooner - I was on a flight.

You are on the right track. You just need to understand the makeup  of the data returned.

So - to recap - in IRIS you are preparing a Document database - a collection of JSON documents. Each document represents an 'artist', with 'name', 'playcount', 'listeners' and other  properties.

The JSON string for one such entry (artist), or document in the collection, would look something like this - it is actually embedded inside the whole JSON your HTTP request returns:

{
        "name": "Tough Love",
        "playcount": "279426",
        "listeners": "58179",
        "mbid": "d07276bc-3874-4deb-8699-35c9948be0cc",
        "url": "https://www.last.fm/music/Tough+Love",
        "streamable": "0",
        "image": [
          {
            "#text": "https://lastfm-img2.akamaized.net/i/u/34s/3fa24f60a855fdade245138dead7ec...",
            "size": "small"
          },...

}

If you extracted each artist document in the collection,  you can insert it into the database individually like this:

do db.%SaveDocument({"name":"Tough love","playcount":"279426",... })

The %SaveDocument method takes a single JSON dynamic object and inserts it into the collection. The whole JSON blob goes into the document column of the projected table, and elements like 'name' that were specifically created as columns via %CreateProperty will also be individually populated as column values.
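(For anyone else following along, the database and properties referred to above would have been created with something like the following - the database name and path expressions are illustrative only:)

set db = ##class(%DocDB.Database).%CreateDatabase("LastFM.Artists")
do db.%CreateProperty("name", "%String(MAXLEN=220)", "$.name")
do db.%CreateProperty("playcount", "%Integer", "$.playcount")
do db.%CreateProperty("listeners", "%Integer", "$.listeners")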

But - as mentioned earlier - the output from your HTTP call returns JSON which, down a few levels deep, has a collection of 'artist' documents:

{
  "artists": {
    "artist": [           <-- This is where the collection of artists starts from
      {
        "name": "Tough Love",
        "playcount": "279426",

Here are two approaches:

1. Option #1 - iterate through the correct part of your returned data to extract each artist individually. (I prefer option #2 below.)

Using the whole returned JSON, access the element 'artists' and then its property 'artist'. This element (represented by the path "artists.artist" - poorly named, imho) is actually the collection. Use an iterator to step through each item in the collection.

set wholeJSON={}.%FromJSON(httprequest.HttpResponse.Data)
set iArtist=wholeJSON.artists.artist.%GetIterator()  // iArtist is the iterator for the collection of artist JSON's
while iArtist.%GetNext(.key,.value) {
    // key is the item number in the collection
    // value is a dynamic object of the item in this collection
    do db.%SaveDocument(value)  //  insert 1 artist at a time.
}

2. As you have discovered, you can use db.%FromJSON to import a whole collection of documents in one hit, but what you supply should be a string or stream in JSON format representing an array of documents - which the raw HttpResponse.Data is not, because of the enclosing 'artists' elements. You can, however, dive in and get the array:

set wholeJSON={}.%FromJSON(httprequest.HttpResponse.Data)
set arrArtists=wholeJSON.artists.artist   // this is the collection of artists
do db.%FromJSON(arrArtists.%ToJSON())  // need to give %FromJSON a json string.

...and in one gulp, ALL artist documents - i.e. all items in the collection - are added into the document database (I tried this, and can confirm 1000 rows were created).

Use option 1 if you want to filter the injection of data into your document database, or option 2 if you want to do a batch upload in one hit.

Let us know how you get on...

Steve

Steve Pisani · Oct 7, 2018

Don't see why not...

You've got to ask yourself: do you want to hit that website (which returns the full set) every 5 seconds? Probably not. I would hit it every hour and spend the time between hits going through and updating the documents in the document database.

It's your choice whether to pause operations, delete all documents and upload all documents every n seconds as a whole - that would be an easy approach. I think, however, that you can get clever: identify an element that can act as a key for you, use it to extract individual documents and update them with changes, then insert the new ones. Keeping track of rows inserted and updated with each cycle via some 'last updated' property will also allow you to purge any rows which have been deleted and should no longer appear in your collection.

The above seems like a good approach; your use case may dictate a slightly different one. I'm not sure if there is a technical question here. Technically, you will call the web site for the batch content in the same way, and, given the properties you already set up via %CreateProperty, you can run an SQL query to extract an individual document for updating/deleting - for example, as sketched below.
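A sketch of that kind of lookup (the table name comes from the earlier examples and will differ in your setup):

set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("SELECT %ID As id, name, playcount FROM LastFM.Artists WHERE name = ?")
set rs = stmt.%Execute("Tough Love")
while rs.%Next() {
    write rs.%Get("name"), " -> row ID ", rs.%Get("id"), !
}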

Steve

Steve Pisani · Oct 18, 2018

Hi

I believe this is a work in progress in the product - but I know of no ETA, so at the moment, everyone builds their own synchronisation techniques as other comments here explain.

* Disclaimer - this is not necessarily the 'Answer' you were looking for, but merely a comment you might find useful *

In the past I created a framework for doing this and more. I'll describe it here:

Using a pre-defined base class, one would create any number of subclasses, one for each type of data you wanted to synchronise (for example, a subclass for mirroring security settings), and in these subclasses implement 2 methods only:

- The first method, 'export', deals with collecting a set of data from wherever and saving it as properties of the class (in this example the method exports all security settings and reads the XML export back into a global character stream for persistence within the DB). These are persistent subclasses.

- The second method, 'import', is the opposite side: it unpacks the recently collected data and syncs it (in this example, it writes the global character stream of the instance data to a temporary 'security.xml' file and runs the system APIs to import those settings). A skeleton sketch of such a subclass follows.
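The skeleton below is illustrative only - the base class is hypothetical, and you should check the Security.System ExportAll()/ImportAll() signatures on your version before relying on them:

Class MyPkg.Sync.Security Extends MyPkg.Sync.Base
{

/// Runs on the primary: export security settings to a temp file, then load them into the stream property
Method Export() As %Status
{
    New $Namespace  Set $Namespace = "%SYS"
    Set tFile = ##class(%File).TempFilename("xml")
    Set tSC = ##class(Security.System).ExportAll(tFile)
    Quit:$$$ISERR(tSC) tSC
    // ..Payload is a %Stream.GlobalCharacter property on the (persistent, mirrored) base class
    Do ..Payload.Clear()
    Set tIn = ##class(%Stream.FileCharacter).%New()
    Do tIn.LinkToFile(tFile)
    Do ..Payload.CopyFrom(tIn)
    Quit ..%Save()
}

/// Runs on the other members: write the stream back to a file and import it
Method Import() As %Status
{
    New $Namespace  Set $Namespace = "%SYS"
    Set tFile = ##class(%File).TempFilename("xml")
    Set tOut = ##class(%Stream.FileCharacter).%New()
    Do tOut.LinkToFile(tFile)
    Do tOut.CopyFrom(..Payload)
    Do tOut.%Save()
    Quit ##class(Security.System).ImportAll(tFile)
}

}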

The persistent data created by the classes during the 'export' method call is saved to a mirrored database, so it automatically becomes available on the other nodes for the 'import' invocation.

A frequently running scheduled task, executing on every mirror member (primary, secondary or async member), would iterate through the known subclasses and, based on that server's role, invoke either the 'export' or the 'import' method of each subclass (of course, the primary member calls the 'export' method only; the other roles call the 'import' method).

There are various checks and balances - for example, to ensure that only the latest instance data is imported on the import side in case some were skipped for some reason, and that no import executes mid-way, i.e. it waits until an export has been flagged as complete.

I wrote this as a framework because I felt there is other data - not just the security data in CACHESYS - that would need replicating between members.

I have done a fair amount of testing on the tools, and completed them around the time I heard InterSystems was working on a native solution, so I have not persisted further with documenting/completing them. I wrote this for someone else, who ended up building a more hard-coded, straightforward approach, so it is not actually in production anywhere.

Steve

Steve Pisani · Oct 23, 2018

That's great !

Does it have to be a GitHub repo, or can I use BitBucket?

Also - if we find an error (e.g. WebTerminal on IRIS), can we leave a comment generally or for the developer?

Steve

Steve Pisani · Nov 6, 2018

Hi Chris,

I agree - but note that the Scheduler basically STARTS or STOPS a business host automatically on a pre-defined schedule (so it applies to Operations and Processes too, not just Services, which are the only hosts that have a CallInterval feature).

For regular invocations of work on services, in almost all cases, absolutely - CallInterval is the way to go, and is what is used mostly. I certainly would prefer to look at the production and the status of my business hosts and see all of them 'green' and running - even though running might actually mean 'idle in between call intervals'. Using the Scheduler, a stopped business host will appear 'gray' when it is not started (i.e. it is disabled).

There are valid use cases, though, where a schedule on, say, a business operation makes sense. For example, you may want to send messages to a business operation that interacts with a fee-per-transaction endpoint that is cheaper during certain times of the day. In this case, you can disable the operation and send it messages all day (which accumulate in its queue), then, at the appropriate time, enable the business operation via the Scheduler, and disable it again after a period of your choice.

In this thread's case, the easiest approach is to use OnInit to prepare and send the data, while OnProcessInput (called by the interval, which can be very long) does nothing but quit. That would work (see the sketch below). Of course there are other approaches.
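A minimal sketch of that shape - the class, request and target names are placeholders, and if OnInit turns out to be too early in your environment, the same send can be deferred to the first OnProcessInput call instead:

Class MyPkg.StartupService Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

Method OnInit() As %Status
{
    // Prepare and send the data once, when the host starts
    Set tReq = ##class(Ens.StringRequest).%New()
    Set tReq.StringValue = "startup payload"
    Quit ..SendRequestAsync("MyTargetOperation", tReq)
}

Method OnProcessInput(pInput As %RegisteredObject, pOutput As %RegisteredObject) As %Status
{
    // Nothing to do on the (long) CallInterval
    Quit $$$OK
}

}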

I wanted to include the Scheduler information as it is often overlooked, and sometimes the full story and use case of the original poster is not evident, and the Scheduler might have been appropriate.

Thanks for your feedback.

Steve