... and more to the point, where/how would this be accomplished?
Hi -
What I'm trying to do is have a single specification for the dashboard (Cube, Pivot, KPIs, etc.) that will run against the same data, but live in different applications. (This really was a path I started down to enable independent dashboard branding for different applications.)
I found a "better workaround" was to have multiple CSP applications that point to the same namespace but with different CSP paths. This allows my dashboard to specify a "generic" logo image file name, which resolves to a different actual image file depending upon the URL used to access the dashboard.
Thanks Derek - Good feedback.
Hi -
This doesn't seem to work. When I find the CSP files using the Server Explorer (<install root>/csp/application/file.csp) and right-click "Add to Project", the dialog says:

and I end up with nothing new in my project.
Hi -
I'm on Atelier build 1.0.144 against Ensemble/Caché 2016.2.0.636 on Windows (this may be a bug).
Hi -
My original thinking/concern is the notion that "Service Provider's Client 'A'" has one set of users and "Service Provider's Client 'B'" has a different set, and neither A's nor B's users should be forced to have login names that are unique across the Service Provider's total collection of login accounts. In other words, there should be able to be a login for Client A's Bob and Client B's Bob: both could be "bob" at the screen, but "A-bob" and "B-bob" from the perspective of the SaaS application being used by both Client A and Client B. If there are separate installations of the application for Client A and Client B, each with its own web-application user accounts, then deploying an upgrade becomes more complicated: each installation would need to be deployed to separately, instead of a single update being reflected across all instances were the application deployed as a single "production instance" (i.e. "upgrading multiple installations of the same application" vs. "upgrading a single installation used by many discrete collections of users").
I'm trying to come up with a "best practice" approach (i.e. the Pros/Cons of multiple simple deployments vs. a single, more complex deployment). As more and more companies look to provide SaaS solutions, there will be more and more multi-tenant situations to plan for, with proper justifications for "lots of simple" vs. "single complex", and the system-level user identification challenge is just one aspect of that problem space.
Hi Joyce -
That SOUNDS good, but I don't have a "Preferences" menu anyplace (this is running on Windoze)... and there doesn't seem to be anything like what you're describing (at least not that I have found yet).
Getting closer...
OK, so I've created my "new template", but I do not see ANY place where ANY of these templates are callable (my new one or any of the shipped ones). There is nothing I can see in any of the "help" files that indicates where or how these templates are called in any context.
I'm clearly missing a step or connection someplace...
How do I actually USE a new template? (my new thing isn't showing up on the "Templates" list, even after cycling Atelier)
Hi -
I see my problem: I was being over-sensitive to the feedback in the wizard ("Package Contains Invalid Dot Syntax"), and had I just continued with the rest of the "sub-package" naming, everything would have worked fine.

Thanks (I didn't think of that)
Using instance.property syntax is sometimes more appropriate for logic readability than expressing and executing a query.
This of course runs smack up against what is being required of system admins by security departments, compared to ACTUAL security. Never confuse useful with required.
Knowing how to force the "less secure, but mandated" patterns is really what my question was about.
This may well have been the issue: "File full" tripped journaling off completely, and the "undo" for the subsequent failed save was then unable to roll back. This makes sense. Thanks.
My problem is that my table (which *is* filled via SQL) can have multiple pages of "displayed sub-sets" (i.e. the Page Size of the table, controlled by the Table Navigator), and when I launch the page directly with an instance ID passed as a URL value, I can load the form, but I don't see any way to figure out which page and which row to "jump to" and "select" programmatically. (My use case is relatively simple, as I don't have any client-side filters or sorting; it's just a "result set being displayed".)
Success!!!
I created a callable ZenMethod function that allows me to pass in the table component ID, the SQL Table name and the Row ID, that will jump the table to the correct page and row to match the ID passed in.
Method jumpTable(componentID As %String, tableName As %String, id As %String, pageSize As %Integer) [ ZenMethod ]
{
	set SQLtxt = "SELECT *, CEILING(%vid/"_pageSize_") PageNum, {fn MOD(%vid,"_pageSize_")} RelRowId FROM (SELECT ID FROM "_tableName_") WHERE ID = "_id
	set sql1 = ##class(%ResultSet).%New()
	do sql1.Prepare(SQLtxt)
	do sql1.Execute()
	set (page, row) = 1
	while sql1.Next() {
		set page = sql1.Get("PageNum")
		set row = sql1.Get("RelRowId")
		// MOD yields 0 for the last row of a page; map that back to pageSize
		if row = 0 { set row = pageSize }
	}
	set Table = %page.%GetComponentById(componentID)
	set Table.selectedIndex = (row - 1)
	set Table.currPage = page
}
This can then be used for any table on any Zen page.
Thanks
In answer to the "Why?" questions...
Assume you have a generic "Person" record, and now you want to treat this "Person" as a "Doctor" record, where Doctor is an extension of Person. When the Person record was created, it was not known that the represented person was going to become a doctor at some point in the future. The Person record was created, properties were set to values, and it was saved and referenced (i.e. "used" as a Person record). Now, at some later point, there is a need to make a "Doctor" record out of this "Person" record. Since anything you could do with/to a Person record can be done with a Doctor record, and Doctor only adds functionality (and possibly overriding methods), creating a "new" Doctor record and then replicating the data values will not update any existing references to the "Person" record (and those references may not even be identifiable from the Person record to begin with). Morphing that instance of a Person into an instance of a Doctor, however, preserves all of the references that might exist while enabling the new properties and methods of a Doctor.
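In storage terms, the "morph" amounts to rewriting the class name recorded alongside the stored data. A minimal sketch, assuming hypothetical classes App.Person and App.Doctor with default storage, where the %%CLASSNAME value lives in the first $list slot of the data node (the slot position and exact value format vary by version and by your class's storage definition, so verify against an existing Doctor row in the global before trying anything like this):

```objectscript
/// HYPOTHETICAL sketch -- not production code.
/// Rewrites the stored class name so an existing Person row is
/// treated as a Doctor the next time it is opened. Existing
/// references (which point at the ID, not the class) stay valid.
ClassMethod MorphPersonToDoctor(id As %String) As %Status
{
	// Assumption: default storage keeps %%CLASSNAME in $list slot 1
	// of ^App.PersonD(id); the exact stored format ("~App.Doctor~"
	// vs. a bare class name) is version-dependent -- check first.
	set node = $get(^App.PersonD(id))
	if node = "" quit $$$ERROR($$$GeneralError, "No such Person: "_id)
	set $list(node, 1) = "~App.Doctor~"
	set ^App.PersonD(id) = node
	// Rebuild indices so any Doctor-specific indices pick up the row
	quit ##class(App.Doctor).%BuildIndices()
}
```

Because this happens underneath the object layer, make sure no process has the old Person instance open in memory when the morph runs.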
Hi Scott -
A little context might help us help you ;)
Are you talking about operational metrics? (i.e. How much work is my system(s) doing?, How fast are my journals growing? etc.) or are you talking about "What's Stakeholder X's traffic today and of what type?"
There are SO MANY things available that it becomes a question of what the purpose of the measurements is, so we can figure out where to start getting meaningful values for your business need.
Hi Scott -
Have you looked at the Report Management framework under the Registry Management
(System Management Portal -> HealthShare -> Registry Management)?
Both the "Management Report" definitions, and the "Patient Report" definitions are a good starting point for some things. (Take a look at /csp/docbook/DocBook.UI.Page.cls?KEY=HERPT_ch_management_creating#HERPT_C273872 in the docs of your HealthShare installation)
"Response times" can be a bit tricky, since this is a measurement that would be not something that happens at a single point in the system (i.e. multiple Access Gateways, each having their own "start/stop" events, but only their own)
Just out of curiosity, in which part of HealthShare are you running these queries? Are you looking on the Access Gateway, Edge Gateway, or Registry?
They all use the same message structure, but in different parts of the "request/response" cycle. The "facing the outside world" context is the Access Gateway; the rest are more internal to Information Exchange, propagating the request and motivating the various parts to gather the response. So it's quite likely that (as Justin mentioned) there can be replications of the message content into "new message" records that are NOT actual "new requests".
When you look directly at the SQL table outside of the context of the complete message trace (i.e. a "Session") it should be expected that you will see what appears to be exactly the same content in multiple messages in the table.
When you are using Text Categorization, you need to have a piece of metadata that is used to group the text into different categories: "Gender", "Month", "Diagnosis Code", etc. Each record then has to have one of these values associated with it, so the learning process can determine which concepts/terms go with which category.
You will get the "category 1 covers the whole data" error if you don't have a metadata field either defined or correctly populated. Without some level of variability in the "category" metadata field, the machine learning doesn't have a reference point to sort out your text records.
Make sure both that you HAVE a metadata element defined, and that there are differing values within the record set that you are using to create the categorizations.
Hi Scott -
Part of this depends on "how" (from what context) you created your "export".
Assuming that you mean that you created an "export" from the System Management Portal:
Then review the documentation that can be found:
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...
which will talk about the "Deployment" process for an "exported production"
Assuming that you are talking about an "export" from Studio, the specifics of how a Business Operation is configured will be in the class definition of the Production class, along with the actual configured Business Service/Operation class definitions (if they aren't standard InterSystems-provided classes, i.e. if you have created your own FTP Operation class as opposed to using the FTP Operation class InterSystems ships with the product).
In this case you are looking at a more manual code-promotion process, which involves importing from within Studio and recompiling things. This methodology can work, but it has a lot more moving parts (and therefore more "gotchas" to look out for).
Hello -
There are multiple means of rendering a table in a "grid", from a simple HTML table populated with an SQL query to the Zen "Grid" object.
For the HTML table, the following code would give you a table from the SQL query:
<script language=SQL name="query">
SELECT * FROM User.DataTable
</script>
<table border=1 bgcolor="">
<tr>
<csp:while counter=queryCol condition="(queryCol<=query.GetColumnCount())">
<th align=left><b>#(query.GetColumnHeader(queryCol))#</b></th>
</csp:while>
</tr>
<csp:while counter=queryRow condition=query.Next()>
<tr class='#($S(queryRow#2:"DarkRow",1:"LightRow"))#'>
<csp:while counter=queryCol condition="(queryCol<=query.GetColumnCount())">
<td>#(query.GetData(queryCol))#</td>
</csp:while>
</tr>
</csp:while>
</table>
For the ZEN example, take a look at your "local" sample page: <localhost>/csp/samples/ZENTest.DynaGridTest.cls
I found that creating a "Data Container" by extending the %DeepSee.DataConnector class allowed me to make an SQL based "source" where I could then create the dynamic filtering I wanted within the SQL of the container, and the balance of the IRIS Business Intelligence machinery would work just fine.
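For anyone heading down the same path, here is a minimal sketch of the shape such a connector takes (the class, table, and property names are hypothetical; see the %DeepSee.DataConnector class documentation for the full set of options):

```objectscript
/// HYPOTHETICAL sketch of an SQL-backed DeepSee data connector.
Class App.MyConnector Extends %DeepSee.DataConnector
{

/// The SQL that feeds the cube; this is where dynamic filtering
/// logic can be built directly into the source.
XData SourceQuery [ XMLNamespace = "http://www.intersystems.com/deepsee/connector/query" ]
{
<sql>SELECT ID, Name, Region FROM App.SalesRecord</sql>
}

/// Maps the query's columns to the properties the cube will see.
XData Output [ XMLNamespace = "http://www.intersystems.com/deepsee/connector/output" ]
{
<connector>
<property name="%ID" sourceProperty="ID" displayName="Record ID" idKey="true"/>
<property name="Name" sourceProperty="Name" displayName="Name"/>
<property name="Region" sourceProperty="Region" displayName="Region"/>
</connector>
}

}
```

The connector class is then named as the sourceClass of the cube definition in place of a plain persistent class, and the rest of the Business Intelligence machinery builds from it as usual.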
The account I'm using has %All (and doesn't have any problems with regular .cls class files) so I'm not sure that this is my issue.
Thanks, this looks like it will function for my needs.
Some of the links in this article are now 404 (file not found) since the website update. Where did these documents end up?
Jobbing had no effect. :(
The #server() call seems to be doing what I expect: it returns a value that I display with alert(), and I'm getting that value back, so it's not crashing in there, and I'm not seeing anything in any logs :-/