Rich Taylor · Dec 14, 2015 go to post

Found the solution.  The following MDX gives the values that I want.

SELECT {MEASURES.[Avg Test Score],
%LIST(NONEMPTYCROSSJOIN([BirthD].[H1].[Decade].Members,{[Measures].[Avg Test Score]}))} ON 0,
NON EMPTY homed.city.MEMBERS ON 1
FROM patients

I think the key was to enclose the measure in the NONEMPTYCROSSJOIN function in curly braces, so it is treated as a set. I had not done this in my previous attempts at getting this to work.

Rich Taylor · Feb 5, 2016 go to post

The work queues do actually sound pretty close to what is being looked for. The question would be whether the worker jobs have access to the in-memory objects of the process that initiated them.

Are there any practical examples of using this?

Rich Taylor · Feb 5, 2016 go to post

One thing I am unclear on is whether the worker jobs have access to the in-memory objects of the process initiating the workers. I have a potential use case for this that I am investigating.

Are there any practical examples of using this?

Rich Taylor · Feb 8, 2016 go to post

Timur,

Thanks for the feedback. I have used process-private globals in the past. Unfortunately, neither those nor CACHETEMP will work here, as the saved data needs to survive at least an application failure. If the customer can accept losing the data on a full system failure (Cache or the server itself completely shutting down), then CACHETEMP may work. They would have to make changes to the application, however, as some of the objects involved are non-persistent.
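For anyone unfamiliar with them, here is a minimal sketch of a process-private global (the name ^||scratch is made up); the behavior shown is exactly why it cannot be used when data must survive a failure:

```objectscript
 // Process-private globals are prefixed with ^|| and are visible
 // only to the current process. They are never journaled and are
 // deleted automatically when the process halts.
 set ^||scratch("order",1) = "pending"
 write ^||scratch("order",1),!   ; prints: pending
 // Once this process ends (or crashes), ^||scratch is gone -
 // so it cannot hold state that must survive an application failure.
```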

Rich

Rich Taylor · Feb 17, 2016 go to post

No response yet. I am really in need of a real-world LDAP schema that is more complex (has been customized) than the provided schema. If anyone has the ability to share something like this it would be greatly appreciated.

Note that this is for a Global Summit presentation. I want to use an actual use case rather than attempt to invent something that would look, and be, contrived.

Thanks in advance,

Rich

Rich Taylor · Feb 24, 2016 go to post

Two suggestions:

  1. Try returning a different error message, one that might indicate an authentication error, such as $SYSTEM.Status.Error($$$UserInvalidPassword). I have not tried it myself, but this may work. However, as someone else said, this is really not the intended purpose of delegated authentication.
  2. I am not sure about the nature of your additional checks, but perhaps you can use the LOGIN tag in %ZSTART.

These are just suggestions as I have not had the chance to try either yet.

Rich Taylor · Apr 1, 2016 go to post

I think this would be useful to show in the project explorer too or at least in the properties of the project.

Rich Taylor · Apr 17, 2016 go to post

Status codes still have a place alongside Try/Catch, in my opinion. They really only serve to indicate the ending state of the method called, which is not necessarily an error. I agree that throwing an exception for truly fatal errors is the best and most efficient error-handling method. The issue is, what does "truly fatal" mean? There can be a lot of grey area to contend with. There are methods where the calling program needs to determine the correct response. For example, take a method that calculates commission on a sale. A failure there is clearly a serious problem on a sales order. However, it is less of an issue on a quotation; in the latter case the process may simply want to return an empty commissions structure.

Placement of try/catch blocks is a separate conversation. Personally I find using try/catch blocks for error handling to be clean and efficient. The programs are easier to read, and any recovery can be consolidated in one place, either in or right after the catch. I have found that any performance cost is unnoticeable in a typical transactional process. It surely beats adding IF statements to handle the flow. For readability and maintainability I also dislike QUITing from a method or program in multiple places.

So where is the "right" place for a try/catch? If I had to pick one general rule, I would say you should put the try/catch anyplace where a meaningful recovery from the error/exception can be done, and as close to the point where the error occurred as possible. In the above example of a commission calculation method, I would not put a try/catch in the method itself, since the method cannot perform any real recovery. However, I would put one in the sales order and quotation code.
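A rough sketch of that rule, with hypothetical class and method names (PriceQuotation, CalcCommission): the calculation method just throws, and the quotation-side caller owns the recovery:

```objectscript
/// Hypothetical caller-side recovery for a quotation.
/// CalcCommission itself contains no try/catch - it simply throws on
/// failure, since it cannot perform any meaningful recovery on its own.
ClassMethod PriceQuotation(pQuote As %RegisteredObject) As %Status
{
    try {
        set tCommissions = ..CalcCommission(pQuote)
    }
    catch ex {
        // On a quotation a failed commission calculation is not fatal:
        // recover by continuing with an empty commissions structure.
        set tCommissions = ##class(%ListOfObjects).%New()
    }
    // ... continue building the quotation using tCommissions ...
    quit $$$OK
}
```

The sales-order version of the caller would instead treat the same exception as fatal to its unit of work.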

There are many ways to manage program flow in error/exception situations; Try/Catch, and Quit and Continue in loops, are a couple off the top of my head. Used appropriately, they can create code that is robust, readable, and maintainable, with little cost in performance.

Rich Taylor · Apr 18, 2016 go to post

I will leave the logging issue alone as I don't see it as being the main point of the example.  It could also be a thread by itself.

The issue of using a bunch of $$$ISERR or other error-condition checks is exactly why I like using throw with try/catch. I disagree that it should be used only for errors outside of the application's control. However, it is true that most of the time you are dealing with a fatal error; fatal, that is, to the current unit of work being performed, not necessarily to the entire process.

I will often use code like:

set RetStatus = MyObj.Method()
throw:$$$ISERR(RetStatus) ##class(%Exception.StatusException).CreateFromStatus(RetStatus)

The postconditional on the throw can take many forms; this is just one example.
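In context, that pattern typically sits inside a try block; here is a sketch, with MyObj standing in for whatever object you are calling:

```objectscript
 try {
     set RetStatus = MyObj.Method()
     // Convert a failed %Status into an exception and bail out of the try
     throw:$$$ISERR(RetStatus) ##class(%Exception.StatusException).CreateFromStatus(RetStatus)
     // ... more work here is skipped automatically if the throw fired ...
 }
 catch ex {
     // ex.AsStatus() turns the exception back into a %Status if callers need one
     write "Failed: ",$system.Status.GetErrorText(ex.AsStatus()),!
 }
```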

Where I put the Try/Catch depends on many factors such as:

  • Where do I want recovery to happen?
  • How the method and the surrounding code are used
  • Readability and maintainability of the code
  • ...

In the case of the nested loops mentioned, I think this is a great way to abort the process and return to a point, whether in this program or one farther up the stack, where the process can be cleanly recovered or aborted.

Rich Taylor · May 24, 2016 go to post

Evgeny,

Thanks for this! Unfortunately it still appears to have the same limitation: you cannot inject your own Google API key. There is a prodlog in play to possibly add the ability to define this in a configuration setting of some kind.

I did download it and I am looking at playing around with what you have done.

Rich

Rich Taylor · May 24, 2016 go to post

Evgeny,

This too looks very interesting. With REST services now built into 2016.1, that should make this easier. One question if I decide to mess around with this myself: do you really need Gulp to work with this project? I already have a boatload of tools installed.

Rich

Rich Taylor · Aug 16, 2016 go to post

The problem with giving recommendations on how to minimize this is that there are many causes and many potential resolutions, and some solutions will not be acceptable.

The simplest solution would be to create a database to hold this global, and any others you don't need journaled. Set this database to not be journaled and map the global to it. Of course, you may actually want the database journaled. In that case you have to dig in further, as others have suggested, and determine why there is so much activity hitting this global. It could be a routine that can be adjusted to generate less global activity. I had a similar situation where the problem was traced to the fact that the global was updated on every iteration of a process loop; the same record could be touched hundreds of times. The solution was to hold the updates in a process-private global (not journaled) and then do a single pass to update the records.
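A sketch of that last approach (the global names and the NextRecordId helper are hypothetical): accumulate the deltas in a process-private global, which is never journaled, then apply them to the real global in one pass:

```objectscript
 // Phase 1: accumulate updates in a process-private global (not journaled),
 // instead of touching the journaled global on every loop iteration
 for i=1:1:total {
     set id = ..NextRecordId(i)                      ; hypothetical helper
     set ^||pending(id) = $get(^||pending(id)) + 1
 }
 // Phase 2: one journaled write per record instead of hundreds
 set id = ""
 for {
     set id = $order(^||pending(id))  quit:id=""
     set ^MyGlobal(id) = $get(^MyGlobal(id)) + ^||pending(id)
 }
```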

Hope this helps.

Rich Taylor · Aug 18, 2016 go to post

The Cache Version is Cache for Windows (x86-64) 2016.1.2 (Build 206) Mon Jul 25 2016 16:59:55 EDT.  The attached image shows the error.

Rich Taylor · Aug 23, 2016 go to post

If you are publishing a RESTful service then you don't set the accepted content type yourself. The Accept header is for the client to inform you as to what responses it can accept. So your REST service would examine this value to verify that you can supply a type of response the client will accept. You would access this value using this syntax:

%request.CgiEnvs("HTTP_ACCEPT")
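For example, inside a %CSP.REST dispatch method you might do something like this (a sketch; the exact response handling is up to your service):

```objectscript
 // Inspect the client's Accept header and refuse unsupported types
 set accept = $get(%request.CgiEnvs("HTTP_ACCEPT"))
 if (accept '= "") && (accept '[ "application/json") && (accept '[ "*/*") {
     // Client cannot accept JSON - return 406 Not Acceptable
     set %response.Status = "406 Not Acceptable"
     quit $$$OK
 }
 set %response.ContentType = "application/json"
 // ... write the JSON response body here ...
```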

Rich Taylor · Aug 23, 2016 go to post

David,

You win the prize. A version mismatch was the issue. This caused several different errors depending on the class I was attempting to query. Upgrading to the same version across the board resolved them all, at least to this point.

Rich Taylor · Jan 9, 2017 go to post

I attempted this with the following query:

SELECT Name,%VID,ID,Age FROM
   (SELECT TOP 10 * FROM Sample.Person )
ORDER BY Name

Here are the results:  You can see that the %VID column does not correspond to the returned result set.

#    Name                  Literal_2  ID  Age
1    Adam,Tara P.                  7   7   59
2    Brown,Orson X.                8   8   92
3    Hernandez,Kim J.              1   1   23
4    Kelvin,Dick J.                6   6   12
5    Orwell,Andrew T.              9   9   46
6    Page,Lawrence C.              3   3   67
7    Quixote,Andrew Q.             4   4   73
8    Williams,Stuart O.           10  10   77
9    Yakulis,Fred X.               2   2   82
10   Zemaitis,Emilio Q.            5   5   57

Rich Taylor · May 23, 2017 go to post

Let me recap what I think I understand.  You have a business service that uses an inbound file adapter.  The OnProcessInput method of that service builds the message that then gets sent into the production.  The input parameter of the OnProcessInput is a file character stream of some kind.  You want to get the filename associated to that stream to add to your message.

If all the above is true, then you need to access the 'Filename' property of the stream. Note that the exact property name may depend on the stream class being used. 'Filename' will work with any file-based stream in the %Stream package or the stream classes defined in %Library. If the stream is of type Ens.StreamContainer, use the 'OriginalFilename' property instead.
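A sketch of what that might look like (the MyApp.FileMessage class, its properties, and the target name are all hypothetical):

```objectscript
Method OnProcessInput(pInput As %Stream.FileCharacter, Output pOutput As %RegisteredObject) As %Status
{
    set msg = ##class(MyApp.FileMessage).%New()       ; hypothetical message class
    // File-based %Stream classes expose the source file in Filename;
    // an Ens.StreamContainer would use OriginalFilename instead.
    // %File.GetFilename strips the directory, keeping just the name.
    set msg.SourceFileName = ##class(%File).GetFilename(pInput.Filename)
    do msg.Contents.CopyFrom(pInput)                  ; assumes a stream property
    quit ..SendRequestAsync("MyTargetProcess", msg)
}
```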

If I am still not clear, perhaps you could share at least some of the implementation of your OnProcessInput method and the details of the message class you are building.

One other question would be how are you trying to incorporate the source filename into the output filename?  Are you using the %f filespec?

Rich Taylor · May 25, 2017 go to post

While it is true that Internal does not mean deprecated, it is still not recommended that you use such items in your application code. Internal means it is for InterSystems' internal use only; anything with this flag can change or be removed with no warning.

Rich Taylor · Jun 8, 2017 go to post

This looks like a good project to do with my kids. Can you provide an inventory of the components you needed? Obviously you needed an Arduino board and a breadboard. What was the weather sensor you used?

Rich Taylor · Aug 29, 2017 go to post

Robert,

Great history lesson! I have a question for you, though. As you were there at the beginning, or close to it, perhaps you might have some insight. I came from a background in MultiValued databases (aka PICK, Universe, Unidata), joining InterSystems in 2008 when they were pushing Cache's ability to migrate those systems. From the beginning I was amazed at the parallel evolution of the two platforms. In fact, when I was preparing for my first interviews, having not heard of Cache before, I thought it was some derivative of PICK. Conceptually, both MUMPS and PICK share a lot of commonality, differing in implementation of course. I have long harbored the belief that there had to be some common heritage, some white papers or other IP that influenced both. Would you have any knowledge of how the original developers of MUMPS arrived at the design concepts they embraced? Does the name Don Nelson ring a bell?

Thanks again for the history.

Rich Taylor · Sep 18, 2017 go to post

Some questions first:

  1. When you say 'localhost', are you implying that you are using the private web server contained within Cache?
  2. Is this the only application being run from the external web server vs. localhost?
  3. If not, are the other applications still accessible via the external web server?

Some things to check right off the bat:

  • Enable auditing and see which user is getting the error; that will help.
  • Use the CSP Gateway Management pages' HTTP trace capability to see if the request is even making it into Cache. It would seem so from the error, but better to confirm everything.
  • Make sure that the user the CSP Gateway (associated with the web server) uses to communicate with Cache has access to your database. This is different from the person logging into your application. It can be found in the CSP Gateway Management pages for the server.

Rich Taylor · Sep 25, 2017 go to post

To add to John's post: that earlier post lets you convert the timestamp to a matching format that you can then compare against what you get out of your current database, as follows:

$zdatetime($h,3,1)

Or convert the timestamp into the internal $HOROLOG format with $zdatetimeh.
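For example, from a terminal session (the first output depends on when you run it; format 3 is ODBC date, time format 1 is 24-hour):

```objectscript
 // $HOROLOG -> ODBC timestamp text
 write $zdatetime($horolog,3,1),!              ; e.g. 2017-09-25 14:32:07
 // and the reverse: ODBC timestamp text -> internal $HOROLOG form
 write $zdatetimeh("2017-09-25 14:32:07",3,1)  ; 64551,52327
```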

Rich Taylor · Sep 26, 2017 go to post

Thomas, I am working on the same problem. I will post a solution if and when I get one, and I will look for any support here too, of course.

Rich Taylor · Nov 17, 2017 go to post

Marco,

I would suggest contacting InterSystems Support. Go to WRC.InterSystems.com. That would be the quickest way to resolve this particular issue.

Rich Taylor · Jan 9, 2018 go to post

Let me add my experience to this comment. I have been wading into the Docker ocean. I am on Windows and really did not want to run a Linux VM just to get Docker containers (it seemed a bit redundant to me), so Docker for Windows was the way to go. So far this has worked extremely well for me. I am running an Ubuntu container with Ensemble added in. My dockerfile is a simplistic version of the one earlier in these comments. I am having only one issue, related to getting the SSH daemon to run when the container starts.

I hope to have all my local instances moved into containers soon.

My feeling is that this will be great for demonstrations, local development, and proofs of concept. I would agree that for any production use, a straight Linux environment with Docker would be a more robust and stable solution.

Rich Taylor · Jan 10, 2018 go to post

No, it is not 'necessary'. However, I do like to have an environment that more closely matches what one might need in production. This is both for my own experience and to be able to show InterSystems technology in a manner that might occur at a client.

I do use docker exec, though I choose to go into BASH so I have more general access. I actually wrote a simple cmd file to do this and added it to a menu on my toolbar.

@echo off
rem List running containers so I can pick one by name
docker container ls --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
echo:
rem Prompt for the container name/ID, then open an interactive bash shell in it
set /P Container=Container ID: 
docker exec -it %Container% bash