No idea how to delete this.
This is great Mark, excellent write up.
Ran into a similar problem a couple of years ago on AWS with the mirror VIP. Had a less sophisticated solution: a custom business service in a target production/namespace listening on a keep-alive socket for the ELB to detect which mirror member was active... re-used it for an auto-scaling group too, as an availability indicator we could put logic behind. Those links up there to the routines appear broken for me; would love to take a look at that magic.
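For anyone curious, the keep-alive idea can be sketched roughly like this (the names and the hardcoded `is_primary` flag are hypothetical; a real service would query the mirror member status instead):

```python
import http.server

def health_status(is_primary: bool) -> int:
    """Return the HTTP status an ELB health check should see.

    200 -> this mirror member is primary, route traffic here.
    503 -> not primary, so the load balancer marks this target unhealthy.
    """
    return 200 if is_primary else 503

class HealthHandler(http.server.BaseHTTPRequestHandler):
    # Placeholder: a real implementation would ask the instance
    # whether it is the active mirror member.
    is_primary = True

    def do_GET(self):
        self.send_response(health_status(self.is_primary))
        self.end_headers()
```

The load balancer then polls this endpoint on each member and only the primary answers 200.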
What does Azure's VPN solution look like for site-to-site connections? The diagrams above suggest this is possibly bolted to on-prem, but just curious if you had any comments on that with Azure.
Did you provision a DNS zone on a resolvable domain for internal communications? I abused a couple of *.info domains for this purpose and found that the hostnames enumerated from Caché were from the instances and not very usable for inter-host communication, and broke things like Enterprise Manager, HS endpoint enumeration, etc.
Does Azure have an Internet Gateway or a NAT solution to provide outbound communication from a single address (or fault tolerance)? The diagram for web server load balancing looks like it works for both inbound and outbound; just wondered if that was the case.
Again, excellent resource, thanks for taking the time.
Well, maybe I spoke too soon, as I see nothing coming from enabling TRACE on the business services and setting:

```
Set tSC = ..%sshSession.SetTraceMask(64,"/tmp/sftp-trace.log")
Set tSC = ..%sshSession.SetTraceMask(256,"/tmp/sftp-keys.log")
```

Still not seeing any tracing at this point.
You do insanely good work... thanks for this.
Has anybody happened to get MQ inbounds working with explicit authentication? I need to talk to different queues with different credentials across environments, and the limitation of using the OS user running the process is inhibiting that.
There is a tiny blurb about setting the credentials object on the adapter, but maybe I am not implementing it as designed.
I basically copied the adapter class and tried to use the ..CredentialsSet method, which points to a standard Ens credential on the system with username and password.
```
Method OnInit() As %Status
{
    Set tSC = ..InitQueue()
    Set tSC = ..CredentialsSet("mq")
    If $$$ISERR(tSC) Set ..%initQueueNeeded = 1
    #; Do not prevent job starting if cannot initialise on connection.
    #; If still error initialising in first poll it will be reported in the event log and available for Alert on Error
    Quit $$$OK
}
```
No luck here, if anybody can help me out, it would be appreciated!
Hi Murray!
This is excellent, love this work and glad it's making its way into the API.
For some reason, though, I am unable to add this as a direct Prometheus datasource in Grafana, and I am wondering if there is a model or version prerequisite for Grafana?
I can see the metrics with curl, wget, Postman, a browser, et al... but when I add the datasource to Grafana it fails the test.
Any ideas?
Hello,
I took this for a spin and noticed that the new Prometheus metrics are not available on it like they were in 2019.4? (i.e. https://community.intersystems.com/post/monitoring-intersystems-iris-using-built-rest-api ).
Am I missing something, or is the metrics API still under consideration to make it into this build?
Solved this.
Basically I used the "D" and "T" for the trailing characters but did not include them as a mapped field; I inferred the value later on in the transform.
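As an illustration only (the function name and field shape are made up, not from the actual transform), the "infer it later" step amounts to splitting the trailing type character off the raw value:

```python
def split_trailing_type(raw: str):
    """Split a raw field like '20240101T' into (value, type_flag).

    The trailing 'D' (date) or 'T' (timestamp) marker is not mapped as
    its own field; it is inferred from the last character instead.
    """
    if raw and raw[-1] in ("D", "T"):
        return raw[:-1], raw[-1]
    return raw, None
```

Downstream, the transform can branch on the second element of the tuple instead of carrying a separate mapped field.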
Hi @Anton Umnikov, excellent work on this (and a lot of it too).
I was wondering if you could check the stack into an InterSystems GitHub repo so I can suggest some changes and additions to the CF template through a PR? If not, I can create one out of band too, but since it's available, it would be nice to have it hosted in CC.
fixed, thank you!
Nice!
Wow, I guess I can do that in a Lambda function too, says Stack Overflow... This assumes the function is in a VPC with internet access, and no idea if it needs a `/tmp` provisioned, but I wasn't aware of this Harry Potter wizardry.
```python
import sys
import subprocess

# pip install custom package to /tmp/ and add to path
subprocess.call(
    'pip install https://github.com/intersystems/quickstarts-python/raw/master/Solutions/nativeAPI_wheel/irisnative-1.0.0-cp34-abi3-linux_x86_64.whl -t /tmp/ --no-cache-dir'.split(),
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
sys.path.insert(1, '/tmp/')
```
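The same trick (install into a writable directory, then put that directory on `sys.path`) can be exercised offline with a throwaway module in place of the wheel; `fakewheel` here is purely hypothetical:

```python
import os
import sys
import tempfile
import importlib

# Simulate "installed to /tmp": write a tiny module into a writable dir...
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "fakewheel.py"), "w") as f:
    f.write("VERSION = '1.0.0'\n")

# ...then make it importable, exactly as the Lambda snippet does with /tmp/.
sys.path.insert(1, pkg_dir)
fakewheel = importlib.import_module("fakewheel")
print(fakewheel.VERSION)  # → 1.0.0
```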
^^^ Bacon Saver, thank you.
Interoperability productions with Python and Cloud connectors? YEEEESSSSSSS.
However, containers.intersystems.com is giving up bad credentials... or am I the sole brunt of cruelty here?
```
(base) sween @ dimsecloud-pop-os ~
└─ $ ▶ docker login -u="ron.sweeney@integrationrequired.com" containers.intersystems.com
Password:
Error response from daemon: Get https://containers.intersystems.com/v2/: unauthorized: BAD_CREDENTIAL
```
Many thanks @Alexey Maslov! Looks like it's %DEFAULTDB, not %DBDEFAULT.
What timezone is the deadline for the 20th? I am looking at the eleventh hour here if I can pull it off, and was wondering if I have until midnight 2/20 EST.
fantastic @Eduard Lebedyuk !
thank you!
This has been a game changer for me.
Follow-up here on the implementation of @Eduard Lebedyuk's suggestion, for the community and the Google juice...
```
New $Namespace
Set $Namespace = "%SYS"
Set tSC = $$$OK
Set tSC = ##class(Security.Roles).Create("VSCODE")
Set tQuery = "GRANT EXECUTE ON %Library.RoutineMgr_StudioOpenDialog TO VSCODE"
Set tStatement = ##class(%SQL.Statement).%New()
Set tSC = tStatement.%Prepare(tQuery)
Set tResultSet = tStatement.%Execute()
Set pProp("MatchRoles")=":%EnsRole_Developer:VSCODE"
Set tSC = ##class(Security.Applications).Modify("/api/atelier", .pProp)
Quit tSC
```

Follow-up here: the Apache conf needed a directive for the move forward.
The previous Gateway/IRIS combination did not require the Apache directive below, but the upgraded setup certainly did:

```
<Location />
    CSP On
</Location>
```

The docs do not really call out turning on this Apache directive for the root at all, but that is what was done to make the declared version combination compatible, in case you run into this combo on similar configurations.
Thanks to Connie at the WRC for taking an in-depth look in short order.
In the FHIR Accelerator you can head to "Data Management -> Bundle Operations" and find sample scenarios in there. At the end of the day these are bundles generated by Synthea, so you can upload populations you create from that tool directly for your use. If you want some help generating a specific population, give us a hint and we can help with that during the hack.
This is a great resource, nice work and a top chapter in this series for sure.
There seem to be different ways to approach declared IRIS state by codifying things: you can codify the exported objects and import them, or, like you mentioned, use the installer method that builds things as code... which I have had pretty good success with in the past, like the Tasks below.
<Method name="CreateClaims">
<ClassMethod>1</ClassMethod>
<FormalSpec>pVars,pLogLevel,tInstaller</FormalSpec>
<ReturnType>%Status</ReturnType>
<Implementation><![CDATA[
    Set configItems = $LISTBUILD($LISTBUILD(1, "Return payload from customer", "create 835 from adjudicated claims", "NS.Package.Task.CreateClaim"))
    For i = 1:1:$LISTLENGTH(configItems) {
        Set item = $LISTGET(configItems, i)
        Set Task = ##class(%SYS.Task).%OpenId($LISTGET(item,1))
        If 'Task {
            Set Task = ##class(%SYS.Task).%New()
            Set Task.Name = $LISTGET(item,2)
            Set Task.Description = $LISTGET(item,3)
            Set Task.NameSpace = "USER"
            Set Task.Type = 2
            Set Task.TaskClass = $LISTGET(item,4)
            Set Task.TimePeriod = 5
            Do Task.idSet($LISTGET(item,1))
            Set Task.RunAsUser = "intersystems"
            Set status = Task.%Save()
            $$$ThrowOnError(status)
        }
    }
    Quit $$$OK
]]></Implementation>
</Method>
Thanks @Timothy Leavitt !
@Regilo Regilio Guedes de Souza, in late 2020 Dropbox deprecated long-lived tokens and went to a refresh_token approach instead. If you use the Dropbox SDK the transition gets handled for you, but if not (and I am pretty sure we do not), what it entails is adding `token_access_type=offline` to the token request... this may need to be included a little deeper under the hood.
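For reference, a rough sketch of the authorize request with `token_access_type=offline` (the endpoint and parameter are from Dropbox's published OAuth guide; the helper function itself is just illustrative):

```python
from urllib.parse import urlencode

def dropbox_authorize_url(client_id: str) -> str:
    """Build the Dropbox OAuth authorize URL requesting offline access.

    token_access_type=offline asks Dropbox to issue a refresh_token
    alongside the short-lived access token, replacing the deprecated
    long-lived tokens.
    """
    params = {
        "client_id": client_id,
        "response_type": "code",
        "token_access_type": "offline",  # the key addition
    }
    return "https://www.dropbox.com/oauth2/authorize?" + urlencode(params)

print(dropbox_authorize_url("my-app-key"))
```

The refresh_token returned from the subsequent code exchange is then used to mint new short-lived access tokens as they expire.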
I see the MFT API has IsAuthorized(), so it would be possible to do a check beforehand in a process and manually invoke something, but I don't see the magic behind the UI's GetAccessToken.
I'll keep you posted!
That's really good work; it even follows the adapter guidelines. Pointing the community your way before proceeding further down the post.
Looks like my process for pre-post searches to look for duplicate content needs a re-think:
https://community.intersystems.com/smartsearch?search=dbt
https://openexchange.intersystems.com/?search=dbt&sort=r
These were goose eggs, so I proceeded down the path... however, glad I played with duckdb and the plugins anyway.
Fantastic write up @Steve Pisani , thank you.
Hi Pierre,
I use it and see it resident in code bases...
$$$AddAllRoleTemporary adds %All in %SYS.
It compiles to:

```
i '($e($roles,1,$l("%All"))="%All") { n $ET,$roles s $ET="",$roles=$roles_","_"%All" }
```

Basically it adds %All to the current execution role.
I asked the question of @Eduard Lebedyuk some time ago and paying his wisdom forward.
This is ridiculously good work; the implementation of the custom operation, and the fact that it is a patient merge, is fantastic. I have found native object de-duping, deepdiff, and two-line list de-duping in Python to be a way to quickly get to the point with FHIR resource pair manipulation.
Thank you for taking the time on this, most likely going to have to read it a few times and load it up on my eReader.
One thing I'd add to tips and tricks from something I stole from someone somewhere:
```
zn "<FHIRNAMESPACE>"
Set ^FSLogChannel("all")=1
zn "%SYS"
Kill ^%ISCLOG
Set ^%ISCLOG=5
Set ^%ISCLOG("Category","HSFHIR")=5
Set ^%ISCLOG("Category","HSFHIRServer")=5
Set ^%ISCLOG("Category","OAuth2")=5
Set ^%ISCLOG("Category","OAuth2Server")=5
```

Seems to give up a good mix of token processing and FHIR call debuggery.
This illustrates a particular subject that seems to be getting some traction in the FHIR community as a "FHIR interceptor"... personally, I have implemented this competency using an API manager (Kong, API Gateway) through an integration layer (function) prior to hitting IRIS, which works but splits the business logic into two different places.
Your way here keeps the "intercepts" in IRIS and the resource server which I like.
Here is the "intercept layer" concept ablaze: https://darrendevitt.com/building-a-fhir-intercept-layer/
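To make the idea concrete: an intercept layer is essentially a function that sits in front of the FHIR resource server and gets a look at each request before forwarding it. A toy sketch (all names and the request shape are made up):

```python
def intercept(request: dict) -> dict:
    """Toy FHIR intercept: inspect/annotate a request before forwarding.

    A real interceptor might enforce consent, redact fields, or route
    by resource type; here we just block one verb and tag the request.
    """
    if request.get("method") == "DELETE":
        # The business rule lives here, in front of the resource server.
        raise PermissionError("deletes are not allowed through this layer")
    forwarded = dict(request)
    forwarded["headers"] = {**request.get("headers", {}), "X-Intercepted": "true"}
    return forwarded

fwd = intercept({"method": "GET", "path": "/Patient/123", "headers": {}})
print(fwd["headers"]["X-Intercepted"])  # → true
```

Keeping this inside IRIS, as the post does, means the rule and the resource server share one deployment instead of splitting logic across a gateway and an integration function.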