Benjamin De Boe · Sep 21, 2017

I had something simple running on my laptop a long time ago already, but the internal discussion on how to package it proved a little more complicated. Among other things, an iFind index requires an iKnow-enabled license (and more space!), which meant it couldn't simply be included in every kit.

Also, for ranking DocBook results, applying proper weights based on the type of content (title / paragraph / sample / ...) was at least as important as the text search capabilities themselves. That latter piece was addressed properly in 2017.1, so DocBook search is in pretty good shape now. Blending in an easily deployable iFind option like the one Konstantin published can only add to this!
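For anyone who'd like to experiment with this on their own content, here is a minimal sketch of what such an iFind index and query could look like (the class, property and search term are made up for illustration; the index itself requires that iKnow-enabled license):

Class Demo.Article Extends %Persistent
{
Property Title As %String;

Property Body As %String(MAXLEN = "");

/// basic iFind full-text index on the article body
Index BodyFT On (Body) As %iFind.Index.Basic;
}

-- query the index through the %FIND / search_index() interface
SELECT Title FROM Demo.Article
WHERE %ID %FIND search_index(BodyFT, 'mirroring')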

Thanks,
benjamin

Benjamin De Boe · Sep 29, 2017

iKnow was written to analyze English rather than ObjectScript, so you may see a few odd results coming out of code blocks. I believe you can add a WHERE clause excluding those records from the block table to avoid them.
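Something along these lines, assuming the indexed table has a column identifying the type of block; the table, index and column names below are hypothetical, so adapt them to the actual schema:

SELECT Block, BlockType
FROM DocBook.Block
WHERE %ID %FIND search_index(TextIndex, 'journaling')
  AND BlockType <> 'sample'  -- exclude code/sample blocks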

Benjamin De Boe · Jan 11, 2018

Hi Herman,

We're supporting SQL only in this first release, but are working hard to add Objects and other data models in the future. Sharding arbitrary globals is unfortunately not possible, as we need some level of abstraction (such as SQL tables or Objects) to hook into in order to automate the distribution of data and work across the shards. That said, if your SQL (or soon Object) based application has the odd direct global reference to a "custom" global (not related to a sharded table), we'll still support that by simply mapping those globals to the shard master database.
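To make the table-level granularity concrete, here is a rough sketch in DDL; the table and column names are made up, and the exact shard clause for your version is worth double-checking in the CREATE TABLE reference:

-- rows of a sharded table are transparently distributed across the data shards
CREATE TABLE Demo.Orders (
    OrderId   INTEGER,
    Customer  VARCHAR(100),
    Amount    NUMERIC(10,2)
) SHARD KEY (OrderId)

Application code then simply INSERTs into and SELECTs from Demo.Orders as if it were a regular table; the distribution happens under the hood.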

Thanks,
benjamin

Benjamin De Boe · Jan 12, 2018

Hi Warlin,

I'm not sure whether you have something specific in mind, but it sort of works the other way around. You shard a table and, under the hood, invisible to application code, the table's data gets distributed to globals in the data shards. You cannot shard globals.

thanks,
benjamin

Benjamin De Boe · Jan 12, 2018

If you have a global structure that you mapped a class to afterwards, that data already lives in one physical database and is therefore not sharded or shardable. Sharding really is a layer in between your SQL accesses and the physical storage, and it expects you not to touch that physical storage directly. So yes, you can still picture what that global structure looks like and, under certain circumstances (and when we're not looking ;-) ), read from those globals, but new records have to go through INSERT statements (or %New in a future version) and can never be written to the global directly.

We currently only support sharding for %CacheStorage. There have been so many improvements in that model over the past 5-10 years that there aren't many reasons left to choose %CacheSQLStorage for new SQL/Object development. The only likely reason would be that you still have legacy global structures to start from, but as explained above, that's not a scenario we can support with sharding. Maybe a nice reference in this context: one of our early adopters was able to migrate their existing SQL-based application to InterSystems IRIS in less than a day without any code changes, so they could use the rest of the day to start sharding a few of their tables and were ready to scale before dinner, so to speak.

Benjamin De Boe · Feb 1, 2018

Hi Robert,

DocBook has now moved fully online, which is what the Management Portal will link to: http://docs.intersystems.com/iris

SAMPLES included quite a few outdated examples and wasn't appropriate for many non-dev deployments either, so we've moved to a different model there as well, posting the most relevant samples on GitHub. That gives us more flexibility to provide updates and new ones: https://github.com/intersystems?q=samples

JDBC driver: to what extent is this different from the past? It's always just been available as a jarfile, as is customary for JDBC drivers. We do hope to be able to post it through Maven repositories in the near future though.

Small icons: yeah, to make our installer and (more importantly) the container images more lightweight, we had to economize on space. Besides the removal of DocBook and SAMPLES, using smaller icons also shaves off a few bytes ;)

InterSystems IRIS gives us the opportunity to adopt a contemporary deployment model, where with Caché & Ensemble we were somewhat restricted by long-term backwards-compatibility commitments. Some of these changes will indeed catch your eye and might even feel a little strange at first, but we really believe the new model makes developing and deploying applications easier and faster. Of course, we're open to feedback on all of these evolutions, and this is a good channel to hear from you.

Thanks!
benjamin

Benjamin De Boe · Feb 1, 2018

Hi Dmitry,

Zen is indeed no longer a central piece of our application development strategy. We'll support it for some time to come (your Zen app still works on IRIS), but our focus is on providing a fast and scalable data management platform rather than GUI libraries. In that sense, you may already have noticed that the recent courses we published on application development focus on leveraging the right technologies to connect to the backend (e.g. REST) and suggest using best-of-breed third-party technologies (e.g. Angular) for web development.

InterSystems IRIS is a new product in which we take advantage of our Caché & Ensemble heritage. It's meant to address today's challenges when building critical applications: we've indeed leveraged a number of capabilities from those products, but also added a few significant new ones such as containers, cloud support & horizontal scalability. We'll shortly be providing an overview of the elements Caché & Ensemble customers who would like to migrate to InterSystems IRIS should check (e.g. differences in supported platforms), but please don't consider this merely an upgrade. You may already have noticed the installer doesn't support upgrading anyhow.

Thanks,
benjamin 

Benjamin De Boe · May 4, 2018

I'm afraid we don't support the SQL PIVOT command, so unless you can enumerate the response codes as columns explicitly, you can only organise them as rows. If you control the application code, you could of course first run a query selecting all response codes and then generate the lengthy SQL call that includes a separate column for each response code. Something like SUM(CASE bRecord.ResponseCode WHEN 'response code 1' THEN 1 ELSE 0 END) AS ResponseCode1Count should work fairly well.
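Spelled out for a few codes, the generated statement would look roughly like this (the table name, grouping column and response code values are made up for the example):

SELECT bRecord.Region,
       SUM(CASE bRecord.ResponseCode WHEN 'A01' THEN 1 ELSE 0 END) AS A01Count,
       SUM(CASE bRecord.ResponseCode WHEN 'A02' THEN 1 ELSE 0 END) AS A02Count,
       SUM(CASE bRecord.ResponseCode WHEN 'A03' THEN 1 ELSE 0 END) AS A03Count
FROM MyApp.Responses bRecord
GROUP BY bRecord.Region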

Benjamin De Boe · May 4, 2018

We currently don't support analytic windowing functions (the OVER ... PARTITION BY syntax), but have been looking into them for a future release. MATCH_RECOGNIZE is certainly one of the more advanced ones in that bucket. Is this the very one you need, or do you have scenarios that would be served by core windowing functionality, excluding the pattern matching piece?

Or is it the pattern matching and not as much the windowing you're looking for?

Benjamin De Boe · May 4, 2018

OK, thanks for the feedback. We're indeed looking into those additional windowing functions to go beyond our %FOREACH SQL extension, but it's not (yet) on the short-term agenda. Customer demand like yours of course helps us properly prioritize what should go on there.
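For reference, the %FOREACH extension mentioned above computes an aggregate per distinct value of the field(s) in parentheses, alongside regular row-level output. A quick sketch, assuming the classic Sample.Person table from the SAMPLES namespace is available:

SELECT Name, Age, Home_State,
       AVG(Age %FOREACH (Home_State)) AS AvgAgeInState
FROM Sample.Person

That covers part of what OVER (PARTITION BY ...) would offer, without ordering or frames, which is exactly the gap we're looking into.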

Benjamin De Boe · Jul 6, 2018

Yes, we maintain an adoption guide that covers exactly that purpose. In order to be able to properly follow up on any questions you'd have, we're making it available through your technical account team (sales engineer or TAM) rather than shipping it with the product.

Benjamin De Boe · Jul 30, 2018

Note that in InterSystems IRIS 2018.2, you'll be able to save a PMML model straight into InterSystems IRIS from SparkML, through a simple iscSave() method we added to the PipelineModel interface. You can already try it for yourself in the InterSystems IRIS Experience using Spark.

Also, besides this point-and-click batch test page, you can invoke PMML models stored in IRIS programmatically from your applications and workflows, as explained in the documentation. We have a number of customers using it in production; HBI Solutions, for example, uses it to score patient risk models for current inpatient lists.

Benjamin De Boe · Aug 9, 2018

We're still working on a final agenda, but we're hosting quite a few external speakers, presenting what AI/ML means for their organisation or how they implemented it. There are still a few slots available, so if you're sitting on an exciting story, let us know.

Benjamin De Boe · Aug 14, 2018

I think all this trench-digging is leading to a somewhat pessimistic perspective, but indeed, if you assume someone has admin access, they're an admin and can take administrator action...

Maybe this is what lawyers invented licenses for :-) (agreed, that's in turn becoming a somewhat simplistic perspective)

Benjamin De Boe · Sep 5, 2018

The ROWSPEC itself is indeed static, but depending on how you plan to use/expose this, you might generate/write a SELECT statement that does the renaming:

SELECT GLCode, Description, Year1 AS Jul2017, Year2 AS Jul2018 
FROM MyPackage.MyClass_GLReportYearToYearTrend(2017)

Benjamin De Boe · Sep 6, 2018

I'm not sure %SQL.CustomResultSet is going to be a true solution here, as it also requires you to statically define, up front, the properties that represent the columns being returned, while Lyle would like those column names to depend on a runtime argument. You might be able to just do this at runtime and have dynamic dispatch take care of things, but that won't help the column metadata get set up.

Benjamin De Boe · Sep 7, 2018

I just realized you're only on Caché 2012, which doesn't support table-valued functions, i.e. the ability to simply SELECT from a class query rather than having to use CALL. Sorry.
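To illustrate the difference, using the class query from earlier in this thread (only the first form is available on 2012):

-- works on Caché 2012: invoke the class query as a stored procedure
CALL MyPackage.MyClass_GLReportYearToYearTrend(2017)

-- table-valued function syntax on later versions: the query can sit in the
-- FROM clause, so its columns can be renamed in the SELECT list
SELECT GLCode, Description, Year1 AS Jul2017, Year2 AS Jul2018
FROM MyPackage.MyClass_GLReportYearToYearTrend(2017)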

On the other hand, I'd expect a BI tool like Logi to be capable of providing exactly this sort of UI-side labelling of columns, if not driving the entire YoY calculation. Not that I want to fend off the question, but if there's a full-fledged BI tool sitting on top of these results anyhow, let's make sure to use its full set of fledges :-)

Benjamin De Boe · Oct 22, 2018

Perhaps also worth noting that %CacheObject is often mistakenly used for what should have been %ObjectHandle or %Base. Scenarios where user code would really need %Compiler.Type.Object are indeed very rare and thus intriguing :-)

Benjamin De Boe · Nov 19, 2018

Hi Yuri,

Which endpoint did you use? And did you check whether there are any seed sentiment markers in your user dictionary, as explained in this article? You can also check the highlighting output in the general indexing results page to see whether sentiment is being picked up as expected.
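For completeness, registering seed sentiment markers in a user dictionary looks roughly like the snippet below. This is a sketch from memory, so please verify the exact method names against the %iKnow.UserDictionary class reference, and remember the user dictionary still needs to be referenced by the configuration you index with:

    // hypothetical example terms; adapt to your own domain
    set tUserDict = ##class(%iKnow.UserDictionary).%New("MySentimentDict")
    do tUserDict.%Save()
    do tUserDict.AddPositiveSentimentTerm("improvement")
    do tUserDict.AddNegativeSentimentTerm("complication")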

Feel free to reach out directly to me and/or Fabio if you want.

Thanks,
benjamin

Benjamin De Boe · Dec 10, 2018

Hi Evgeni,

The default community edition should support 5 concurrent users. You might have to enable Atelier's REST API, which is not enabled by default for some security profiles. See this note on how to verify / enable it. If you chose BYOL (Bring Your Own License), it might be that you just still need to BYOL :-)

Thanks,
benjamin

Benjamin De Boe · Jan 7, 2019

Thanks for posting, Nikita. Your visualization has indeed been extremely helpful in showing new audiences what iKnow entities are all about, and it is easily embeddable in applications where large numbers of entities need to be explored or navigated!

Benjamin De Boe · Jan 14, 2019

my whole feeling of self-worth comes from the opinions of internet strangers!

.. which means I cannot contribute without creating a profile under my dog's name :-)

Still: great article! (and I don't have a dog)

Benjamin De Boe · Jan 22, 2019

Hi Guillaume,

I'm not sure what you're trying to get at. Our core JDBC driver supports batch processing through exactly the mechanism described in the tutorial you referenced, so that should work fine using the default JDBC methods on the Java side. The JDBC SQL Adapter in EnsLib, on the other hand, was designed for message-by-message processing and therefore doesn't expose a batching mechanism.

Maybe you can share a little more about the actual use case you're implementing? Are you buffering up messages for batch insertion, or does a single message carry enough data to warrant a batch insert by itself? Or am I totally on the wrong track here? :-)

Thanks,
benjamin

Benjamin De Boe · Feb 22, 2019

IDENTITY fields have fairly specific characteristics with respect to the physical storage of your table. Are you sure you want that particular field to be INSERTable by default for all tables (it's never UPDATEable)? Maybe a SERIAL field is more appropriate?
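For comparison, a quick sketch of a SERIAL counter column, which lives alongside the internal row id rather than exposing it (the table and column names are made up):

CREATE TABLE Demo.Invoice (
    InvoiceNumber SERIAL,
    Amount        NUMERIC(10,2)
)

-- the counter fills itself in when no value is supplied,
-- but also accepts an explicit value on INSERT
INSERT INTO Demo.Invoice (Amount) VALUES (100.00)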

Benjamin De Boe · Mar 6, 2019

You already got the expert answers, but maybe I'd just add this cautious recommendation: look at the IDKEY index keyword as a means to publish the internal rowid under a different name than its default "%ID" alias. Unless you're mapping a class to an existing global structure, there aren't many reasons nowadays to override it beyond that, as you may jeopardize some storage and runtime efficiencies such as index options (e.g. bitmaps & bitslices).

The primary key is what you as the schema designer decide to be the key for your table. If you don't choose one, we'll just default to that internal rowid for you (cf option 2 in Aviel's answer).
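As a small illustration of that distinction (class and property names are made up), the class below declares an explicit primary key on a business identifier while leaving the rowid / IDKEY at its default:

Class Demo.Product Extends %Persistent
{
Property SKU As %String [ Required ];

Property Description As %String;

/// explicit primary key on the business identifier; the internal rowid (IDKEY)
/// is left untouched, so bitmap indices and friends remain available
Index SKUIndex On SKU [ PrimaryKey ];
}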

Benjamin De Boe · Mar 19, 2019

Hi Joe,

would you mind sharing some of your code (minus API key values :-) ) for signing AWS REST calls? I have almost scratched my head off trying to find out why things still aren't working when my StringToSign and SigningKey appear to be correct, but the hash I create from them isn't. I can even reproduce (aka "make the same mistake") using the sample Python code AWS provides.

Relevant but not working (and therefore less relevant) code:

Property AWSAccessKeyId As %String [ InitialExpression = "AKIDEXAMPLE" ];

Property AWSSecretAccessKey As %String [ InitialExpression = "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY" ];

Property Region As %String [ InitialExpression = "us-east-1" ];

Property Service As %String [ InitialExpression = "iam" ];

Method BuildAuthorizationHeader(pHttpRequest As %Net.HttpRequest, pOperation As %String = "", pURL As %String = "", Output pAuthorizationHeader As %String, pVerbose As %Boolean = 0) As %Status
{
    set tSC = $$$OK
    try {
        if ..AWSAccessKeyId="" {
            set tSC = $$$ERROR($$$GeneralError, "No AWS Access Key ID provided")
            quit
        }
        if ..AWSSecretAccessKey="" {
            set tSC = $$$ERROR($$$GeneralError, "No AWS Secret Access Key provided")
            quit
        }

        set tAMZDateTime = $tr($zdatetime($h,8,7),":") // 20190319T151009Z
        //set tAMZDateTime = "20150830T123600Z" // for AWS samples
        set tAMZDate = $e(tAMZDateTime,1,8) // 20190319
        set tLineBreak = $c(10)
        set pOperation = $$$UPPER(pOperation)

        // ensure the right date is set
        do pHttpRequest.SetHeader("X-Amz-Date", tAMZDateTime)

        // ************* TASK 1: CREATE A CANONICAL REQUEST *************
        // http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

        // Step 1 is to define the verb (GET, POST, etc.) -- inferred from pOperation

        // Step 2: Create canonical URI--the part of the URI from domain to query
        // string (use '/' if no path)
        set tCanonicalURL = $s($e(pURL,1)="/":pURL, $e(pURL,1)'="":"/"_pURL, 1:"/"_pHttpRequest.Location)

        // Step 3: Create the canonical query string. In this example (a GET request),
        // request parameters are in the query string. Query string values must
        // be URL-encoded (space=%20). The parameters must be sorted by name.
        // For this example, the query string is pre-formatted in the request_parameters variable.
        set tQueryString = $piece(tCanonicalURL,"?",2,*)
        set tCanonicalURL = $piece(tCanonicalURL,"?",1)
        // TODO: append pHttpRequest.Params content?
        // TODO: sort params!

        // Step 4: Create the canonical headers and signed headers. Header names
        // must be trimmed and lowercase, and sorted in code point order from
        // low to high. Note that there is a trailing \n.
        set tCanonicalHeaders = "content-type:" _ pHttpRequest.ContentType _ tLineBreak _ "host:" _ pHttpRequest.Server _ tLineBreak _ "x-amz-date:" _ tAMZDateTime _ tLineBreak

        // Step 5: Create the list of signed headers. This lists the headers
        // in the canonical_headers list, delimited with ";" and in alpha order.
        // Note: The request can include any headers; canonical_headers and
        // signed_headers lists those that you want to be included in the
        // hash of the request. "Host" and "x-amz-date" are always required.
        set tSignedHeaders = "content-type;host;x-amz-date"

        // Step 6: Create payload hash (hash of the request body content). For GET
        // requests, the payload is an empty string ("").
        if (pOperation = "GET") {
            set tPayload = ""
        } else {
            // TODO
            set tPayload = ""
        }
        set tPayloadHash = ..Hex($SYSTEM.Encryption.SHAHash(256,$zconvert("","O","UTF8")))

        // Step 7: Combine elements to create canonical request
        set tCanonicalRequest = pOperation _ tLineBreak _ tCanonicalURL _ tLineBreak _ tQueryString _ tLineBreak _ tCanonicalHeaders _ tLineBreak _ tSignedHeaders _ tLineBreak _ tPayloadHash
        set tCanonicalRequestHash = ..Hex($SYSTEM.Encryption.SHAHash(256, tCanonicalRequest))
        w:pVerbose !!,"Canonical request:",!,$replace(tCanonicalRequest,tLineBreak,"<"_$c(13,10)),!!,"Hash: ",tCanonicalRequestHash,!

        // ************* TASK 2: CREATE THE STRING TO SIGN *************
        // Match the algorithm to the hashing algorithm you use, either SHA-1 or
        // SHA-256 (recommended)
        set tAlgorithm = "AWS4-HMAC-SHA256"
        set tCredentialScope = tAMZDate _ "/" _ ..Region _ "/" _ ..Service _ "/" _ "aws4_request"
        set tStringToSign = tAlgorithm _ tLineBreak _ tAMZDateTime _ tLineBreak _ tCredentialScope _ tLineBreak _ tCanonicalRequestHash
        w:pVerbose !!,"String to sign:",!,$replace(tStringToSign,tLineBreak,$c(13,10)),!

        // ************* TASK 3: CALCULATE THE SIGNATURE *************
        // Create the signing key using the function defined above.
        // def getSignatureKey(key, dateStamp, regionName, serviceName):
        set tSigningKey = ..GenerateSigningKey(tAMZDate)
        w:pVerbose !!,"Signing key:",!,..Hex(tSigningKey),!

        // Sign the string_to_sign using the signing_key
        set tSignature = ..Hex($SYSTEM.Encryption.HMACSHA(256, tStringToSign, tSigningKey))

        // ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
        // The signing information can be either in a query string value or in
        // a header named Authorization. This code shows how to use a header.
        // Create authorization header and add to request headers
        set pAuthorizationHeader = tAlgorithm _ " Credential=" _ ..AWSAccessKeyId _ "/" _ tCredentialScope _ ", SignedHeaders=" _ tSignedHeaders _ ", Signature=" _ tSignature
        w:pVerbose !!,"Authorization header:",!,pAuthorizationHeader,!!
    } catch (ex) {
        set tSC = ex.AsStatus()
    }
    quit tSC
}

Method GenerateSigningKey(pDate As %String) As %String
{
    set kDate = $SYSTEM.Encryption.HMACSHA(256, pDate, $zconvert("AWS4" _ ..AWSSecretAccessKey,"O","UTF8"))
    //w !,"kDate: ",..Hex(kDate)
    set kRegion = $SYSTEM.Encryption.HMACSHA(256, ..Region, kDate)
    //w !,"kRegion: ",..Hex(kRegion)
    set kService = $SYSTEM.Encryption.HMACSHA(256, ..Service, kRegion)
    //w !,"kService: ",..Hex(kService)
    set tSigningKey = $SYSTEM.Encryption.HMACSHA(256, "aws4_request", kService)
    //w !,"kSigning: ",..Hex(tSigningKey),!
    quit tSigningKey
}

ClassMethod Hex(pRaw As %String) As %String [ Internal ]
{
    set out="", l=$l(pRaw)
    for i=1:1:l {
        set out=out_$zhex($ascii(pRaw,i))
    }
    quit $$$LOWER(out)
}

ClassMethod SimpleTest() As %Status
{
    set tSC = $$$OK
    try {
        set tAdapter = ..%New()
        set tAdapter.AWSAccessKeyId = "use yours"
        set tAdapter.AWSSecretAccessKey = "not mine"
        set tAdapter.Region = "us-east-1", tAdapter.Service = "iam"

        set tRequest = ##class(%Net.HttpRequest).%New()
        set tRequest.ContentType = "application/x-www-form-urlencoded"
        set tRequest.ContentCharset = "utf-8"
        set tRequest.Https = 1
        set tRequest.SSLConfiguration = "SSL client" // simple empty SSL config
        set tRequest.Server = "iam.amazonaws.com"
        set tURL = "/?Action=ListUsers&Version=2010-05-08"

        set tSC = tAdapter.BuildAuthorizationHeader(tRequest, "GET", tURL, .tAuthorization, 1)
        quit:$$$ISERR(tSC)
        set tRequest.Authorization = tAuthorization

        set tSC = tRequest.Get(tURL)
        quit:$$$ISERR(tSC)
        do tRequest.HttpResponse.OutputToDevice()
    } catch (ex) {
        set tSC = ex.AsStatus()
    }
    write:$$$ISERR(tSC) !!,$system.Status.GetErrorText(tSC),!
    quit tSC
}

Benjamin De Boe · Apr 1, 2019

Thanks for all your input thus far; it is proving a very helpful source of inspiration for our planning process. Feel free to participate if you haven't done so yet, or share it with your colleagues, as we're still watching for new input. Also, don't hesitate to share your thoughts directly on this thread. Positive feedback is great, but critical feedback is often even more helpful for us :-)

Benjamin De Boe · Apr 11, 2019

Horita-san,

I'm not sure whether you mean the projection (table) itself is missing or the row you created through the API isn't showing up. This works fine for me, but in order to combine the use of the APIs with a domain definition, you have to set the allowCustomUpdates flag to true (it's off by default). See also the notes on the dictionary builder demo in this article.

When it is set to false, API methods like CreateDictionary() will return an error (passed by reference) and the returned ID will be below zero to indicate the failure.
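From memory, the flag sits on the domain element of the definition; please double-check the attribute name and placement against the domain definition documentation for your version:

Class MyPackage.MyDomain Extends %iKnow.DomainDefinition
{

XData Domain [ XMLNamespace = "http://www.intersystems.com/iknow" ]
{
<domain name="MyDomain" allowCustomUpdates="true">
    <!-- data locations, metadata, matching configuration, ... -->
</domain>
}

}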

Hope this helps,
benjamin