Andreas Schneider · Feb 28, 2017 go to post

Thanks Benjamin!

Yes, I've checked the source code to see how an interpreter is built. It doesn't look like rocket science... though maybe I haven't completely understood the details, which is why I can say that ;-)

But! ... I have no idea which functionality would be helpful in a more advanced Caché interpreter. Can you give me a hint about which features you have in mind, please?
It should be very easy to implement simple code completion with Caché keywords for use in a notebook. But right now everything is limited to the JDBC interface, and with that to SQL.
It would be really great if we could also use COS to fetch data...!?

Andreas Schneider · Mar 6, 2017 go to post

I've created a fork on GitHub (https://github.com/andreas5588/zeppelin) of Apache Zeppelin. My plan is to create a Caché interpreter (JDBC-based at first) to learn and understand the architecture of Zeppelin.
After that it would be great to extend this interpreter to allow querying Caché data via COS. For this I'll contact the "Caché Web Terminal" guys; maybe they can help...
Everyone is very welcome to support this open source interpreter. So feel free to jump in and support this project with code, ideas, testing... or whatever ;-)

Andreas Schneider · Mar 10, 2017 go to post

David, thanks for the detailed post! I have a question about "When the Caché Spark connector is released, Spark-based applications will be able to make full use of Caché as an open analytics platform".

Do you have any details about InterSystems' plans to develop a "Caché Spark connector"? Any links about that? Thanks!


Andreas

Andreas Schneider · Apr 28, 2016 go to post

Hi Andy! The logical explanation for this behavior is that changing the select list causes the query optimizer to choose a different execution plan\access path. Unfortunately the new plan is slower than the first one.

As David describes, I think the first step is to check that all tables have selectivity information. This information is very important and is used by the optimizer to find the "right"\fastest plan. You can verify this by comparing the plans that are used.

So: -> execute TuneTable and check again.
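
As a sketch of how that could be run (the table name is a placeholder): on IRIS there is a TUNE TABLE SQL command, while on older Caché versions TuneTable is typically run via the System Management Portal or the ObjectScript utility $SYSTEM.SQL.TuneTable().

```sql
-- Gather selectivity / extent-size statistics for one table
-- (MySchema.MyTable is a placeholder; IRIS syntax)
TUNE TABLE MySchema.MyTable
```

Afterwards, recompile or re-prepare the query and compare the new plan with the old one.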

Andreas Schneider · Jun 16, 2016 go to post

I use SQL Server quite a lot and I've never seen SSMS connect to any database other than MS SQL Server... You can use the Linked Server feature to access Caché via SQL Server and then use SSMS to query Caché. But I think that's overkill just to execute some queries.

Attention, advertising ;-) : You can take a look at Caché Monitor to connect directly to Caché (and other DBs). If you are familiar with SSMS, you will find many identical UI concepts.

Regards

Andreas

Andreas Schneider · Nov 25, 2016 go to post

Like Michael, I also think that this is an issue related to the data.
Maybe you can narrow this behavior down to the offending column by executing your SQL statement with only one column at a time, testing the columns one by one.

Like
"SELECT Name FROM ZenCrm.OAdress" -> works?

"SELECT Type FROM ZenCrm.OAdress" -> works?

and so on.

Andreas Schneider · Apr 10, 2021 go to post

Attention, I am now advertising on my own behalf ;-)
You can do that very easily with Caché Monitor


Caché Monitor connects via TCP to Caché\IRIS (like the ODBC driver).
Andreas

Andreas Schneider · Dec 30, 2021 go to post

Thanks! I have made the changes you suggested and I could load all rows. My encoding problem is also gone with the JVM param :-)

The LOAD DATA statement still finishes with the error SQLCODE: <-400>:<Fatal error occurred>] ... *%qparsets>], but the data is there. As you have pointed out, this problem will be solved in version 2022.1.0.

Andreas Schneider · Dec 30, 2021 go to post

Thank you for the suggestions!
My problem seemed to consist of at least two parts:
1. There are characters in the files that LOAD DATA cannot process.
2. LOAD DATA currently terminates with an error even in case of success.
I think it would be a helpful enhancement to the LOAD DATA feature if erroneous lines were, e.g., redirected to a text file for later analysis.
Maybe an option for 2022.2 ;-)

Andreas Schneider · Jan 2, 2022 go to post

Kevin, can you please give me a tip on how I can set this JVM argument via the console? I want to integrate this setting into my Docker image.
Thanks!

Andreas

Andreas Schneider · Jan 3, 2022 go to post

Thanks @

set srv = $system.external.getServer("%Java Server")
set srv.JVMArgs = "-Dfile.encoding=UTF-8"
set modifiedserver = $system.external.modifyServer(srv)
Andreas Schneider · Jan 3, 2022 go to post

Setting the JVM arg for the %Java Server can be done via the Portal or the console:

set srv = $system.external.getServer("%Java Server")
set srv.JVMArgs = "-Dfile.encoding=UTF-8"
set modifiedserver = $system.external.modifyServer(srv)
Andreas Schneider · Jan 3, 2022 go to post

Maybe you haven't sent the complete statement?
This works for me via JDBC, tested with Caché 2018.1.0.184 and IRIS 2021.2.0.617.

LOCK TABLE dc_data_flights.airports IN EXCLUSIVE MODE

-- OR
LOCK dc_data_flights.airports IN EXCLUSIVE MODE

-- OR
LOCK TABLE dc_data_flights.airports IN SHARE MODE

-- OR
LOCK dc_data_flights.airports IN SHARE MODE

If I try this I get the same error as you:

-- this creates an error; the "IN EXCLUSIVE MODE" or "IN SHARE MODE" clause is missing
LOCK TABLE dc_data_flights.airports

Andreas

Andreas Schneider · Jan 3, 2022 go to post

If I check the Locktable in Caché 2018.1.0.184 it looks like this: 
(Used "LOCK Aviation.Aircraft IN EXCLUSIVE MODE" in SAMPLES Namespace)

If I check the Locktable in IRIS 2021.2.0.617 it looks like this:
(Used "LOCK TABLE dc_data_flights.airports IN EXCLUSIVE MODE")

An "UNLOCK TABLE dc_data_flights.airports IN EXCLUSIVE MODE" removes the lock.

Andreas Schneider · Feb 9, 2022 go to post

Are you sure that your values match the column list?

An INSERT without an explicit column list is not very robust against schema changes.
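
As a sketch (table, columns, and values are made up for illustration), an INSERT with an explicit column list stays valid even if columns are later added or reordered:

```sql
-- Explicit column list: each value is bound to a named column,
-- so adding or reordering columns does not break the statement
INSERT INTO MySchema.MyTable (Name, Type)
VALUES ('Example Name', 'ExampleType')
```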

Andreas Schneider · Feb 9, 2022 go to post

The rejected insert has 239 chars in the fifth column. The working insert has 186 chars.

So check your table definition: how are the columns defined?

Andreas Schneider · Feb 9, 2022 go to post

The column parameter has MAXLEN 229:

parameter As %Library.String(MAXLEN = 229) [ SqlColumnNumber = 6 ];

That is why the insert fails: the 239-character value exceeds the 229-character limit.
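
If the longer values are legitimate, one option is to widen the column. A sketch, assuming InterSystems SQL's ALTER TABLE ... ALTER COLUMN syntax (table name and new length are examples):

```sql
-- Widen the column so 239-character values fit
ALTER TABLE MySchema.MyTable ALTER COLUMN parameter VARCHAR(4096)
```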

Andreas Schneider · Apr 9, 2022 go to post

@Vitaliy Serdtsev, thanks for this great idea! In the past I have missed this feature myself from time to time. So far the pain has not been great enough ;-)
But now I know that other users also miss this feature...

 

Andreas Schneider · Aug 14, 2022 go to post

By the way, why is SQL not available for selection in the code snippets?

Isn't SQL a first-class citizen on the IRIS Data Platform? ;-)

Andreas Schneider · Aug 14, 2022 go to post

Hi! Unfortunately there is no way to UPDATE the %DESCRIPTION information on a table or column with SQL.
But you can add this information while CREATE-ing a table like this:

or this way for a column:
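
Roughly like this (table, column, and description texts are made-up examples; check the CREATE TABLE reference linked in this post for the exact syntax in your version):

```sql
-- Table-level and column-level %DESCRIPTION set at CREATE time
CREATE TABLE Demo.Person (
    Name VARCHAR(50) %DESCRIPTION 'full name of the person'
) %DESCRIPTION 'demo table carrying a description'
```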

Please see CREATE TABLE | InterSystems SQL Reference | InterSystems IRIS Data Platform 2022.1 for more details.

By default the %DESCRIPTION information is projected to the Remarks column via JDBC:

As you can see the text comes from the %DESCRIPTION property:

This also works for columns.
This is from JDBC:

This is from INFORMATION_SCHEMA.COLUMNS:
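
A query along these lines should return it (assuming INFORMATION_SCHEMA.COLUMNS exposes a DESCRIPTION column; schema and table names are placeholders):

```sql
SELECT TABLE_NAME, COLUMN_NAME, DESCRIPTION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'Demo' AND TABLE_NAME = 'Person'
```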

Andreas

Andreas Schneider · Dec 25, 2022 go to post

As a test I created an IRIS instance in a Docker container. Nicely, the problem doesn't exist there. Maybe I'll just try a reinstallation of IRIS under Win10 in the next few days...

Andreas Schneider · Dec 26, 2022 go to post

That looks strange. If you execute the query via JDBC, the error is the same. A CAST to the same datatypes as provided by Config.ConfigFile works:

SELECT CAST(ID AS VARCHAR(512)), CAST(CPFName AS VARCHAR(255)), CAST(Comments AS VARCHAR(4096)), CAST(Name AS VARCHAR(64)), CAST(Product AS VARCHAR(64)), CAST(SectionHeader AS VARCHAR(255)), CAST(Version AS VARCHAR(7)) FROM Config.ConfigFile
Andreas Schneider · Mar 12, 2023 go to post

I've upgraded my small test project https://github.com/andreas5588/openflights_dataset from IRIS 2022.2.0.368.0 to IRIS 2023.1.0.207.0 and I've discovered that the SQL execution time was much slower than expected.
Within this project I load the data using the "LOAD DATA FROM FILE" SQL statement, and it looks like this is what is taking the time.
A "docker-compose build" with IRIS 2022.2.0.368.0 took about 17 seconds on my machine; the same with IRIS 2023.1.0.207.0 took 300 seconds!
The image with IRIS 2022.2.0.368.0 is 2.87 GB and the image with IRIS 2023.1.0.207.0 is 3.21 GB.
This is the only change I've made https://github.com/andreas5588/openflights_dataset

Any idea what's going on?
Andreas
 

Andreas Schneider · Mar 12, 2023 go to post

I'm not sure what the reason is, but with 2023.1 there are some errors in %SQL_Diag.Result when loading data from my datasets. The errors are different from those in 2022.3.
After changing some columns in my data model from NOT NULLABLE to NULLABLE, the errors are gone and the performance with IRIS 2023.1 is the same as with 2022.3.