Question wx fg · Jul 18, 2017

How to release memory for a process?

Hi,

I query a large number of rows from one table with JDBC, and I get a <STORE> error. I found that the process was out of memory (the default is 16M, and the maximum is 49M in version 2010.2).

So my question is: how can I release memory when querying a large number of rows? Or how else can I resolve this error?

Comments

Dmitry Maslennikov · Jul 18, 2017

You have two options: query fewer rows at once, or switch to a newer version of Caché. Since version 2012.2, much more memory is available per process, 256MB by default.

wx fg  Jul 18, 2017 to Dmitry Maslennikov

Is there some method to release the memory used by the rows I have already read? I read the records of the result set one by one, forward only.

CM Wang  Jul 18, 2017 to Brendan Bannon

Just curious, does this <STORE> error indicate that process private memory is not enough?

If so, does GROUP BY use process private memory to store the intermediate results?

wx fg  Jul 20, 2017 to Rubens Silva

Not recursive. There are just a lot of rows, and the query includes a LONGVARCHAR column.

wx fg  Jul 22, 2017 to Robert Cemper

I found that this issue seems to be resolved when I replaced the JDBC driver with version 2017.1. Thanks.

wx fg  Jul 20, 2017 to Brendan Bannon

The query includes a LONGVARCHAR column. When the process memory (bbsize) is the default 16M, I can read about 100,000 rows before the <STORE> error occurs.

After I changed the process memory to 49M (the maximum in version 2010.2), I can read about 300,000 rows before the <STORE> error occurs.

So I need some way to release the memory used by the process.

Brendan Bannon  Jul 20, 2017 to wx fg

Sadly, I think we need to fall back on the first answer you got: upgrade. This sounds like a bug in 2010, as we should not be holding the LongVarChar in process memory. Most likely this problem is fixed in a later version of Caché, and later versions also allow processes to use more memory.

Brendan

wx fg  Jul 21, 2017 to Brendan Bannon

I think upgrading is impossible for me. Is there a patch for this issue?

Vitaliy Serdtsev  Jul 20, 2017 to wx fg

There is a method $system.Memory.Clean(<cleancache>, <defragment>), but unfortunately it only appeared in version 2011.1.

Try this code (note that LONGVARCHAR = %Stream.GlobalCharacter); it reads the CLOB through a character stream over JDBC:

try {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    Statement st = dbconn.createStatement();
    ResultSet rs = st.executeQuery("Select ID,Text from eprinstance.isegment");

    while (rs.next()) {
        int len;
        Reader reader = rs.getCharacterStream("Text");
        while ((len = reader.read()) != -1)
            bos.write(len);

        System.out.println(bos);

        reader.close();
        bos.reset();
    }

    bos.close();
    st.close();
    rs.close();

    System.out.println("OK!");
} finally {
    dbconn.close();
}
wx fg  Jul 21, 2017 to Vitaliy Serdtsev

I have tested this code, but the result was the same as with my demo code.

wx fg  Jul 21, 2017 to Vitaliy Serdtsev

Great! The new JDBC driver has resolved my issue! Thanks!

wx fg  Jul 20, 2017 to Scott Roth

My code is in Java and queries through JDBC.

Dmitry Maslennikov  Jul 19, 2017 to wx fg

At what point do you get this error? Just by reading rows one by one? If you run the same query multiple times with the same number of rows, does the error happen after the same number of rows have been read?

wx fg  Jul 20, 2017 to Dmitry Maslennikov

Yes, I read rows one by one, and the query includes a LONGVARCHAR column. I can read about 100,000 rows when the process memory size is 16M, and about 300,000 rows when it is 49M.

wx fg  Jul 21, 2017 to Dmitry Maslennikov

I have pasted the demo code. It is very simple, and I get the same error every time I run it (16M → about 100,000 rows; 49M → about 300,000 rows).

Brendan Bannon · Jul 18, 2017

Can you give a little more info about the query format and the details of the <STORE> error? In simplest terms:

SELECT * FROM VeryLargeTable

will not result in a <STORE> error. So you either have a GROUP BY, or you are doing other things with the results, and that is what leads to the <STORE> error.
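For illustration, a query shaped roughly like this (the Department column is only a placeholder here) is the kind that must build intermediate results before it can return anything:

SELECT Department, COUNT(*) FROM VeryLargeTable GROUP BY Department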

Brendan

Rubens Silva · Jul 18, 2017

If you can't release memory by adjusting your own source code, then what you need to do is expand its size using $zstorage or the configuration.
Also, is that a recursive call?

Brendan Bannon · Jul 18, 2017

There are a couple of versions that use process-private memory for GROUP BY, but I don't think Caché 2010.2 is one of them; I was questioning myself as I was typing my first entry.

Looking at your post again, you don't say what value you have for process memory. When working with SQL, we strongly recommend that you change this to the maximum value, 49M.

I would still like to see the full error message.

Brendan

wx fg · Jul 20, 2017
String sql = "Select ID,Text from eprinstance.isegment";
Statement st = dbconn.createStatement();
java.sql.ResultSet rs = st.executeQuery(sql);

while (rs.next()) {
    String c = rs.getString("Text");
    System.out.println(c);
}

st.close();
rs.close();

dbconn.close();

This is the demo code. The "Text" column type is LONGVARCHAR.

Scott Roth · Jul 20, 2017

I wonder if adding the following to your OnInit() would help...

Kill $$$EnsRuntimeAppData(..%ConfigName)

Brendan Bannon · Jul 21, 2017

2010.2 is 7 years old at this point. Even if we could identify a fix for this problem, back-porting a fix that far is not recommended.

Robert Cemper · Jul 21, 2017

What about chopping your query up like this, or something similar:

lastID = 0

"Select TOP 10000 ID,Text from eprinstance.isegment where ID > ?"

passing lastID as a query parameter and updating it to the last ID read in each batch.

Now you loop on it until you have collected all the records; there is a sketch of this loop below.

The basic idea is to limit the size of your result set with TOP to whatever your partition allows.

And it's definitely independent of Caché versions.
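A minimal sketch of that loop in Java, in the same style as the other snippets in this thread. It assumes an already-open Connection dbconn and a numeric ID column, and it adds ORDER BY ID so the batches are deterministic; the batch size of 10000 is only an example.

String sql = "Select TOP 10000 ID,Text from eprinstance.isegment where ID > ? order by ID";
PreparedStatement ps = dbconn.prepareStatement(sql);

long lastID = 0;
boolean more = true;
while (more) {
    ps.setLong(1, lastID);   // start the next batch just after the last ID already read
    more = false;
    ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        lastID = rs.getLong("ID");                // remember where this batch ended
        System.out.println(rs.getString("Text"));
        more = true;                              // at least one row came back, so ask for another batch
    }
    rs.close();
}
ps.close();

Each batch is a separate query, so the process only ever has to hold one batch's worth of rows at a time.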
 

wx fg · Jul 22, 2017

Hi everyone,

Thanks for your help. I replaced the JDBC driver with version 2017.1, and this issue now looks to be resolved.

Thanks again!
