Anzelem Sanyatwe · Oct 25, 2016

Hi Jeffrey;

Isn't that too late? For this to be processed, the ISCAgent needs to be up and running already.

The common message in the console.log, before any Mirror checks happen, is this one:

"Failed to verify Agent connection..." (repeated 5 times)

Anzelem Sanyatwe · Oct 25, 2016

Hi Bob;

I would like you to understand where the complication is coming from. It is actually a bit further up on that page, under "Install a Single Instance of Caché", point number 2): Create a link from /usr/local/etc/cachesys to the shared disk. This forces the Caché registry and all supporting files to be stored on the shared disk resource you have configured as part of the service group. They then suggest commands to run (something along the lines of the sketch below).
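To make the setup concrete, the linking step amounts to something like this (a sketch only, not the exact commands from that page; /shared_disk is a placeholder for the cluster file system mount):

[]# mkdir -p /shared_disk/usr/local/etc/cachesys                          # directory on the cluster disk
[]# mv /usr/local/etc/cachesys/* /shared_disk/usr/local/etc/cachesys/     # relocate any existing local files
[]# rmdir /usr/local/etc/cachesys
[]# ln -s /shared_disk/usr/local/etc/cachesys /usr/local/etc/cachesys     # local path now points at the shared disk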

Now, because the default install directory is linked out, you cannot install a standalone ISCAgent kit on that second node, because the cluster disks are not present there. Typically you will get this:

[]# pwd
/usr/local/etc
[]# ls -al cachesys
lrwxrwxrwx. 1 root root 43 May 28  2015 cachesys -> /tcl_prod_db/labsys/usr/local/etc/cachesys/   (this path resides on a cluster disk)
[]# cd /usr/local/etc/cachesys
-bash: cd: /usr/local/etc/cachesys: No such file or directory

The default install directory of the ISCAgent is the same path that is mapped out to the shared cluster disks, hence the complication and why I am reaching out.

I also agree that the ISCAgent can run on each node independently. There is no compelling reason for its binaries to follow the cluster resources all the time.

Anzelem Sanyatwe · Oct 25, 2016

Hi Pete;

Unfortunately, the ISCAgent is not part of the cluster service groups. The ISC Veritas 'online' script only handles the Caché portion.

Anzelem Sanyatwe · Oct 25, 2016

I previously wrote this to the WRC and am still waiting for it to be ratified as a viable alternative.

""""

The option I've been thinking of all along, which could be an easy way forward if it is possible, is for you to re-package the ISCAgent installer so it installs into a different directory instead of the default one. The default directory is the one giving us headaches, as it is linked back to the cluster disk.

What I mean is: if I'm on the secondary node, without the cluster disks, this is what you will encounter:

[]# pwd
/usr/local/etc
[]# ls -al cachesys
lrwxrwxrwx. 1 root root 43 May 28  2015 cachesys -> /tcl_prod_db/labsys/usr/local/etc/cachesys/   (this path resides on a cluster disk)
[]# cd /usr/local/etc/cachesys
-bash: cd: /usr/local/etc/cachesys: No such file or directory

So in this scenario I cannot install the ISCAgent independently in its default form, as it will fail as shown above.

We cannot touch that link, as that would break the cluster failover.

So the modifications I'm talking about would be:

  1. Change the default directory by creating a new one at '/usr/local/etc/iscagent'.
  2. Modify the /etc/init.d/ISCAgent script, changing the line AGENTDIR=${CACHESYS:-"/usr/local/etc/cachesys"} to AGENTDIR=${CACHESYS:-"/usr/local/etc/iscagent"}.

After the installation, this seems achievable by doing the following:

  1. rsync -av /usr/local/etc/cachesys/* /usr/local/etc/iscagent/
  2. Then edit /etc/init.d/ISCAgent as suggested in point 2 above.

The issue I have with this is that there could be other references in the installer that I might not be aware of; hence the suggestion that you re-package it with the modifications above.

This way we make the ISCAgent independent, residing locally on the two nodes (primary and secondary failover node), as its binaries don't really need to follow the cluster resources all the time. This way we also make /etc/init.d/ISCAgent start automatically with the OS.


"""'''

Anzelem Sanyatwe · Oct 25, 2016

Hi Bob;

I would appreciate it if you could hook me up with Tom Woodfin and the mirroring team. The calls to peruse are 861211, which was a continuation of 854501. There are all sorts of suggestions, but it would help if this could be bounced back to them for validation.

Anzelem Sanyatwe · Oct 26, 2016

Hi Mark;

I like that; those are logical steps to follow. Last time I checked, you did not have a Veritas lab test environment to validate this, because the moment it becomes a cluster resource it will need to conform to the Veritas facets, e.g. start, monitor, offline, etc. My instance is in production only, so we have little room to experiment with this. Hence the other, easy, quick way was the suggestion to break out the ISCAgent directory. I just tested the 'rsync' copy and the directory edit in the service script, and it seems to start up well.

Anzelem Sanyatwe · Oct 26, 2016

Dear Alexey;

We do not have two different DR approaches.

The mirror config consists only of a Primary (at the Production site) and a DR async (at the DR site), so two instances in total.

The Production site has two physical boxes in a Veritas cluster configuration for HA purposes. Should the first one have an issue, Caché fails over to the second node and still comes up as Primary. Should both of those nodes go down, or should we lose the entire Production site, then we promote the DR async instance. The same applies to the DR site. In this environment the decision to fail over to DR is not an automatic process; it needs to be announced first.

Anzelem Sanyatwe · Oct 28, 2016

The journal route wasn't an option for me. The journals were just too large and too many to copy (due to the nature of the application) compared to the single, smaller cumulative backup I mentioned. It also depends on whether you are looking at a continuous trickle of transactions to the new system or a once-off restore.

Anzelem Sanyatwe · Dec 5, 2016

Hi Shawn;

The method you mentioned works perfectly well. I have used it a lot; I am predominantly on Linux, and my preferred share is over NFS. Ensure you have write permission to that share and that there is no network performance impact while it is running between the two servers, although with the speed you mentioned the impact should be insignificant. The other advantage is that the write I/O lands on the other server.
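For what it's worth, a minimal sketch of the NFS setup I mean (server name, export path and mount point are placeholders, not values from your environment):

# On the server receiving the backups, export a directory in /etc/exports:
#   /backups    cacheserver(rw,sync,no_root_squash)
# On the Caché server, mount it and confirm write permission:
[]# mount -t nfs backupserver:/backups /mnt/backups
[]# touch /mnt/backups/.writetest && rm /mnt/backups/.writetest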

Regards;

Anzelem.

Anzelem Sanyatwe · Dec 6, 2016

Hi Murray;

I agree. For all new deployments, rather start with single-server scaling and only go the ECP way if you really need to. Due to the inefficiencies brought about by ECP, or shall I say the ECP bottlenecks, we had to get rid of the ECP architecture in favour of a single powerful machine.

Regards;

Anzelem.

Anzelem Sanyatwe · Feb 3, 2017

I created a separate folder in my Outlook, so they just go there.

On the positive side, if you see a topic of interest dropping in, it leads you to read it.

Anzelem Sanyatwe · Mar 8, 2017

Thanks Murray for these 'new technology catch-up' articles, especially these parts, 9 and 10. Bob alerted me to them. I have an HCI deployment on the horizon based on an ESXi and EMC ScaleIO all-flash (both cache and capacity tiers) architecture. I will keep this in mind when we finally meet the vendors of the HCI kit.

In the article you mentioned that "you define the capabilities of storage as policies in vSAN using SPBM; for example "Database" would be different to "Journal"". I was hoping to see specific policies for these further down the article (well, if you consider that I come from traditional arrays, where we normally pay close attention to these).

Regards;

Anzelem.

Anzelem Sanyatwe · Mar 23, 2017

I was in the same situation just yesterday, and the migration via ODBC worked for us. We needed the data in Caché for reports, dashboards, etc.

The new challenge is that the application that uses MySQL has not yet been re-coded to use Caché, so the data is still written to MySQL. Using ODBC to do what we want was giving all sorts of errors (due to the restrictions and limitations for external sources mentioned in these docs). After migrating the data, there were no issues at all.

Is there a way, or does anyone know how, to keep the migrated data in sync, so that data written to MySQL is automatically synced into Caché (in Caché terms this would be shadowing, mirroring, etc.)? Or does custom polling need to be written?

Regards;

Anzelem.

Anzelem Sanyatwe · Aug 14, 2017

Hi Mack;

Can you open your terminal and run this command:

%SYS> DO ^mgstat(5,17280,,10)

That collects 5-second samples, 17280 of them, i.e. roughly 24 hours of data. After at least 20 entries, capture a screenshot and post it here for in-depth pointers.

Regards;

Anzelem.

Anzelem Sanyatwe · Mar 29, 2018

%SYS>w $ZVERSION
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2015.1.1 (Build 505U) Wed Apr 29 2015 12:02:38 EDT
%SYS>!cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)

Anzelem Sanyatwe · Oct 26, 2016

Hi Heikki;

I was recently faced with the same situation you are in. I doubt shadowing is supported between those two versions; you can check it here, under 4) Supported Version Interoperability: http://docs.intersystems.com/documentation/ISP/ISP-20162.pdf

The method we used was a full backup on the old system and a restore on the new system just before migration day. Then, during the downtime in the migration window, after the users were stopped, we did a cumulative backup and restore (1. the cumulative backup and restore minimizes downtime; 2. it is quick to copy over, as it is smaller). This plan worked well for us.

Anzelem Sanyatwe · Mar 28, 2018

Hi Steve;

I have encountered the same problem, where these online backups are problematic on a production system, from the time the backup writes to disk to the time it is copied out.

To start with, these are cold offsite backups taken only at a point in time (NOT a DR solution). So backing up on the DR async made more sense to us, and it relieved production a lot. I have actually restored these backups onto other UAT/training/test environments and they are just as good.

So, for off-site backup purposes, I have encountered no harm doing that.

Regards;
Anzelem.

Anzelem Sanyatwe · Feb 19, 2020

Thanks Alex, I did a netstat -anb > openports.txt before and after.

So other than disabling it in Caché, there are no traces at the Windows OS level? Is there any setting related to Telnet that was opened/configured at the OS level when Caché was installed/started?

Regards;

Anzelem.

Anzelem Sanyatwe · May 9, 2020

To some extent I concur with the above sentiments. I actually use the old service method; I don't use systemd. A perfect use case is a cloud environment where you need the instance to start everything automatically because you have an instance stop/start schedule to contain costs. As for production, I have never implemented a start-up script and prefer a controlled operation. This is mostly because Caché is too finicky when it comes to start-ups, and preferably you need to be present.
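By the old service method I mean an init-style wrapper along these lines (a sketch only; the instance name CACHE is an assumption, and as said above I would not enable this in production):

#!/bin/sh
# chkconfig: 345 99 01
# description: start and stop the Cache instance with the OS
case "$1" in
  start) ccontrol start CACHE ;;          # start the instance named CACHE
  stop)  ccontrol stop CACHE quietly ;;   # stop it without interactive prompts
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac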

Anzelem Sanyatwe · Apr 19, 2021

Thanks Mark. I am also looking at the profiles of these blades and, depending on the workload, the HPE Synergy 660 Gen10 is more on the database side, given the need for more cores and larger memory.

Anzelem Sanyatwe · Jun 1, 2021

Alternatively, you can create your own separate database that you manage yourself and set its Journal setting to 'No'. Then map all the less important globals to that database.

Anzelem Sanyatwe · Jun 14, 2021

If you are certain you have restarted the instance, and the OS-level space and the new size shown after startup (as seen in the System Management Portal) still don't match, a 'truncate' will release the OS space immediately. I've encountered the same issue, and this is what I've done.

Anzelem Sanyatwe · Feb 22, 2022

Good Day Murray;

Is there anything to look out for with VMs hosting Ensemble databases on VMware that are part of VMware Site Recovery Manager using vSphere Replication? Can vSphere Replication alone be safely used to boot up the VM at the other site?

Regards;

Anzelem.

Anzelem Sanyatwe · Oct 19, 2023

Good Day;

I have just encountered a similar problem at a site.

This site also has a proxy configured, requiring proxy authentication.

We have an HTTP Operation with only Proxy/Port fields (out of the box) and no 'proxy authentication' setting anywhere.

Has there been a solution to this issue?

Regards;

Anzelem.

Anzelem Sanyatwe · Jun 12, 2024

This was my solution and it works well.

With the newer Red Hat Linux OSes that use systemd, I'm going to modify /etc/systemd/system/ISCAgent.service to point to a new path containing the ISCAgent binary files.
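A minimal sketch of what I have in mind (assuming the stock unit file references the agent under /usr/local/etc/cachesys; the local path is an example):

[]# mkdir -p /usr/local/etc/iscagent
[]# rsync -av /usr/local/etc/cachesys/ /usr/local/etc/iscagent/                                 # local, non-clustered copy of the agent files
[]# sed -i 's|/usr/local/etc/cachesys|/usr/local/etc/iscagent|g' /etc/systemd/system/ISCAgent.service
[]# systemctl daemon-reload
[]# systemctl enable --now ISCAgent.service                                                     # start now and at every boot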