gg ip address - GoldenGate

Dear Friends,
Can anyone please help me with the query below?
My database runs in a two-node RAC environment. The two nodes have separate IP addresses (.51 and .52). Which IP address should we give in the RMTHOST parameter? And if the node with that IP address fails, how does GG switch over to the second node, which has IP address .52? Please guide me on this.
jath 

Jath,
Not sure what your actual source and target configurations are! Is your source a two-node RAC, and is the target also another two-node RAC? On the source side it is advisable to deploy the GG software on shared disk so that you can start the manager/ER processes from any of the RAC nodes without any hassle.
Do give more details about target setup so that we can try and help answer this query.
Satish 

hi Satish,
Thanks for your response. My source is a non-RAC environment; the target is a RAC environment with two nodes (.51 and .52). Which IP address should I give in the pump process parameter RMTHOST?
Jath 

Jath,
Here is what you need to do:
1. Install the GoldenGate software on shared disks in the target RAC environment.
2. Point the extract/data pump RMTHOST to either .51 or .52 - say you selected .51.
3. Start the manager and replicat processes from RAC node .51.
4. In case RAC node .51 goes down and is inaccessible while RAC node .52 is up:
a) Stop the extract on the source (non-RAC).
b) Change RMTHOST to RAC node .52.
c) Restart the extract on the source (non-RAC).
d) Start the manager and replicat on RAC node .52 (the trail and checkpoint files are accessible as they are on shared disk). I am assuming that RAC node .51 is completely down and the manager/replicat processes no longer exist.
5. In case RAC node .52 goes down, replication will continue to function, as the extract is sending the trail file to RAC node .51.
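Step 2 above, sketched as a data pump parameter file (the process name, IP, trail paths, schema, and port 7809 are illustrative assumptions, not values from the thread):

```
EXTRACT dpump1
-- Target manager: RAC node .51 initially; on failover to .52,
-- edit this address and restart the pump (steps 4a-4c above).
RMTHOST 10.0.0.51, MGRPORT 7809
-- Remote trail on the target's shared disk, so a restarted
-- replicat on either node picks up where it left off.
RMTTRAIL /shared/ogg/dirdat/rt
TABLE scott.*;
```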
Satish 

Hi Jath,
Satish's suggestions are very good but I suggest one minor addition and that's to use a virtual IP address. The OGG processes and the VIP will float together on failover. Generally you'll need failover software to failover the process and VIP. Here's a helpful paper I've used a few times when setting this up using Oracle Clusterware:
http://www.oracle.com/technetwork/middleware/goldengate/overview/ha-goldengate-whitepaper-128197.pdf
So as far as OGG is concerned, the shared disk is a must (or one that can fail over) so that the checkpoint information and trail data pick up where they left off, but RMTHOST will use the VIP. If the connection to node A dies, the retry will get routed to node B once the VIP and OGG processes fail over. It may also be helpful to use the mgr.prm parameter AUTORESTART to help automate restarts after the failover. But all of these things are covered in the document above.
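The AUTORESTART idea can be sketched in mgr.prm like this (the port and retry values are illustrative assumptions, not recommendations from the thread):

```
PORT 7809
-- Start all extract/replicat processes when Manager starts.
AUTOSTART ER *
-- Restart processes that abend (e.g. after a VIP failover),
-- up to 5 times at 2-minute intervals; reset the retry counter
-- after an hour of stable running.
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
```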
Good luck,
-joe

Related

RMTHOST value (for RAC)

Hi, I'm configuring a new pump process and I wonder what value to put in the RMTHOST parameter. The target is a RAC server, so there is more than one hostname/IP. Should I put the SCAN name? Best Regards,
At the end of the day you need to land your data (RMTTRAIL) onto a disk that the RAC can access. So any IP, VIP or SCAN will work, as long as you can reach it from the source. When you start the manager on the target you will need to assign a port number, say 7809. Then from the source do telnet <IP or hostname> 7809; if this works, use it. My preference is to use the SCAN name as it provides failover.
Cheers
Kee Gan
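The telnet reachability check above can be sketched as a small shell function (the host and port are placeholders; 7809 is just a commonly used manager port):

```shell
#!/bin/bash
# Check whether a GoldenGate manager port is reachable before
# committing to that host in RMTHOST. Uses bash's /dev/tcp so it
# works even where telnet is not installed.
check_mgr_port() {
  local host=$1 port=$2
  # timeout avoids hanging if the host silently drops packets
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Port 9 (discard) on localhost is almost always closed, so this
# demonstrates the failure path.
check_mgr_port 127.0.0.1 9
```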
Hi, What version of GoldenGate (GG) and database are you using? We usually set up only one target node from the RAC in RMTHOST - the one where the filesystem with GG is installed and running, with a $OGG_HOME/dirdat filesystem that has space available to hold the trail files coming from the source. Although RAC has failover, GG can be installed and started on only one node at a time. What you should do, and it is very much recommended, is use a shared filesystem for $OGG_HOME between the RAC nodes. So, if the instance where the GG processes are running falls over, you can start them on another available node.
Thanks & Regards!
GMARTINS.

Golden Gate Extract on SQL Server AlwaysOn

Hi, I have a SQL Server AlwaysOn Availability Group set up on a Windows cluster. I am using Golden Gate to extract from a couple of tables into a target Oracle database (same tables). The problem is that the Golden Gate processes fail when there is a cluster failure and the nodes switch over (due to network issues etc.). Is there a way to persist the Oracle Golden Gate processes - given they use the SQL Server AlwaysOn listener address (cluster transparent) - so that they do not fail if there is a cluster switchover? Any help appreciated.
Kind regards,
Maj
Hi Maj,
First, I assume that you have taken care of OGG restarting when there is a node failover, using agctl etc. If this is the case, then you can implement a virtual IP that points to whichever node is underneath it. Speak to your network admin to set this up on your target server. The source (it does not matter what - SQL Server, Oracle, etc.) is none the wiser.
Cheers
Kee Gan
Hi Kee, Thanks for your answer; however, the problem isn't pointing OGG at the AlwaysOn listener and getting the GG processes restarted on failover. The issue is that the extracts abend when the failover takes place since - I believe - the transaction log is being mined by OGG, and upon failover the consistency isn't there on the secondary instance. Remember this is not a SQL Server cluster where the underlying filesystem is shared; this is two standalone instances being mirrored with AlwaysOn. Any suggestions on how to square this hole?
Let me answer this with two topics: the first just being SQL Server in a Windows cluster, and the second related to AlwaysOn.
First, if Extract was running against a database in an instance that was part of a Windows cluster, and Extract is installed on a shared cluster disk resource, then if Node1 fails and both the SQL Server instance and disk resources move to Node2, Extract will abend, but AUTORESTART and AUTOSTART in Manager should bring it back up.
For the second part about AlwaysOn, the only configuration right now that is supportable with Extract and AlwaysOn is one where the Extract runs on the primary, but it cannot handle any failover where a secondary becomes the primary. The reason is that the Extract is a shared-nothing component and there is no duplication of its checkpoint file and trails to the other secondary databases. So in your case, where Extract is connecting through a listener, if the primary fails and a secondary becomes the primary, the Extract is still running on the old primary node (I presume) but it cannot mine the transaction logs from the new primary, because those logs are not local and are now remote, and Extract doesn't support remote capture for SQL Server.
One possibility that has been talked about might be to install the Extract on a network share, such as \\server\ogg, and map that as the same drive letter from every node in the AlwaysOn group. Then, whichever server is running the primary database, you start Extract from that server. If that primary goes down and another node becomes the primary, then you log into that node and start Extract from there. This would require that only secondary databases configured for synchronous mode would work, and you would have to use the ACTIVESECONDARYTRUNCATIONPOINT parameter so that Extract does not need any log backups, since log backups and their history are not available between nodes.
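That last suggestion could look roughly like this in the Extract parameter file (the process name, DSN, mapped drive letter, and schema are illustrative assumptions; ACTIVESECONDARYTRUNCATIONPOINT is the TRANLOGOPTIONS option mentioned above):

```
EXTRACT extsql
SOURCEDB mydsn
-- Manage the secondary truncation point directly so Extract does
-- not depend on transaction log backups, which are not shared
-- between AlwaysOn nodes.
TRANLOGOPTIONS ACTIVESECONDARYTRUNCATIONPOINT
-- X:\ is the same drive letter mapped to \\server\ogg on every node
EXTTRAIL X:\ogg\dirdat\et
TABLE dbo.*;
```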

Golden Gate application VIP configuration.

I have two two-node RAC clusters (DBX A/B and DBY A/B instances) and have to set up Golden Gate (GG) replication between the DBX and DBY environments using integrated Extract mode. I couldn't find answers to the questions below and am looking for the experts' views/concerns. Questions:
1. Do I need one application VIP each for DBX and DBY for the GG configuration?
2. Can I use the same VIP that exists between the DBX A/B instances and DBY A/B instances for the GG configuration?
I am missing some information. I assume that both DBX and DBY are in the same RAC? If so you don't have to be concerned about IPs (virtual or otherwise), as the collective RAC nodes share the same filesystem. Your extract from DBX simply writes the OGG trail to some disk, and DBY can see this disk anyway. You can also have one OGG installation. If this is something else then please explain where DBX and DBY reside and where VIPs come into play.
Cheers
Kee Gan
Hi, I am assuming the source and target are in different clusters; it is the highly available GG configuration that uses the VIP. In any case, if you are not going with high availability, use the physical host name or the IP address in the pump extract parameter file, along with the target manager port, to establish the connection to the target manager for sending the local trails.
Thanks
Vivek
K.Gan wrote: If this is something else then please explain where DBX and DBY resides and where VIPs come into play.
DBX and DBY are in separate clusters; each has 2 nodes, A and B.
3218839 wrote: In any cases, if you are not going with the high availability, use the physcial host or the ip address in the pump extract parameter file along with the manager port on source to establish the connection to the target manager for sending the local trails.
We are planning to have the GG services fail over along with the DB instances to the surviving node. As said earlier, DBX and DBY are in separate clusters, and each has 2 nodes, A and B. What's your view on the above 2 questions?
OK, you can use the same VIP as long as it lands in the right cluster. It need not be a special one for OGG.
Cheers
Kee Gan
OK then, for my two-node cluster I have 2 VIPs. Which cluster VIP should I use for the GG application VIP configuration?
Choose the VIP for the node that your OGG is going to run on.
Cheers
Kee Gan
Finally I got answers:
1. GG needs one unused IP to configure the GG application VIP, which shall be active on the configured node. When the active node goes down, the GG VIP will fail over to the surviving node.
2. No, it has to be new and exclusively for GG's use.
Hope this helps all! Thanks all for the inputs, concerns and time.

Golden Gate use of scn address for Extract Pump

I have two 2-node RAC databases, with the Golden Gate Extract on one and an Extract Pump to the other, and of course the Replicat on the target. The target RAC currently has the Golden Gate application running on the B node. I have coded the pump process to use the SCN host name. My thought was that by using the SCN host name, the pump would be able to process on either the A node or the B node. I have also coded the mgr process to retry on TCP/IP failures.
We recently had a network issue on both of these RAC servers, and Golden Gate performed quite nicely on the restart of all of the processes. However, there were 2 pump processes that continued to fail with TCP/IP connection errors using the SCN host name. I changed the parameters to use the Node B host name - still failed; then I changed them to use the Node A host name and the pump was able to process successfully.
Maybe I am not understanding how the use of the SCN host name works. I thought it would try to connect to one of the nodes and then switch to the other node upon failure - but this does not seem to be the case. Is there someplace I can go to get further detailed information on how this works? Or is using the SCN name not the way to go with Golden Gate? I'm trying to understand the process so I can determine where the issue is - maybe a configuration issue, maybe a network issue. Any help on where I can start would be greatly appreciated. I have attached the ggserr.log from the source as well as a few of the parameter files from the source.
Hi, how was GoldenGate registered with the RAC at the target? Was it with the AGCTL utility?
Thanks,
Ragav.
I did not use the AGCTL utility. We have a gg_action.scr script that we add to the /u01/app/grid/crs/script directory, then issue: crsctl add resource gg_app -type cluster_resource -attr "ACTION_SCRIPT=/u01/app/grid/crs/script/gg_action.scr, CHECK_INTERVAL=30, SCRIPT_TIMEOUT=300"
I take it you mean SCAN (Single Client Access Name) rather than SCN. If so, you cannot simply use the SCAN for the pump. Unlike a database session, which can run on either node, OGG processes live and die on a single node: whenever any OGG processes are running, they are consistently pinned to that node. You don't want a stop/start of the extract pump to suddenly run on the other node - not that it would run, as the mgr process is not there. Therefore something like telnet nyhcbfqddbscn 7809 is not going to work. What you need to do is create a VIP for each RAC that consistently resolves to whichever node you want, and you need to include this as part of the clusterware failover script. Then use this VIP in RMTHOST.
Cheers
Kee Gan
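Creating a dedicated application VIP under Oracle Clusterware, along the lines of the whitepaper linked earlier in this thread, looks roughly like this (the network number, IP address, VIP name, and OS user are placeholder assumptions for this sketch):

```
# Create an application VIP resource for GoldenGate (run as root)
appvipcfg create -network=1 -ip=10.0.0.60 -vipname=gg_vip -user=root
# Allow the oracle user to start/stop the VIP resource
crsctl setperm resource gg_vip -u user:oracle:r-x
# Start the VIP on the current node; clusterware relocates it on failover
crsctl start resource gg_vip
```

This VIP is then the address you put in RMTHOST, and the clusterware action script fails the OGG processes over together with it.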
Thank you for your reply.  I do mean SCAN - Cluster SCAN host name.  I will pursue your suggestion.

Goldengate installation question on RAC 2 nodes

Our source is a two-node Oracle RAC database, 11gR1. The database service is only enabled on node 1.
I wonder, do I have to install GoldenGate in a shared location, or just on node 1?
Currently I have installed it on node 1 and started the extract; I have not started the replicat yet.
I would like any advice on how to proceed from here.
We are using GoldenGate just for a one-time migration.
Thanks in advance. 
9233598 wrote: Our source is two rac node oracle databases 11gr1. The database service is only enabled on node 1.
You mean the instance on node 2 is shut down, or something else?
9233598 wrote: I wonder do I have to install this goldengate on shared location or just on node 1?
You can install it on node 1 only. The purpose behind installing GG in a shared location is to ensure high availability in case one of the nodes goes down.
9233598 wrote: We use goldengate just for one time migration.
For that, installing on a local file system should be enough.
9233598 wrote: We use goldengate just for one time migration.
Since it's a one-time activity, you can bring up the replication and use it on the first node. The archive log destination should be shared and visible to GG in case archived logs are required by GG.
What I mean is we have two different databases on those RAC nodes. One database service is active on node 1, the other database is on node 2; this is to avoid memory issues.
Both instances are running, but when clients connect, they connect through a service name, which points them only to node 1.
As it is a one-time requirement, you can do it on one node only.
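For a one-time, single-node migration the parameter files can stay minimal. A sketch, where the process names, credentials, trail path, and schema are all placeholder assumptions:

```
-- ext1.prm (source, node 1): capture schema changes to a local trail
EXTRACT ext1
USERID ggadmin, PASSWORD ggpass
EXTTRAIL ./dirdat/et
TABLE hr.*;

-- rep1.prm (target): apply the trail, assuming identical table definitions
REPLICAT rep1
USERID ggadmin, PASSWORD ggpass
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;
```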
