GG data transfer - GoldenGate

Can anyone clarify this: does the GoldenGate Extract process transfer the committed data, or does it transfer the SQL queries that ran on the source?
Thanks in advance

Assuming you're still referring to Oracle: Oracle logs the results of DML operations to the redo logs as logical change records (LCRs). The committed LCRs of interest are converted into a heterogeneous format and sent on their way.

That's perfect, thanks Joe



How to see queries generated by GoldenGate

Hi, is there any way to see the queries generated by GoldenGate from the extract file? Thanks in advance
Hi, what queries, and which Extract file, are you referring to? Could you please elaborate? Are you talking about the report file or the trail file?
Regards,
Veera
There will be table names, filter conditions and other settings in the .prm file. While GoldenGate processes it, is there any way to find the equivalent SQL queries for the corresponding parameter file?
If you are trying to monitor the database to see how Extract is affecting it with regard to SQL, then you can either use the SQL Developer monitor or trace (10046) the Extract database session to see the SQL statements. If you are doing this to see how Extract works, though, you will be missing a huge part of the story: the majority of its work is reading the redo logs and the data associated with the transactions. For that, integrated Extract uses the XStream APIs (LogMiner functions), which are not easily tracked.
Cheers
Kee Gan
Hi K.Gan, can you tell me how to trace (10046) the Extract database session to see the SQL statements?
After the USERID parameter, put this:
SQLEXEC ('ALTER SYSTEM SET trace_enabled = TRUE')
This will create the usual database 10046 trace for as long as Extract runs. When you have done enough transactions, stop Extract, and remember to remove this parameter.
Cheers
Kee Gan
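As a concrete illustration, here is roughly where that line would sit in an Extract parameter file. This is only a sketch; the process name ext1, the login, the trail path and the schema are placeholder assumptions, not from the thread:

```
EXTRACT ext1
USERID ggadmin, PASSWORD *****
-- enable tracing for the Extract database session while it runs
-- (remove this line once you have captured enough transactions)
SQLEXEC ('ALTER SYSTEM SET trace_enabled = TRUE')
EXTTRAIL ./dirdat/aa
TABLE hr.*;
```

The SQLEXEC runs once when Extract logs in, so the trace covers everything the session does until the process is stopped.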
Hi there, an Extract produces trail entries. If you want to display them you can use the Logdump utility. See the docs here: Using the Logdump Utility.
HTH
Mathias
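For example, a minimal Logdump session to browse a trail might look like this (the trail file name ./dirdat/aa000000 is just an assumed example):

```
Logdump> OPEN ./dirdat/aa000000
Logdump> GHDR ON
Logdump> DETAIL ON
Logdump> NEXT
```

GHDR ON and DETAIL ON make each record display its header and column data; NEXT then steps through the trail records one at a time.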
Hi all, please share links related to the database 10046 trace.
Thanks and regards,
Ragav
Hi Ragav, this link is ok. As always, this is NOT an Oracle site so be cautious, but the article is ok. Speak to a DBA and they should know.
Cheers
Kee Gan

Using BATCHSQL for tables that are modified frequently

Hi, recently we have been facing a lot of issues with GG. We had a lag of more than 15 minutes during business hours. The objects in one schema are updated very frequently; we have multiple inserts, deletes and updates. I suggested that my GG support team use the BATCHSQL parameter for that schema, as it will help reduce the performance issues, but they are not very confident about using it. As per the Oracle documentation, BATCHSQL should work fine, and in an earlier project we used it without any issues. I don't have access to the system myself and can only suggest options to them. Can you please suggest how I could explain it to them?
Thanks
Ameya
Can I use INSERTAPPEND with this?
Hi Ameya, what type of Replicat process are you using? Classic, Integrated or Coordinated?
Regards,
Veera
Classic. We are using 11g GoldenGate and the same version of the Oracle database. I need to help them understand BATCHSQL. Their question: if there are deletes, inserts and updates happening, how will BATCHSQL proceed?
Hi, you can ask them to check the link below. In it, see Table 3-31, "Replicat Modes Comparison", which has the answer to your query.
Regards,
Veera
Oracle Global Customer Support
Hi Ameya,
Generally it is a good idea to turn on BATCHSQL in the Replicat. To start, just place the parameter without any options anywhere in the parameter file, i.e.:
BATCHSQL
A quick summary of how this works: BATCHSQL takes the group of transactions Replicat is about to commit and sorts the operations, which makes the retrieval of data much more efficient because the database does not need to jump all over the place. There is a catch: if, after sorting, the DMLs end up out of order, Replicat rolls back what it has done and replays that transaction without sorting. If you use BATCHSQL, the report file generates statistics on this; if you see lots of rollbacks, your application environment does not suit BATCHSQL. Having said that, it works at most sites. My advice is to just put it in and see if performance improves.
Cheers
Kee Gan
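A minimal sketch of a Replicat parameter file with BATCHSQL turned on, as described above. The process name rep1, the login and the schema names are placeholders, not from the thread:

```
REPLICAT rep1
USERID ggadmin, PASSWORD *****
ASSUMETARGETDEFS
-- enable batched, sorted application of grouped transactions
BATCHSQL
MAP src.*, TARGET tgt.*;
```

BATCHSQL also accepts tuning options (such as the number of operations per batch), but the usual advice is to start with the bare parameter and only tune if the report-file statistics suggest it.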
Hi Ameya,
First, issue the command below and check the performance of the Replicat process:
stats replicat <rep_name>, totalsonly *.*, reportrate min
Then add the BATCHSQL parameter and issue the same command again to compare. You should definitely see the difference in performance. Please refer to the document below; if needed, you can set the BATCHSQL options manually and adjust them to get the maximum performance.
Regards,
Veera
Thanks Kee and Veera, I will check on this and update.

Target database unavailable

I'm using GG to replicate between one source and three target Oracle DBs with a bi-directional method; GG also applies a region-wise filter with the help of SQLEXEC (with a procedure). The issue is that one of our target DBs crashed due to disk corruption, while the other two pumps and Replicat processes are running fine. In this case, how do we restore the data for the crashed DB? Can you please give a solution to resolve this issue? Thanks in advance.
Hi Experts,
Any update please? 
Start the extract on the failed-but-now-up-again server based on a checkpoint far enough back in time to get those records sent to the target. 
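In GGSCI, that repositioning can be sketched like this. The process name dpump3 and the timestamp are assumptions for illustration only:

```
GGSCI> ALTER EXTRACT dpump3, BEGIN 2016-03-01 10:00
GGSCI> START EXTRACT dpump3
```

For a data pump you can alternatively reposition to an exact point in the trail with the EXTSEQNO and EXTRBA options instead of a timestamp, which avoids re-sending records the surviving targets have already applied.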
Thanks Steve.

Estimation of data loss in GG

Dear all,
As GG is an asynchronous solution, how can I estimate the possible data loss?
1. Is there any way to estimate the maximum data loss when using GG? Is it bounded only by the network?
2. What is the default time interval for synchronization? Is it configurable?
3. To cite an example:
- There are 5 transactions (e.g. update statements) transferred from production to the backup DB.
- At time t=1 the first transaction starts transferring to the backup DB, at t=2 the second, and so on.
- Due to a network issue, the 3rd transaction is not delivered to the backup, while the 4th and 5th transactions have already arrived at the backup DB.
- My question is: will the 4th and 5th transactions stay pending until the 3rd transaction is applied in the backup DB?
Thanks for your help.
It depends on how you configure this. I've never lost ANY data using GoldenGate.
The network latency depends on what you are throwing at the target and how much it can take. If you have too much data going through the replication, you might want to add more than one Replicat process and have them work in sync to read the trails.
Also, you say that for some reason update 3 doesn't happen? What does that mean? If it is committed, then GG will extract it from the redo log and send it over the network.
I'm not following what you are asking, sir. Please explain.
If configured properly, you will not lose any data. 
Hi,
As NK said, if OGG is configured properly there is no possibility of data loss. You can expect a slight lag in replication, as the data needs to be transferred over the network.
GoldenGate only captures committed transactions.
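Rather than estimating, you can measure that lag directly per process from GGSCI; for example (the process name rep1 is an assumed placeholder):

```
GGSCI> LAG REPLICAT rep1
GGSCI> SEND REPLICAT rep1, GETLAG
GGSCI> INFO REPLICAT rep1
```

LAG reports how far the process's last checkpoint is behind the data source; a response of "At EOF, no more records to process" means the process is fully caught up.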

GG behavior

Hi experts, as per the GG documents, the Extract process captures both committed and uncommitted data from the source database. Once a transaction is committed it is sent to the target system; otherwise it is rolled back. The question is:
1. Where does the Extract process store the uncommitted records? Is it in the trail files or in a buffer? If it is a buffer, what is the maximum amount of data that can be stored?
Please clarify. Thanks in advance.
Hi,
Oracle GoldenGate replicates only committed transactions. It stores the operations of each transaction (committed or uncommitted) in a managed virtual-memory pool known as a cache until it receives either a commit or a rollback for that transaction. The doc below clearly explains how the records are read and where they are stored. Hope this is helpful and clears your doubt.
Regards,
Veera
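The size and spill location of that cache are controlled by the CACHEMGR parameter in the process parameter file. A hedged sketch; the values and directory here are examples only, not recommendations:

```
-- cap the virtual-memory transaction cache at 2 GB and
-- spill long-running transactions to disk under ./dirtmp
CACHEMGR CACHESIZE 2GB, CACHEDIRECTORY ./dirtmp
```

When a transaction's operations exceed the in-memory cache, GoldenGate pages them out to files in the cache directory, so very large uncommitted transactions are bounded by disk space rather than memory.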