GG report files - GoldenGate

Is there any parameter in GoldenGate for purging the report files that are generated in the /dirrpt path?
Thanks in Advance 

There is no such parameter (besides rm or del) that purges the report files. However, they cycle out, so at any one time you will have a maximum of 11 report files: the current one plus current0...current9.
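Since there is no built-in purge parameter for report files, a scheduled find(1) job is the usual workaround. A minimal sketch; the function name and the 30-day retention are my own choices, not anything GoldenGate defines:

```shell
# Sketch only: GoldenGate has no parameter to purge dirrpt, so a cron
# job is the usual workaround. The 30-day retention is an assumption.
purge_old_reports() {
    # $1 = GoldenGate home, $2 = retention in days
    find "$1/dirrpt" -type f -name '*.rpt*' -mtime "+$2" -delete
}
```

Run it from cron against the GoldenGate home; the `*.rpt*` pattern also catches the aged rollover copies.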



Purgetrails of alternate files are not replicating in case of audited files

Hi All, In my production system, one extract is reading TMF audit trails but is not capturing purges of the alternate files of the ITLF log files, although it captures purges of the main file. I cannot see the alternate-file purges in the extract's output trail, but they are present in the TMF audit trails. In the test environment the same setup was placed and tested, and all ITLF purges, main and alternate files, were captured by the extract and written to the output trail. The GGS version and extract object are the same in the prod and test setups. Please provide some suggestions to resolve this. Regards, Keshav
Do you have the extract report file for both the production and test systems? A comparison between the two is important, not just of the parameter settings but of the extract feedback in the report. Another thing to do is to jump into TMFCOM and check the differences between the TMF setups; INFO TMF and INFO DATAVOLS will be worth checking. It looks like you are running Base24. Check that the alternate files are properly audited. What version of Base24 are you running? If you are using classic, which swims around without TMF, what steps did you take to make it use audited files? Cheers, Kee Gan. P.S. If the report files are too big, email them to
Hello Gan, thanks for your kind response. Yes, we are using Base24 version 6.10. It has the RBSI sub-product, which uses audited database files such as its log file ITLF; there is one main file and two alternate files, and all are audited. There is also no difference in the disk configuration for these three files between test and prod. The TMF configuration is the same except that AudDump is ON in prod. The extract report file shows three purge commands processed, for the 2 alternate files and 1 main file, in the test environment, but in prod it shows only one purge command, for the main file. The TMF audit trail shows all three purge commands on both systems, prod and test; still, in prod the extract puts only the main file's purge command into the output. Please let me know if anything else needs to be checked. Regards
I will need the report file to check the version. Please report this issue to Oracle support, as there is a bug with extract not capturing the alternate-file purge from TMF. Submit the full report file and ask support to check for the bug with not capturing this alternate-file activity. Alternatively, log in to MOS and search for "TMF alternate file purge missing". Sorry, I don't have a CSI nowadays to log in to MOS. Cheers, Kee Gan
Hi Gan, I searched for this kind of case in MOS but could not find one. Have you found or faced any such case? Thanks, Keshav
Hi Keshav, create a service request and report this to Oracle support. What version of GoldenGate are you running? You said test works and production does not. If you can attach both report files, I can compare the versions. You can email the report files to me. Kee Gan
Dear Gan, I have sent an email with the report files. I can see file operations for the main and alternate files in test, but in the prod report file I can see only main-file operations. Please check once and confirm whether I need to raise a case with Oracle. The GGS version is the same in both environments, the extract object is the same, and the parameter file is the same, so why is the prod extract not capturing the alternate-file operations? Please guide. Regards, Keshav
Ok, the production report file is useless as you have this parameter in it: REPORTROLLOVER AT 00:10, which I wrote about in a previous post, saying don't do this (OGG TIP: REPORTROLLOVER). I suppose the production extract has been running for over 10 days now and the report with the initialization page is gone. Stop/start the production extract; there is no harm, and take the opportunity to remove this annoying parameter. Then attach the report file. This also gives the extract a chance to redo the DDL for the production file. See if the alternate files are now captured; as this is TLF, you will need to see the results overnight. If it is still not happening, attach the report file for the restarted extract. This time I will be able to see your parameters. Cheers, Kee Gan
Hi Gan, it is very hard for me to change something in production; however, I used the same parameter file in test, stop/started the extract, and will send the report to you. You can distinguish the production files as $data* and the test files as $dvl* in that report file. The scanning counts for the test files are correct in the report file. Yesterday I found something in test, explained below:
1. Every day at 19:30, these ITLF files get purged in test, as in prod.
2. These purges are recorded in TMF as they are audited files. The trails are read by the datapmp extract, which just writes an output trail on the local server.
3. I kept datapmp started from 19:00 and checked the datapmp output trail after 19:30. I did not find a purge record for the alternate files, only one for the main file, the same as on the production server.
4. Then I searched the TMF audit file and found all the records for the alternate and main files. I altered datapmp to the RBA where those records sit in the TMF audit trail file and started it; this time datapmp captured the alternate-file records and put them into the output trail.
5. That means that while running normally the extract did not capture the alternate files' records, but when I changed the RBA and positioned it there, it captured them.
I will send all files through email. Please check. Regards, Keshav
Hi Keshav, ok, the issue here is that on restart extract re-evaluates the files. For a newly created file, extract did not resolve this completely and missed the alternate files. Try the test again, creating the TLF (just create the new TLF manually as it is easier) while extract is running with these parameters:
FILERESOLVE BOTH
ALTFILERESOLVE (the default, but put this in)
Cheers, Kee Gan
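As a sketch, the extract parameter file would carry the two resolution parameters ahead of the FILE entries; the extract, trail, and file names below are placeholders, not taken from the thread:

```
EXTRACT EXTTLF
EXTTRAIL \NODE.$DATA01.GGSDAT.AA
FILERESOLVE BOTH
ALTFILERESOLVE
FILE $DATA01.BASE24.ITLF*;
```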
Hi Gan, thank you very much, sir. Since your last comment I have been testing with the parameter FILERESOLVE BOTH for these ITLF log files, and it worked as expected: with this parameter the extract captures the purge records for the alternate files as well as the main file. I observed this for 2-3 days both with and without the parameter. Thank you very much for this kind support. Could you give me some more information on how this parameter works? I read the manual but did not understand its meaning. Regards, Keshav
Hi Keshav, that is good to hear. The default is supposed to be FILERESOLVE DYNAMIC, whereby files are resolved as they are encountered. So if you have
FILE $DATA.SUBVOL.FILEA;
...
FILE $DATA.SUBVOL.FILEZ;
then FILERESOLVE IMMEDIATE will attempt to resolve all 26 files whether or not they are encountered, so DYNAMIC is more efficient. However, as I noted earlier, I think this is a bug, so using another set of parameters causes different areas of the code to be executed, and that is what you are seeing. Bottom line: the code is wrong, we are just hacking around it. Since this is resolved, can you please mark it as answered for the benefit of other forum viewers. Cheers, Kee Gan
Hello Mr. Gan, I tried to reach you. I used the FILERESOLVE BOTH parameter for the log files in production, but it seems extract only captured purge operations for alternate files that already existed, and skipped those created after the stop/start of extract. Please provide your views. Regards, Keshav
Hi Keshav, ok, I thought you had tested both cases earlier. If that is the situation, there isn't a lot we can do, as this is a bug. Can you provide me with your parameters to check? Thanks, Kee Gan
Hello Mr. Gan, I sent an email to your Gmail ID. Please check the parameter file. Regards, Keshav

Goldengate DIRPRM files deleted

Hi guys, I have GoldenGate 11.2 installed with a bi-directional setup. By mistake, I deleted all the files in dirprm. GoldenGate is still up and running. Is there any way to bring those files back? What would be my next course of action? Hoping for a quick reply. Regards,
If anyone has any clue or has gone through this, please reply. I am in panic mode.
Can you do an "info <extract>" or "info <replicat>" a couple of times?
What will that do? I have run the info command many times.
You can rebuild all your parameters from the report files. The first pages should list all the parameters. You will need to edit them somewhat, as report files blank out passwords etc., but the essential bits are all there. Cheers, Kee Gan
From the dirrpt directory. 
Thanks for the reply, K.Gan. I have started doing that. It will be a lot of work matching environments, trail files etc. I was hoping for something similar to SQL> create pfile from spfile. Thanks, Alex.
Unfortunately there isn't one; OGG is a decoupled utility, and you can use one instance on several databases etc. For a start, save the dirrpt directory now; you don't want to lose that. Then save a copy of the output of:
ggsci> info * showch
ggsci> info * detail
Note: don't forget to recreate mgr.prm. You just need to recreate the parameter files; the ongoing stuff is all there: the checkpoints, trail locations etc. Cheers, Kee Gan
Hi, the only way to recreate the parameter files is by using the report files. In future, to avoid such scenarios, please take a backup of the parameter files and checkpoint files on a regular basis. Regards, Veera
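A minimal sketch of such a backup, assuming the default OGG directory layout; the function name, paths, and schedule are illustrative, not from the thread:

```shell
# Sketch: archive the parameter and checkpoint directories so a
# deleted dirprm can be restored without rebuilding from reports.
# Directory names follow the default OGG layout; paths are assumptions.
backup_ogg_config() {
    # $1 = GoldenGate home, $2 = backup destination directory
    stamp=$(date +%Y%m%d%H%M%S)
    tar -czf "$2/ogg_config_$stamp.tar.gz" -C "$1" dirprm dirchk
}
```

Run it from cron, e.g. daily, and keep the archives somewhere outside the GoldenGate home.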

OGG-01262 Error

I have my extracts abending with OGG-01262: The call to cm_obj_find() function from line 2108 in CM_object_create() failed with reason 'cm_obj_find: object (e.g., trans) already exists: error:115'.
Any thoughts?
Please upload the report.
It says the object already exists, hence the error. Drop the object and retry.
Also do this command and post your output
ggsci> view report <extract_name>
I ended up having to delete the extract and re-add it.
The object in question was in the GGUSER schema.
Security restrictions prevent me from posting any system document. 
I am Ram. I want to install GoldenGate in VirtualBox. I installed VirtualBox on 32-bit Windows, created two nodes in it, and installed the 11g R1 database on both nodes. Then I created the mount point /u01/app/oracle/product/gg.
I got as far as the replicat process: on the source database I started the manager and the extract group, and both are working fine. On the destination database I am trying to start the replicat process but it is not starting; the status of the replicat is STOPPED. Please, can anyone help me?
Please start your own thread for this issue. What does the report show?
I am having this exact same problem.
Did you find any solution?

Purge Old extracts problem

Dear All,
I have a typical problem while purging old extracts.
GG version: 11
OS: HP-UX
For the GG extract source trail I chose a size of 10 MB (source trail convention P1).
For the GG replicat target trail I chose a size of 100 MB (target trail convention P1).
If I check the data pump with
send P1 status
the source trail file is writing at file 10000 and the target trail file is writing at file 1000.
In the mgr parameter file I specified, for PURGEOLDEXTRACTS, P1* USECHECKPOINTS MINKEEPHOURS 6, but manager is considering the target trail file number, not the source trail file number. So does manager work by file number or transaction-wise?
Every time, in ggserr.log, the file rolls over (increments to +1 of the existing number) and the mgr process deletes that old file. Because of this my mount point is getting filled, so please advise me.
Thank you very much in advance.
Best Regards
If I'm not wrong, you should name your source and target trail files differently, else the purge might fail. Is this the case?
Dear NK,
You are correct, the purge is failing. I think it is because of the naming convention, and secondly the file sizes are different. Could you please suggest a better way to solve this issue for a smoother PURGEOLDEXTRACTS run?
Thank you very much.
Put the trail files in separate sub-directories with meaningful names, such as /ext or /rmt, and use different two-character prefixes for the local and remote trail file names.
Use the correct parameters:
purgeoldextracts <location of your trail files>/P1*, USECHECKPOINTS
And make your trail files much, much larger. It sounds like you're just starting and you're already into the thousands. Trails should follow the same old standard rule as Oracle redo logs: no more than 4 an hour.
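Putting that advice together, a mgr.prm sketch might look like this; the paths, prefixes, and retention are illustrative, not from the thread:

```
-- mgr.prm sketch: one PURGEOLDEXTRACTS rule per trail, with the
-- local and remote trails under separate paths and prefixes
PURGEOLDEXTRACTS /ggs/dirdat/ext/P1*, USECHECKPOINTS, MINKEEPHOURS 6
PURGEOLDEXTRACTS /ggs/dirdat/rmt/R1*, USECHECKPOINTS, MINKEEPHOURS 6
```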
Good luck,

World writable files in /ggs directory

Hi Experts,
I installed Oracle GoldenGate version 14171650_FBO on a Linux system.
I now see that there are many files with permission -rw-rw-rw-, i.e. world-writable.
For example: ggserr.log, discard files, report files (e.g. MGR.rpt), trail files ...
Of course our security guys don't appreciate this ...
Does anyone know how I can configure GoldenGate so that these files are created with more restrictive permissions?
The umask of the Unix user is not the problem.
thanks in advance,
In the meantime I found the parameter OUTPUTFILEUMASK, which is useful for setting the permissions of trail and discard files.
For a solution covering ALL files, an RFE seems to be necessary ...
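For reference, OUTPUTFILEUMASK goes in the GLOBALS file; a sketch follows, where the 0027 value is my own example (it yields rw-r----- on newly created files):

```
-- GLOBALS file sketch; applies to newly created trail and
-- discard files only, not to ggserr.log or report files
OUTPUTFILEUMASK 0027
```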