GoldenGate GETUPDATEBEFORES column - GoldenGate

Hi Friends,
I have a Table A with columns X and Y in the SOURCE database, and a Table B with columns X1 and Y1 in the TARGET database. I want to track column Y of source table A: whenever there is an insert or update, column Y has to be compared with column Y1 of table B in the target DB. If the value is different, the changed value has to be applied to column Y1; otherwise, ignore it. Can anyone explain in simple steps how to achieve this task?
Thanks in advance
ESSKAY

Hi,
If you haven't gone through this doc from Metalink, here it is: [ID 1085140.1]. It briefly explains GETUPDATEBEFORES and also covers your requirement. Hope this works.
Rows can be ignored or discarded, rather than sent to the database, if before/after filters are used in the replicat. This reduces the amount of data sent to the database when certain data has not changed, which is useful in high-volume environments where not all row changes are of interest.
Solution
Sample:
There are two parts to it.
First, on the Oracle side, supplemental logging has to be enabled for all the columns you are going to use in the target table. The upside of this is that you won't need to check whether a column is present with your function. In fact, that was part of the problem.
add trandata tab8811s, cols (name, city, state)
Second, you need to compare the before and after images with @STREQ. Here is my sample:
REPLICAT r8811
reperror 21000, discard
SOURCEDEFS e:\gg\Teradata\DIRDEF\def8811.def
DISCARDFILE e:\gg\teradata\discards\r8811.txt, PURGE
GETUPDATEBEFORES
TARGETDB vmprime_mzerger, USERID mzerger, PASSWORD mzerger
-- if name and city are both unchanged, do not replicate update
MAP mzerger.tab8811s
TARGET mzerger.tab8811t,
filter (
on update,
(
(1 = @STREQ (BEFORE.name, name)) AND
(1 = @STREQ (BEFORE.city, city))
),
raiseerror 21000
)
;
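For the original question, the same pattern applies almost directly. A minimal sketch with assumed names (src.A with columns X and Y mapping to tgt.B with X1 and Y1), and assuming the two tables start out in sync so that comparing Y's before image to its after image is equivalent to comparing against Y1 on the target:
-- before images of Y must reach the trail: log them on the source
-- (GGSCI: ADD TRANDATA src.A, COLS (y)) and capture them with GETUPDATEBEFORES
GETUPDATEBEFORES
MAP src.A, TARGET tgt.B,
FILTER (ON UPDATE, 0 = @STREQ (BEFORE.y, y)),  -- pass only when Y actually changed
COLMAP (x1 = x, y1 = y);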
Thanks.

How to Map Source and Target table with differences?

Hi Experts,
1) My source and target DB table columns are different. How do I map them? (Will a def file help me? If so, I have 75 such tables and more are being added. How can I take care of all the tables?)
Example 1
source DB
-------------
table1(CREATED_BY NUMBER(15), LAST_UPDATED_BY NUMBER(15))
Target DB
--------------
table1(CREATED_BY VARCHAR2(40 CHAR), LAST_UPDATED_BY VARCHAR2(40 CHAR))
2) And how do I take care of tables where the target DB has some extra columns that the table in the source DB lacks?
Example 2
source DB
-------------
table1(Colum1, colum2)
Target DB
--------------
table1(Colum1, colum2, colum3, colum4, colum5)
~Lother 
For your second question you can use COLSEXCEPT on the MAP statement.
Please refer to the reference document for examples on this. 
Yes, you have to generate a definitions file with the DEFGEN utility;
see complete details on this:
http://gavinsoorma.com/2011/08/goldengate-replication-using-a-data-definition-file-and-defgen-utility/ 
Thanks Kuljeet,
Yes, I tried. The problem is I have 75+ tables, and generating 75 .def files would be a big job. Isn't there any other way, or can I club them into one def file?
~Lother 
Thanks N K,
I have the same problem here as well. I have a huge list of tables, 75+. So do I need to do this mapping for all 75 tables instead of db1.*?
~Lother 
Yes, you can put everything into one file:
defgen.prm:
DEFSFILE /<PATH>/Source.defs, purge
userid X, password Y
TABLE owner.name;
TABLE owner.name2;
TABLE owner.name3;
TABLE owner.name4;
Then execute:
./defgen paramfile /<path>/defgen.prm
Then in the replicat param:
SOURCEDEFS /<path>/Source.defs
Hopefully this is what you were looking for :) 
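As a side note, later GoldenGate releases also accept wildcards in the DEFGEN parameter file, which avoids maintaining the 75-table list by hand; check the reference guide for your release before relying on this:
DEFSFILE /<PATH>/Source.defs, purge
userid X, password Y
TABLE owner.*;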
Thanks N K again,
The .def file is the answer to my 1st question.
How about my second question?
~Lother 
Just replied to you in a different post! Look at it and let me know if that is what you were looking for!
Greetings,
N K 
Quoting my 1st message:
"for your second question you can use COLSEXCEPT on the map statement. Please refer to the reference document for examples on this."
Taken from the reference guide:
Using COLS and COLSEXCEPT
Use COLS and COLSEXCEPT to control column selection.
● Use COLS to specify columns whose data you want to synchronize. All other columns are
ignored by Oracle GoldenGate.
● Use COLSEXCEPT to exclude columns from synchronization. All other columns are
processed by Oracle GoldenGate. For tables with numerous columns, COLSEXCEPT may
be more efficient than listing each column with COLS. Do not exclude key columns.
To use COLS, the following is required:
● The table must have one or more key columns, or else a substitute key must be defined
with the KEYCOLS option of TABLE.
● The key columns or the columns specified with KEYCOLS must be included in the column
list specified with COLS. Otherwise, they will not be captured, and an error will be
generated during processing. (Note: Without COLS, key columns are automatically
captured.)
Without a primary or unique key or, in the absence of those, a KEYCOLS clause in the TABLE
statement, Oracle GoldenGate uses all of the columns in the table, rendering COLS
unnecessary.
Do not use this option for tables being processed in pass-through mode by a data-pump
Extract group.
Syntax: TABLE <table spec>, {COLS | COLSEXCEPT} (<column> [, ...]) ;
where <column> is the name of a column. To specify multiple columns, create a comma-delimited list, as in the following examples.
The following processes only columns 1 and 3:
TABLE hq.acct, COLS (col1, col3);
The following processes all columns except column 4:
TABLE hq.acct, COLSEXCEPT (col4);
NOTE: If the database uses compressed updates (where column values are not logged unless they changed), a column specified for extraction with COLS might not be available. To make these columns available, use the FETCHCOLS option in the TABLE statement or enable supplemental logging for the column.
Using DEF
Use DEF to specify a source-definitions template. The definitions template is created based on the definitions of a specific source table when DEFGEN is run for that table. Once the template is created, new source tables that have identical definitions to that table can be added without having to run DEFGEN for them, and without having to stop and start Extract. The definitions in the template specified with DEF will be used for definitions lookups.
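For Example 2 in the question above (extra target-only columns), often no COLSEXCEPT is needed at all: with the definitions file in place, any target column that is not mapped is simply left NULL (or at its database default) on inserts. A minimal sketch reusing the names from the post:
MAP source.table1, TARGET target.table1,
COLMAP (USEDEFAULTS);
-- Colum1/colum2 map by name; colum3, colum4 and colum5 exist only on the
-- target, are never mapped, and stay NULL.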

DB2 Data Pump not capturing UPDATES and DELETES from EXTTRAIL

Hi guys. I'm trying to extract data from a DB2 for LUW source to a SQL Server 2014 target database. Currently I am facing the problem of not capturing UPDATE and DELETE operations at my data pump. My configuration is the following:
Extract:
EXTRACT EXTPER
SOURCEDB SourceDB, USERID ADMIN, PASSWORD secretpassword
--TRANLOGOPTIONS NOUSEREXIT
EXTTRAIL ./dirdat/PE
TABLE ADMIN.TableA;
Extract data pump:
EXTRACT EXTPPER
--TRANLOGOPTIONS NOUSEREXIT
--PASSTHRU
SOURCEDB SourceDB, USERID ADMIN, PASSWORD secretpassword
RMTHOST 10.123.80.106, MGRPORT 7810
RMTTRAIL ./dirdat/PE
TABLE ADMIN.TableA
    ,COLS (ColA, ColB, ColC)
    ,KEYCOLS (ColA)
    ,FILTER (ColB = 1 OR ColB = 2 OR ColB = 3 OR ColB = 19)
    ,SQLEXEC (ID LKPORGN2, QUERY " SELECT ORGN2 FROM ADMIN.TableA WHERE ColA = ? ", PARAMS (p1 = ColA))
    ,TOKENS (TOKORGN2 = LKPORGN2.ORGN2);
After an insert and an update: the process is fully working for inserts, but when I issue an update, it is not propagated.
The command GGSCI> STATS EXTPER returns 1 insert and 1 update.
The command GGSCI> STATS EXTPPER returns 1 insert and 1 update.
When I configured the processes I used the following commands:
ADD EXTRACT EXTPER, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/PE, EXTRACT EXTPER, MEGABYTES 5
ADD EXTRACT EXTPPER, EXTTRAILSOURCE ./dirdat/PE
ADD RMTTRAIL ./dirdat/PE, EXTRACT EXTPPER, MEGABYTES 5
Does someone have any hints about what can be happening? Why can I get the updates at the primary extract process, but not at the data pump process? Is there any problem with getting updates from data pump processes?
Best regards,
Andre
First: "The command GGSCI> STATS EXTPPER returns 1 insert and 1 update." Do you mean it returns only 1 insert? Otherwise it does not make sense.
It looks like your filter may have caused the update to be dropped on the pump. You can create another pump, but without the filters, and I am sure it will capture it. So, did the update contain one of ColB = 1 OR ColB = 2 OR ColB = 3 OR ColB = 19? If not, it would be dropped. In addition, if you start checking many non-key columns you need to log all columns, or else the update will only carry the key column(s) and the columns that were touched.
Please explain your use of KEYCOLS, as this is rarely used in an extract. The reference manual does not help :-(, so what's new.
You use COLS. If you do this it generally means that your table has lots of columns but you only want those declared in COLS to be written to the trails.
Cheers
Kee Gan
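To confirm the filter is what drops the updates, a throwaway second pump with no COLS/FILTER clauses can read the same local trail (process and trail names here are assumed):
EXTRACT EXTPTEST
PASSTHRU
RMTHOST 10.123.80.106, MGRPORT 7810
RMTTRAIL ./dirdat/PT
TABLE ADMIN.TableA;
-- GGSCI: ADD EXTRACT EXTPTEST, EXTTRAILSOURCE ./dirdat/PE
-- GGSCI: ADD RMTTRAIL ./dirdat/PT, EXTRACT EXTPTEST, MEGABYTES 5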
Hi Kee Gan. Once again, thank you for your feedback.
"The command GGSCI> STATS EXTPPER returns 1 insert and 1 update. Do you mean return only 1 insert, else it does not make sense."
What I meant was that if I do an insert, and after that an update, the extract process shows that an insert and an update were executed in the last hour. But the pump process only got the insert statement.
"So did the update contain one of ColB = 1 OR ColB = 2 OR ColB = 3 OR ColB = 19, or it would be dropped. In addition, if you start checking many non-key columns you need to log all columns, or else the update will only carry the key column(s) and the columns that were touched."
This was the problem. Since I was executing updates without any of these columns, and I was logging only the updated columns, the statement was being dropped by my data pump process. I am doing this to reduce the number of transactions that will be sent over the network. Thanks to your feedback I realized that if I remove the filter, or use ColB in the update, then I can replicate the update. I am going to try to find the best way of logging all columns to the trail file with DB2.
"Please explain your use of KEYCOLS, as this is rarely used in an extract."
This is my first GoldenGate project, and after reading some information and manuals, what I understood was that using KEYCOLS would help the process treat these columns as an index, and therefore I thought it would help any process.
"You use COLS. If you do this it generally means that your table has lots of columns but you only want those declared in COLS to be written to the trails."
My source is a legacy system database, and I don't want most of the columns. The table has around 80 columns and I only want 25. This is why I am using the COLS parameter.
Long story short: the problem was related to not having all columns logged for updates and deletes. Since I am using a FILTER clause, the filter caused the updates and deletes to be dropped and not used in the process. I need to enable the logging of all columns to overcome this challenge.
Thanks again for your feedback that made me realise my problem.
Best regards,
Andre

GoldenGate bulk column mapping

Hi, I have a requirement to log all transactions from source to target. If the source table has three columns, the target table will have double the number of columns, in this case six, where the "B_" columns hold the before values of the source's columns.
source.table1 (col1 number, col2 varchar2, col3 varchar2(5))
target.table1_history (col1 number,
                       col2 varchar2,
                       col3 varchar2(5),
                       B_col1 number,
                       B_col2 varchar2,
                       B_col3 varchar2(5))
The following mapping is currently working:
MAP source.table1, TARGET target.table1_history,
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,
B_col1 = @BEFORE (col1),
B_col2 = @BEFORE (col2),
B_col3 = @BEFORE (col3));
Questions:
1) We have tables with more than 200 columns and over 1000 tables to map. Can we simplify this for such huge tables, something like:
MAP source.table1, TARGET target.table1_history,
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,    -- after values
COLMATCH PREFIX "B_" USEDEFAULTS = @BEFORE (USEDEFAULTS));
Note: tried it, but it does not work.
2) When we update just one column on the source, the target table's after values are null for columns which are not replicated, if they are not part of the key columns.
Source: table1
col1  col2  col3
1     Sam   true
2     Tim   true
update table1 set col3 = 'false' where col1 = 1;
Target: table1_history
col1  col2  col3   b_col1  b_col2  b_col3
1     null  false  1       Sam     true
How do I replicate all the values even if they are not part of the source update statement?
Thanks for the help,
Poornima.
Hi Poornima,
2) The second part is easy: just log all columns. You can alter the database:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
or alter the individual tables to log them. If you are running v12 you can:
GGSCI> ADD TRANDATA <table>, ALLCOLS
1) To log the before and after images once you have logged all columns as above, see https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_datainteg.htm#GWUAD447 and search for the section 'To implement transaction reporting'. See the sample MAP. So what you need is two tables rather than one combined one. Both tables look like your source, with a few extra columns to indicate what the record is, as per the sample MAP statement. To see both tables together, create a view over them on your target.
Cheers
Kee Gan
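For reference, the 'transaction reporting' sample in that doc boils down to the following shape (a sketch, not Poornima's exact tables; the three indicator columns are assumed additions to the history table). With all columns logged and GETUPDATEBEFORES in the extract, each update yields a before record and an after record, and INSERTALLRECORDS turns each one into its own history row:
-- extract side
GETUPDATEBEFORES
NOCOMPRESSUPDATES
TABLE source.table1;
-- replicat side
INSERTALLRECORDS
MAP source.table1, TARGET target.table1_history,
COLMAP (USEDEFAULTS,
    bef_aft_ind = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"),
    op_type     = @GETENV ("GGHEADER", "OPTYPE"),
    commit_ts   = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));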
For question no. 1, the syntax for COLMATCH is as given below:
COLMATCH NAMES CUSTOMER_CODE = CUST_CODE
COLMATCH NAMES CUSTOMER_NAME = CUST_NAME
COLMATCH NAMES CUSTOMER_ADDRESS = CUST_ADDR
COLMATCH PREFIX S_
COLMATCH SUFFIX _T
MAP SALES.ACCT, TARGET SALES.ACCOUNT, COLMAP (USEDEFAULTS);
MAP SALES.ORD, TARGET SALES.ORDER, COLMAP (USEDEFAULTS);
COLMATCH RESET
MAP SALES.REG, TARGET SALES.REG;
MAP SALES.PRICE, TARGET SALES.PRICE;
Or you can use another method: instead of creating extra columns in the same table, you can create a second table with another relevant name and store all the before-update data in it, so that it has the same structure as the source. With this, the column mapping would be effortless. The answer to question 2 below is an example of the same.
For question 2 you can use the solution below. You can use GETBEFORECOLS in the extract to get the before value of each column of the table, as in the following example:
EXTRACT E1
userid ggs_owner, password AACAAAAAAAAAAAJAGISJZFPDWIKADGIGKCZBGGREUBKBPIBC, encryptkey default
EXTTRAIL ./dirdat/E1
TABLE ACTIVE.EMP, GETBEFORECOLS (ON UPDATE ALL, ON DELETE ALL);
REPLICAT R1
userid ggs_owner, password AACAAAAAAAAAAAJAGISJZFPDWIKADGIGKCZBGGREUBKBPIBC, encryptkey default
ASSUMETARGETDEFS
DECRYPTTRAIL
MAP ACTIVE.EMP, TARGET TARGET.EMP;
INSERTALLRECORDS
MAP ACTIVE.EMP, TARGET TARGET.EMPTMP, COLMAP (USEDEFAULTS, EMPNO = before.EMPNO, ENAME = before.ENAME, JOB = before.JOB, MGR = before.MGR, HIREDATE = before.HIREDATE, SAL = before.SAL, COMM = before.COMM, DEPTNO = before.DEPTNO);
Hi Kee,
Thank you for looking into this. Creating the two tables for before/after images and a view on top is a good idea. We are on Oracle 12 and GG 12. DB supplemental logging for all columns is enabled, as is trandata for all columns. I also implemented both approaches:
- A history table with the source columns and just a before/after indicator. But I don't see the rows with the "Before" indicator at all in the history tables.
- A history table with two sets of columns (b/a), where I don't see all the columns populating even with the mapping. The issue is only with the tables configured with the replication parameter @BEFORE: all the before columns are populated, but for the after columns, only the updated columns from the update statement and the key columns have values.
Thanks,
Poornima.
Hi Sachin,
Thanks for looking into this. This is the solution I implemented, and it is working. But is there a workaround to map all the columns at once with a prefix? Something like:
COLMATCH PREFIX "B_" table.*
Thanks,
Poornima.
We know @BEFORE works; you have done so in the original question. You just don't want to manually assign @BEFORE to each column. Can you please show me your extract? Version 12 by default combines the before and after images into a consolidated trail record. Also, are you logging all columns? You can check this using logdump:
logdump> open <trail>
logdump> detail data
Go search for that record.
Cheers
Kee Gan
Use GETBEFORECOLS to log all the before values in the trail, as suggested earlier. You can use GETBEFORECOLS in the extract to get the before value of each column of the table, as in the example below:
EXTRACT E1
userid ggs_owner, password AACAAAAAAAAAAAJAGISJZFPDWIKADGIGKCZBGGREUBKBPIBC, encryptkey default
EXTTRAIL ./dirdat/E1
TABLE ACTIVE.EMP, GETBEFORECOLS (ON UPDATE ALL, ON DELETE ALL);

Data load with filter

Hi Experts,
We have a GG setup between two Oracle databases, version 11.2. Can you please confirm whether it is possible to filter on the replicat when using the direct initial load?
For example, we already have table A on both sides with the same structure, and there is a column AMOUNT. We want to filter the data based on that column: if the AMOUNT value mismatches between the source and target tables, then that record should be inserted into table B on the target side. This must be performed with the initial load configuration (a SOURCEISTABLE extract and a SPECIALRUN replicat): when we start the initial load extract on the source, it should check the column value, and if there is any difference, that record should be inserted into table B. Table B has the same structure as A.
We tried the option below, but no luck; the replicat process stats show a number of inserts but the actual table has no data:
MAP src.A, TARGET tgt.B,
FILTER (@GETVAL (chk1.cnt) > 0),
SQLEXEC (ID chk1, QUERY "select count(1) as cnt from tgt.A where amount <> :p1", PARAMS (p1 = amount), BEFOREFILTER);
The above checks the AMOUNT column values between the source and target A tables, and if there is any difference, it should insert that record into table B. The primary key for table A is the ID and NAME columns. I tried with KEYCOLS (id, name, amount) as well, but when I start the extract on the source it runs, and the replicat stats show 10 rows inserted, but table B has no data.
Please help.
Thanks
Arun
Hi Arun,
Did you actually try this on the target DB to see if you get anything?
select count(1) as cnt from tgt.A where amount <> <amount from src>
Cheers
Kee Gan
Hi Gan,
Thanks. Actually the problem is the volume of data, I believe: since table A has 2 million rows, it has been running for more than 2 hours. It might be the delay of checking and validating the value of the AMOUNT column for 2 million rows. Is there any way to speed this up? Kindly suggest.
Thanks,
Arun
Hi Arun,
Not really, as this is a database issue. If you do a count, it is going to go through all 2 million rows every time. Try to think of another way of doing this. Because this is an initial load, you only do it once. I would import the whole table into the target database and do your joins or unions there to get the result you want. Using OGG for this is not efficient.
Cheers
Kee Gan
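In other words: do a plain initial load of source table A into a staging copy on the target, then let the database find the mismatches in one set-based pass. A sketch in SQL, assuming a staging schema and the key columns mentioned in the thread (ID and NAME):
INSERT INTO tgt.B
SELECT s.*
  FROM stage.A s
  JOIN tgt.A t
    ON t.id = s.id
   AND t.name = s.name
 WHERE t.amount <> s.amount;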
Thanks Gan,
Yes, I think I had better consider a different option.
Also one more query: in the target, one of the tables has 20 columns, but the same table in the source has only 5 columns. I need to replicate only those 5 column values to the target table; the other 15 columns can be null in the target. I generated a def file and used a COLMAP, but it says a key column is missing from the map. The target doesn't have a unique or primary key. Can you please suggest?
Thanks,
Arun
OK, you get that error only for updates. If there are no keys, the replicat will want every column, because all the columns become the key. What you can do is use a logical key, which is the MAP KEYCOLS option. Or you can make sure that the 5 columns are always logged at the source. You can use:
GGSCI> ADD TRANDATA <table>, COLS (...)
Cheers
Kee Gan
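For example, if some column combination among the 5 replicated columns uniquely identifies rows, the logical key can be declared right on the MAP (names assumed):
MAP src.table1, TARGET tgt.table1,
KEYCOLS (col1),
COLMAP (USEDEFAULTS);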
Hi Gan,
You are correct, thanks. I inserted a seqno into one of the target columns and made it the primary key; it works fine now.
Thanks,
Arun

How to get transaction timestamp for a specific column replication

Hello Everyone,
I have a table "SAL" on source end with columns -
Emp_name varchar2 (50)
Emp_id varchar2(30)
Salary number(10)
At the target end I have a similar table with an extra column, Tran_Data, which keeps a record of the latest transaction or change on the source end and is updated at the target end via @DATENOW().
I have recently used the COLS (Emp_id) functionality in the extract on the source end in order to get updates for the Emp_id column only at the target end.
With updates to Emp_id on the source, they do get replicated to the target, and tran_data also gets updated with the latest timestamp.
However, when updating any other column, e.g. Emp_name, which is not included in COLS (Emp_id), the target table does not get the data replicated, but the timestamp in tran_data still gets updated with the latest time.
Can anyone please help me out with how to avoid the update to tran_data in the above scenario?
Thanks in advance. 
Can you please post your configuration?
Extraction EXT1 (Source end)
EXTRACT EXT1
USERID gg_target, PASSWORD *********
EXTTRAIL /u3/ncrmst3data/oracleGG/dirdat/e1
TABLE GG_TARGET.SAL, COLS (Emp_id), TOKENS (TKN-COMMIT-TS = @DATENOW ());
Datapump Extraction DPUM1 (Source end)
EXTRACT DPUM1
NOPASSTHRU
RMTHOST 10.77.16.13, MGRPORT 7809
userid gg_target, password *********
RMTTRAIL /oraclegg/dirdat/r1
TABLE GG_TARGET.SAL, COLS (Emp_id);
Replication REP1 (Destination end)
replicat REP1
userid GG_ADMIN, password *********
reperror 00001, discard
reperror 01004, discard
reperror 01154, discard
reperror 01296, discard
reperror 01151, discard
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAXTRANSOPS 1
GROUPTRANSOPS 1
DISCARDFILE /oraclegg/dirrpt/REP1.DSC, APPEND megabytes 20480
MAP GG_TARGET.SAL, target GG_ADMIN.SAL, COLMAP (USEDEFAULTS, TRAN_DATA = @DATENOW ());
The following maps all source columns to target columns where the source/target names are the same, and explicitly maps the target "N" column to the source "K" column:
    MAP test.table, TARGET test.table2,
    COLMAP (
    USEDEFAULTS,
    N = K);
The following ignores all columns from the source DB that are not explicitly listed:
    MAP test.table, TARGET test.table2,
    COLMAP (
    N = K,
    HI = HO,
    .
    .
    .
    something = else);
In your case it should work by doing this:
MAP GG_TARGET.SAL, target GG_ADMIN.SAL, COLMAP (Emp_id = Emp_id, TRAN_DATA = @DATENOW ());
Let me know if this works.
Please remember to mark your questions as answered in order to keep the forum clean.
Greetings,
N K
934671 wrote:
I have a table "SAL" on source end with columns -
Emp_name varchar2 (50)
Emp_id varchar2(30)
Salary number(10)
At the target end I have a similar table with an extra column, Tran_Data, which keeps a record of the latest transaction or change on the source end and is updated at the target end via @DATENOW().
I have recently used the COLS (Emp_id) functionality in the extract on the source end in order to get updates for the Emp_id column only at the target end.
With updates to Emp_id on the source, they do get replicated to the target, and tran_data also gets updated with the latest timestamp.
However, when updating any other column, e.g. Emp_name, which is not included in COLS (Emp_id), the target table does not get the data replicated, but the timestamp in tran_data still gets updated with the latest time.
Can anyone please help me out with how to avoid the update to tran_data in the above scenario?
It's not clear exactly what is desired to be replicated from the source to the target... Do you
(1) want all of the source columns to be replicated, or
(2) want just "emp_id" from the source, and the generated pseudo-column "tran_data"?
If option #1: i.e., you do want the target updated for every change that happens on the source (regardless of which column has been updated), but simply do not want "emp_id" to ever be empty, be sure that it is defined as a "pk" column and you've properly run "add trandata" on the source DB table. (If "emp_id" is not a pk column, then use "add trandata" with "cols", and add a "keycols" option to your "table" and "map" statements.) In this case, for example, if the "salary" column changes, you'll actually get a change event with "tran_data" populated, and "emp_id" will be there too as long as it's a pk (or unique index) column; or, if it's not a pk column, you must either fetch it (FETCHCOLS) or force-log the column (add trandata).
If option #2: i.e., you do not want a change record replicated to the target for every change that happens on the source DB, you'll need to use a filter on the source. In that case, add a "filter" clause to the table statement, and don't replicate the record if "emp_id" is @ABSENT (if it's not a pk and not force-logged); or, if "emp_id" is always @PRESENT (because it's a pk, or is force-logged), then you can prevent the record from being replicated if the before image is equal to the after image.
Sorry if that's too much info and/or not clear; but if you can provide a concrete example of what you're looking to do, I'm sure you'll get a more clear answer.
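To make option #2 concrete: since emp_id here is a pk (so its before image can be made available), a sketch of the source-side TABLE statement could look like this, assuming supplemental logging of emp_id and GETUPDATEBEFORES in the same extract:
GETUPDATEBEFORES
TABLE GG_TARGET.SAL, COLS (Emp_id),
FILTER (ON UPDATE, 0 = @STREQ (BEFORE.Emp_id, Emp_id)),  -- only when emp_id changed
TOKENS (TKN-COMMIT-TS = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));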
Btw, fyi #1, your pump really could be re-written as follows, since it doesn't do any type of filtering, mapping or user exits (and therefore can be a passthru pump), and since a pump can't ever send more data than what it's receiving, you can wildcard the table name and pass all columns:
EXTRACT DPUM1
PASSTHRU
RMTHOST 10.77.16.13, MGRPORT 7809
RMTTRAIL /oraclegg/dirdat/r1
TABLE GG_TARGET.*;
And fyi #2, @DATENOW() is NOT the commit timestamp. It is the time that GoldenGate is reading the operation from the log which contains the committed transaction. Just as an example, if you stop extract, execute a commit, and start extract an hour later, your real commit timestamp and @DATENOW() will be different by an hour.
Instead, use the following, which actually gets the commit timestamp from the logs (remembering that commit timestamps in Oracle are inherently imprecise, of course):
TKN-COMMIT-TS = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
Hello Mike,
My exact requirement is to replicate only a specific column's data from source to destination, and to update the timestamp at the destination table only when an update to that specified column is applied.
Here I have specified Emp_id as the only column to be replicated to the destination table by using COLS (Emp_id) in the extract file. As I update Emp_id at the source end, it does get replicated to the destination table, and the tran_data timestamp gets updated with the latest time.
However, when I update any other column, let's say Emp_name, which is not mentioned in COLS (Emp_id), it does not get replicated at the destination end, but the tran_data timestamp still gets updated to the latest time, which is undesirable.
Please see my configuration details again -
Extraction EXT1 (Source end)
EXTRACT EXT1
USERID gg_target, PASSWORD *********
EXTTRAIL /u3/ncrmst3data/oracleGG/dirdat/e1
TABLE GG_TARGET.SAL, COLS (Emp_id), TOKENS (TKN-COMMIT-TS = @DATENOW ());
Datapump Extraction DPUM1 (Source end)
EXTRACT DPUM1
NOPASSTHRU
RMTHOST 10.77.16.13, MGRPORT 7809
userid gg_target, password *********
RMTTRAIL /oraclegg/dirdat/r1
TABLE GG_TARGET.SAL, COLS (Emp_id);
Replication REP1 (Destination end)
replicat REP1
userid GG_ADMIN, password *********
reperror 00001, discard
reperror 01004, discard
reperror 01154, discard
reperror 01296, discard
reperror 01151, discard
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAXTRANSOPS 1
GROUPTRANSOPS 1
DISCARDFILE /oraclegg/dirrpt/REP1.DSC, APPEND megabytes 20480
MAP GG_TARGET.SAL, target GG_ADMIN.SAL, COLMAP (USEDEFAULTS, TRAN_DATA = @DATENOW ());
I want exactly this dependency between Emp_id and tran_data: only when Emp_id is updated should tran_data get updated.
Also, I want to mention that we cannot use COLS in PASSTHRU mode, hence I modified the pump to NOPASSTHRU.
Hello N K,
As per your directives I tried the replication statement below:
MAP GG_TARGET.SAL, target GG_ADMIN.SAL, COLMAP (Emp_id = Emp_id, TRAN_DATA = @DATENOW ());
The tran_data column still gets updated with the latest timestamp even if I am updating Emp_name and not Emp_id.
Basically, I want a dependency between the update to Emp_id and the timestamp in tran_data: tran_data should only get updated for an update to the Emp_id column.
Hello Everyone,
Is there any way of using FILTER or EVENTACTIONS in the replicat or extract scripts to solve my problem?
Please post a solution or any other related information anybody has for this issue.
You should read Mike's comments closely. It's strange that you'd update an emp_id column, which is most likely a PK. Depending on whether it's a PK or not, you could use some built-in functions (@PRESENT, @ABSENT, @COLTEST(xx, PRESENT), etc.) to first check whether the transaction contains data for the desired column before allowing the transaction to be applied on the target.
Hello,
Can you please tell me the syntax of @PRESENT or @ABSENT or any other filtering command to be used in my scenario, as I cannot find it on the net?
Also, where should I use that filter statement: in the extract or in the replicat script?
I have posted my extract and replicat configurations in an earlier post; one can edit them and provide me with the solution.
Thanks in advance. 
MAP GG_TARGET.SAL, target GG_ADMIN.SAL, FILTER (ON UPDATE, @COLTEST (EMP_ID, PRESENT)),
COLMAP (USEDEFAULTS, TRAN_DATA = @DATENOW ());
or:
MAP GG_TARGET.SAL, target GG_ADMIN.SAL, WHERE (EMP_ID = @PRESENT), COLMAP (USEDEFAULTS, TRAN_DATA = @DATENOW ());
Hello,
Thank you for providing the syntax for @PRESENT.
I tried it out in my GoldenGate configuration, but it didn't work out. When I update any other column at the source end, e.g. Emp_name, the tran_data still gets updated with the latest timestamp at the destination end.
My major requirement is a dependency specifically on column Emp_id: only when a change is made to that column at the source end should tran_data get updated at the destination end.
Please help me out with this issue.
Hello,
Please see my requirement in the example below.
Suppose I have the following data in the source table SAL:
emp_id   emp_name
1        test
and the following data in the target SAL table:
emp_id   emp_name   tran_data
1        test       01-MAY-2012 12:00:00
Now say I edit emp_id in the source:
emp_id   emp_name
2        test
Then my target table gets updated as:
emp_id   emp_name   tran_data
2        test       01-MAY-2012 12:02:00
But if I then update emp_name in the source:
emp_id   emp_name
2        test123
my target table shows:
emp_id   emp_name   tran_data
2        test       01-MAY-2012 12:04:00
whereas I need tran_data to remain the same as it was when only emp_id was modified:
emp_id   emp_name   tran_data
2        test       01-MAY-2012 12:02:00
tran_data should not get updated for any column change except Emp_id, as I have included COLS (Emp_id).
This is what I exactly need. Please help me out in achieving the same.
If your emp_id is either a primary key or a unique index, the trail file will always contain data for this column, regardless of whether you modified it on the source. If this is the case, you will have to do a SQLEXEC to compare the EMP_ID in the trail vs. the existing target value. If it is a match, that means EMP_ID was not updated on the source, so do nothing. If not, it means EMP_ID was updated on the source, and thus change the timestamp value on the target.
Why you'd update a primary key, if it indeed is one, is something I'm curious about.
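A sketch of that SQLEXEC comparison on the replicat side (lookup query and names assumed): the query finds a matching target row only when emp_id was not changed, so the FILTER lets the operation through, and the timestamp is set, only when the lookup comes back empty:
MAP GG_TARGET.SAL, TARGET GG_ADMIN.SAL,
SQLEXEC (ID chk, QUERY "SELECT emp_id FROM gg_admin.sal WHERE emp_id = :p1",
         PARAMS (p1 = Emp_id), BEFOREFILTER),
FILTER (ON UPDATE, @COLTEST (chk.emp_id, MISSING)),  -- no match: emp_id changed
COLMAP (USEDEFAULTS, TRAN_DATA = @DATENOW ());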
Yes my friend, Emp_id is a primary key, and that is why I have included it specifically for replication with COLS (Emp_id) in the extract script.
And that is why I am not able to attain what I exactly need. tran_data should depend only on changes to the Emp_id column and not on any other column change. Emp_name is not a primary key and does not need to be replicated; that is why I need any change made to Emp_name not to be replicated to the target (which I have achieved) and tran_data not to get updated (which I have not achieved).
I need primary key changes only to be reflected in the target table. The change to Emp_id and the updating of tran_data should depend on each other, without any dependency on other columns.
