Scenario 1:
We have a source table containing 3 columns: Col1, Col2 and Col3. There is only 1 row in the table, as follows:
Col1 Col2 Col3
-----------------
a b c
There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows, as follows:
Col
-----
a
b
c
Solution (without using a Normalizer transformation):
Create 3 expression transformations exp_1, exp_2 and exp_3 with 1 port each. Connect Col1 from the Source Qualifier to the port in exp_1, Col2 to the port in exp_2, and Col3 to the port in exp_3. Make 3 instances of the target. Connect the port from exp_1 to target_1, the port from exp_2 to target_2, and the port from exp_3 to target_3.
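The column-to-row unpivot the mapping performs can be sketched in plain Python (this is an illustration of the logic, not Informatica code; the sample row and names are made up):

```python
# One source row with 3 columns.
source_row = {"Col1": "a", "Col2": "b", "Col3": "c"}

# Each expression transformation carries one source column to one
# target instance; all three instances load the same physical table,
# so the table ends up with one row per source column.
target_rows = [{"Col": source_row[c]} for c in ("Col1", "Col2", "Col3")]

print(target_rows)
```
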
Scenario 2:
There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into 1 target, and all the duplicate rows (only 1 occurrence each) into another target.
Solution :
Bring all the columns from the Source Qualifier into an Aggregator transformation. Check group by on the key column. Create a new output port count_col in the Aggregator transformation and give it the expression COUNT(key_column). Make a Router transformation with 2 groups: Dup and Non-Dup. Set the router condition count_col > 1 in the Dup group and count_col = 1 in the Non-Dup group. Load these 2 groups into different targets.
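The group-and-route logic above can be sketched in Python (an illustration only; the sample keys are made up, and the Aggregator's one-row-per-group output is modelled with a Counter):

```python
from collections import Counter

# Key-column values from the source, with duplicates.
rows = ["r1", "r2", "r2", "r3", "r3", "r3"]

# Aggregator: group by the key column and count occurrences per group.
counts = Counter(rows)

# Router: count_col = 1 goes to the Non-Dup target, count_col > 1 to
# the Dup target. Because the Aggregator emits one row per group, the
# Dup target gets exactly one occurrence of each duplicated key.
non_dup = [k for k, n in counts.items() if n == 1]
dup = [k for k, n in counts.items() if n > 1]

print(non_dup, dup)
```
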
Scenario 3:
There is a source table containing 2 columns Col1 and Col2 with data as follows:
Col1 Col2
a l
b p
a m
a n
b q
x y
Design a mapping to load a target table with following values from the above mentioned source:
Col1 Col2
a l,m,n
b p,q
x y
Solution:
Use a Sorter transformation after the Source Qualifier to sort the rows with Col1 as the key. Build an expression transformation with the following ports (the order of the ports should also be the same):
1. Col1_prev: a variable port. Its expression should contain the variable val
2. Col1: an Input/Output port from the Sorter transformation
3. Col2: an input port from the Sorter transformation
4. val: a variable port. Its expression should contain Col1
5. Concatenated_value: a variable port. Its expression should be DECODE(Col1, Col1_prev, Concatenated_value||','||Col2, Col2)
6. Concatenated_Final: an output port containing the value of Concatenated_value
After the expression, build an Aggregator transformation. Bring the ports Col1 and Concatenated_Final into the Aggregator. Group by Col1 and don't give any aggregate expression; this effectively returns the last row from each group.
Connect the ports Col1 and Concatenated_Final from the Aggregator to the target table.
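The sort, running concatenation via variable ports, and last-row-per-group behaviour of the Aggregator can be sketched in Python (an illustration of the logic only, using the sample data from the scenario):

```python
rows = [("a", "l"), ("b", "p"), ("a", "m"), ("a", "n"), ("b", "q"), ("x", "y")]

# Sorter: order by Col1 so equal keys are adjacent.
rows.sort(key=lambda r: r[0])

# Expression transformation: variable ports carry state from the
# previous row. If Col1 matches the previous row's Col1, append Col2
# to the running value; otherwise start a new value with Col2.
results = []
col1_prev, concatenated = None, ""
for col1, col2 in rows:
    concatenated = concatenated + "," + col2 if col1 == col1_prev else col2
    col1_prev = col1
    results.append((col1, concatenated))

# Aggregator grouped on Col1 with no aggregate expression returns the
# last row of each group, i.e. the fully concatenated value.
last_per_group = {c1: val for c1, val in results}

print(last_per_group)
```
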
Scenario 4:
Design an Informatica mapping to load the first half of the records into 1 target and the other half into a separate target.
Solution:
You will have to assign a row number to each record. To achieve this, either use Oracle's pseudocolumn ROWNUM in the Source Qualifier query or use the NEXTVAL port of a Sequence Generator. Let's name this column rownumber.
From Source Qualifier, create 2 pipelines:
First Pipeline:
Carry the first port Col1 from the SQ transformation into an Aggregator transformation. Create a new output port tot_rec and give it the expression COUNT(Col1). Do not group by any port; this gives us the total number of records in the source table. Carry the port tot_rec into an Expression transformation and add another port DUMMY with default value 1.
Second Pipeline:
From the SQ transformation, carry all the ports (including the additional port rownumber generated by ROWNUM or the Sequence Generator) into an Expression transformation. Add another port DUMMY with default value 1.
Join these 2 pipelines with a Joiner transformation on the common port DUMMY. Carry all the source table ports plus the 2 additional ports tot_rec and rownumber into a Router transformation. Add 2 groups in the Router: FIRST_HALF and SECOND_HALF. Give the condition rownumber <= tot_rec/2 in FIRST_HALF and rownumber > tot_rec/2 in SECOND_HALF. Connect the 2 groups to 2 different targets.
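The numbering, counting, and routing can be sketched in Python (an illustration only; the sample rows are made up):

```python
rows = ["r1", "r2", "r3", "r4", "r5"]

# Sequence Generator / ROWNUM: number the rows starting at 1.
numbered = [(i + 1, r) for i, r in enumerate(rows)]

# Aggregator with no group-by gives the total record count; the Joiner
# on the DUMMY port then attaches tot_rec to every row.
tot_rec = len(rows)

# Router groups: with an odd count, the extra row falls in SECOND_HALF.
first_half = [r for n, r in numbered if n <= tot_rec / 2]
second_half = [r for n, r in numbered if n > tot_rec / 2]

print(first_half, second_half)
```
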
negative regexp
-
If you want to grep anything except foo, use grep -v or a negative lookahead:
echo -e 'foo\nbar\nboz' | grep -P '^(?!.*foo).*$'
bar
boz
it is just so beautiful ...
15 comments:
Gaurav,
Nice article. Just one comment on scenario 4: if you use an Oracle analytic function in the Source Qualifier, you can avoid the 2nd pipeline.
eg. select col1 count() over() as total_rows from aTable
Sorry, missed a comma, and count needs to be count(*):
eg. select col1, count(*) over() as total_rows from aTable
Gaurav, useful blog for interviewers. Thanks :-)
For problem 4, can we have the following solution:
1. Sequence Generator, for numbering the records.
2. Count the total records in an Aggregator.
3. Let x = Count/2 in an Expression.
4. Router: if the Seq_Gen value of the record > x then else
Hi, can you please explain how we can extract the data into one file in scenario 1 after populating it in 3 different instances of target tables? Also, what is the key to denormalize without using a Normalizer?
My target was a flat file where I need to populate the data for all the 3 instances.
For scenario 1, our output should have only 1 target, right? If we use 3 instances of targets, then according to the scenario 1 answer we haven't got the 3 rows in a single target... can anyone explain my doubt?
Good article and superb questions.
Scenario 2's solution is not correct: if you use an Aggregator, it will shrink the data via aggregation; it won't return duplicate rows out of the Aggregator.
Scenario 3:
The decode statement in Solution 3 should be
decode(Col1, Col1_prev, Concatenated_value||','||Col2, Col2)
The default argument is given as Col1; it should be Col2.
Informatica is a sea of knowledge, a tool that goes on becoming deeper and deeper as you keep diving.