Informatica Scenario Based Interview Questions
Now that you are just one step away from landing your dream job, you must
prepare well for all the likely interview questions. Remember that every
interview round is different, especially when scenario-based
Informatica interview questions are asked.
Q90. How do you load the last N rows from a flat-file into a target
table in Informatica?
Suppose the source flat file contains the following rows:
Col
ABC
DEF
GHI
JKL
MNO
Now follow the steps below to load the last 3 rows into a target table.
Step 1
In an expression transformation, create a variable port V_calculate = V_calculate + 1 to number each row, then create a dummy output port and assign 1 to it in the same expression transformation:
N_calculate=V_calculate
N_dummy=1
The expression output (Col, N_calculate, N_dummy) will be:
ABC, 1, 1
DEF, 2, 1
GHI, 3, 1
JKL, 4, 1
MNO, 5, 1
Step 2
Pass the N_dummy and N_calculate ports to an aggregator transformation and do not group by any port. N_dummy will hold the value 1, and an output port N_total_records (it will keep the value of the total number of records available in the source) is defined as:
N_dummy
N_calculate
N_total_records=N_calculate
Because the aggregator returns the last row when no group-by port is set, N_total_records ends up holding the total record count.
Outputs in Aggregator Transformation
N_total_records, N_dummy
5, 1
Step 3
Pass the outputs of the expression and the aggregator to a joiner transformation and join on the dummy port (N_dummy = N_dummy), so that the total record count is attached to every row:
ABC, 1, 5
DEF, 2, 5
GHI, 3, 5
JKL, 4, 5
MNO, 5, 5
Step 4
Pass the joiner output to a filter transformation with the condition N_calculate > N_total_records - 3, which keeps only the last 3 rows.
Output
GHI, 3, 5
JKL, 4, 5
MNO, 5, 5
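The four steps above can be sketched in plain Python (illustrative only, not Informatica code; the sample rows and N = 3 are taken from the question):

```python
# Sketch of the last-N-rows logic described above.
rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]

# Step 1: number each row and attach a dummy constant 1.
numbered = [(col, i, 1) for i, col in enumerate(rows, start=1)]

# Step 2: an aggregator with no group-by returns the last row,
# so N_total_records ends up holding the total row count.
n_total_records = numbered[-1][1]

# Steps 3-4: attach the total to every row, then keep only rows
# whose number falls within the last N.
N = 3
last_n = [(col, i, n_total_records)
          for col, i, _ in numbered
          if i > n_total_records - N]
```

The filter condition `i > n_total_records - N` mirrors the Informatica filter condition N_calculate > N_total_records - 3.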
Situation – The source contains the following data. Give steps to load all unique names into one table and duplicate names into another table.
Data
Amazon
Walmart
Snapdeal
Snapdeal
Walmart
Flipkart
Walmart
Ans. First, pass the rows through an expression transformation with a dummy output port N_dummy = 1.
Now for each row, the Dummy output port will return 1
Name, N_dummy
Amazon, 1
Walmart, 1
Walmart, 1
Walmart, 1
Snapdeal, 1
Snapdeal, 1
Flipkart, 1
Next, in an aggregator transformation, group by name and create an output port N_calculate_of_each_name = COUNT(name). The aggregator output will be:
name, N_calculate_of_each_name
Amazon, 1
Walmart, 3
Snapdeal, 2
Flipkart, 1
Join the expression and aggregator outputs on the name port so that every row carries its count (name, N_dummy, N_calculate_of_each_name):
Amazon, 1, 1
Walmart, 1, 3
Walmart, 1, 3
Walmart, 1, 3
Snapdeal, 1, 2
Snapdeal, 1, 2
Flipkart, 1, 1
Finally, in a router transformation, create a group with the condition N_dummy = N_calculate_of_each_name; rows that satisfy it occur exactly once in the source, while the remaining rows are duplicates.
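The dummy-port/count comparison can be sketched in plain Python (illustrative only; note that with this router condition, only names occurring exactly once satisfy the "unique" group):

```python
# Sketch of the aggregator-count / router logic described above.
from collections import Counter

names = ["Amazon", "Walmart", "Snapdeal", "Snapdeal",
         "Walmart", "Flipkart", "Walmart"]

# Aggregator: count of each name (group by name).
count_of_each_name = Counter(names)

# Router: rows where N_dummy (always 1) equals the count occur
# exactly once; all other rows are duplicates.
unique_rows = [n for n in names if count_of_each_name[n] == 1]
duplicate_rows = [n for n in names if count_of_each_name[n] > 1]
```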
The expected output is:
Table 1 (unique names)
Amazon
Walmart
Snapdeal
Flipkart
Table 2 (duplicates)
Walmart
Walmart
Snapdeal
An alternative approach is to sort the data on the name column and then count consecutive occurrences of each name with variable ports in an expression transformation (Z_prev_name is assigned after Z_calculate, so it still holds the previous row's value when Z_calculate is evaluated):
Z_curr_name=name
Z_calculate=IIF(Z_curr_name=Z_prev_name, Z_calculate+1, 1)
Z_prev_name=name
N_calculate=Z_calculate
For the sorted data, the output is:
Amazon, 1
Flipkart, 1
Snapdeal, 1
Snapdeal, 2
Walmart, 1
Walmart, 2
Walmart, 3
Route rows with N_calculate = 1 to the unique-names table and the rest to the duplicates table.
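The sorter-plus-variable-port method maps directly onto a short plain-Python sketch (sample data from the question; not Informatica code):

```python
# Sketch: sort by name, then count consecutive occurrences
# exactly as the Z_calculate variable port does.
names = sorted(["Amazon", "Walmart", "Snapdeal", "Snapdeal",
                "Walmart", "Flipkart", "Walmart"])

table1, table2 = [], []   # unique names / duplicate extras
prev_name, count = None, 0
for name in names:
    # Z_calculate = IIF(Z_curr_name = Z_prev_name, Z_calculate + 1, 1)
    count = count + 1 if name == prev_name else 1
    prev_name = name
    # First occurrence (count = 1) goes to Table 1, repeats to Table 2.
    (table1 if count == 1 else table2).append(name)
```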
Situation – The source has one row per student with four quarterly scores:
Student Q1 Q2 Q3 Q4
ABC 80 85 90 95
DEF 60 65 70 75
Form a group of target rows so that each quarter's score becomes its own row (Student, Quarter, Score):
ABC 1 80
ABC 2 85
ABC 3 90
ABC 4 95
DEF 1 60
DEF 2 65
DEF 3 70
DEF 4 75
Step 1 – Create a Normalizer transformation in the mapping.
Step 2 – From the Normalizer tab, click on the add icon; this will create two columns. Set the Occurs value of the score column to 4 and select OK; 4 score columns will be generated and appear in the transformation.
Step 3 – In the mapping, link all four quarter columns in the source qualifier to the four score ports of the Normalizer.
The Normalizer output will be:
ABC 1 80
ABC 2 85
ABC 3 90
ABC 4 95
DEF 1 60
DEF 2 65
DEF 3 70
DEF 4 75
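What the Normalizer does here is an unpivot: one source row with four quarterly scores becomes four target rows. A plain-Python sketch (sample data from the question):

```python
# Sketch of the columns-to-rows (unpivot) behaviour of the Normalizer.
source = [("ABC", [80, 85, 90, 95]),
          ("DEF", [60, 65, 70, 75])]

# Each (student, quarter, score) triple becomes its own target row.
target = [(student, quarter, score)
          for student, scores in source
          for quarter, score in enumerate(scores, start=1)]
```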
Ans. The SCD Type 1 mapping helps in the situation when you don’t want
to store historical data in the Dimension table as this method overwrites
the previous data with the latest data.
For example, consider a source student table defined as:
Student_Id Number,
Student_Name Varchar2(60),
Location Varchar2(60)
Now we need to use the SCD Type 1 method to load the data present in
the source table into the student dimension table, which is defined as:
Stud_Key Number,
Student_Id Number,
Student_Name Varchar2(60),
Location Varchar2(60)
Look up the dimension table on Student_Id so that the existing Stud_Key and attribute values are returned, then click OK. In an expression transformation, flag each incoming row as new or changed:
New_Flag = IIF(ISNULL(Stud_Key), 1, 0)
Changed_Flag = IIF(NOT ISNULL(Stud_Key) AND (Student_Name != Src_Student_Name
OR Location != Src_Location),
1, 0 )
Click OK.
Route the new rows (DD_INSERT) and the changed rows (DD_UPDATE) through update strategy transformations and, from each update strategy, connect all the appropriate ports to the target definition.
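A minimal plain-Python sketch of the SCD Type 1 behaviour described above (the student names and locations are made-up sample data, and Stud_Key generation is omitted for brevity):

```python
# SCD Type 1 sketch: a lookup decides insert vs update, and an
# update simply overwrites the old attribute values (no history).
dim = {101: {"Student_Name": "Alice", "Location": "Delhi"}}  # hypothetical data

def scd_type1_load(dim, src_rows):
    for row in src_rows:
        existing = dim.get(row["Student_Id"])
        if existing is None:
            # New_Flag = 1 -> insert the row.
            dim[row["Student_Id"]] = {"Student_Name": row["Student_Name"],
                                      "Location": row["Location"]}
        elif (existing["Student_Name"] != row["Student_Name"]
              or existing["Location"] != row["Location"]):
            # Changed_Flag = 1 -> overwrite in place, no history kept.
            existing.update(Student_Name=row["Student_Name"],
                            Location=row["Location"])

scd_type1_load(dim, [{"Student_Id": 101, "Student_Name": "Alice",
                      "Location": "Mumbai"},
                     {"Student_Id": 102, "Student_Name": "Bob",
                      "Location": "Pune"}])
```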
infacmd
infasetup
pmcmd
pmrep
The pmcmd command helps to perform the following functions:
Start workflows
Schedule workflows
Stop workflows
Abort workflows
From the toolbar, click on Mappings and then click on Target
Load Plan
A pop-up will appear listing the source qualifier transformations in
the mapping, along with the targets that receive data from each
source qualifier
Using the Up and Down buttons, move the source qualifiers within the
load order
Click OK
Ans. When the first load is finished, the table will become:
101 1 20011 1
201 2 20011 1
301 3 20011 2
Ans. This function capitalizes the first character of each word in
the string and converts all other characters to lowercase.
INITCAP(string_name)
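The behaviour of INITCAP can be sketched in plain Python (a simplified illustration that splits on spaces only; Informatica's INITCAP also treats other non-alphanumeric characters as word boundaries):

```python
# Simplified sketch of INITCAP: first letter of each word upper-cased,
# the remaining letters lower-cased.
def initcap(s: str) -> str:
    return " ".join(w[:1].upper() + w[1:].lower() for w in s.split(" "))

initcap("inFORMATICA powerCENTER")  # "Informatica Powercenter"
```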
Assign the variable port to an output port. The two ports in the
expression transformation are: V_count=V_count+1 and
O_count=V_count
Q101. How will you load the first 4 rows from a flat-file into a
target?
Ans. The first 4 rows can be loaded from a flat file into a target by
assigning a row number to each record in an expression transformation
(V_count = V_count + 1, O_count = V_count) and passing the rows to a
filter transformation with the condition O_count <= 4.
A related comparison that comes up here: the Source Qualifier can filter
rows only from relational sources, whereas the Filter transformation can
filter rows from any type of source at the mapping level.
Situation – The source contains the following rows:
employee_id, salary
1, 2000
2, 3000
3, 4000
4, 5000
Create a mapping to produce the cumulative sum of salaries, i.e. the expected output (employee_id, salary, cumulative_sum) is:
1, 2000, 2000
2, 3000, 5000
3, 4000, 9000
4, 5000, 14000
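The running total above can be illustrated in plain Python (in Informatica this is typically done with a variable port such as V_sum = V_sum + salary):

```python
# Sketch of the cumulative-salary output.
from itertools import accumulate

rows = [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]

# Running total of the salary column, zipped back onto each row.
running = accumulate(sal for _, sal in rows)
cumulative = [(emp, sal, total) for (emp, sal), total in zip(rows, running)]
```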
Ans. The cumulative sum can be produced with a variable port in an
expression transformation: V_sum = V_sum + salary, with an output port
O_cumulative_sum = V_sum.
Situation – Now display the sum of all employees' salaries against every
row, i.e. the expected output is:
1, 2000, 14000
2, 3000, 14000
3, 4000, 14000
4, 5000, 14000
Ans. The following steps should be followed to get the desired output:
Step 1:
In an expression transformation, pass the employee_id and salary ports through and create a dummy output port:
employee_id
salary
O_dummy=1
Step 2:
Connect the expression to an aggregator transformation (do not group by any port) with the ports:
salary
O_dummy
O_sum_salary=SUM(salary)
Step 3:
Check the property Sorted Input and connect both the expression and the aggregator to a joiner transformation, joining on the dummy port.
Step 4:
Connect the joiner output ports (employee_id, salary, O_sum_salary) to the target.
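The dummy-port join pattern in the steps above can be sketched in plain Python (sample rows from the question; not Informatica code):

```python
# Sketch: a dummy port joins the aggregate total back onto every row.
rows = [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]

# Step 1: attach O_dummy = 1 to each detail row.
detail = [(emp, sal, 1) for emp, sal in rows]

# Step 2: aggregator output -> (O_sum_salary, O_dummy).
aggregate = (sum(sal for _, sal in rows), 1)

# Step 3: join on the dummy port so each row picks up the total.
joined = [(emp, sal, aggregate[0])
          for emp, sal, dummy in detail
          if dummy == aggregate[1]]
```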
Q105. Create a mapping to get the previous row salary for the
current row. In case there is no previous row for the current row,
the previous row salary should be displayed as null.
1, 2000, Null
2, 3000, 2000
3, 4000, 3000
4, 5000, 4000
Ans. The following steps will be followed to get the desired output:
create an expression transformation with the ports below, in this order
(V_prev_salary is assigned after V_salary, so it still holds the previous
row's value when V_salary is evaluated):
employee_id
salary
V_count=V_count+1
V_salary=IIF(V_count=1,NULL,V_prev_salary)
V_prev_salary=salary
O_prev_salary=V_salary
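The variable-port evaluation order is the key trick here; a plain-Python sketch (sample rows from the question):

```python
# Sketch of the previous-row-salary logic: because V_prev_salary is
# updated after V_salary, it still holds the prior row's value.
rows = [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]

output = []
v_count, v_prev_salary = 0, None
for emp, salary in rows:
    v_count += 1
    # V_salary = IIF(V_count = 1, NULL, V_prev_salary)
    v_salary = None if v_count == 1 else v_prev_salary
    # V_prev_salary = salary (evaluated after V_salary)
    v_prev_salary = salary
    output.append((emp, salary, v_salary))
```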
Ans: The Informatica server rejects rows when it encounters DD_Reject in
the update strategy transformation. In such a rare scenario, the database
containing the information and data also gets disrupted.
What happens if the SELECT list columns in the custom override SQL query
and the output ports order in the SQ transformation do not match?
Ans. A mismatch between the SELECT list columns in the custom override
SQL query and the output ports order in the SQ transformation may result
in session failure.
If the data is unsorted, then consider the source with fewer rows as
the master source.
If joins cannot be performed for some tables, then the user can
create a stored procedure and then join the tables in the database.
Ans. To load alternate records into different tables through the mapping
flow, add a sequence number to the records and then divide the record
number by 2. If it divides evenly, move the record to one target; if not,
move it to the other.
In a router transformation, give the condition on the remainder, e.g.
MOD(seq, 2) = 0 for one group, with the remaining rows going to the
default group.
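The odd/even routing can be sketched in plain Python (the row values are made-up placeholders):

```python
# Sketch of alternate-record routing: number the rows, then split
# on the remainder of the sequence number mod 2.
rows = ["r1", "r2", "r3", "r4", "r5"]  # hypothetical records

target_even, target_odd = [], []
for seq, row in enumerate(rows, start=1):
    (target_even if seq % 2 == 0 else target_odd).append(row)
```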
Repository Privileges
Q111. How can you store previous session logs in Informatica?
Ans. The following steps will enable you to store previous session logs in
Informatica:
In the session properties, set Save session log for these runs and change
the number to how many log files you want to save (the default is 0).
If you want to save all of the log files created by every run, then
select the option Save session log for these runs –> Session
TimeStamp.
Ans. The following are the performance considerations while working with
Aggregator Transformation:
To minimize the size of the data cache, connect only the needed
input/output ports to the succeeding transformations.