
Informatica Scenario Based Interview Questions

The following are some of the frequently asked scenario-based Informatica interview questions.

What are Informatica Scenario Based interview questions?

In a scenario-based interview, you will first be offered a scenario and then asked questions related to it. Your response to Informatica scenario-based questions will show your technical skills as well as your soft skills, such as problem-solving and critical thinking.

Now that you are just one step away from landing your dream job, you must prepare well for all the likely interview questions. Remember that every interview round is different, especially when scenario-based Informatica interview questions are asked.

Q90. How do you load the last N rows from a flat-file into a target
table in Informatica?

Ans. This is an important Informatica scenario-based question.

Considering that the source has data:

 Col

 ABC

 DEF

 GHI

 JKL

 MNO

Now follow the below steps to load the last 3 rows into the target table:

Step 1

 Assign a row number to each record by using an expression transformation. Name the port that holds the row number N_calculate.

 Create a dummy output port in the same expression transformation and assign 1 to it.

 This will return 1 for each row.

Ports in Expression Transformation


V_calculate=V_calculate+1

N_calculate=V_calculate

N_dummy=1

Outputs in Expression Transformation

col, N_calculate, N_dummy

ABC, 1, 1

DEF, 2, 1

GHI, 3, 1

JKL, 4, 1

MNO, 5, 1

Step 2

 Pass the expression transformation output to an aggregator transformation

 Do not group by any port

 Create an output port N_total_records in the aggregator and assign the N_calculate port to it

 Since no group by is specified, the aggregator will return only the last row by default

 This row will contain the N_dummy port holding the value 1 and the N_total_records port holding the total number of records available in the source

Ports in Aggregator Transformation

N_dummy

N_calculate

N_total_records=N_calculate
Outputs in Aggregator Transformation

N_total_records, N_dummy

5, 1

Step 3

 Now pass the expression and aggregator transformation outputs to a joiner transformation

 Join on the dummy port

 Check the property Sorted Input in the joiner transformation so that both the expression and aggregator transformations can be connected to it

 The join condition will be N_dummy (port from the aggregator transformation) = N_dummy (port from the expression transformation)

Outputs in Joiner Transformation

col, N_calculate, N_total_records

ABC, 1, 5

DEF, 2, 5

GHI, 3, 5

JKL, 4, 5

MNO, 5, 5

Step 4

 Pass the joiner transformation output to a filter transformation

 Mention the filter condition as N_total_records (port from the aggregator) - N_calculate (port from the expression) <= 2

 Thus, the filter condition in the filter transformation will be N_total_records - N_calculate <= 2

Output

Outputs in Filter Transformation

col, N_calculate, N_total_records


GHI, 3, 5

JKL, 4, 5

MNO, 5, 5
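
For clarity, here is a minimal Python sketch that simulates the same mapping logic on the sample data above. It is only an illustration of the row-number / total-count / filter idea (the port names mirror the example), not Informatica code.

# Minimal Python sketch of the "last N rows" mapping logic (illustrative only).
rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]  # source column "col"

# Expression transformation: assign row numbers and a dummy value of 1.
expression_out = [
    {"col": value, "N_calculate": index, "N_dummy": 1}
    for index, value in enumerate(rows, start=1)
]

# Aggregator transformation with no group-by: keep only the last row's count.
aggregator_out = {"N_total_records": expression_out[-1]["N_calculate"], "N_dummy": 1}

# Joiner transformation on the dummy port: attach the total count to every row.
joiner_out = [
    {**row, "N_total_records": aggregator_out["N_total_records"]}
    for row in expression_out
    if row["N_dummy"] == aggregator_out["N_dummy"]
]

# Filter transformation: N_total_records - N_calculate <= 2 keeps the last 3 rows.
target = [row for row in joiner_out if row["N_total_records"] - row["N_calculate"] <= 2]

print(target)  # GHI, JKL, MNO with their row numbers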


Q91. Solve the below situation if the data has duplicate rows.

Data

 Amazon

 Walmart

 Snapdeal

 Snapdeal

 Walmart

 Flipkart

 Walmart

Situation – Give steps to load all unique names in one table and
duplicate names in another table.

Solution 1 – We want the output tables as:

Amazon and Flipkart in one table

And

Walmart, Walmart, Walmart, Snapdeal, and Snapdeal in another table

Follow the below steps

 Sort the name data by using a sorter transformation

 Pass the sorted output to an expression transformation

 Create a dummy output port N_dummy and assign 1 to it

 Now, for each row, the dummy output port will return 1

Expression Transformation Output

Name, N_dummy

Amazon, 1
Walmart, 1

Walmart, 1

Walmart, 1

Snapdeal, 1

Snapdeal, 1

Flipkart, 1

 Pass the expression transformation output to an aggregator transformation

 Check Group By on the name port

 Create an output port N_calculate_of_each_name in the aggregator and assign the expression COUNT(name) to it

Aggregator Transformation Output

name, N_calculate_of_each_name

Amazon, 1

Walmart, 3

Snapdeal, 2

Flipkart, 1

 Pass the expression and aggregator transformation outputs to a joiner transformation

 Join on the name ports

 Check the property Sorted Input in the joiner transformation so that both transformations can be connected to it

Joiner Transformation Output

name, N_dummy, N_calculate_of_each_name

Amazon, 1, 1

Walmart, 1, 3
Walmart, 1, 3

Walmart, 1, 3

Snapdeal, 1, 2

Snapdeal, 1, 2

Flipkart, 1, 1

 Move the joiner output to a router transformation

 Create one group

 Specify the group condition as N_dummy = N_calculate_of_each_name

 Connect this group to one table (it will receive the unique names)

 Connect the default output group to another table (it will receive the duplicate names)

 You will get separate tables for both
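
As a quick sanity check, here is a small Python sketch of the same count-and-route idea (illustrative only; the aggregator is simulated with a Counter).

from collections import Counter

# Source data with duplicate names.
names = ["Amazon", "Walmart", "Snapdeal", "Snapdeal", "Walmart", "Flipkart", "Walmart"]

# Aggregator step: count how many times each name occurs.
count_of_each_name = Counter(names)

# Router step: names occurring once go to one table, the rest to another.
unique_table = [name for name in names if count_of_each_name[name] == 1]
duplicate_table = [name for name in names if count_of_each_name[name] > 1]

print(unique_table)     # ['Amazon', 'Flipkart']
print(duplicate_table)  # ['Walmart', 'Snapdeal', 'Snapdeal', 'Walmart', 'Walmart']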

Q92. Situation 2 – Solve the below situation if the data has duplicate rows.

Data

 Amazon

 Walmart

 Snapdeal

 Snapdeal

 Walmart

 Flipkart

 Walmart

Situation – Load each name once into one table and the duplicate occurrences into another table.

Ans.

Solution 2 – We want the output as:

Table 1

Amazon
Walmart

Snapdeal

Flipkart

Table 2

Walmart

Walmart

Snapdeal

The below steps will give the desired solution:

 Sort the name data by using a sorter transformation

 Pass name output to expression transformation

 Create a variable port Z_curr_name and assign the name port to it

 Create a variable port Z_calculate and write IIF(Z_curr_name = Z_prev_name, Z_calculate + 1, 1) in the expression editor

 Create another variable port Z_prev_name and assign the name port to it (define it after Z_calculate so that it still holds the previous row's name when Z_calculate is evaluated)

 Create the output port N_calculate and assign Z_calculate to it

Ports in Expression Transformation

Z_curr_name=name

Z_calculate=IIF(Z_curr_name=Z_prev_name, Z_calculate+1, 1)

Z_prev_name=name

N_calculate=Z_calculate

Expression Transformation Output

name, N_calculate
Amazon, 1

Walmart, 1

Walmart, 2

Walmart, 3

Snapdeal, 1

Snapdeal, 2

Flipkart, 1

 Pass the expression transformation output to a router transformation

 Create a group

 Specify the group condition as N_calculate = 1

 Connect this group to one table (each name appears once)

 Connect the default group output to another table (the extra duplicate occurrences)
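
Here is a small Python sketch of this running-count idea (illustrative only): the first occurrence of each name goes to one table and every later occurrence goes to the other.

# Simulate the running count per name that the variable ports compute row by row.
names = ["Amazon", "Walmart", "Walmart", "Walmart", "Snapdeal", "Snapdeal", "Flipkart"]  # sorted input

seen_counts = {}        # acts like the Z_* variable ports carried across rows
first_occurrences = []  # router group N_calculate = 1
extra_occurrences = []  # router default group

for name in names:
    seen_counts[name] = seen_counts.get(name, 0) + 1  # N_calculate for this row
    if seen_counts[name] == 1:
        first_occurrences.append(name)
    else:
        extra_occurrences.append(name)

print(first_occurrences)  # ['Amazon', 'Walmart', 'Snapdeal', 'Flipkart']
print(extra_occurrences)  # ['Walmart', 'Walmart', 'Snapdeal']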


Q93. In Informatica, how do you use Normalizer Transformation for the below-mentioned condition?

State, Quarter 1 Purchase, Quarter 2 Purchase, Quarter 3 Purchase, Quarter 4 Purchase

ABC, 80, 85, 90, 95

DEF, 60, 65, 70, 75

Ans. This is one of the popularly asked Informatica interview questions that you must prepare for your upcoming interview.

The Normalizer Transformation helps when you want to transform a single row into multiple rows. It can also be used to convert multiple rows into a single row to make the data look organized. As per the above scenario-based Informatica interview question, we want the solution to look as follows:

State Name Quarter Purchase

ABC 1 80
ABC 2 85

ABC 3 90

ABC 4 95

DEF 1 60

DEF 2 65

DEF 3 70

DEF 4 75

Follow the below steps to achieve the desired solution by using normalizer transformation:

Step 1 –

 Create a table “purchase_source” and assign a target table as “purchase_target”

 Import the tables into Informatica

 Create a mapping with “purchase_source” as the source and “purchase_target” as the target

 Create a new transformation from the transformation menu

 Enter the name “xyz_purchase”

 Select the Create option

 Select Done (the transformation is now created)

Step 2 –

 Double-click the normalizer transformation

 Go to the Normalizer tab and select it

 From the tab, click on the icon to create two columns

 Enter the names of the columns

 Set the number of occurrences to 4 for purchase and 0 for the state name

 Select OK

 Four columns will be generated and appear in the transformation

Step 3 –

 In the mapping, link all four quarter columns in the source qualifier to the normalizer

 Link the state name column to the normalizer's state name column

 Link the state_name and purchase columns to the target table

 Link the lkp_purchase column to the target table

 Create a session and workflow

 Save the mapping and execute it

 You will get the desired rearranged output

State Name Quarter Purchase

ABC 1 80

ABC 2 85

ABC 3 90

ABC 4 95

DEF 1 60

DEF 2 65

DEF 3 70

DEF 4 75
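
Outside Informatica, this row-to-column normalization is just an unpivot. Below is a minimal Python sketch of the same idea, using the sample data above (illustrative only, not Informatica code).

# Unpivot the quarterly purchase columns into (state, quarter, purchase) rows,
# mimicking what the Normalizer Transformation does with 4 occurrences.
source_rows = [
    {"state": "ABC", "purchases": [80, 85, 90, 95]},
    {"state": "DEF", "purchases": [60, 65, 70, 75]},
]

target_rows = [
    {"state": row["state"], "quarter": quarter, "purchase": purchase}
    for row in source_rows
    for quarter, purchase in enumerate(row["purchases"], start=1)
]

for row in target_rows:
    print(row["state"], row["quarter"], row["purchase"])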

Q94. What to do when you get the below error?

AA_10000 Normalizer Transformation: Initialization Error: [Cannot match AASid with BBTid.]

Ans. Follow the below process –

 Remove all the unconnected input ports to the normalizer transformation

 If OCCURS is present, check that the number of input ports is equal to the number of OCCURS
Q95. What are the steps to create, design, and implement SCD
Type 1 mapping in Informatica using the ETL tool?

Ans. The SCD Type 1 mapping helps when you don't want to store historical data in the dimension table, as this method overwrites the previous data with the latest data.

The process to be followed:

 Identify new records

 Insert it into the dimension table

 Identify the changed record

 Update it in the dimension table

For example:

If the source table looks like:

CREATE TABLE Students (

Student_Id Number,

Student_Name Varchar2(60),

Place Varchar2(60)

);

Now we need to use the SCD Type 1 method to load the data from the source table into the student dimension table.

CREATE TABLE Students_Dim (

Stud_Key Number,

Student_Id Number,

Student_Name Varchar2(60),

Location Varchar2(60)

);

Follow the steps to generate SCD Type 1 mapping in Informatica

 In the database, create source and dimension tables

 Create or import the source definition in the mapping designer tool's source analyzer

 Import the target definition from the Warehouse Designer or Target Designer
 Create a new mapping from the mapping designer tab

 Drag and drop the source

 Select Create option from toolbar’s Transformation section

 Select Lookup Transformation

 Enter the name and click on create

 From the window, select Student dimension table and click OK

 Edit the lkp transformation

 Add a new port In_Student_Id from the properties tab

 Connect the port to the source qualifier transformation's Student_Id port

 From the lkp transformation's condition tab, enter the lookup condition as Student_Id = IN_Student_Id

 Click OK

 Now, connect the source qualifier transformation's Student_Id port to the lkp transformation's In_Student_Id port

 Create an expression transformation with the input ports Stud_Key, Name, Location, Src_Name, and Src_Location

 Create the output ports New_Flag and Changed_Flag

 In the expression transformation's output ports, enter the below-mentioned expressions

New_Flag = IIF(ISNULL(Stud_Key), 1, 0)

Changed_Flag = IIF(NOT ISNULL(Stud_Key) AND (Name != Src_Name OR Location != Src_Location), 1, 0)

 Connect the lkp transformation ports to the expression transformation ports

 Also, connect the source qualifier transformation ports to the expression transformation ports

 Create a filter transformation and move the ports of the source qualifier transformation into it

 Edit the filter transformation and set the filter condition as New_Flag = 1
 Press OK

 Create an update strategy transformation

 Connect all the filter transformation ports except the New_Flag port

 From the properties tab of the update strategy, enter DD_INSERT as the update strategy expression

 Drag the target definition into the mapping

 Connect the relevant ports from the update strategy to the target definition

 Create a sequence generator transformation

 Connect the NEXTVAL port to the target's surrogate key port (Stud_Key)

 Create a different filter transformation

 Into this filter transformation, drag the lkp transformation's port (Stud_Key), the source qualifier transformation's ports (Name, Location), and the expression transformation's port (Changed_Flag)

 Go to the properties tab to edit the filter transformation

 Mention the filter condition as Changed_Flag = 1

 Click OK

 Create another update strategy transformation

 Connect the ports of the filter transformation to the update strategy

 From the update strategy properties tab, enter the expression DD_UPDATE

 Drag the target definition into the mapping

 From the update strategy, connect all the appropriate ports to the target definition
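
To make the insert/update split concrete, here is a small Python sketch of the SCD Type 1 idea (illustrative only; the student names and keys are made up, and the dictionary merely mimics the lookup, the New_Flag/Changed_Flag expressions, and the DD_INSERT/DD_UPDATE branches).

# Existing dimension rows keyed by Student_Id (stands in for the lookup table).
dimension = {
    1: {"Stud_Key": 100, "Student_Id": 1, "Student_Name": "Asha", "Location": "Pune"},
}
next_stud_key = 101  # stands in for the sequence generator

source_rows = [
    {"Student_Id": 1, "Student_Name": "Asha", "Place": "Mumbai"},  # changed location
    {"Student_Id": 2, "Student_Name": "Ravi", "Place": "Delhi"},   # brand new student
]

for row in source_rows:
    existing = dimension.get(row["Student_Id"])           # lookup on Student_Id
    new_flag = existing is None                           # New_Flag = IIF(ISNULL(Stud_Key), 1, 0)
    changed_flag = (not new_flag) and (
        existing["Student_Name"] != row["Student_Name"]
        or existing["Location"] != row["Place"]
    )                                                      # Changed_Flag expression

    if new_flag:                                           # DD_INSERT branch
        dimension[row["Student_Id"]] = {
            "Stud_Key": next_stud_key,
            "Student_Id": row["Student_Id"],
            "Student_Name": row["Student_Name"],
            "Location": row["Place"],
        }
        next_stud_key += 1
    elif changed_flag:                                     # DD_UPDATE branch: overwrite, no history
        existing["Student_Name"] = row["Student_Name"]
        existing["Location"] = row["Place"]

print(dimension)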


Q96. Give steps to use PMCMD Utility Command.

Ans. There are 4 different built-in command-line programs:

 infacmd

 infasetup

 pmcmd

 Pmrep
The pmcmd command helps with the following functions:

 Start workflows

 Schedule workflows

 Start a workflow from a specific task

 Stop and abort workflows and sessions

Below are the steps to use PMCMD command:

 Start workflow

pmcmd startworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name

 Scheduling the workflow

pmcmd scheduleworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name

 Start a workflow from a specific task

pmcmd starttask -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name -startfrom task-name

 Abort workflow

pmcmd abortworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name

pmcmd aborttask -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name task-name

Q97. How to configure the target load order in Informatica?

Ans. Follow the below steps:

 Create a mapping containing multiple target load order groups in the PowerCenter Designer

 From the toolbar, click on Mappings and then click on Target Load Plan

 You will see a pop-up listing the source qualifier transformations in the mapping and the targets that receive data from each source qualifier

 From the list, pick a source qualifier

 Using the Up and Down buttons, move the source qualifier within the load order

 Click OK

 You will get the desired output

Q98. Using incremental aggregation on the below table, what will be the output in the next table?

Product ID Bill Number Cost Date

101 1 100 01/01/2020

201 2 150 01/01/2020

301 3 200 01/01/2020

101 4 300 05/01/2020

101 5 400 05/01/2020

201 6 500 05/01/2020

555 7 550 05/01/2020

151 8 600 05/01/2020

Ans. When the first load is finished, the table will become:

Product ID Bill Number Load_Key D

101 1 20011 1

201 2 20011 1

301 3 20011 2

Q99. What is the syntax of the INITCAP function?

Ans. This function capitalizes the first character of each word in a string and converts all other characters to lowercase.

Below is the syntax:

INITCAP(string_name)
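
For illustration only, here is a rough Python equivalent of what INITCAP does (this sketch treats only spaces as word separators).

def initcap(text: str) -> str:
    # Capitalize the first character of each word and lowercase the rest,
    # mimicking the behaviour of the INITCAP function.
    return " ".join(word.capitalize() for word in text.split(" "))

print(initcap("INFORMATICA interview QUESTIONS"))  # Informatica Interview Questions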

These were some of the most popular scenario-based Informatica interview questions.

Q100. How will you generate sequence numbers using expression transformation?
Ans. We can generate sequence numbers using expression
transformation by following the below steps:

 Create a variable port and increment it by 1

 Assign the variable port to an output port. The two ports in the expression transformation are: V_count = V_count + 1 and O_count = V_count
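
A tiny Python sketch of how the variable port builds a running sequence number across rows (illustrative only):

# Each incoming row increments the variable port; the output port exposes its value.
rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]

v_count = 0  # variable port, retained across rows
for row in rows:
    v_count += 1          # V_count = V_count + 1
    o_count = v_count     # O_count = V_count
    print(row, o_count)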


Q101. How will you load the first 4 rows from a flat-file into a
target?

Ans. The first 4 rows can be loaded from a flat-file into a target using the
following steps:

 Allocate row numbers to each record.

 Create the row numbers by using the expression transformation or by using the sequence generator transformation.

 Pass the output to a filter transformation and specify the filter condition as O_count <= 4
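
Building on the sequence numbers from Q100, here is a short Python sketch of the filter step (illustrative only, reusing the sample rows from Q90):

rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]

# Expression / sequence generator step: number the rows.
numbered = [{"col": value, "O_count": index} for index, value in enumerate(rows, start=1)]

# Filter transformation: O_count <= 4 keeps only the first 4 rows.
target = [row for row in numbered if row["O_count"] <= 4]
print([row["col"] for row in target])  # ['ABC', 'DEF', 'GHI', 'JKL']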

Q102. What is the difference between Source Qualifier and Filter Transformation?

Ans. The differences between Source Qualifier and Filter Transformation are:

Source Qualifier Transformation

1. It filters rows while reading the data from a source.

2. It can filter rows only from relational sources.

3. It limits the row set extracted from a source.

4. It reduces the number of rows used in the mapping, thereby enhancing performance.

5. Its filter condition uses standard SQL to run in the database.

Filter Transformation

1. It filters rows from within a mapping.

2. It can filter rows from any type of source at the mapping level.

3. It limits the row set sent to a target.

4. To maximize performance, the Filter Transformation should be added close to the source to filter out the unwanted rows early.

5. It defines a condition using any statement or transformation function that returns a TRUE or FALSE value.

Q103. Create a mapping to load the cumulative sum of salaries of employees into the target table. Consider the following employee's data as a source.

employee_id, salary

1, 2000

2, 3000

3, 4000

4, 5000

The target table data should look like the following:

employee_id, salary, cumulative_sum

1, 2000, 2000

2, 3000, 5000

3, 4000, 9000

4, 5000, 14000

Ans. The following steps need to be followed to get the desired output:

 Connect the source Qualifier to the expression transformation

 Create a variable port V_cum_sal in the expression transformation

 Write V_cum_sal = V_cum_sal + salary in the expression editor

 Create an output port O_cum_sal and assign V_cum_sal to it
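
A minimal Python sketch of the running-total logic (illustrative only):

employees = [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]  # (employee_id, salary)

v_cum_sal = 0  # variable port carried across rows
for employee_id, salary in employees:
    v_cum_sal += salary          # V_cum_sal = V_cum_sal + salary
    o_cum_sal = v_cum_sal        # O_cum_sal output port
    print(employee_id, salary, o_cum_sal)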

Q104. Create a mapping to find the sum of salaries of all employees. The sum should repeat for all the rows. Consider the employee's data provided in Q103 as a source.

The output should look like:

employee_id, salary, salary_sum

1, 2000, 14000

2, 3000, 14000

3, 4000, 14000

4, 5000, 14000

Ans. The following steps should be followed to get the desired output:
Step 1:

Connect the source qualifier to the expression transformation.

Create a dummy port in the expression transformation and assign value 1 to it. The ports will be:

 employee_id

 salary

 O_dummy=1

Step 2:

Provide the output of the expression transformation to the aggregator transformation.

Create a new port O_sum_salary

Write- SUM(salary) in the expression editor.

The ports will be:

 Salary

 O_dummy

 O_sum_salary=SUM(salary)

Step 3:

Provide the output of the expression transformation and the aggregator transformation to the joiner transformation.

Join on the dummy port.

Check the property Sorted Input and connect the expression and aggregator transformations to the joiner transformation.

Step 4:

Provide the output of the joiner to the target table.
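
A small Python sketch of the same dummy-port join idea (illustrative only):

employees = [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]  # (employee_id, salary)

# Expression step: add a dummy value of 1 to every row.
expression_out = [{"employee_id": e, "salary": s, "O_dummy": 1} for e, s in employees]

# Aggregator step (no group-by): total salary plus the same dummy value.
aggregator_out = {"O_sum_salary": sum(row["salary"] for row in expression_out), "O_dummy": 1}

# Joiner step on the dummy port: every row picks up the grand total.
for row in expression_out:
    if row["O_dummy"] == aggregator_out["O_dummy"]:
        print(row["employee_id"], row["salary"], aggregator_out["O_sum_salary"])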

Q105. Create a mapping to get the previous row salary for the
current row. In case, there is no previous row for the current row,
then the previous row salary should be displayed as null.

The output should look like:

employee_id, salary, pre_row_salary

1, 2000, Null
2, 3000, 2000
3, 4000, 3000
4, 5000, 4000

Ans. The following steps will be followed to get the desired output:

 Connect the source Qualifier to the expression transformation.

 Create a variable port V_count in the expression transformation.

 Increment it by 1 for each row.

 Create a variable port V_salary and assign IIF(V_count=1, NULL, V_prev_salary) to it.

 Create variable port V_prev_salary and assign Salary.

 Create output port O_prev_salary and assign V_salary.

 Connect the expression transformation to the target ports.

The ports in the expression transformation will be:

 employee_id

 salary

 V_count=V_count+1

 V_salary=IIF(V_count=1,NULL,V_prev_salary)

 V_prev_salary=salary

 O_prev_salary=V_salary
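
A short Python sketch of how the variable ports carry the previous row's salary (illustrative only):

employees = [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]  # (employee_id, salary)

v_count = 0
v_prev_salary = None  # holds the previous row's salary between iterations
for employee_id, salary in employees:
    v_count += 1
    v_salary = None if v_count == 1 else v_prev_salary  # IIF(V_count=1, NULL, V_prev_salary)
    v_prev_salary = salary                              # V_prev_salary = salary (evaluated last)
    print(employee_id, salary, v_salary)                # O_prev_salary = V_salary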

Q106. What is the scenario in which the Informatica server rejects files?

Ans: The Informatica server rejects files when there is a rejection in the update strategy transformation. In such a rare scenario, the database containing the information and data also gets interrupted.

Q107. What will happen in the following scenario:

If the SELECT list COLUMNS in the Custom override SQL Query and
the OUTPUT PORTS order in SQ transformation do not match?

Ans. Such a scenario where the SELECT list COLUMNS in the Custom
override SQL Query and the OUTPUT PORTS order in SQ transformation do
not match – may result in session failure.

Q108. What can be done to enhance the performance of the joiner condition?
Ans. The joiner condition performance can be enhanced by the following:

 Sort the data before applying to join.

 If the data is unsorted, then consider the source with fewer rows as
the master source.

 Perform joins in a database.

 If joins cannot be performed for some tables, then the user can
create a stored procedure and then join the tables in the database.

Q109. How do you load alternate records into different tables through mapping flow?

Ans. To load alternate records into different tables through the mapping flow, add a sequence number to the records and then divide the record number by 2. If it divides evenly, move the record to one target; if not, move it to the other target.

It involves the following steps:

 Drag the source and connect it to an expression transformation.

 Add the next value of a sequence generator to the expression transformation.

 Make two ports, Odd and Even, in the expression transformation.

 Write the expressions below

v_count (variable port) = v_count + 1

o_count (output port) = v_count

 Connect a router transformation and drag the ports (products, o_count) from the expression into the router transformation.

 Make two groups in the router

 Give the group conditions, for example MOD(o_count, 2) = 1 for the odd group and MOD(o_count, 2) = 0 for the even group

 Send the two groups to different targets
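
A compact Python sketch of the odd/even routing (illustrative only; the product values are made up):

products = ["P1", "P2", "P3", "P4", "P5"]

odd_target, even_target = [], []
for o_count, product in enumerate(products, start=1):  # running sequence number
    if o_count % 2 == 1:          # MOD(o_count, 2) = 1 -> odd group
        odd_target.append(product)
    else:                         # MOD(o_count, 2) = 0 -> even group
        even_target.append(product)

print(odd_target)   # ['P1', 'P3', 'P5']
print(even_target)  # ['P2', 'P4']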

Q110. How do you implement security measures using a Repository Manager?

Ans. There are 3 ways to implement security measures:

 Folder Permission within owners, groups, and users.

 Locking (Read, Write, Retrieve, Save, and Execute).

 Repository Privileges
Q111. How can you store previous session logs in Informatica?

Ans. The following steps will enable you to store previous session logs in
Informatica:

 Go to Session Properties > Config Object > Log Options

 Select the properties:

Save session log by –> Session Runs

Save session log for these runs –> change this to the number of log files that you want to save (the default is 0)

 If you want to save all of the log files created by every run, select the option Save session log by –> Session TimeStamp

Q112. Mention the performance considerations while working with Aggregator Transformation.

Ans. The following are the performance considerations while working with
Aggregator Transformation:

 To reduce unnecessary aggregation, filter the unnecessary data before aggregating.

 To minimize the size of the data cache, connect only the needed input/output ports to the succeeding transformations.

 Use sorted input to minimize the amount of data cached and enhance the session performance.

We hope that this interview blog, covering Informatica interview questions for freshers and experienced candidates as well as scenario-based Informatica interview questions, will help you crack your upcoming interview.
