This document discusses the functionality and limitations of the Router component in SAP CPI, particularly routing messages based on conditions such as 'Country'. It highlights the importance of the order of conditions and suggests optimized approaches, such as the Splitter and Multicast patterns, for routing multiple records more efficiently. It also covers error handling, tracing, and debugging techniques in integration flows.

Router:

How to check whether a particular node exists in SAP CPI:

exists(//CustomerID)

The primary limitation of the Router component is that it processes only one routing branch per message: even if multiple conditions are met, only the first matching branch is evaluated and executed.

The order in which conditions are defined within the Router is therefore crucial, as the first matching condition determines the route.


When do I use this pattern?

This pattern can be applied when the message needs to be routed based on the contents of the message. For example, if a delivery is to be made in the UK, it is sent to the Delivery System in the UK; if the delivery is to be made in India, it is sent to the Delivery System in India. In this example, the value of the Country element can be used to send a message to one system or another.

In this blog, we'll focus on sending a message into a different Delivery System based on the content of
the Country element.

Let's consider these two payloads:

Input 1:

<Delivery>
<Country>UK</Country>
</Delivery>

Input 2:

<Delivery>
<Country>India</Country>
</Delivery>
Integration Flow

Order | Route Name | Condition Expression        | Default Route
1     | India      | /Delivery/Country = 'India' | No
2     | UK         | /Delivery/Country = 'UK'    | No
3     |            |                             | Yes

Conclusion

Here, the DS is a central delivery system sending deliveries to delivery systems in each country.

The routing has been achieved using the 'Country' Router; the Routing Condition table above shows its configuration.

I am trying to understand the functionality of the Router.

This is my XML

<root>
<players>
<player id="1">
<country>CAN</country>
<name>Siddhart</name>
</player>
<player id="2">
<country>USA</country>
<name>Jorge</name>
</player>
<player id="12">
<country>CAN</country>
<name>Pradeep</name>
</player>
</players>
</root>

I am creating this flow:


The Route 2 have the following condition:

/root/players/player[country='CAN']

The Route 1 is Default route.

However, the output contains the same 3 records as the input, in JSON format.

This is the result of the trace.

My question is:

Shouldn't the system route through Route 1 when the player has USA as its country?

How can I get the result that I want?

Answer:

You can use a Content Filter if you only want records for certain countries: https://blogs.sap.com/2017/06/01/sap-cloud-platform-integration-content-filter-in-detail/

Or, if you want to take different actions based on the country, you can use the Splitter pattern: https://blogs.sap.com/2020/09/21/sap-cloud-platform-integration-general-splitter/

As currently designed, the Router evaluates the condition against the whole payload: the XPath matches as soon as any record satisfies it, and the entire payload is routed down that one branch.
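A minimal sketch of the Splitter + Router variant for this payload (step names and conditions are illustrative, not a fixed recipe):

General Splitter: Expression Type = XPath, XPath Expression = /root/players/player
Router, Route 2 condition (XML expression): //player[country='CAN']
Router, Route 1: Default Route

After the split, each message contains exactly one player record, so the Router condition is evaluated once per record instead of once for the whole payload.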
How to Pass two conditions with multiple values in a Router using AND in a Non-XML expression?

If you receive an incoming payload and want to store the order ID, you can use a Content Modifier: create one property named orderId, choose the XPath type, and provide the XML path to the order ID there.
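For example, the Content Modifier entry could look like this (the XPath is illustrative and depends on your payload structure):

Action: Create | Name: orderId | Source Type: XPath | Data Type: java.lang.String | Source Value: /Order/OrderID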

Whenever you apply a Filter, you need to add a Content Modifier afterwards, because the Filter removes the root element (the Content Modifier can re-add it).

Suppose you receive an incoming payload and want to send it to different receivers: Multicast is used. If you want to send it to different receivers depending on a condition, Router is used.

Splitter + Router – if you want to send only the relevant data to each receiver instead of the entire payload.

Multicast + Filter – likewise sends only the relevant data to each receiver instead of the entire payload, but without splitting the message.

Write Variables – mostly used in delta loads.

Request-Reply (synchronous call): calls an external receiver system synchronously and retrieves a response.

Send: used to configure a service call to a receiver system where no reply is expected.

Filter value types: Nodelist, Node, Boolean, String, Integer.



Think of using a lookup table in Excel: you read one column from one Excel file, look up the same data in another Excel file, and retrieve some related data.

Value mapping works the same way: from the incoming message you pick up one value, look it up in a table-like structure, and push the related information onward. The incoming message carries one value, and that value is translated to the value expected on the receiving side. That is what value mapping does.
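Value mappings are usually consumed by the Value Mapping step in a message mapping, but they can also be read from a Groovy script. A minimal sketch (the agency and identifier names, and the countryCode property, are illustrative and must match the value mapping artifact deployed on your tenant):

import com.sap.gateway.ip.core.customdev.util.Message
import com.sap.it.api.ITApiFactory
import com.sap.it.api.mapping.ValueMappingApi

def Message processData(Message message) {
    def api = ITApiFactory.getApi(ValueMappingApi.class, null)
    // Illustrative agencies/identifiers - align with your value mapping artifact
    def source = message.getProperty("countryCode") as String
    def target = api.getMappedValue("SourceAgency", "Country", source, "TargetAgency", "Country")
    message.setProperty("targetCountryCode", target)
    return message
}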

Process Call – you add a Process Call step to your main integration flow, which allows you to select and execute the desired Local Integration Process from the available local integration processes.

Local Integration process:

---------------------------------------------------------------------------------------------------------------------------------------

Add a Looping Process Call step to repeatedly execute the steps defined in a Local Integration Process until the condition is met or the maximum number of allowed iterations is reached, whichever comes first.

Now, we will design a scenario in cloud integration. We will fetch large data from OData and loop it
through a specific condition (looping process call). Then, we will observe the results together.

Looping Process Call: https://community.sap.com/t5/enterprise-resource-planning-blogs-by-members/sap-cloud-integration-looping-process-call/ba-p/13598555
1. First, we will create a CPI endpoint so that we can make calls to the service.

Figure 2. Sender HTTPS Adapter

Adapter Type: HTTPS


Address: Specific
Step 2. The looping process call runs according to the condition specified in the "Condition Expression" field. By stating ".hasMoreRecords contains 'true'", we indicate that the loop will continue to run as long as there are more records to fetch (see the full expression sketched after the figure).

When this condition returns false, the loop ends.
Figure 3. Loop Process Call
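With the OData V2 receiver adapter in paged mode, the adapter sets an exchange property named <ReceiverName>.<ChannelName>.hasMoreRecords, so the full condition expression typically looks like the following (the receiver and channel names here are illustrative):

${property.ODataReceiver.ODataChannel.hasMoreRecords} contains 'true'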

Step 3. OData information.


Step 4. We use the "select" clause to choose which fields we want to retrieve from the Orders entity.

Our method is GET.

We need to check "Process in Pages". If we don't, the system will send all the data at once after entering the loop once.

Figure 5.Odata Adapter Processing Information

Step 5. After passing through the Filter, the data will no longer be wrapped in "Orders" but will start with "Order". This is because we keep only "Orders/Order", since the data is sent in fragments. After all the fragmented data has been sent, we merge it in the "Message Body" of a Content Modifier.

Figure 6.Filter
Step 6. ${property.payloadStack}${in.body}: we use this expression to keep appending each incoming data fragment to the body.

Figure 7. Content Modifier-Exchange Properties-Append Body

Step 7. We re-add the "Orders" tag, which we removed with the Filter, in this Content Modifier (sketched after the figure). Once the loop is completely finished, we add the merged data as a property.

Figure 8.Content Modifier-Message Body
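Assuming the property from Step 6 is named payloadStack, the Message Body of this Content Modifier would look roughly like this (a sketch, not necessarily the exact configuration shown in the screenshots):

<Orders>${property.payloadStack}</Orders>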

Step 8. To indicate that the final data will be in XML format, we add the "Content-Type" header.
Figure 9.Content Type

Step 9. We fill in the necessary information for the email adapter.

Figure 10.Mail Adapter Connection Information

Step 10.We determine who the email will come from and who it will go to.
Figure 11.Mail Adapter Processing Information

Step 11. Save and deploy. Once deployed, we call the CPI endpoint we created using the GET method in Postman.

Step 12. We are making a call to the CPI service using the CPI username and password.

Figure 12.Postman

Step 13. It entered the loop a total of 6 times; on the 6th request, since there was no data left, it combined the data sent in fragments, exited the loop, and continued to 'End'.
Figure 12.Monitoring

Looking at the first loop: in the first request it fetched the first 200 records from the entire data set, and the expression "$skiptoken=10447" indicates that the next loop will start with OrderID 10448.

As data is added in each loop, there are 400 records after the 2nd request; when it enters the 3rd loop, it won't fetch the same initial 400 records again. Similarly, the next loop's data will start with OrderID 10648.

The important point to note is that it continues to loop as long as the condition we set is met, i.e., as long as it evaluates to true.

When we check the final step, we see that the condition returns false, indicating that all the data has been fetched. Since the loop process has ended, we learn that the last record has OrderID 11047.

Finally, I added an email adapter, which sends all the information via email.
Aggregator – used when the source sends more than one incoming message and they must be combined into one.
**Router in SAP CPI – Common Missteps and Optimized Approaches**

When working with XML payloads in SAP CPI, routing based on specific conditions (like Country) is a
common use case. Most developers naturally reach for the **Router**, and while it’s an excellent tool,
there’s more to consider to ensure your flow is **efficient and scalable**.

### The Scenario


Imagine you receive a payload like the one shown earlier in this document.

You want to process records differently based on the **Country**. For example, all "India" records
might go through a specific branch.

### The Default Approach


The **Router** checks the `Country` value in the first record of the payload and routes the **entire
payload** based on that.

This is fine for simple use cases, but what if your payload contains multiple records for different
countries?

---

### Optimized Approaches 🚀

1️⃣ **Splitter + Router**
- Use a **Splitter** to process each record individually.
- Then use a **Router** to evaluate each record and route it accordingly.

**Why?**
This ensures accurate routing for all records within the payload, not just the first one.

---

2️⃣ **Multicast + Filter**
- Use a **Multicast** step to create multiple processing paths for each condition (e.g., one for India,
one for others).
- In each path, apply a **Filter** to process only the relevant records.

**Why?**
This approach avoids splitting but still enables parallel processing of records based on conditions.

---
### Pro Tip: Dynamic Adapter Configuration
Instead of creating multiple receivers for different countries, use **Dynamic Adapter Configuration**.
- Set a dynamic endpoint based on the record’s country.
- Use one receiver with **adapter conditions** to optimize resource usage.

---

### Key Takeaways


✔️**Avoid Duplicate Steps**: Design your flow to minimize redundancy.
✔️**Enhance Scalability**: Use dynamic configurations for receivers instead of hardcoding them.
✔️**Improve Performance**: Choose the right combination of Splitter, Router, or Multicast based on
your payload and requirements.

**This tip might not be new for seasoned developers, but for learners, it's an essential foundation for
designing optimized integration flows.**

- ${date:now:yyyy-MM-dd} - Current date/time with format
- ${property.SAP_MessageProcessingLogID} - Message GUID
- ${CamelFileName} - File name
- ${CamelHttpUri} - HTTP URI (URL with query string)
- ${CamelDestinationOverrideUrl} - SOAP adapter URL
- ${exception.message} - Error message
- ${exception.stacktrace} - Error stacktrace
- ${camelId} - Iflow name
- ${SAP_ApplicationID} - ID created in the Message Processing Log for searching in monitoring

Interview questions:

- Optimization techniques while developing iflows
- What kinds of issues do we get in production?
- Why only OData for SF integration?
- Name 5 SF OData APIs
- How do you analyze the requirements, and how do you develop an iflow in detail?

These are the questions I was asked in a TCS walk-in.

${file:Onlyname.noext}_${date:now:yyyy-MM-dd HH:mm:ss SSS}

Can you please help me understand in which situations we go for an Error End vs. an Escalation End? The only difference I can see is the message status, which is Failed / Escalated. Apart from this, what is the purpose of each, and have you come across real situations needing to use them separately? Please share your thoughts.
In SAP CPI, an *Error End* event is used when the integration flow encounters a critical failure that
cannot be recovered, such as a mapping error, connectivity issue, or missing mandatory data, and needs
to explicitly terminate while logging the error for monitoring and troubleshooting. For example, if an API
call fails due to invalid credentials, the flow can be terminated using an Error End to ensure the issue is
logged as a failure in monitoring. An *Escalation End* event, on the other hand, is used to signal a non-
critical error or exceptional scenario that may require alternative handling or routing within a larger
process context, such as notifying a stakeholder or triggering a compensatory process. For instance, if a
stock check API returns "out of stock," an Escalation End could notify the sales team to follow up, while
allowing other parts of the process to continue. Use *Error End* for unrecoverable issues and
*Escalation End* for managed deviations that do not halt the overall process.

In SAP CPI, *trace* mode captures detailed, end-to-end information about the integration flow, including intermediate payloads, headers, and properties, to help analyze the flow step by step; it impacts performance significantly and is typically used for troubleshooting. In contrast, *debug* is more focused on testing specific components such as Groovy scripts, enabling developers to identify logic errors or configuration issues within a particular step, with a lower performance impact; logs are written from scripts into the Message Processing Log. Trace is for operational analysis, while debug is for development and testing.

Multicast (Multiple Receivers)


Parallel: all branches are executed even if one of them fails, and the status of the iflow in Message Processing will be Failed.

Sequential: the branches are executed in sequential order; if any branch fails, the remaining branches are not executed.

Multicast with a single receiver (-> Join + Gather)

In both parallel and sequential multicast, if any branch fails, Join and Gather are not executed.

Parallel with Join + Gather: if one of the branches fails, the remaining branches still execute, but the message does not reach Join and Gather, and the message processing status is Failed in the Monitor section.

Sequential with Join + Gather: the branches are executed in sequential order; if one of the branches fails, the remaining branches are not executed, the message does not reach Join and Gather, and the message processing status is Failed in the Monitor section.

When a message is split into multiple branches, changes made to headers or properties within one branch are not reflected in the other branches: any modification to message headers or properties within one multicast branch is not visible to the other branches.

In a multicast pattern, the main branch of the integration flow is split into multiple child branches that
are processed either in parallel (parallel multicast) or in a defined order (sequential multicast).

There are the following constraints:

- Headers and properties set in the main branch before the multicast step are available in the child branches.
- Headers and properties set in a child branch aren't available in other child branches.
- When the multicast branches are merged again in a Join and Gather step, properties that are set in the child branches are available in the main branch. However, headers that are set in the child branches aren't available in the main branch.
Assume that in a sequential multicast scenario there's a requirement that a value set in one branch be available in a subsequent branch.

In this case, you have the option to use local persistence (a variable or the data store) to share this data across the different branches, as sketched below.
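As a sketch, the writing branch could persist the value via the data store from a Groovy script (the store name, entry id, and property name are illustrative; the com.sap.it.api.asdk classes are the Data Store script SDK):

import com.sap.gateway.ip.core.customdev.util.Message
import com.sap.it.api.asdk.datastore.DataBean
import com.sap.it.api.asdk.datastore.DataConfig
import com.sap.it.api.asdk.datastore.DataStoreService
import com.sap.it.api.asdk.runtime.Factory

def Message processData(Message message) {
    def service = new Factory(DataStoreService.class).getService()
    if (service != null) {
        // Branch 1: persist the value under a fixed entry id
        def bean = new DataBean()
        bean.setDataAsArray(message.getProperty("sharedValue").toString().getBytes("UTF-8"))
        def config = new DataConfig()
        config.setStoreName("MulticastShare")
        config.setId("branch1-value")
        config.setOverwrite(true)
        service.put(bean, config)
        // Branch 2 would read it back with service.get("MulticastShare", "branch1-value")
    }
    return message
}

A Write Variables step is the simpler alternative when a single scalar value is enough.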

Behavior of CPI's message header/property in a multicast

Sample Integration Flow

To test this out, the below integration flow is used.

- An HTTPS sender is used to trigger a message into the integration flow.
- The main route is split into two branches using a sequential multicast.
- Script1 - Creates headers and properties in the main route
- Script2 - Modifies the headers and properties (from the main route) and creates a new header and property
- Script3 - Accesses all headers and properties created above

Additionally, the headers/properties created in the main route are tested with the following object types:

- java.lang.String
- java.util.HashMap

Test Results

With additional logging in each of the scripts, the content of the message headers and properties are
logged into the MPL as attachments. Below are the test results when a message is triggered into the
integration flow.

1 - Main Route

The following headers and properties are created and populated with data: MainHeader, MainProperty, MainHeaderMap, and MainPropertyMap.
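A sketch of what Script1 might look like (names taken from the log above; the blog's exact script isn't shown, so this is an assumption-based reconstruction):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // String-based header and property (immutable values)
    message.setHeader("MainHeader", "original")
    message.setProperty("MainProperty", "original")
    // HashMap-based header and property (mutable objects shared by reference,
    // which is why modifications made in a branch become visible later)
    message.setHeader("MainHeaderMap", [value: "original"] as HashMap)
    message.setProperty("MainPropertyMap", [value: "original"] as HashMap)
    return message
}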

2 - Route 1

In the first branch of the multicast, the main headers/properties are modified.
Additionally, Route1Header and Route1Property are created and populated. These are highlighted in
yellow below.

3 - Route 2

Finally, when the message reaches the second branch of the multicast, the following are observed in the logs:

- Modifications made in Route 1 to the String-based header/property of the main route are not seen.
- The header/property created in Route 1 are not accessible.
- Modifications made in Route 1 to the HashMap-based header/property of the main route are seen.

Conclusion

The following table summarises the behavior of message headers/properties when using the multicast pattern.

Created In       | Object Type       | Accessible in Subsequent Branch | Modifications in Predecessor Branch Seen in Subsequent Branch
Main route       | java.lang.String  | Yes                             | No
Main route       | java.util.HashMap | Yes                             | Yes
Multicast branch | Any               | No                              | N/A
branch
If you noticed the other details in the above logs, you might have observed that the Camel Exchange ID
changes when it moves from the main route to the multicast branches. This can be verified further in
the MPL, where it indicates that a new Exchange ID is created during branching.

1. Parallel Gateway: All branches of the gateway are executed in parallel.

2. Sequence Gateway: Branches are executed in the order specified in the Properties tab.
Note: you have to use the Join and Gather steps together; a Gather step alone will not work.

What happens in case of Error?

The quality of service applied by default is "Stop on Exception". In Message Monitoring, the message always shows as Failed. A few combinations exist:

- If you have a Join + Gather step and one of the incoming branches (parallel or sequential) fails, then the processing is stopped.
- When the multicast is connected to multiple receivers, the successful branches still receive the message correctly. However, the overall message in monitoring is still shown as Failed.
- In a sequential multicast, if a branch fails to process a message, the processing is stopped with an exception thrown.

Mail Adapter:

You can download the server certificates: the server certificate chain, including the root certificate, intermediate certificate, and peer certificate.

Add those certificates under Monitoring -> Manage Security -> Keystore -> Add Certificate.

Click Confirm, and repeat for the remaining two certificates.
SFTP:

Dynamic Setting of Archive Directory for Post Processing in SFTP Sender Adapter

In this blog, I demonstrate how to move a file to different locations after processing has finished. It also serves as an answer to the question: how do you move the original file to another folder on the SFTP server in case of an exception?

The structure of this blog is as follows. First, the requirements from the question are specified. Then, a dummy scenario is created to fulfil the requirements. Finally, the solution is demonstrated.
Requirements

- Files in the 'input' directory must be processed
- When processing is successful, the processed file must be moved to the 'output' directory
- When processing fails, the processed file must be moved to the 'error' directory

Scenario

XML files will be read from the 'input' directory. Each XML file contains a single node 'Payload' with a single sub-node called 'Success'. If 'Success' is true, processing succeeds. If 'Success' is false, the process ends with an Error End.

Possible Payloads

Success

<Payload>
<Success>true</Success>
</Payload>

Failure

<Payload>
<Success>false</Success>
</Payload>

Solution
The solution uses Content Modifiers to set the property 'archiveDirectory'. In the success scenario, 'archiveDirectory' is set to 'output'. In the failure scenario, an Exception Subprocess catches the exception, and a Content Modifier therein sets 'archiveDirectory' to 'error'.

Flow Steps

Source for SFTP

Here, Source > Directory is set to '/input' as per requirements. Please fill in Address and Credential
Name as per your configurations.
Success? Router

The Success? Router routes based on the value of the Success element. If '/Payload/Success' is 'true', the 'archiveDirectory' property is set to 'output'; otherwise the flow ends with an Error End.

Set Archive Directory to 'output'

If successful, 'archiveDirectory' property is set to 'output'.

Set Archive Directory to 'error'

If failed, 'archiveDirectory' property is set to 'error'.


Processing for SFTP

After processing is complete (whether successfully or not), the file is moved to a location specified dynamically via the 'archiveDirectory' property.

Parameter         | Value
Post-Processing   | Move File
Archive Directory | /${property.archiveDirectory}/${file:name}


Execution

For testing this flow, the Success.xml and Failure.xml files, containing the payloads specified in the Possible Payloads section above, are put in the 'input' directory.

After the flow polls the files, Success.xml is put in the 'output' directory and Failure.xml is put in the 'error' directory.
Alternate Scenario(s)

Error occurs when sending the file to receiver SFTP Server

In this scenario, file is sent to SFTP server. However, 'Create Directories' option on SFTP Receiver is
disabled and the directory in which file must be put is not created on server to deliberately cause
exception.

Integration Process

SFTP Receiver Target Configuration

Please fill in Address and Credential Name as per your configuration.


SFTP Receiver Processing Configuration

Note that 'Create Directories' option is disabled.

Example Run
1. Input

File 'Non Existing Directory.xml' is uploaded to the 'input' directory. The file contents are the same as the Success payload under Possible Payloads above.

2. Success? Router

As '/Payload/Success' is 'true', the 'Yes' branch is followed.

3. Set Archive Directory to 'output'

Exchange Property 'archiveDirectory' is set to 'output'.

4. Send file to SFTPReceiver

File is sent to SFTPReceiver.

5. Error occurs because the directory 'nonexisting' does not exist

As the 'nonexisting' directory does not exist, an exception is thrown by SFTP and is caught in CPI as:

org.apache.camel.component.file.GenericFileOperationFailedException: Cannot change directory to: nonexisting, cause: 2: No such file or directory
6. Exception Subprocess is triggered

As an exception occurred, the Exception Subprocess is triggered.

7. Set Archive Directory to 'error'

Exchange Property 'archiveDirectory' is set to 'error'.

8. Exception Subprocess ends with a Message End, and the Processing tab of the SFTP sender takes over

In the Processing tab, Post-Processing is configured as Move File to /${property.archiveDirectory}/${file:name}. As per this configuration, the SFTP sender adapter moves the read file from the '/input' directory to the '/error' directory.

9. Output

Conclusion

As demonstrated, a file can be moved to different locations after processing has finished using a dynamic path. This works when the integration process ends with a Message End, i.e., the message completes either through the success path or through the error path via the Exception Subprocess. The exchange property 'archiveDirectory' is set at runtime and used to specify dynamically where the file should be moved.

You can use the combination of bramkeijers's comment and sriprasadshivaramabhat's answer like so:

In the first flow:
1. Read the file
2. Store the contents in a Data Store
3. Delete the file using Post-Processing

In the second flow:
1. On success, write the file to the 'archive' directory and end with an End Event
2. On failure, write the file to the 'error' directory and end with an Error End Event
Just wanted to add that I had a lot of trouble getting this to work. If I hardcoded 'error' into the Archive Directory field, it worked fine, but as soon as I set the property archiveDirectory to 'error', it stopped working (the file just stayed where it was).

In the end, this did work: ${property.archiveDirectory}/${file:onlyname.noext}.${file:ext}

The file coming from the SFTP was a CSV, not an XML (no idea if this is the reason the solution described in the blog post wasn't working for me).
Set Dynamic Adapter Parameters (File Name, Directory) – SAP BTP IS-CI (CPI)
In SAP Cloud Integration Suite (BTP-IS/CI/CPI), configuring dynamic file names is not only possible but can be done using a variety of techniques.

In the past, I wrote some articles on defining dynamic parameters such as the filename and directory in the receiver file adapters in SAP PI/PO, using techniques like ASMA and Variable Substitution. The SAP Integration Suite CI (BTP-IS/CI/CPI) technique, however, is more straightforward.

Common Use Cases of Dynamic File Name and Directory

- Adding a Timestamp to File Names: include a custom timestamp in the file name (e.g., filename_yyyyMMddHHmmss.xml).
- Creating Unique File Names with Message IDs: append a unique message ID to avoid file overwriting (e.g., filename_<messageId>.xml).
- Adding Custom Parameters (e.g., Sender or Receiver Information): dynamically include sender/receiver names in the file name (e.g., file_<senderID>_to_<receiverID>.xml).
- Adding Incoming Message Data Segments: dynamically include data elements from the incoming message, such as OrderID or InvoiceID, in the file name (e.g., <OrderID>_yyyyMMddHHmmss.xml).
- Determination of Target Location Based on Content: at runtime, determine the target directory in which the file should be saved based on the incoming message content, incoming filename pattern, etc. (e.g., move files starting with "Order_" to the "Order" directory).
Scenario – Content-Based File Passthrough Interface
I will use the following scenario to demonstrate how the target directory can be determined during
runtime and dynamically assigned to the receiver adapter.

We define the filename with a unique time stamp and copy the file name prefix from the incoming file.
Imagine a scenario where you have files with different file name prefixes in a certain directory in the
SFTP server. I want to build an iFlow that can fetch and route these files to the target based on their file
name prefix.

For example, files starting with "Order" should be moved to the "Orders" target folder on the SFTP server, invoices to the "Invoices" folder, and all other files to the "Other" folder.

In this scenario, we will make use of the following SAP Integration Suite interface development techniques:

- Standard header/exchange property parameters
- Custom header/exchange property parameters using a Content Modifier
- Camel Simple expressions
Step 1 – Configure the SFTP Sender Adapter
I am fetching all the files in the directory “In”. Here the Location ID is the location I have registered in
Cloud Connector. If you are interested in learning more you can check my complete Cloud Integration
with SAP Integration Suite online course.
Step 2 – Configure Content-Based Router
The filename of the incoming file is available in the header parameter "CamelFileNameOnly". We will route the files based on the prefix of the filename. Using a regex expression, we can check whether the filename matches the pattern we are looking for.

Regular expression to check if the file name starts with “Order”

${header.CamelFileNameOnly} regex 'Invoice.*'

Regular expression to check if the file name starts with “Invoice”

Step 3 – Make Use of Exchange Parameter or Header Parameter to Set the Directory
Let’s make use of content modifiers to determine the directory at runtime. We will have an exchange
property parameter named “directory” to set the value of the directory.
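For example (a sketch of this scenario's configuration): in the "Order" branch, a Content Modifier creates an exchange property

Name: directory | Source Type: Constant | Source Value: Orders

and the other branches set Invoices or Other accordingly. The receiver adapter then reads it as ${property.directory}.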

Step 4 – Configure the Receiver Adapter Using Dynamic Parameters

Use the exchange property to define the target directory in the receiver adapter configuration.
We will use a couple of Camel Simple expressions to define the filename dynamically.

${file:onlyname.noext}

Camel Simple expression to get the filename (the prefix) of the incoming file without the extension.

${date:now:yyyy-MM-dd}

Camel Simple expression to add the date as the 2nd part of the file name in the format yyyy-MM-dd.

${date:now:HH-mm-ss}

Camel Simple expression to add the time as the 3rd part of the file name in the format HH-mm-ss (note the lowercase mm and ss: in date patterns, uppercase MM means months and S means milliseconds).

Other Methods of Setting a Dynamic File Name in SAP Integration Suite CI (BTP-IS/CI/CPI)
In the example, we made use of a custom Exchange/Header Parameter, a standard header parameter
and a Camel Simple Expression to dynamically define the directory and filename at the receiver adapter.

However, other methods can set adapter parameters dynamically at runtime.


Using a Groovy Script or a UDF
Groovy scripting allows for complex logic when setting dynamic file names. This method is helpful when you need to combine multiple variables or perform complex transformation logic to define the adapter parameters.

import com.sap.gateway.ip.core.customdev.util.Message
import java.text.SimpleDateFormat

def Message processData(Message message) {
    // Get current timestamp
    def sdf = new SimpleDateFormat("yyyyMMddHHmmss")
    def timeStamp = sdf.format(new Date())

    // Get message ID
    def messageId = message.getHeader("CamelMessageId", String.class)

    // Build the file name dynamically: file_<messageId>_<timestamp>.xml
    def fileName = "file_" + messageId + "_" + timeStamp + ".xml"

    // Set the file name as a header read by the receiver adapter
    message.setHeader("CamelFileName", fileName)
    return message
}

Here we define a file name in pattern: file_<messageID>_<time stamp>.xml

Using Content from the Incoming Message


You can set a dynamic file name by extracting content from the incoming message payload or headers, such as a customer ID, order number, or invoice number, and appending it to the file name.
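A minimal sketch of that approach, assuming an XML payload with a root <Order> element containing an <OrderID> child (both element names are hypothetical):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def body = message.getBody(String.class)
    // Assumed payload shape: <Order><OrderID>12345</OrderID>...</Order>
    def order = new XmlSlurper().parseText(body)
    def orderId = order.OrderID.text()
    // The receiver file/SFTP adapter picks the name up from CamelFileName
    message.setHeader("CamelFileName", "Order_" + orderId + ".xml")
    return message
}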
How to achieve EOIO using SFTP Adapter in SAP Integration Suite

When dealing with file transfers through the SFTP adapter in SAP CPI, the challenge of ensuring that
each file is processed one at a time in order is pivotal. Currently SAP CPI doesn't have a standalone
feature of EOIO processing as in SAP PO.

In this blog, I'll explain how to achieve "Exactly Once In Order" processing with the SFTP adapter in SAP CPI.

Business requirement:

To transfer files from external SFTP server to SAP systems or to external applications.

The file should be processed exactly once in order. After successful completion of previous file only the
next file should be processed by SAP CPI.

Challenges:

Exactly-once-in-order processing is difficult to achieve in SAP CPI using standard features, since there is no built-in EOIO feature in the SFTP adapter.

Moreover, if the client's CPI tenant has multiple runtime nodes, the SFTP adapter will pick one file per node in a single poll, even if the "Maximum Messages per Poll" parameter is set to 1.

Let's explore the step-by-step process of how to achieve this requirement by creating an integration
flow using SFTP adapter in SAP CPI.

Prerequisites:

The filenames on the SFTP server should be static and have a sequence number at the end. Ex: abcd_1.txt

A done file is expected to trigger the sequential processing flow. The .done file should be in the same directory as the original files.

Setting up an Integration flow:

Let's create an integration flow with an SFTP sender adapter.


SFTP Sender Adapter configurations:

In the Processing tab, select the Read Lock Strategy 'Done File Expected'. By default this expects the source filename with a .done extension; it can be renamed as per our requirements.

Select Sorting as 'Timestamp' and Sorting Order as 'Ascending' to follow first in, first out.

Set Maximum Messages per Poll to '1' (this parameter is optional in our case).

Now the sender adapter polls for files; when a file is found in the directory, it additionally checks for the done file with the same name. Ex: if the file name is 'Test_1.xml', it checks for 'Test_1.xml.done' before sending the message.
After processing the message, the integration flow will create a .done file for the next file in the
sequence.

Let's use a groovy script to create the next file name in the sequence.

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Get the current file name from the headers
    def headers = message.getHeaders()
    def value = headers.get("CamelFileName")

    // Split off the extension
    def extension = value.substring(value.lastIndexOf('.'), value.length())
    def withoutExt = value.replaceFirst(/\.[^\.]+$/, '')

    // Get the sequence number at the end and increment it numerically
    // (a numeric increment avoids the String '9' -> ':' pitfall of Groovy's ++)
    def count = (withoutExt.split("_")[-1] as Integer) + 1
    def nameWithoutCount = withoutExt.replaceFirst(/\_[^\_]+$/, '')

    // Create the .done file name for the next file in the sequence
    def fileName = nameWithoutCount + "_" + count + extension + ".done"
    message.setProperty("FileName", fileName)
    return message
}

This script creates the .done file name for the next file in the sequence. The .done file acts as a flag to process the next file from the SFTP server.
For example: if the first file processed is 'Test_1.txt', the script creates the filename 'Test_2.txt.done'.

SFTP Receiver adapter configurations:

All connection configurations should be the same as on the sender: directory, address, credentials. For File Name, use the filename created by the script (the 'FileName' property it sets).

Let's test our scenario:

Files are placed on the server, with a .done file to trigger the process.

After successful processing of Test_1.txt, the .done file for the next file in the sequence is created on the server.

This creates a streamlined process, transferring files one by one in sequence through SAP CPI.
We can have multiple raw files in the SFTP folder, but only one '.done' file, for the first file in the sequence, should be placed. That way, even if multiple nodes poll in parallel, only one file is picked, as per the Read Lock Strategy.

Implementing Message Level Security in SAP CPI: A Deep Dive into PGP Encryption and Decryption

In the first part, we will introduce the fundamentals of PGP key exchange and explore how the roles of
signer and verifier work within this framework. This foundational knowledge will set the stage for our
subsequent discussions.

The second blog will delve into practical aspects, covering the creation of PGP keys and strategies for
monitoring and maintaining security within SAP CPI. Understanding these processes is essential for
implementing robust security measures in your integration scenarios.

Finally, in the third part, we will explore an end-to-end implementation of PGP encryption and decryption in SAP CPI. By the conclusion of this series, you'll have a comprehensive understanding of how to effectively secure your data exchanges and ensure the integrity of your communications.

Part 1: Understanding PGP Key Exchange and the Roles of Signer and Verifier

Welcome to the first part of our blog series on Message Level Security with PGP! In this installment, we
will briefly explore PGP key exchange and the essential roles of signer and verifier.

1: Introduction to PGP

- Definition: PGP (Pretty Good Privacy) provides encryption and authentication for secure data communication.
- Goals: ensure confidentiality, integrity, and authenticity.

For more details, please see the documentation pages below:

- PGP Encryptor: https://help.sap.com/docs/cloud-integration/sap-cloud-integration/define-pgp-encryptor?locale=zh-CNv...
- PGP Decryptor: https://help.sap.com/docs/cloud-integration/sap-cloud-integration/define-pgp-decryptor?locale=zh-C...

2: How PGP Works

- Key Pair: each user has a public key (shared) and a private key (kept secret).
- Encryption: data is encrypted using the recipient's public key, ensuring only they can decrypt it.

3: PGP Key Exchange

- Secure Distribution: public keys are shared among users through secure channels.
- Trust Models: users validate each other's keys within a web of trust.

4: Roles of Signer and Verifier

- Signer: creates a message and signs it with their private key, ensuring authenticity.
- Verifier: uses the sender's public key to verify the signature, confirming the message's integrity and origin.

5. Common scenarios

Scenario 1- CPI to Partner


Scenario 2- Partner to CPI

6: Benefits of PGP

- Enhanced Security: protects sensitive information.
- Data Integrity: confirms data has not been altered.
- Authentication: validates sender identity.

Conclusion

In this first part, we've covered the basics of PGP key exchange and the roles of signer and verifier. Stay
tuned for Part 2, where we will discuss PGP key creation and strategies for maintaining security in SAP
CPI!

Implementing Message Level Security : Creating and Managing PGP Key Pairs for SAP CPI Integration

In our previous blog, we explored the concept of PGP Key Exchange, focusing on the roles of the Signer
and Verifier. Now, in this blog, we’ll dive deeper into the practical aspects of security in SAP Cloud
Platform Integration (CPI). Specifically, we’ll cover how to create a PGP key pair for both SAP CPI and
your partners, as well as how to import and manage these keys within CPI monitoring. Additionally, we
will explore the configuration options available in the Manage PGP Keys section and walk you through
setting up the PGP Encryptor and PGP Decryptor in CPI.

1. Creating a PGP Key Pair for SAP CPI and Partners

Before securing your data, you must create the necessary PGP key pairs for both SAP CPI and your
integration partners. Here's how to go about it:

Step 1: Generating the PGP Key Pair

To ensure secure data exchanges between SAP CPI and your integration partner, you first need to create
a PGP key pair for both parties. In this demo, we will generate these key pairs
using https://onlinepgp.com.
Go to onlinepgp.com and create two separate key pairs for the demo:

CpiDemoKey: This key pair will be used for SAP CPI. It includes both the CpiDemoKey Public Key and
the CpiDemoKey Private Key.
PartnerDemoKey: This key pair will be used for your integration partner. It includes both
the PartnerDemoKey Public Key and the PartnerDemoKey Private Key.
The tool generates both public and private keys for each party, which can be downloaded for use. For both private keys, the passphrase is 'Admin', which is needed when importing the private keys in Monitoring.

Note: the PGP keys are added in the Resource section for your reference; use 'Admin' as the passphrase to import the keys.

Step 2: Distributing the Public Key to Partners

Share the CpiDemoKey Public Key with your integration partner. The partner will use this public key to
encrypt messages they send to SAP CPI.

Similarly, you will need to obtain your partner's PartnerDemoKey Public Key, which you will use to
encrypt messages sent to them.

Step 3: Importing Public and Private Key in SAP CPI


In SAP CPI monitoring, under Manage Security, there is a PGP Keys tile where you can import PGP keys.

To import a PGP key, follow these steps:

Click on the PGP Keys tile and you will see the UI below.

Now click Add; you will get two options: add Public Keys or Secret Keys.

Let's add the public keys first. Click Add. We have now added both public keys, as shown below.

Next, we will add the private keys for both the Partner and CPI.

Click Add and select Secret Keys.

Provide the passphrase that was set when the key pair was created in the tool (Admin).

Note: under Type you can see 'Secret, Public' for both keys, along with the validity state ('Valid Until') and the last-modified date/time.

2. In this section, we will look at the configuration of the PGP Encryptor and PGP Decryptor.

Scenario 1: Let's assume we need to encrypt the payload with the partner's public key and sign it with
the CPI private key.

In this case, the PGP Encryptor palette function is configured as follows:
Scenario 2: Let's assume we need to decrypt the payload with our private key and verify it using the
partner's public key.

In this case, the PGP Decryptor palette function is configured as follows:

Note that for the PGP Decryptor we do not need to provide the CPI private key to decrypt; no configuration option is available for this. To verify the signature, we need to provide the partner's public key.

However, the private key must be present among the PGP keys: the private key is fetched directly from the "secring.asc" file, while the public keys are stored in the "pubring.asc" file. To verify this, you can download the file and open it in any text editor.

Difference between Request Reply and Content Enricher:


In the case of Request-Reply, the response from the external system replaces the current payload. This
means that if you need both the old payload and the response from the external system, you need to
store the old payload in e.g. an exchange property before making the external call.

In the case of Content Enricher, the response from the external system is merged into the current payload, as per the step configuration. An example of this is a lookup of customer details: the returned information is added to a customer element in the current payload, thereby enriching it.
You can use the following XPath expression to capture the case where there is at least one Employee
element with no child elements:

//Employees/Employee[not(*)]

For example (parent node: Employees, child node: Employee): the second child node below is empty, so what condition can we write in the Router to restrict the data flow?

<Employees>
<Employee>
<Name>ACB</Name>
<Class>A</Class>
</Employee>
<Employee/>
</Employees>
Count expression in Router to check whether a field/node exists

In the Router, we often need to find out whether a node/field exists in the incoming XML, and route the path based on that.

Expression Type: XML

Condition: count(//PromisedDateTime) = 0

Extend the condition to include two double-quotes, like so:

As far as I can tell, you cannot reference the body directly in a non-XML router expression. What you can do is store the payload in a property before the Router step, and then add the following non-XML router expression:

${property.Payload} = null or ${property.Payload} = '""' or ${property.Payload.trim().length()} = '0'

I have an interface in which I am calling data from another interface, which stores the data in a write variable. In the current interface I read that variable as a property (a global variable), place it in the body of a Content Modifier, and pass it to the HTTP adapter body.

The issue is that when the body is blank, the interface fails, so I need to add a router condition here, but I am not sure how to write it. Can you please suggest something?
