PEGA For Developers
1. Introduction ........................................................................................................................................ 5
1.1 Purpose of the book...................................................................................................................... 5
2. Best Practices for Case Implementation in Pega ................................................................................ 5
2.1. Case Type Implementation .......................................................................................................... 5
2.1.1. Using Dynamic Class Reference (DCR) to Specify Class Name in Pega Implementation
Layer Case Types ............................................................................................................................. 5
2.1.2. Providing assignment and work status for better tracking and management ..................... 6
2.1.3. Use of Data Page for Work queue/User routing ................................................................... 6
2.1.4. Optimizing Pega Application Maintenance by Marking Relevant Records........................... 7
2.2. Case Stages Implementation ....................................................................................................... 8
2.2.1 Adding Case Types to the Application Record for Easy Configuration and Management..... 8
2.2.2. Importance of Defining Stages in Case Types ....................................................................... 8
2.2.3. Defining Entry Conditions and Gathering Related Details for Stages ................................... 9
2.2.4. Leveraging Case Statuses for Effective Case Management ................................................ 10
2.2.5. Using Standard Prefixes for work status ............................................................................. 10
2.2.6. Defining and implementing actions in Case Lifecycle ......................................................... 11
2.2.7. Creating Child Cases for Complex Workflows ..................................................................... 12
2.3. Case Attachments and Categories ............................................................................................. 13
2.3.1. Defining Custom Attachment Categories in Pega ............................................................... 13
2.4. Service Level Agreements (SLAs) and Urgency .......................................................................... 13
2.4.1. Defining Service-Level Agreements (SLAs) for Tasks in Case Management ....................... 13
2.4.2. Using Urgency to Convey the Importance of a Task in Case Management ........................ 14
2.5. Tracking and Auditing Changes .................................................................................................. 15
2.6. Locking Strategies and Default Tags .......................................................................................... 16
2.6.1. Setting Locking Strategies for Multiple User Access ........................................................... 16
2.6.2. Providing Default Tags for Easier Case Association ............................................................ 16
2.7. Sending Notifications ................................................................................................................. 17
2.7.1. Using Send Notification Shape and Notification Framework ............................................. 17
2.7.2. Extending Standard Pulse Notifications .............................................................................. 18
3. Best Practices for Data Access .......................................................................................................... 18
3.1 Best Practice for Data Pages ....................................................................................................... 18
3.1.1. Adding/Removing Parameters in Shipped Data Page Rules ............................................... 18
3.1.2. Key-Value Pair for Referencing Data Pages ........................................................................ 18
3.1.3. Use of 'Code-Pega-List' ....................................................................................................... 19
linkedin.com/in/christin-varughese-b968152b/
3.1.4. Consistency in Data Page Naming, Type, and Structure in Pega ........................................ 19
3.1.5. Managing Data Page Scope in Pega .................................................................................... 20
3.1.6. Selecting User-Level Access Group for Node-Level Data Pages in Pega ............................. 21
3.2. Best Practices for Data Transform ............................................................................................. 22
3.2.1. Best Practices for Looping in Pega Applications ................................................................. 22
3.2.2. Using 'Call superclass data transform' ............................................................................... 22
3.2.3. Adding comments to Data Transforms in Pega .................................................................. 23
3.2.4. Modularizing Data Transforms for Better Reusability and Maintenance ........................... 24
3.2.5. Transforming activities to data transforms for improved performance and maintainability
...................................................................................................................................................... 25
3.2.6. Reviewing and cleaning up disabled code in data transforms............................................ 25
3.2.7. Avoid Changing Class of Containing Page by Setting pxObjClass Property in Pega ............ 26
3.2.8. Defining Parameters in Data Transforms ............................................................................ 27
3.2.9. Improving Readability in Pega by Avoiding (<APPEND>) (<CURRENT>) and Using 'Append
and Map To' .................................................................................................................................. 28
3.2.10. Converting Date to DateTime in Pega............................................................................... 29
3.3 Best practice for Activity Rules ................................................................................................... 29
3.3.1. Importance of Adding Descriptive Step Description in Activity Rule .................................. 29
3.3.2. Avoid using Info forced logging level in activity rules for improved performance and
troubleshooting ............................................................................................................................ 30
3.3.3. Best Practices for Initializing Pages in Pega ........................................................................ 31
3.3.4. Commented Steps in Activities ........................................................................................... 31
3.3.5. Eliminating Obj-Validate and Page-Set-Messages in Activities with validate rule ............. 32
3.3.6. Enabling "Allow Direct Invocation from Client/Service" in Activity Rules .......................... 33
3.3.7. Localizing Page-Set-Message Method ................................................................................ 33
3.3.8. Avoid Using WriteNow Option in Obj-Save and Obj-Delete Activity Methods................... 34
3.3.9. Use of Step Pages and Applies To for Activity Calls ............................................................ 35
3.3.10. Acquiring Locks in Pega Activities ..................................................................................... 35
3.3.11. Using Obj-Delete-By-Handle method ............................................................................... 36
3.3.12. Specifying Step Pages in Pega Activities ........................................................................... 37
3.3.13. Limiting use of activity rules and refactoring into data transforms ................................. 38
3.3.14. Retrieving List of Objects in Pega...................................................................................... 38
3.3.15. Using local variables and properties in activities .............................................................. 39
3.3.16. Setting Activity Type in Security Tab for Better Performance and Security ..................... 40
3.3.17. Proper Verification of Clipboard Pages ............................................................................. 41
3.3.18. Do not use obj-save to save rules. Use "Call Save" instead .............................................. 41
3.3.19. Avoid hardcoding of pyWorkPage and pyWorkCover in Pega .......................................... 41
3.3.20. Handling Commit and Rollback in Pega ............................................................................ 42
4. Pega Best Practice for Section Rule .................................................................................................. 43
4.1. Design and Development Best Practices ................................................................................... 43
4.1.1. Using Visibility Expressions in Pega .................................................................................... 43
4.1.2. Understanding Field Value in Pega Section Rules ............................................................... 44
4.1.3. Consider Migrating to Auto-Generated Controls for Better Maintainability ..................... 45
4.1.4. Defining Style Formats in Pega Applications....................................................................... 45
4.1.5. Use of Wrapper Activity to Consolidate Multiple Actions .................................................. 46
4.1.6. Improving Performance with Defer Load for Tabs and Sections ........................................ 47
4.1.7. Using Images in Web Development .................................................................................... 48
4.1.8. Labelling controls using property description .................................................................... 49
4.2. Performance Best Practices ....................................................................................................... 49
4.2.1. Using Sections and Dynamic Layouts .................................................................................. 49
4.2.2. Optimizing Scroll Performance for Better User Experience ............................................... 51
4.2.3. Using Responsive Design for Table-Style Layouts ............................................................... 51
4.2.4. Streamlining User Experience by Reducing Unnecessary Clicks ......................................... 52
4.2.5. Enhancing User Experience through Efficient Use of Section Refresh in Pega................... 53
4.2.6. Use Set Value Instead of Data Transform to Set Flags ........................................................ 53
4.2.7. Using Text or Buttons instead of Icons for Controls ........................................................... 54
4.2.8. Separating Read-Only Display Control from Sections ......................................................... 55
4.3. Validation Best Practices ............................................................................................................ 56
4.3.1. Importance of Implementing Input Validation on Both Client and Server Sides in Pega ... 56
4.3.2. Form Validation and Data Model Lifecycle Validation in Pega ........................................... 57
4.4. Best practice for flow action rule ............................................................................................... 57
4.4.1. Disqualify Bulk Processing for Preprocessing Activity ........................................................ 57
4.4.2. Using Pre and Post Prefixes for Activities in Pega ............................................................... 58
5. Pega Best Practices for Report .......................................................................................................... 59
5.1. Performance .............................................................................................................................. 59
5.1.1. Specifying Max Records on Report Definition for Improved Query Performance ............. 59
5.1.2. Enabling Paging in Report Definition .................................................................................. 60
5.1.3. Performance considerations for filtering ............................................................................ 60
5.1.4. Optimizing properties for better query performance ........................................................ 61
5.1.5. Avoid using Ignore Case Configuration in Filter Condition ................................................. 62
5.2. Customization ............................................................................................................................ 62
5.2.1. Avoid using custom getContent activity for report data retrieval in Pega ......................... 62
5.2.2. Avoid using custom HTML control for formatting report columns .................................... 63
5.2.3. Efficient Data Retrieval: Fetch Only the Required Columns ............................................... 64
5.2.4. Best Practices for Optimizing Database Queries in Pega .................................................... 64
5.2.5. Managing Report Access: Best Practices for Adding Privileges and Restrictions ............... 65
6. Pega Best Practice for Property Rule ................................................................................................. 66
6.1. Why You Should Avoid Defining Max Length............................................................................. 66
6.2. Naming Properties in Pega: Why It Matters .............................................................................. 66
6.3. Avoid Using Local Lists as Table Types in Property Rules .......................................................... 67
6.4. Date and Time Property Types in Pega Applications ................................................................. 68
7. Other Generic best practices ............................................................................................................ 69
7.1. Avoid using explicit URLs in Pega applications .......................................................................... 69
7.2. Declare Expression and Named Page References...................................................................... 70
7.3. Performance Optimization of Decision Tables .......................................................................... 71
7.4. Performance Optimization......................................................................................................... 71
7.5. Security Roles ............................................................................................................................. 72
7.5.1. Custom Security Roles and Least Privileges ........................................................................ 72
7.5.2. Role-Based Access Control for Workbasket Assignments .................................................. 73
7.6. Best Practices for Maintaining Rules ......................................................................................... 74
7.6.1. Importance of Usage and Description fields in Pega .......................................................... 74
7.6.2. Label Usage in Pega Rule Names ........................................................................................ 75
7.6.3. Best Practices for Using Pega Provided Rules ..................................................................... 75
7.6.4. Use extension points wherever provided instead of overriding rules ................................ 76
7.6.5. Update references such as $ANY or $NONE with appropriate class references ................ 77
7.6.6. Efficient management of clipboard pages in Pega applications ......................................... 77
7.6.7. Best Practices for managing large text values in Pega Applications ................................... 78
7.6.8. Creating Reusable and Efficient Rules in Pega .................................................................... 79
7.6.9. Naming conventions for Workbaskets in Pega ................................................................... 79
7.6.10. Rule Check-in Comments .................................................................................................. 80
7.6.11. String Comparison in Pega: Use @equals function instead of '==' ................................... 81
7.7. Best practice for Pega Flow rule type ........................................................................................ 82
7.7.1. Model-driven approach for flow implementation .............................................................. 82
7.7.2. Configuring Audit Information in Flows .............................................................................. 82
7.7.3. Avoid using tickets in process flows.................................................................................... 83
8. Conclusion ......................................................................................................................................... 84
1. Introduction
1.1 Purpose of the book
The purpose of this book is to provide Pega developers with a comprehensive guide to Pega best
practices. Pega is a powerful platform that allows developers to create enterprise applications
quickly and efficiently. However, to achieve the full potential of the platform, developers need to
follow best practices to ensure that their applications are maintainable, scalable, and performant.
This book aims to provide developers with a set of guidelines that can be applied to various aspects
of Pega development, including case implementation, data access, section rules, and more. By
following these best practices, developers can ensure that their applications are designed and
developed in a way that meets business requirements, is easy to maintain, and can scale to handle
increasing volumes of data and users.
2. Best Practices for Case Implementation in Pega
2.1. Case Type Implementation
2.1.1. Using Dynamic Class Reference (DCR) to Specify Class Name in Pega Implementation Layer Case Types
When creating a case type in Pega, you can select the class directly in the "Create Case" smart shape. An alternative approach is to use a Dynamic Class Reference (DCR) to resolve the class name at runtime, which is particularly useful when creating implementation layer case types.
To use DCRs in this context, you can select the "Other" option in the "Create Case" smart shape and
enter a reference to a DCR data page that specifies the class name. The DCR data page can be
configured to use a Data Transform to determine the class name based on runtime conditions.
For example, let's say you are creating an implementation layer case type for a vehicle repair
process. You may have multiple subclasses of a base "Vehicle" class, such as "Car" and "Motorcycle".
Rather than selecting a specific subclass in the "Create Case" smart shape, you can use a DCR to
dynamically determine the class name based on user input.
To implement this, you can define a DCR data page called "D_VehicleDCR" that has a parameter
called "Type". The Data Transform used by the DCR data page, called "LoadVehicleDCR", determines
the class name based on the value of the "Type" parameter. For example, if the "Type" parameter is
"Motorcycle", the Data Transform sets the class name to "MyApp-VehicleRepair-Work-Vehicle-
Motorcycle".
In the "Create Case" smart shape, you can select the "Other" option and enter the reference to the
D_VehicleDCR data page using the syntax "D_VehicleDCR[Type:"Motorcycle"].pxObjClass". This tells
Pega to use the class name returned by the DCR data page as the class name for the case.
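The resolution performed by the LoadVehicleDCR data transform can be sketched in Python as a simple parameter-to-class lookup. Only the "Motorcycle" mapping comes from the example above; the "Car" entry and the function name are illustrative assumptions, not an actual Pega API.

```python
# Conceptual sketch of the LoadVehicleDCR data transform: given the Type
# parameter of the D_VehicleDCR data page, resolve the concrete
# implementation-layer class name exposed as pxObjClass.
VEHICLE_CLASS_BY_TYPE = {
    "Car": "MyApp-VehicleRepair-Work-Vehicle-Car",          # assumed entry
    "Motorcycle": "MyApp-VehicleRepair-Work-Vehicle-Motorcycle",
}

def load_vehicle_dcr(vehicle_type):
    """Return the class name the DCR data page would expose as pxObjClass."""
    try:
        return VEHICLE_CLASS_BY_TYPE[vehicle_type]
    except KeyError:
        raise ValueError(f"No implementation class mapped for type {vehicle_type!r}")

# D_VehicleDCR[Type:"Motorcycle"].pxObjClass would resolve to:
print(load_vehicle_dcr("Motorcycle"))  # MyApp-VehicleRepair-Work-Vehicle-Motorcycle
```

The key design point is that the smart shape never names a concrete subclass; adding a new vehicle type only requires extending the mapping behind the data page.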
Overall, using DCRs to specify the class name for a case type at runtime can provide greater flexibility
and modularity, especially when dealing with complex class hierarchies.
2.1.2. Providing assignment and work status for better tracking and management
When working with assignments in Pega, it is important to provide clear status updates to ensure
that work is being tracked and managed effectively. One way to achieve this is by always including
the status of the assignment and the status of work in the assignment task parameters.
In practical terms, this means that whenever a user opens an assignment, the status of the
assignment and the status of the work associated with that assignment should be displayed in the
task parameters section. This allows the user to quickly and easily understand the current status of
the work and what steps still need to be completed.
The benefits of providing this level of visibility are clear. By giving users easy access to the status of
the assignment and work, they are able to make more informed decisions about how to proceed
with the work. This can help to reduce errors, prevent duplicate work, and ensure that work is
completed efficiently.
For example, consider a loan processing application where a loan officer is assigned to review a loan application. By setting distinct statuses for the assignment and the work item, the loan officer can see at a glance whether the application has been reviewed and which steps remain, and can decide how to proceed accordingly.
In summary, always include both the assignment status and the work status in the task parameters. This visibility reduces errors, prevents duplicate work, and keeps work moving efficiently.
2.1.3. Use of Data Page for Work queue/User routing
Instead of hardcoding the workbasket or router user name in the routing configuration, it is
recommended to use a data page to retrieve the value dynamically. This ensures that the routing
can be easily modified or updated without changing the configuration rule. It also allows for greater
flexibility and scalability in case of changes in the user or workbasket assignments.
1. Create a data page that sources the workbasket or router user name from a data source,
such as a data transform, a report definition or a data table.
2. In the routing configuration, use the data page with a parameter to pass the workbasket or
router user name value dynamically.
The benefits of using a data page to retrieve the workbasket or router user name dynamically are:
1. Flexibility: The routing can be easily modified or updated without changing the configuration
rule. For example, if a new workbasket is added or a user is removed, the routing can be
updated by modifying the data source of the data page.
2. Scalability: As the application grows and changes, the routing can be scaled up or down by
adding or removing workbaskets or users without changing the routing configuration.
3. Reusability: The same data page can be reused across multiple routing configurations,
reducing the need to duplicate configuration rules.
Example: Suppose you have a routing configuration that routes a case to a specific workbasket
based on the case type. Instead of hardcoding the workbasket name in the routing configuration,
you can use a data page to retrieve the workbasket name dynamically. Here are the steps to
implement this approach:
1. Create a data page named D_WorkbasketName that sources the workbasket name from a
data transform. The data transform should use a case type parameter to determine the
workbasket name dynamically based on the case type.
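The lookup behind D_WorkbasketName can be sketched as follows. The case type names and workbasket identifiers are illustrative assumptions; the point is that routing resolves through a data source rather than a hardcoded value in the routing rule.

```python
# Conceptual sketch of the D_WorkbasketName data page: the data transform
# resolves a workbasket from the case type parameter, so the routing
# configuration never hardcodes a workbasket name.
WORKBASKET_BY_CASE_TYPE = {
    "BillingInquiry": "Billing@MyApp",   # assumed mapping
    "ProductSupport": "Support@MyApp",   # assumed mapping
}
DEFAULT_WORKBASKET = "Default@MyApp"     # assumed fallback

def d_workbasket_name(case_type):
    """Return the workbasket the routing configuration should use."""
    return WORKBASKET_BY_CASE_TYPE.get(case_type, DEFAULT_WORKBASKET)

print(d_workbasket_name("BillingInquiry"))  # Billing@MyApp
```

Adding or retiring a workbasket then means updating the data source behind the page, not touching the routing configuration itself.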
In this example, the routing configuration retrieves the workbasket name dynamically from the data
page D_WorkbasketName using a parameter page. The routing configuration is not hardcoded with
the workbasket name, which allows for greater flexibility and scalability in case of changes in the
user or workbasket assignments.
Overall, using a data page to retrieve the workbasket or router user name dynamically instead of
hardcoding it in the routing configuration is a recommended approach that provides greater
flexibility, scalability, and reusability.
2.1.4. Optimizing Pega Application Maintenance by Marking Relevant Records
When working with a complex Pega application, it is important to mark certain properties, actions,
rules, and other assets as relevant records. This helps to quickly identify and access critical
components of the application, especially during maintenance and updates.
The suggested approach is to use the "Mark as Relevant Record" feature in Pega. This can be done
by navigating to the specific asset and selecting the "Mark as Relevant Record" option. Once marked,
the asset will be easily accessible through the "Relevant Records" option in the Designer Studio.
If not done in this way, it can be difficult to identify and locate important assets in a large
application. This can lead to delays and errors during maintenance and updates, ultimately
impacting the overall performance of the application.
The benefits of marking important assets as relevant records include faster and more accurate
updates and maintenance, increased efficiency in development, and better organization of the
application.
For example, a property that is frequently used throughout the application can be marked as a
relevant record. This will ensure that developers can quickly locate and update the property when
needed, without having to search through the entire application. Similarly, a specific rule that is
critical to the application's functionality can also be marked as a relevant record for easy access.
Overall, marking important properties, actions, rules, and other assets as relevant records is a simple
yet effective way to optimize Pega application development and maintenance.
2.2. Case Stages Implementation
2.2.1. Adding Case Types to the Application Record for Easy Configuration and Management
When building an application in Pega, it is essential to add every case type to the list on the Cases &
Data tab of the application record. Doing so ensures that the case type can be created and that it has
the appropriate configuration to support its creation.
By adding each case type to the list and configuring it appropriately, you ensure that it is available
for use within the application. This best practice helps to prevent errors and ensures that all
necessary configurations are in place to support the creation of new cases.
For example, let's say you are building an application to manage customer service requests. You
have several different types of service requests, such as product support, billing inquiries, and
technical issues. To ensure that all of these case types are available for use in the application, you
would add each one to the list on the Cases & Data tab and configure them appropriately.
The benefits of adding every case type to the list and configuring it properly include:
1. Improved application functionality: By adding each case type to the list, you ensure that it
can be created and managed properly within the application, improving overall functionality.
2. Reduced errors: This best practice helps to prevent errors that can occur if a case type is not
properly configured or is missing from the list.
3. Streamlined development: By adding and configuring each case type early on in the
development process, you can streamline the development process and ensure that all
necessary configurations are in place from the beginning.
2.2.2. Importance of Defining Stages in Case Types
Defining stages in case types is an essential best practice in Pega development. A stage is a logical
step or phase in a case, and each stage can have one or more steps that represent specific tasks to
be completed. Defining stages in a case type helps to standardize processes and ensures consistency
in case handling. This best practice ensures that every case has a structured approach to case
resolution, from creation to closure.
To implement this best practice, every case type should have stages defined, even if they are only Create, Open, and Resolved. It is recommended to define a Create stage and at least one Resolution stage for each case type: the Create stage is the starting point for every case, and a Resolution stage is the endpoint that signifies the closure of a case.
Benefits of defining stages in case types include improved visibility and understanding of the case
resolution process, better control over case progression, and easier identification of potential
bottlenecks or inefficiencies in case processing. This practice also enables better reporting
capabilities, allowing for metrics such as case duration and stage durations to be measured and
analyzed.
For example, consider a case type for customer complaints. The case type can have stages defined
such as Create, Assign, Investigate, Escalate, and Resolve. Each stage can have specific tasks defined,
such as capturing the customer's complaint in the Create stage, assigning the case to an agent in the
Assign stage, investigating the issue in the Investigate stage, escalating to a supervisor in the
Escalate stage, and resolving the case in the Resolve stage.
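The complaint case type described above can be modeled as an ordered list of stages, each with its tasks. The task wording is paraphrased from the text; the helper function is a conceptual sketch, not a Pega API.

```python
# Conceptual model of the customer-complaint case type's stage sequence.
COMPLAINT_STAGES = [
    ("Create", ["Capture the customer's complaint"]),
    ("Assign", ["Assign the case to an agent"]),
    ("Investigate", ["Investigate the issue"]),
    ("Escalate", ["Escalate to a supervisor"]),
    ("Resolve", ["Resolve the case"]),
]

def next_stage(current):
    """Return the stage that follows `current`, or None at the end."""
    names = [name for name, _tasks in COMPLAINT_STAGES]
    idx = names.index(current)
    return names[idx + 1] if idx + 1 < len(names) else None

print(next_stage("Assign"))  # Investigate
```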
In conclusion, defining stages in case types is a crucial best practice in Pega development that should
be implemented for every case type. By following this practice, organizations can achieve
standardization, consistency, and improved control in their case handling processes.
2.2.3. Defining Entry Conditions and Gathering Related Details for Stages
As you define stages for your case types, it is important to establish clear entry conditions and
gather all the necessary details before a case enters each stage. This best practice ensures that your
case progresses smoothly and efficiently, without any unnecessary delays or confusion.
The suggested approach is to carefully consider the entry conditions for each stage, including any
prerequisites that must be met before the case can move forward. This may include the completion
of certain tasks, the resolution of specific issues, or the attainment of particular approvals. You
should also gather all the relevant details and information that will be required to successfully
complete the stage.
Implementing this best practice requires a thorough understanding of your case type and the
specific needs of each stage. This may involve consulting with stakeholders, subject matter experts,
and end users to ensure that all relevant details are captured and accounted for. You can also
leverage the capabilities of the Pega platform to automate the collection and validation of data, as
well as to enforce the entry conditions for each stage.
The benefits of defining entry conditions and gathering related details for stages include:
• Increased efficiency and productivity: By clearly defining entry conditions and gathering all
necessary details upfront, you can avoid delays and ensure that your case progresses
smoothly and efficiently.
• Reduced errors and rework: With all relevant details captured and validated, you can avoid
errors and the need for rework, which can save time and resources.
• Improved visibility and control: By defining clear entry conditions and gathering all necessary
details, you can gain better visibility and control over the case as it progresses through each
stage.
For example, let's say you are designing a case type for processing loan applications. The Create
stage of the case type might include entry conditions such as the submission of a completed
application form and the completion of any required credit checks. Gathering related details for this
stage might include capturing applicant information, employment details, and financial data.
The Open stage of the case type might require entry conditions such as the completion of additional
due diligence checks and the resolution of any discrepancies or issues identified during the
application review process. Gathering related details for this stage might include capturing any
additional documentation or information required to support the application.
linkedin.com/in/christin-varughese-b968152b/
Finally, the Resolved stage of the case type might require entry conditions such as the completion of
final approvals and the preparation of loan disbursement documentation. Gathering related details
for this stage might include capturing disbursement details and any other information required to
finalize the loan processing.
By defining clear entry conditions and gathering all the necessary details for each stage, you can
ensure that your loan application processing case progresses smoothly and efficiently, with minimal
delays or errors.
2.2.4. Leveraging Case Statuses for Effective Case Management
As a best practice, it is important to use case statuses to help communicate the progress of a case to
users. Case statuses can help users understand where a case is in its lifecycle, identify bottlenecks,
and prioritize their work. In order to effectively leverage case statuses, it is recommended to use the
option to update the case status during the case lifecycle.
To implement this approach, start by defining the appropriate set of statuses for each case type.
Typically, statuses can include "New", "Open", "Pending", "Resolved", "Closed", or any other custom
status that aligns with your business requirements. Once the statuses are defined, it is important to
configure the system to automatically update the status of a case as it progresses through its
lifecycle.
One way to accomplish this is by configuring the stage entry and exit actions to update the case
status. For example, if a case moves from an "Open" stage to a "Pending" stage, the system can be
configured to automatically update the case status to "Pending". The case status can also be
updated manually using a flow action or a service activity.
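A minimal sketch of this stage-entry pattern, with stage and status names chosen purely for illustration (they are not shipped Pega values):

```python
# Map each stage to the status that should be applied on stage entry.
STAGE_STATUS = {
    "Intake":   "New",
    "Review":   "Open",
    "Approval": "Pending-Approval",
    "Resolve":  "Resolved-Completed",
}

class Case:
    def __init__(self):
        self.stage = None
        self.status = None
        self.history = []          # audit trail of (stage, status) changes

    def enter_stage(self, stage):
        """Simulate a stage-entry action that also updates the case status."""
        self.stage = stage
        self.status = STAGE_STATUS[stage]
        self.history.append((stage, self.status))

c = Case()
c.enter_stage("Intake")
c.enter_stage("Review")
print(c.status)    # Open
print(c.history)   # [('Intake', 'New'), ('Review', 'Open')]
```

In Pega, the same effect is achieved declaratively by setting the case status in the stage configuration, so no custom code is needed.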
The benefits of leveraging case statuses include:
• Improved communication: Case statuses provide users with an at-a-glance view of where a
case is in its lifecycle, helping to communicate progress and identify bottlenecks.
• Increased productivity: Users can prioritize their work based on case status, helping to
ensure that they focus their efforts on the most critical tasks.
• Better reporting: Case statuses can be used to generate reports and dashboards, providing
insights into case volumes, backlog, and trends.
For example, let's consider a customer service case type. The case type can have statuses such as
"New", "Open", "Pending Customer Response", "Pending Internal Review", "Resolved", and
"Closed". As the case moves through each stage, the system can be configured to automatically
update the case status to reflect the current state of the case.
By leveraging case statuses, customer service representatives can quickly identify the status of a
case and prioritize their work accordingly. Managers can also use case statuses to track case volume,
identify bottlenecks, and monitor team performance. Overall, using case statuses can help ensure
that cases are managed effectively and efficiently, improving customer satisfaction and business
outcomes.
2.2.5. Using Standard Prefixes for work status
As a best practice, use one of the following prefixes to support standard reports and maintain
continuity between your custom status and standard status values: New-, Open-, Pending-, or
Resolved-. These prefixes help to maintain a consistent and clear understanding of the current state
of work items in your application.
The "New-" prefix is typically used to indicate that a work item has been created but has not yet
been processed or acted upon by anyone. When a user first creates a work item, it will often have a
status of "New" until it is assigned or otherwise acted upon.
The "Open-" prefix is generally used to indicate that a work item has been assigned to a user or
group and is currently being worked on. Once a work item has been assigned, its status will often be
updated to "Open" to indicate that it is actively being worked on.
The "Pending-" prefix is used to indicate that a work item is awaiting a specific action or input from
another party, and is therefore not actively being worked on. This prefix is often used when a work
item is waiting on an external system or user input before it can be completed.
The "Resolved-" prefix is typically used to indicate that a work item has been completed or
otherwise resolved. Once a work item has been resolved, its status will often be updated to reflect
its final state.
To implement these prefixes, you can define custom status values in your application's rule base,
and include the appropriate prefix as part of the status value's name. For example, you might define
a custom status value named "Open-In Progress" to indicate that a work item is currently being
worked on.
By using a consistent prefix for these custom status values, they are easily recognizable as being
related to the same concept and can be easily filtered and reported on.
In summary, using a standardized prefix for custom status values can help maintain continuity with
standard status values, support standard reporting, and ensure that custom status values are easily
recognizable as such.
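The prefix convention can be sketched as a small filtering and grouping helper; the status values used here are illustrative:

```python
STANDARD_PREFIXES = ("New-", "Open-", "Pending-", "Resolved-")

def has_standard_prefix(status):
    """True if a custom status value starts with a standard prefix."""
    return status.startswith(STANDARD_PREFIXES)

def group_by_prefix(statuses):
    """Bucket work statuses by their standard prefix ('Other' if none)."""
    groups = {}
    for s in statuses:
        prefix = next((p.rstrip("-") for p in STANDARD_PREFIXES
                       if s.startswith(p)), "Other")
        groups.setdefault(prefix, []).append(s)
    return groups

statuses = ["Open-In Progress", "Pending-Approval", "Resolved-Completed", "Draft"]
print(group_by_prefix(statuses))
# {'Open': ['Open-In Progress'], 'Pending': ['Pending-Approval'],
#  'Resolved': ['Resolved-Completed'], 'Other': ['Draft']}
```

A status such as "Draft" landing in the "Other" bucket is exactly the kind of value that standard Pega reports would fail to classify, which is why the prefix convention matters.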
2.2.6. Defining and implementing actions in Case Lifecycle
When developing a case management application in Pega, it is important to follow best practices for
defining and implementing actions in the case lifecycle. One such practice is to avoid hardcoding
actions directly into the user interface (UI) and instead use standard case processing or optional
case-wide and stage-wide actions. This approach provides greater flexibility, maintainability, and
consistency in the application.
To implement this best practice, first identify the actions required for each stage of the case. These
can include standard actions, such as “Edit Case”, “Change Stage”, or “Transfer Case”, as well as any
custom actions specific to the case. Next, create a list of case-wide and stage-wide actions that can
be reused across different cases and stages, respectively.
One of the key benefits of using standard case processing or optional case-wide and stage-wide
actions is that it reduces the amount of redundant code in the application, making it easier to
maintain and update. For example, if a new action needs to be added or an existing action needs to
be modified, this can be done in a central location and will be automatically updated across all cases
that use that action.
In addition, this approach also promotes consistency in the application by ensuring that similar
actions are implemented in a similar way across different cases and stages. This makes it easier for
users to learn and use the application and reduces the risk of errors or confusion.
Example: Suppose you are developing a case management application for processing insurance
claims. One of the stages in the case is “Review Claim Request” where a claims adjuster reviews the
claim details and determines whether the claim should be approved or rejected; instead of rejecting
the request, the adjuster can also “Request More Information”.
Instead of hardcoding these actions directly into the UI as buttons, you could define them as
standard case processing actions that can be reused across different cases and stages. By using this
approach, you can ensure that the actions in the application are consistent, maintainable, and
flexible, making it easier for users to process claims efficiently and accurately.
To manage complex workflows effectively, it is often necessary to break down work into smaller
pieces that have a relationship or dependency with one another. One way to achieve this in Pega is
by creating child cases. Child cases represent a subset of work related to a larger parent case, and
they can have their own stages, tasks, and data.
The suggested approach for creating child cases is to identify a clear relationship or dependency
between the tasks that need to be performed. For example, imagine a travel expense
reimbursement process. The process may include several tasks such as submitting receipts,
approving expenses, and disbursing funds. Instead of managing all of these tasks in a single case, the
process can be broken down into child cases, each of which represents a subset of work related to
the larger parent case. For example, the approval task can be a separate child case, which is initiated
when all the receipts are submitted.
When the parent case reaches a specific stage or step, a new child case can be created automatically
or manually using an appropriate flow action. The child case can then be processed independently
and separately from the parent case, with its own stages, tasks, and data.
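The travel expense scenario above can be sketched as a parent case that spawns an approval child case once a precondition is met. The class names and the two-receipt threshold are assumptions made for illustration:

```python
class Case:
    """Minimal case model: a type, a status, and any child cases."""
    def __init__(self, case_type):
        self.case_type = case_type
        self.status = "Open"
        self.children = []

class ExpenseCase(Case):
    def __init__(self):
        super().__init__("TravelExpense")
        self.receipts_expected = 2     # illustrative threshold
        self.receipts = []

    def submit_receipt(self, receipt):
        self.receipts.append(receipt)
        # Once every expected receipt is in, automatically create
        # the approval child case, which is then processed independently.
        if len(self.receipts) == self.receipts_expected:
            self.children.append(Case("ExpenseApproval"))

parent = ExpenseCase()
parent.submit_receipt("hotel.pdf")
print(len(parent.children))   # 0: still waiting for receipts
parent.submit_receipt("taxi.pdf")
print(len(parent.children))   # 1: approval child case created
```

In Pega, this is configured with the Create Case smart shape (or a Case Designer step) rather than code, but the dependency logic is the same.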
Not using child cases can lead to several issues, such as an overly complex parent case with too
many steps, difficulty in tracking progress and performance, and difficulty in delegating tasks to
different groups or individuals. Child cases help to simplify the parent case by breaking down the
workflow into smaller, manageable pieces. This makes it easier to track the progress and
performance of each task, and to delegate work to different groups or individuals.
The benefits of using child cases include:
1. Improved visibility: Child cases provide a better view of the progress of the work and its
status.
2. Simplified processing: Child cases simplify complex workflows by breaking them down into
smaller, more manageable pieces.
3. Easy delegation: Child cases make it easy to delegate work to different groups or individuals
based on their skills and availability.
4. Better performance tracking: Child cases enable better tracking of performance metrics for
each task, enabling continuous improvement.
In conclusion, creating child cases is a powerful technique for breaking down complex workflows
into smaller, more manageable pieces that are easier to track and delegate. By identifying clear
relationships and dependencies between tasks, child cases can be used to simplify the overall
workflow, improve performance tracking, and enable easy delegation of work to different groups or
individuals.
In Pega, it is a best practice to define custom attachment categories instead of relying solely on the
out-of-the-box categories. Defining custom categories allows for explicit categorization of related
files and enables easier referencing of attachments when sending them out by email. This helps to
maintain organization and improves efficiency in the case management process.
By defining custom attachment categories, you can more easily group related files together and
access them quickly. This helps to avoid confusion and saves time when searching for specific
attachments. Additionally, custom categories can be used in reporting and searching, allowing for
more efficient and accurate results.
If custom attachment categories are not defined, attachments can become disorganized and difficult
to find, leading to inefficiency in the case management process. Additionally, it may be harder to
keep track of related files, leading to mistakes or errors in the case management process.
For example, in a healthcare case management application, custom attachment categories can be
defined for medical records, lab results, and insurance information. Within each category,
subcategories can be defined for specific types of files. This allows for easy grouping and referencing
of related attachments and enables efficient communication of important information.
In summary, defining custom attachment categories is a best practice in Pega that can improve
organization, efficiency, and accuracy in the case management process. By explicitly categorizing
related files and referencing them easily, custom categories can save time, reduce errors, and
enhance the user experience.
In case management, it is important to ensure that tasks are completed within a reasonable
timeframe. One way to achieve this is by defining service-level agreements (SLAs) for each task. An
SLA is a contract between the service provider and the customer that specifies the level of service
that will be provided.
The suggested approach is to define relevant SLAs for every task in a case type. This involves
determining the amount of time that should be allocated to complete each task and specifying the
consequences if the task is not completed within the specified timeframe. It is also important to
identify the stakeholders who will be affected by the SLA and ensure that they are in agreement with
the terms.
SLAs can be implemented in Pega by using the Service Level Agreement rule. The rule can be created
and associated with a specific assignment in a case type. The SLA rule specifies the time interval
within which the task should be completed, and can also specify escalations or notifications if the
task is not completed within the specified timeframe.
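The escalation behavior of an SLA rule can be modeled roughly as follows. The intervals and urgency increments here are illustrative, not Pega defaults:

```python
def sla_urgency(elapsed_hours, initial=10,
                goal_hours=8, goal_bump=10,
                deadline_hours=24, deadline_bump=20):
    """Return the assignment urgency after `elapsed_hours`.

    Urgency rises once the goal passes and again once the deadline
    passes, mirroring how an SLA rule escalates overdue work.
    """
    urgency = initial
    if elapsed_hours >= goal_hours:
        urgency += goal_bump          # goal missed
    if elapsed_hours >= deadline_hours:
        urgency += deadline_bump      # deadline missed
    return min(urgency, 100)          # urgency is capped at 100

print(sla_urgency(2))    # 10: within goal
print(sla_urgency(12))   # 20: goal passed
print(sla_urgency(30))   # 40: deadline passed
```

An actual Service Level Agreement rule also lets you run escalation actions (notify a manager, transfer the assignment) at each interval, not just raise urgency.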
If SLAs are not defined, there is a risk that tasks may take longer than expected, leading to delays in
case resolution and potential dissatisfaction from customers. Additionally, without SLAs, it can be
difficult to measure the performance of the case management process.
The benefits of defining SLAs include:
1. Clear expectations: SLAs provide clear expectations for task completion, ensuring that
everyone involved in the case management process understands what needs to be done and
by when.
2. Improved efficiency: With SLAs in place, tasks are completed within a specified timeframe,
which helps to improve the efficiency of the case management process.
3. Better customer service: SLAs ensure that tasks are completed within a reasonable
timeframe, which helps to improve customer satisfaction.
For example, in a customer service case type, an SLA could be defined for the task of responding to a
customer inquiry. The SLA could specify that the task should be completed within 24 hours of
receiving the inquiry. If the task is not completed within the specified timeframe, an escalation could
be triggered to ensure that the inquiry is addressed in a timely manner.
In summary, defining relevant SLAs for tasks in case management is a best practice that helps to
ensure that tasks are completed within a reasonable timeframe, leading to improved efficiency and
customer satisfaction.
In case management, it is important to convey the importance of tasks to users so they can prioritize
their work accordingly. One way to do this is by using urgency, which is a numerical value that
represents the level of importance of a task. Higher urgency values indicate higher importance.
The suggested approach is to define urgency values based on the impact of the task on the business
or customer, the time sensitivity of the task, and any regulatory or compliance requirements. The
urgency values should be consistent across all cases to ensure that users can easily understand the
importance of a task.
Urgency can be managed through a local action or an SLA. Local actions are used to set urgency
values for individual tasks, while SLAs are used to set urgency values for groups of tasks based on
certain criteria. For example, an SLA could be created to set a higher urgency value for tasks that are
related to critical issues or require immediate attention.
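A minimal sketch of urgency-based prioritization; the task names and urgency values are made up for illustration:

```python
# Hypothetical worklist entries with assigned urgency values.
tasks = [
    {"task": "Billing inquiry", "urgency": 10},
    {"task": "Service outage",  "urgency": 90},
    {"task": "Address change",  "urgency": 25},
]

# Sort highest urgency first, as a worklist sorted by urgency would.
worklist = sorted(tasks, key=lambda t: t["urgency"], reverse=True)
print([t["task"] for t in worklist])
# ['Service outage', 'Address change', 'Billing inquiry']
```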
If urgency is not used to convey the importance of a task, users may struggle to prioritize their work
effectively, which can result in delays and missed deadlines. Urgency helps to ensure that tasks are
completed in a timely manner and that critical issues are addressed promptly.
The benefits of using urgency in case management include improved task prioritization, increased
efficiency, and better customer service. By clearly communicating the importance of tasks, users can
focus their efforts on the most critical work, which can lead to faster resolution times and higher
customer satisfaction.
For example, a customer service team might use urgency values to prioritize tasks related to
customer complaints. A task related to a customer who is experiencing a critical issue, such as a
service outage, would be assigned a higher urgency value than a task related to a less critical issue,
such as a billing inquiry. This would ensure that the most pressing issues are addressed first, which
can help to improve customer satisfaction and reduce customer churn.
When it comes to case management, it is important to track and audit changes to ensure
accountability, transparency, and compliance. Field-level auditing is a useful tool that allows you to
track changes to specific fields in your case records. However, it is important to use this feature
wisely to avoid performance issues and unnecessary clutter. Here are some best practices for field-
level auditing in case management:
1. Use only the fields that require tracking and auditing: Field-level auditing can be resource-
intensive, so it is important to avoid auditing unnecessary fields. Only track and audit the
fields that are critical for your business processes or compliance requirements.
2. Limit the number of fields to audit: Auditing too many fields can slow down the performance
of your case management system. Identify the key fields that require auditing and limit the
number of fields to be audited.
3. Keep track of audit logs: Audit logs can generate a lot of data, so it is important to manage
them properly. Set up a process to regularly review and purge old audit logs to avoid
unnecessary clutter.
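Selective auditing can be sketched as a whitelist check applied before writing an audit entry. The field names are illustrative, and this models the concept rather than Pega's built-in field-level auditing:

```python
# Only these fields produce audit entries; everything else is ignored.
AUDITED_FIELDS = {"ClientName", "CaseDescription", "CaseOutcome"}

def record_changes(old, new, audit_log):
    """Append an audit entry for each changed, whitelisted field."""
    for field_name in new:
        if field_name not in AUDITED_FIELDS:
            continue                  # skip fields we chose not to track
        if old.get(field_name) != new[field_name]:
            audit_log.append((field_name, old.get(field_name), new[field_name]))

log = []
old = {"ClientName": "Acme", "InternalNote": "x", "CaseOutcome": "Open"}
new = {"ClientName": "Acme Corp", "InternalNote": "y", "CaseOutcome": "Open"}
record_changes(old, new, log)
print(log)   # [('ClientName', 'Acme', 'Acme Corp')] — InternalNote ignored
```

Keeping the whitelist small is what bounds both the storage cost and the performance overhead of auditing.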
By following these best practices, you can ensure that your field-level auditing is effective, efficient,
and focused. This helps you maintain accurate records, ensure compliance, and improve overall case
management.
For example, consider a legal case management system where the case records contain sensitive
client information. In this case, it is important to audit changes to fields such as "Client Name," "Case
Description," and "Case Outcome." By using selective auditing and excluding irrelevant fields, you
can limit the amount of information stored in the audit logs and ensure that only relevant changes
are tracked. This helps you maintain the confidentiality of client information and comply with data
protection regulations.
2.6. Locking Strategies and Default Tags
2.6.1. Setting Locking Strategies for Multiple User Access
In Pega, locking is the mechanism that allows only one user to modify a case at a time to prevent
conflicts and ensure data consistency. By default, Pega uses exclusive locking, which means only one
user can hold the lock at a time. However, there may be cases where multiple users need to access
the same case simultaneously, such as in a collaborative environment. In such cases, setting the
locking strategy to allow multiple users is recommended.
To implement this approach, you need to define the locking strategy for the case type. In the case
designer, go to the process tab and select the locking category. Here, you can choose the locking
mode as either “Allow one user” or “Allow multiple users”. Selecting the “Allow multiple users”
locking mode lets multiple users open and work on the same case simultaneously instead of locking
it to a single user.
If multiple users try to save changes to the same case at the same time, Pega handles the conflict
using optimistic locking: the first save is committed, and a user who then submits changes against a
stale copy of the case is notified that a conflict occurred and can refresh the case and reapply their
updates.
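One common way to implement this kind of conflict detection is a version counter, sketched here. This models the general optimistic-locking idea, not Pega's internal implementation:

```python
class ConflictError(Exception):
    """Raised when a save is attempted against a stale case version."""

class CaseRecord:
    def __init__(self, data):
        self.data = dict(data)
        self.version = 0

    def save(self, updates, base_version):
        """Apply updates only if the caller saw the current version."""
        if base_version != self.version:
            raise ConflictError("Case changed since you opened it; refresh.")
        self.data.update(updates)
        self.version += 1        # every successful save bumps the version

case = CaseRecord({"status": "Open"})
v = case.version                 # both users open the case at version 0
case.save({"status": "Pending"}, v)          # first save wins
try:
    case.save({"status": "Resolved"}, v)     # stale version -> conflict
except ConflictError as e:
    print("conflict:", e)
```

The key design choice is that no lock is held while users work; correctness is checked only at save time, which is what makes simultaneous access possible.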
The issue with not allowing multiple users to access a case simultaneously is that it can cause delays
and hinder collaboration. For example, if two users need to review and update the same case, only
one user can do so at a time, which can slow down the process and cause delays.
The benefits of allowing multiple users to access a case simultaneously include faster processing
times, increased collaboration, and improved efficiency. For example, if two users need to review
and update the same case, they can do so simultaneously, which speeds up the process and enables
them to work collaboratively to resolve the case.
A proper example of this approach could be in a customer service scenario where multiple agents
need to access a customer's case simultaneously to resolve their issue. By allowing multiple users to
access the case at the same time, the agents can collaborate and work together to resolve the issue
faster and more efficiently.
Tags are a useful way to categorize cases and facilitate search and filtering operations. By providing
default tags, users can quickly label cases with relevant keywords and easily find related cases. This
approach can also help ensure consistency in tagging practices across the organization.
To implement the use of default tags in Pega, you can create an initial list of tags for your application
that provides end users with tagging suggestions. This list helps users find and apply the most
commonly used tags and promotes consistent tagging throughout the application. You can create
tags at design time and make them available for use in all the cases in the application.
At runtime, users can tag cases by using existing tags or by adding new tags. They can tag a case
regardless of the current case status. To tag a case, users can simply select one or more tags from
the available list or add a new tag. The selected tags will then be associated with the case and can be
used to filter or group cases based on the tag.
The benefits of providing default tags include:
1. Improved searchability: Tagging cases with relevant tags makes it easier to search for and
find cases.
2. Consistency: By providing users with a list of default tags, you can ensure that tags are
consistently applied throughout the application.
3. Collaboration: Tags make it easier to collaborate and share knowledge among users who are
working on similar cases.
For example, suppose you are building a customer service application where users handle various
types of inquiries, such as billing, account information, and technical support. You can create a list of
default tags such as "billing," "account info," and "tech support" to help users tag cases and make it
easier to find and group cases based on their category. This will make it easier for users to
collaborate and share knowledge across the team.
To use the Send Notification shape and notification framework to send notifications, you can create
a new notification rule in the relevant class to capture notification-related data, such as the message
to convey, intended recipients, and channels to use. Once the notification rule is set up, you can
trigger it using various options such as the Send Notification smart shape, pxNotify API activity, or
the Send Notification step in the case designer.
It is recommended to use the notification framework, especially when targeting large groups, as it
provides an easy way to manage and send notifications across various channels. By relying on the
framework, you can also ensure that end-users are informed about important events in the
application and can opt-in or out of receiving notifications.
For example, in a sales application, you can use the notification framework to send a notification to
the sales team when a new lead is created. The notification rule can capture the message to convey,
such as the details of the new lead, intended recipients (i.e., sales team), and channels to use, such
as email or web gadgets. By using the Send Notification shape or other triggering options, the
notification can be sent out automatically, ensuring that the sales team is informed about the new
lead in a timely manner.
2.7.2. Extending Standard Pulse Notifications
To configure Pulse for a case, you can provide case workers with a collaboration tool for open
discussion of their work. This can be done by enabling Pulse directly in the case type settings. When
enabled, case workers can post messages to their colleagues and receive replies.
Additionally, you can use the Automation shape "Post to Pulse" to create a message and send it to
the Pulse social stream, allowing for automated posting to the Pulse feed based on specific criteria.
By configuring Pulse for a case, you can provide case workers, such as customer service
representatives (CSRs), with a collaboration tool for open discussion of their work. This allows for
streamlined communication and collaboration between case workers, leading to increased efficiency
and better outcomes for the organization.
When working with shipped data page rules in Pega, it's important to follow best practices to ensure
backward compatibility and avoid breaking existing data page references. One such practice is to
avoid adding or removing parameters from any shipped data page rule.
Shipped data page rules are those that are included with the Pega platform and are available for use
out-of-the-box. They provide commonly used data that can be referenced across multiple
applications. However, if you modify a shipped data page rule by adding or removing a parameter, it
can cause issues with any pre-existing data page references that rely on that rule.
To avoid this issue, it's recommended to create a new data page rule that extends the functionality
of the shipped data page rule instead of modifying it directly. This new rule can then be customized
as needed without affecting any existing references to the shipped rule.
For example, let's say you want to modify the shipped data page rule "D_Product" to include a new
parameter "price." Instead of modifying the rule directly, create a new data page rule
"D_ProductPrice" that extends the "D_Product" rule and includes the "price" parameter. This way,
any pre-existing references to "D_Product" remain intact, and you can customize the new rule
without affecting any other rules or references.
By following this best practice, you can ensure that your Pega applications are backward compatible
and avoid any issues with existing data page references.
The suggested approach is to use key-value pairs whenever referencing data pages, even if there is
only a single parameter. This is because using key-value pairs makes the code more readable,
maintainable, and less prone to errors.
To implement this approach, you can simply add a key-value pair in the parameter section when
referencing the data page. For example, instead of referencing a data page with a single parameter
like this:
D_PXCUSTOMERINFO(.CustomerID)
use an explicit key-value pair:
D_PXCUSTOMERINFO(CustomerID:.CustomerID)
The issue with not using key-value pairs is that it can lead to errors and confusion in the code. For
example, if you have a data page that takes multiple parameters, not using key-value pairs can make
it difficult to determine which parameter corresponds to which value.
The benefits of using key-value pairs are that they make the code more readable and easier to
understand. By using key-value pairs, it is clear what each parameter represents, which can help in
debugging and troubleshooting. Additionally, using key-value pairs can make the code more
maintainable, as it is easier to modify the code if needed.
In the example above, the key-value form makes it clear that the data page retrieves information for
a specific customer ID, which is easier to understand and modify if needed.
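The benefit is analogous to keyword-only parameters in general-purpose languages; a sketch (the data page name and keys are illustrative):

```python
# Hypothetical backing data for a customer-info lookup keyed by two values.
CUSTOMERS = {
    ("C-1", "US"): {"name": "Ann"},
    ("C-2", "UK"): {"name": "Raj"},
}

def d_customer_info(*, CustomerID, Region):
    """Keyword-only parameters: callers must name each key, so a call
    like d_customer_info(CustomerID=..., Region=...) cannot be mis-ordered."""
    return CUSTOMERS[(CustomerID, Region)]

print(d_customer_info(CustomerID="C-2", Region="UK"))   # {'name': 'Raj'}
# d_customer_info("C-2", "UK") would raise TypeError: positional
# arguments are rejected, which is exactly the ambiguity being avoided.
```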
Rather than specifying an object type of 'Code-Pega-List', the suggested approach is to define a 'list'
property and then specify the intended list type as a property reference. This can be achieved in the
following steps:
1. Create a 'list' property with the intended list type as the property reference. For example, if
you want to specify a list of customers, create a 'list' property called 'CustomersList' with the
property reference as 'MyApp-Data-Customer'.
2. Use the 'CustomersList' property in your application wherever you need to reference the list
of customers.
By using this approach, you can ensure that your application is using the recommended 'list'
structure and is compatible with future upgrades of Pega.
The issue with not using the recommended approach is that the 'Code-Pega-List' object type is a
deprecated configuration and may not be supported in future versions of Pega. This could result in
compatibility issues when upgrading your application to a newer version of Pega.
The benefits of using the recommended approach include better performance and maintainability of
your application. Using a 'list' structure allows Pega to optimize the performance of list operations,
such as sorting and filtering, and also makes it easier to maintain and modify your application.
Overall, it is recommended to use the 'list' structure with the intended list type instead of specifying
an object type of 'Code-Pega-List' to ensure better performance, maintainability, and compatibility
with future versions of Pega.
When you receive the error message "A data page of this name already exists with a different type.
This may result in unexpected behavior at run-time and is not recommended" it means that there is
already a data page with the same name but a different object type or class defined in your
application.
For example, if you have a data page named "D_CustomerData" with an object type of "MyOrg-Data-
Customer", and you attempt to create another data page with the same name but with a different
object type, such as "MyOrg-MyApp-Data-Customer", you will receive this error message. This can
cause unexpected behavior at runtime because the system may be expecting data of one object
type, but is instead receiving data of a different object type.
Similarly, if you receive the error message "A data page of this name already exists with a different
structure. This may result in unexpected behavior at run-time and is not recommended", it means
that there is already a data page with the same name but a different structure defined in your
application.
For example, if you have a data page named "D_CustomerData" with a list structure, and you
attempt to create another data page with the same name but with a different structure, such as a
single page structure, you will receive this error message. This can cause unexpected behavior at
runtime because the system may be expecting data in a specific structure, but is instead receiving
data in a different structure.
To avoid these issues, it is recommended to give unique names to your data pages that reflect their
object type and structure, such as "D_CustomerDataList" and "D_CustomerDataPage", instead of
simply "D_CustomerData". This will help you avoid naming conflicts and ensure that your data pages
are accurately and consistently represented in your application.
The benefits of using unique names for data pages with specific object types and structures are
improved maintainability, reduced errors, and increased application stability. By following this
naming convention, you can also more easily identify and locate specific data pages in your
application when you need to make updates or changes.
In summary, when you receive the error message "A data page of this name already exists with a
different type/structure", it is important to give unique names to your data pages that accurately
reflect their object type and structure to avoid unexpected behavior at runtime.
When working with data pages in Pega, it is important to consider the appropriate scoping to ensure
efficient use of memory and optimal performance. One important best practice is to fetch records in
memory just-in-time, rather than persisting them throughout the user session. This can be achieved
through appropriate data page scoping.
To ensure efficient use of memory, it is generally recommended to use the narrowest possible scope
for each data page. For example, if a data page is needed only within a single case or interaction, it
should be scoped to the thread. If it is needed across multiple interactions in a user's session, it
should be scoped to the requestor. Only data that is shared by all users on a node, such as static
reference data, should be scoped to the node.
In addition to scoping, there are two other options that can be used to further optimize data page
behavior:
20
linkedin.com/in/christin-varughese-b968152b/
1. Reload once per interaction – This option ensures that the data page is only loaded once per
user interaction, rather than every time it is referenced. This can be useful for data pages
that are expensive to load or for situations where the data is not expected to change
frequently.
2. Limit to a single data page – This option ensures that only one instance of the data page
exists for a given key value, regardless of the scope. This can be useful for data pages that
have a limited set of possible values or for situations where data consistency is critical.
If data pages are not scoped appropriately or if they are not reloaded and limited correctly, it can
lead to inefficient use of memory and decreased performance. For example, if a data page with a
wide scope is used for a single interaction, it may persist in memory unnecessarily, taking up
valuable resources.
Consider the following example: In a banking application, a data page is used to fetch the customer's
account balance. Since this data is needed for every interaction, the data page should be scoped to
the requestor. However, since the balance may change frequently, the data page should not be
limited to a single instance. Instead, it should be reloaded once per interaction to ensure the most
up-to-date information is displayed.
On the other hand, if the data is relatively static and does not change frequently, it may be
appropriate to use the Reload once per interaction option to avoid unnecessary reloading of the
data.
In summary, appropriate data page scoping is crucial for efficient use of memory and optimal
performance in Pega applications. By scoping data pages just-in-time and using options like Reload
once per interaction and Limit to a single data page, developers can ensure that their applications
are both fast and reliable.
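Pega configures scope declaratively, but the trade-off is easy to see in conventional code. The Python sketch below is purely an analogy, not Pega syntax: a thread-scoped cache is released with the unit of work, while a node-style cache is loaded once and shared. The `fetch` callables and customer IDs are hypothetical placeholders.

```python
import threading

node_cache = {}                    # "node"-style scope: shared by every user on the node
_thread_scope = threading.local()  # "thread"-style scope: private to one unit of work

def get_account_balance(customer_id, fetch):
    """Narrow scope: cache only for the current thread, loaded just-in-time,
    so memory is released when the unit of work ends."""
    cache = getattr(_thread_scope, "cache", None)
    if cache is None:
        cache = _thread_scope.cache = {}
    if customer_id not in cache:
        cache[customer_id] = fetch(customer_id)  # just-in-time load on first use
    return cache[customer_id]

def get_country_list(fetch):
    """Wide scope: loaded once and shared, appropriate only for static reference data."""
    if "countries" not in node_cache:
        node_cache["countries"] = fetch()
    return node_cache["countries"]
```

The design point mirrors the text: frequently changing data (an account balance) belongs in the narrowest cache that is refreshed per interaction, while static data (a country list) can safely live in the widest one.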
3.1.6. Selecting User-Level Access Group for Node-Level Data Pages in Pega
In Pega, user-level access groups play an important role in defining the privileges and permissions of
users accessing the system. When it comes to node-level data pages, it is essential to select an
appropriate user-level access group to ensure that only authorized users have access to the data.
The suggested approach is to create an access group specifically for batch processing (data page
loading and agent activity processing) and assign it to all node-level data pages. This access group
should have the necessary privileges to perform batch processing tasks but should not provide any
additional privileges beyond that. For all other user interactions, a regular user-level access group
should be used.
This approach can be implemented by creating a new access group in Pega that is dedicated to batch
processing tasks. Once the access group is created, it can be assigned to all node-level data pages in
the system.
If this approach is not followed, there could be security risks where unauthorized users may have
access to the data pages. In addition, using an access group with unnecessary privileges can also
impact performance and increase the risk of data corruption.
For example, consider a scenario where a node-level data page contains sensitive customer data. If
the data page is assigned to a user-level access group that provides unnecessary privileges, such as
the ability to modify data, it can pose a significant security risk. However, by using a dedicated
access group for batch processing tasks, the data can be loaded securely without exposing it to
unauthorized users.
The benefits of using an appropriate user-level access group for node-level data pages include
enhanced security, improved performance, and reduced risk of data corruption. By following this
best practice, organizations can ensure that only authorized users have access to sensitive data, and
the system remains secure and performs optimally.
In summary, selecting an appropriate user-level access group for node-level data pages is a critical
aspect of securing Pega applications. By creating a dedicated access group for batch processing tasks
and assigning it to all node-level data pages, organizations can ensure that only authorized users
have access to sensitive data, and the system performs optimally.
Pega applications often rely on looping constructs to perform operations on collections of data.
While loops can be an efficient way to process data, they can also have a severe performance impact
when used improperly. One common mistake is to use duplicate loops, where the same list is looped
over multiple times. This can result in unnecessary processing and a decrease in application
performance.
To avoid using duplicate loops, it is important to rework the logic of the application. This can involve
consolidating multiple loops into a single loop. By reducing or eliminating duplicate loops, the
application will run more efficiently and improve its overall performance. This will result in faster
response times and a better user experience for end-users.
Example: Consider a scenario where a Pega application needs to process a list of customers and
calculate their total revenue. One way to accomplish this task is to use a loop to iterate over the list
of customers and calculate the revenue for each customer. Then, use another loop to iterate over
the list again and add up the revenue for all customers.
However, this approach involves duplicate looping and can result in poor performance. Instead, the
logic can be reworked to use a single loop that calculates the revenue for each customer and adds it
to a running total. This eliminates the need for a second loop and improves the overall efficiency of
the application.
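The rework described above can be sketched in ordinary code. This is an illustrative Python sketch, not Pega rule syntax; the customer list and revenue fields are hypothetical.

```python
customers = [
    {"name": "Acme", "unit_price": 10.0, "quantity": 3},
    {"name": "Globex", "unit_price": 5.0, "quantity": 4},
]

# Anti-pattern: two passes over the same list.
revenues = []
for c in customers:            # first loop: compute per-customer revenue
    revenues.append(c["unit_price"] * c["quantity"])
total = 0.0
for r in revenues:             # second loop: sum the results
    total += r

# Reworked: one pass computes each revenue and accumulates the running total.
total_single_pass = 0.0
for c in customers:
    revenue = c["unit_price"] * c["quantity"]
    c["revenue"] = revenue     # keep the per-customer figure if it is still needed
    total_single_pass += revenue

assert total == total_single_pass == 50.0
```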
In summary, it is important to avoid duplicate loops when processing data in Pega applications. By
reworking the application logic and utilizing the platform features, it is possible to improve
performance and provide a better user experience.
When working with inheritance in Pega, it's important to consider the use of the "call superclass
data transform" option. This option allows you to call a data transform that is defined in a superclass
of the current class, and it can be useful for reusing logic and minimizing redundancy.
When enabling "Call superclass data transform," Pega will search for a data transform in the
inheritance tree with the same name as the data transform being called. If one is found, Pega will
execute it before executing the current data transform. If no data transform with the same name is
found, Pega will simply move on to execute the current data transform.
However, enabling this option when there is no data transform with the same name in the
inheritance tree may not provide any additional benefits and can potentially slow down rule
resolution performance.
Therefore, it is recommended to only enable "Call superclass data transform" when there is a data
transform with the same name in the inheritance tree that needs to be executed.
A proper example would be if you have a data transform named "DT1" in your class and a data
transform named "DT1" in your parent class, enabling "Call superclass data transform" would ensure
that both data transforms are executed in the correct order.
The suggested approach for implementing this would be to first check the inheritance tree for a data
transform with the same name before enabling "Call superclass data transform." If no data
transform with the same name exists in the inheritance tree, then it is not necessary to enable this
option.
In Pega, Data Transforms are used to modify and transform data on the clipboard. While designing a
Data Transform, it is important to keep in mind the readability, maintenance, and potential reuse of
the rule. Adding comments to the Data Transform can help achieve these goals.
Consider the following best practices for adding comments to Data Transforms in Pega:
1. Add a clear and concise description of the Data Transform in the header comment.
2. Add comments to explain the purpose of each step and any complex logic used in the Data
Transform.
3. Use comments to document any dependencies or assumptions made in the Data Transform.
4. Use standard naming conventions for comments and keep them consistent across all Data
Transforms.
If comments are not added to Data Transforms, it can lead to the following issues:
1. Difficulty in understanding the purpose and functionality of the Data Transform, especially
for other developers or team members who may need to work on the rule.
2. Increased time and effort required for maintenance and troubleshooting if the rule needs to
be modified or debugged in the future.
3. Reduced potential for reuse of the rule in other parts of the application or in future projects.
Benefits: Adding comments to Data Transforms can provide the following benefits:
1. Improved readability and understandability of the rule for developers, testers, and other
stakeholders.
2. Increased efficiency and accuracy in maintaining and modifying the rule in the future.
3. Greater potential for reuse of the rule in other parts of the application or in future projects.
Example: Consider a Data Transform used to calculate the total price of a product based on its unit
price and quantity. A well-commented version of this rule could look like this:
Action: Comment /* This Data Transform calculates the total price of a product based on its unit
price and quantity */
In this example, the comments provide a clear description of the Data Transform and explain the
purpose of each step, making it easier to understand and maintain the rule.
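As an analogy in conventional code, the same comment discipline might look like the sketch below. This is not Pega rule syntax; the function and property names are illustrative.

```python
def calculate_total_price(unit_price: float, quantity: int) -> float:
    """Calculate the total price of a product from its unit price and quantity.

    Mirrors the Data Transform's header comment: a one-line summary of what
    the rule does and which inputs it relies on.
    """
    # Step 1: document the assumption that both inputs are non-negative.
    if unit_price < 0 or quantity < 0:
        raise ValueError("unit price and quantity must be non-negative")
    # Step 2: the core logic of the transform.
    return unit_price * quantity
```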
In conclusion, adding comments to Data Transforms is a best practice that can improve the
readability, maintenance, and potential reuse of the rule in Pega. By following the suggested
approach and implementing the steps outlined above, developers can create well-documented Data
Transforms that are easy to understand and maintain.
In Pega, data transforms (DTs) are used to transform data from one form to another. When creating
DTs, it is important to keep them modular and easy to maintain for better reusability and
performance. If a DT has more than 20 primary steps, it is recommended to modularize it by
factoring steps out into separate DTs that are invoked with the "Apply Data Transform" method.
The suggested approach is to identify the common steps that are used in multiple DTs and create
separate DTs for those steps. These DTs can then be invoked from the original DT using the "Apply
Data Transform" method. This reduces the number of steps in the original DT and makes it easier to
read and maintain.
To implement this approach, create separate DTs for the common steps and give them meaningful
names for easy identification. Then, in the original DT, use the "Apply Data Transform" method to
invoke those DTs. It is also recommended to add comments to the DTs for better readability and
maintenance.
If DTs are not modularized in this way, they can become very long and complex, making it difficult to
understand and maintain them. This can result in errors and performance issues. It may also lead to
duplication of effort, as developers may create new DTs for the same common steps.
The benefits of modularizing DTs include better reusability, readability, and maintainability. When
DTs are modular, it is easier to reuse common steps across multiple DTs, reducing the amount of
duplicate code. It also makes it easier to understand and maintain the DTs, as developers can focus
on specific modules rather than a long list of steps.
For example, consider a scenario where there are multiple DTs that need to perform validation on a
certain field. Instead of adding the same validation steps to each DT, a separate DT can be created
specifically for validation. The original DTs can then invoke the validation DT using the "Apply Data
Transform" method, reducing the number of steps and making the DTs easier to maintain.
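The same factoring applies in conventional code: shared steps move into one helper that every caller invokes, analogous to applying a common DT from several DTs. A hypothetical Python sketch (names are illustrative):

```python
def validate_account_number(record: dict) -> None:
    """Shared validation, analogous to a common DT applied from several DTs."""
    acct = record.get("account_number", "")
    if not (acct.isdigit() and len(acct) == 10):
        raise ValueError(f"invalid account number: {acct!r}")

def prepare_payment(record: dict) -> dict:
    validate_account_number(record)  # reuse instead of copy-pasting the steps
    return {**record, "type": "payment"}

def prepare_refund(record: dict) -> dict:
    validate_account_number(record)  # same shared steps, one place to maintain
    return {**record, "type": "refund"}
```

If the account-number rule changes, only the helper is edited, just as only the shared DT would be edited in Pega.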
In summary, modularizing DTs is a best practice for better reusability and maintenance. By
identifying common steps and creating separate DTs, developers can reduce the complexity of DTs
and make them easier to read and maintain. This results in more efficient development and fewer
errors in the long run.
When evaluating performance optimizations for Pega applications, it is important to review the use
of activities versus data transforms. If a data transform uses the pxExecuteAnActivity function to call
an activity, it is worth evaluating if the called activity can be transformed to a data transform.
The pxExecuteAnActivity function makes it possible to invoke an activity from contexts such as a
data transform. While this works, doing so can have negative impacts on performance and
maintainability.
First, using pxExecuteAnActivity in a data transform can make issues difficult to debug: the extra
level of indirection obscures the execution path when you trace the rule. Additionally, calling
activities from data transforms adds unnecessary overhead on every invocation of the transform.
Second, activities are typically designed to work with a requestor thread, while data transforms are
not. Activities may depend on the context of the requestor thread, which may not be present when
the activity is called from a data transform. This can lead to unexpected behavior and errors.
In summary, while it is possible to call activities from data transforms using pxExecuteAnActivity,
doing so can have negative impacts on performance and maintainability. It is generally
recommended to transform the called activity into a data transform, and then call that data
transform directly from the original data transform.
In Pega, it is common to come across data transforms with disabled code. This occurs when
developers temporarily disable code during the development or debugging process but do not
remove it when it is no longer needed. Disabled code can cause confusion, increase the
complexity of data transforms, and risks being re-enabled or executed unintentionally
later.
Therefore, it is recommended to review and clean up disabled code in data transforms. The
following approach can be used to accomplish this:
1. Identify disabled code: Go through the data transform and identify all disabled steps. In the
rule form, disabled steps are typically grayed out, sometimes with a comment explaining why they were switched off.
2. Determine if the disabled code is still needed: Review the disabled code to determine if it is
still required for the data transform to function correctly. If it is not needed, delete it.
3. Consider moving the disabled code to a separate data transform: If the disabled code is still
needed but does not belong in the current data transform, consider moving it to a separate
data transform.
4. Use version control to keep track of changes: Always use version control to keep track of any
changes made to the data transform.
The benefits of reviewing and cleaning up disabled code in data transforms include:
1. Reduced risk: once leftover code is removed, it cannot be re-enabled or executed
unintentionally.
2. Increased readability: cleaning up disabled code can make data transforms more readable
and easier to understand.
3. Simplified maintenance: removing unused code can simplify maintenance by reducing the
complexity of the data transform.
For example, suppose a data transform is used to calculate the total amount of an order. During
development, the developer disabled some code that calculated the tax amount because the tax
calculation logic was still being worked on. Once the tax calculation logic was complete, the
developer forgot to remove the disabled code. Reviewing and cleaning up the disabled code would
help to simplify the data transform and prevent confusion for future developers who may be
working on the same code.
In summary, reviewing and cleaning up disabled code in data transforms is a best practice that can
help improve system performance, increase readability, and simplify maintenance. It is important to
regularly review and clean up disabled code to ensure that data transforms remain efficient and easy
to understand.
3.2.7. Avoid Changing Class of Containing Page by Setting pxObjClass Property in Pega
Pega provides the ability to change the class of an object at runtime by setting the pxObjClass
property. However, it is not a recommended best practice to change the class of the containing page
in this way.
When setting the pxObjClass property in a Data Transform, the class of the containing page will be
changed. This can cause issues if the page is later used by other rules in the application. To avoid
this, it is recommended to use alternative approaches for achieving the desired functionality.
One approach is to use dynamic class referencing in the rules that require a specific class. This
involves using a parameter or property to dynamically reference the class at runtime, instead of
hardcoding the class name in the rule. This allows the rule to be reused with different classes,
without the need for changing the rule definition. Another approach is to create a new page of the
desired class and copy the data from the old page to the new page.
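The risk is similar to mutating an object's class at runtime in a general-purpose language. The Python sketch below is purely illustrative: it contrasts reclassifying an existing object in place (the pxObjClass analogue) with the recommended alternative of building a new object of the desired class and copying the data across.

```python
class Work:
    def __init__(self, data):
        self.data = dict(data)

class CustomWork(Work):
    def summary(self):
        return f"custom: {self.data}"

page = Work({"id": 1})

# Risky: reclassify the existing object in place (the pxObjClass analogue).
# Anything else holding a reference to `page` now silently sees a different class.
page.__class__ = CustomWork

# Safer: create a new page of the desired class and copy the data across.
# Existing references to the old page are left untouched.
new_page = CustomWork(page.data)
assert isinstance(new_page, CustomWork) and new_page.data == {"id": 1}
```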
If the class of the containing page is changed by setting the pxObjClass property, it can cause issues
such as:
1. Unexpected behavior with inheritance and rule resolution, because rules are now resolved
against the new class.
2. Side effects on other rules referencing the containing page, such as activities, data
transforms, and flow actions.
For example, consider a case where a data transform is used to create a new work object. In the
data transform, the pxObjClass property is set to "MyCo-Work-Custom". This will change the class of
the work page to "MyCo-Work-Custom". If another rule references the work page, such as an
activity, it will now reference the work page with the "MyCo-Work-Custom" class, which may not be
the desired behavior.
The benefits of following the suggested approach and not changing the class of the containing page
by setting the pxObjClass property include:
1. Avoiding unexpected behavior and issues with inheritance and rule resolution.
2. Ensuring that other rules referencing the containing page behave as expected.
By avoiding the use of pxObjClass property in Data Transforms and using alternative approaches
such as dynamic class referencing or activities, you can ensure the stability and maintainability of
your application, and avoid potential issues caused by changing the class of the containing page.
When creating a data transform, it's important to use parameters to clearly define the inputs and
outputs so that system architects understand how the data transform is used. The use of parameters
in data transforms helps in achieving encapsulation and polymorphism, two important principles of
object-oriented programming.
Encapsulation refers to the ability to hide the complexity of an object and present only the relevant
information to the users. When parameters are defined, they serve as a contract between the data
transform and the calling rules, allowing the data transform to encapsulate its complexity and
making it easier to understand and reuse.
Polymorphism refers to the ability to create different implementations of an object with the same
interface. In the context of data transforms, this means that a single data transform can be reused
across multiple use cases by defining different input and output parameters. This helps in reducing
redundancy and promoting code reuse.
There are a few approaches to identify the parameters for a given requirement:
1. Review the requirements and identify the data elements that are needed for the processing
logic. These data elements become the input parameters of the data transform.
2. Identify the output data elements that are needed by the calling rules or downstream
systems. These data elements become the output parameters of the data transform.
Once the parameters have been identified, they can be defined in the data transform rule form
under the "Parameters" tab.
If a data transform needs a specific value to process, rather than passing a complete page reference,
it's recommended to pass a scalar parameter. Passing complete page references can lead to issues
with performance, scalability, and maintainability.
The benefits of defining parameters include:
1. Improved readability and maintainability: Parameters help to document the inputs and
outputs of a data transform, making it easier for other developers to understand and
maintain the rule.
2. Greater reusability: A clearly parameterized data transform can be invoked from different
contexts without modification, supporting the polymorphism described above.
3. Improved performance: Passing scalar parameters instead of complete page references can
improve the performance of the data transform.
If parameters are not defined in a data transform, it can lead to confusion and errors when the rule
is used in different contexts. It also makes the data transform harder to understand and maintain.
Suppose we have a data transform that calculates the tax for an order based on the order amount
and the tax rate. The input parameters for this data transform would be the order amount and the
tax rate, while the output parameter would be the calculated tax amount.
We can define the input parameters "OrderAmount" and "TaxRate" as scalar properties in the data
transform rule form. The output parameter "TaxAmount" can also be defined as a scalar property.
In the processing logic of the data transform, we can use these parameters to calculate the tax
amount and assign it to the "TaxAmount" property.
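Sketched as a plain function, the contract described above looks like the snippet below. This is an analogy, not Pega rule syntax; the parameter names simply mirror the example.

```python
def calculate_tax(order_amount: float, tax_rate: float) -> float:
    """Scalar in, scalar out: the caller passes only the values the rule needs,
    not an entire page of order data."""
    return round(order_amount * tax_rate, 2)

# The caller supplies OrderAmount and TaxRate; TaxAmount is the result.
tax_amount = calculate_tax(200.0, 0.08)  # 8% tax on a 200.00 order -> 16.0
```

Because the function exposes nothing but its inputs and output, it can be reused anywhere a tax is needed, which is exactly the encapsulation and reuse argument made for parameterized data transforms.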
In summary, using parameters in data transforms is an important best practice that helps in
achieving encapsulation, polymorphism, and code reuse. By defining clear inputs and outputs, data
transforms become more readable, maintainable, and scalable.
In Pega, it is common to use the (<APPEND>) and (<CURRENT>) operators when working with
properties in a data transform. While these operators can be useful, they can also make the data
transform more difficult to read and maintain. As a best practice, it is recommended to use the
'Append and map to' option instead.
The 'Append and map to' option allows you to map properties from the source to the target, without
using the (<APPEND>) or (<CURRENT>) operators. This makes the data transform easier to read and
understand, especially for junior resources who may not be familiar with the (<APPEND>) and
(<CURRENT>) operators.
Using the 'Append and map to' option provides several benefits. First, it improves readability and
makes the data transform easier to understand. Second, it reduces the chance of errors or mistakes
that can occur when using the (<APPEND>) and (<CURRENT>) operators. Third, it can improve
performance by reducing the amount of processing required to execute the data transform.
To convert a Date property to a DateTime property, the recommended approach is to use the
@pxGetSpecifiedTimeOnDate function. This function takes five parameters: the date value, hours,
minutes, seconds, and time zone. The first parameter is the Date property to convert; the last
parameter is the time zone (here, the pyUseTimeZone property of the pxRequestor page).
Example: Suppose we have a Date property called "myDate" that needs to be converted to a
DateTime property. Instead of appending "T000000.000 GMT" to the Date property, we can use the
following expression to convert it to a DateTime property:
@pxGetSpecifiedTimeOnDate(.myDate,"","","",pxRequestor.pyUseTimeZone)
Appending "T000000.000 GMT" to a Date property to make it a DateTime property may not always
produce the correct time zone. This approach assumes that the time zone of the system is GMT,
which may not be the case. In addition, it may cause issues when working with time zones that
observe daylight saving time.
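The pitfall generalizes beyond Pega. The Python sketch below shows the same idea with the standard library: attach the user's time zone explicitly instead of assuming GMT. The zone name and date are examples; in Pega the zone would come from the requestor context.

```python
from datetime import date, datetime, time, timezone
from zoneinfo import ZoneInfo

my_date = date(2023, 6, 1)
user_zone = ZoneInfo("America/New_York")  # example; analogous to pxRequestor.pyUseTimeZone

# Anti-pattern: bolt midnight GMT onto the date regardless of the user's zone.
midnight_gmt = datetime.combine(my_date, time(0, 0), tzinfo=timezone.utc)

# Better: build the DateTime at midnight in the user's own time zone.
midnight_local = datetime.combine(my_date, time(0, 0), tzinfo=user_zone)

# The two instants differ by the zone's UTC offset (4 hours during US daylight time),
# so the hardcoded-GMT version silently represents the wrong moment.
offset_hours = (midnight_local - midnight_gmt).total_seconds() / 3600
```

Note that `ZoneInfo` also handles daylight saving transitions, which is exactly what the hardcoded "T000000.000 GMT" suffix cannot do.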
When creating an activity rule, it is essential to provide a meaningful description of each step to
ensure that the steps are easy to understand and maintain. The step description should indicate the
purpose of the step and what it is trying to accomplish. Pega strongly recommends that the step
description should not be empty and should be contextual to signify the operation for that step.
The suggested approach is to write step descriptions that are concise, clear, and descriptive. The
description should provide an overview of what the step is trying to achieve and why it is necessary.
It should also be contextual and reflect the intention of the step. The use of action words such as
"update," "create," and "delete" can help to provide a better understanding of what the step is
doing.
If the step descriptions are not added, it may cause confusion for other developers who may not
understand the purpose of the step, leading to mistakes and errors during development. This can
also make it challenging to debug and maintain the activity rule. Without a proper description, it
may also be challenging to make changes to the activity rule in the future.
Adding a descriptive step description has several benefits. It helps to make the activity rule more
readable and understandable for other developers who may need to work on it. It also improves the
maintainability of the activity rule, making it easier to make changes or updates in the future. This, in
turn, can lead to faster development times and reduced development costs.
For example, if an activity rule is created to update the status of a case, the step description should
be "Update the status of the case to 'Resolved-Completed.'" This description provides clarity on
what the step is trying to achieve and what the expected outcome should be.
In conclusion, adding a descriptive step description is crucial when creating activity rules. It helps to
make the rule more understandable, maintainable, and reduces the risk of errors during
development. By following this best practice, developers can ensure that their activity rules are
efficient, effective, and easy to maintain.
3.3.2. Avoid using Info forced logging level in activity rules for improved performance and
troubleshooting
The Info logging level is typically used to capture general information about the system and its
operations. However, if you enable Info logging on an activity rule, it can generate a significant
amount of logging output, which can quickly clutter log files. This can make it difficult to find
relevant information and can impact system performance due to the overhead of generating and
storing the extra logs.
Furthermore, enabling Info logging on all activity rules can cause additional performance issues
because the logging subsystem needs to process and write all of this information to disk. This can
cause delays in rule execution and overall system performance.
Therefore, it is recommended to use the appropriate logging level for each rule type, and only
enable the necessary logging levels on specific rules where it is required for debugging or
performance optimization.
For example, if you are troubleshooting an issue with a specific activity rule, you might temporarily
enable Debug logging for that rule to capture more detailed information about its execution. Once
the issue is resolved, you can revert the logging level back to its original setting.
The benefits of using the appropriate logging level for each rule include improved performance,
more efficient use of storage space, and easier troubleshooting by reducing clutter in the log files.
A proper example of setting the appropriate logging level for an activity rule would be to set it to
Error or Warning logging level if the rule is not used for debugging purposes. If you need to debug a
specific activity rule, you can set its logging level to Debug, but it should be reverted back to its
original setting once the issue is resolved.
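The same principle applies in any logging framework: keep the default level at Warning or Error and raise verbosity only on the specific logger under investigation. A Python illustration (the logger names are hypothetical):

```python
import logging

logging.basicConfig(level=logging.WARNING)  # quiet default for the whole system

# Temporarily enable detailed logging for just the rule being troubleshot.
debug_logger = logging.getLogger("myapp.activities.UpdateStatus")
debug_logger.setLevel(logging.DEBUG)

debug_logger.debug("detailed trace while troubleshooting")  # emitted
logging.getLogger("myapp.activities.Other").debug("noise")  # suppressed by default level

# Once the issue is resolved, revert to the inherited default.
debug_logger.setLevel(logging.NOTSET)
```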
When initializing a page in Pega, it is recommended to use dynamic class referencing instead of an
empty class in the Page-New method. This approach allows for easy maintenance and extensibility in
the application.
The suggested approach is to use dynamic class referencing when initializing a page instead of using
an empty class. This can be done by using the Page-New method with a parameter for the class
name, or by using a property reference that contains the class name.
The issue with using an empty class in the Page-New method is that it initializes the page with the
class defined in the Pages and Classes rule. This can lead to confusion and maintenance issues if the
class is changed or updated in the future. Additionally, it can limit the extensibility of the application
by making it more difficult to use different classes for the same page structure.
Using dynamic class referencing provides several benefits for initializing pages in Pega. It allows for
greater flexibility in defining page structures and makes it easier to maintain and extend the
application. For example, consider an application that needs to create a new customer page. Instead
of using an empty class in the Page-New method, dynamic class referencing can be used to initialize
the page with the appropriate customer class based on the context of the request. This makes it
easier to maintain the application and extend it to support new customer types in the future.
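The idea resembles a factory that resolves the class from a runtime value instead of hardcoding it. The Python sketch below is a hypothetical analogy, not Pega syntax; the class names and registry are illustrative.

```python
class Customer:
    pass

class PremiumCustomer(Customer):
    pass

# Registry mapping runtime class names to classes (the dynamic-class-reference analogue).
CUSTOMER_CLASSES = {
    "Customer": Customer,
    "PremiumCustomer": PremiumCustomer,
}

def new_customer_page(class_name: str) -> Customer:
    """Initialize a 'page' using a class resolved at runtime, not a hardcoded one."""
    try:
        cls = CUSTOMER_CLASSES[class_name]
    except KeyError:
        raise ValueError(f"unknown customer class: {class_name}") from None
    return cls()

page = new_customer_page("PremiumCustomer")
assert isinstance(page, PremiumCustomer)
```

Supporting a new customer type then means adding one registry entry, with no change to the creation logic, which mirrors the extensibility argument above.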
In conclusion, it is recommended to use dynamic class referencing instead of an empty class when
initializing pages in Pega. This approach provides greater flexibility, maintainability, and extensibility
for the application.
In Pega, it is common practice to temporarily disable a step during development. However, leaving
commented steps in the final version of an activity can lead to confusion and maintenance issues in
the long run. Therefore, it is recommended to review and clean up commented steps if they are not
required in the final version of the activity.
The suggested approach is to review each commented step and determine if it is necessary for the
final version of the activity. If the step is not needed, it should be deleted to avoid clutter and
confusion in the activity rule. If the step is necessary, it should be uncommented and any necessary
updates or modifications should be made to the step.
The issue with leaving commented steps in an activity is that it can make the rule difficult to read
and understand. This can lead to confusion for other developers or for future maintenance tasks. In
addition, excessive commented-out steps clutter the rule, slowing down reviews and increasing the
chance of mistakes during later changes.
The benefits of cleaning up commented steps in an activity include improved readability and
maintainability of the rule. This makes it easier for developers to understand the purpose and
functionality of the activity, and also reduces the likelihood of errors or issues occurring during
maintenance tasks.
By following these best practices for commented steps in activities, developers can improve the
maintainability and readability of their Pega applications.
In Pega, validation logic is sometimes embedded directly in activities using the Obj-Validate and
Page-Set-Messages methods. The suggested approach is instead to create a dedicated Validate rule
that includes the required validations, and then reference it from the Flow Action. By doing so, the
activity rule becomes more readable and easier to maintain. Additionally, the validation logic can be
reused across multiple processes, further improving efficiency.
If this approach is not followed, the activity rule can become cluttered and difficult to read.
Moreover, any changes to the validation rules will require modifying multiple activity rules, leading
to maintenance overhead.
Consider the following example: An activity is called to validate a user's input and return error
messages if any validation fails. The activity includes several Obj-Validate and Page-Set-Messages
methods, which can make the rule look complex and hard to understand. By creating a new
validation rule and moving the validation rules to the Flow Action, the activity rule is simplified and
more readable.
In conclusion, it is recommended to use a Validate rule referenced from the Flow Action instead of using Obj-Validate and Page-Set-Messages in activities. This approach simplifies the activity rule, improves its maintainability, and allows the validation logic to be reused across multiple flows.
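As a concrete illustration, the refactoring might look roughly like the sketch below. All rule and field names here are hypothetical, and the layout is schematic rather than an exact rule form:

```text
Before — validation inline in an activity:
  Step 1: Obj-Validate        Validate rule: CheckCredentials
  Step 2: Page-Set-Messages   "Invalid username or password"

After — validation moved to a Validate rule:
  Validate rule "ValidateLogin":
    .Username   Required
    .Password   Required
  Flow Action "SubmitLogin" → Validation tab → Validate: ValidateLogin
```

With this arrangement, the activity no longer carries validation steps at all, and "ValidateLogin" can be referenced from any other Flow Action that needs the same checks.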
3.3.6. Enabling "Allow Direct Invocation from Client/Service" in Activity Rules
In the context of an activity rule, enabling "Allow direct invocation from client/service" should be
done only for activities that are called from a browser. This best practice is important to maintain
system security and ensure optimal performance.
The suggested approach is to carefully review the activity and determine whether it is necessary to
allow direct invocation from client/service. If it is, enable the option only for the activities that are
called from a browser. This can be done by checking the "Allow direct invocation from client/service"
checkbox in the activity's rule form, and selecting the appropriate access group(s) from the "Access
Group" field.
If the option is not enabled only for the activities that are called from a browser, it can pose a
security risk as it allows external systems to directly invoke activities on the Pega platform. This can
lead to unauthorized access, data breaches, and potential performance issues.
Example: Suppose there is an activity that is called from a browser and requires direct invocation
from client/service. The activity is responsible for validating user input and saving the data to a case.
Enabling "Allow direct invocation from client/service" for this activity ensures that it can be invoked
directly from the browser without compromising system security. However, if this option is enabled
for activities that are not called from a browser, such as backend system processes, it can lead to
security risks and potential performance issues.
Enabling "Allow direct invocation from client/service" only for activities called from a browser
provides the following benefits:
• Improved security: Restricting access to activities ensures that only authorized users can
invoke them, reducing the risk of unauthorized access and data breaches.
• Better performance: Limiting direct invocation to activities called from a browser reduces
the potential for performance issues caused by external systems overloading the system
with activity invocations.
• Easier maintenance: By using a targeted approach for enabling direct invocation, it is easier
to manage and maintain activities, reducing the potential for errors and improving overall
system performance.
In summary, enabling "Allow direct invocation from client/service" should be done carefully and only
for activities that require it. By following this best practice, you can ensure system security and
optimal performance for your Pega applications.
When using the Page-Set-Message method, it is important to consider localization impact. Hard-
coded messages in Page-Set-Message can cause issues with localization and translation, making it
difficult to maintain the application in multiple languages. To avoid this, it is recommended to use
field values of type message label.
The suggested approach is to create message label fields for each message that needs to be
displayed. This can be done by creating a new field in the class of the page that will be used to
display the message. The field should have a type of Message and a message key defined. The
message key should be a unique identifier for the message and should be used as a parameter in the
Page-Set-Message method.
For example, instead of using Page-Set-Message with a hard-coded message "Invalid username or
password," a message label field can be created with a message key of "loginErrorMessage" and a
value of "Invalid username or password." Then, the Page-Set-Message method can be used with the
message key as a parameter, like this: Page-Set-Message "loginErrorMessage". This way, if the
application needs to be translated into another language, the message label can be updated with the
translated text without having to modify the activity rule.
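A sketch of the configuration described above, with hypothetical class and key names (the exact rule-form fields may differ slightly between Pega versions):

```text
Field Value:
  Applies To : MyCo-App-Work-Login
  Field Name : pyMessageLabel
  Field Value: loginErrorMessage
  To (text)  : Invalid username or password

Activity step:
  Method  : Page-Set-Messages
  Message : loginErrorMessage
  -- at runtime the key resolves against the field value, so a
  -- language-specific ruleset can supply a translated "To" text
  -- without touching the activity
```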
If Page-Set-Message method is used with hard-coded messages, it can cause issues with localization
and translation, making it difficult to maintain the application in multiple languages. This can result
in inconsistent messaging and user experience for users in different locales. By using message label
fields, localization becomes easier and more manageable.
The key benefit of using message label fields with the Page-Set-Message method is the separation of message content from the code, which makes the application easier to update and maintain.
In summary, it is best practice to use message label fields for Page-Set-Message method instead of
hard-coded messages to avoid localization impact and make localization easier and more
manageable.
3.3.8. Avoid Using WriteNow Option in Obj-Save and Obj-Delete Activity Methods
The WriteNow option in the Obj-Save and Obj-Delete activity methods is used to commit changes
immediately to the database, bypassing the standard PRPC transaction model. However, this can
cause issues with transaction management and result in stale data in the database. It is
recommended to allow PRPC to handle transaction management and commit the record when the
transaction completes.
In Pega, by default, the system automatically commits the database transactions when a flow ends
or when an assignment is created or completed. This means that you do not need to explicitly call
the Commit method in activities that are flow-related, such as Utility, Route, Assign, Notify, or
Integrator activities.
However, if you have a custom activity that needs to commit a database transaction, it is important
to consider the impact of calling the Commit method. As mentioned earlier, using the WriteNow
option or calling the Commit method inappropriately can interfere with the Pega transaction model,
potentially leading to stale or inconsistent data in the database.
It is generally a good practice to rely on the automatic commit functionality provided by Pega, unless
there is a specific business requirement that necessitates a manual commit. In such cases, the
commit should be called at the appropriate point in the activity, after ensuring that all necessary
updates to the database have been made.
Suppose you have a requirement where you need to save data in real-time, without waiting for the
transaction to complete. For instance, you're building a chat application where messages need to be
saved in the database immediately after being sent. In such cases, you can use the WriteNow option
to force the system to write the data to the database immediately, without waiting for the
transaction to complete. This ensures that the data is available to other users in real-time.
However, it's important to note that using WriteNow in this way can still have some drawbacks. For
instance, it may result in inconsistent data if there are concurrent transactions accessing the same
records. It may also increase the chances of database deadlocks, as multiple transactions try to write
to the same records simultaneously. In general, using WriteNow should be done with caution and
only when absolutely necessary.
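In activity terms, the default case and the exceptional case might be sketched as follows (page names are hypothetical):

```text
Default — let Pega commit:
  Step: Obj-Save
    Step page : ChatMessagePage
    WriteNow  : unchecked
  (the save is committed when the flow or assignment completes)

Exception — genuine real-time requirement only:
  Step: Obj-Save
    Step page : ChatMessagePage
    WriteNow  : checked    -- bypasses the transaction model; use with caution
```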
If the applies to class name is included in the activity call instead of using a step page, the called
activity may execute with the wrong context or may not execute at all. This can lead to data
inconsistency or other errors in the application.
Using a step page in the activity call ensures that the called activity executes with the correct context
and avoids any unnecessary errors. It also makes the code more readable and easier to maintain.
When defining the step page in the activity call, it is important to ensure that the page is properly
initialized with the correct data. This can be done using the "property-set" methods or by using a
data transform to set the properties on the step page. Additionally, it is important to ensure that the
step page is passed on to any subsequent activities or rules in the flow to maintain the context.
Example: Consider an application where an activity named "CreateAccount" is called from another
activity called "ProcessAccount". The CreateAccount activity creates a new account object of class
"Account" and sets various properties on the object. Instead of including the applies to class name in
the activity call, it is recommended to define a step page of class "AccountPage".
Then, in the CreateAccount activity, the "AccountPage" can be used as the context to reference
properties on the account object. This ensures that the activity executes with the correct context
and avoids any unnecessary errors.
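Using the hypothetical names from the example, the call might be configured roughly as:

```text
Activity "ProcessAccount":
  Step: Call CreateAccount
    Step page: AccountPage          (class: Account)

Activity "CreateAccount" (Applies To: Account):
  Step: Property-Set
    .AccountNumber = Param.AccountNumber
    .Status        = "New"
  -- the "." properties resolve against the step page, so the activity
  -- always runs in the Account context regardless of the caller
```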
The suggested approach is to use StepStatusFail in the transition of the step acquiring the lock or in
the precondition of the step immediately after the lock is acquired. This allows the activity to fail
gracefully and stop the execution if the lock is not granted. Additionally, the error message should be
informative and indicate that the lock could not be acquired, so that the user or system
administrator can take appropriate action.
If locks are not checked before proceeding with the execution of an activity, it may result in conflicts
between different users or processes trying to access or modify the same data simultaneously. This
can lead to unpredictable behavior, data corruption, and loss of data. It is important to note that the
locks acquired by an activity are held until the transaction completes, or until they are explicitly
released using Obj-Release method.
When designing activities that acquire locks, it is important to consider the following design
considerations and notes:
• Always check if the lock was granted before proceeding with the execution of the activity.
• Use StepStatusFail in the transition of the step acquiring the lock or in the precondition of
the step immediately after the lock is acquired.
• Use informative error messages that indicate the reason for the lock failure.
• Release the locks as soon as they are no longer needed using Obj-Release method.
• Avoid acquiring locks on too many objects or for too long, as this can lead to performance
issues and lock contention.
Example: Consider a scenario where a user is attempting to update a case record that is currently
being modified by another user. If the activity attempting to modify the case record does not check
if the lock was granted before proceeding, it may overwrite the changes made by the other user,
resulting in data inconsistency and integrity issues. By using StepStatusFail in the transition of the
step acquiring the lock, the activity can gracefully handle the error and inform the user that the case
record is currently locked by another user. This allows the user to retry the operation after the lock
is released by the other user.
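A sketch of the locking pattern described above (the step layout and message text are illustrative):

```text
Step 1: Obj-Open
  Step page: CasePage   Lock: checked   ReleaseOnCommit: checked
  Transition: when StepStatusFail → jump to step 3
Step 2: ...update CasePage, save, and continue normally...
  Transition: exit activity (skip the error-handling step)
Step 3: Page-Set-Messages
  "This record is locked by another user. Please try again later."
```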
The Obj-Delete-By-Handle method is used to delete a specific instance of a class by its instance
handle. When using this method, it is not necessary to specify a step page since the method uses the
instance handle parameter to identify the row to delete. However, it is important to follow some
best practices to ensure that the method is used correctly.
If a step page is specified in the activity, and the instance handle is passed as a property on the step
page, it can lead to confusion and errors. This is because the instance handle should not be treated
as a property of the class, and doing so can cause issues when deleting the instance.
It is important to note that the Obj-Delete-By-Handle method should be used with caution, as it
permanently deletes the instance from the database. It is recommended to first use the Obj-Open-
By-Handle method to open the instance and check if it exists before deleting it.
In conclusion, the Obj-Delete-By-Handle method is a powerful tool for deleting instances of a class
using their instance handle. By following the best practices outlined above, you can ensure that this
method is used correctly and efficiently.
When working with Pega activities that use methods such as Obj-Open, Obj-Open-By-Handle, Obj-
Refresh-And-Lock, RDB-Open, or Page-New, it is important to consider the step page you are
specifying. If an empty step page is used, the contents of the primary page will be replaced with the
results of the method call. This can lead to unexpected behavior and should be avoided.
The suggested approach is to specify a named page as the step page instead of using an empty step
page. This will ensure that the results of the method call are stored in a separate page, and the
contents of the primary page remain intact.
If the step page is not specified, the method call will use the primary page as the default step page.
This can lead to issues such as data loss, where the contents of the primary page are overwritten
with the results of the method call.
By specifying a named page as the step page, you can ensure that the results of the method call are
stored in a separate page, which can be used later in the flow without affecting the primary page.
This can help prevent unexpected behavior and make your application more reliable.
• When specifying a step page, be sure to choose a name that is unique and descriptive.
• Avoid using an empty step page, as this can lead to unexpected behavior.
• When using methods that return a single record, such as Obj-Open or RDB-Open, consider
using a single value page instead of a page list. This can simplify your logic and reduce the
risk of data loss.
Example: Suppose you have a flow that needs to retrieve customer information from a database
using RDB-Open. If you specify an empty step page, the results will be stored in the primary page,
overwriting any existing data. Instead, you can specify a named page such as "CustomerInfo" as the
step page. This will ensure that the results are stored in a separate page and can be used later in the
flow without affecting the primary page.
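For the customer-information example, the relevant part of the step might look like this (page and property names are hypothetical):

```text
Step: RDB-Open
  Step page: CustomerInfo        -- named page; the primary page stays intact
  ...

Later steps read the results from the named page:
  Property-Set   .DisplayName = CustomerInfo.Name
```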
In summary, it is important to carefully consider the step page you are using when working with
Pega activities that use methods such as Obj-Open, Obj-Open-By-Handle, Obj-Refresh-And-Lock,
RDB-Open, or Page-New. By following best practices and specifying a named page as the step page,
you can prevent unexpected behavior and make your application more reliable.
3.3.13. Limiting use of activity rules and refactoring into data transforms
Activity rules in Pega Platform are used to perform business logic and other complex operations.
However, the use of activity rules should be limited as much as possible. In most situations, better
and more purpose-based configuration options exist that can be used instead of activities. Consider
refactoring the activity into a data transform, which is an easier-to-read and maintain configuration
option that provides most of the same capabilities.
The suggested approach is to evaluate the use of activity rules in your application and consider
whether a data transform or other configuration options can be used instead. Refactoring the
activity into a data transform can improve the readability and maintainability of the rule, as well as
simplify the rule for future changes. Data transforms are easier to understand and can be more
intuitive than activities.
If activity rules are overused in an application, it can lead to slower performance, increased
complexity, and reduced maintainability. Moreover, activities can be more difficult to debug and can
cause issues if the developer does not have a complete understanding of the Pega Platform.
Benefits of refactoring an activity into a data transform include better performance, simplified
debugging, and easier maintenance. Data transforms are easier to read, making it easier to
understand the intended behavior of the rule. They also make it easier to find and resolve any issues
that might arise.
Evaluate the use of activity rules on a case-by-case basis, considering whether a data transform or
other configuration options can be used instead, and refactor activities into data transforms as
needed. It is also important to document any changes made to the application to ensure that other
developers can understand the changes made and the reasoning behind them.
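A minimal before/after sketch of such a refactoring (rule and property names are hypothetical):

```text
Activity (before):
  Step 1: Property-Set   .FullName = .FirstName + " " + .LastName
  Step 2: Property-Set   .Status   = "New"

Data transform "SetCustomerDefaults" (after):
  Set  .FullName  equal to  .FirstName + " " + .LastName
  Set  .Status    equal to  "New"
```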
When retrieving lists of objects in Pega, it is recommended to use data pages backed by Report Definitions (RD) or lookups instead of Obj-Browse. This approach provides better performance and scalability and reduces the impact on the database.
If Obj-Browse is used to retrieve a list of objects, it can cause performance issues because it retrieves
all the data from the database, which can be a huge amount of data. This can slow down the system
and put a strain on the database.
Using data pages backed by RD or Lookup provides several benefits, such as:
1. Reduced database impact: Data pages backed by RD or Lookup retrieve only the necessary
data, reducing the impact on the database.
2. Better performance: Data pages are cached by default, which reduces the number of
database calls and improves performance.
3. Reusability: Data pages can be reused across the application, reducing redundancy and
improving maintainability.
When designing the data page, it is important to consider the following design considerations and
notes:
1. Use pagination: When using an RD-backed data page, it is recommended to use pagination
to limit the number of rows retrieved from the database.
2. Use data transform: Use a data transform to map the RD or Lookup results to the properties
required by the data page.
3. Limit the number of columns retrieved: Retrieve only the necessary columns from the
database to reduce the amount of data retrieved and improve performance.
4. Avoid complex queries: Avoid complex queries that join multiple tables or use subqueries, as
they can be expensive and impact performance.
For example, consider a requirement to display a list of customer names and their corresponding
account balances. Instead of using Obj-Browse to retrieve all customer and account data, a data
page backed by an RD can be created. The RD can retrieve only the required data, such as customer
name and account balance. The data page can then be used to populate a list or grid on the user
interface.
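The customer-balance example might be configured roughly as follows (rule names are hypothetical):

```text
Report definition "GetCustomerBalances":
  Columns : .CustomerName, .AccountBalance   -- only what the UI needs
  Paging  : enabled

Data page "D_CustomerBalances" (List, Read-only):
  Source  : Report definition GetCustomerBalances
  Scope   : as appropriate (e.g. Requestor) with a suitable refresh strategy

UI grid / table → Source: Data page D_CustomerBalances
```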
In summary, using data pages backed by RD or Lookup instead of using Obj-Browse to retrieve lists
of objects is a best practice that improves performance and reduces the impact on the database.
Design considerations and notes must be kept in mind when implementing this approach.
In Pega, local variables and properties are used to store values for reuse in different parts of an
application. While both are useful, it is important to understand when to give precedence to a local
variable over a property and vice versa. In this topic, we will discuss the best practices for using local
variables and properties in activities.
When defining a local variable or a property, it is important to consider the scope of the variable. If
the scope of the variable is limited only to the activity, it is recommended to give precedence to the
local variable over the property. This is because local variables are defined only within the activity
and can be used to store temporary data or intermediate results that are used within the activity.
Local variables have better performance and consume less memory compared to properties.
If local variables are defined but not being used in the activity, it is recommended to remove them.
This is because unused variables can cause confusion and clutter the activity, leading to a decrease
in readability and maintainability. Additionally, unused variables can also affect performance, as
Pega needs to allocate memory for them even though they are not being used.
On the other hand, properties have a larger scope and can be used across multiple activities or rules
within an application. Properties are used to store data that is required across different rules and
processes. However, properties consume more memory and have a greater impact on performance
compared to local variables.
Here are some key benefits of following the above best practices:
• Better readability and maintainability: By removing unused variables and properties, the
activity becomes easier to read and maintain, reducing the risk of introducing errors.
• Optimized memory usage: By using local variables and properties appropriately, memory
usage can be optimized, leading to better performance and reduced resource consumption.
Suppose an activity needs to perform a simple calculation using two numbers. In this case, it is
recommended to use local variables to store the two numbers and the result of the calculation. This
will avoid the need to define properties for each number and result, leading to better performance
and improved readability of the activity.
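In the calculation example, the activity might be defined as follows (names are hypothetical):

```text
Activity "AddNumbers" — Local variables:
  numA (Integer), numB (Integer), total (Integer)

Step 1: Property-Set
  Local.total = Local.numA + Local.numB
-- no clipboard properties are created; the values exist only for the
-- duration of the activity execution
```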
In summary, giving precedence to local variables over properties in activities can lead to improved
performance, better readability, and optimized memory usage. By following the above best
practices, developers can ensure that their activities are efficient, maintainable, and performant.
3.3.16. Setting Activity Type in Security Tab for Better Performance and Security
In Pega, Activity rules are used to define a sequence of steps that perform a specific task. When
creating an activity rule, it is important to set an appropriate 'Activity type' in the Security tab to
ensure better performance and security. The available options include Activity (default), Utility, Route, Load Data Page, and others.
• Activity: Activity type is typically used for creating standalone activities that can be invoked
independently of any flow or case. These activities can be used for a wide range of generic
tasks such as calculations, data transformations, integrations, and so on.
• Utility: Utility type is typically used for activities that are intended to be used within a flow or
a case. These activities are designed to perform specific actions related to case processing
such as updating properties, setting status, calling other activities, or invoking a service.
While both types of activities can be used for a wide range of tasks, it is generally recommended to
use the appropriate activity type based on the intended usage. Using the correct activity type not
only helps in organizing and managing your rule base but also helps in improving the performance of
your application.
If the activity type is not set properly, it can impact performance and result in warnings during the
application analysis. The warnings indicate that the activity may have more access than necessary or
may be performing unnecessary processing. Setting the appropriate activity type helps to improve
the performance of the application and also ensures that the activity is secure by limiting the access
to only what is necessary.
It is important to note that setting the appropriate activity type alone is not sufficient for ensuring
security. The activity should also be reviewed and tested thoroughly to ensure that it is performing
the intended task without any unintended consequences.
In summary, setting the appropriate activity type in the Security tab is an important step for
ensuring better performance and security in Pega applications. By selecting the appropriate activity
type, unnecessary access to data can be limited, and the performance of the application can be
improved.
When performing operations on clipboard pages in Pega, it's important to ensure that the page
actually exists before executing a data transform or activity that may reference it. This is particularly
important when the page is being created dynamically or is optional.
The suggested approach is to perform appropriate verification checks before attempting to perform
any operations on the clipboard page. This can be done by using the "PageExists" function, which
returns a Boolean value indicating whether the specified clipboard page exists or not.
If verification checks are not done, it may result in errors such as NullPointerException or invalid
parameter exceptions, which can impact the functionality of the system.
The benefits of performing verification checks include increased reliability and stability of the
system, as well as improved error handling and user experience.
Design considerations when implementing this approach include identifying the scenarios where
clipboard pages may be missing or optional, and implementing appropriate checks accordingly.
By following this approach, you can ensure that your Pega system is more robust and reliable, while
providing a better user experience.
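As a sketch, the check can be placed in a precondition on the step that reads the page. The exact function signature may vary by Pega version, so verify it against your platform's function library; the page and property names below are hypothetical:

```text
Step: Property-Set   .Total = OptionalDataPage.Amount
  Precondition:
    When: @PageExists("OptionalDataPage", tools)
    If false → skip this step (or jump to an error-handling step)
```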
3.3.18. Do not use obj-save to save rules. Use "Call Save" instead
In Pega, "obj-save" is used to save instances of data classes to the database. While it is possible to
use obj-save to save rules, it is not recommended. Instead, use the out-of-the-box "Save" activity.
The "Save" activity performs additional validations and checks before saving a rule, ensuring that it is
saved in the correct format and with all necessary information. Using obj-save bypasses these
validations and can result in inconsistencies or issues down the line. Using "Call Save" instead of obj-
save ensures that the rule is saved correctly and in a consistent manner.
The benefits of using "Call Save" over obj-save include better validation and consistency, as well as
improved maintainability of your application. It is also in line with Pega best practices and ensures
that your application remains compatible with future releases.
Hardcoding the names "pyWorkPage" and "pyWorkCover" in your rules can cause problems in
certain situations. One of these is when a subcase is added to a parent case, and the subcase is
currently in the part of its workflow before it reaches its first assignment. In this scenario,
pyWorkPage refers to the parent case that is blocked waiting for the "Create Case" shape to finish,
while the subcase is named something else. Any rules on the subcase case type that assume
pyWorkPage refers to an instance of itself are making an invalid assumption, as pyWorkPage still
points to the parent case.
Another situation where hardcoding pyWorkPage and pyWorkCover can cause issues is when
custom configuration, such as a Service, Agent, or Queue Processor activity, creates a new top-level
case and starts the workflow. In this case, the configurer can name the clipboard page holding the
new work item anything they want, such as "NewWorkItemPage." Here, too, there is a window
when the rules being performed on the workflow of the current work item are not named
pyWorkPage on the clipboard.
To avoid these issues, it's recommended to avoid hardcoding pyWorkPage and pyWorkCover in your
rules. Instead, reference the current work item using the "Primary" page, which provides more
resilience. By doing so, you can ensure that your rules work correctly in all scenarios and avoid any
potential errors caused by assumptions about the names of clipboard pages.
Consider a data transform that calculates the total amount for a work item. Instead of hardcoding
pyWorkPage or pyWorkCover in the data transform, use the Primary page of the rule to reference
the current work item. This can be done by using "Primary.TotalAmount" instead of
"pyWorkPage.TotalAmount" or "pyWorkCover.TotalAmount".
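The data transform might therefore be written as follows (property names are hypothetical):

```text
Data transform "CalculateTotal" (Applies To: the case type):
  Set  .TotalAmount  equal to  .Quantity * .UnitPrice
-- the unqualified "." refers to the Primary page, so the rule works
-- whether the case page is named pyWorkPage, NewWorkItemPage, or
-- anything else
```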
In Pega, when performing a Commit in an activity that is not part of flow processing, it is
recommended to perform a Rollback if an error occurs. The Rollback method can be used to cancel
or withdraw any previous uncommitted changes to the PegaRULES database and to external
databases accessed from an external class. All pending Obj-Save and Obj-Delete methods are
cancelled.
The Rollback method takes no parameters and all uncommitted database operations are withdrawn.
If an instance is locked and the ReleaseOnCommit box was selected, the lock is released. However,
locks on instances where the ReleaseOnCommit box was not selected are retained.
Failing to perform a Rollback when an error occurs can result in inconsistent data in the PegaRULES
database and external databases, as well as causing locks to remain on instances that are no longer
being updated. This can lead to performance issues and potential data corruption.
One design consideration when using Commit and Rollback is to ensure that transactions are kept as
short as possible. Long transactions can increase the likelihood of locking and blocking, which can
impact system performance. It is also important to handle exceptions appropriately and provide
meaningful error messages to users.
For example, suppose a Pega application allows users to update customer information. When a user
submits the updated information, the application executes an activity that performs a Commit to
save the changes to the database. If an error occurs during the Commit, such as a database
connection failure, it is important to perform a Rollback to cancel the changes and release any locks
on the affected instances.
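A sketch of this error-handling pattern (the step layout and message text are illustrative):

```text
Step 1: Obj-Save     Step page: CustomerPage    (WriteNow unchecked)
Step 2: Commit
  Transition: when StepStatusGood → exit activity
Step 3: Rollback     -- withdraws the pending Obj-Save; locks with
                     -- ReleaseOnCommit selected are released
Step 4: Page-Set-Messages   "Unable to save changes. Please try again."
```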
By following best practices for handling Commit and Rollback in Pega, developers can ensure that
their applications maintain data consistency and avoid performance issues.
Visibility expressions in Pega are used to control the display of fields, sections, and other UI elements
based on certain conditions. While expressions can be used directly in the Visibility property of a UI
element, it is recommended to use "When" conditions for better maintainability and extensibility.
The "When" condition is a named expression that can be reused across different UI elements,
sections, or even rulesets. By creating a When condition, you can avoid duplicating the same
expression in multiple places and make it easier to update the condition in the future.
The benefit of using "When" conditions is that it reduces the risk of errors and saves development
time. By encapsulating the expression in a single location, you can ensure that all the UI elements
using that condition will be updated automatically when the condition is modified. This helps to
avoid inconsistencies in the UI and reduces the chance of introducing errors in the code.
When creating a When condition, it's important to consider the scope of the condition and the
naming conventions. The condition should have a descriptive name that reflects its purpose and
should be scoped appropriately to avoid name conflicts with other When conditions.
On the other hand, hardcoding visibility expressions directly in the Visibility property can lead to
maintainability issues. If the same expression is used in multiple places and needs to be updated, it
can be time-consuming to make the change in each location. Additionally, if the expression is
complex, it can be difficult to understand and debug.
To illustrate the recommended approach, consider the following example. Suppose you have a
section that should only be visible if the current user has a certain role. Instead of using the visibility
expression directly in the section's Visibility property, create a When condition with a descriptive
name, such as "hasRole_XYZ". Then, use that When condition in the Visibility property of the section.
When defining the When condition, you can use the expression builder or write the expression
manually. The expression should evaluate to a boolean value and can reference properties,
functions, or other expressions.
In conclusion, it is best practice to use "When" conditions for better maintainability and extensibility
of visibility expressions in Pega. By encapsulating the expression in a single location, you can ensure
consistency and reduce the risk of errors. When creating a When condition, consider the naming
convention and scope to avoid conflicts with other conditions.
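The reuse benefit is the same one that named functions give in ordinary code. As a rough, non-Pega sketch in Python (the role name "XYZ" is the hypothetical one from the example above):

```python
# Non-Pega analogy: a named condition reused by several UI elements,
# like a single When rule referenced from multiple Visibility properties.
def has_role_xyz(user):
    """Named condition, analogous to a reusable When rule such as 'hasRole_XYZ'."""
    return "XYZ" in user.get("roles", [])

def section_visible(user):
    return has_role_xyz(user)   # reuses the one definition

def button_visible(user):
    return has_role_xyz(user)   # updating has_role_xyz updates all callers at once
```

If the role check ever changes, only `has_role_xyz` is edited, exactly as a single When rule update propagates to every UI element that references it.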
43
linkedin.com/in/christin-varughese-b968152b/
4.1.2. Understanding Field Value in Pega Section Rules
When designing user interfaces, it is important to consider the ability to easily localize the
application for users in different regions or languages. Field values for labels can be used to store the
text that will be displayed to the user in a specific language.
In Pega, field values for section rules can be defined for various labels such as pyCaption, pyLabel,
pyButtonLabel, and more. These field values can be translated into different languages, allowing the
application to display the appropriate text based on the user's locale settings.
For example, suppose a section in an application contains a button labeled "Submit". To make the
application multilingual, a developer can add a field value for the pyButtonLabel property on the
section rule and set it to "Submit" in the default language (e.g. English). They can then add
translations for other languages by creating new field values for the same property and setting the
text to the appropriate translation. When the user opens the application, the system will check their
locale settings and display the appropriate text for the button label.
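Conceptually, field values behave like a lookup keyed by property and locale, with a fallback to the default language. The following Python sketch is only an analogy; the locale codes and translations are illustrative, not Pega data:

```python
# Non-Pega analogy: field values as a (property, locale) -> text lookup.
FIELD_VALUES = {
    ("pyButtonLabel", "en_US"): "Submit",
    ("pyButtonLabel", "fr_FR"): "Soumettre",
    ("pyButtonLabel", "de_DE"): "Absenden",
}

def resolve_label(prop, locale, default_locale="en_US"):
    """Return the label for the user's locale, falling back to the default language."""
    return FIELD_VALUES.get((prop, locale), FIELD_VALUES[(prop, default_locale)])
```

A user with a French locale sees "Soumettre"; a locale with no translation falls back to the default "Submit", which mirrors how Pega resolves field values at runtime.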
When writing label text, follow these best practices:
1. Use sentence case format: Labels should be written in sentence case, where the first
letter of the first word is capitalized and all other letters are lowercase. This makes labels
easier for users to read and understand.
2. Avoid special characters: Special characters such as ampersands, underscores, and hyphens
can cause issues with localization. It is recommended to avoid using these characters in
labels.
3. Use meaningful and concise labels: Labels should be clear and concise, and accurately reflect
the purpose of the element they are describing. Avoid using overly technical language or
abbreviations that may not be familiar to users.
4. Consider localization: Labels should be designed with localization in mind. This means using
standard, widely understood terms, avoiding colloquialisms or regionalisms, and leaving
enough space for longer translations in other languages.
By following these best practices, you can ensure that labels in your Pega application are consistent,
easy to understand, and can be easily translated for global audiences. Failure to follow these
guidelines can result in confusing or inaccurate labels, which can negatively impact user experience
and application usability.
For example, consider a button label that reads "SUBMIT_FORM". This label violates best practices
as it contains underscores and is written in all caps. A better label would be "Submit Form", written
in sentence case with no special characters. This label is clearer and more concise, and can be easily
localized for global users.
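The normalization described above (strip special characters, apply sentence case) can be expressed as a small helper. This Python sketch is illustrative only; Pega labels are edited by hand in the rule forms rather than generated:

```python
# Illustrative helper applying rules 1 and 2 above:
# replace underscores/hyphens, then sentence-case the result.
def to_sentence_case(label):
    """Turn a raw label like 'SUBMIT_FORM' into sentence case: 'Submit form'."""
    words = label.replace("_", " ").replace("-", " ").split()
    text = " ".join(words).lower()
    return text[:1].upper() + text[1:] if text else text
```

Applied to the example above, `to_sentence_case("SUBMIT_FORM")` yields "Submit form", which satisfies the sentence-case and no-special-characters guidelines.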
Overall, it is important to consider localization when designing user interfaces to ensure that the
application is accessible to users in different regions or languages. By using field values for labels in
section rules, developers can easily provide translations for different languages without having to
modify the UI layout or application code.
4.1.3. Consider Migrating to Auto-Generated Controls for Better Maintainability
When developing a section, it is possible to use custom controls to meet specific UI requirements.
However, custom controls can have a negative impact on the maintainability of the application, as
they require additional effort for development and testing. In addition, custom controls may not be
compatible with all browsers, leading to issues with cross-browser compatibility.
Pega provides an option to auto-generate controls based on the properties used in a section. These
auto-generated controls are highly maintainable, as they are updated automatically when changes
are made to the properties used in the section. Furthermore, they are compatible with all browsers
and can be easily customized to meet specific requirements.
To migrate from custom controls to auto-generated controls, the following approach is suggested:
1. Identify the custom controls used in the section and the properties they are bound to.
2. Remove the custom controls and replace them with auto-generated controls that
correspond to the same properties.
3. Test the section thoroughly to ensure that the auto-generated controls function correctly.
If this approach is not taken, the application may become increasingly difficult to maintain, and
issues with cross-browser compatibility may arise. In addition, custom controls may not be
compatible with future versions of Pega, leading to issues with application upgrades.
Auto-generated controls, by contrast, remain compatible with future versions of Pega,
reducing the risk of issues during application upgrades.
Example: A developer has created a custom control for a date picker in a section. The custom control
is bound to a property that stores the date value. To improve maintainability and cross-browser
compatibility, the developer decides to migrate to an auto-generated date picker control. The
developer removes the custom control and replaces it with an auto-generated date picker control
that corresponds to the same property. The section is then tested thoroughly to ensure that the
auto-generated control functions correctly.
4.1.4. Defining Style Formats in the Application Skin
When creating styles for your Pega application, it is best practice to define them in the application
skin. This allows for consistent styling across all elements, making maintenance easier and reducing
the likelihood of styling errors. Additionally, by separating styling from the elements themselves, it
allows for greater flexibility in the future if you need to make changes to the styling of your
application.
To define a style format in your application skin, you can use the Skin rule form. The Skin rule form
provides various options for defining style formats, such as font styles, border styles, and
background colors.
If you do not define style formats in your application skin and instead rely on inline styles, you may
run into issues with inconsistent styling across elements. This can lead to maintenance difficulties as
it will be harder to track down and fix styling errors. Additionally, as your application grows in size
and complexity, it will become increasingly difficult to manage the styles.
Defining style formats in your application skin provides several benefits. Firstly, it ensures consistent
styling across all elements of your application. This results in a more polished and professional-
looking interface. Secondly, it makes maintenance easier as you can easily update styles across the
entire application in one place. Finally, it allows for greater flexibility in the future if you need to
make changes to the styling of your application.
When defining style formats in your application skin, it is important to consider the design of your
application. You should aim to create a cohesive and consistent look and feel throughout your
application. This includes choosing colors and fonts that complement each other and aligning your
styles with the overall branding of your organization.
It is also important to note that defining style formats in your application skin will not affect styles
defined in custom controls or layouts. You may need to modify these styles separately if you want to
ensure consistent styling across all elements.
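Conceptually, a skin acts as a single style-format map that every element references instead of carrying its own inline styles. As a non-Pega Python sketch (the format names and values are invented):

```python
# Non-Pega analogy: one central style-format map, like a skin rule.
SKIN = {
    "heading":   {"font": "Arial 18px bold", "color": "#1a3c6e"},
    "body-text": {"font": "Arial 13px",      "color": "#222222"},
    "warning":   {"font": "Arial 13px bold", "color": "#b00020"},
}

def style_for(format_name):
    """Every element looks up its format here, so one change restyles them all."""
    return SKIN[format_name]
```

Changing the "warning" color in `SKIN` restyles every element that uses that format, which is exactly the maintenance benefit of defining style formats in the skin rather than inline.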
Example: For example, imagine you are creating a Pega application for a bank. You want to ensure
that all elements in the application have a consistent look and feel. To do this, you define style
formats for fonts, colors, and borders in the application skin. You choose fonts and colors that align
with the branding of the bank and create a cohesive look throughout the application. When the
application is complete, it has a polished and professional look that is consistent across all elements.
4.1.5. Using Wrapper Activities to Consolidate Actions
When designing a section rule, it's common to have multiple actions triggered from a single event
such as a button click. However, having multiple activities or data transforms triggered from a single
event can result in increased network traffic and may impact system performance. To avoid this, it's
recommended to use a wrapper activity to consolidate multiple actions into a single activity.
A wrapper activity is an activity that contains one or more sub-activities that are executed
sequentially. By using a wrapper activity, you can consolidate all the necessary actions and data
transforms into a single activity. This not only reduces the number of requests to the server but also
simplifies the code and makes it easier to maintain.
While using a wrapper activity can be beneficial, it's important to consider the design implications.
Here are some key considerations and notes:
• Ensure that the wrapper activity is named appropriately and accurately reflects the actions it
contains. This will make it easier to understand the purpose of the activity and its contents.
• Make sure to include comments and documentation within the wrapper activity to clearly
explain the purpose and function of the contained actions.
• When designing the wrapper activity, take care to organize the contained actions in a logical
and intuitive manner. This will make it easier to maintain and update the activity in the
future.
• If necessary, use parameters to pass information between the wrapper activity and its
contained actions.
Using a wrapper activity to consolidate multiple actions triggered from a single event has
several benefits: it reduces the number of server requests, simplifies the code, and makes the
logic easier to maintain.
Example: Suppose you have a button on a section that triggers two actions: one to update a case
property and another to display a success message. Instead of having two separate actions, you
could create a wrapper activity that contains both actions. The wrapper activity would first update
the case property and then display the success message. By consolidating these actions into a single
activity, you can improve system performance and make the code easier to maintain.
In summary, using a wrapper activity to consolidate multiple actions triggered from a single event is
a best practice that can improve system performance and simplify code maintenance. By carefully
organizing and documenting the wrapper activity, you can create more efficient and effective
section rules.
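The pattern can be sketched outside Pega as one entry-point function that runs both steps in sequence, so the client makes a single request instead of two. All names here are illustrative:

```python
# Non-Pega analogy of a wrapper activity: one entry point runs both
# sub-steps sequentially, so the UI issues a single server request.
def update_case_property(case, name, value):
    case[name] = value

def queue_success_message(case, text):
    case.setdefault("messages", []).append(text)

def save_order_wrapper(case):
    """Wrapper: update the case property, then queue the success message."""
    update_case_property(case, "status", "Saved")
    queue_success_message(case, "Order saved successfully")
    return case
```

The button is then wired to the single wrapper rather than to two separate actions, mirroring the example above.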
4.1.6. Improving Performance with Defer Load for Tabs and Sections
Defer load is a feature in Pega that allows for faster screen loads by loading data just in time when
the end user needs it. Enabling defer load for tabs and sections means that the data for each tab or
section will not be loaded until the user actually clicks on that tab or expands that section. This
approach can significantly reduce the initial loading time for a screen that has multiple tabs or
sections with large data sets.
To enable defer load, navigate to the configuration options for the relevant tab or section, and set
the "Defer Load" option to "Yes." You can also set a custom loading message to display while the
data is being loaded.
Issues with not using defer load include slower loading times, increased network traffic, and
potentially higher server load. If a screen has multiple tabs or sections with large data sets, loading
everything upfront can be time-consuming and frustrating for the end user.
Defer load gives you faster screen load times, an improved user experience, and a reduced
server load. By loading data only as needed, Pega makes more efficient use of server resources
and network bandwidth.
Choose carefully which sections or tabs to defer-load, as certain sections are always needed
on initial screen load. It is also important to test the screen thoroughly to ensure that data
is loaded correctly as the user interacts with the tabs or sections.
For example, imagine a dashboard screen with multiple tabs, each displaying different reports.
Without defer load, the screen would need to load all of the data for each report upfront, resulting
in slower load times. By enabling defer load for each tab, the data for each report is only loaded
when the user clicks on that specific tab, resulting in faster screen load times.
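The just-in-time idea behind defer load can be sketched as a lazily loaded tab whose loader runs only on the first open, never on initial screen load. This is an analogy, not Pega internals:

```python
# Analogy for defer load: the tab's data is fetched on first open only.
class DeferLoadedTab:
    def __init__(self, loader):
        self._loader = loader   # function that fetches this tab's data
        self._data = None
        self.loaded = False

    def open(self):
        """Load the data the first time the user opens the tab, then reuse it."""
        if not self.loaded:
            self._data = self._loader()
            self.loaded = True
        return self._data
```

A dashboard with ten such tabs pays the loading cost of only the tabs the user actually opens, which is the behavior the "Defer load" option provides.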
In summary, enabling defer load for tabs and sections can significantly improve the performance and
user experience of a Pega application by loading data just in time when it's needed.
4.1.7. Best Practices for Using Images
When it comes to using images in web development, it's important to consider the size of the image
and the impact it will have on website performance. Here are some best practices for using images:
1. Use JPEG or PNG images only for larger images: For images that are larger than 100 x 100
pixels, it's best to use JPEG or PNG file formats. These file formats are best for images with
complex color schemes or those that require transparency.
2. Use Icon fonts for smaller images: For smaller images such as icons or logos, it's best to use
Icon fonts. Icon fonts are vector-based fonts that can be easily scaled to any size without
losing quality. This helps avoid time-consuming requests to the server and improves website
performance.
Not following these best practices can result in slow loading times and a negative impact on user
experience. Large images can significantly increase the size of web pages, leading to longer load
times and increased bandwidth usage. This can be particularly problematic for users on slower
connections or mobile devices.
In addition to considering image size, it's also important to ensure that images are optimized for the
web. This includes compressing images to reduce file size without compromising image quality.
Using alt text to describe images is also important for accessibility and search engine optimization.
For example, let's say you have a website with multiple product categories that are represented by
icons. Instead of using JPEG or PNG images for the icons, you could use Icon fonts to reduce the
number of requests to the server and improve website performance.
In summary, following best practices for using images in web development can help improve website
performance and user experience. By using the appropriate file formats and optimizing images,
developers can create faster-loading web pages that provide a better user experience.
4.1.8. Labelling Controls Using Property Descriptions
When labeling controls in a Pega application, it's recommended to set the label value for a control
from the short description for its property rather than from inline labels. The short description
describes the purpose of the property rule and is defined in the property rule form. The label of the
control should be set to the short description property value instead of creating inline labels in the
section.
If labels are set using inline labels in the section, it increases the maintenance cost of the application.
Any changes made to the labels will require updates to be made in every section where the label is
used, making it time-consuming and error-prone. Additionally, this approach can result in
inconsistent labeling of the same field in different sections.
Setting the label value for a control from the short description for its property provides a consistent
labeling approach across the application, reduces the need for redundant label definitions, and
makes it easier to maintain the application. The approach also helps in localization and saves time in
translation.
When using the short description for labeling controls, it's important to ensure that the short
description property values are clear, concise, and accurately describe the purpose of the field. This
will help users to easily understand the field's purpose and complete the form efficiently. Also, make
sure that the short description is written in a grammatically correct format.
Example: Consider a property rule "Customer Name" with a short description "Enter the customer's
name". Instead of creating an inline label in the section, the label for the text input control for
"Customer Name" can be set to the value of the "Enter the customer's name" short description
property. This approach eliminates the need for redundant label definitions and ensures consistency
in labeling.
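The single-source-of-truth idea can be sketched as a lookup into property metadata. This is a Python analogy only; the property names come from the example above and the structure is invented:

```python
# Non-Pega analogy: the control's label is read from the property's
# short description instead of being redefined inline in each section.
PROPERTIES = {
    "CustomerName": {"type": "Text", "short_description": "Enter the customer's name"},
    "OrderTotal":   {"type": "Decimal", "short_description": "Order total"},
}

def label_for(prop_name):
    """One source of truth: change the short description once, every section follows."""
    return PROPERTIES[prop_name]["short_description"]
```

Every section that displays `CustomerName` calls the same lookup, so a wording change in the short description is reflected everywhere without touching any section.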
By following this best practice, developers can create Pega applications that are easier to maintain,
provide consistent labeling, and can be localized more easily.
4.2.1. Minimizing the Number of Sections
When designing a user interface, it's important to use the appropriate layout to achieve the desired
functionality and design. When deciding whether to use a section or nest a dynamic layout, minimize
the number of sections that you create to improve performance and maintenance.
If you use too many sections, it can lead to performance issues, such as slower loading times and
excessive memory usage. Additionally, maintaining a large number of sections can be difficult, as it
can be challenging to keep track of all the different sections and their contents.
The suggested approach is to use dynamic layouts whenever possible, as they are more efficient and
flexible than sections. Dynamic layouts can contain multiple layout elements, including sections, and
allow you to define the layout and behavior of your UI components at runtime.
By minimizing the number of sections used in your user interface, you can improve performance,
reduce memory usage, and simplify maintenance. Additionally, dynamic layouts provide more
flexibility and control over the layout and behavior of your UI components.
When designing your user interface, consider the following best practices:
• Use dynamic layouts whenever possible, as they are more efficient and flexible than
sections.
• Use nested dynamic layouts to organize your UI components and reduce complexity.
• Use containers, such as a "Group" or "Fieldset" layout, to group related fields together.
Example: Suppose you are designing a form for collecting user information. The form has several
fields, including name, email, phone number, and address. You can use a dynamic layout to define
the layout and behavior of the form, and use sections to group related fields together. For example,
you might use a dynamic layout with three sections: Personal Information, Contact Information, and
Address. By grouping related fields together in sections, you can improve the organization and
readability of the form, while minimizing the number of sections used.
Minimizing the number of sections can improve performance and maintenance in several ways:
1. Performance: Each section creates an additional HTML div element on the page, which can
increase the overall size and complexity of the page, leading to slower page loads and
rendering times. In addition, each section is generated as a separate rule with its own markup,
which can increase the amount of CSS and JavaScript that needs to be loaded and executed, further slowing
down the page. By minimizing the number of sections, you can reduce the amount of HTML,
CSS, and JavaScript that needs to be loaded and executed, leading to faster page loads and
rendering times.
2. Maintenance: Each section represents a distinct area of the page, and may contain its own
set of rules and logic. This can make it difficult to maintain the page over time, especially if
changes are made to one section that affect other sections on the page. By minimizing the
number of sections, you can simplify the page structure and make it easier to maintain and
update over time.
3. Scalability: As the page grows in complexity, adding more sections can increase the
complexity of the page even further, making it more difficult to maintain and update. By
minimizing the number of sections, you can create a more scalable and flexible page
structure that can be easily updated and adapted as needed.
For example, if you have a page that contains several sections, each with its own set of rules and
logic, it can become difficult to maintain and update over time. However, by consolidating the
sections and using a single dynamic layout, you can simplify the page structure and make it easier to
maintain and update in the long term. This can also improve performance by reducing the amount of
HTML, CSS, and JavaScript that needs to be loaded and executed, leading to faster page loads and
rendering times.
In summary, minimizing the number of sections can lead to better performance, easier maintenance,
and improved scalability, making it a best practice for building Pega applications.
4.2.2. Optimizing Scroll Performance for Better User Experience
When designing user interfaces, it is important to optimize scrolling behavior to provide a seamless
user experience. Here are some best practices to keep in mind:
1. Avoid horizontal scrolling: Horizontal scrolling can be disorienting for users and make it
difficult to read content. Instead, design layouts that fit within the width of the user's
screen.
2. Minimize vertical scrolling: While vertical scrolling is inevitable for long pages or content,
aim to minimize it as much as possible. Break up content into smaller, bite-sized pieces, and
use pagination or infinite scrolling to avoid overwhelming users with too much content at
once.
3. Use lazy loading: Loading all the content at once can lead to slow loading times and poor
performance. Consider using lazy loading, which loads content only as the user scrolls down
the page.
4. Optimize images: Large, high-resolution images can significantly slow down scrolling
performance. Optimize images for the web by compressing file sizes, reducing image
dimensions, and choosing appropriate image formats.
5. Test performance: Always test your designs for scrolling performance on various devices and
platforms. Use tools such as the Performance Analyzer (PAL), Performance Profiler, and Tracer
to identify performance issues and optimize accordingly.
By following these best practices, you can ensure that your UI provides a smooth and seamless
scrolling experience for users, which in turn leads to better engagement and satisfaction.
4.2.3. Using Responsive Design for Table-Style Layouts
One important design consideration for Pega applications is the use of responsive design for table-
style layouts such as grids or tree grids. A best practice is to avoid using fixed widths and instead set
the width to 100% to allow the display to adapt to the device being used.
The suggested approach is to configure responsive behaviour in the skin rule, which
automatically adjusts the layout of the table based on the screen size. This means the table
displays properly regardless of the device being used, whether a desktop, tablet, or mobile
phone. For example, you can use media queries to apply different styles based on screen size,
or use breakpoints to create responsive layouts.
If fixed widths are used for table-style layouts, it can result in horizontal scrolling and other display
issues on smaller screens. This can make the application difficult to use and impact user experience.
In addition, fixed widths can make the application less adaptable to changes in screen size or device
orientation.
Using responsive design for table-style layouts offers several benefits. First, it improves the usability
of the application across different devices, which can lead to increased user satisfaction and
productivity. Second, it reduces the need for manual intervention to adjust the layout for different
devices, which can save time and effort. Finally, it future-proofs the application by making it more
adaptable to changes in device technology and screen sizes.
When designing table-style layouts, it's important to consider the design of the application as a
whole. The layout should be consistent with the overall design aesthetic and user experience of the
application. It's also important to consider the amount and type of data being displayed, as this can
impact the performance of the application.
For example, a Pega application used by an insurance company might display a grid of policy
information. The grid should be designed to be responsive, with a 100% width that adapts to the
screen size. The layout should also be consistent with the overall design of the application and use
appropriate font sizes and colors. Finally, the grid should be optimized for performance by only
displaying the necessary data and minimizing the use of complex calculations or queries.
In summary, using responsive design for table-style layouts is a key design consideration for Pega
applications. By avoiding fixed widths and using responsive layout configuration in skin rule,
applications can be designed to be more adaptable, usable, and future-proof.
4.2.4. Reducing Unnecessary Clicks
A crucial aspect of designing user interfaces is to create a seamless user experience. A key
component of a positive user experience is the removal of unnecessary clicks, particularly for simple
or frequently repeated tasks. This practice aims to improve usability by minimizing the time and
effort required to complete tasks.
The suggested approach to achieving this is to carefully analyze and map out user workflows.
Identify tasks that are simple, repetitive, and require multiple clicks, and consider ways to streamline
the process. Examples of methods for achieving this include pre-populating form fields, providing
auto-complete suggestions, and enabling one-click options for frequently performed tasks.
If not done in this way, users may become frustrated with the interface and may be less likely to use
the system. This can result in lower user engagement and adoption rates, leading to a negative
impact on the organization's goals.
Benefits of reducing unnecessary clicks include increased productivity, improved user satisfaction,
and faster task completion times. By minimizing the number of clicks required to complete tasks,
users can focus on the task at hand, leading to a more productive and positive user experience.
Design considerations include the need to balance simplicity and functionality. It is essential to avoid
cluttering the interface with too many options, which can lead to confusion and overwhelm users.
Ensure that the interface is designed to facilitate quick and easy navigation, with minimal
distractions and an intuitive layout.
For example, suppose a user is frequently required to create a new case in a case management
system. In that case, the system could provide a one-click option to create a new case from the
user's dashboard, eliminating the need for the user to navigate through multiple screens.
In conclusion, by reducing unnecessary clicks in the user interface, designers can significantly
improve the user experience, resulting in increased user satisfaction, productivity, and engagement.
4.2.5. Enhancing User Experience through Efficient Use of Section Refresh in Pega
Instead of using the "Refresh When" option for section refresh, it is recommended to use the
"Refresh on change" option on layout. This approach allows for a more efficient use of section
refresh as it minimizes the number of times a section is refreshed.
The "Refresh When" option triggers a refresh of the entire section when any of the conditions
specified in the option are met. This can result in unnecessary refreshes of the entire section,
causing performance issues and potentially leading to a poor user experience.
On the other hand, the "Refresh on change" option on layout refreshes only the portions of the
section that have changed, resulting in a faster and more efficient refresh process.
Benefits:
• Enhanced user experience: With faster and more efficient section refresh, the user
experience is enhanced, reducing the wait time for the user to see the updated information.
• Lower maintenance costs: With fewer section refreshes, there is less need for maintenance
and updates to the application.
Design considerations:
• Evaluate the frequency of data changes: The "Refresh on change" option is best suited for
sections with data that changes frequently. For sections with data that changes infrequently,
the "Refresh When" option may be more appropriate.
• Determine the scope of the refresh: When using "Refresh on change," ensure that only the
necessary portions of the section are refreshed, rather than refreshing the entire section.
• Test the performance: Before deploying the application, test the performance of the section
refresh to ensure that it meets the performance requirements of the application.
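The "refresh only what changed" idea can be sketched with simple dirty-tracking: only fields marked as changed are re-rendered, and unchanged assignments are ignored. This is a general UI analogy, not Pega's actual refresh mechanism:

```python
# Analogy: track which fields changed so only those portions are refreshed,
# instead of re-rendering the whole section on every condition check.
class ChangeTrackedForm:
    def __init__(self, fields):
        self.fields = dict(fields)
        self.dirty = set()

    def set(self, name, value):
        if self.fields.get(name) != value:
            self.fields[name] = value
            self.dirty.add(name)   # mark only this field for refresh

    def refresh(self):
        """Return just the fields that need re-rendering, then clear the set."""
        changed = sorted(self.dirty)
        self.dirty.clear()
        return changed
```

Setting a field to its existing value triggers no refresh at all, which is the efficiency gain "Refresh on change" aims for over refreshing the whole section.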
Example: Suppose a Pega application has a section that displays a list of orders. When a user creates
a new order, the section needs to be refreshed to display the new order. Instead of using the
"Refresh When" option, the "Refresh on change" option can be used on the layout of the section.
This ensures that only the new order is added to the section, rather than refreshing the entire
section. As a result, the user experience is enhanced, and the performance of the application is
improved.
4.2.6. Using Set Value to Update Property Flags
When creating a property in Pega, it is sometimes necessary to set a flag or boolean value to control
the visibility or behavior of certain UI elements or functionality. Traditionally, developers would use
a data transform to set the value of the property, which involves creating and maintaining an
additional rule in the system. However, a lighter-weight approach is to use the "set value" method to
directly update the property value from the UI.
To implement this approach, simply add a hidden input field to the UI, and bind it to the desired
property. Then, use a "set value" action to update the property value based on some user action or
other event. For example, if you have a property called ".pyIsThingVisible" that controls the visibility
of a particular section, you can use a "set value" action to update this property to "true" or "false"
based on a button click or other user interaction.
Issue if not done in this way: Using a data transform to set flags can add unnecessary complexity to
the application and make it harder to maintain over time. Every additional rule in the system adds to
the overall complexity and can lead to performance issues. In addition, using a data transform to set
flags requires additional development effort, testing, and documentation.
Using the "set value" method to update property flags is a lightweight approach that can help
simplify the application and reduce development effort. By eliminating the need for a data
transform, developers can create more efficient and maintainable applications. Additionally, this
approach can improve the user experience by allowing for more responsive UI updates, as the
property value can be updated directly from the UI without needing to refresh the entire section.
When using the "set value" method to set flags, it is important to consider the impact on the overall
UI design. Adding hidden input fields to the UI can make the application more complex and difficult
to understand for other developers or designers. It is important to use clear and consistent naming
conventions for properties and input fields, and to document any custom code or logic that is added
to the UI.
Example: Suppose you have a form in your application that collects user data. You want to allow the
user to save their progress and come back to it later, so you create a boolean property called
".pyIsSaved" to keep track of whether the form has been saved. To update this property value from
the UI, you can add a hidden input field to the form, bind it to the ".pyIsSaved" property, and add a
"set value" action to a "Save" button on the form. When the user clicks the "Save" button, the "set
value" action updates the ".pyIsSaved" property to "true", and the user can come back to the form
later to finish filling it out.
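The pattern can be sketched outside Pega as follows. This is a minimal illustration only: `FormState` and `on_save_clicked` are hypothetical names, and the flag stands in for a property such as ".pyIsSaved". The point is that the UI event handler updates the flag directly instead of routing through a separate transform rule.

```python
# Hypothetical sketch of the "set value" pattern, not Pega code.
class FormState:
    def __init__(self):
        self.is_saved = False  # stands in for the .pyIsSaved property

def on_save_clicked(state: FormState) -> None:
    # Equivalent of a "set value" action bound to the Save button:
    # the flag is updated directly, with no intermediate transform rule.
    state.is_saved = True

form = FormState()
on_save_clicked(form)
print(form.is_saved)  # True
```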
When designing controls for user interfaces, it is recommended to use text or buttons instead of

icons for clarity and ease of localization. Icons are open to interpretation and may not convey the
intended meaning clearly, especially for users with accessibility issues or for non-native speakers.
Therefore, using text or buttons helps to ensure that the control's purpose is understood by all
users.
If an icon is essential to the design, then it should be accompanied by a text label or a tooltip that
provides a clear explanation of the control's function. The text should be short, concise, and easily
understandable. Additionally, the icon should be selected carefully and should be recognizable and
commonly used.
If icons are used without proper explanation or labels, they may cause confusion and lead to errors.
For instance, a magnifying glass icon is often used to represent search functionality. However, if the
icon is unclear or the user is unfamiliar with the icon, they may not understand its purpose, leading
to difficulty in finding the search functionality. This can lead to a frustrating user experience.
Using text or buttons for controls has several benefits. First, it helps users to understand the
control's purpose and use it correctly, reducing the risk of errors and increasing efficiency. Second, it
enhances accessibility, making it easier for all users, including those with disabilities and non-native
speakers, to use the application. Third, it simplifies localization, making it easier to translate the
interface into different languages.
When designing controls, it is essential to consider the context and the audience. For instance, in
some cases, such as in mobile applications or web pages with limited space, using icons may be
necessary. In such cases, icons should be selected carefully and should be recognizable and
commonly used. Additionally, the icons should be accompanied by a text label or a tooltip that
provides a clear explanation of their purpose.
Example: Suppose you are designing a web application that allows users to book flights. Instead of
using an airplane icon to represent the "Book Flight" button, you can use the text "Book Flight" or a
button with the same text. This approach ensures that users understand the button's purpose and
use it correctly. Additionally, if the application is localized, the "Book Flight" text can be translated
easily into different languages, making the application more accessible to non-native speakers.
It is recommended to implement the read-only display control outside of the section itself. If the control is implemented within the section, it can result in redundant or conflicting code across multiple sections. This can make the maintenance of the application more
difficult and error-prone. Additionally, it can lead to inconsistencies in user experience when
different sections display read-only status in different ways.
Separating the read-only display control from the section itself provides several benefits. Firstly, it
promotes reusability and maintainability of code across different sections. Secondly, it ensures
consistency in the user experience across the application. Finally, it can improve performance by
reducing redundant code.
When designing the read-only display control, it is important to consider the following points:
1. Determine the read-only or editable status of the section: The section should be designed in
such a way that it can determine its own read-only or editable status. This determination can
be based on the user role, access permissions, or other factors.
2. Implement the read-only display control outside the section: The read-only display control
should be implemented outside of the section, either through the harness in which it is
included or through other related sections. This implementation should ensure consistency
across different sections and promote code reusability and maintainability.
3. Provide visual cues for read-only status: When the section is in read-only mode, it is
important to provide visual cues to the user. This can include greying out the fields,
displaying a read-only label, or disabling buttons.
Example: Suppose there is a form section that collects personal information from the user. The read-
only display control for this section can be implemented in the harness level or in a related section
that controls the form's editability based on user roles. The read-only status of the section can be
determined based on the user's access permissions. If the user does not have permission to edit the
section, it will be displayed as read-only with visual cues such as greyed-out fields and a "read-only"
label.
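As a rough sketch of the decision logic described above (the role names, field names, and functions here are illustrative, not Pega rules), the read-only determination is computed once, outside the section, and the section only consumes the result:

```python
# Hypothetical sketch: the edit decision lives outside the section and is
# computed from the user's roles. EDIT_ROLES is an assumed role set.
EDIT_ROLES = {"Manager", "CaseWorker"}

def is_read_only(user_roles: set) -> bool:
    # Read-only unless the user holds at least one editing role
    return EDIT_ROLES.isdisjoint(user_roles)

def render_mode(fields, user_roles):
    # The section applies one consistent mode to all of its fields
    mode = "read-only" if is_read_only(user_roles) else "editable"
    return {field: mode for field in fields}

print(render_mode(["Name", "Address"], {"Auditor"}))
```

Because every section calls the same shared check, the visual cues (greyed-out fields, labels) stay consistent across the application.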
It is recommended to implement input validation on both the client and server sides. Pega provides
several ways to perform client-side validation, including visibility conditions using expressions,
required conditions in expressions, and edit validate rules in property definitions. These approaches
help to ensure that input data meets the expected format and values before it is submitted to the
server.
Server-side validation is typically implemented using validation rules or activities. These rules are
executed on the server, and they can perform more advanced checks on the input data, such as
verifying that the data conforms to business rules or ensuring that it is consistent with other data in
the system.
Failing to perform input validation on both the client and server sides can result in several issues,
including security vulnerabilities, inaccurate data, and system errors. If validation is not performed
on the client side, the user may be able to submit invalid data to the server, potentially
compromising the system's security. If validation is not performed on the server side, the system
may accept invalid data, leading to errors or incorrect processing.
Implementing input validation on both the client and server sides offers several benefits, including
improved data accuracy, enhanced system security, and better user experience. By catching errors
early on the client side, users can be provided with immediate feedback and correct their input
before submission. This can save time and effort for both the user and the system. Server-side
validation helps to ensure that only valid data is stored in the system, preventing errors and
inconsistencies that can lead to further problems down the line.
When designing input validation in Pega, it is important to consider factors such as user experience,
security, and system performance. Designers should strive to strike a balance between catching
errors early on the client side and ensuring that all necessary checks are performed on the server
side. Additionally, designers should consider how validation errors are presented to the user and
ensure that they are clear and informative.
For example, in a Pega application that collects user data, a client-side validation rule could be used
to verify that a phone number field contains only numeric characters, while a server-side validation
rule could verify that the phone number is in a valid format and is not already associated with
another user in the system. This approach can help to ensure that only accurate and consistent data
is stored in the system, improving overall system performance and reducing the risk of errors or
security vulnerabilities.
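A minimal sketch of this two-layer check follows. It is not Pega code: the function names are hypothetical, and `EXISTING_PHONES` stands in for a database lookup. The client-side check catches obvious typos immediately, while the server-side check enforces the full format and uniqueness rules.

```python
import re

# Assumed stand-in for a database lookup of registered phone numbers
EXISTING_PHONES = {"5551234567"}

def client_validate(phone: str) -> bool:
    # Client-side check: digits only (mirrors an edit-validate rule)
    return phone.isdigit()

def server_validate(phone: str):
    # Server-side checks: full format plus uniqueness against stored data.
    # Returns an error message, or None when the value is valid.
    if not re.fullmatch(r"\d{10}", phone):
        return "Phone number must be exactly 10 digits"
    if phone in EXISTING_PHONES:
        return "Phone number is already associated with another user"
    return None
```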
4.3.2. Form Validation and Data Model Lifecycle Validation in Pega
Form validation is an important aspect of user interface design in Pega. It is crucial to ensure that
users input the correct data in the right format to avoid errors and inconsistencies in the system.
Pega provides various features for form validation, including client-side validation and server-side
validation.
The suggested approach for form validation in Pega is to add validation rules to forms that require it.
This can be done using visibility conditions using expressions, required conditions in expressions, or
edit validate rules in property definitions. Validation rules should be designed to check for specific
data types and formats, such as email addresses, dates, phone numbers, and so on.
In addition, Pega also provides lifecycle validation on the Data Model tab of a case type. This feature
allows you to define the validation rules for each stage of the case's lifecycle. For example, you can
define rules to validate the data when a case is created, when it is moved to a specific stage, or
when it is resolved.
If form validation is not done correctly, it can lead to data inconsistencies, errors, and performance
issues. For example, if a user enters an incorrect email address or phone number, outbound communications may fail, or it may be difficult to locate the correct contact information for that user. In
addition, incorrect data can lead to inaccurate reporting and analysis.
The benefits of proper form validation and lifecycle validation include improved data quality,
increased efficiency, and reduced risk of errors and inconsistencies. It also helps to ensure that users
can complete tasks quickly and easily, without encountering unnecessary obstacles or delays.
Design considerations for form validation in Pega include using clear and concise error messages,
providing real-time feedback to users, and ensuring that the validation rules are aligned with the
user's expectations and the business requirements. It is also important to consider localization and
accessibility requirements when designing form validation rules.
For example, if you are designing a form for a loan application, you may want to include validation
rules that check for the correct data format for fields such as income, employment history, and
credit score. You may also want to include lifecycle validation rules to check for the completeness
and accuracy of the application at various stages of the process.
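The lifecycle idea above can be sketched outside Pega as a per-stage required-field check. The stage names and field lists here are hypothetical, not a Pega case-type schema; each stage simply declares what must be complete before a case may enter it.

```python
# Hypothetical per-stage validation for a loan application case
STAGE_REQUIRED_FIELDS = {
    "Create": ["income"],
    "Review": ["income", "employment_history", "credit_score"],
}

def validate_stage(stage: str, case_data: dict) -> list:
    # Return the fields still missing before the case may enter `stage`
    return [field for field in STAGE_REQUIRED_FIELDS[stage]
            if not case_data.get(field)]

application = {"income": 52000, "employment_history": "5 years"}
print(validate_stage("Create", application))  # []
print(validate_stage("Review", application))  # ['credit_score']
```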
In summary, proper form validation and lifecycle validation are essential for ensuring data quality
and improving the user experience in Pega.
In Pega, Preprocessing activities are executed before an action is performed on a single record.
However, when performing bulk processing on multiple records, these Preprocessing activities can
cause significant performance issues. Therefore, it is recommended to disqualify Preprocessing
activities from bulk processing by selecting the "disqualify this action from bulk-processing" option in
the action rule.
The suggested approach is to carefully evaluate whether a Preprocessing activity is genuinely needed when an action runs in bulk. If it is not, disqualify it by selecting the "disqualify this action from bulk-processing" option in the action rule.
The issue with not disqualifying Preprocessing activities from bulk processing is that it can result in
significant performance issues. The Preprocessing activity is executed for each record that is
processed, resulting in a high overhead and processing time. This can lead to a degradation in
system performance and impact overall system usability.
The benefits of disqualifying Preprocessing activities from bulk processing are improved system
performance and reduced processing time. This results in a better user experience and increased
productivity for end-users. By selecting the "disqualify this action from bulk-processing" option, Pega
automatically skips the Preprocessing activity for bulk processing, resulting in faster and more
efficient processing.
Design considerations include evaluating the necessity of Preprocessing activities for bulk processing
and determining whether it is required for each record being processed. If the Preprocessing activity
is not required, then it should be disqualified from bulk processing. This helps to ensure optimal
system performance and overall user experience.
An example of this could be a bulk processing action to update a large number of cases. If the
Preprocessing activity for this action is not disqualified from bulk processing, it would result in the
activity being executed for each case being updated, leading to significant performance issues.
However, if the Preprocessing activity is disqualified from bulk processing, the processing time is
significantly reduced, resulting in better system performance and a better user experience.
In Pega development, it is a best practice to use pre and post prefixes for activities that are called
from flow actions. This helps to make them easier to identify and differentiate from other activities
in the system.
The suggested approach is to use a pre or post prefix followed by a meaningful name for the activity.
For example, if an activity is called before a flow action, it can be named "PreValidateInputFields".
Similarly, if it is called after the flow action, it can be named "PostSaveData".
If this approach is not followed, it may be difficult to identify the purpose of the activity, especially if
it is a generic name such as "ValidateFields". This can cause confusion and make it more difficult to
maintain the system.
The benefits of using pre and post prefixes for activities in Pega are:
1. Easy Identification: The prefixes make it easier to identify the purpose of the activity,
especially if there are many activities in the system.
2. Reusability: By using meaningful names for the activities, they can be reused in other parts
of the system, making the development process more efficient.
When designing activities in Pega, there are a few considerations to keep in mind. The names of the
activities should be descriptive, but not too long. It is also important to follow a consistent naming
convention across the system.
Here is an example of how pre and post prefixes can be used in Pega development:
Suppose there is a flow action named "UpdateOrder". Before this flow action is executed, it is
necessary to validate the user input. So, a Pre activity named "PreValidateOrderInput" can be
created to perform the necessary validation. Similarly, after the flow action is executed, a Post
activity named "PostUpdateOrder" can be created to perform any necessary Postprocessing.
In summary, using pre and post prefixes for activities in Pega can improve the maintenance and
reusability of the system, while making it easier to identify the purpose of each activity.
5.1.1. Specifying Max Records in Report Definition
When creating a Report Definition in Pega, it is important to consider the maximum number of records that will be returned. If the Max Records value is not specified, the default value of 10,000
will be used. This can lead to performance issues and negatively impact the user experience.
The suggested approach is to specify a maximum limit for the Report Definition. This can be done by
setting the Max Records value in the Query tab of the Report Definition rule form. By setting a limit,
the query will only return the necessary records, resulting in improved performance.
If an appropriate limit is not specified, the query will retrieve every record that meets the criteria, up to the default cap, which can cause the query to take longer to execute. This can lead to slower page loads and potentially impact the performance of other components on the page.
Benefits of specifying a maximum limit on the Report Definition include improved query
performance, faster page loads, and improved user experience. By only retrieving the necessary
records, unnecessary processing can be avoided, resulting in better overall system performance.
When designing a Report Definition, it is important to consider the use case and the expected
number of records. If the report is only meant to display a small number of records, setting a low
Max Records value can help improve performance. However, if the report is meant to display a large
number of records, a higher Max Records value may be necessary.
Example: Let's say we have a Report Definition that retrieves a list of customer accounts from a table containing around 5,000 rows, although users rarely look beyond the first few hundred results. If the Max Records value is left blank, the query will retrieve far more rows than anyone needs, which can cause performance issues and impact the user experience. By setting the Max Records value to 1,000, the query returns only the records users will actually review, resulting in improved performance.
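In SQL terms, the Max Records setting amounts to a LIMIT clause on the generated query. A small sketch using SQLite with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [(i, f"Customer {i}") for i in range(5000)])

MAX_RECORDS = 1000  # analogous to the Max Records setting
rows = conn.execute("SELECT id, name FROM account LIMIT ?",
                    (MAX_RECORDS,)).fetchall()
print(len(rows))  # 1000 rows instead of all 5000
```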
5.1.2. Enabling Paging in Report Definition
Enabling paging in Report Definition is an important best practice in Pega development. Paging
allows a large amount of data to be displayed and navigated in smaller, more manageable chunks.
The suggested approach for enabling paging in Report Definition is to specify the maximum number
of rows to be returned per page, along with the total number of rows that match the report criteria.
This can be achieved by setting the "Max Records" field in the Report Definition form, as well as
enabling the "Enable paging" option and specifying the number of rows per page. It improves query
performance by only fetching the rows needed for the current page, reducing the amount of data
transferred between the server and client. It also improves user experience by allowing them to
navigate and find data more easily.
The issue with not enabling paging in Report Definition is that it can lead to performance issues,
particularly when working with large datasets. It can also result in poor user experience, with users
having to scroll through long pages of data to find what they are looking for.
When enabling paging, determine the appropriate number of rows per page based on the amount of data being displayed, and ensure that the user interface is designed to support paging. For
example, it is important to provide clear navigation controls for users to move between pages.
An example of enabling paging in Report Definition could be a search page for customer records. The
Report Definition could be configured to display 25 records per page, with paging enabled to allow
users to navigate between multiple pages of results. This would ensure that the search results are
displayed quickly and efficiently, while providing a seamless user experience for finding the desired
data.
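Under the hood, each page maps to a bounded query, so only one page of rows crosses the wire at a time. A sketch using SQLite with illustrative names (Pega generates the equivalent queries for you):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [(i, f"Customer {i}") for i in range(1, 101)])

def fetch_page(page: int, page_size: int = 25):
    # One page of results: LIMIT bounds the page size, OFFSET skips
    # the rows belonging to earlier pages
    offset = (page - 1) * page_size
    return conn.execute(
        "SELECT id, name FROM customer ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset)).fetchall()

print(len(fetch_page(1)))  # 25
print(fetch_page(2)[0])    # (26, 'Customer 26')
```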
5.1.3. Using Efficient Filter Conditions in Report Definition
Filter conditions in a Pega Report Definition allow users to specify criteria for selecting data from the database. While filtering can significantly improve the efficiency of data retrieval, certain design
considerations must be taken into account to ensure optimal performance.
One common issue is the use of relationship operators other than "Is Equal" in filter conditions. An operator such as "Contains" translates to a wildcard search that the database cannot satisfy from an index, and even range operators like "Greater Than" are typically far less selective than an equality match. As a result, the database may have to perform a full table scan, which can be resource-intensive and time-consuming.
To mitigate this issue, Pega best practices recommend using "Is Equal" as the default operator for
filter conditions wherever possible. If other operators are necessary, it is important to carefully
evaluate their impact on performance and take steps to optimize the query, such as adding
additional indexes or restructuring the database schema.
Another approach to optimizing performance is to use paging in Report Definitions. Paging allows
the report to retrieve and display data in smaller chunks, reducing the amount of data that needs to
be processed at once. This can result in faster page load times and a more responsive user
experience.
In addition to these technical considerations, it is important to consider the user experience when
designing filter conditions. Providing clear and intuitive filter options can help users quickly find the
data they need and reduce the risk of performance issues caused by overly complex queries.
For example, a report that displays customer orders may include filters for order date, order status,
and order total. To optimize performance, the report could use "Is Equal" operators for the date and
status filters and provide a dropdown list of predefined values for the order total filter. Paging could
also be enabled to display results in smaller batches.
In summary, when designing filter conditions in a Pega Report Definition, it is important to consider
the impact on performance and the user experience. Using "Is Equal" as the default operator,
evaluating the use of other operators, and enabling paging can all help to optimize performance and
create a more efficient and user-friendly report.
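The difference between an equality filter and a "Contains"-style filter is visible in the database's query plan. A sketch using SQLite as a stand-in (table and index names are made up, and other databases report their plans differently):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

def plan(sql: str) -> str:
    # The last column of each EXPLAIN QUERY PLAN row describes the access path
    return " ".join(row[-1]
                    for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# The "Is Equal" filter can be answered from the index...
eq_plan = plan("SELECT id FROM orders WHERE status = 'Open'")
# ...while a "Contains"-style wildcard forces a full table scan
contains_plan = plan("SELECT id FROM orders WHERE status LIKE '%Open%'")
print(eq_plan)        # mentions the index
print(contains_plan)  # reports a scan
```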
5.1.4. Optimizing Properties for Efficient Querying
Pega applications often deal with large volumes of data, and as a result, the performance of queries becomes a critical factor in ensuring that users get a responsive experience. One common cause of
poor query performance is the use of properties that have not been optimized.
In Pega, properties can be optimized for efficient querying by setting them as exposed columns in
the database table. When a property is optimized, it is stored in the database table as a separate
column, allowing queries to use database indexes to quickly filter, sort, or group by these properties.
If properties are not optimized, and filtering or grouping is done on them, it can result in poor
performance. In such cases, Pega has to retrieve all the data from the table, apply the filter or
grouping condition in memory, and then return the results. This can be a slow and resource-
intensive process, especially when dealing with large data sets.
The suggested approach is to optimize the properties that are commonly used for filtering or sorting
in queries. This involves marking the property as an exposed column in the corresponding database
table. Once the property is optimized, it can be used in queries more efficiently, resulting in
improved query performance.
It's important to note that optimizing properties also has some design considerations. Optimizing
too many properties can increase the size of the database table and result in slower insert and
update operations. Therefore, it's important to optimize only those properties that are frequently
used in queries.
For example, let's say we have a Pega application that manages customer orders. In our application,
we frequently filter orders by the customer's name. In this case, it would be a good practice to
optimize the "Customer Name" property as an exposed column in the orders table. This would allow
us to filter orders by customer name more efficiently and improve the overall performance of our
queries.
In conclusion, optimizing properties for efficient querying is an essential practice in Pega application
development. By optimizing commonly used properties as exposed columns in the database table,
queries can be performed more efficiently, resulting in better query performance and a better user
experience.
5.1.5. Avoid using Ignore Case Configuration in Filter Condition
When building an application in Pega, it is important to consider the performance impact of various
design decisions. One common issue is the use of case-insensitive search operations in filter
conditions, which can have a negative impact on performance.
While it may be tempting to use the "Ignore case" configuration option when filtering data in a
report definition or a data page, this can result in slower query performance. This is because the
query will need to scan the entire table or index to find matches, rather than being able to use an
index to quickly locate the desired rows.
Instead, it is recommended to use functional indexes to optimize the search performance of case-
insensitive searches. This involves creating a new index that includes the uppercase version of the
column being searched, using the UPPER() function. This will allow the database to use the index to
quickly locate the matching rows, even with case-insensitive searches.
Design considerations when creating functional indexes include selecting the appropriate columns to
include in the index, ensuring that the index is compatible with the database and the query, and
determining the appropriate index type (e.g., bitmap, B-tree, etc.).
For example, suppose we have a report definition that needs to search for customers whose last name begins with the letters "smi", regardless of case. Instead of using the "Ignore case" configuration option, we can create a functional index on the uppercase form of the last name column, using the UPPER() function. In the report definition, the filter then applies UPPER() to both the column and the search term, allowing the database to use the functional index to quickly locate the matching rows.
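A sketch of this pattern, using SQLite's support for expression indexes (table, column, and index names are illustrative; whether the optimizer actually uses the index for a given predicate depends on the database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, last_name TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [(1, "Smith"), (2, "SMILEY"), (3, "Jones")])

# Functional index on the uppercased column; in Oracle the equivalent is
#   CREATE INDEX idx_customer_uname ON customer (UPPER(last_name));
conn.execute("CREATE INDEX idx_customer_uname ON customer (UPPER(last_name))")

# The filter uppercases both sides so it matches the indexed expression
rows = conn.execute(
    "SELECT last_name FROM customer WHERE UPPER(last_name) LIKE UPPER(?)",
    ("smi%",)).fetchall()
print(rows)  # matches both 'Smith' and 'SMILEY'
```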
By following this approach, we can improve the performance of case-insensitive searches in our
Pega application.
5.2. Customization
5.2.1. Avoid using custom getContent activity for report data retrieval in Pega
In Pega, it is not recommended to use a custom getContent activity to retrieve report data as it can
have negative impacts on the performance and maintainability of the application. Instead, a data
page can be used to retrieve report data.
When retrieving report data, it is recommended to use a data page rather than a custom getContent
activity. This approach ensures that report features remain available and usable and minimizes
maintenance and upgrade issues. Using data pages for report data retrieval provides several
benefits, including improved performance, better maintainability, and reduced upgrade issues. Data
pages can be easily updated or modified without affecting other parts of the application.
When designing data pages for report data retrieval, it is important to consider the following:
1. Query optimization: Optimize the data page queries to ensure optimal performance.
2. Caching: Cache the data page to minimize database hits and improve performance.
3. Data modeling: Ensure that the data page is designed to retrieve the required data and is
modeled correctly.
Example: Consider a scenario where a custom getContent activity is used to retrieve report data. In
this case, the report features may become unavailable or unusable, and any upgrades or
modifications to the report will require changes to the getContent activity, resulting in maintenance
issues. On the other hand, if a data page is used, the report features will remain available, and the
data page can be easily modified without affecting the other parts of the application.
In summary, it is important to avoid using a custom getContent activity for report data retrieval in
Pega applications. Instead, use data pages to ensure that report features remain available and
maintainability is not affected.
5.2.2. Avoid using custom HTML control for formatting report columns
When developing reports in Pega, it is essential to use the standard control set for column
formatting rather than creating custom HTML controls. Using custom HTML controls can result in
poor performance when displaying query results. Custom controls can also make the report harder
to maintain and update.
The suggested approach is to use the standard controls provided by Pega for formatting columns.
These controls are optimized for performance and will result in faster report rendering. Pega
provides a wide range of formatting options to meet the needs of different types of data.
If custom HTML controls are used for formatting report columns, it can result in a slower display of
the query results. This can negatively impact the user experience, leading to frustration and
dissatisfaction. Custom controls can also make it harder to maintain and update reports, as they are
not part of the standard Pega control set.
The benefits of using the standard Pega control set for formatting report columns include faster
report rendering, better user experience, and easier maintenance and updates. Standard controls
are optimized for performance and have been extensively tested, ensuring that they are reliable and
stable.
When designing reports, it is essential to consider the formatting requirements of the data being
displayed. Pega provides a wide range of standard controls, such as grids, trees, and charts, which
can be used to present data in different ways. It is important to choose the right control for the type
of data being displayed to ensure optimal performance and usability.
Example:
Suppose you are developing a report that displays a list of customer orders. The report includes
several columns, such as order ID, order date, customer name, and total amount. To format the
columns, you decide to create custom HTML controls using JavaScript and CSS.
Instead of creating custom HTML controls, you should use the standard controls provided by Pega.
For example, you could use a grid control to display the customer orders. The grid control allows you
to format the columns using a wide range of options, such as font size, colour, and alignment. By
using the grid control, you can ensure that the report is optimized for performance and is easy to
maintain and update.
5.2.3. Fetching Only the Required Columns
In order to improve performance and reduce unnecessary database load, it is recommended to only retrieve the columns that are actually needed for a given task. This can be especially important when
working with large data sets.
The suggested approach is to carefully consider which columns are needed before making a
database call, and to use a specific query that only retrieves those columns. This can be achieved by
specifying only the required columns in the report definition. Additionally, it is important to ensure that the results are optimized for performance, using appropriate indexes and filters where
possible.
If all columns are retrieved from the database, it can cause unnecessary network traffic and
processing overhead, which can result in slower performance, increased memory usage, and longer
response times. It can also put additional load on the database, potentially leading to scalability
issues.
Benefits of fetching only the required columns include faster performance, reduced network traffic
and resource usage, and improved scalability. Additionally, it can make the code easier to maintain
and modify, as it is more clear which columns are being used and for what purpose.
Design considerations when fetching only the required columns include carefully evaluating which
columns are needed for a given task, and considering the impact of future changes on the data
model. It is also important to ensure that the columns used to retrieve the data are properly optimized, and that appropriate indexes and filters are used where necessary.
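A small illustration with made-up table and column names: naming the columns keeps wide or unused fields, such as free-text notes, out of the result set entirely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER, name TEXT, notes TEXT)")
conn.execute("INSERT INTO account VALUES (1, 'Acme', 'a very long note ...')")

# Name the columns instead of SELECT *: the notes column never leaves
# the database, reducing network traffic and memory usage
rows = conn.execute("SELECT id, name FROM account").fetchall()
print(rows)  # [(1, 'Acme')]
```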
5.2.4. Avoiding Unnecessary JOINs in Database Queries
Efficient database query execution is critical to the performance of any application. It is essential to ensure that the queries fetch only the required data from the database and avoid any unnecessary
operations. Pega provides several best practices to optimize database queries that can improve the
application's overall performance.
One of the best practices for optimizing database queries is to avoid unnecessary JOINS. A JOIN
operation is used to combine rows from two or more tables based on a related column between
them. However, a JOIN can be expensive and slow down the query execution if it is not necessary.
To avoid unnecessary JOINS, it is essential to understand the data model and the relationships
between tables. Ensure that the query only includes the columns that are required for the report,
and there are no additional columns from other tables that are not necessary. Consider using
subqueries to avoid unnecessary JOINS and improve the query's performance.
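The subquery alternative can be sketched the same way. This is generic SQL via sqlite3, not a Pega report definition, and the tables and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id TEXT, customer_id TEXT, total REAL);
    CREATE TABLE customers (customer_id TEXT, region TEXT);
    INSERT INTO orders VALUES ('O-1', 'CU-1', 100.0), ('O-2', 'CU-2', 50.0);
    INSERT INTO customers VALUES ('CU-1', 'EMEA'), ('CU-2', 'APAC');
""")

# A JOIN is justified when the result needs columns from both tables.
joined = conn.execute(
    "SELECT o.order_id, c.region FROM orders o "
    "JOIN customers c ON o.customer_id = c.customer_id"
).fetchall()

# When the second table is only used as a filter, a subquery expresses
# that intent without pulling any of its columns into the result.
emea_orders = conn.execute(
    "SELECT order_id FROM orders WHERE customer_id IN "
    "(SELECT customer_id FROM customers WHERE region = 'EMEA')"
).fetchall()

print(emea_orders)  # [('O-1',)]
```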
Another best practice is to fetch only the required columns and not all columns from the database.
When a query fetches all columns from a table, it can consume more resources than needed and
cause unnecessary overhead. This can impact the performance of the query and the application.
linkedin.com/in/christin-varughese-b968152b/
To fetch only the required columns, it is necessary to understand the report's requirements and
identify the columns required for the report. This can be achieved by analysing the report's purpose,
data source, and data model. By fetching only the required columns, the query execution time can
be reduced, and the report's performance can be improved.
Optimizing database queries is a critical aspect of ensuring the performance of an application. Pega
provides several best practices that can help developers optimize their queries and improve the
application's overall performance. By avoiding unnecessary JOINS, fetching only the required
columns, and following other best practices, developers can create efficient queries that provide fast
and accurate results.
5.2.5. Managing Report Access: Best Practices for Adding Privileges and Restrictions
Reports are an essential part of any business application, as they provide meaningful insights to
users. However, not all users require access to all the reports, and it is important to control which
user groups can execute a report. Inappropriate access to reports can lead to security breaches and
misuse of data. Hence, it is necessary to add appropriate privileges and restrictions to reports.
Pega provides a built-in mechanism to manage access to reports. By using Access Roles, you can
control which users or user groups can access a report. An Access Role is a named collection of
permissions that users receive through their access group. You can restrict a report to specific
Access Roles on the report definition rule form.
Adding appropriate privileges and restrictions to reports ensures that only the authorized users can
access them. This helps in maintaining data security and prevents unauthorized access. By
controlling access to reports, you can also prevent performance issues and optimize system
resources.
When designing reports, you should consider the following design considerations:
1. Define access roles based on user roles: You should define access roles based on user roles
and ensure that each user has access only to the reports that are necessary for their role.
2. Regularly review access roles: Regularly review the access roles to ensure that only
authorized users have access to the reports.
Example: Consider a scenario where an application has several reports related to customer data. The
application has three user groups: Managers, Analysts, and Operators. In this scenario, you should
define access roles such as "CustomerDataManagers", "CustomerDataAnalysts", and
"CustomerDataOperators" and add the appropriate access roles to the respective reports. By doing
so, you can ensure that only the authorized user groups can access the reports.
Managing report access is an important aspect of securing data in any application. By following the
suggested approach and design considerations mentioned above, you can ensure that only the
authorized users have access to the reports.
6. Pega Best Practice for Property Rule
6.1. Why You Should Avoid Defining Max Length
One common mistake that I have seen is defining a maximum length for a property. While it may
seem like a good practice, it can actually cause more harm than good.
The Issue with Defining Max Length for Properties
Defining a maximum length for a property limits the amount of data that can be stored in that
property. If the actual data exceeds the maximum length, Pega throws an exception, and the user
experience is impacted. The exception message can be confusing for users, and it can lead to data
loss if the user does not take appropriate action. Furthermore, a max length constrains the
underlying database column, which must be sized for the maximum value even when typical values
are much shorter.
The suggested approach is to define properties without a maximum length. This means that Pega
will allocate only the required amount of space for the property value, resulting in better
performance and reduced storage requirements. In addition, by not defining max length, you also
allow for future growth and changes in the data without having to redefine the property.
When defining properties without a maximum length, it is important to consider the potential
impact on database performance and storage requirements. It is also important to ensure that the
property value is properly validated to prevent data loss or corruption. For example, if a property is
used to store a date, it should be validated to ensure that the input is in a valid date format.
Example: Let's consider a property that is used to store a customer name. Instead of specifying a
maximum length, we can define the property with the Max Length field left blank and with a
required validation.
By not defining a maximum length, we allow for future growth in customer names without having to
redefine the property. Furthermore, by setting the validation to required, we ensure that the
property always has a value and that the user cannot save an empty value.
Defining max length for properties can cause performance and usability issues, and it should be
avoided. Instead, properties should be defined without a maximum length to allow for future growth
and reduce storage requirements. However, design considerations should be taken to ensure proper
validation and performance.
Naming properties in Pega may seem like a trivial task, but it is actually an important aspect of
application development. In this section, we will discuss the importance of following database
naming standards when naming properties, the issues that can arise if this is not done properly, and
the suggested approach to avoid these issues.
When property names do not follow database naming standards, it can cause issues when working
with the database. For example, if a property name contains spaces or special characters, it may not
be recognized by the database, and queries or reports may fail. This can lead to delays in
troubleshooting and debugging, as well as reduced performance.
The suggested approach is to follow the database naming standards when naming properties. This
means using only alphanumeric characters and underscores in property names, and avoiding spaces
or special characters. In addition, property names should be meaningful and descriptive, and should
not be abbreviated or overly complex.
Following naming standards for properties can improve performance, simplify database queries and
reporting, and reduce the risk of errors or issues in the application. Meaningful and descriptive
property names can also make it easier for developers to understand the purpose and functionality
of the property.
When naming properties, it is important to consider the potential impact on database performance
and the readability of the application. Property names should be concise and descriptive, and should
accurately reflect the data that is being stored. It is also important to ensure that property names
are consistent throughout the application, to avoid confusion and errors.
Example: Let's consider a property that is used to store a customer's phone number. Instead of
using a complex or abbreviated name, we can give it a simple, descriptive name (for example, a
name such as CustomerPhoneNumber, using only alphanumeric characters).
By following the database naming standards and using a meaningful and descriptive name, we make
it easier for other developers to understand the purpose of the property. Furthermore, by validating
the property (for example, marking it as required and checking the format), we ensure that the data
is accurate and consistent.
Naming properties in Pega may seem like a small detail, but it can have a significant impact on the
performance and usability of the application. By following database naming standards and using
meaningful and descriptive names, we can improve the readability and maintainability of the
application. However, in some cases, it may be necessary to use a non-standard name, in which case
it is important to document the reasons and ensure consistency throughout the application.
One issue with using Local List as Table type for Property Rule is that it can lead to validation errors if
a value is set to the property that is not present in the Local List as Table. This can happen if a
developer manually sets the value of the property or if the Local List as Table is modified after the
property is set.
In cases where the list of values is fixed and does not change frequently, using Prompt List is a
better approach, as it guides users toward the expected values without rejecting other values at
save time, which helps avoid the validation errors described above.
However, if the list of values can change frequently or if there are complex data relationships
between the property and the values, using Prompt List may not be the best approach. In these
cases, using a separate class and Page List property to reference instances of the class would be a
better approach.
To avoid using Local List as Table type for Property Rule, the suggested approach is to create a
separate class to store the values and use a Page List property to reference instances of the class.
This approach allows for better maintainability and scalability, as well as improved performance and
data integrity.
For example, if you want to store a list of skills for an employee, you can create a Skill class that
contains properties such as Skill Name and Skill Level. You can then create a Page List property called
Skills that references instances of the Skill class. This approach provides better data integrity and
allows for more flexibility in managing the values.
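The Skill and Page List structure can be sketched in plain Python. This is an analogy rather than Pega syntax; the class and property names follow the example above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Skill:
    # Mirrors the Skill Name and Skill Level properties from the example.
    skill_name: str
    skill_level: str

@dataclass
class Employee:
    name: str
    # Analogous to a Page List property referencing Skill instances.
    skills: List[Skill] = field(default_factory=list)

emp = Employee("Jordan")  # hypothetical employee
emp.skills.append(Skill("Java", "Expert"))
emp.skills.append(Skill("SQL", "Intermediate"))

# Unlike a hardcoded local list, adding a new value needs no rule change.
emp.skills.append(Skill("Pega", "Beginner"))
print([s.skill_name for s in emp.skills])  # ['Java', 'SQL', 'Pega']
```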
In conclusion, using Prompt List or a separate class with a Page List property is a better approach
than using Local List as Table type for Property Rule. The choice between Prompt List and a separate
class depends on the specific requirements of the use case. By following these suggested
approaches, you can ensure better maintainability, scalability, performance, and data integrity in
your Pega applications.
When designing persistent objects in Pega applications, it is important to choose the appropriate
property type for date and time values. While Pega offers several options for representing date and
time data, it is recommended to use the DateTime property type for properties that are part of
persistent objects in your application. In this section, we will explore the reasons for this
recommendation and the design considerations to keep in mind.
The DateTime property type is recommended for properties that are part of persistent objects in
your application. This is because it stores both date and time information in a single property, which
can help avoid issues related to time zone conversions and daylight saving time changes.
Additionally, DateTime properties can be easily sorted and searched in reports and lists, making
them a more versatile option.
Using the Date property type in persistent objects is not recommended, even if your application is
used in only a single time zone. This is because the Date property type does not store time
information, which can lead to issues when sorting or searching properties that require time
information. Similarly, using the TimeOfDay property type in persistent objects can also cause issues,
as it does not store date information.
The benefits of the DateTime property type include:
1. Stores both date and time information in a single property, which can help avoid issues
related to time zone conversions and daylight saving time changes.
2. Can be easily sorted and searched in reports and lists.
3. Can be used in calculations involving dates and times, such as calculating the difference
between two dates.
When using DateTime properties, it is important to keep the following considerations in mind:
1. When using DateTime properties in forms, ensure that the time zone is clearly displayed to
the user to avoid confusion.
2. When using DateTime properties in calculations, be sure to specify the time zone to avoid
inconsistencies in calculations.
Consider a case management system for a healthcare organization. The system needs to track the
date and time when a patient's medical test results were received. In this scenario, it is
recommended to use a DateTime property type to store this information. This will allow for easy
sorting and searching of test results, as well as accurate calculations involving time information.
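The advantage of a combined date-and-time value can be illustrated in plain Python. The timestamps are invented, and Pega's DateTime property type is only analogous to, not implemented by, Python's datetime:

```python
from datetime import datetime, timezone

# Two hypothetical events on a medical-test case, each stored with both
# date and time in a single timezone-aware value.
received = datetime(2023, 3, 1, 14, 30, tzinfo=timezone.utc)
reviewed = datetime(2023, 3, 2, 9, 0, tzinfo=timezone.utc)

# Durations are exact; a date-only type could report "1 day" at best.
elapsed = reviewed - received
print(elapsed)  # 18:30:00

# Sorting works across the midnight boundary because the time is part
# of the stored value.
ordered = sorted([reviewed, received])
print(ordered[0] == received)  # True
```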
In cases where time information is not necessary, such as tracking only the date a task was
completed, it is acceptable to use the Date property type in persistent objects. However, it is
important to consider the potential need for time information in the future and design the object
accordingly. Similarly, if a separate property stores the time zone value for an object, using the Date
or TimeOfDay property types may be appropriate.
In conclusion, selecting the appropriate property type for date and time values in Pega applications
is important for accurate data representation and efficient application design. Using the DateTime
property type for persistent objects is recommended, as it provides the necessary flexibility and
accuracy for a variety of use cases.
When building Pega applications, it is recommended to avoid using explicit URLs wherever possible.
An explicit URL is a hardcoded URL that is directly specified in the application code or configuration.
Instead, it is recommended to provide the URL using application settings.
The suggested approach is to define the URL as an application setting. This allows the URL to be
easily changed without modifying the code or configuration. It also makes it easier to manage the
URLs for different environments such as development, testing, and production. Pega provides a
built-in mechanism to define application settings, which can be accessed from the Designer Studio.
The main issue with using explicit URLs is that they can cause maintenance problems in the long run.
If the URL changes, every place where the URL is hardcoded will need to be updated. This can be a
tedious and error-prone process, especially in larger applications. Additionally, explicit URLs can lead
to security issues if they are exposed or compromised.
Using application settings for URLs provides several benefits. First, it makes the application more
flexible and easier to maintain. Secondly, it enhances security by keeping sensitive URLs out of the
application code. Finally, it allows for better manageability of URLs across different environments.
Design considerations when using application settings for URLs include deciding which URLs should
be defined as application settings and ensuring that the settings are properly secured. When
defining application settings for URLs, it is important to consider the access control for each setting.
Sensitive URLs should only be accessible to authorized users.
For example, instead of hardcoding the URL for a web service call, the URL can be defined as an
application setting. This allows the URL to be changed without modifying the code. If the URL
changes frequently, using application settings can save time and effort in updating the URL across
the application.
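The pattern can be sketched in plain Python, using an environment variable in place of a Pega application setting. The setting name and URL below are hypothetical:

```python
import os

# Hypothetical default; in Pega this value would live in an application
# setting rule, not in code.
DEFAULT_CLAIMS_URL = "https://dev.example.com/claims-service"

def claims_service_url() -> str:
    # Each environment (development, test, production) overrides the
    # setting without touching code, which is the point of
    # externalizing the URL.
    return os.environ.get("CLAIMS_SERVICE_URL", DEFAULT_CLAIMS_URL)

print(claims_service_url())
```

The calling code never embeds the URL, so a change to the endpoint requires updating one setting rather than every reference.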
It may be acceptable to use explicit URLs in certain scenarios such as when the URL is a constant
value that is unlikely to change or when the application is a small prototype that will not be
maintained in the long term. However, it is generally best to avoid using explicit URLs and use
application settings instead.
In Pega, Declare Expressions are used to calculate values based on certain conditions and trigger
updates in the UI or other rules. However, there is a specific behavior to consider when using
named page references in Declare Expressions: an expression that reads its input directly from a
property on a named page may not recalculate when that property changes.
To avoid this issue, it is recommended to use the corresponding property of the named page
reference instead of the named page reference itself in the Declare Expression. This will ensure that
the expression is recalculated whenever the corresponding property is updated.
For example, consider a Declare Expression that calculates the total price of a product based on its
quantity and price per unit. The quantity is stored in a property called "Quantity" on a named page
called "ProductDetails". If the Declare Expression references "ProductDetails.Quantity" directly as
an input, it may not recalculate when the "Quantity" property is updated. Instead, define the
expression in the context of the ProductDetails page so that it can reference "Quantity" as a
relative input.
Using the corresponding property instead of the named page reference also has other benefits. It
reduces the complexity of the expression and improves performance by reducing the number of
page references. Additionally, it makes the expression more modular and easier to maintain.
Design considerations should also be kept in mind when using named page references in Declare
Expressions. It is important to consider the scope of the named page reference and ensure that it is
available throughout the Declare Expression. If the named page reference is not available, it may
result in errors or unexpected behavior.
In some cases, it may be appropriate to use named page references directly in Declare Expressions.
For example, if the named page reference is read-only and will not be updated during the lifecycle of
the expression, it may be acceptable to use it as an input or dependency. However, this should be
evaluated on a case-by-case basis and with consideration of the potential risks and benefits.
When using named page references in Declare Expressions, it is important to use the corresponding
property instead of the named page reference itself as an input or dependency to ensure proper
recalculation. This also improves the expression's modularity, performance, and maintainability.
Design considerations should also be kept in mind to ensure that the named page reference is
available throughout the expression. In some cases, it may be appropriate to use named page
references directly, but this should be evaluated with consideration of the potential risks and
benefits.
7.3. Performance Optimization of Decision Tables
Decision tables are a powerful tool in Pega that allow you to define rules for decision-making based
on a set of conditions. However, if not optimized, decision tables can negatively impact
performance. In this context, it is recommended to limit the number of rows in decision tables to
ensure fast processing.
To optimize the performance of decision tables, it is recommended to limit the number of rows to
no more than 300-500. If your decision table has more than 500 rows, consider splitting it into
smaller decision tables or using other rule types like decision trees, decision maps, or decision trees
with conditions to improve performance.
If decision tables have more than 300-500 rows, it can lead to slower processing times and can
impact the performance of your application. It can result in longer wait times for end-users, and can
potentially lead to system errors.
By following this best practice, you can ensure that decision tables are optimized for better
performance, and users will not face any delays in decision-making. Additionally, it can reduce the
risk of errors, improve the overall user experience, and help to ensure that the application is working
efficiently.
When designing decision tables, it is important to keep in mind the number of rows that will be
included. If the decision table has a large number of rows, consider splitting it into smaller decision
tables, or using other rule types like decision trees, decision maps, or decision trees with conditions.
This will not only help to optimize performance but will also make it easier to maintain and modify
your rules.
Example: Let's consider a scenario where you need to create a decision table that determines the
credit limit for a customer based on their income, credit score, and loan history. If you have a large
number of customers and want to ensure quick processing of the decision table, limit the number of
rows to no more than 300-500. Instead of creating a single decision table, split it into smaller
decision tables based on customer profiles like income level, credit score, or loan history.
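The splitting idea can be sketched in plain Python, with each sub-table as a small first-match row list. The income bands, scores, and limits are invented for illustration, and this is an analogy to, not an implementation of, Pega decision tables:

```python
# Each sub-table stays small; rows are (minimum credit score, credit
# limit) and are evaluated top-down, like a decision table's first
# matching row.
HIGH_INCOME_ROWS = [(750, 50_000), (650, 30_000), (0, 10_000)]
STANDARD_ROWS = [(750, 20_000), (650, 10_000), (0, 2_000)]

def credit_limit(income: int, credit_score: int) -> int:
    # Step 1: route to the right sub-table by customer profile, which
    # keeps every individual table well under the row limit.
    rows = HIGH_INCOME_ROWS if income >= 100_000 else STANDARD_ROWS
    # Step 2: evaluate the (now much smaller) table.
    for min_score, limit in rows:
        if credit_score >= min_score:
            return limit
    return 0  # fallback, like a decision table's "otherwise" row

print(credit_limit(120_000, 700))  # 30000
print(credit_limit(40_000, 800))   # 20000
```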
While it is generally recommended to limit the number of rows in decision tables, there may be
scenarios where a larger number of rows is necessary. In such cases, it is important to analyze the
performance impact and take appropriate measures like optimizing the system configuration or
increasing hardware resources. It is also important to consider the trade-off between performance
and functionality and make a decision based on the specific requirements of the application.
Pega provides multiple ways to execute background processes, such as agents, queue processors,
and job schedulers. While agents have been a popular option for background processing, Pega
recommends using queue processors or job schedulers instead.
The suggested approach is to use queue processors or job schedulers for background processing.
Queue processors are used to process items from a queue, while job schedulers are used to
schedule and execute jobs at a specified time. By using queue processors or job schedulers, we can
achieve better performance, resiliency, fault tolerance, and flexibility in our applications. The issue
with using agents for background processing is that they can be resource-intensive and may not
scale well in high-volume environments.
Benefits of using queue processors or job schedulers include better performance, as they are
designed to handle high volumes of work items efficiently. They also provide better resiliency and
fault tolerance, as they can recover from errors and continue processing work items without
significant impact on the application. Additionally, using job schedulers can provide more flexibility
in scheduling and executing background processes, as they can be configured to run at specific times
or intervals.
Design considerations when using queue processors or job schedulers include properly configuring
the queue size, processing mode, and error handling. It is also important to consider the priority and
grouping of work items, as well as the number of worker threads needed to handle the workload
efficiently.
An example of using queue processors or job schedulers is for processing large volumes of customer
data in a financial application. Instead of using agents to process the data, we can use a queue
processor to handle the workload efficiently and reliably.
Agents should be avoided in most scenarios and retained only to support legacy agent
configurations. For all new background processing, queue processors or job schedulers are
a better option for achieving better performance, resiliency, fault tolerance, and flexibility in our
applications.
Suggested Approach: It is important to implement custom security roles in Pega to maintain security
requirements. End-user access groups should not grant the PegaRULES:SysAdm4 security role to
prevent unauthorized access to sensitive areas of the application. Instead, roles should default to
having the least privileges required for end-user access groups.
To implement custom security roles, start by identifying the different types of users and their
corresponding access levels. For example, there may be different roles for managers, employees,
and customers, each with their own set of permissions. Once the roles have been defined, create
access groups that assign those roles to users.
When creating access groups, consider the principle of least privilege. This means that users should
only be granted the minimum access required to perform their job functions. For example, an
employee in a call center may only need access to view customer information and update their
contact details, but not to create new customer records or modify existing ones.
When designing custom security roles and access groups, it is important to consider the following:
1. Identify the different types of users and their corresponding access levels.
2. Regularly review and update security roles and access groups as the application evolves.
By implementing custom security roles and following the principle of least privilege, you can help
prevent unauthorized access to sensitive areas of the application and ensure that users only have
access to the functionality that they require. This can help improve the overall security of the
application and reduce the risk of data breaches and other security incidents.
Example: Suppose you are designing a banking application that allows customers to view their
account details, transfer funds, and pay bills. In this case, you might define the following custom
security roles:
1. Customer - can view account details, transfer funds, and pay bills.
2. Manager - can view account details and transfer funds, but cannot pay bills.
3. Administrator - can view account details, transfer funds, and pay bills, as well as manage
user accounts and perform other administrative tasks.
You would then create access groups that assign these roles to users based on their job functions
and responsibilities. For example, a customer service representative might be assigned the
"Customer" role, while a financial manager might be assigned the "Manager" role.
In Pega applications, workbaskets are used to hold assignments that can be processed by users with
the appropriate roles. It is important to ensure that only authorized users can access and process
assignments from a workbasket. This can be achieved through role-based access control (RBAC),
which allows you to define which roles are authorized to process assignments from a specific
workbasket.
To implement RBAC for workbasket assignments, you need to add the appropriate roles to the
workbasket definition. This can be done by opening the workbasket rule in the Pega Designer Studio
and navigating to the Security tab. Here, you can add or remove roles as needed.
It is important to ensure that only the required roles are assigned to each workbasket. Giving too
many roles access to a workbasket can lead to confusion and can make it difficult to manage access
permissions effectively. It is also important to ensure that roles are assigned consistently across all
workbaskets in the application.
If roles are not assigned to workbaskets, it is possible for unauthorized users to access and process
assignments, potentially compromising the security of the application. Additionally, without proper
RBAC controls in place, it can be difficult to track who has accessed or modified assignments, making
it harder to troubleshoot issues or enforce compliance requirements.
Implementing RBAC for workbasket assignments helps ensure that only authorized users can access
and process assignments, improving the security of the application. It also helps to ensure that roles
are assigned consistently across all workbaskets, making it easier to manage access permissions and
troubleshoot issues as they arise.
When defining roles for workbaskets, it is important to consider the specific needs of your
application and your organization. For example, you may need to define different roles for different
types of assignments or different workbaskets. You may also need to consider the impact of changes
to roles on other parts of the application.
Example: Consider an insurance claims processing application. In this application, there are
workbaskets for different types of claims, such as auto claims, property claims, and medical claims.
To ensure that only authorized users can process assignments from these workbaskets, you might
define specific roles for each type of claim. For example, the role "Auto Claims Processor" might be
assigned to the workbasket for auto claims, while the role "Medical Claims Processor" might be
assigned to the workbasket for medical claims.
7.6.1. Importance of Usage and Description Fields
The suggested approach is to provide clear and concise information in both Usage and Description
fields. In the Usage field, provide instructions on how to use the rule and any special considerations
that should be taken into account at design or run-time. In the Description field, provide an overview
of the rule and its purpose, including any additional context that may be helpful for users.
If these fields are not properly filled, it can lead to confusion and misinterpretation of the rule's
usage, which can ultimately result in errors or inefficiencies in the system. For example, if the Usage
field is left empty, it may be difficult for developers to understand how to use the rule, leading to
mistakes or inconsistencies in implementation. If the Description field is left empty, it may be
challenging to understand the purpose of the rule, making it harder to maintain or update in the
future.
The benefits of filling in these fields are numerous. Properly documenting rules using the Usage and
Description fields can improve efficiency, facilitate collaboration, and enhance the overall
maintainability of the system. When new developers join a project, the information in these fields
can help them quickly understand the purpose and intended use of a particular rule, leading to
faster onboarding and reduced errors. Additionally, well-documented rules can be more easily
updated and maintained over time, leading to a more sustainable and efficient system.
When designing rules in Pega, it is important to keep in mind the need to fill in the Usage and
Description fields. When developing a new rule, take the time to think through how the rule will be
used and provide clear instructions in the Usage field. Similarly, consider the purpose of the rule and
provide an overview in the Description field. If a rule is being modified, update the Usage and
Description fields as necessary to reflect the changes.
In some cases, it may be appropriate to leave the Usage or Description fields empty. For example, if
a rule is very simple and its usage is obvious, it may not be necessary to provide instructions in the
Usage field. Similarly, if a rule is being used for a very specific and well-defined purpose, it may not
be necessary to provide extensive context in the Description field. However, in general, it is a best
practice to provide clear and complete information in these fields to ensure proper usage and
maintainability of rules in the Pega platform.
7.6.2. Label Usage in Pega Rule Names
In Pega, label usage in rule names is a best practice that should be followed consistently throughout
the application development process. This approach helps to ensure that the labels used in the user
interface at runtime match the corresponding rules and make sense to end-users.
The suggested approach is to include a readable label in the rule name for every flow action,
property, or flow that is used in the UI at runtime. For example, a flow action used in a screen flow
can have the label "Submit Claim Request" added to its rule name as "SubmitClaimRequest".
Similarly, a property used in a section can have the label "First Name" added to its rule name as
"FirstName".
If this approach is not followed, it can lead to confusion for end-users who may see different labels
in the UI than what is shown in the rule names. This can lead to a poor user experience, and may
cause users to incorrectly interact with the system or have difficulty finding the right information.
The benefits of this approach include:
1. Improved user experience: The use of consistent labels throughout the application helps
end-users understand the purpose of the rules and interact with the system more easily.
2. Easier maintenance: The use of labels in rule names makes it easier for developers to
identify the rules that are used in the UI at runtime and make changes accordingly.
3. Better communication: Label usage in rule names helps facilitate better communication
between developers and stakeholders, as everyone is speaking the same language.
When using labels in rule names, there are a few design considerations to keep in mind. First, labels
should be descriptive and intuitive, and should accurately reflect the purpose of the rule. Second, it's
important to use a consistent naming convention across the application, to avoid confusion or
inconsistency.
7.6.3. Avoid blocking or modifying Pega-provided rules
Pega provides a rich set of rules to develop and customize applications. These rules have a prefix of
px, py, or pz, which indicates that they are internal to the Pega platform. It is important not to block
these rules, as they are fundamental to the operation of the system. Additionally, it is not
recommended to reference internal Pega rules in custom implementations.
The suggested approach is to create custom rules by performing a Save As on Pega-provided rules instead of modifying them directly. When inheriting from a Pega-provided rule, the custom rule overrides only
the selected parts of the base rule, while the rest of the rule remains unchanged. This approach
ensures that the system continues to function correctly and that the customizations can be easily
maintained and upgraded.
If Pega-provided rules are blocked or modified directly, it can lead to unexpected behavior, incorrect
data processing, and system instability. Additionally, customizations can be difficult to maintain and
upgrade, which can lead to increased development costs and longer development cycles.
The benefits of following this approach include:
1. Improved system stability: By not blocking or modifying Pega-provided rules, the system will
operate as expected.
2. Easier maintenance and upgrades: Inheriting from Pega-provided rules reduces the need for
extensive customizations, making it easier to maintain and upgrade the application.
3. Faster development: Using Pega-provided rules as a starting point for custom rules can save
development time.
For example, let's say you are building a custom report that needs to display the number of cases
assigned to each user. Instead of starting from scratch, you can save as from the
pxAssignedOperator report definition, which is a Pega-provided report that displays the number of
assignments assigned to each operator. By starting from this report with a Save As, you can easily customize it to
meet your specific requirements while still benefiting from the built-in functionality of the base
report.
In conclusion, it is important to follow best practices when using Pega-provided rules. By inheriting
from Pega-provided rules and avoiding the modification of internal rules, developers can create
customizations that are stable, maintainable, and easily upgradable.
7.6.4. Use extension points instead of overriding Pega rules
Pega provides extension points in the form of empty stub rules that are designed to be overridden
by application-specific functionality. These extension points should be used whenever possible
instead of directly overriding Pega-provided rules.
When an extension point is used, the Pega rule remains intact and the additional functionality is
appended to it. This approach allows for easier upgrades and maintenance since the Pega rule can
be updated without affecting the custom functionality appended to it.
On the other hand, overriding a Pega-provided rule entirely can cause issues during upgrades and
maintenance. If the rule is updated in a new Pega version, the custom functionality added to the rule
will need to be manually merged with the updated Pega rule.
It is important to note that not all Pega rules are designed to be extension points. In cases where
extension points are not provided, it may be necessary to override the rule entirely to add custom
functionality.
Design considerations should be taken when creating extension points as well. Extension points
should be designed to be flexible enough to accommodate different types of custom functionality,
while also maintaining consistency with the Pega-provided rule.
7.6.5. Update references such as $ANY or $NONE with appropriate class references
In Pega applications, it is important to update references such as $ANY or $NONE with appropriate
class references to ensure that the application performs efficiently and accurately.
The suggested approach for updating references is to use class references instead of $ANY or
$NONE, as it helps the system to identify the specific class and property at runtime. This can improve
the performance of the application and prevent any unexpected behaviors.
If references are not updated, it can lead to incorrect data retrieval and processing, leading to
performance issues and errors. For example, if $ANY is used in a report definition, it may result in
the system scanning all tables, leading to unnecessary processing time and resource usage.
Updating references also provides better maintainability and flexibility in the long run. It helps
developers to easily identify the class and property they are working with, and makes it easier to
update or modify them when needed.
Design considerations for updating references include understanding the structure of the application
and the relationships between classes and properties. It is important to use the correct syntax for
class references, such as "MyApp-Data-Customer" instead of "$ANY".
It is important to note that there may be some scenarios where using $ANY or $NONE may be
necessary, but this should be done with caution and only when there is no alternative approach.
For example, if a rule needs to reference any instance of a class regardless of its name, $ANY may be
used. However, it is recommended to use it only when necessary and to provide additional criteria
to filter the results as much as possible.
Overall, updating references with appropriate class references is an important best practice for Pega
applications. It can help improve performance, prevent unexpected behaviors, and provide better
maintainability and flexibility.
7.6.6. Removing unnecessary clipboard pages
Clipboard pages are an essential part of Pega applications and play a crucial role in managing data
between different components of the system. However, it is equally important to ensure that
unnecessary pages are not retained in memory, as they can negatively impact system performance
and lead to higher memory usage. Therefore, it is recommended to identify and remove any
unnecessary clipboard pages at the end of each process.
The suggested approach for identifying and removing unnecessary clipboard pages is to regularly
monitor the clipboard page usage throughout the application lifecycle. This can be done using
various tools such as the Clipboard tool in Dev Studio, Pega Predictive Diagnostic Cloud (PDC), or third-party tools. Additionally, Pega provides the Property-Remove and Page-Remove activity methods, which can be used to remove expired or unnecessary pages from memory.
If unnecessary pages are not removed at the end of each process, they will remain in the JVM
memory, leading to higher memory usage, slower performance, and potential system crashes. This
can be particularly problematic in applications that deal with large amounts of data or high levels of
user traffic.
The benefits of efficient management of clipboard pages include improved application performance,
reduced memory usage, and a more stable system overall. By keeping the clipboard pages clean and
optimized, the system can handle increased user traffic and large data volumes more efficiently.
When designing a Pega application, it is essential to consider the clipboard page usage and to ensure
that only the necessary data is retained in memory. To achieve this, it is recommended to design the
application in a way that promotes efficient memory usage, such as using property references
instead of page references wherever possible.
In some cases, it may be necessary to keep certain clipboard pages in memory for a more extended
period due to their continued use. In such cases, it is recommended to use Pega's page-level
persistence feature to maintain the pages in memory while reducing the memory usage to some
extent.
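As a rough mental model (plain Java, not the Pega clipboard API), a requestor's clipboard can be pictured as a map of named pages; the Page-Remove discipline then amounts to dropping a page as soon as the process no longer needs it. The page names below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class ClipboardSketch {
    // Hypothetical helper mirroring the Page-Remove idea: drop a named page
    // once the process is done with it, so it does not linger in JVM memory.
    public static void endOfProcessCleanup(Map<String, Object> clipboard, String pageName) {
        clipboard.remove(pageName);
    }

    public static void main(String[] args) {
        // Simplified model of a requestor's clipboard: named top-level pages
        Map<String, Object> clipboard = new HashMap<>();
        clipboard.put("pyWorkPage", "case data");
        clipboard.put("TempLookupPage", "large lookup results");

        // ... steps that use TempLookupPage ...

        endOfProcessCleanup(clipboard, "TempLookupPage");
        System.out.println(clipboard.containsKey("TempLookupPage")); // false
    }
}
```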
7.6.7. Best Practices for managing large text values in Pega Applications
One of the best practices in Pega application development is to avoid using large text values in top-
level pages and instead keep them in embedded page properties. This is particularly important when
Elasticsearch is enabled on a class, as Elasticsearch indexes all top-level properties by default.
The suggested approach is to create embedded pages for large text values and reference them from
the top-level page. This helps to minimize the size of the top-level page and reduces the amount of
data that needs to be indexed by Elasticsearch. It also makes it easier to manage the data and
improves the application's overall performance.
If large text values are stored in top-level pages, they can cause performance issues and consume a
large amount of memory. When Elasticsearch is enabled, indexing large text values can also result in
slower search queries and increased indexing times.
By following the best practice of storing large text values in embedded page properties, applications
can benefit from improved performance and reduced memory usage. It also ensures that Elasticsearch queries are faster and more efficient.
When designing the data model for a Pega application, it is important to consider the size and type
of data that will be stored. Text values that are expected to be large should be stored in embedded
page properties, while smaller values can be stored in top-level properties.
For example, consider an application that stores customer feedback in a case. The feedback may
include long comments or descriptions, which can be stored in an embedded page called
"FeedbackDetails". The top-level page can then reference this embedded page for the feedback
data.
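To make the indexing point concrete, here is a toy model in plain Java (not Pega's actual indexer), under the assumption stated above that only top-level properties are indexed by default. The long comment inside the embedded FeedbackDetails page stays out of the index:

```java
import java.util.HashMap;
import java.util.Map;

public class IndexingSketch {
    // Illustrative model only: index top-level scalar properties and skip
    // embedded pages (represented here as nested maps).
    public static Map<String, Object> indexTopLevel(Map<String, Object> page) {
        Map<String, Object> indexed = new HashMap<>();
        for (Map.Entry<String, Object> e : page.entrySet()) {
            if (!(e.getValue() instanceof Map)) { // embedded pages are skipped
                indexed.put(e.getKey(), e.getValue());
            }
        }
        return indexed;
    }

    public static void main(String[] args) {
        Map<String, Object> feedbackDetails = new HashMap<>();
        feedbackDetails.put("Comments", "a very long customer comment ...");

        Map<String, Object> casePage = new HashMap<>();
        casePage.put("CustomerName", "Jane Doe");
        casePage.put("FeedbackDetails", feedbackDetails); // large text kept off the top level

        System.out.println(indexTopLevel(casePage).keySet()); // [CustomerName]
    }
}
```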
7.6.8. Creating Reusable and Efficient Rules in Pega
When creating a new rule in Pega that is similar to an existing rule, it is important to minimize
duplicate code to improve efficiency and maintainability. This can be achieved in several ways:
• Refactor the rule to use parameters instead of creating multiple versions of the same rule.
This allows you to change only values without changing the code.
• Create well-documented reusable assets, such as functions or data transforms, that can be
called from different places instead of copying the same code multiple times.
• Determine the reusability of the rule to decide where it should be placed in the Class and
RuleSet hierarchy. A rule that can be reused in multiple applications should be placed in a
shared RuleSet, while a rule specific to a single application should be placed in that
application's RuleSet.
If duplicate code is used, it can lead to inconsistency and errors if changes are made to one instance
of the code but not another. It can also make the code more difficult to maintain and debug.
Furthermore, placing reusable rules in the wrong RuleSet can result in unnecessary duplication and
inconsistency across multiple applications.
By creating reusable and efficient rules, you can reduce the amount of code duplication, increase
consistency across applications, and improve maintainability. It also allows for easier debugging and
updates, since changes only need to be made in one place.
When creating reusable assets, it is important to document them well to ensure they can be easily
understood and used by others. Also, when determining the reusability of a rule, consider factors
such as the complexity of the rule, the number of times it will be reused, and whether it is specific to
a single application or can be used across multiple applications.
Example: Consider a scenario where you need to create a rule to calculate the total cost of an order.
Instead of creating a new rule for each type of order, you can create a reusable function that takes in
the type of order and calculates the total cost based on that type. This function can be placed in a
shared RuleSet to be used by multiple applications.
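The order-cost example can be sketched as a single parameterized function. This is a plain Java illustration; the order types and surcharge rates below are invented:

```java
import java.util.Map;

public class OrderCostCalculator {
    // Illustrative surcharge rates per order type (hypothetical values)
    private static final Map<String, Double> SURCHARGE_RATES = Map.of(
            "Standard", 0.00,
            "Express", 0.10,
            "International", 0.25);

    // One reusable, parameterized rule instead of one near-copy per order type
    public static double totalCost(String orderType, double baseCost) {
        double rate = SURCHARGE_RATES.getOrDefault(orderType, 0.0);
        return baseCost * (1.0 + rate);
    }

    public static void main(String[] args) {
        System.out.printf("Express: %.2f%n", totalCost("Express", 100.0));             // Express: 110.00
        System.out.printf("International: %.2f%n", totalCost("International", 100.0)); // International: 125.00
    }
}
```

Adding a new order type becomes a data change (a new rate entry) rather than a new copy of the rule, which is exactly the maintainability benefit described above.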
7.6.9. Choosing clear and descriptive workbasket names
In Pega, a workbasket is a queue of work items that can be assigned to one or more operators or
workgroups. It is important to choose a clear and descriptive name for each workbasket to make it
easy for users to understand its purpose.
The suggested approach is to follow a naming convention that clearly identifies the purpose of the
workbasket. The name should be meaningful and should give an idea about the type of work items
that are assigned to the workbasket. The name should also be consistent with other naming
conventions in the system.
If the name of the workbasket is not clear or does not identify its purpose, it can lead to confusion
for users and operators. They may assign work items to the wrong workbasket, resulting in delays or
errors in processing work items. It can also cause issues during maintenance or upgrades when
workbaskets may need to be renamed or deleted.
The benefits of this approach include:
1. Improved user productivity: Clear and descriptive names make it easy for users to identify
and select the right workbasket, resulting in improved productivity and faster processing of
work items.
2. Reduced errors: Clear naming conventions help to reduce errors in work assignment and
processing.
3. Consistency: Consistent naming conventions across the system make it easier for users to
understand and navigate the system.
Design considerations include:
1. Use descriptive names: The name of the workbasket should clearly describe its purpose and
the type of work items that are assigned to it.
2. Use consistent naming conventions: Consistent naming conventions across the system help
to reduce confusion and make it easier for users to navigate the system.
3. Avoid using generic names: Avoid using generic names like "Workbasket1" or "General
Workbasket" as they do not provide any information about the purpose of the workbasket.
4. Use abbreviations sparingly: Avoid using abbreviations unless they are commonly
understood across the organization.
For example, a workbasket that contains customer service requests could be named "Customer
Service Requests Workbasket" or "CS Requests Workbasket" (if CS is a commonly understood
abbreviation in the organization).
In summary, following a clear and consistent naming convention for workbaskets in Pega is
important to ensure that work items are assigned and processed correctly, resulting in improved
productivity and reduced errors.
7.6.10. Providing appropriate check-in comments
When working on a project with multiple team members, it is crucial to provide clear and
informative comments when checking in a rule after modification. This best practice helps in keeping
track of the changes made to the rules, identifying the purpose of the change, and making it easier
for other team members to understand the changes made.
The suggested approach is to include a work item or a reference number at the beginning of the
description of the change. This work item could be a change request or an error number, which is
used to identify the purpose of the change. Additionally, a brief and concise summary of the changes
should be included in the description.
For example, if a change request is to remove a "pyWorkPage Page Delete Step" from an Activity,
the check-in comment should be "US-00001 – Removed pyWorkPage Page Delete Step from
Activity."
If check-in comments are not provided, it becomes difficult for team members to understand the
changes made to a rule. This may lead to duplicate efforts, miscommunication, and confusion. In
extreme cases, it may also lead to the loss of valuable changes and the need to redo work.
The benefits of providing appropriate check-in comments include easy tracking of changes made to
the rule, understanding the purpose of changes, and reducing the time and effort required to
troubleshoot issues or errors. It also makes it easier to search for rules based on a specific work item
or error number.
In summary, providing appropriate comments when checking in a rule after modification is a best
practice that can help in maintaining rule quality, facilitating collaboration between team members,
and reducing the time and effort required to troubleshoot issues.
7.6.11. Using @equals instead of '==' for string comparison
When comparing string values in Pega, it is recommended to use the @equals function instead of
the '==' operator. The @equals function is a built-in function in Pega that compares two string values
and returns a Boolean value indicating whether they are equal or not. The '==' operator, on the
other hand, compares the object reference, which may not always work as expected.
To use the @equals function, provide the two values to compare as its parameters; string literals are enclosed in quotes. For example, to compare a string property against a literal value, you can use the following syntax in an activity step: @equals(.property1, "value").
Using '==' instead of @equals for string comparison can lead to unexpected results, especially when
comparing two string literals. This is because the '==' operator checks for object reference equality,
not for the content of the string. This means that two different string objects with the same value
will not be considered equal by the '==' operator.
Using @equals function provides more accurate and reliable string comparison in Pega. It helps in
preventing errors and unexpected behaviors caused by using the '==' operator for string comparison.
While using the @equals function, it is important to ensure that both parameter values are of the same type. Mixing types can lead to unexpected results.
Example: Consider an example where you need to compare the value of a string property "status"
with a specific value "Completed". Using the '==' operator, you can use the following syntax in an
activity step: .status == "Completed". However, using the @equals function, you can use the
following syntax: @equals(.status, "Completed"). This will provide accurate and reliable results.
When is it acceptable to use '==' instead? In some cases, such as when comparing non-string values or when comparing object references, the '==' operator may be appropriate. However, when comparing string values in Pega, it is recommended to use the @equals function for accurate and reliable results.
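The underlying reason is visible in plain Java, on which Pega runs: '==' applied to String objects compares references, while a value comparison (which is what @equals gives you) compares content. A minimal sketch, outside Pega:

```java
public class StringCompareDemo {
    // Value comparison, roughly what @equals does for strings (null-safe here)
    public static boolean equalsValue(String a, String b) {
        return a != null && a.equals(b);
    }

    public static void main(String[] args) {
        // A distinct String object with the same content as the literal
        String status = new String("Completed");

        System.out.println(status == "Completed");            // false: reference comparison
        System.out.println(equalsValue(status, "Completed")); // true: content comparison
    }
}
```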
7.7. Best practice for Pega Flow rule type
7.7.1. Model-driven approach for flow implementation
A model-driven approach implements the decision points in a flow with decision rules, such as decision tables and decision trees, rather than hard-coding the logic in activities.
One of the benefits of using decision rules in flows is that it makes the logic of the flow more
transparent and easier to understand. Decision rules can be maintained and modified separately
from the flow, which makes the overall flow implementation more modular and flexible.
Additionally, using decision rules helps to minimize the amount of hard-coded logic in the flow,
which makes it easier to modify the logic of the flow in response to changing business requirements.
When implementing a flow, it is important to consider the reusability of the decision rules used in
the flow. Decision rules can be reused across different parts of the application, which helps to
promote consistency and standardization in the application. As such, it is recommended to create
well-documented, reusable decision rules that can be called from different parts of the application.
Example: Consider a flow that processes a customer order. The flow may include various decision
points, such as determining the availability of the requested item, verifying the customer's payment
information, and calculating shipping costs. Rather than implementing each of these decision points
using separate activities, decision rules can be used to model each decision point. This makes the
flow more modular, efficient, and easier to maintain over time.
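Each decision point in the order example can be pictured as a small, separately maintained rule. Below is a toy Java stand-in for one such decision table; the regions, fees, and rates are invented for illustration:

```java
public class ShippingCostDecision {
    // Toy stand-in for a Pega decision table: the decision logic lives in
    // one rule, kept separate from the flow that calls it.
    public static double shippingCost(String region, double weightKg) {
        switch (region) {
            case "Domestic":      return 5.0 + 0.50 * weightKg;  // flat fee + per-kg rate
            case "International": return 15.0 + 1.25 * weightKg;
            default:              return 10.0 + 1.00 * weightKg; // fallback row
        }
    }

    public static void main(String[] args) {
        System.out.printf("Domestic, 2 kg: %.2f%n", shippingCost("Domestic", 2.0));           // 6.00
        System.out.printf("International, 2 kg: %.2f%n", shippingCost("International", 2.0)); // 17.50
    }
}
```

Changing a rate means editing this one rule; the flow itself never needs to be touched, which is the modularity benefit described above.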
7.7.2. Configuring meaningful and readable audit information in a flow
Configuring meaningful and readable audit information in a flow is an essential best practice in Pega
development. This helps in keeping track of work item history and makes it easier for users to
understand the actions performed in a flow. In this topic, we will discuss the suggested approach, the issues that arise if it is not followed, the benefits, design considerations, an example, and when it is acceptable to take a different approach.
The recommended approach for configuring audit information in a flow is to use a proper action
description in the assignment task. You can add a meaningful description of the action that is being
performed by the user in the assignment task. This will be helpful for tracking the progress of a work
item, as whatever action comes in an assignment will be described in the work item history as
"Assignment To" and the added text. You can also use the "Add History" method to add more details
to the work item history.
If the audit information is not configured in a meaningful and readable manner, it can lead to
confusion among users about the actions performed in a flow. This can lead to mistakes and errors
in the processing of work items. Also, if there is insufficient or unclear audit information, it can be
difficult to identify the root cause of issues that arise in a flow.
Proper audit information helps in tracking the progress of work items, identifying the cause of issues,
and resolving them quickly. It also improves the readability and clarity of the flow, making it easier
for users to understand the logic of the flow. By using a proper action description in the assignment
task, the work item history becomes more descriptive and meaningful, and users can quickly
understand the actions performed on a work item.
When configuring audit information in a flow, it is important to use clear and concise language that
is easily understandable by all users. Avoid using technical terms or abbreviations that might be
unfamiliar to some users. The audit information should also be consistent across all the flows to
maintain uniformity and avoid confusion.
Example: Suppose a flow involves a user completing a task to verify a customer's address. In this
case, you can add a description like "Verify customer's address" in the assignment task, so that it is
recorded in the work item history as "Assignment To - Verify customer's address." This description
helps in tracking the progress of the work item and gives a clear idea of the action performed by the
user.
If the action being performed in the flow is straightforward and does not require any additional audit
information, then it is okay to skip adding a description in the assignment task. For example, if the
user is simply updating the status of a work item, there might not be a need to add a description as
the default description "Assignment To" is sufficient.
Configuring meaningful and readable audit information in a flow is a crucial best practice in Pega
development. By using a proper action description in the assignment task, users can quickly
understand the actions performed on a work item, which improves the readability and clarity of the
flow. It is important to use clear and concise language that is easily understandable by all users and
maintain consistency across all flows. Proper audit information helps in tracking the progress of work
items, identifying the cause of issues, and resolving them quickly.
7.7.3. Using the Wait shape instead of tickets
Instead of using tickets, use features such as the Wait shape to handle business exceptions that may
arise at any point in a flow. The Wait shape can pause the flow until a specific condition is met or a
specified period of time has elapsed, allowing for more flexible and efficient flow design.
Using tickets in a process flow can add unnecessary complexity and confusion, as well as increase
the risk of errors and delays in processing work items. Tickets can also be difficult to track and
manage, making it harder to maintain and update process flows over time.
By avoiding the use of tickets and instead using the Wait shape, you can simplify your process flows,
improve their performance, and reduce the risk of errors and delays. This can help you to deliver
higher quality work items more quickly and efficiently.
When designing your process flows, consider the potential business exceptions that may arise and
how they can be handled using the Wait shape. Identify the specific conditions or time periods that
should trigger the Wait shape, and ensure that the flow is configured to resume processing at the
appropriate point once the condition is met or the time has elapsed.
Example: In a customer service process flow, a Wait shape could be used to pause the flow until a
customer responds to a request for more information. Instead of raising a ticket and waiting for the
customer to contact the support team again, the flow can be paused at the Wait shape until the
customer provides the necessary information. This can improve the customer experience and reduce
the risk of errors or delays in processing the request.
When is it acceptable to use tickets? It may be appropriate to use tickets in situations where a business exception cannot be handled using the Wait shape. In general, however, it is recommended to avoid tickets and to use features such as the Wait shape to simplify and optimize your process flows.
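The Wait-shape behavior — pause until a condition is met or a time period elapses, then resume — can be pictured with a toy polling loop. This is plain Java and nothing like the actual Pega engine, which parks the case without occupying a thread:

```java
import java.util.function.BooleanSupplier;

public class WaitUntilSketch {
    // Toy model of a Wait shape: block until the condition holds or the
    // timeout passes. Returns true if the condition was met (the "condition"
    // outcome), false if the time elapsed first (the "timer" outcome).
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            if (condition.getAsBoolean()) {
                return true; // e.g. the customer has responded: resume the flow
            }
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out: take the flow's timeout path instead
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false; // treat interruption as the timeout path in this sketch
            }
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true roughly 50 ms after start
        boolean met = waitUntil(() -> System.currentTimeMillis() - start >= 50, 1000, 10);
        System.out.println(met); // true
    }
}
```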
8. Conclusion
Pega is an incredibly powerful platform that enables developers to create enterprise applications
that meet business requirements in a quick and efficient manner. However, to fully realize the
potential of the platform, it's crucial to follow best practices that ensure applications are
maintainable, scalable, and performant. This book has provided a comprehensive guide to Pega best
practices, covering various aspects of development, including case implementation, data access,
section rules, and more. By following these guidelines, developers can create applications that meet
business needs, are easy to maintain, and can scale to handle increasing volumes of data and users. I
hope that this book has been a valuable resource for Pega developers looking to optimize their
development process and create high-quality applications on the Pega platform.