Blue Prism Best Practices: The Ultimate Guide (2021)
This Blue Prism Best Practices: The Ultimate Guide (2021) will help you build a highly configurable, secure and reliable bot for your business process by –
- Improving overall code quality
- Ensuring the code adheres to pre-defined coding standards
- Ensuring that the code is easy to maintain and is readable
- Ensuring that consistent exception handling standards are followed.
Have you finished your development, and are you getting the results you expected? Have you reviewed your RPA code with the utmost importance?
Even when we review with the utmost care, there are certain areas where we fail to quickly check the quality of the RPA code, and the best practices we miss later cause bugs or production issues.
Sometimes it also results in an inefficient bot that makes more errors than it completes tasks. (Yes, it’s true…)
To reduce post-deployment defects and costs, it is imperative to build a quality control checklist before putting any bot into production.
Based on my experience, I have listed my RPA design recommendations, which serve as a quality control checklist.
If you are looking to build robust RPA solutions, these quick checks will help you reduce post-production issues and provide better ROI for your next automation.
They will also help you meet compliance, security and audit requirements.
Let’s look at different areas of quality checks.
(This article takes into consideration the best practices and guidelines provided by Blue Prism, along with the practices I follow at my CoE and the issues we have faced during our RPA journey.)
Blue Prism Best Practices Based on Different Areas of Improvement
Adhering to the high compliance and security standards that each industry demands requires comprehensive checks to build secure Robotic Process Automation, as the cost of failure in this area can result in huge fines and even legal action by federal agencies.
Here are the quick checks you need to ensure in your RPA project implementation.
- Data Encryption – Blue Prism uses recognized encryption standards, including Federal Information Processing Standard (FIPS) compliant algorithms. However, data in use and data in motion need to be secured using certificate-based encryption, with appropriate certificates deployed on each runtime resource.
- Authorization – There are several instances where a user account is required as part of an RPA implementation. Make sure the credentials for application users are secured in the Credential Manager, and that access to specific credentials is restricted to specific runtime resources, processes and users to prevent unauthorized use within the environment.
- Authentication – Logical access permissions need to be configured as part of project initiation to provide an appropriate level of control and governance across the various environments. This helps prevent accidental changes and gives control over who has access to what. If your organization uses Multi-team Environments (MTE), make sure the right role-based controls are in place and that only the team responsible for the process has access.
- Password Management – The Credential Management functionality provides a secure repository for the login details required by runtime resources to access target applications. Credentials are stored in the Blue Prism database and are encrypted using the encryption scheme defined by the client. However, make sure that:
- Passwords are not hardcoded in the process/object workflow (some developers do this for quick tests).
- Passwords are not written in plain text. I have seen instances where a developer wrote the password in the description field of the credential store for quick access.
- Password resets are designed within the process to avoid any manual effort by the team. This can easily be achieved by checking the password expiry of all the credentials used during the process initiation phase, invoking the flow responsible for changing the password in the application, and saving the new password back to the credential store or a third-party store such as CyberArk.
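The expiry check described above can be sketched as follows. This is a minimal illustration of the decision logic only, not the Blue Prism Credential Manager API; the credential records, field names and dates are all assumptions made for the example.

```python
from datetime import date, timedelta

# Hypothetical credential records; in Blue Prism these would come from the
# Credential Manager (names and fields here are illustrative assumptions).
credentials = [
    {"name": "SAP_Login", "expires": date(2021, 6, 1)},
    {"name": "Portal_Login", "expires": date(2021, 12, 31)},
]

def credentials_needing_reset(creds, today, warn_days=7):
    """Return credentials whose password expires within warn_days, so the
    password-change flow can be invoked during process initiation."""
    threshold = today + timedelta(days=warn_days)
    return [c["name"] for c in creds if c["expires"] <= threshold]

print(credentials_needing_reset(credentials, date(2021, 5, 28)))  # ['SAP_Login']
```

The process would then call its password-change flow for each returned credential and save the new password back to the credential store (or CyberArk).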
Documentation of Process Automation
If you fail to plan, you plan to fail. This holds true for RPA as well. It is not advisable to jump into development without documenting your process.
Process documentation depends on the practices and guidelines followed by the RPA CoE, and it differs from organization to organization.
But here are some minimal common checks to ensure your RPA documentation is in line with best practices and guidelines.
- A Process Definition Document (PDD), or at a minimum a process map, as well as a detailed step-by-step walkthrough of the process, should be captured.
- The PDD must include in-scope and out-of-scope details along with guidelines for handling exception cases.
- An operational guide should ensure that the process can be completed using manual steps in case of BCP or any other unexpected situation.
- If the process changes after the production release, the documentation must be updated to reflect the current workflow.
Checks at Process Level
The quality of the bot development process and scaling bot velocity are top concerns for many RPA Centers of Excellence (CoE). The major challenge is the highly tedious task of manually reviewing RPA workflows, which involve hundreds of variables, arguments, activities, components, objects, processes, etc.
A well-designed process helps minimize operational issues and, in turn, returns better ROI.
Here are the quick checks you need to ensure at the process level.
- The process should be designed so that it can be stopped in the middle of an operation if required, stopped after a specific number of runs, or stopped after a specific time. Also ensure that session variables are used so that the bot’s stopping time can be extended or reduced by control room operators.
- Process recoverability should be considered using effective use of exception handling and retry mechanism. (See exception handling section for more details)
- Your process should close/reset extra windows and navigate to the home screen or dashboard before processing the next queue item, or before retrying the same item after a system exception.
- Check that the same system exception is not occurring on every case. System exceptions should be checked for repeated instances of the same error, to avoid all cases being rejected for the same reason. For example, if work item 1 fails with a system error and work item 2 then also fails with a system error, your process should be smart enough to compare the error with the last few errors, so that an issue with the application, database or network can be investigated before the process resumes.
- Where possible, retry system exceptions within the process. This may require special navigation or even a restart of the system.
- Session timeouts with the application need to be handled by logging into the application again after a certain time interval, or based on the number of items processed by the bot. For example, the bot could log off and log in again every two hours or every 50 completed work items (this depends on the process and the timeout of your application).
- A hard-close (kill process/session) workflow must be designed so that applications are terminated in case of any unexpected system errors.
- If you are following queue-based retrieval, the process should be designed with proper use of environment locks so that multiple bots can work on the same queue without file-processing or duplicate-case errors.
- You should also design the process for multi-bot operation by ensuring file locks and environment locks are in place.
- The process should use configuration files or environment variables to store external settings, such as credentials, business-rule threshold parameters, log messages, email formats, file paths, URLs, document names, email text and email recipients. This allows process owners to change automation variables without developer intervention.
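Externalizing settings can be as simple as a validated JSON (or config) file read at process start. The sketch below is illustrative only; the setting names and values are assumptions, and in Blue Prism the same values would typically live in environment variables or a config work area.

```python
import json

# A minimal sketch of externalized process settings (all names illustrative).
SETTINGS_JSON = """
{
    "portal_url": "https://portal.example.com",
    "max_retries": 3,
    "notify_recipients": ["ops-team@example.com"]
}
"""

def load_settings(raw):
    """Parse and validate settings so that a process owner's bad edit
    fails fast at start-up instead of mid-run."""
    settings = json.loads(raw)
    if settings["max_retries"] < 0:
        raise ValueError("max_retries must be non-negative")
    return settings

settings = load_settings(SETTINGS_JSON)
print(settings["portal_url"])  # https://portal.example.com
```

Because the file is data, process owners can change thresholds, recipients or URLs without touching the workflow itself.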
Checks at Object Level
Objects are the building blocks of a process, and they support better reusability and modularity of the process design. You must check the following at the object level to ensure standard design practices.
- The exposure of the object should be set appropriately, and the run modes (foreground, background, exclusive) for business objects must be set based on object usage and design.
- Provide descriptions for inputs, outputs and pre/post conditions.
- No hardcoded data in data items; use input parameters instead so that the process provides those values. (Not even environment variables; they must be passed from the process.)
- No business decisions at the object level; they must be made at the process level.
- Dynamic waits should be used instead of static waits whenever possible, and always check that an element exists before a click or any other operation.
- For object action pages that select values in controls (such as a dropdown list, dynamic search or predictive text box), there should be a verification after the selection or search to confirm the outcome matches what was supplied.
- Ensure navigation stages between application screens are followed by a Wait stage to verify success.
- One action should not call other published actions.
- Only the minimum required attributes should be selected when spying and building the Application Modeller; this reduces code changes in case of minor UI changes.
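The "dynamic wait, then act" pattern above can be sketched as a poll-until-timeout loop. This stands in for Blue Prism's Wait stage with a Check Exists condition; `element_exists` is an illustrative callable, not a real Blue Prism API.

```python
import time

def wait_for_element(element_exists, timeout=10.0, interval=0.5):
    """Poll until the target element exists or the timeout elapses.

    element_exists is a callable standing in for a Wait stage's
    'Check Exists' condition (an assumption for this sketch).
    Returns True if the element appeared, False on timeout, so the
    caller can branch to recovery instead of clicking blindly.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if element_exists():
            return True
        time.sleep(interval)
    return False
```

A fixed `time.sleep(5)` (a static wait) either wastes time or is too short on a slow day; the poll loop proceeds as soon as the element appears and still bounds the worst case.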
Checks at Application Modeler level
The Application Modeller is the embedded capability within Object Studio where the configuration for interacting with application UI elements exists. These elements are identified by the robot with the help of selectors (aka element attributes), which can be configured and updated to make them unique for each element.
It acts as the robot’s eyes, telling it where to find the item that needs to be clicked, copied or typed into.
You must check the following at the Application Modeller level to ensure selectors don’t break when moving to the production environment.
- Environment-specific data will cause the process to fail when migrated; if required, make the value dynamic. Make sure URLs and titles don’t contain environment-specific words like ‘UAT’; if they do, make them dynamic so they can be used in every environment.
- Avoid using attributes which are potentially inconsistent, such as Path, X, Y, Parent X, Parent Y, Class etc.
- Experiment with different Spy modes to find the most effective method of identifying an element.
- Use Ordinal, Match Index and Match Reverse attributes as they are very effective in establishing unique element matches.
- Check whether there are any technology-specific attributes that are recommended to be checked or unchecked.
- Attribute values should not contain any Customer data.
A common habit is to cover only the happy path of the process while building the bot, but there are many cases where the process gets halted.
Errors like failed logins, nonexistent directories, or running out of disk space stop the bot from performing its task.
Exceptions like a timed-out application, bad data, or a new screen within the application also halt processing.
Whether it is a business or an application exception, the process should be designed to handle it and react accordingly.
For example, if a business exception occurs on queue item number two, the RPA bot should log the exception and prepare the environment to process queue item number three.
The bot should recover from exceptions and continue processing all the transactions. If an unexpected error occurs, the robot should notify a human operator via email and include a screenshot of the error message, the time the error occurred, important argument values and the source of the error.
Here are a few quick checks you should do to ensure proper exception handling.
- Error-prone flow/code should be covered by a try-catch construct, with all known exceptions identified as business exceptions.
- Exceptions from objects should bubble up to the process sub-page, and from the sub-page to the main page, to decide whether to retry or abort the process.
- If two consecutive identical system exceptions occur, the process should terminate so that the cause of the issue (such as application performance, or database and network downtime) can be investigated before resuming.
- In the event of a serious problem with an application, it may be better for a process to stop rather than carry on working. For example, if there are 1000 items in the queue and the target application is down, it makes no sense to work all the items and have them all marked with the exception ‘application is unavailable’.
- Login failures should not be retried more than twice, to avoid locking the credentials.
- All exception emails should follow a standard format, which must contain the information below so that the operations team can take appropriate action:
- Robot runtime information
- Error Message & Source
- Case ID (Or unique identifiers to identify data items)
- A screenshot, if it was captured and can be included (without data breach concerns)
- A retry check using a counter and the exception type should be included. The decision whether to retry is normally governed by the exception details and the number of retries already attempted. This can be achieved by checking whether the retry count is within the limit and the exception type is either a System Exception or an Internal Exception, so the case is handled more gracefully.
- Preserve the type and detail of the current exception in case a re-throw is required.
- Copy and paste is a great time saver in Blue Prism but can result in sloppy detail. Ideally, each exception message should be unique so that retracing the root cause of a problem is easier.
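The two retry rules from this checklist, retry only the right exception types within a limit, and stop on repeated identical system errors, can be sketched as plain decision functions. This is the logic only; the exception-type strings and thresholds are illustrative assumptions, not Blue Prism constants.

```python
def should_retry(exception_type, retry_count, max_retries=3):
    """Retry only System or Internal exceptions, and only while the
    retry count is under the limit. Business exceptions are never retried."""
    retryable = exception_type in ("System Exception", "Internal Exception")
    return retryable and retry_count < max_retries

def repeated_system_error(recent_errors, threshold=2):
    """Flag when the last `threshold` items failed with the same error,
    so the process can stop for investigation (application, DB, network)
    instead of rejecting every remaining case for the same reason."""
    if len(recent_errors) < threshold:
        return False
    return len(set(recent_errors[-threshold:])) == 1
```

In a real process, `should_retry` would gate the re-queue of a work item, and `repeated_system_error` would be checked before fetching the next item.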
- Ensure that adding tags to work queue items has been considered while developing the process. This helps post-production to filter records for reporting and enables better queue handling, as a “Tag Filter” can be used to get the next item from the queue.
- Each work queue item should be updated with a status, as this records what work has been done on the item so far; in case of failure, the item can be resumed from the steps that were not completed instead of starting fresh.
- The process should be configured to use the item status to ensure that steps within your process that should never be repeated are never repeated.
- By default, Get Next Item selects items First-In-First-Out (FIFO), in other words, the same order in which they were added to the queue. Make sure to add priorities if your process is designed to work in priority order.
- If your process solution uses more than one method of prioritization, make sure to add logic to your design that works lower-priority items when they are ageing. This reduces the risk that lower-priority work is never worked.
- Make sure your environment is configured with an encryption scheme, so that the queue can be configured to encrypt data automatically when it is saved to the queue and decrypt it automatically when it is retrieved. This can be done by ticking the Encrypted checkbox on the queue details page.
- If your work queue is used to feed external MI reporting such as Power BI or Tableau, make sure to mark queue items with a resolution, either “Exception” or “Completed”.
- If you are using parent/child relationships, make sure to tag items so that the parent/child relationship of your work items is maintained.
- Ensure that all parent items have the correct number of child items, and all child items have a parent work queue item.
- There should be no broken relationships between parent and child queues; for example, there should be an active parent item at all times until all of its child items are processed.
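The parent/child integrity checks above (no orphaned children, correct child counts) can be sketched as a validation pass over the queue data. The data shapes here are assumptions for illustration; in practice the records would come from the work queue tables or the queue APIs.

```python
def find_orphans_and_miscounts(parents, children):
    """Validate parent/child work queue integrity.

    parents:  {parent_id: expected_child_count}  (illustrative shape)
    children: list of (child_id, parent_id) pairs
    Returns (orphaned child ids, parent ids with a wrong child count).
    """
    counts = {pid: 0 for pid in parents}
    orphans = []
    for child_id, parent_id in children:
        if parent_id in counts:
            counts[parent_id] += 1
        else:
            orphans.append(child_id)  # child with no parent item
    miscounted = [pid for pid, expected in parents.items()
                  if counts[pid] != expected]
    return orphans, miscounted
```

Running such a check before marking a parent item complete catches broken relationships early, while the batch can still be repaired.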
Some Common Checks & Guidelines
- Follow your organization-specific guidelines for naming conventions, arguments and logging practices.
- Avoid heavy logging; only meaningful logs should be inserted in the flow. (No customer-sensitive data in logs.)
- Use a combination of variables and Booleans to control branch flows and case statements.
- Make sure the main page of your process contains only the high-level workflow that explains what the process is about; all the work should be done in sub-pages.
- Resources like Excel, file, Database connections, etc. should be released/closed immediately after use to avoid memory and performance-related errors.
- File, folder and SharePoint connectivity should be checked in code before accessing or writing to files. Locks also need to be ensured to avoid accidental damage from multiple robots working on the same items.
- It is suggested to add comments before every core business-logic block.
- Make sure to add timeouts to database operations such as inserts and deletes, and ensure connections are properly closed after each operation.
- Implement a screenshot-capture mechanism on exceptions wherever privacy and customer-sensitive information are not at risk of being exposed.
- Dynamic waits should be used instead of static waits whenever possible
- Customer PII Sensitive information should never be logged at any stage
- Passwords should always be of type password (not of type text) to avoid leaking credentials in plain text.
- Data types and the exposure of variables should be used effectively to avoid variables being overridden.
- The code stage adds the power and flexibility of a programming language. Make sure that external references and namespace imports are used to include libraries of .NET functions that exist in other files, for example an SQL Server or MySQL database library, or perhaps a library for machine learning.
- The code stage has a close relationship with the global code declared as part of the business object. Make sure that global functions, DLL references, etc. are kept on the Global Code tab so that they are accessible in every code stage within your object.
- Runtime resources should be allowed a breathing period of 30 minutes, and a weekly or fortnightly restart should be scheduled; starting the session afresh resolves many memory and performance issues.
- Standard operating procedures should include common definitions for patch severity, and common processes and lead times for deployment, so that patching of RPA runtime resources can be planned accordingly.
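The database guideline above (explicit timeout, guaranteed close) looks like this as a sketch. sqlite3 stands in for whatever database the process actually uses, and the table and columns are illustrative assumptions.

```python
import sqlite3

def insert_with_timeout(db_path, case_id, status):
    """Insert an audit record with an explicit lock timeout and a
    guaranteed connection close, even if the insert fails."""
    conn = sqlite3.connect(db_path, timeout=5.0)  # wait at most 5 s on locks
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "CREATE TABLE IF NOT EXISTS audit (case_id TEXT, status TEXT)"
            )
            conn.execute(
                "INSERT INTO audit (case_id, status) VALUES (?, ?)",
                (case_id, status),
            )
        return conn.total_changes  # rows written on this connection
    finally:
        conn.close()  # never leak the connection

insert_with_timeout(":memory:", "CASE-001", "Completed")
```

The `try/finally` is what prevents the slow leak of connections that shows up as memory and performance problems only after the bot has run for days.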
Over the past decade, many maintainability challenges of RPA tools and technology have been simplified through the design principles and best-practice guidelines set by CoEs. This has helped improve the average handling time of bots and directly improves your return on investment (ROI) by saving control-room operation costs.
Hence, it is important to apply efficient practices while designing the process and architecture of RPA bots.
If you thoroughly review the code and ensure that the points above are covered in your process design, it will help you build a highly configurable, secure and reliable bot for your business process by –
- Improving overall code quality
- Ensuring the code adheres to pre-defined coding standards
- Ensuring that the code is easy to maintain and is readable
- Ensuring that consistent exception handling standards are followed
Thanks for reading.