Testing and Model Simulation Workbench

Referred to as either TAMS or simply the ‘Workbench’, this product supports the discrete, agile ‘business rules development cycle’ that is normally associated with use of the IDIOM Decision Manager.

Customers seek a business rules or similar low-code engine for many reasons:

  • to empower their Subject Matter Experts [SME] and to reduce the number of handovers required to enact change;
  • to expand business product capabilities while shortening development cycles and making them more efficient (more business change for less work);
  • to reduce risk, by providing more certainty that business outcomes will be met prior to deployment;
  • to make workflow more flexible and agile;
  • and to enable many other process changes that reduce the time, cost, and risk of making business changes in general.

A major impediment to the above is the traditional Software Development Life Cycle [SDLC].


IDIOM promotes the idea that a brand-new Business Rules Development Cycle [BRDC] can be implemented both within and alongside the SDLC. The IDIOM Decision Manager already supports such a cycle; the Workbench extends and enriches it.


The following videos are available:

  • Introduction to the TAMS Workbench.mp4 [5 Minutes]
  • Detailed instructions for TAMS Workbench set-up.mp4 [8 Minutes]
  • Information guidelines for TAMS Workbench Process Runs.mp4 [5 Minutes]


The Workbench operates independently of the standard system environments (e.g. development, test, UAT, production). Instead, it works with champion and challenger rules strategies. The Workbench supports modelling of changes to business rules (and any associated logic or algebra); when iterative modelling delivers a new candidate strategy, a full regression test, including masked differencing and other validations, can easily be run, even across a full set of production instances at any scale.
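The Workbench performs this masked differencing for you; purely as an illustration of the underlying idea, here is a minimal Python sketch that compares champion and challenger output xml while ignoring a set of known-to-differ element paths. The file names, element paths, and the MASKED set are hypothetical, and the sketch assumes simple documents with unique sibling tags; it is not the Workbench's implementation.

```python
import xml.etree.ElementTree as ET

# Hypothetical element paths expected to differ (e.g. timestamps, run ids).
MASKED = {"./RunInfo/Timestamp", "./RunInfo/RunId"}

def leaf_values(root):
    """Map each leaf element's path to its text value (assumes unique sibling tags)."""
    values = {}
    def walk(elem, path):
        children = list(elem)
        if not children:
            values[path] = (elem.text or "").strip()
        for child in children:
            walk(child, f"{path}/{child.tag}")
    walk(root, ".")
    return values

def masked_diff(champion_file, challenger_file):
    """Return {path: (champion_value, challenger_value)} for unmasked differences."""
    a = leaf_values(ET.parse(champion_file).getroot())
    b = leaf_values(ET.parse(challenger_file).getroot())
    return {
        path: (a.get(path), b.get(path))
        for path in sorted(set(a) | set(b))
        if path not in MASKED and a.get(path) != b.get(path)
    }

if __name__ == "__main__":
    # Report every unmasked difference between the two outputs.
    for path, (old, new) in masked_diff("champion.xml", "challenger.xml").items():
        print(f"{path}: {old!r} -> {new!r}")
```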


Following testing in the Workbench, the decision models are assured to be complete, correct, and consistent. The IDIOM decision model ‘candidates’ can then be selectively deployed directly from the Workbench into the standard system environments for system-level testing at whatever level is deemed appropriate for the process changes. At this point, the decision model artefacts rejoin the standard SDLC ‘system of record’ code-base, with a complete manifest of changes, documentation, source code, executables, and related artefacts.


For instance, if there is no change to the model interface data, then the first target environment is likely to be UAT, followed by a simple smoke test in production. This makes it plausible to implement a robust, risk-averse business rules change cycle that is measured in days, with daily releases.


The Workbench fills the gap between the on-board testing capabilities of the IDIOM Decision Manager and execution of the same rules in a production setting, bypassing the traditional SDLC process.


When host system changes are required to support the business rules changes, it is a simple matter to align the Workbench testing and release cycles with the SDLC system testing timelines.

Where it Fits

The table below compares the Workbench with IDIOM's other execution and testing tools to show where it fits in:

Feature | Inline Debugger | Test Executive | IDM Runner | The Workbench
Application | Decision Manager | Decision Manager | Command-line utility | Stand-alone Windows application
Controlled execution with breakpoints and variable watch | ✓ | – | – | –
Multiple input and output schemas | ✓ | ✓ | ✓ | 1)
Execution of the control model and transaction models | – | – | ✓ 2) | ✓ 2)
Execution of a single model for multiple Context xml files | – | ✓ | ✓ | ✓
Comparison of xml output produced by different sources (models, applications, exports, ...) | – | 3) | – | ✓
Capable of executing two models simultaneously | – | 4) | – | ✓
Capable of excluding known differences from the comparison | – | – | – | ✓
Capable of comparing the output of a decision model produced using different configuration files | – | – | – | ✓
Capable of creating an xml file of the differences between output xml files | – | 5) | – | ✓
Capable of executing an Alerts model | – | – | – | ✓
Capable of executing a Summary model | – | – | – | ✓

(✓ = supported, – = not supported; numbered notes below.)

1) The Workbench handles one input and one output schema, plus the configuration schema, any number of work schemas (neither input nor output), and the Alerts and Summary schemas.
2) The Decision Manager tools only run the control model itself. The other two tools run the control model plus the models recorded in the TransactionControl xml file.
3) Via the Test Executive's Expected Output feature.
4) The Test Executive can compare the output from different models, but these models must be executed separately. The Workbench executes two models simultaneously.
5) The Test Executive can compare a model's output with expected output, but the differences are not saved to an xml file; they are only visible within the Test Executive's UI.
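Note 5 above observes that only the Workbench persists comparison differences to an xml file. As a purely illustrative companion to the masked-diff sketch earlier (and not the Workbench's actual difference format; all element names here are assumptions), such a file could be produced along these lines:

```python
import xml.etree.ElementTree as ET

def write_differences(diffs, out_file):
    """Write {path: (champion_value, challenger_value)} differences as an xml file."""
    root = ET.Element("Differences")
    for path, (old, new) in sorted(diffs.items()):
        diff = ET.SubElement(root, "Difference", Path=path)
        ET.SubElement(diff, "Champion").text = "" if old is None else old
        ET.SubElement(diff, "Challenger").text = "" if new is None else new
    ET.ElementTree(root).write(out_file, encoding="utf-8", xml_declaration=True)

# Example, reusing masked_diff from the earlier sketch:
# write_differences(masked_diff("champion.xml", "challenger.xml"), "differences.xml")
```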


Example Use Cases

Examples of test scenarios for which the Workbench is ideal are:


Note: Because the Workbench is used heavily by our insurance customers, we usually use insurance-related examples when discussing it.

  • You want to adjust your business policies to improve business outcomes, or perhaps to meet changing market conditions. With the Workbench, you can experiment with different approaches and analyze the outcomes at the individual transaction level.
  • You have created a new version of an existing model that calculates a different premium, implements a different method to approve or decline applications, etc. You would like to verify that your improvements have not resulted in any changes other than the known/expected differences. The Workbench makes this easy, using models that have already been released as executables to development, test, and production environments.
  • Your model uses a Configuration xml file in which you record rates, fees, and ages. You would like to analyze the effect of configuration changes on the model's output, for example the approval rate of applications if the qualifying age is reduced from 21 to 18 years (see the sketch after this list).
  • You have a working model and a large number of input xml files. You would like to run the model and analyze its output to calculate totals, percentages, approval rates, distributions, etc.
  • You are developing a new model or a new version, and you want to compare its output with known or expected output. The expected output has not necessarily been produced by a decision model; it may have been created by another system or derived from a database export.
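As an illustration of the configuration-analysis and aggregation scenarios above, the following minimal Python sketch compares approval rates across two folders of model output xml, one produced with each configuration. The folder names and the <Decision> element are hypothetical assumptions, not IDIOM schema elements.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def approval_rate(folder):
    """Fraction of output xml files whose Decision element reads 'Approved'."""
    files = list(Path(folder).glob("*.xml"))
    approved = sum(
        1 for f in files
        if ET.parse(f).getroot().findtext(".//Decision") == "Approved"
    )
    return approved / len(files) if files else 0.0

# Hypothetical folders: one run with the current configuration (qualifying
# age 21), one with the candidate configuration (qualifying age 18).
base = approval_rate("output_age21")
trial = approval_rate("output_age18")
print(f"Approval rate: {base:.1%} -> {trial:.1%} ({trial - base:+.1%})")
```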