Custom Underwriting Workflows
Overview
The Risk Modeler API facilitates the management of end-to-end workflows by enabling risk management organizations to define data processing pipelines as custom workflows.
A custom workflow is a mechanism that enables you to package and manage multiple workflow jobs as a single request. Each workflow job is submitted and processed by the workflow engine separately and in the order specified in the workflow. For detailed information, see Workflow Jobs and Operations.
Custom workflows make it practical to manage end-to-end processes that incorporate multiple workflow jobs. A custom workflow defines a data processing pipeline: it specifies the order of operations, manages all messaging, and passes the output of each operation to the next operation in the workflow. Clients can submit a complete workflow in a single API request and do not need to poll the platform for the status of each workflow job.
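To make the pipeline idea concrete, here is an illustrative sketch (not the platform's actual scheduler): each operation in a workflow carries a label and a dependsOn list, and the engine runs an operation only after everything it depends on has completed. The labels below match the example later in this guide; the ordering logic itself is an assumption for illustration.

```python
# Illustrative only: resolve an execution order for a workflow's
# operations so that every operation runs after its dependencies.
from graphlib import TopologicalSorter

def execution_order(operations):
    # Map each operation label to the set of labels it depends on.
    graph = {op["label"]: set(op.get("dependsOn", [])) for op in operations}
    return list(TopologicalSorter(graph).static_order())

workflow = [
    {"label": "ExposureBatch", "dependsOn": []},
    {"label": "GEOHAZ", "dependsOn": ["ExposureBatch"]},
    {"label": "Demo_profile", "dependsOn": ["GEOHAZ"]},
]
order = execution_order(workflow)
```

Because the dependencies here form a simple chain, the resolved order is ExposureBatch, then GEOHAZ, then Demo_profile.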
Step 1: Authenticate client
The Intelligent Risk Platform restricts access to protected API resources by means of security credentials.
A client application must pass valid security credentials in every request it makes to the API. These credentials enable the platform to authenticate the identity of the client application and confirm that the client application is authorized to access and leverage the requested resources. For details, see Authentication and Authorization.
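The headers below can be sketched as a small helper. The header names follow the curl examples in this guide; the token and resource group values are placeholders, not real credentials.

```python
# Sketch of the headers a client sends with every Risk Modeler request.
def build_headers(api_key, resource_group_id=None):
    headers = {
        "Authorization": api_key,          # security credential (placeholder)
        "accept": "application/json",
        "content-type": "application/json",
    }
    # UnderwriteIQ requests also carry the resource group ID header.
    if resource_group_id is not None:
        headers["X-Rms-Resource-Group-Id"] = resource_group_id
    return headers

headers = build_headers("XXXXXXXXXX", resource_group_id="rg-123")
```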
Step 2: Submit custom workflow
The Process custom workflow resource enables you to define workflows that consist of multiple workflow jobs called operations. The resource supports two types of custom workflows: account workflows and portfolio workflows.
In this step, we'll define the outline of a custom workflow for managing account exposures. The workflow creates and updates a batch of account exposures, geohazards those accounts, and analyzes them to generate ELT analysis results.
curl --request POST \
  --url https://{host}/riskmodeler/v1/workflows \
  --header "Authorization: XXXXXXXXXX" \
  --header "accept: application/json" \
  --header "content-type: application/json" \
  --header "X-Rms-Resource-Group-Id: {resource_group_id}" \
  --data "{exposure_data}"
When using UnderwriteIQ, you must also specify the resource group ID (X-Rms-Resource-Group-Id) key in the request header. The resource group ID enables the Intelligent Risk Platform to correctly allocate computing capacity to the business units within a tenant's organization. For details, see Resource Groups.
All other parameters are specified in the request body. The workflow is defined by a unique name (required) and multiple operations that are defined in the operations array.
The following snippet shows the basic structure of the request package. This request defines five operations; each operation is identified by a unique label. The first operation (ExposureBatch) defines an account with four child locations. Subsequent operations (ExposureSummary, GEOHAZ, Demo_profile, and LocationResults) perform account-level analysis of those locations.
{
  "name": "UIQ Demo Template",
  "operations": [
    {
      "continueOnFailure": true,
      "dependsOn": [],
      "input": {
        "accounts": [
          {
            "name": "NAEQ-JSON",
            "number": "NAEQ-JSON",
            "label": "DemoAccount1",
            "locations": [
              { location_object1 },
              { location_object2 },
              { location_object3 },
              { location_object4 }
            ]
          }
        ]
      },
      "label": "ExposureBatch",
      "operation": "/v3/exposurebatches?datasource=edmName"
    },
    {
      "continueOnFailure": true,
      "dependsOn": [
        "ExposureBatch"
      ],
      "input": { ... },
      "label": "ExposureSummary",
      "operation": "/v2/accounts/{{$.ExposureBatch.output.accounts.[?(@.label == 'DemoAccount1')].id}}/summary_report?datasource={{EdmName}}"
    },
    {
      "continueOnFailure": false,
      "dependsOn": [
        "ExposureBatch"
      ],
      "input": [ ... ],
      "label": "GEOHAZ",
      "operation": "/v2/accounts/{{$.ExposureBatch.output.accounts.[?(@.label == 'DemoAccount1')].id}}/geohaz?datasource={{EdmName}}"
    },
    {
      "continueOnFailure": true,
      "dependsOn": [
        "GEOHAZ"
      ],
      "input": { ... },
      "label": "Demo_profile",
      "operation": "/v2/accounts/{{$.ExposureBatch.output.accounts.[?(@.label == 'DemoAccount1')].id}}/process"
    },
    {
      "continueOnFailure": true,
      "dependsOn": [
        "Demo_profile"
      ],
      "input": { ... },
      "label": "LocationResults",
      "operation": "/v2/exports"
    }
  ]
}
This snippet omits the details of individual operations. In an actual request, the four location objects that constitute the bulk of the input in the ExposureBatch operation would specify all of the data about those locations. For detailed information on location object attributes, see Create location.
Postman Collection
Moody's makes sample code available to you in a Postman Collection that you can download from our GitHub repo: https://github.com/RMS/rms-developers/releases/tag/2023-04-uw
On success, the operation returns a 202 Accepted response and adds an EXPOSURE_BATCH_EDIT job to the workflow engine queue. The Location response header specifies the job ID as part of a URL that you can use to track the status of the job, e.g. https://{host}/riskmodeler/v1/workflows/9034518.
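If your client needs the job ID on its own, it can be taken from the end of the Location header URL. A minimal sketch, assuming the URL shape shown above (the host below is a placeholder):

```python
# Sketch: extract the workflow job ID from a Location response header.
from urllib.parse import urlsplit

def workflow_id_from_location(location):
    # The job ID is the final path segment of the Location URL.
    return urlsplit(location).path.rstrip("/").rsplit("/", 1)[-1]

wid = workflow_id_from_location(
    "https://api.example.com/riskmodeler/v1/workflows/9034518")
```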
Step 3: Poll job status
The Get job status operation enables you to view the status of the EXPOSURE_BATCH_EDIT job and, once the job is complete, provides a link to the completed exposure summary report.
The workflow ID is specified in the endpoint path.
curl --location --request GET "https://{host}/riskmodeler/v1/workflows/9034518" \
--header "Authorization: {api_key}"
If successful, the response returns a 200 status code and specifies the workflowId of the job in the Location response header. You can poll this URL to track the status of the workflow job.
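A polling client can be sketched as below. The fetch_status callable stands in for a GET on the workflow URL; real code would make an HTTP request and sleep between polls. The terminal status values here are assumptions chosen for illustration, not a documented list.

```python
# Sketch of a status-polling loop for a workflow job.
import time

def poll(fetch_status, interval=0.0, max_polls=30):
    for _ in range(max_polls):
        status = fetch_status()
        # Assumed terminal states; check the API's actual status values.
        if status in ("FINISHED", "FAILED"):
            return status
        time.sleep(interval)
    raise TimeoutError("workflow did not finish in time")

# Stubbed sequence standing in for successive GET responses.
responses = iter(["QUEUED", "RUNNING", "FINISHED"])
result = poll(lambda: next(responses))
```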