Custom Modeling Workflows
Overview
The Risk Modeler API facilitates the management of end-to-end workflows by enabling risk management organizations to define a data processing pipeline as a single user-defined workflow.
A user-defined workflow is a mechanism that enables you to package and manage multiple workflow jobs as a single request. Each workflow job is submitted and processed by the workflow engine separately and in the order specified in the workflow. For detailed information, see Workflow Engine.
User-defined workflows facilitate the management of end-to-end processes that incorporate multiple workflow jobs by defining a data processing pipeline. The user-defined workflow specifies the order of operation, manages all messaging, and passes the output of one operation to the next operation in the workflow. Clients can submit complete workflows with a single API request and do not need to poll the platform for the status of each workflow job.
This recipe provides an overview of a user-defined workflow that manages an end-to-end catastrophe modeling process. It defines a single request that adds new account exposures, prepares those exposures for analysis, and calculates a risk analysis of those exposures.
Step 1: Define user-defined workflow
The Process user-defined workflow resource enables you to define workflows that consist of multiple workflow jobs called operations. The resource supports two types of user-defined workflows: account workflows and portfolio workflows.
In this step, we'll define the outline of a user-defined workflow for managing account exposures. This workflow consists of three operations: an operation that creates and updates a batch of account exposures, an operation that geohazards these accounts, and an operation that analyzes these accounts and generates ELT analysis results.
All parameters are specified in the request body. The workflow is defined by a unique name (required) and multiple operations that are defined in the operations array.
{
  "name": "My custom workflow",
  "operations": [
    {
      "operation": "/v3/exposurebatches",
      "label": "BatchUpdateAccount",
      "input": {...}
    },
    {
      "operation": "/v2/accounts/{id}/geohaz",
      "label": "GeohazAccount",
      "dependsOn": ["BatchUpdateAccount"],
      "input": {...}
    },
    {
      "operation": "/v2/accounts/{id}/process",
      "label": "AnalyzeAccount",
      "dependsOn": ["BatchUpdateAccount", "GeohazAccount"],
      "input": {...}
    }
  ]
}
Our user-defined workflow ("My custom workflow") consists of three operations (a batch operation, a geohaz operation, and a process operation) that are executed in sequence. (We can omit the optional type parameter because we are defining operations by endpoint rather than keyword.) The operations array specifies a list of operations to be submitted to the workflow engine for processing. Operations may be defined by the following request parameters:
- The operation parameter identifies the API resource that is called by the operation. The operation may be defined using either a keyword (geohaz, group, process, and batch) or the endpoint of an API resource. In this tutorial, we define all operations using an endpoint path. There are a number of compelling reasons for preferring this option, which we will discuss later on.
- The label attribute uniquely identifies the operation within the workflow. The label enables you to specify the order in which operations are submitted to the workflow engine and to reference the output of a previous operation (such as a newly created account or portfolio) using operation path variables. We look at each of these scenarios in detail below.
- The dependsOn attribute specifies (by label value) an array of operations to be run before the current operation. This enables you to specify the order in which operations are executed within a user-defined workflow. Note that we have specified that the "GeohazAccount" operation must run before the "AnalyzeAccount" operation.
- The input attribute is an object that defines operation type-specific data. Depending on the operation value specified, the input object defines the request body needed for the corresponding operation. In subsequent steps, we will define the appropriate input objects for the batch, geohaz (geohazard account), and process (analyze account) operations.
The values provided in the outline above are placeholders that we will replace when we define each of the operations in our user-defined workflow. Now that you have the basic structure, we'll define our first operation.
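Before moving on, it is worth seeing how a finished definition is sent to the platform. The following sketch is illustrative only: the BASE_URL value, the POST /v2/workflows path, and the Authorization header are assumptions, so consult the Process user-defined workflow reference for the actual endpoint and authentication scheme.

import requests

BASE_URL = "https://api.example.com"  # hypothetical host
HEADERS = {
    "Authorization": "YOUR_API_KEY",  # authentication scheme assumed
    "Content-Type": "application/json",
}

workflow = {
    "name": "My custom workflow",
    "operations": [
        # ...the three operations defined in this recipe...
    ],
}

# Submit the entire pipeline in a single request; the workflow engine
# runs each operation in the order implied by dependsOn.
response = requests.post(f"{BASE_URL}/v2/workflows",  # assumed path
                         json=workflow, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.json())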
Step 2: Define batch operation
In this step, we're going to define a batch operation that updates a group of account exposures by adding a new location exposure to each of these accounts. This step demonstrates how batch operations facilitate the management of large numbers of exposures by enabling you to easily define updates to exposures and the relationships between exposures.
In a user-defined workflow, a batch operation is an implementation of the Manage exposures in batch resource. The resource enables you to do the following:
- Create, update, or delete large numbers of exposures by means of a batch process. The resource supports the management of portfolio, account, location, policy, and treaty exposures. These exposures may be defined independently of one another or nested within a single structure.
- Define parent-child relationships between new or updated exposures. Exposures and the relationships between these exposures (including parent-child relationships) are defined in a JSON structure in the request package.
You will recall that in Step 1, we created an outline of a batch operation that defined the operation and label request parameters but omitted the input object. We'll now define this object and demonstrate how you can use this structure to define nested exposures:
Batch input object
For batch operations, the input object may define five arrays: a portfolios array, an accounts array, a locations array, a policies array, and a treaties array. Each array enables you to define multiple exposures of a particular type, e.g. the accounts array may specify multiple account exposure definitions. If we were to define three accounts, the accounts array would contain three objects that define the attributes of three account exposures:
{
  "name": "My custom workflow",
  "type": "AccountWorkflow",
  "operations": [
    {
      "operation": "/v3/exposurebatches?datasource={{datasourceName}}",
      "label": "ExCustomWorkflow",
      "input": {
        "portfolios": [],
        "accounts": [
          { #account1 },
          { #account2 },
          { #account3 }
        ],
        "locations": [],
        "policies": [],
        "treaties": []
      }
    }
  ]
}
Each object that defines an exposure represents a distinct operation that creates, updates, or deletes an exposure. For the time being we've sketched out objects for managing three accounts ("account1", "account2", and "account3"). In this scenario, you would create three new accounts, but these accounts would not be related to any existing portfolios.
Nested exposures
If you want to define the relationships between exposures, you can nest exposure definitions in a hierarchical structure that efficiently defines parent-child relationships between the exposures. (We could define separate arrays of account exposures and location exposures. If we define location exposures within an account, the locations are associated with that account.) Where parent-child relationships exist between exposures (e.g. between a portfolio and associated accounts), child exposures may be defined within the definition of the parent exposure.
For example, you may define an array of location objects within an account object:
{
  ...
  "input": {
    "portfolios": [],
    "accounts": [
      {
        #account1
        "locations": [
          { #location1-1 },
          { #location1-2 }
        ]
      },
      { #account2 },
      { #account3 }
    ],
    "locations": [],
    "policies": [],
    "treaties": []
  }
  ...
}
This structure enables you to nest operations that create, update, or delete exposures within operations that define a parent exposure (e.g. accounts within portfolios, locations within accounts, and so on all the way through the hierarchy), as sketched below.
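To illustrate nesting through the whole hierarchy, here is a structural sketch (as a Python dict, with exposure attributes elided) of a batch input in which a portfolio operation creates an account that in turn creates a location. The operationType and label keys are explained in the next section; the placeholders here are not the resource's full schema.

batch_input = {
    "portfolios": [
        {
            "operationType": "INSERT",
            "label": "createPortfolio1",
            # ...portfolio attributes...
            "accounts": [
                {
                    "operationType": "INSERT",
                    "label": "createAccount1",
                    # ...account attributes...
                    "locations": [
                        {
                            "operationType": "INSERT",
                            "label": "account1-location1",
                            # ...location attributes...
                        }
                    ],
                }
            ],
        }
    ],
    "accounts": [],
    "locations": [],
    "policies": [],
    "treaties": [],
}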
Batch operation syntax
Now that you understand the basic structure of a batch operation, we will turn our attention to defining operations that create, update, or delete exposures.
Every object that you define within an exposure array represents a distinct batch operation that creates, updates, or deletes a specific exposure. The object specifies the operation type (INSERT, UPDATE, or DELETE), a unique operation label, and all exposure attributes. Depending on the operation type specified and the type of exposure defined, different attributes may be required.
Let's focus on the accounts array within the input object. If you wanted to add or update locations within an existing account, you could define a locations array within the account object. The locations array would contain objects that each define an operation for managing (creating, updating, or deleting) a location exposure:
{
  "accounts": [
    {
      "operationType": "INSERT",
      "label": "createAccount1",
      "locations": [
        {
          "operationType": "INSERT",
          "label": "account1-location1",
          ...
        },
        {
          "operationType": "UPDATE",
          "label": "account1-location2",
          ...
        },
        {...}
      ],
      "id": 24,
      "name": "Account_Update",
      "number": "Account_Num_Update",
      "description": "Test_Location_Batch_Update",
      "createDate": "2022-06-21T23:47:58.076Z",
      "stampDate": null
    }
  ]
}
Note that the syntax for each nested operation is identical. It consists of an operationType, a label, and the attributes that define the exposure. The required attributes depend on the value of the operationType parameter:
- The INSERT operation type creates a new exposure. An operation that inserts a new account exposure is equivalent to a distinct Create account request (POST /v2/accounts) and specifies an object that matches the request package of that resource.
- The UPDATE operation type updates an existing exposure. An operation that updates an account exposure is equivalent to a distinct Update account request (PUT /v2/accounts/{id}) and specifies an object that matches the request package of that resource.
- The DELETE operation type deletes an existing exposure. An operation that deletes an account exposure is equivalent to a distinct Delete account request (DELETE /v2/accounts/{id}). A sketch of a possible DELETE operation follows this list.
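The example above shows INSERT and UPDATE operations. For completeness, here is a hedged sketch of what a DELETE operation might look like; this recipe does not show the attributes a DELETE requires, so the assumption that only the exposure's id is needed should be verified against the Manage exposures in batch reference.

# Hypothetical DELETE operation nested in a locations array.
# Assumption: operationType, label, and the target exposure's id
# are sufficient -- verify against the resource reference.
delete_location_op = {
    "operationType": "DELETE",
    "label": "account1-location3",   # illustrative label
    "id": 1101,                      # hypothetical location ID
}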
The label uniquely identifies an operation within a workflow and serves two important purposes. As we have seen, it enables you to specify the order of operations within a workflow. It also enables us to reference the output of a previously run operation. In some operations we may need to reference an object (such as an account) that was created earlier in the workflow. We can do that using the value of the label attribute.
In the next step, we will use this label to reference the output of a previous task.
Batch Process Limits
The Manage exposures in batch resource enables you to process large numbers of exposures in batch; consequently, the request package may be quite large. The maximum supported payload is 5 MB, and request packages that define up to 1000 exposures may be submitted in JSON format (Content-Type: application/json). The batch operation type enables you to add, update, or delete exposures "in batch", that is, multiple exposures in a single request. Depending on the attributes specified in the request body, you can define portfolios, accounts, locations, policies, and treaties and define the relationships between these exposures. A client-side check of these limits is sketched below.
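Because the engine enforces these limits, it can be worth validating a payload before submission. Below is a minimal client-side sketch based only on the 5 MB and 1000-exposure limits quoted above; the count_exposures helper is illustrative, not part of the API.

import json

MAX_PAYLOAD_BYTES = 5 * 1024 * 1024  # 5 MB payload limit quoted above
MAX_EXPOSURES = 1000                 # JSON-format exposure limit quoted above

def count_exposures(node):
    """Recursively count exposure objects, including children nested
    inside parent exposures (accounts in portfolios, and so on)."""
    arrays = ("portfolios", "accounts", "locations", "policies", "treaties")
    total = 0
    if isinstance(node, dict):
        for key in arrays:
            for child in node.get(key, []):
                total += 1 + count_exposures(child)
    return total

def validate_batch_input(batch_input):
    payload = json.dumps(batch_input).encode("utf-8")
    if len(payload) > MAX_PAYLOAD_BYTES:
        raise ValueError(f"payload is {len(payload)} bytes; the limit is 5 MB")
    if count_exposures(batch_input) > MAX_EXPOSURES:
        raise ValueError("more than 1000 exposures in a JSON request package")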
Step 3: Define geohazard operation
In this step, we're going to define a geohaz operation that prepares the accounts we updated in Step 2 for risk analysis processing.
You will use operation path variables to identify the newly created account exposures. An operation path variable uses a JSON query string to retrieve data from the output of a previous operation. Every operation is defined by a unique label, and this label may be used to query the output of that operation in subsequent operations.
For example, you may define a batch operation that creates a new account (labeled "createAccount1" in Step 2). A subsequent geohaz operation ("geohazJob") in the same user-defined workflow may use an operation path variable to set the account ID in the operation path:
{
  "operation": "/v1/accounts/{{$..[?(@.label == 'createAccount1')].id}}/geohaz/?datasource={{datasourceName}}",
  "label": "geohazJob",
  ...
}
The {{$..[?(@.label == 'createAccount1')].id}} variable defines a query for retrieving the output generated by the createAccount1 operation. During processing, the workflow engine "expands" this query to retrieve the unique ID number for this account using the path (label.output.id). The path variable is a simple JSON query string enclosed in double curly braces. The sketch below illustrates this expansion.
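To make the expansion concrete, the following sketch mimics what the engine does with a path variable. It is an illustration of the engine's behavior, not client code you need to write, and the completed_ops structure is an assumption made for the example.

import re

# Hypothetical record of completed operations: each entry carries the
# operation's label and its output, which is how the query addresses it.
completed_ops = [
    {"label": "createAccount1", "output": {"id": 42}},
]

def expand_path(path):
    """Resolve {{$..[?(@.label == 'X')].id}} variables by finding the
    operation labeled X and substituting its output id into the path."""
    pattern = r"\{\{\$\.\.\[\?\(@\.label == '([^']+)'\)\]\.id\}\}"
    def resolve(match):
        label = match.group(1)
        for op in completed_ops:
            if op["label"] == label:
                return str(op["output"]["id"])
        raise KeyError(f"no completed operation labeled {label!r}")
    return re.sub(pattern, resolve, path)

print(expand_path("/v1/accounts/{{$..[?(@.label == 'createAccount1')].id}}/geohaz/"))
# -> /v1/accounts/42/geohaz/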
The remainder of the object specifies a standard geohazard request object. For details, see Geohazard account:
{
  "operation": "/v1/accounts/{{$..[?(@.label == 'createAccount1')].id}}/geohaz/?datasource={{datasourceName}}",
  "label": "geohazJob",
  "dependsOn": ["ExCustomWorkflow"],
  "input": [
    {
      "name": "geocode",
      "type": "geocode",
      "engineType": "RL",
      "version": "18.1",
      "layerOptions": {
        "skipPrevGeocoded": false,
        "aggregateTriggerEnabled": "false",
        "geoLicenseType": "0"
      }
    },
    {
      "name": "earthquake",
      "type": "hazard",
      "engineType": "RL",
      "version": "18.1",
      "layerOptions": {
        "skipPrevHazard": false,
        "overrideUserDef": false
      }
    },
    { ... }
  ]
}
Now that the exposures have been geohazarded, you can analyze these accounts in bulk using the process operation.
Step 4: Define process operation
In this step, we'll define the final operation in our user-defined workflow. We have already created an account and its child location exposures and geocoded that account and its location exposures. Now we will "process" that account to generate an analysis of risk to those exposures.
The process operation analyzes the account or portfolio. For accountWorkflow jobs, the operation analyzes the account.
As with the previous step in the workflow, we will need to specify an operation path variable to identify the account that we created earlier in the workflow.
{
  "operation": "/v1/accounts/{{$..[?(@.label == 'createAccount1')].id}}/process/?datasource={{datasourceName}}",
  "label": "analyzeAccount",
  "dependsOn": ["ExCustomWorkflow", "geohazJob"],
The input object specifies an array of objects that define all of the values required to perform an analysis job, including the exposureType, edm, and currency parameters. For details, see Analyze account:
"input":
[
"exposureType": "ACCOUNT",
"edm": "my_edm",
{
"currency": {
"code": "USD",
"scheme": "RMS",
"vintage": "RL18",
"asOfDate": "2020-03-01"
},
"treaties": [ 0 ],
"globalAnalysisSettings": {
"franchiseDeductible": true,
"minLossThreshold": 0,
"treatConstructionOccupancyAsUnknown": true,
"numMaxLossEvent": 0
},
"modelProfileId": 14172,
"eventRateSchemeId": 168,
"jobName": "string",
"id": 14,
"treaties": [],
"outputProfileId": 0
"outputSetting": {
"metricRequests": [
{
"granularity": [
"Portfolio"
],
"metricType": "STATS",
"perspective": "GU"
},
{
"granularity": [
"Portfolio"
],
"metricType": "EP",
"perspective": "GU"
},
{
"granularity": [
"Portfolio"
],
"metricType": "LOSS_TABLES",
"perspective": "GU"
},
{
"granularity": [
"Portfolio"
],
"metricType": "STATS",
"perspective": "GR"
},
{
"granularity": [
"Portfolio"
],
"metricType": "EP",
"perspective": "GR"
},
{
"granularity": [
"Portfolio"
],
"metricType": "LOSS_TABLES",
"perspective": "GR"
},
{
"granularity": [
"Portfolio"
],
"metricType": "STATS",
"perspective": "RL"
},
{
"granularity": [
"Portfolio"
],
"metricType": "EP",
"perspective": "RL"
},
{
"granularity": [
"Portfolio"
],
"metricType": "LOSS_TABLES",
"perspective": "RL"
}
]
}
}
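With all three operations defined, the complete workflow is submitted once and tracked as a single workflow job rather than polled operation by operation. As a closing sketch, and assuming a hypothetical GET /v2/workflows/{id} status resource and FINISHED/FAILED terminal states (consult the Workflow Engine documentation for the actual contract), a client might wait for completion like this:

import time
import requests

BASE_URL = "https://api.example.com"  # hypothetical host
HEADERS = {"Authorization": "YOUR_API_KEY"}  # authentication scheme assumed

def wait_for_workflow(workflow_id, interval=10.0):
    """Poll the single user-defined workflow job until it reaches a
    terminal state. Path and status values are assumptions."""
    while True:
        response = requests.get(f"{BASE_URL}/v2/workflows/{workflow_id}",
                                headers=HEADERS, timeout=30)
        response.raise_for_status()
        job = response.json()
        if job.get("status") in ("FINISHED", "FAILED"):
            return job
        time.sleep(interval)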