Import Exposures in Batch

Import large volumes of exposures and generate summary reports

Overview

Exposure Batch for UnderwriteIQ uses the Risk Modeler API to import batches of exposure data in support of underwriting workflows. Underwriters and other users who are primarily interested in account-level modeling benefit from the increased performance and usability of this process when importing exposure data.

Exposure Batch for UnderwriteIQ is fast and easy to use. Unlike MRI, which is designed to enable large-scale data migration projects and emphasizes data throughput, batch processing is designed to quickly import smaller batches of exposure data in a single request. Exposures are defined in a single JSON request package that describes the relationships between accounts, locations, policies, and treaties in a hierarchical structure. You do not need to upload and process multiple upload packages or define mappings between exposures.

Exposure Batch for UnderwriteIQ is best suited for importing exposures that are applicable to single accounts. To take advantage of its performance benefits, limit the size and scale of the request package:

  • Ensure that the request payload is no larger than 5 MB.
  • Ensure that the number of location exposures in the request is fewer than 1000 records.

Step 1: Authenticate client

The Intelligent Risk Platform restricts access to protected API resources by means of security credentials.

A client application must pass valid security credentials in every request it makes to the API. These credentials enable the platform to authenticate the identity of the client application and confirm that the client application is authorized to access and leverage the requested resources. For details, see Authentication and Authorization.
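
For example, every call in this procedure passes an API key in the Authorization request header, as in this sketch of the job status request used in Step 3:

# The Authorization header carries the API key on every Risk Modeler API request.
curl --location --request GET "https://{host}/riskmodeler/v1/workflows/9034518" \
    --header "Authorization: {api_key}"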

Step 2: Import exposures

The Manage exposures in batch resource enables you to create, update, or delete large numbers of exposures in a single request.

As with all requests that use Risk Modeler API resources, you must specify a host in the resource URL and provide a valid API key in the request header. The {host} variable identifies the environment hosting your UnderwriteIQ application instance (either api-euw1.rms.com or api-use1.rms.com), and the {api_key} variable specifies an API key token that is passed in the Authorization header.


curl --location --request POST "https://{host}/riskmodeler/v3/exposurebatches?datasource={myEDM}" \
    --header "Authorization: {api_key}" \
    --header "Content-Type: application/json" \
    --header "X-Rms-Resource-Group-Id: {resource_group_id}" \
    --data "{exposure_data}"

When using UnderwriteIQ, you must also specify the resource group ID (X-Rms-Resource-Group-Id) key in the request header. The resource group ID enables the Intelligent Risk Platform to correctly allocate computing capacity to the business units within a tenant's organization. For details, see Resource Groups.

The request body defines an object that consists of a set of nested arrays. The resource accepts a portfolios array, an accounts array, a locations array, a policies array, and a treaties array. Where parent-child relationships exist between exposures (for example, between an account and its associated locations), child exposures may be defined within the definition of the parent exposure.


Best Practice

For best results, omit the portfolios array and make the accounts array the top level of the exposure hierarchy. Within that array, limit the number of location exposures in the request to 1000 records or less.


Warning

Ensure that the request package is no larger than 5 MB. Request packages that define up to 1000 exposures may be submitted in JSON format (Content-Type: application/json).
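
Before submitting the request, you can sanity-check both limits locally. The following sketch assumes the request package has been saved as exposures.json and that the jq utility is installed; neither the file name nor the tooling is required by the API.

# Illustrative pre-flight checks (assumes exposures.json and jq; not required by the API).
wc -c < exposures.json                                    # payload size in bytes; keep below ~5 MB
jq '[.accounts[].locations[]?] | length' exposures.json   # total location records; keep below 1000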

Each array in the request package defines one or more operations that create, edit, or delete an exposure. An operation is defined by an optional operationType, a required number, and a required request body. The operationType specifies the action performed (INSERT, UPDATE, or DELETE) and defaults to INSERT; because we are adding new exposures here, this parameter is not required. The number uniquely identifies an operation.

The following snippet represents the basic structure of a request package that defines a single account object containing four location objects. Notice that because we are not adding multiple accounts that belong to a parent portfolio, we have omitted the portfolios array.

{
    "accounts": [
        {
            "operationType": "INSERT",
            "name": "NAEQ-JSON",
            "number": "NAEQ-JSON",
            "locations": [
                { location_object1 },
                { location_object2 },
                { location_object3 },
                { location_object4 }
            ]
        }
    ],
    ...
}
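
By way of comparison, a hypothetical UPDATE operation might reuse the same array structure, changing only the fields to be edited. The snippet below is an illustration only, not an excerpt from the API reference: it assumes the account is identified by its number, and the NAEQ-JSON-UPDATED name is a made-up value showing an edited attribute. Consult the Risk Modeler API reference for the full set of fields accepted by UPDATE and DELETE operations.

# Hypothetical illustration only: an UPDATE operation reusing the same array
# structure. It is assumed here that the account is identified by its number.
curl --location --request POST "https://{host}/riskmodeler/v3/exposurebatches?datasource={myEDM}" \
    --header "Authorization: {api_key}" \
    --header "Content-Type: application/json" \
    --header "X-Rms-Resource-Group-Id: {resource_group_id}" \
    --data '{
        "accounts": [
            {
                "operationType": "UPDATE",
                "number": "NAEQ-JSON",
                "name": "NAEQ-JSON-UPDATED"
            }
        ]
    }'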


Postman Collection

Moody's makes sample code available to you in a Postman Collection that you can download from our GitHub repo: https://github.com/RMS/rms-developers/releases/tag/2023-04-uw

On success, the operation returns a 202 Accepted response and adds an EXPOSURE_BATCH_EDIT job to the workflow engine queue. The Location response header specifies the job ID as part of a URL that you can use to track the status of the job, e.g. https://{host}/riskmodeler/v1/workflows/9034518.
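
One way to capture that workflow URL is to include the response headers in the curl output and filter for Location. This is a sketch only; it assumes the request package has been saved as exposures.json.

# Illustrative sketch: print the response headers and keep the Location header,
# which contains the workflow URL to poll in Step 3.
curl --silent --include --location --request POST "https://{host}/riskmodeler/v3/exposurebatches?datasource={myEDM}" \
    --header "Authorization: {api_key}" \
    --header "Content-Type: application/json" \
    --header "X-Rms-Resource-Group-Id: {resource_group_id}" \
    --data "@exposures.json" \
  | grep -i "^location:"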

Step 3: Poll job status

The Get job status operation enables you to view the status of the EXPOSURE_BATCH_EDIT job and provides a link to the exposure summary report once the job is complete.

The workflow ID is specified in the endpoint path.

curl --location --request GET 'https://{host}/riskmodeler/v1/workflows/9034518' \
--header 'Authorization: {api_key}'

If successful, the response returns a 200 status code and the details of the workflow job, including its current status. You can poll this operation to track the status of the workflow job until it completes.
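
If you prefer to poll programmatically, a loop along the following lines can work. This is a sketch only: it assumes the workflow response body includes a status field with terminal values such as FINISHED or FAILED, and that jq is available.

# Illustrative polling loop. Assumes a "status" field in the workflow response
# body with terminal values such as FINISHED or FAILED; adjust to the actual API.
WORKFLOW_URL="https://{host}/riskmodeler/v1/workflows/9034518"
while true; do
  STATUS=$(curl --silent --location --request GET "$WORKFLOW_URL" \
      --header "Authorization: {api_key}" | jq -r '.status')
  echo "Workflow status: $STATUS"
  case "$STATUS" in
    FINISHED|FAILED) break ;;
  esac
  sleep 10
done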