UnderwriteIQ Exposure Import

Understand options for importing exposure data in bulk

Overview

Bulk Edit for UnderwriteIQ leverages the Platform API to create batches of exposure data in support of underwriting workflows.

Underwriters and other users who are primarily interested in account-level modelling will benefit from the increased performance and usability of this process when importing exposure data. Exposure Batch for UnderwriteIQ is best suited for importing exposures that apply to a single account.

The Platform APIs support two workflows for adding large volumes of exposure data to the Intelligent Risk Platform.

  • Method 1: Exposure data is defined in the body of an Exposure Bulk Edit request. The request JSON defines the relationships between accounts, locations, policies, and treaties in a hierarchical structure.
  • Method 2: Exposure data is defined in a JSON file that is uploaded to a storage bucket on AWS, and the body of a Create Import Job request identifies the import folder that contains the uploaded file.

Exposure Batch for UnderwriteIQ is fast and easy to use. Unlike MRI and other import workflows, which are designed to support large-scale data migration projects and emphasize data throughput, batch processing is designed to quickly import smaller batches of exposure data in a single request.

Method 1: Create exposures in bulk

In this workflow, exposure data is defined in the body of an Exposure Bulk Edit request. The request JSON defines an object that consists of a set of nested arrays that define the relationships between accounts, locations, policies, and treaties in a hierarchical structure.

The Exposure Bulk Edit operation creates, updates, or deletes multiple exposures depending on the operation type (INSERT, UPDATE, or DELETE) specified in the request. The request body defines an object that consists of five arrays: a portfolios array, an accounts array, a locations array, a policies array, and a treaties array. Where parent-child relationships exist between exposures (e.g. between a portfolio and its accounts, or an account and its locations), child exposures may be nested within the parent exposure.

The request requires that the exposureId path parameter identify the EDM to which the exposures are to be added.

curl --request POST \
     --url https://api-euw1.rms.com/platform/riskdata/v1/exposures/444/bulk-edit \
     --header 'accept: application/json' \
     --header 'content-type: application/json'

The request body accepts a portfolios array, an accounts array, a locations array, a policies array, and a treaties array.

{
    "portfolios:" [
        {
            "portfolioId": 0,
            "portfolioName": "Port01",
            "portfolioNumber": "Port_ExpBatch_1_Num",
            "description": "Test_Location_Batch",
            "createDate": "2022-05-24T23:47:58.076Z",
            "stampDate": "",
            "operationType": "INSERT",
            "label": "Port_LocationBatch_01_Label",
            "accounts": [
                    {
                        "accountId": 23,
                        "label": "Acct_LocationBatch_01_Label",
                        "policies": [...],
                        "locations": [...],
                    },
                    {...}
            ]
        }
    ],
    "accounts": [...],
    "locations": [...],
    "policies": [...],
    "treaties": [...]
}

Each array defines one or more operations that create, edit, or delete an exposure. An operation is defined by an optional operationType, a required label, and a required request body. The operationType specifies the action performed (INSERT, UPDATE, or DELETE) and defaults to INSERT. The label uniquely identifies an operation, enabling you to refer to that operation or its output in user-defined workflows. The request body specifies exposure property values.
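
For illustration, a labeled DELETE operation might be nested in the accounts array as follows. This is a minimal sketch: the property names follow the example above, and the specific values are hypothetical.

{
    "accounts": [
        {
            "operationType": "DELETE",
            "label": "Acct_Delete_01_Label",
            "accountId": 23
        }
    ]
}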


Best Practice

For best results, omit the portfolios array and make the accounts array the top level of the exposure hierarchy. Within that array, limit the number of location exposures in the request to 1000 records or less.


Warning

Ensure that the request package is no larger than 5 MB. Request packages that define up to 1000 exposures may be submitted in JSON format (Content-Type: application/json).

On success, the operation returns a 202 Accepted response and adds a BULK_EDIT job to the workflow engine queue. The Location response header specifies the job ID as part of a URL that you can use to track the status of the job, e.g. https://{host}/platform/riskdata/v1/jobs/9034518.
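
A job can be tracked by polling the URL returned in the Location header. The following is a minimal sketch in Python; the Authorization header, the shape of the job response body, and the status values checked are assumptions rather than details confirmed by this guide.

import time
import requests

# URL returned in the Location response header of the Exposure Bulk Edit request
job_url = "https://api-euw1.rms.com/platform/riskdata/v1/jobs/9034518"
headers = {"Authorization": "<api-key>"}  # assumed authentication scheme

# Poll the job until it leaves a non-terminal state (status values are assumed)
while True:
    job = requests.get(job_url, headers=headers, timeout=30).json()
    status = job.get("status")
    print("Job status:", status)
    if status not in ("QUEUED", "PENDING", "RUNNING"):
        break
    time.sleep(10)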

Method 2: Import exposure data from JSON

In this workflow, exposure data is imported from a JSON file uploaded to an import folder using the Import API. This method is similar to other Platform data migration workflows that leverage Amazon S3 storage buckets (e.g. the MRI workflow) in that it accepts a predefined file of exposure data.

This method uses a combination of Import API operations and the AWS SDK to upload exposure data to an import folder (i.e. an AWS storage bucket) on Amazon Simple Storage Service (Amazon S3) and to import those exposures into an EDM on the Intelligent Risk Platform.

The workflow consists of four steps:

  • Write exposure data to a file in JSON format.
  • Create an import folder using the Create Import Folder operation.
  • Upload the exposure data file to the import folder using the AWS SDK.
  • Import exposure data from the import folder into an EDM on the Intelligent Risk Platform.

Large volumes of exposure data may be uploaded to an import folder in JSON format. The EXPOSURE_BULK_EDIT import folder accepts both JSON files and compressed JSON files (GZ).


Note

The UnderwriteIQ dedicated interactive queue is not supported if the File Upload method is used to define exposures or if the number of location exposures specified in the request package exceeds 10,000.

Step 2.1: Write exposure data to JSON file

The first step in this workflow is to write exposure data to a JSON file.

The body defines an object that consists of five arrays: a portfolios array, an accounts array, a locations array, a policies array, and a treaties array. Where parent-child relationships exist between exposures (e.g. between a portfolio and its accounts, or an account and its locations), child exposures may be nested within the parent exposure.

{
    "portfolios:" [... ],
    "accounts": [...],
    "locations": [...],
    "policies": [...],
    "treaties": [...]
}

This is the same structure that is specified in the body of an Exposure Bulk Edit request.
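
As a minimal sketch, the file can be produced in Python as shown below. The payload is illustrative and reuses the structure shown above; gzip compression is optional, since the import folder accepts both JSON and GZ files.

import gzip
import json

# Illustrative payload; the nested arrays follow the Exposure Bulk Edit structure
payload = {
    "accounts": [
        {
            "operationType": "INSERT",
            "label": "Acct_LocationBatch_01_Label",
            "policies": [],
            "locations": []
        }
    ]
}

# Write the exposure data as plain JSON ...
with open("exposures.json", "w") as f:
    json.dump(payload, f)

# ... or as compressed JSON (GZ), which the import folder also accepts
with gzip.open("exposures.json.gz", "wt") as f:
    json.dump(payload, f)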

Step 2.2: Create import folder

The Create Import Folder operation creates an import folder.

An import folder is a temporary storage location and a logical path on AWS S3. It is essentially an AWS storage bucket, i.e. a resource for storing objects on Amazon S3. Several Intelligent Risk Platform processes leverage Amazon Web Services to upload and temporarily store data in storage buckets prior to data migration. When the workflow is complete, the storage bucket and all data uploaded to it are automatically deleted.

An EXPOSURE_BULK_EDIT import folder accepts exposure data (an exposureBulkEditFile) in JSON or GZ (compressed JSON) format.

This operation accepts three body parameters: folderType and, within the properties object, fileExtension and fileTypes.

curl --request POST \
     --url https://api-euw1.rms.com/platform/import/v1/folders \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "folderType": "EXPOSURE_BULK_EDIT",
  "properties": {
    "fileExtension": "json",
    "fileTypes": [
      "exposureBulkEditFile"
    ]
  }
}
'

All three parameters are required.

Parameter      Type     Description
folderType     String   Type of import folder to create, i.e. EXPOSURE_BULK_EDIT.
fileExtension  String   Format of the exposure data to import. An EXPOSURE_BULK_EDIT import folder accepts one of JSON or GZ.
fileTypes      Array    Type of import file. An EXPOSURE_BULK_EDIT import folder accepts only exposureBulkEditFile.

On success, the operation returns a 201 Created HTTP response and a response object that specifies the ID of the EXPOSURE_BULK_EDIT import folder on AWS and base64-encoded temporary security credentials that enable the client to upload a JSON or GZ file to that folder.

{
  "folderType": "EXPOSURE_BULK_EDIT",
  "folderId": "107275",
  "uploadDetails": {
    "exposureBulkEditFile": {
      "fileUri": "platform/import/v1/folders/{{folderId}}/files/{{fileId}}",
      "presignParams": {
        "accessKeyId": "QVNJQTNDTlNBUlFPVVo0NVo0QzI=",
        "secretAccessKey": "aTZjcWc3eW1ZSmpZN2tLaHJ",
        "sessionToken": "RndvR1pYSXZZWGR6RUNJYURCQUNYOENncTN5bmVVRE5",
        "path": "cm1zLXRlbmFudHMtbnBlLWV1LXdlc3QtMS8yMDAxNjY3L2ltcG",
        "region": "ZXUtd2VzdC0x"
      },
      "uploadUrl": "https://rms-tenants-xxx.s3.amazonaws.com/xxx/import/platform/exposure_bulk_edit/xxx/xxx-exposurebulkeditfile.json"
    }
  }
}

The uploadDetails object returns the temporary security credentials that enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.

Property         Description
accessKeyId      A base64-encoded S3 access key ID, a unique identifier for the S3 access key.
secretAccessKey  A base64-encoded S3 secret access key.
sessionToken     A base64-encoded S3 session token.
path             A base64-encoded path to the Amazon S3 bucket.
region           Region of the host.

These temporary security credentials enable the client to sign AWS requests when uploading the exposure data to the import folder on AWS.
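
For example, a minimal sketch in Python that creates the import folder and decodes the credentials might look like the following. The Authorization header is an assumed authentication scheme, and error handling is omitted.

import base64
import requests

# Create an EXPOSURE_BULK_EDIT import folder (see the request above)
resp = requests.post(
    "https://api-euw1.rms.com/platform/import/v1/folders",
    headers={"Authorization": "<api-key>"},  # assumed authentication scheme
    json={
        "folderType": "EXPOSURE_BULK_EDIT",
        "properties": {"fileExtension": "json", "fileTypes": ["exposureBulkEditFile"]},
    },
    timeout=30,
).json()

folder_id = resp["folderId"]
upload_details = resp["uploadDetails"]["exposureBulkEditFile"]

# Decode the base64-encoded temporary credentials before passing them to an S3 client
creds = {
    key: base64.b64decode(value).decode("utf-8")
    for key, value in upload_details["presignParams"].items()
}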

Step 2.3: Upload JSON file to AWS

The Import API does not provide operations for uploading local files to AWS. Rather, you must use the Amazon S3 API or an AWS SDK to upload the exposure data file to the EXPOSURE_BULK_EDIT import folder.

Once an EXPOSURE_BULK_EDIT import folder has been created on AWS, use the AWS SDK to upload the exposure data file to this folder.

In this step, you use the Amazon S3 bucket path and temporary user credentials to upload exposure data to the EXPOSURE_BULK_EDIT import folder. First, you must decode the accessKeyId, secretAccessKey, sessionToken, and path values returned by the Create Import Folder response and pass the decoded values to an S3 client.
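
A minimal upload sketch using boto3 is shown below. The decoded credential values and the file name are placeholders, and the way the decoded path is split into a bucket name and key prefix is an assumption rather than a documented format.

import boto3

# Decoded values from the Create Import Folder response (placeholders)
access_key_id = "<decoded accessKeyId>"
secret_access_key = "<decoded secretAccessKey>"
session_token = "<decoded sessionToken>"
path = "<bucket-name>/<key-prefix>"  # decoded path (assumed format)
region = "eu-west-1"                 # decoded region

bucket, _, key_prefix = path.partition("/")

# Create an S3 client with the temporary credentials
s3 = boto3.client(
    "s3",
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    aws_session_token=session_token,
    region_name=region,
)

# Upload the exposure data file to the EXPOSURE_BULK_EDIT import folder
s3.upload_file("exposures.json", bucket, key_prefix + "/exposures.json")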

Step 2.4: Import exposures

The Create Import Job operation creates an import job that ingests the uploaded exposure data into an EDM on the Intelligent Risk Platform.

This operation supports seven different types of import jobs. The EXPOSURE_BULK_EDIT import type imports exposure data from the specified import folder (folderId) into the specified EDM (resourceUri). All parameters are required.


curl --request POST \
     --url https://api-euw1.rms.com/platform/import/v1/jobs \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --header 'x-rms-resource-group-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
     --data '
{
  "importType": "EXPOSURE_BULK_EDIT",
  "settings": {
    "folderId": "45"
  },
  "resourceUri": "/platform/riskdata/v1/exposures/213"
}
'

The folderId parameter specifies the ID number of the import folder. The resourceUri parameter identifies the EDM into which the exposure data is imported.

On success, the operation returns a 201 Created response and adds an EXPOSURE_BATCH_EDIT job to the workflow engine queue. The Location response header specifies the job ID as part of a URL that you can use to track the status of the job, e.g. https://{host}/platform/import/v1/jobs/9034518.