MRI Import

Overview

Multi-relational import (MRI) is a data migration process that enables Intelligent Risk Platform™ tenants to import large volumes of exposure data into an EDM.

The MRI import workflow depends upon the definition of account, location, and reinsurance exposure data in structured text files that are uploaded to temporary storage (an import folder) on Amazon S3. Once the files are uploaded to the import folder, the Import API supports a process for ingesting this data from the folder into the specified EDM.

While you can use Import API operations to securely connect to storage buckets on Amazon S3, you must use an AWS client to upload flat files of data to those storage buckets. Once you have uploaded the source files, you can use Risk Modeler API operations to import the exposure data in those flat files into an EDM.

This recipe describes each step in the workflow.

Postman Collection

Moody's makes a Postman Collection available for testing and evaluating the MRI Import workflow.

Moody's RMS™ Developer Portal provides collections for testing standard Platform workflows for importing and exporting exposures to and from the Intelligent Risk Platform. The site is freely available to the public. To learn more, see RMS Developers Portal Team.

Step 1: Prepare source files

The MRI import process requires that exposure data is defined in source files with a specific format. These source files are flat files. MRI uses a separate mapping file to define the relationships between the account, location, and reinsurance exposures.

The first step is to create one or more source files that define the exposures to be imported.

A flat file is a two-dimensional database in a plain text format. Each row of text in the file defines a record. The file uses a delimiter character to organize text into discrete columns of structured data. The flat file may use commas, semicolons, or tabs to delimit data values.

MRI Import enables you to import data from four different source files that are defined as flat files:

  • account: A flat file that contains exposure data for one or more accounts and, optionally, policy data associated with each account.
  • location: A flat file that contains location exposure data.
  • reinsurance: A flat file of reinsurance exposure data.
  • mapping: A flat file that defines mappings between exposure data defined in different source files.

The following restrictions apply to the flat files.

  • The file must have a unique name and be saved in the CSV, TXT, or XLSX format. For example, accexp.csv or accexp.txt.
  • The file must contain ACCNTNUM data. The ACCNTNUM attribute is required in both the account source file and the location source file, enabling the MRI service to map locations to accounts.
  • Each row in the flat file contains attribute values separated by a text delimiter. To ensure accurate interpretation of any numbers that use commas, Moody's recommends tab or semicolon delimiters. In Step 7, you will need to specify the delimiter value.
  • The first line of text in the flat file may specify column headers, strings that identify the data attribute represented by a column of data. Column header data is ignored during the import process.
  • If included, the POLICYNUM attribute cannot be blank or null.
  • If included, the POLICYTYPE attribute cannot be blank or null. Valid values are 1 (Earthquake), 2 (Windstorm), 3 (Winterstorm), 4 (Flood), 5 (Fire), 6 (Terrorism), and 7 (Workers Compensation).
  • If included, policy limits, deductibles, and premiums must be specified as positive values. Negative numbers are not allowed.
  • If included, policy coverage limits, deductibles, and premiums must be specified as positive values. Negative numbers are not allowed.
  • If included, the INCEPTDATE value cannot be later than the EXPIREDATE value.

For a comprehensive description of account data requirements for MRI import, see the DLM Reference Guide on the Moody's RMS Support Center.

The following snippet shows an example of a flat file that uses tabs to delimit account attribute values:

ACCNTNUM	ACCNTSTAT	ACCNTNAME	ACCNTSTATE	USERDEF1	USERDEF2	USERDEF3	USERDEF4
AD Andorra		AD Andorra	0					Property
AR Argentina	AR Argentina	0					Property
AT Austria		AT Austria	0					Property
AU Australia	AU Australia	0					Property
BE Belgium		BE Belgium	0					Property
BG Bulgaria		BG Bulgaria	0					Property
BO Bolivia		BO Bolivia	0					Property
...

In Step 7 of this recipe, the MRI_IMPORT job will ingest account data from this file and import that data into the EDM you specify. To ensure that the platform correctly parses the file, you will need to specify the delimiter used to structure text in the file and indicate whether the file uses column headers.

DATA VALIDATION

Intelligent Risk Platform™ validates the data specified in the account source file prior to importing that data into the EDM. The import process performs basic data validations to make sure that imported data is valid and does not create orphan records.

If a required data attribute value is missing or invalid, the platform throws a validation error.

Step 2: Create Import Folder

The Create Import Folder operation creates a storage bucket on Amazon S3.

An import folder is a temporary storage location, a logical path on Amazon S3, to which you upload the source files required for an import. Temporary credentials for the folder can be obtained through the POST /folders request. The import folder is disposed of immediately after the import job is complete.

A single import folder can store all of the source files for an import job. The fileTypes array in the request identifies each source file (account, location, reinsurance, and mapping) that you intend to upload to the folder.

The operation takes two required body parameters: the folderType parameter and a properties object that describes the files to be uploaded.

curl --request POST \
     --url https://{host}/platform/import/v1/folders \
     --header 'accept: application/json' \
     --header 'content-type: application/json'

All parameters are specified in the request body. The folderType and properties object are required.

If the folderType is MRI, the properties object specifies the fileExtension of the data to be uploaded to the import folder and the fileTypes array, which lists the names of the files to be uploaded.

If the fileExtension parameter is TXT or CSV, the fileTypes parameter may define an array of values that identify the accountsFile, locationsFile, reinsuranceFile, and mappingsFile to be uploaded.

{
  "folderType": "mri",
  "properties": {
    "fileExtension": "csv",
    "fileTypes": [
      "accountsFile",
      "locationsFile",
      "mappingsFile",
      "reinsuranceFile"
    ]
  }
}

If successful, the operation returns a 201 Created status code and uploadDetails for each file to be uploaded to this folder: spreadsheet, accountsFile, locationsFile, reinsuranceFile, or mappingFile.

{
  "folderType": "MRI",
  "folderId": "string",
  "uploadDetails": {
    "spreadsheet": {
      "uploadUrl": "string",
      "fileUri": "import/folders/123213/files/1",
      "presignParams": {
        "accessKeyId": "string",
        "secretAccessKey": "string",
        "sessionToken": "string",
        "path": "import/folders/123213",
        "region": "string"
      }
    },
    "accountsFile": {
      "uploadUrl": "string",
      "fileUri": "import/folders/123213/files/1",
      "presignParams": {
        "accessKeyId": "string",
        "secretAccessKey": "string",
        "sessionToken": "string",
        "path": "import/folders/123213",
        "region": "string"
      }
    },
    "locationsFile": {
      "uploadUrl": "string",
      "fileUri": "import/folders/123213/files/1",
      "presignParams": {
        "accessKeyId": "string",
        "secretAccessKey": "string",
        "sessionToken": "string",
        "path": "import/folders/123213",
        "region": "string"
      }
    },
    "reinsuranceFile": {
      "uploadUrl": "string",
      "fileUri": "import/folders/123213/files/1",
      "presignParams": {
        "accessKeyId": "string",
        "secretAccessKey": "string",
        "sessionToken": "string",
        "path": "import/folders/123213",
        "region": "string"
      }
    },
    "mappingFile": {
      "uploadUrl": "string",
      "fileUri": "import/folders/123213/files/1",
      "presignParams": {
        "accessKeyId": "string",
        "secretAccessKey": "string",
        "sessionToken": "string",
        "path": "import/folders/123213",
        "region": "string"
      }
    }
  }
}

The presignParams object returns the temporary security credentials that enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.

  • accessKeyId: A base64-encoded S3 access key ID, a unique identifier for your S3 access key.
  • secretAccessKey: A base64-encoded S3 secret access key. The access key ID and secret access key enable you to sign AWS requests.
  • path: A base64-encoded path to the Amazon S3 bucket. For example, import/folders/123213.
  • sessionToken: A base64-encoded S3 session token.
  • region: The AWS region that hosts the S3 bucket.

In Step 6, we will use AWS SDK APIs to upload the source files to this folder. But first, we need to identify the EDM and portfolio that the import will target (Steps 3 and 4) and the resource group that will be used to run the import job (Step 5).

Step 3: Search EDMs

The Search EDMs operation returns a list of the EDMs available to an Intelligent Risk Platform tenant.

This operation supports response filtering based on the value of a subset of properties. Depending on the property, you may use a combination of comparison operators, list operators, and logical operators.

curl --request GET \
     --url 'https://api-euw1.rms.com/platform/riskdata/v1/exposures?sort=exposureName%20ASC&limit=100&offset=0' \
     --header 'accept: application/json'

The response returns an array of EDM objects.

[
  {
    "exposureName": "string",
    "exposureId": 0,
    "uri": "string",
    "status": "string",
    "databaseName": "string",
    "metrics": {
      "additionalProp": {}
    },
    "ownerName": "string",
    "exposureSetId": 0,
    "serverType": "PLATFORM",
    "serverName": "string",
    "serverId": 0,
    "tagIds": [0]
  }
]

The exposureId identifies the ID number of an EDM. In Step 4, we will use this ID to retrieve a list of the portfolios available on that EDM.

Step 4: Search Portfolios

The Search Portfolios API resource returns information about the portfolios within a specific EDM.

This operation supports response filtering based on the value of a subset of properties. Depending on the property, you may use a combination of comparison operators, list operators, and logical operators.

curl --request GET \
     --url https://api-euw1.rms.com/platform/riskdata/v1/exposures/{exposureId}/portfolios \
     --header 'accept: application/json'

The response returns an array of portfolio objects.

[
  {
    "portfolioId": 0,
    "portfolioName": "string",
    "portfolioNumber": "string",
    "description": "string",
    "createDate": "string",
    "stampDate": "string",
    "uri": "string",
    "geocodeVersion": "string",
    "hazardVersion": "string",
    "ownerName": "string",
    "updatedBy": "string",
    "tagIds": [0]
  }
]

The portfolioId identifies the ID number of a portfolio.

Step 5: Search Resource Groups

The Create Import Folder (Step 2) and Create Import Job (Step 7) operations both require that the client specify a resource group in the header parameter of the request.

A resource group is a mechanism for allocating resource quotas to Intelligent Risk Platform tenants. A tenant is provisioned ("seeded") one resource group per entitlement. This seeded resource group is available to all tenant principals. The entitlement specifies the resource quota licensed to the tenant. Licensed tenants with the RI-RISKMODELER entitlement may create additional resource groups that are tied to specific user groups and may use those groups to allocate resource quotas.

The Search Resource Groups operation returns a list of resource groups filtered by entitlement. If you do not know the resource group ID, you can use this operation to return the applicable resource groups.

The operation's entitlement path parameter accepts one of four values: RI-RISKMODELER, RI-UNDERWRITEIQ, EXPOSUREIQ, and TREATYIQ.

curl --request GET \
     --url https://{HOST}/platform/tenantdata/v1/entitlements/RI-RISKMODELER/resourcegroups \
     --header 'accept: application/json'

The response returns an array of resource group objects including the resourceGroupId for each resource group.

[
  {
    "resourceGroupId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "name": "string",
    "entitlement": "RI-RISKMODELER",
    "createdBy": "string",
    "createdAt": "2024-02-07T01:26:17.883Z",
    "modifiedBy": "string",
    "modifiedAt": "2024-02-07T01:26:17.883Z",
    "default": true,
    "allAccess": true
  }
]

The MRI_IMPORT job is an example of a secondary (non-analysis) job. To learn more, see Resource Quota Management.

Step 6: Upload Resource Files

The Moody's RMS Import API does not provide operations for uploading local files to Amazon S3. Rather, you must use the Amazon S3 API or an AWS SDK to upload the source files to the Amazon S3 bucket you created in Step 2.

In this procedure, you will use the Amazon S3 bucket path and temporary user credentials to upload account data to the MRI folder. First, you must decode the accessKeyId, secretAccessKey, sessionToken, and path values returned in Step 2 and pass the decoded values to an S3 client. The sample code is in Java 8.

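// Decodes a base64-encoded presignParams value; requires java.util.Base64.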
private static String base64Decode(String text) {
    return new String(Base64.getDecoder().decode(text));
}

Pass the decoded accessKeyId, secretAccessKey, and sessionToken to the Amazon getS3Client() method to create an Amazon S3 client.

private static AmazonS3 getS3Client(String accessKey, String secretKey, String sessionToken){
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
            accessKey,
            secretKey,
            sessionToken);

    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
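            // The region is hardcoded here; use the region value returned in presignParams for your tenant.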
            .withRegion(Regions.EU_WEST_1)
            .build();
}

Amazon TransferManager is a high-level utility for managing transfers to Amazon S3 that makes extensive use of Amazon S3 multipart uploads.

Once you have the Amazon S3 client, you can pass the s3Client, bucketName, key, and the filePath to the TransferManager.

private static void upload(AmazonS3 s3Client, String bucketName, String key, String filePath) {
    try {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .build();

        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Upload upload = tm.upload(bucketName, key, new File(filePath));
        System.out.println("Object upload started");

        // Optionally, wait for the upload to finish before continuing.
        upload.waitForCompletion();
        System.out.println("Object upload complete");
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}

The parameters are derived from previous steps:

  • bucketName: Extracted from the initial section of the decoded s3Path. If the s3Path is rms-mi/preview/tenant/50000/import/mri/3929, the bucketName is rms-mi.
  • key: Combines the remaining portion of the s3Path with the fileId and fileName in the pattern s3Path/fileId-fileName. For example, preview/tenant/50000/import/mri/3929/12373-fileName.
  • filePath: The absolute path to the file you want to upload.
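
The following sketch ties the snippets in this step together. It is illustrative only: the encoded strings stand in for the presignParams values returned in Step 2, and the fileId, file names, and local path are hypothetical placeholders.

public class MriUploadExample {

    public static void main(String[] args) {
        // Placeholders for the base64-encoded values returned by POST /folders in Step 2.
        String accessKey    = base64Decode("<encoded accessKeyId>");
        String secretKey    = base64Decode("<encoded secretAccessKey>");
        String sessionToken = base64Decode("<encoded sessionToken>");
        String s3Path       = base64Decode("<encoded path>"); // e.g. rms-mi/preview/tenant/50000/import/mri/3929

        // bucketName is the first segment of the decoded path; the remainder plus
        // "/{fileId}-{fileName}" forms the object key, as described above.
        String bucketName = s3Path.substring(0, s3Path.indexOf('/'));
        String remainder  = s3Path.substring(s3Path.indexOf('/') + 1);
        String key        = remainder + "/12373-accexp.csv"; // hypothetical fileId and file name

        AmazonS3 s3Client = getS3Client(accessKey, secretKey, sessionToken);
        upload(s3Client, bucketName, key, "/data/mri/accexp.csv"); // local path to the accounts source file
    }

    // base64Decode, getS3Client, and upload are the methods shown earlier in this step.
}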

If successful, AWS uploads the files to the MRI folder on Amazon S3. From this folder, we can use the Import API to import that data into the Intelligent Risk Platform (Step 7).

Step 7: Import MRI Data

The Create Import Job operation imports data from the specified import folder into the specified EDM (the resourceUri) on the Intelligent Risk Platform.

The required x-rms-resource-group-id parameter must be specified in the header of the request. This parameter identifies how the resource quota is allocated for this job.


curl --request POST \
     --url https://api-euw1.rms.com/platform/import/v1/jobs \
     --header 'accept: */*' \
     --header 'content-type: application/json' \
     --header 'x-rms-resource-group-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

All other parameters are specified in the body of the request.

For an MRI import job, this operation requires that three parameters be specified in the body of the request: the importType (MRI), the resourceUri (the URI of the EDM), and a settings object that specifies MRI-specific import settings.

{
  "importType": "mri",
  "settings": {
    "import files": {
      "accountsFileId": "accountfile",
      "locationsFileId": "locationsfile",
      "reinsuranceFileId": "reinsurancefile",
      "mappingFileId": "mappingfile"
    },
    "sheets": {
      "accountsSheetIndex": 67,
      "locationsSheetIndex": 89,
      "reinsuranceSheetIndex": 98
    },
    "folderId": "888888",
    "currency": "USD",
    "delimiter": "SEMICOLON",
    "skipLines": 0,
    "appendLocations": true,
    "geoHaz": false
  }
}

The required importType parameter specifies the import job type. One of CEDE, EDM, OED, or MRI.

Because we are defining an MRI job, you must specify the accountsFileId and locationsFileId body parameters, as well as the folderId and delimiter parameters. We have also uploaded a reinsurance source file and a mapping file and will identify those files.

  • importFiles (object): Names of the accountsFileId, locationsFileId, reinsuranceFileId, and mappingFileId source files uploaded to the import folder.
  • sheets (object): Indexes of the sheets to import (accountsSheetIndex, locationsSheetIndex, reinsuranceSheetIndex) when the source is a spreadsheet.
  • folderId (string): ID of the import folder.
  • currency (string): A default value for the currency if the currency column for an imported exposure is undefined or blank. The value specified does not overwrite currency values in the source files.
  • delimiter (string): Delimiter used to structure data in the account, location, and reinsurance source files. One of TAB, COMMA, or SEMICOLON. Required.
  • skipLines (integer): Number of lines to skip at the beginning of the source files, for example when the flat files contain column headers.
  • appendLocations (Boolean): If true, imported location data is added to existing locations. If false, new locations replace existing locations.
  • geoHaz (Boolean): If true, locations are geocoded and hazarded on import. For more information, see Create Geohaz Job.

The importFiles object identifies the names of the source files to import. The accountsFileId, locationsFileId, and reinsuranceFileId identify the account source file, location source file, and reinsurance source file, respectively.

The MRI process only works if both an account source file and a location source file are uploaded to the import folder. Consequently, the accountsFileId and locationsFileId must always be defined in MRI jobs. The mappingFileId parameter specifies the ID number of an import mapping file. The mapping file defines the relationships between account, location, and reinsurance data, and maps the columns in the uploaded flat files to the corresponding columns in the EDM.

If successful, the operation returns a 201 Created HTTP response and adds an MRI_IMPORT job to the job queue. The Location response header returns the URL of the job, which enables you to poll the status of the job.
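
The same request can be submitted from code. The following Java sketch posts an abbreviated version of the example body shown above using HttpURLConnection from the standard library; the host, resource group ID, and folder ID are the placeholder values used earlier, and the authorization header required by your tenant is omitted here, just as it is in the curl examples.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CreateMriImportJobExample {

    public static void main(String[] args) throws Exception {
        // Abbreviated request body based on the example above; add the resourceUri of
        // the target EDM and the remaining settings required for your import.
        String body = "{ \"importType\": \"mri\", \"settings\": {"
                + " \"importFiles\": { \"accountsFileId\": \"accountfile\", \"locationsFileId\": \"locationsfile\" },"
                + " \"folderId\": \"888888\", \"currency\": \"USD\", \"delimiter\": \"SEMICOLON\","
                + " \"skipLines\": 0, \"appendLocations\": true, \"geoHaz\": false } }";

        URL url = new URL("https://api-euw1.rms.com/platform/import/v1/jobs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("content-type", "application/json");
        conn.setRequestProperty("x-rms-resource-group-id", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
        // Add the Authorization header required by your tenant here.
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // A 201 response indicates that the MRI_IMPORT job was queued; the Location
        // header returns the URL used to poll the job in Step 8.
        System.out.println("Status: " + conn.getResponseCode());
        System.out.println("Job URL: " + conn.getHeaderField("Location"));
    }
}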

Step 8: Poll Import Job Status

The Get Import Job operation returns the status of the specified import job.

The job ID is specified in the endpoint path.

curl --request GET \
     --url https://api-euw1.rms.com/platform/import/v1/jobs/89999 \
     --header 'accept: application/json'

A successful response returns the job object, which provides detailed information about this workflow job, including the submittedAt, startedAt, type, and status.

When the job status is FINISHED, the exposure data will have been imported into the EDM.

{
  "jobId": "string",
  "priority": "verylow",
  "userName": "string",
  "status": "QUEUED",
  "submittedAt": "string",
  "startedAt": "string",
  "endedAt": "string",
  "name": "string",
  "type": "string",
  "progress": 0,
  "resourceGroupId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "entitlement": "string",
  "details": {
    "resources": [
      {
        "uri": "string"
      }
    ],
    "summary": "string"
  },
  "tasks": [
    {
      "taskId": "string",
      "guid": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "jobId": "string",
      "status": "string",
      "submittedAt": "string",
      "createdAt": "string",
      "name": "string",
      "percentComplete": 0,
      "priorTaskGuids": ["3fa85f64-5717-4562-b3fc-2c963f66afa6"],
      "output": {
        "summary": "string",
        "errors": [
          {
            "message": "string"
          }
        ],
        "log": {}
      }
    }
  ]
}
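
A simple polling loop can wrap this request. The sketch below, again limited to the Java standard library, polls the job every 30 seconds until it finishes; the job ID and host are the example values used above, the authorization header is omitted as in the curl examples, the FAILED check is an assumption about the failure status, and the naive string check stands in for proper JSON parsing.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PollImportJobExample {

    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api-euw1.rms.com/platform/import/v1/jobs/89999");

        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("accept", "application/json");
            // Add the Authorization header required by your tenant here.

            // Read the response body into a string.
            StringBuilder response = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    response.append(line);
                }
            }

            // Crude status check; a real client would parse the JSON body.
            String body = response.toString();
            if (body.contains("\"status\":\"FINISHED\"") || body.contains("\"status\": \"FINISHED\"")) {
                System.out.println("Import complete; the exposure data is now in the EDM.");
                break;
            }
            if (body.contains("FAILED")) {
                System.out.println("Import failed; inspect the tasks[].output.errors array.");
                break;
            }

            Thread.sleep(30_000); // wait 30 seconds between polls
        }
    }
}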