OED Import
Import exposure data with OED
Overview
The Import API defines a standard process for importing exposure data into the Intelligent Risk Platform using several different file formats.
The OED (Open Exposure Data) schema is an exposure database format defined by Oasis LMF that serves as an industry-standard schema for Oasis LMF-based models.
Postman Collection
Moody's makes a Postman Collection available for testing and evaluating the OED Import workflow.
While this page describes the workflow step by step and documents the endpoints that constitute the process, the Platform: OED collection enables Intelligent Risk Platform tenants to test this workflow.
Moody's RMS Developer Portal provides collections for testing standard Platform workflows, including importing exposures to and exporting exposures from the Intelligent Risk Platform. The site is freely available to the public. To learn more, see RMS Developers Portal Team.
Understand OED-to-EDM Mapping
The Platform API uses a mapping engine based on the OED 1.1.5 schema definition to map OED data to EDM data. During import, the Import API maps the exposure data, financial data, and secondary modifiers defined in the OED flat files to the corresponding EDM tables. The mapping engine supports OED version 1.1.5 data only.
- The mapping engine does not validate the data stored in OED import files.
- The mapping engine may fail while mapping the data to EDM if the data includes invalid values or missing records.
- The mapping engine defines default mappings for construction codes, occupancy codes, and secondary modifiers that you cannot change.
Recommendation
Moody's has tested the mapping engine with a variety of portfolios, but it is important that you test the engine on your own portfolios and evaluate the mapped data before using the tool in production workflows.
Step 1: Prepare Exposure Data
The first step is to create one or more flat files of exposure data. These flat files define the account, location, reinsurance, and reinsurance scope data that you want to upload, import, and transform into EDM data.
Imported exposure data, financial data, and secondary modifiers may be defined in four different flat files (in CSV or TXT file format):
Exposure | File Type | EDM Tables |
---|---|---|
account | accountsFile | Portinfo, Accgrp, Policy, Polcvg, Hdsteppolicy |
location | locationsFile | Property, Address, Loccvg, xxdet |
reinsurance | reinsuranceInfoFile | Reinsinf |
reinsurance scope | reinsuranceScopeFile | Reinsinf |
Both CSV and TXT files are supported. Depending on the file type, you may create accountsFile.csv or accountsFile.txt.
Each row in a flat file contains exposure attribute values separated by a text delimiter: comma, semicolon, or tab. To ensure accurate interpretation of numbers that contain commas, Moody's recommends tab delimiters. In Step 4, you will specify the delimiter used to structure the data within the uploaded files.
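For example, a comma-delimited accountsFile might begin like this. This is a minimal sketch showing only a handful of OED account fields; the values are illustrative:

PortNumber,AccNumber,PolNumber,PolPerilsCovered,AccCurrency
1,A10001,P10001,WTC,USD
1,A10002,P10002,WTC,USD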
Flat files have the following constraints:
- Maximum file size: 300MB
- Maximum number of accounts: 1.5 million records
- Maximum number of locations: 1.5 million records
Once you have the flat files of exposure data, you can create a folder on AWS to which this data can be uploaded.
Step 2: Create OED Upload Folder
The Create Upload Folder operation creates a temporary storage bucket (called a folder) on AWS for importing the data.
curl --request POST \
--url https://{{host}}/platform/import/v1/folders \
--header 'Authorization: XXXXXXXXXX' \
--header 'accept: application/json' \
--header 'content-type: application/json'
All parameters are specified in the request body. Depending on the folderType specified, the operation supports different fileExtension and fileTypes parameters.
In this example, we are uploading OED data stored in four CSV flat files to an OED folder. This operation will create the OED folder.
{
"folderType": "OED",
"properties": {
"fileExtension": "CSV",
"fileTypes": [
"accountsFile",
"locationsFile",
"reinsuranceInfoFile",
"reinsuranceScopeFile"
]
}
}
An OED folder is an AWS storage bucket that is defined to accept data in the OED format.
If successful, the response returns a 201 Created HTTP response code and a folderId in the Location response header. The folderId identifies the folder's location on AWS; you will use the credentials returned by this operation to upload the CSV files to the OED folder in Step 3, and the folderId itself to import the uploaded data in Step 4.
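For example, the response headers might look like this (the folderId value 3929 is illustrative):

HTTP/1.1 201 Created
Location: https://{{host}}/platform/import/v1/folders/3929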
Step 3: Upload Files
The Import API does not provide operations for uploading local files to AWS. Rather, you must use the Amazon S3 API or an Amazon SDK to upload the flat files to the Amazon S3 bucket you created in Step 2.
In this procedure, you will use the Amazon S3 bucket path and temporary user credentials to upload account data to the OED folder. First, you must decode the accessKeyId, secretAccessKey, sessionToken, and s3Path values returned in Step 2 and pass the decoded values to an S3 client. The sample code is in Java 8.
// Decodes a Base64-encoded value returned by the Create Upload Folder operation.
private static String base64Decode(String text) {
    return new String(Base64.getDecoder().decode(text));
}
Pass the decoded accessKeyId, secretAccessKey, and sessionToken to the getS3Client() method to create an Amazon S3 client.
private static AmazonS3 getS3Client(String accessKey, String secretKey, String sessionToken){
BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
accessKey,
secretKey,
sessionToken);
return AmazonS3ClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
.withRegion(Regions.EU_WEST_1)
.build();
}
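For example, assuming the Base64-encoded values from the Step 2 response are held in local variables (the variable names here are illustrative):

AmazonS3 s3Client = getS3Client(
    base64Decode(encodedAccessKeyId),
    base64Decode(encodedSecretAccessKey),
    base64Decode(encodedSessionToken));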
Amazon TransferManager is a high-level utility for managing transfers to Amazon S3 that makes extensive use of Amazon S3 multipart uploads.
Once you have the Amazon S3 client, you can pass the s3Client, bucketName, key, and filePath to the TransferManager.
private static void upload(AmazonS3 s3Client, String bucketName, String key, String filePath) {
try {
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();
// TransferManager processes all transfers asynchronously,
// so this call returns immediately.
Upload upload = tm.upload(bucketName, key, new File(filePath));
System.out.println("Object upload started");
// Optionally, wait for the upload to finish before continuing.
upload.waitForCompletion();
System.out.println("Object upload complete");
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
The parameters are derived from previous steps:
Parameter | Description |
---|---|
bucketName | The bucketName can be extracted from the initial section of the decoded s3Path. If the s3Path is rms-mi/preview/tenant/50000/import/mri/3929, the bucketName is rms-mi. |
key | Combines the remaining portion of the s3Path with the fileId and fileName in the pattern s3Path/fileId-fileName. For example, preview/tenant/50000/import/mri/3929/12373-fileName. |
filePath | The absolute path to the file you want to upload. |
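Putting this together, here is a minimal sketch of deriving the bucketName and key from the decoded s3Path and uploading a single file. The fileId and fileName values come from the Step 2 response, and the local file path is illustrative:

// Decoded s3Path, e.g. "rms-mi/preview/tenant/50000/import/mri/3929"
String s3Path = base64Decode(encodedS3Path);
int firstSlash = s3Path.indexOf('/');
String bucketName = s3Path.substring(0, firstSlash);   // "rms-mi"
String prefix = s3Path.substring(firstSlash + 1);      // "preview/tenant/50000/import/mri/3929"
// fileId and fileName are returned by the Create Upload Folder operation (Step 2).
String key = prefix + "/" + fileId + "-" + fileName;   // e.g. ".../3929/12373-accountsFile.csv"
upload(s3Client, bucketName, key, "/local/path/accountsFile.csv");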
If successful, the Amazon API uploads the files to the OED folder on AWS. From this folder, you can use the Import API to import the data into the Intelligent Risk Platform. Before you can do that, however, you must identify an appropriate data server and create an exposure set on that data server.
Step 4: Import Data
The Import Job API resource defines a job for importing data previously uploaded to a folder on AWS into an EDM.
The request accepts a required x-rms-resource-group-id header that identifies the ID number of the resource group to which this job is assigned.
curl --request POST \
--url https://{{host}}/platform/import/v1/jobs \
--header 'Authorization: XXXXXXXXXX' \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--header 'x-rms-resource-group-id: {{resource ID}}'
All parameters are specified in the body of the request. The request body defines the import job specifying the import type, the URI of the exposure set, and import job settings.
{
"importType": "OED",
"resourceUri": "/platform/riskdata/v1/exposuresets/{{exposureSetRiId}}",
"settings": {
"folderId": "{{folderId}}",
"exposureName": "SA_OED_EIQ1",
"geoHaz": false,
"currency": "USD",
"delimiter": "COMMA"
}
}
The importType, resourceUri, and settings parameters are required.
The settings object specifies information about the imported data.
Attribute | Type | Description |
---|---|---|
folderId | String | ID of the folder on AWS. Returned in the Location response header in Step 2. |
exposureName | String | Unique name of the import workflow. |
geoHaz | Boolean | If true, locations are geocoded and hazarded on import. |
currency | String | ISO currency code, e.g. EUR, GBP, USD. |
delimiter | String | One of COMMA or TAB. |
portfolio | Object | Portfolio into which the data is imported, identified by portfolioName, portfolioNumber, and description. |
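For example, a settings object that also targets a specific portfolio might look like this (the portfolio values are illustrative):

"settings": {
  "folderId": "{{folderId}}",
  "exposureName": "SA_OED_EIQ1",
  "geoHaz": false,
  "currency": "USD",
  "delimiter": "COMMA",
  "portfolio": {
    "portfolioName": "OED_PORTFOLIO_1",
    "portfolioNumber": "1",
    "description": "Imported from OED flat files"
  }
}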
If successful, the response returns a 201 Created HTTP response code, indicating that the API has created an IMPORT job and added that job to the job queue. The response also returns a URI in the Location response header that enables you to poll the status of this job. In Step 5, you will use the jobId from this URI to poll the status of the job.
Step 5: Poll Job Status
The Get Job Status operation enables you to track the status of an import job.
The request takes a single path parameter, the jobId of the import job.
curl --request GET \
--url https://{{host}}/platform/import/v1/jobs/778 \
--header 'Authorization: XXXXXXXXXX' \
--header 'accept: application/json'
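Because the import job runs asynchronously, you will typically poll this endpoint until the job reaches a terminal state. The following Java 8 sketch fetches the raw job-status response body; extracting the status field and interpreting its values are left as assumptions, since they depend on your JSON library and the API's status vocabulary:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.stream.Collectors;

private static String getJobStatusBody(String host, long jobId, String authToken) throws IOException {
    URL url = new URL("https://" + host + "/platform/import/v1/jobs/" + jobId);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Authorization", authToken);
    conn.setRequestProperty("accept", "application/json");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
        // Returns the raw JSON body; parse the job status with your preferred JSON parser.
        return in.lines().collect(Collectors.joining());
    }
}

You can call this method in a loop with a short delay (for example, Thread.sleep(5000)) until the returned status indicates that the job has finished or failed.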