CEDE Import

Import exposure data from the AIR CEDE schema

Overview

The Import API is a collection of API resources that define a standard workflow for importing large volumes of exposure or analysis result data into the Intelligent Risk Platform™. The same API resources support CEDE, EDM, MRI, OED, RDM, and RDM Data Bridge workflows.

CEDE™ (Catastrophe Exposure Data Exchange) is an open-source exposure data schema designed to facilitate the exchange of exposure data and the development of models for analyzing risk. The CEDE data schema was developed by AIR Worldwide and is used by the Touchstone® risk management platform.

This page documents a basic workflow for importing CEDE exposure data into the Intelligent Risk Platform. The workflow uses a combination of Import API resources and AWS SDK services to upload a database of CEDE exposure data to an Amazon S3 bucket on Amazon Simple Storage Service (Amazon S3), transform the CEDE data to EDM data based on a standard mapping, and import the EDM data into the Intelligent Risk Platform.

๐ŸŠ

Postman Collection

Moody's RMS makes a Postman Collection available for testing and evaluating the CEDE Import workflow.

While this page describes the workflow step by step and documents the endpoints that constitute it, the Platform: CEDE collection enables Intelligent Risk Platform tenants to test the workflow directly.

The RMS Developer Portal provides collections for testing standard Platform workflows for importing and exporting exposures to and from the Intelligent Risk Platform. The site is freely available to the public. To learn more, see RMS Developers Portal Team.

Understand CEDE-to-EDM Mapping

The Import API leverages a data mapping engine that uses CEDE-EDM data mappings to convert CEDE schema objects into EDM schema objects. The CEDE-EDM mapping engine defines default mappings between CEDE schema objects and EDM schema objects.

The CEDE-EDM mapping engine is based on the CEDE 10.0 schema definition. The Import API supports importing exposure data defined using versions 8.0, 9.0, and 10.0 of the CEDE schema.

CEDE-EDM mapping defines default mappings for exposure data, financial data, and secondary modifiers.
For a detailed discussion of standard CEDE-EDM mappings, see Considerations for Mapping in Moody's RMS Support Center.

📷

CEDE Validation

The CEDE-EDM mapping engine does not validate the data stored in the CEDE database. The mapping engine may fail while mapping the data to EDM if the data includes invalid values or is missing records.

📷

Best Practice

Moody's RMS has tested the mapping engine with a variety of portfolios, but it is important that you test the engine on your own portfolios and evaluate the mapped data before using it in production workflows.

Step 1: Create database artifact

The CEDE import workflow requires that you upload CEDE data to AWS as a database artifact. A database artifact is a copy of a database that has been saved to a special format for backup, storage, or data transfer.

The first step is to create a database artifact that contains the CEDE exposure data.

The Import API supports importing exposure data from database artifacts in the BAK or MDF file formats.

BAK: A BAK file is a database artifact with the .bak extension that contains a complete backup of a Microsoft SQL Server database. The BAK file is a compressed file that includes one or more data (MDF) files, transaction log (LDF) files, and secondary (NDF) files.
MDF: An MDF file is a database artifact that contains the database's data files only. It is a SQL Server database's Master Database File and has the .mdf extension. The MDF file does not include the database's transaction log (LDF) files or secondary (NDF) files.

For step-by-step instructions on creating BAK or MDF database artifacts, see the relevant Microsoft SQL Server documentation.
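
As an illustration, here is a minimal Java sketch that creates a BAK artifact by issuing a T-SQL BACKUP DATABASE command over JDBC. It assumes the Microsoft JDBC Driver for SQL Server is on the classpath; the connection string, credentials, database name, and output path are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateBakArtifact {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a local SQL Server instance.
        String url = "jdbc:sqlserver://localhost:1433;encrypt=false";

        try (Connection conn = DriverManager.getConnection(url, "sa", "XXXXXXXXXX");
             Statement stmt = conn.createStatement()) {
            // Back up the (hypothetical) CedeExposures database to a BAK file.
            stmt.execute("BACKUP DATABASE CedeExposures "
                    + "TO DISK = 'C:\\backups\\exposureFile.bak' WITH INIT");
        }
    }
}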

Step 2: Create CEDE folder

The Create Upload Folder operation creates a temporary container (called a folder) on Amazon S3 for importing the data. This operation supports the creation of four different types of folders: CEDE, EDM, MRI, and OED.

In this case, we will create a CEDE folder that will serve as a temporary container for the CEDE data we will upload to Amazon S3. In subsequent steps, we will use an Amazon S3 SDK to upload a database artifact to this folder and an Import API operation to import data from this folder into an EDM on the Intelligent Risk Platform.

This CEDE folder is a temporary container in an Amazon S3 bucket. The Import API streamlines the creation of this temporary container and returns the authentication credentials that will enable you to upload a database artifact to that Amazon S3 bucket in Step 3.

Request

curl --request POST \
--url https://{ host }/platform/import/v1/folders \
--header 'Authorization: XXXXXXXXXX' \
--header 'accept: application/json' \
--header 'content-type: application/json' 

All request parameters are specified in the request body. This operation supports four folderType options: CEDE, EDM, MRI, and OED. Depending on the folderType specified, the operation supports different fileExtension and fileTypes parameters.

In this example, we are uploading exposure data stored as a BAK database artifact to a CEDE folder.

{
  "folderType": "cede",
  "properties": {
    "fileExtension": "bak",
    "fileTypes": ["exposureFile"]
  }
}

The properties object identifies the name of the file to be uploaded to the CEDE folder. Although the fileTypes parameter is an array, this API operation supports a single file. In this case, we will upload a single database artifact named exposureFile.bak.
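
If you are scripting this step rather than using curl, a minimal Java sketch of the same request might look like the following. The host and authorization token are placeholders, and error handling is omitted for brevity.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class CreateCedeFolder {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api-euw1.rms.com/platform/import/v1/folders");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "XXXXXXXXXX"); // placeholder token
        conn.setRequestProperty("accept", "application/json");
        conn.setRequestProperty("content-type", "application/json");
        conn.setDoOutput(true);

        // Request body from the example above.
        String body = "{\"folderType\":\"cede\","
                + "\"properties\":{\"fileExtension\":\"bak\","
                + "\"fileTypes\":[\"exposureFile\"]}}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Expect 201 Created; the body carries the upload credentials.
        System.out.println("HTTP " + conn.getResponseCode());
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            System.out.println(s.useDelimiter("\\A").next());
        }
    }
}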

Response

If successful, the response returns a 201 Created HTTP Response Code and base64 encoded temporary security credentials from the AWS Security Token Service.

The response identifies the CEDE folder's location on AWS and returns security credentials that will enable you to access the CEDE folder. In Step 3, you will use this information to upload the BAK to the CEDE folder using the AWS API.

{
  "folderType": "CEDE",
  "uploadDetails": {
    "exposureFile": {
      "uploadUrl": "string",
      "fileUri": "import/folders/123213/files/1",
      "presignParams": {
        "accessKeyId": "XXXXXXXXXX",
        "secretAccessKey": "XXXXXXXXXX",
        "sessionToken": "string",
        "path": "import/folders/123213",
        "region": "XXXXXXXXXX"
      }
    }
  }
}

The presignParams object returns the temporary security credentials that enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.

accessKeyId: A base64-encoded S3 access key ID, a unique identifier for your S3 access key.
secretAccessKey: A base64-encoded S3 secret access key. The access key ID and secret access key enable you to sign AWS requests.
path: A base64-encoded path to the Amazon S3 bucket. For example, import/folders/123213.
sessionToken: A base64-encoded S3 session token.
region: A base64-encoded AWS region, which identifies the region that hosts the Amazon S3 bucket.

Now that we have a CEDE folder on AWS, we can use Amazon SDK APIs to upload our database artifact to this folder.

Step 3: Upload database artifact

The Moody's RMS Import API does not provide operations for uploading local files to AWS. Rather, you must use the Amazon S3 API or an Amazon SDK to upload the database artifact to the Amazon S3 bucket you created in Step 2.

In this procedure, you will use the Amazon S3 bucket path and temporary user credentials to upload account data to the CEDE folder. First, you must decode the accessKeyId, secretAccessKey, sessionToken, and path values returned in Step 2 and pass the decoded values to an S3 client. The sample code is in Java 8.

import java.util.Base64;

// Decode a base64-encoded credential value returned by the Import API.
private static String base64Decode(String text) {
    return new String(Base64.getDecoder().decode(text));
}

Pass the decoded accessKeyId, secretAccessKey, and sessionToken to the getS3Client() helper method to create an Amazon S3 client.

private static AmazonS3 getS3Client(String accessKey, String secretKey, String sessionToken) {
    // Wrap the temporary credentials returned by the Import API.
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
            accessKey,
            secretKey,
            sessionToken);

    // Use the decoded region value from Step 2 here; EU_WEST_1 is an example.
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withRegion(Regions.EU_WEST_1)
            .build();
}

Amazon TransferManager is a high-level utility for managing transfers to Amazon S3 that makes extensive use of Amazon S3 multipart uploads.

Once you have the Amazon S3 client, you can pass the s3Client, bucketName, key, and filePath to the TransferManager.

private static void upload(AmazonS3 s3Client, String bucketName, String key, String filePath) {
    try {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .build();
 
        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Upload upload = tm.upload(bucketName, key, new File(filePath));
        System.out.println("Object upload started");
 
        // Optionally, wait for the upload to finish before continuing.
        upload.waitForCompletion();
        System.out.println("Object upload complete");
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}

The parameters are derived from previous steps:

bucketName: The initial section of the decoded path. For example, if the path is rms-mi/preview/tenant/50000/import/mri/3929, the bucketName is rms-mi.
key: Combines the remaining portion of the path with the fileId and fileName in the pattern path/fileId-fileName. For example, preview/tenant/50000/import/mri/3929/12373-fileName.
filePath: The absolute path to the file you want to upload.
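
Putting these pieces together, a minimal driver might look like the following. The encoded values stand in for the presignParams fields returned in Step 2; the fileId of 1 is taken from the example fileUri, and the file name and local path are placeholders.

public static void main(String[] args) {
    // Base64-encoded values copied from the Step 2 response (placeholders here).
    String accessKey = base64Decode("XXXXXXXXXX");
    String secretKey = base64Decode("XXXXXXXXXX");
    String sessionToken = base64Decode("XXXXXXXXXX");
    String path = base64Decode("XXXXXXXXXX"); // e.g. rms-mi/preview/tenant/50000/import/mri/3929

    // Split the decoded path into the bucket name and the key prefix.
    String bucketName = path.substring(0, path.indexOf('/'));
    String keyPrefix = path.substring(path.indexOf('/') + 1);

    AmazonS3 s3Client = getS3Client(accessKey, secretKey, sessionToken);
    upload(s3Client, bucketName, keyPrefix + "/1-exposureFile.bak",
            "C:\\backups\\exposureFile.bak");
}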

If successful, the Amazon S3 API uploads the file to the CEDE folder on AWS. From this folder, we can use the Import API to import that data into the Intelligent Risk Platform. But before we can do that, we need to identify an appropriate data server and create an exposure set on that data server.

Step 4: Get data server ID

The Search Data Servers operation enables you to retrieve information about data servers on your Intelligent Risk Platform. This operation supports query string parameters that enable you to search for data servers by ID (databaseId) or type (databaseType).

In this example, we are looking for data servers that are PLATFORM data servers. The request appends a filter to the endpoint that selects data servers of the PLATFORM server type: filter=servertype=PLATFORM.

curl --request GET \
     --url 'https://api-euw1.rms.com/platform/riskdata/v1/dataservers?filter=servertype%3DPLATFORM' \
     --header 'Authorization: XXXXXXXXXX' \
     --header 'accept: application/json'

The response returns an array of server objects that match the specified criteria.

[
  {
    "serverName": "string",
    "serverId": 0,
    "serverType": "PLATFORM",
    "totalDiskSpaceInMb": "string",
    "availableDiskSpaceInMb": "string",
    "usedDiskSpaceInMb": "string"
  }
]
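
As a convenience, a short sketch that extracts the first server ID from this response using the Jackson library (an assumption; any JSON parser works):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Parse the Search Data Servers response and return the first serverId.
private static int firstServerId(String responseBody) throws Exception {
    JsonNode servers = new ObjectMapper().readTree(responseBody);
    return servers.get(0).get("serverId").asInt();
}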

Now that we have the ID number of a data server, we can create an exposure set on that data server.

Step 5: Create exposure set

The Create Exposure Set operation enables you to create an exposure set. An exposure set is a collection of exposure data and related analysis results data that is managed by means of an access control list (ACL).

By assigning the imported CEDE data to an exposure set, you can control access to this data based on group membership. For a detailed discussion of Platform access rights and privileges, see Access Controls.

curl --request POST \
     --url https://{{host}}/platform/riskdata/v1/exposuresets \
     --header 'Authorization: XXXXXXXXXX' \
     --header 'accept: application/json' \
     --header 'content-type: application/json'

All parameters are specified in the request body. The required exposureSetName parameter specifies the name of the exposure set.

The request body also accepts a groups array of group IDs (groupId), which may be used to share the imported database and its exposure set with other groups. If share is true, principals that are members of the specified groups can access data in the exposure set.

{
  "exposureSetName": "cede_securable",
  "share": true,
  "groups": [
    {
      "groupId": "35"
    },
    {
      "groupId": "42"
    },
    {
      "groupId": "49"
    }
  ]
}

If successful, the response returns a 201 Created HTTP response code and initiates a job to create the exposure set. The response returns a URI in the Location response header that enables you to poll the status of this job.

Step 6: Import data

Once you have uploaded the database artifact to the CEDE folder on AWS, you can initiate a job to import the uploaded data into the EDM.

The Import Job operation enables you to initiate an import job.

The request accepts a required x-rms-resource-group-id header that identifies the ID number of the resource group to which this job is assigned.

curl --request POST \
     --url https://{{host}}/platform/import/v1/jobs \
     --header 'Authorization: XXXXXXXXXX' \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --header 'x-rms-resource-group-id: {{resource ID}}'

All parameters are specified in the body of the request. The request body defines the import job, specifying the import type, the URI of the exposure set, and import job settings.

{
  "importType": "CEDE",
  "resourceUri": "/platform/riskdata/v1/exposuresets/{{exposureSetRiId}}",
  "settings": {
    "folderId": "{{folderId}}",
    "cedeSchemaVersion": "10.0",
    "exposureName": "SA_RDM_9_12",
    "serverId": 1
  }
}

If successful, the response returns a 201 Created HTTP response code. This response indicates that the API has created an IMPORT job and added that job to the job queue. The response also returns a URI in the Location response header that enables you to poll the status of this job. In Step 7, you will use the job ID from this URI to poll the status of the job.

Step 7: Poll job status

The Get Job Status operation enables you to track the status of an import job.

The request takes a single parameter, the job ID, which is specified as a path parameter.

curl --request GET \
     --url https://api-euw1.rms.com/platform/import/v1/jobs/778 \
     --header 'Authorization: XXXXXXXXXX' \
     --header 'accept: application/json'
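
A minimal polling sketch follows. It assumes (this is not confirmed above) that the response body reports a terminal job status string such as FINISHED or FAILED; inspect an actual response to confirm the field names before relying on them.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class PollJobStatus {
    public static void main(String[] args) throws Exception {
        // Placeholder job URI from the Location header returned in Step 6.
        URL url = new URL("https://api-euw1.rms.com/platform/import/v1/jobs/778");

        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Authorization", "XXXXXXXXXX"); // placeholder
            conn.setRequestProperty("accept", "application/json");

            String body;
            try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
                body = s.useDelimiter("\\A").next();
            }
            System.out.println(body);

            // Assumed terminal states; confirm against a real response.
            if (body.contains("FINISHED") || body.contains("FAILED")) {
                break;
            }
            Thread.sleep(30_000); // wait 30 seconds between polls
        }
    }
}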