Upload Account Data File

Upload account data for MRI import

Overview

Now that you have created a storage bucket on Amazon Simple Storage Service (Amazon S3), you can upload exposure data to that storage bucket using a combination of Risk Modeler API operations and the AWS SDK.

In this procedure, you will define an account source file to store data for one or more accounts in a structured format. You will then use the Risk Modeler API to acquire the URL and security credentials for a storage bucket on Amazon S3. You can then upload the account source file to the storage bucket using AWS SDK services.

Step 2.1: Define account source file

In this step, you will create an account source file, a flat file that contains attribute data for one or more accounts and, optionally, policy data associated with each account.

A flat file is a two-dimensional database in a plain text format. Each row of text in the file defines a record. The file uses a delimiter character to organize text into discrete columns of structured data. The flat file may use commas, semicolons, or tabs to delimit data values.

  • The file must have a unique name and be saved in either CSV or TXT format, for example accexp.csv or accexp.txt.
  • The file must contain ACCNTNUM data. The ACCNTNUM attribute is required in both the account source file and the location source file, enabling the MRI service to map locations to accounts.
  • Each column in the flat file contains account attribute values separated by a text delimiter. To ensure accurate interpretation of any numbers that use commas, RMS recommends tab or semicolon delimiters. In Step 6, you will need to specify a delimiter value.
  • The first line of text in the flat file may specify column headers, strings that identify the data attribute represented by each column of data. Column header data is ignored during the import process.
  • If included, the POLICYNUM attribute cannot be blank or null.
  • If included, the POLICYTYPE attribute cannot be blank or null. One of 1 (Earthquake), 2 (Windstorm), 3 (Winterstorm), 4 (Flood), 5 (Fire), 6 (Terrorism), or 7 (Workers Compensation).
  • If included, policy limits and policy coverage limits, deductibles, and premiums must be specified as positive values. Negative numbers are not allowed.
  • If included, the INCEPTDATE value cannot be later than the EXPIREDATE value.

For a comprehensive description of account data requirements for MRI import, see the DLM Reference Guide on RMS Owl.

The following snippet shows an example of a flat file that uses tabs to delimit account attribute values:

ACCNTNUM	ACCNTSTAT	ACCNTNAME	ACCNTSTATE	USERDEF1	USERDEF2	USERDEF3	USERDEF4	
AD Andorra		AD Andorra	0					Property
AR Argentina	AR Argentina	0					Property
AT Austria		AT Austria	0					Property
AU Australia	AU Australia	0					Property
BE Belgium		BE Belgium	0					Property
BG Bulgaria		BG Bulgaria	0					Property
BO Bolivia		BO Bolivia	0					Property
...

In Step 6 of this recipe, the MRI_IMPORT job will ingest account data from this file and import that data into the EDM you specify. To ensure that the platform correctly parses the file, you will need to specify the delimiter used to structure text in the file and indicate if the file uses column headers.


DATA VALIDATION

Intelligent Risk Platform™ validates the data specified in the account source file prior to importing that data into the EDM. The import process performs basic data validations to make sure that imported data is valid and does not create orphan records.

If a required data attribute value is missing or invalid, the platform throws a validation error.

Now that you have an account source file, you can acquire the URL and credentials that will enable you to upload this file to the storage bucket.

Step 2.2: Get security credentials and storage bucket path

In this procedure, you will acquire the URL and security credentials that will enable you to upload the account source file you just created to the storage bucket you created in Step 1.

The Get file location on S3 operation fetches a URL for the specified storage bucket, and returns temporary security credentials that will enable you to connect to that storage bucket using the AWS SDK.

This operation requires that you specify a bucketId path parameter that identifies the ID number of the storage bucket you created in Step 1 (e.g. 3333).

curl --location --request POST 'https://{host}/riskmodeler/v1/storage/3333/path' \
--header 'Content-Type: application/json' \
--header 'Authorization: XXXXXXX'

All other parameters are specified in the request body. Here we specify the name of the account source file that you just created (e.g. accexp.txt).

{
    "fileInputType": "MRI",
    "fileName": "accexp.txt",
    "fileSize": 9105,
    "fileType": "account"
}

The request body specifies the fileInputType, fileName, fileSize, and fileType parameters. The fileName and fileType body parameters are required.

  • The fileInputType body parameter identifies the input process. One of MRI or ALM.
  • The fileName body parameter specifies the name of the account source file to be uploaded.
  • The fileSize body parameter specifies the size of the account source file in kilobytes.
  • The fileType body parameter specifies the type of data contained in the file. One of account (account data), risk (location data), reins (reinsurance data), or mapping (mapping data).

If successful, the response returns a 201 status code and base64 encoded temporary security credentials from the AWS Security Token Service.

"accessKeyId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"secretAccessKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"sessionToken": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"s3Path": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

The temporary security credentials enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.

  • The accessKeyId attribute specifies a base64 encoded S3 access key ID, a unique identifier for your S3 access key.
  • The secretAccessKey attribute specifies a base64 encoded S3 secret access key. The access key ID and secret access key enable you to sign AWS requests.
  • The s3Path attribute specifies a base64 encoded path to the storage bucket. For example, https://{{bucketname}}/tenant/50000/import/mri/3929
  • The sessionToken attribute specifies a base64 encoded S3 session token.

The Location HTTP response header specifies the fileId at the end of the header value. For example, location: http://<host>/mi/api/v1/storage/3929/12373. In this example, the fileId is "12373". You will use this fileId when you upload the flat file to the storage bucket.
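
For reference, here is a minimal Java 8 sketch for extracting the fileId from the Location header value. The helper name extractFileId is illustrative, and the header format is assumed to match the example above:

// Returns the final path segment of the Location header value,
// e.g. http://<host>/mi/api/v1/storage/3929/12373 yields "12373".
private static String extractFileId(String locationHeader) {
    return locationHeader.substring(locationHeader.lastIndexOf('/') + 1);
}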

Step 2.3: Upload account source file to storage bucket

The RMS Risk Modeler API does not provide a custom operation for uploading flat file data to storage buckets. You must use the AWS APIs to manage this process.

In this procedure, you will use the storage bucket path and temporary user credentials to upload account data to the storage bucket.

First, you must decode the accessKeyId, secretAccessKey, sessionToken, and s3Path values and pass the decoded values to an S3 client. The sample code is in Java 8.

// Decodes a base64 encoded string using java.util.Base64.
private static String base64Decode(String text) {
    return new String(Base64.getDecoder().decode(text));
}
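
You can apply this helper to each of the four values returned in the response body. In the following sketch, the encoded* variables are illustrative placeholders for the values parsed from the JSON response:

// Decode the base64 encoded credentials and path returned by the
// Get file location on S3 operation.
String accessKeyId     = base64Decode(encodedAccessKeyId);
String secretAccessKey = base64Decode(encodedSecretAccessKey);
String sessionToken    = base64Decode(encodedSessionToken);
String s3Path          = base64Decode(encodedS3Path);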

Pass the decoded accessKeyId, secretAccessKey, and sessionToken to the following getS3Client() method to create an Amazon S3 client.

private static AmazonS3 getS3Client(String accessKey, String secretKey, String sessionToken){
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
            accessKey,
            secretKey,
            sessionToken);
 
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withRegion(Regions.EU_WEST_1)
            .build();
}

Amazon TransferManager is a high-level utility for managing transfers to Amazon S3 that makes extensive use of Amazon S3 multipart uploads.

Once you have the Amazon S3 client, you can pass the s3Client, bucketName, key, and filePath to the TransferManager.

private static void upload(AmazonS3 s3Client, String bucketName, String key, String filePath) {
    try {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .build();
 
        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Upload upload = tm.upload(bucketName, key, new File(filePath));
        System.out.println("Object upload started");
 
        // Optionally, wait for the upload to finish before continuing.
        upload.waitForCompletion();
        System.out.println("Object upload complete");
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}
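
If you wait for the upload to complete, you can also call tm.shutdownNow(false) once the transfer finishes to release the TransferManager's threads without shutting down the underlying Amazon S3 client.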

The parameters are derived from previous steps, as shown in the sketch after this list:

  • The bucketName can be extracted from the initial section of the decoded s3Path. If the s3Path is rms-mi/preview/tenant/50000/import/mri/3929, the bucketName is rms-mi.
  • The key combines the remaining portion of the s3Path with the fileId and fileName in the pattern s3Path/fileId-fileName. For example, preview/tenant/50000/import/mri/3929/12373-fileName.
  • The filePath specifies the absolute path to the account source file you want to upload.
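
Putting it all together, the following sketch derives the bucketName and key from the decoded s3Path and invokes the upload() method defined above. It assumes the decoded values from Step 2.2 are in scope; the fileId, fileName, and local file path are illustrative values carried over from earlier steps:

// Example values from earlier steps; substitute your own.
String fileId   = "12373";
String fileName = "accexp.txt";

// The bucket name is the first segment of the decoded s3Path,
// e.g. "rms-mi" in rms-mi/preview/tenant/50000/import/mri/3929.
String bucketName = s3Path.substring(0, s3Path.indexOf('/'));

// The key is the remainder of the s3Path plus "/fileId-fileName",
// e.g. preview/tenant/50000/import/mri/3929/12373-accexp.txt.
String key = s3Path.substring(s3Path.indexOf('/') + 1) + "/" + fileId + "-" + fileName;

AmazonS3 s3Client = getS3Client(accessKeyId, secretAccessKey, sessionToken);
upload(s3Client, bucketName, key, "/path/to/accexp.txt");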