Upload Reinsurance Data File

Upload reinsurance data for MRI import

Overview

Now that you have uploaded the account and location source files to the storage bucket on Amazon S3, you can optionally upload reinsurance data to that storage bucket using a combination of Risk Modeler API operations and the AWS SDK.

The MRI process requires that you upload both account and location data. Reinsurance data is entirely optional.

In this procedure, you will define a reinsurance source file to store reinsurance data in a structured format. You will then use the Risk Modeler API to acquire the storage bucket path and temporary security credentials, and use the AWS SDK to upload the reinsurance source file to that storage bucket on Amazon S3.

Step 4.1: Define reinsurance source file

In this step, you will create a reinsurance source file, a flat file that contains attribute data for one or more reinsurance cessions.

A flat file is a two-dimensional database in a plain text format. Each row of text in the file defines a record. The file uses a delimiter character to organize text into discrete columns of structured data. The flat file may use commas, semicolons, or tabs to delimit data values.

Reinsurance exposure data is optional for MRI import. You do not need to upload a reinsurance file to successfully complete an MRI import job. If reinsurance data is imported, one or more reinsurance layers may be imported per location or policy.

RESTRICTIONS

The MRI process imports facultative and surplus share cessions only. If you want to analyze surplus share cessions, you must create a surplus share treaty. When you create that treaty, the treaty number that you enter must match the REINSID of the surplus share cession that you imported. The REINSID attribute is required in the reinsurance flat file and cannot be blank.

Some reinsurance fields are required. See "Reinsurance File Import Information" in the Help Center for detailed information about reinsurance exposure fields.

Step 4.2: Get security credentials and S3 bucket path

The Get S3 path and credentials operation enables you to fetch the path to an S3 bucket and temporary security credentials that enable you to upload a flat file of exposure data.

The service requires that you specify the bucketId of an Amazon S3 bucket as a path parameter.

curl --location --request POST 'https://{host}/riskmodeler/v1/storage/{bucketId}/path' \
--header 'Content-Type: application/json' \
--header 'Authorization: {api_key}' \
--data-raw '{
    "fileInputType": "MRI",
    "fileName": "reinsuranceexp.txt",
    "fileSize": 9105,
    "fileType": "reins"
}'

The request package identifies the fileInputType, fileName, fileSize, and fileType.

{
	"fileInputType": "MRI",
	"fileType": "reins",
	"fileSize": 9105,
	"fileName": "reinsuranceexp.txt"
}

The fileType and fileName attributes are required.

  • The fileInputType attribute identifies the job type (ALM or MRI).
  • The fileName attribute specifies the name of the flat file to be uploaded.
  • The fileSize attribute specifies the size of the flat file in kilobytes.
  • The fileType attribute specifies the type of data contained in the flat file. One of account (account data), risk (location data), reins (reinsurance data), or mapping (mapping data).

If successful, the response returns a 201 status code and base64-encoded temporary security credentials from the AWS Security Token Service.
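
If you prefer to make this call from the same Java 8 program that performs the upload, the following is a minimal sketch of the equivalent request using java.net.HttpURLConnection. The getS3PathAndCredentials method name and the host, bucketId, and apiKey parameters are illustrative placeholders; the request body mirrors the curl example above.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Minimal sketch: POST the request package and return the raw JSON response.
// The host, bucketId, and apiKey values are placeholders that you must supply.
private static String getS3PathAndCredentials(String host, String bucketId, String apiKey) throws Exception {
    URL url = new URL("https://" + host + "/riskmodeler/v1/storage/" + bucketId + "/path");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setRequestProperty("Authorization", apiKey);
    conn.setDoOutput(true);

    String body = "{"
            + "\"fileInputType\": \"MRI\","
            + "\"fileType\": \"reins\","
            + "\"fileSize\": 9105,"
            + "\"fileName\": \"reinsuranceexp.txt\""
            + "}";
    try (OutputStream os = conn.getOutputStream()) {
        os.write(body.getBytes(StandardCharsets.UTF_8));
    }

    // A 201 status code indicates that the temporary credentials were created.
    System.out.println("Status: " + conn.getResponseCode());
    try (InputStream is = conn.getInputStream();
         Scanner scanner = new Scanner(is, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
        return scanner.hasNext() ? scanner.next() : "";
    }
}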

Step 4.3: Upload reinsurance file to S3

The RMS Risk Modeler API does not provide a custom service for uploading flat file data to S3 buckets. You must use the AWS APIs to manage this process.

In this procedure, you will use the S3 bucket path and temporary user credentials to upload the reinsurance data to the S3 bucket.

First, you must decode the accessKeyId, secretAccessKey, sessionToken, and s3Path values and pass the decoded values to an S3 client. The sample code is in Java 8.

private static String base64Decode(String text) {
    return new String(Base64.getDecoder().decode(text));
}
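
For example, assuming you have parsed the base64-encoded values out of the Get S3 path and credentials response (the encoded* variable names below are illustrative), you can decode each value before building the client:

// Illustrative variable names; each encoded value comes from the
// Get S3 path and credentials response.
String accessKey = base64Decode(encodedAccessKeyId);
String secretKey = base64Decode(encodedSecretAccessKey);
String sessionToken = base64Decode(encodedSessionToken);
String s3Path = base64Decode(encodedS3Path);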

Pass the decoded accessKeyId, secretAccessKey, and sessionToken to the getS3Client() method to create an Amazon S3 client.

private static AmazonS3 getS3Client(String accessKey, String secretKey, String sessionToken){
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
            accessKey,
            secretKey,
            sessionToken);
 
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withRegion(Regions.EU_WEST_1)
            .build();
}

Amazon TransferManager is a high-level utility for managing transfers to Amazon S3 that makes extensive use of Amazon S3 multipart uploads.

Once you have the Amazon S3 client, you can pass the s3Client, bucketName, key, and filePath to the TransferManager.

private static void upload(AmazonS3 s3Client, String bucketName, String key, String filePath) {
    try {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .build();
 
        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Upload upload = tm.upload(bucketName, key, new File(filePath));
        System.out.println("Object upload started");
 
        // Optionally, wait for the upload to finish before continuing.
        upload.waitForCompletion();
        System.out.println("Object upload complete");
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}

The parameters are derived from previous steps, as shown in the sketch after this list:

  • The bucketName can be extracted from the initial section of the decoded s3Path. If the s3Path is rms-mi/preview/tenant/50000/import/mri/3929, the bucketName is "rms-mi".
  • The key combines the remaining portion of the s3Path with the fileId and fileName in the following pattern: s3Path/fileId-fileName. For example, "preview/tenant/50000/import/mri/3929/12373-fileName".
  • The filePath specifies the absolute path to the flat file you want to upload.
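
Putting these pieces together, the following is a minimal sketch that derives bucketName and key from the decoded s3Path and then calls the getS3Client() and upload() methods shown above. The s3Path, fileId, fileName, and local file path values are illustrative, and accessKey, secretKey, and sessionToken are the decoded credentials from Step 4.2.

// Illustrative values based on the example above.
String s3Path = "rms-mi/preview/tenant/50000/import/mri/3929";
String fileId = "12373";
String fileName = "reinsuranceexp.txt";

// The bucket name is the first segment of the decoded s3Path.
String bucketName = s3Path.substring(0, s3Path.indexOf('/'));

// The key is the remainder of the s3Path followed by "/fileId-fileName".
String key = s3Path.substring(s3Path.indexOf('/') + 1) + "/" + fileId + "-" + fileName;

// Create the S3 client with the decoded credentials and upload the flat file.
AmazonS3 s3Client = getS3Client(accessKey, secretKey, sessionToken);
upload(s3Client, bucketName, key, "/path/to/reinsuranceexp.txt");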