Upload Location Data File
Upload location data for MRI import
Overview
Now that you have uploaded an account source file to the storage bucket on Amazon S3, you can upload location exposure data to that storage bucket using a combination of Risk Modeler API operations and the AWS SDK.
In this procedure, you will define a location source file to store location data in a structured format. You will then use the Risk Modeler API to acquire the URL and security credentials for your storage bucket on Amazon S3 and use the AWS SDK to upload the location source file to that storage bucket.
Step 3.1: Define location source file
In this step, you will create a location source file, a flat file that contains attribute data for one or more locations.
A flat file is a two-dimensional database in a plain-text format. Each row of text in the file defines a record. The file uses a delimiter character to organize text into discrete columns of structured data. The flat file may use commas, semicolons, or tabs to delimit data values.
- The location source file may be no larger than 18GB in size and may contain a maximum of 12 million records.
- The location source file must contain data for the following attributes: `ACCNTNUM`, `OCCSCHEME`, `OCCTYPE`, `BLDGSCHEME`, `BLDGCLASS`, `CNTRYSCHEME`, and `CNTRYCODE`. The `ACCNTNUM` attribute is required in both the account source file and the location source file, enabling the MRI service to map locations to accounts. Additional attributes may be required depending on the perils specified for that location.
- The file must have a unique name and be saved in either the CSV or TXT format. For example, `locexp.csv` or `locexp.txt`.
The MRI process requires that certain location exposure data fields are included in the flat file. For a comprehensive description of location data requirements for MRI import, see the DLM Reference Guide on RMS Owl.
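For illustration only, a minimal comma-delimited location source file containing the required attributes might look like the following. The scheme and code values shown are hypothetical placeholders; consult the DLM Reference Guide for the values that are valid for your perils.

ACCNTNUM,OCCSCHEME,OCCTYPE,BLDGSCHEME,BLDGCLASS,CNTRYSCHEME,CNTRYCODE
ACC-0001,ATC,1,ATC,1,ISO2A,US
ACC-0002,ATC,1,ATC,1,ISO2A,GB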
DATA VALIDATION
Intelligent Risk Platform™ validates the data specified in the location source file prior to importing that data into the EDM. The import process performs basic data validations to make sure that imported data is valid and does not create orphan records.
If a required data attribute value is missing or invalid, the platform throws a validation error.
Step 3.2: Get security credentials and S3 bucket path
The Get S3 path and credentials operation enables you to fetch the path to an S3 bucket and temporary security credentials that enable you to upload a flat file of exposure data.

The service requires that you specify the `bucketId` of an Amazon S3 bucket as a path parameter.
curl --location --request POST 'https://{host}/riskmodeler/v1/storage/{{bucketId}}/path' \
--header 'Content-Type: application/json' \
--header 'Authorization: {api_key}'
The request package identifies the `fileInputType`, `fileName`, `fileSize`, and `fileType`.
curl --location --request POST 'https://{host}/riskmodeler/v1/storage/{{bucketId}}/path' \
--header 'Authorization: {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
"fileName": "locexp.txt",
"fileSize": 9105,
"fileType": "location"
}'
The `fileType` and `fileName` attributes are required.

- The `fileInputType` attribute identifies the job type (`ALM` or `MRI`).
- The `fileName` attribute specifies the name of the flat file to be uploaded.
- The `fileSize` attribute specifies the size of the flat file in kilobytes.
- The `fileType` attribute specifies the type of data contained in the flat file: one of `account` (account data), `risk` (location data), `reins` (reinsurance data), or `mapping` (mapping data).
If successful, the response returns a `201` status code and base64-encoded temporary security credentials from the AWS Security Token Service.
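The exact response schema is defined in the API reference; as a rough sketch that uses the field names referenced in the next step (plus a hypothetical `fileId`), the response body might resemble:

{
  "fileId": 12373,
  "accessKeyId": "<base64-encoded access key ID>",
  "secretAccessKey": "<base64-encoded secret access key>",
  "sessionToken": "<base64-encoded session token>",
  "s3Path": "<base64-encoded S3 bucket path>"
}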
Step 3.3: Upload location source file to storage bucket
The RMS Risk Modeler API does not provide a custom service for uploading flat file data to S3 buckets. You must use the AWS APIs to manage this process.
In this procedure, you will use the S3 bucket path and temporary user credentials to upload location data to the S3 bucket.
First, you must decode the `accessKeyId`, `secretAccessKey`, `sessionToken`, and `s3Path` values and pass the decoded values to an S3 client. The sample code is in Java 8.
private static String base64Decode(String text) {
    // Decodes a base64-encoded string using java.util.Base64 (Java 8+).
    return new String(Base64.getDecoder().decode(text));
}
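For example, assuming the encoded values from the response have been parsed into strings (the variable names here are hypothetical), the decode step is simply:

String accessKey    = base64Decode(encodedAccessKeyId);
String secretKey    = base64Decode(encodedSecretAccessKey);
String sessionToken = base64Decode(encodedSessionToken);
String s3Path       = base64Decode(encodedS3Path);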
Pass the decoded `accessKeyId`, `secretAccessKey`, and `sessionToken` to the `getS3Client()` method to create an Amazon S3 client.
private static AmazonS3 getS3Client(String accessKey, String secretKey, String sessionToken) {
    // Wrap the temporary STS credentials in a session credentials object.
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
            accessKey,
            secretKey,
            sessionToken);
    // Build an S3 client that authenticates with those credentials.
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withRegion(Regions.EU_WEST_1)
            .build();
}
Amazon TransferManager is a high-level utility for managing transfers to Amazon S3 that makes extensive use of Amazon S3 multipart uploads.
Once you have the Amazon S3 client, you can pass the `s3Client`, `bucketName`, `key`, and `filePath` to the `TransferManager`.
private static void upload(AmazonS3 s3Client, String bucketName, String key, String filePath) {
    try {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .build();
        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Upload upload = tm.upload(bucketName, key, new File(filePath));
        System.out.println("Object upload started");
        // Optionally, wait for the upload to finish before continuing.
        upload.waitForCompletion();
        System.out.println("Object upload complete");
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}
The parameters are derived from previous steps:
- The `bucketName` can be extracted from the initial section of the decoded `s3Path`. If the `s3Path` is rms-mi/preview/tenant/50000/import/mri/3929, the `bucketName` is "rms-mi".
- The `key` combines the remaining portion of the `s3Path` with the `fileId` and `fileName` in the following pattern: `s3Path/fileId-fileName`. For example, "preview/tenant/50000/import/mri/3929/12373-fileName".
- The `filePath` specifies the absolute path to the flat file you want to upload. See the sketch after this list for how these values fit together.
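A minimal sketch that puts these pieces together, assuming the decoded values from the previous steps are in scope and that `fileId` and `fileName` were returned by the Get S3 path and credentials operation (variable names are hypothetical):

// Split the decoded s3Path into the bucket name and the key prefix.
String bucketName = s3Path.substring(0, s3Path.indexOf('/'));   // e.g. "rms-mi"
String keyPrefix  = s3Path.substring(s3Path.indexOf('/') + 1);  // e.g. "preview/tenant/50000/import/mri/3929"
String key = keyPrefix + "/" + fileId + "-" + fileName;

// Create the client with the decoded temporary credentials and start the upload.
AmazonS3 s3Client = getS3Client(accessKey, secretKey, sessionToken);
upload(s3Client, bucketName, key, "/path/to/locexp.txt");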