Import RDMs

Overview

The Risk Modeler API facilitates the migration of RDM data to the Intelligent Risk Platform by leveraging Amazon S3 storage buckets and the AWS SDK APIs.

Risk Modeler API operations enable client applications to securely connect to Amazon S3 and access temporary storage buckets for uploading results data.

The Risk Modeler API does not provide operations for uploading EDMs or RDMs to Amazon S3. For that, use the Amazon S3 APIs, which enable tenants to upload database artifacts (BAK or MDF files) to storage buckets on Amazon Simple Storage Service (Amazon S3).

The Risk Modeler API provides File Storage API operations for importing analysis results data once the RDM is uploaded to a storage bucket on S3. Client applications may specify filters for selecting analysis results data based on the source EDM, analysis ID numbers, tables, or perspectives.

All Intelligent Risk Platform data is protected by means of group-based access rights. Access to exposure sets (collections of exposure and analysis result data) is restricted to those groups that have access rights to use that data. By default, the principal that imports the RDM to the Intelligent Risk Platform "owns" the resulting exposure set. The owner can optionally share the exposure set with one or more groups by specifying a list of groups in the request body.

Step 1: Create database artifact

The Risk Modeler API supports the uploading of RDM databases from on-premises SQL Server instances to the Intelligent Risk Platform in BAK or MDF format. No other file format or file extension is supported. Database artifact names must follow these rules (a validation sketch in Java follows the list):

  • The database artifact name must use the .bak or .mdf file extension.
  • The database artifact name is limited to 80 characters, and can include only 0-9, A-Z, a-z, underscore (_), hyphen (-) and space.
  • Each database artifact must have a unique file name that is not used by any other uploaded database in Risk Modeler. Databases with non-unique file names will generate an error during upload.
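
Client applications can validate artifact names before requesting upload credentials. The following is a minimal sketch, assuming the 80-character limit applies to the full file name (including the extension); the method name isValidArtifactName is illustrative.

import java.util.regex.Pattern;

// Checks a database artifact name against the naming rules above: a .bak or .mdf
// file extension, at most 80 characters, and only digits, letters, underscores,
// hyphens, and spaces in the base name.
static boolean isValidArtifactName(String fileName) {
    String lower = fileName.toLowerCase();
    if (!lower.endsWith(".bak") && !lower.endsWith(".mdf")) {
      return false;
    }
    if (fileName.length() > 80) {   // assumption: the limit includes the extension
      return false;
    }
    String baseName = fileName.substring(0, fileName.length() - 4);
    return Pattern.matches("[0-9A-Za-z_\\- ]+", baseName);
}

Uniqueness cannot be checked locally; a non-unique file name is still reported as an error during upload.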

In Microsoft SQL Server (version 2012 or later) create a database artifact (BAK or MDF) of the RDM you want to upload.

Step 2: Get security credentials and S3 bucket path

The Get storage bucket path and credentials operation enables a client application to identify the database artifacts (BAK or MDF) to be uploaded, fetch the path to an Amazon S3 storage bucket, and acquire the temporary security credentials that it will use to upload those database artifacts to the storage bucket.

The operation supports the uploading of up to 50 BAK or MDF files in a single request. The maximum allowable size of uploaded files is 100 GB.

curl --location --request GET 'https://{host}/riskmodeler/v1/uploads?filename=filename.BAK&dbtype=rdms&fileextension=BAK' \
--header 'Content-Type: application/json' \
--header 'Authorization: {api_key}' \
--data-raw ''

This operation supports a number of different scenarios and accepts many different parameter values to support them. In this workflow, the following query parameters are required (a Java request sketch follows the list):

  • The filename query parameter specifies the name of one or more database artifacts in BAK or MDF format. (A maximum of 50 database artifacts may be included in a single upload.) Include the file extension in the specified file name, for example, fileName1.BAK.
  • The dbtype query parameter specifies the database type of the data source. In this workflow, the dbtype parameter must specify the RDM database type.
  • The fileextension query parameter specifies the file format of the specified database file: one of BAK or MDF. EDM data must use the BAK file format, and the filename parameter must then specify BAK as the file extension.
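
The same request can be issued programmatically. The sketch below uses the standard java.net.http client; the host and apiKey values are placeholders, and the query string mirrors the curl example above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Requests a storage bucket path and temporary credentials for a single BAK file.
static String getUploadCredentials(String host, String apiKey) throws Exception {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://" + host
            + "/riskmodeler/v1/uploads?filename=filename.BAK&dbtype=rdms&fileextension=BAK"))
        .header("Content-Type", "application/json")
        .header("Authorization", apiKey)
        .GET()
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    return response.body();  // the JSON payload shown below; parse it with your preferred JSON library
}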

If successful, the response returns a 201 status code and base64-encoded temporary security credentials from the AWS Security Token Service.

{
    "bucketPrefix": "<rms-bucket-folder-location>",
    "awsRegion": "<rms-s3-bucket-region>",
    "uploadId": "345678",
    "uploadKey1": "<base64 encoded S3 Access Key>",
    "uploadKey2": "<base64 encoded S3 Secret Key>",
    "uploadKey3": "<base64 encoded S3 Session Key>"
}

The temporary security credentials enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.

  • The bucketPrefix attribute specifies the path to the storage bucket.
  • The awsRegion attribute specifies the AWS region, for example, us-east-1. The awsRegion enables the client application to create the S3 client in Step 3.
  • The uploadId attribute specifies the unique ID number for the upload job. The client application specifies uploadId when it submits the job to import RDM data from the storage bucket in Step 4.
  • The uploadKey1 attribute specifies a base64-encoded access key. The access key, secret key, and session key enable you to sign the AWS requests in Step 3.
  • The uploadKey2 attribute specifies a base64-encoded secret key.
  • The uploadKey3 attribute specifies a base64-encoded session key.

Step 3: Upload the files to S3

The Risk Modeler API does not provide a custom service for uploading RDM data to S3 buckets. You must use the AWS APIs to manage this process.

In this procedure, you use the S3 bucket path attributes and temporary security credentials to upload the database artifact to the S3 bucket.

First, you must decode the access key, secret key, and session key so that you can pass the decoded values to an S3 client. These values correspond to the uploadKey1, uploadKey2, and uploadKey3 attributes returned by the File Storage operation in Step 2.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decodes a base64-encoded value (access key, secret key, or session key)
// returned by the Get storage bucket path and credentials operation.
static String base64Decode(String text) {
    return new String(Base64.getDecoder().decode(text), StandardCharsets.UTF_8);
}

Pass the decoded accessKey, secretKey, and sessionToken values, along with the region, to the createS3Client method to create an Amazon S3 client (AmazonS3).

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Builds an Amazon S3 client from the decoded temporary credentials (AWS SDK for Java v1).
static AmazonS3 createS3Client(String accessKey, String secretKey, String sessionToken,
      String region) {
    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
        accessKey,
        secretKey,
        sessionToken);
    return AmazonS3ClientBuilder
        .standard()
        .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
        .withRegion(region)
        .build();
}

Once you have the Amazon S3 client, pass the s3Client, bucketPrefix, fileName, and filePath to the uploadToS3UsingClient method to perform a multipart upload of the database artifact.

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

// Performs a multipart upload of the database artifact to the storage bucket.
static boolean uploadToS3UsingClient(String bucketPrefix, String fileName,
      String filePath, AmazonS3 s3Client) {
    System.out.println("START: multipart upload process");

    File file = new File(filePath);
    long contentLength = file.length();
    long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

    List<PartETag> partETags = new ArrayList<>();

    String[] s3FileLocation = bucketPrefix.split("/");
    String bucketName = s3FileLocation[0];
    StringBuilder filePrefix = new StringBuilder();
    for (int i = 1; i < s3FileLocation.length; i++) {
      filePrefix.append(s3FileLocation[i]);
      filePrefix.append("/");
    }
    System.out.println("START: Initiate multipart upload");

    String fileKey = filePrefix.toString() + fileName;
    // Initiate the multipart upload.
    InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName,
        fileKey);
    InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

    System.out.println("END: Initiate multipart upload");
    // Upload the file parts.
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
      long percentageComplete = (filePosition * 100 / contentLength);
      System.out.println(
          String.format("Uploading in progress... %d%% complete", percentageComplete));
      // Because the last part could be less than 5 MB, adjust the part size as needed.
      partSize = Math.min(partSize, (contentLength - filePosition));

      // Create the request to upload a part.
      UploadPartRequest uploadRequest = new UploadPartRequest()
          .withBucketName(bucketName)
          .withKey(fileKey)
          .withUploadId(initResponse.getUploadId())
          .withPartNumber(i)
          .withFileOffset(filePosition)
          .withFile(file)
          .withPartSize(partSize);

      int retriesLeft = 5;
      while (retriesLeft > 0) {
        try {
          // Upload the part and add the response's ETag to our list.
          UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
          partETags.add(uploadResult.getPartETag());
          retriesLeft = 0;
        } catch (Exception e) {
          System.out.println("failed to upload file part. retrying... ");
          if (retriesLeft == 1) {
            throw new RuntimeException("File upload to S3 failed");
          }
          retriesLeft--;
        }
      }
      filePosition += partSize;
    }
    System.out.println("file upload 100% complete");
      // Complete the multipart upload.
    CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName,
        fileKey,
        initResponse.getUploadId(), partETags);
    s3Client.completeMultipartUpload(compRequest);
    System.out.println("END: multipart upload process");
    return true;
  }
// This method returns true once the upload is complete.
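
The methods above can be combined as shown in the following sketch. The placeholder strings stand in for the attribute values returned in Step 2, and the file name and local path are illustrative.

// Illustrative wiring of Steps 2 and 3; replace the placeholders with your own values.
public static void main(String[] args) {
    String bucketPrefix = "<rms-bucket-folder-location>";   // bucketPrefix from the Step 2 response
    String awsRegion    = "<rms-s3-bucket-region>";         // awsRegion, for example, us-east-1
    String accessKey    = base64Decode("<uploadKey1>");     // decoded access key
    String secretKey    = base64Decode("<uploadKey2>");     // decoded secret key
    String sessionToken = base64Decode("<uploadKey3>");     // decoded session key

    AmazonS3 s3Client = createS3Client(accessKey, secretKey, sessionToken, awsRegion);
    boolean uploaded = uploadToS3UsingClient(bucketPrefix, "filename.BAK",
        "/path/to/filename.BAK", s3Client);
    System.out.println("Upload complete: " + uploaded);
}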

Now that the database artifact has been uploaded to the storage bucket on AWS, the client application may initiate an upload RDM job to transfer the results data from the storage bucket into an RDM hosted on the Intelligent Risk Platform.

Step 4: Submit the upload RDM job

The Upload RDM operation enables you to import the results data that was uploaded to the storage bucket into a hosted RDM.

The required uploadId path parameter identifies the ID number of the upload job that transferred the database artifact to the storage bucket on Amazon S3. Specify the unique ID number (uploadId) returned in Step 2.

curl --request POST \
     --url https://api-euw1.rms.com/riskmodeler/v1/uploads/345678/rdm \
     --header 'Accept: application/json' \
     --header 'Authorization: XXXXXXXXXX' \
     --header 'Content-Type: application/json'

All other parameters are specified in the body of the request. The uploadId and the rdmName attributes are required.


{
    "uploadId": "345678",
    "rdmName": "my-rdm",
    "share": true,
    "exposureSetId": "5f095210-a1e5-46e6-a55e-5f095210a1e5",
    "perspectiveFilter": [
        "GR"
    ]
}

Two body parameters are required in all requests:

  • The uploadId body parameter identifies the ID number of the upload job that transferred the database artifact to the storage bucket on Amazon S3. The uploadId is required and must be specified both as a path parameter and as a body parameter.
  • The rdmName body parameter specifies the name of the RDM to be created on the managed server instance.

The optional share, groups, and exposureSetId body parameters may be used to specify which groups may access the imported RDM data.

  • The share body parameter indicates if the data is to be shared with others. If true, the client must pass a value for either groups or exposureSetId, but not both.
  • The exposureSetId body parameter identifies the ID number of an existing exposure set. If specified, the imported results data is added to that exposure set and all groups with access rights to that exposure set can access the imported results data.
  • The groups body parameter specifies an array of group IDs. If specified, those groups are granted access rights to the imported RDM and its analysis results data.

The optional edmSourceName, analysisIdFilter, tableFilter, and perspectiveFilter body parameters enable you to specify filters for selecting the results data that is imported into the hosted RDM (a request sketch that combines these filters follows the list):

  • The edmSourceName body parameter specifies a filter for selecting analysis results data based on the source EDM used to generate those results. If an EDM is specified, the operation imports only analysis results data that is based on exposure data stored in the specified EDM.
  • The analysisIdFilter body parameter specifies a filter for selecting analysis results by analysis ID. The parameter accepts an array of values. If specified, the operation imports the specified analyses and ignores all other analysis results.
  • The tableFilter body parameter specifies a filter for selecting analysis results data based on database table. The parameter accepts an array of values. If a table is specified, the operation imports only analysis results data that is stored in the specified table.
  • The perspectiveFilter body parameter specifies a filter for selecting analysis results data based on the financial perspectives that were considered in the calculation of those results. The parameter accepts an array of values. Financial perspectives are identified by a two-character code, for example, GU (Ground Up Loss), GR (Gross Loss), or RG (Reinsurance Gross Loss).
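
As a sketch of how the job submission might look from Java, the following example POSTs a request body that combines the sharing and filter parameters described above. The host and apiKey values are placeholders, and the edmSourceName, analysisIdFilter, and tableFilter values are purely illustrative; adjust them to match your own data.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Submits the upload RDM job for upload 345678 (requires Java 15+ for the text block).
static int submitUploadRdmJob(String host, String apiKey) throws Exception {
    String body = """
        {
          "uploadId": "345678",
          "rdmName": "my-rdm",
          "share": true,
          "exposureSetId": "5f095210-a1e5-46e6-a55e-5f095210a1e5",
          "edmSourceName": "my-edm",
          "analysisIdFilter": [101, 102],
          "tableFilter": ["rdm_port"],
          "perspectiveFilter": ["GR"]
        }
        """;
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://" + host + "/riskmodeler/v1/uploads/345678/rdm"))
        .header("Accept", "application/json")
        .header("Authorization", apiKey)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    return response.statusCode();  // 202 Accepted if the job was queued
}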

If successful, the operation returns a 202 Accepted HTTP response and initiates an UPLOAD RDM workflow job on the workflow engine. Use the Get workflow or operation service to track the status of the job; a polling sketch follows. Once the job is complete, the database artifact is automatically deleted from the storage bucket.
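
A simple polling loop might look like the following sketch. It assumes the job can be tracked at GET /riskmodeler/v1/workflows/{workflowId} and that the workflow ID is available from the Upload RDM response (for example, from its Location header); confirm both against the Get workflow or operation reference. The status check on the raw body is deliberately crude; parse the JSON with your preferred library in real code.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Polls the workflow engine until the UPLOAD RDM job leaves the QUEUED/RUNNING states.
// The endpoint path and status values are assumptions; check the API reference.
static String waitForWorkflow(String host, String apiKey, String workflowId) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    while (true) {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://" + host + "/riskmodeler/v1/workflows/" + workflowId))
            .header("Authorization", apiKey)
            .GET()
            .build();
        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        if (!body.contains("QUEUED") && !body.contains("RUNNING")) {
            return body;  // FINISHED, FAILED, or another terminal status
        }
        Thread.sleep(30_000);  // wait 30 seconds between polls
    }
}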