Import EDMs
Overview
The Risk Modeler API facilitates the migration of EDM databases to the Intelligent Risk Platform by leveraging Amazon S3 storage buckets and the Amazon SDK APIs.
Risk Modeler API operations enable client applications to securely connect to Amazon S3 and access temporary storage buckets for uploading exposure data.
Risk Modeler does not provide a service for uploading the EDM to Amazon S3; for that, you use the Amazon AWS APIs. Amazon S3 APIs enable tenants to upload database artifacts (BAK or MDF files) to storage buckets on Amazon Simple Storage Service (Amazon S3). Once the artifact is uploaded to Amazon S3, use File Storage API operations to initiate a workflow job that creates a cloud-based EDM from the uploaded exposure data.
Step 1: Create database artifact
The Risk Modeler API supports the uploading of EDM databases from on-premises SQL Server instances to the Intelligent Risk Platform in BAK or MDF format. No other file formats or file extensions are supported.
- The database artifact name must use the .bak or .mdf file extension.
- The database artifact name is limited to 80 characters and may include only the characters 0-9, A-Z, a-z, underscore (_), hyphen (-), and space.
- Each database artifact must have a unique file name that is not used by any other uploaded database in Risk Modeler. Databases with non-unique file names generate an error during upload.
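If it is useful to validate artifact names before requesting an upload, the naming rules above can be checked client-side. The following Java snippet is a minimal sketch only; the isValidArtifactName helper is hypothetical and assumes the 80-character limit applies to the base name.

import java.util.regex.Pattern;

// Hypothetical client-side check of the naming rules above (not part of the Risk Modeler API).
// Assumes the 80-character limit applies to the base name, excluding the extension.
static final Pattern ARTIFACT_NAME =
        Pattern.compile("^[0-9A-Za-z_\\- ]{1,80}\\.(bak|mdf)$", Pattern.CASE_INSENSITIVE);

static boolean isValidArtifactName(String fileName) {
    return ARTIFACT_NAME.matcher(fileName).matches();
}

// Example: isValidArtifactName("myExposures_2023.bak") returns true;
// isValidArtifactName("bad*name.bak") returns false.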
If you need to import an EDM or RDM database, note that certain restrictions apply. Data Bridge enables you to import database artifacts representing EDMs and RDMs managed in RiskLink/RiskBrowser versions 13.0, 15.0, 16.0, 17.0, and 18.0. During import to Data Bridge, EDMs and RDMs are automatically updated to Microsoft SQL Server 2019.
Step 2: Get storage bucket path and temporary credentials
The Get storage bucket path and credentials operation enables client applications to identify the BAK or MDF files to be uploaded, fetch the path to an Amazon S3 storage bucket, and acquire the temporary security credentials used to upload the specified database artifacts to that storage bucket.
The operation supports the uploading of up to 50 database artifacts in a single request.
Three query parameters are required: filename, dbtype, and fileextension.
curl --location --request GET 'https://{host}/riskmodeler/v1/uploads?filename=filename.BAK&dbtype=EDM&fileextension=BAK' \
--header 'Content-Type: application/json' \
--header 'Authorization: {api_key}' \
--data-raw ''
This operation supports a number of scenarios and accepts many different parameter values. In this workflow, the following query parameter values are required:
- The filename query parameter specifies the name of one or more database artifacts. (A maximum of 50 database artifacts may be included in a single upload.) The file extension is included in the specified file name, for example fileName1.BAK.
- The dbtype query parameter specifies the database type of the data source. The dbtype parameter must specify the EDM database type.
- The fileextension query parameter specifies the file format of the specified database file, one of BAK or MDF. The filename parameter must specify .bak or .mdf as the file extension.
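For reference, the same request can also be issued from Java using the JDK's built-in HTTP client. This is a minimal sketch, not an official client: the host and API key values are placeholders, and the response body is simply printed.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: fetch the storage bucket path and temporary credentials.
// Replace the host and API key placeholders with your tenant's values.
static String getUploadCredentials() throws Exception {
    String url = "https://api.example.com/riskmodeler/v1/uploads"
            + "?filename=filename.BAK&dbtype=EDM&fileextension=BAK";
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("Content-Type", "application/json")
            .header("Authorization", "{api_key}")
            .GET()
            .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode()); // 201 on success, as described below
    return response.body();                    // JSON with Prefix, awsRegion, uploadId, uploadKey1-3
}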
If successful, the response returns a 201 HTTP status code and base64-encoded temporary security credentials from the AWS Security Token Service.
"Prefix": "<rms-bucket-folder-location>",
"awsRegion": "<rms-s3--region>",
"uploadId": "<upload-uuid>",
"uploadKey1": "<base64-encoded S3 Access Key>",
"uploadKey2": "<base64-encoded S3 Secret Key>",
"uploadKey3": "<base64-encoded S3 Session Key>"
}
The temporary security credentials enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.
- The Prefix attribute specifies a base64-encoded path to the storage bucket.
- The awsRegion attribute specifies the AWS region, for example us-east-1. The awsRegion enables the client application to create the S3 client in Step 3.
- The uploadId attribute specifies a base64-encoded unique ID number for the upload job. The client application specifies the uploadId when it submits the upload EDM job to the workflow engine for processing in Step 4.
- The uploadKey1 attribute specifies a base64-encoded access key.
- The uploadKey2 attribute specifies a base64-encoded secret key.
- The uploadKey3 attribute specifies a base64-encoded session key. The access key, secret key, and session key enable you to sign the AWS requests in Step 3.
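If it helps to work with these values as a typed object, the response can be mapped onto a simple holder like the sketch below (Java 16+); the UploadCredentials record is purely illustrative and not part of any SDK.

// Illustrative holder for the Step 2 response attributes (hypothetical; Java 16+ record).
record UploadCredentials(
        String Prefix,      // base64-encoded path to the storage bucket
        String awsRegion,   // AWS region, for example us-east-1
        String uploadId,    // base64-encoded unique ID for the upload job (used in Step 4)
        String uploadKey1,  // base64-encoded access key
        String uploadKey2,  // base64-encoded secret key
        String uploadKey3   // base64-encoded session key
) {}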
Step 3: Upload the files to S3
The Moody's Risk Modeler API does not offer an operation for uploading EDMs to AWS storage buckets. Rather, application clients must use the AWS APIs to manage the uploading of database artifacts (MDF or BAK files) to storage buckets.
Client applications may use the S3 path attribute values (Prefix, awsRegion, uploadId) and the security credentials (access key, secret key, session key) returned in Step 2 to upload database artifacts to the storage bucket.
First, you must decode the access key, secret key, and session key so that you can pass the decoded values to an S3 client. These values correspond to the uploadKey1, uploadKey2, and uploadKey3 attributes returned by the service in Step 2.
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decode a base64-encoded value (for example, uploadKey1) into a UTF-8 string.
static String base64Decode(String text) {
    try {
        return new String(Base64.getDecoder().decode(text), StandardCharsets.UTF_8);
    } catch (IllegalArgumentException e) {
        return null;
    }
}
Pass the decoded accessKey, secretKey, sessionToken, and region to the AmazonS3 createS3Client method to create an Amazon S3 client.
static AmazonS3 createS3Client(String accessKey, String secretKey, String sessionToken,
String region) {
BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
accessKey,
secretKey,
sessionToken);
return AmazonS3ClientBuilder
.standard()
.withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
.withRegion(region)
.build();
}
Once you have the Amazon S3 client, pass the s3Client, Prefix, fileName, and filePath to the uploadToS3UsingClient method to upload the database artifact to the storage bucket.
static boolean uploadToS3UsingClient(String Prefix, String fileName,
String filePath, AmazonS3 s3Client) {
System.out.println("START: multipart upload process");
File file = new File(filePath);
long contentLength = file.length();
long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.
List<PartETag> partETags = new ArrayList<>();
String[] s3FileLocation = Prefix.split("/");
String Name = s3FileLocation[0];
StringBuilder filePrefix = new StringBuilder();
for (int i = 1; i < s3FileLocation.length; i++) {
filePrefix.append(s3FileLocation[i]);
filePrefix.append("/");
}
System.out.println("START: Initiate multipart upload");
String fileKey = filePrefix.toString() + fileName;
// Initiate the multipart upload.
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(Name,
fileKey);
InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);
System.out.println("END: Initiate multipart upload");
// Upload the file parts.
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
long percentageComplete = (filePosition * 100 / contentLength);
System.out.println(
String.format("Uploading in progress... %d%% complete", percentageComplete));
// Because the last part could be less than 5 MB, adjust the part size as needed.
partSize = Math.min(partSize, (contentLength - filePosition));
// Create the request to upload a part.
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(Name)
.withKey(fileKey)
.withUploadId(initResponse.getUploadId())
.withPartNumber(i)
.withFileOffset(filePosition)
.withFile(file)
.withPartSize(partSize);
int retriesLeft = 5;
while (retriesLeft > 0) {
try {
// Upload the part and add the response's ETag to our list.
UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
partETags.add(uploadResult.getPartETag());
retriesLeft = 0;
} catch (Exception e) {
System.out.println("failed to upload file part. retrying... ");
if (retriesLeft == 1) {
throw new RuntimeException("File upload to S3 failed");
}
retriesLeft--;
}
}
filePosition += partSize;
}
System.out.println("file upload 100% complete");
// Complete the multipart upload.
CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(Name,
fileKey,
initResponse.getUploadId(), partETags);
s3Client.completeMultipartUpload(compRequest);
System.out.println("END: multipart upload process");
return true;
}
//This method returns true once the upload is complete
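Putting the helpers together, the end-to-end upload might be wired up roughly as follows. This is a sketch only: the uploadDatabaseArtifact method name is hypothetical, and every argument except fileName and filePath comes from the Get storage bucket path and credentials response in Step 2.

// Sketch: combine the helpers above. The method name is hypothetical; all arguments
// except fileName and filePath come from the Step 2 response.
static boolean uploadDatabaseArtifact(String uploadKey1, String uploadKey2, String uploadKey3,
        String awsRegion, String encodedPrefix, String fileName, String filePath) {
    String accessKey = base64Decode(uploadKey1);
    String secretKey = base64Decode(uploadKey2);
    String sessionToken = base64Decode(uploadKey3);
    String prefix = base64Decode(encodedPrefix); // the Prefix attribute is also base64-encoded
    AmazonS3 s3Client = createS3Client(accessKey, secretKey, sessionToken, awsRegion);
    return uploadToS3UsingClient(prefix, fileName, filePath, s3Client);
}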
Now that the database artifact has been uploaded to the storage bucket on AWS, the client application may initiate an upload EDM job to transfer the exposure data from the storage bucket to a cloud-based EDM.
Step 4: Submit the upload EDM job
The Upload EDM operation enables you to migrate exposure data from a storage bucket to the specified EDM.
The required uploadId path parameter identifies the ID number of the AWS task used to upload the database artifact to the storage bucket on Amazon S3. Use the unique ID number (uploadId) returned in Step 2 to identify the AWS task.
The operation takes two query parameters. The required datasource query parameter specifies the EDM into which the exposure data is imported. The optional servername query parameter identifies the server instance that hosts the EDM.
curl --request POST \
--url 'https://{host}/riskmodeler/v1/uploads/345678/edm?datasource=myEDM&servername=myServer' \
--header 'Accept: application/json' \
--header 'Authorization: XXXXXXXXXX'
Whenever you upload an EDM to the Intelligent Risk Platform, a new exposure set is created. By default, the principal that uploaded the EDM "owns" that EDM and its exposure set. The owner can optionally share the exposure set with one or more groups by specifying a list of groups in the request body.
{
"share": "true",
"groups": ["8aa67aae-ebc2-46e6-a55e-5f095210a1e5"]
}
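The same request, including the optional sharing body, might be issued from Java along these lines. This is a sketch: the host, upload ID, data source, server name, group ID, and API key values are placeholders taken from the examples above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: submit the upload EDM job for a previously uploaded database artifact.
// Replace the host, uploadId (345678), datasource, servername, group ID, and API key
// with your own values.
static int submitUploadEdmJob() throws Exception {
    String url = "https://api.example.com/riskmodeler/v1/uploads/345678/edm"
            + "?datasource=myEDM&servername=myServer";
    String body = "{ \"share\": \"true\", \"groups\": [\"8aa67aae-ebc2-46e6-a55e-5f095210a1e5\"] }";
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("Accept", "application/json")
            .header("Content-Type", "application/json")
            .header("Authorization", "XXXXXXXXXX")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
    return response.statusCode(); // 202 Accepted on success; the UPLOAD EDM workflow job is queued
}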
If successful, the operation returns a 202 Accepted HTTP response and initiates an UPLOAD EDM workflow job on the workflow engine. Use the Get workflow or operation operation to track the status of the job.
Once the job is complete, the database artifact is automatically deleted from the storage bucket and the EDM is available to Risk Modeler API operations.