November, 2024
Highlights
The November release introduces the following features:
The Batch API now supports the management of user-defined workflows as batch jobs. A batch job defines a series of tasks that are managed and performed in sequence. The API supports operations for running, updating, and retrieving batch jobs.
The Reference Data API now supports operations for defining and managing the rollup models used in rollup analysis. Clients may manage default rollup job settings, rollup model setting sets, rollup model sets, and rollup models.
Batch API
The Batch API exposes operations that enable client applications to define and execute user-defined workflows as batch jobs.
A user-defined workflow is a series of operations that are performed in sequence. These operations are defined as an array of task objects in JSON format. Each task represents an expensive or long-running process that is submitted to the workflow engine in a single request as part of a BATCH job for processing. To learn more, see Resource URIs.
The following resource URIs can be submitted as tasks in a batch job:
/platform/model/v1/jobs
/platform/geohaz/v1/jobs
/platform/grouping/v1/jobs
/platform/export/v1/jobs
/platform/riskdata/v1/analyses/{analysisId}/pate
/platform/riskdata/v1/reports
/platform/riskdata/v1/exposures/{exposureId}/bulk-edit
Limitations
This operation's request package may define a workflow that manages up to 1000 exposures in JSON format. Consequently, this operation supports request packages of up to 5 MB.
Create Batch Job
The Create Batch Job operation (POST /platform/batch/v1/jobs) initiates a batch job that consists of multiple tasks in a user-defined workflow.
The client must provide its resource group ID in the required x-rms-resource-group-id header parameter.
All other parameters are specified in the request package. The tasks array defines a series of operations. Each task in the workflow specifies the URI of a Risk Data API endpoint.
{
  "name": "Workflow_Name",
  "tasks": [
    {
      "continueOnFailure": true,
      "dependsOn": [],
      "skipMissingVariables": true,
      "requestBody": {
        // Depends on the request URI specified in the `operationUri` parameter.
      },
      "label": "ExposureBatch",
      "operationUri": "/riskdata/v1/exposures/{exposureId}/bulk-edit"
    },
    {
      // task 2
    },
    {
      // task 3
    }
  ]
}
Each task object in the tasks array may define up to six parameters:
Parameter | Type | Description |
---|---|---|
continueOnFailure | boolean | If true, the job continues when this task fails. Default: false. |
dependsOn | array | List of tasks identified by their labels. Identifies all of the tasks that must be completed prior to this task in the workflow. |
skipMissingVariables | boolean | If true, performs this task even if variables are missing or undefined. |
requestBody | object | Object that defines the request body of the specified operationUri. For details, see the documentation for the corresponding operation. |
label | string | Name that uniquely identifies this task within this batch operation. |
operationUri | string | Resource URI that identifies the operation performed by this task. |
If successful, this operation adds a BATCH job to the workflow engine queue and returns a 202 Accepted HTTP response. Use the Get Batch Job operation to poll the status of this job. When the status of the job is FINISHED, the batch job is complete.
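The create-and-poll flow can be sketched in Python. The helper names (make_task, build_batch_job, wait_for_job) and the stand-in get_status callable are illustrative, not part of the API; the payload shape follows the request example and parameter table above.

```python
import time

def make_task(label, operation_uri, request_body, depends_on=None,
              continue_on_failure=False, skip_missing_variables=True):
    """Build one task object for the `tasks` array."""
    return {
        "label": label,
        "operationUri": operation_uri,
        "requestBody": request_body,
        "dependsOn": depends_on or [],
        "continueOnFailure": continue_on_failure,  # boolean, not the string "true"
        "skipMissingVariables": skip_missing_variables,
    }

def build_batch_job(name, tasks):
    """Assemble a Create Batch Job request body from task definitions."""
    return {"name": name, "tasks": tasks}

def wait_for_job(get_status, poll_seconds=10, max_polls=360):
    """Poll until the job reaches FINISHED.

    `get_status` stands in for a Get Batch Job call returning the status string.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "FINISHED":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("batch job did not finish within the polling budget")
```

Tasks that depend on earlier steps list the earlier tasks' labels in dependsOn, so the workflow engine can order them.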
Search Batch Jobs
The Search Batch Jobs operation (GET /platform/batch/v1/jobs) returns a list of batch jobs.
This operation supports response filtering based on the value of a subset of batch job properties. Depending on the property, you may use a combination of comparison operators, list operators, and logical operators.
Property | Type | Comparison | List |
---|---|---|---|
endedAt | YYYY-MM-DD | = | |
jobId | string | = | IN |
name | string | = | |
startedAt | YYYY-MM-DD | = | |
status | string | = | |
type | string | = | |
userName | string | = | |
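A filter expression built from the properties above is passed URL-encoded in the filter query parameter. A minimal sketch; the helper name and host are placeholders, and the double-quoting of string values is an assumption based on the metricTypes filter example later in these notes:

```python
from urllib.parse import quote

def search_jobs_url(host, **conditions):
    """Build a Search Batch Jobs URL from equality conditions on the
    filterable properties (status, name, userName, ...), joined with AND."""
    expr = " AND ".join(f'{prop} = "{value}"' for prop, value in conditions.items())
    return f"https://{host}/platform/batch/v1/jobs?filter=" + quote(expr)
```

For example, search_jobs_url("api.example.com", status="FINISHED") encodes the expression status = "FINISHED" into the query string.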
Get Batch Job
The response returns information about the status and details of the specified BATCH job:
Property | Type | Description |
---|---|---|
jobId | string | ID of the batch job. |
userName | string | Login of principal that submitted the job. |
status | string | Status of batch job. |
submittedAt | string | Time that batch job was submitted and added to the queue. |
createdAt | string | Time that the batch job was created. |
endedAt | string | Time that batch job completed. |
name | string | Name of batch job as specified in the request. |
guid | string | Unique, system-defined ID for task. |
type | string | Type of job, e.g. BATCH . |
startedAt | string | Time that batch job started. |
output | object | Job details object. Varies depending on the resourceUri specified in the request. |
progress | object | Percent of the batch job completed. |
resourceGroupId | array | ID of resource group. This job is allocated to the specified resource group. One of RI-UNDERWRITEIQ , RI-RISKMODELER , RI-EXPOSUREIQ , RI-TREATYIQ |
subTasks | array | Array of subtasks, e.g. within a GEOHAZ job. |
priority | string | Priority of batch job. One of veryhigh , high , medium , low , or verylow . |
details | object | Detailed information about the job including an array of resource URIs that identify tasks performed in the batch job. |
Get Batch Job Task
The Get Batch Job Task operation (GET /platform/batch/v1/jobs/{jobId}/tasks/{taskId}) returns information about a specific task in a user-defined workflow.
A batch job consists of multiple tasks that are linked together into a user-defined workflow. These tasks are performed in order according to rules specified in the request body of the batch job. Each task in the user-defined workflow is identified by a system-defined task ID.
The response returns the status of a task within the BATCH job:
{
  "guid": "9a2cdaca-a35e-40b2-8d8f-xxxxxxxxxxxx",
  "taskId": "1",
  "jobId": "xxxxxxxx",
  "status": "Created",
  "createdAt": "2024-08-19T16:02:25.855Z",
  "submittedAt": "2024-08-19T16:02:25.855Z",
  "name": "DOWNLOAD_RDM",
  "output": {
    // Same structure as the details object on the individual job.
    "summary": "string",
    "errors": [
      {
        "message": "string"
      }
    ],
    "log": {}
  },
  "priorTaskGuids": ["7704369b-9275-4804-8062-xxxxxxxxxxxx"],
  "percentComplete": 0,
  "subTasks": [
    {
      // Copy of a task from the job. For a GEOHAZ job, subTask objects
      // represent the subtasks run during that job.
    }
  ]
}
The response includes detailed information about the task including its current status and information about subtasks.
Property | Type | Description |
---|---|---|
taskId | string | ID of task. |
guid | string | Unique, system-defined ID for task. |
submittedAt | string | Time that the task was submitted and added to the queue. |
startedAt | string | Time that the task started. |
output | object | Job details object. Varies depending on the resourceUri specified in the request. |
name | string | Name of job initiated by this task. |
priorTaskGuids | array | List of task GUIDs that precede this task in user-defined workflow. |
subTasks | array | Array of subtasks, e.g. within a GEOHAZ job. |
When the status of the job is FINISHED, the batch job is complete. To learn more, see Platform Jobs.
Update Batch Job
The Update Batch Job operation (PATCH /platform/batch/v1/jobs/{jobId}) may be used to prioritize or cancel the specified batch job.
A batch job can be cancelled if it has a QUEUED status. Use the Get Batch Job operation to poll the status of this job. The job priority can be specified as veryhigh, high, medium, low, or verylow.
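In Python, the two PATCH request bodies might be assembled as follows. The field names priority and status are assumptions inferred from the described behavior; they are not confirmed by this release note:

```python
VALID_PRIORITIES = ("veryhigh", "high", "medium", "low", "verylow")

def prioritize_body(priority):
    """Body for PATCH /platform/batch/v1/jobs/{jobId} to change job priority."""
    if priority not in VALID_PRIORITIES:
        raise ValueError(f"priority must be one of {VALID_PRIORITIES}")
    return {"priority": priority}  # hypothetical field name

def cancel_body():
    """Body to cancel a QUEUED job (hypothetical field name)."""
    return {"status": "CANCELLED"}
```

Validating the priority client-side avoids a round trip for a value the platform would reject.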
Import API
Import Folders
The Create Import Folder operation (POST /platform/import/v1/folders) now adds support for uploading reinsurance data to CEDE import folders.
The properties object now accepts both the exposureFile and reinsuranceFile file types. Both file types can be uploaded in a single request.
{
  "folderType": "CEDE",
  "properties": {
    "fileExtension": "BAK",
    "fileTypes": [
      "exposureFile",
      "reinsuranceFile"
    ]
  }
}
The response identifies the CEDE folder's location on AWS and returns security credentials that will enable you to access the CEDE folder.
The presignParams object returns the temporary security credentials that enable you to programmatically sign AWS requests. Signing helps to secure requests by verifying the identity of the requester and protecting the data in transit.
{
  "uploadDetails": {
    "exposureFile": {
      "fileUri": "...",
      "presignParams": {
        "accessKeyId": "...",
        "secretAccessKey": "...",
        "sessionToken": "...",
        "path": "...",
        "region": "..."
      },
      "uploadUrl": ".../xxx-exposurefile.bak"
    },
    "reinsuranceFile": {
      "fileUri": "platform/import/v1/folders/...",
      "presignParams": {
        "accessKeyId": "...",
        "secretAccessKey": "...",
        "sessionToken": "...",
        "path": "...",
        "region": "..."
      },
      "uploadUrl": ".../xxx-reinsurancefile.bak"
    }
  },
  "folderType": "CEDE",
  "folderId": "..."
}
To learn more about importing CEDE data, see CEDE Import.
Import Jobs
The Create Import Job operation creates an import job that imports resource data previously uploaded to an import folder into the Intelligent Risk Platform.
This operation can now be used to import both exposure data and reinsurance data from the specified CEDE import folder. The value of the resourceUri parameter varies depending on whether the operation imports exposure data or reinsurance data alone:
- If the request uploads exposure data alone, or both exposure data and reinsurance data, the resourceUri parameter specifies the URI of the exposure set that controls access to the uploaded exposures.
- If the request uploads reinsurance data only, the resourceUri parameter specifies the ID of an existing exposure. The uploaded reinsurance data will be automatically linked to the specified exposure.
The following request identifies an existing exposure. The operation imports reinsurance data from the specified CEDE import folder (folderId) and attaches the imported reinsurance programs and treaties to that exposure (resourceUri):
{
  "importType": "CEDE",
  "resourceUri": "/platform/riskdata/v1/exposures/{exposureId}",
  "settings": {
    "folderId": "{folderId}",
    "cedeSchemaVersion": "{cedeSchemaVersion}",
    "exposureName": "{exposureName}",
    "serverId": "{serverId}"
  }
}
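The resourceUri branching described above can be captured in a small helper. This is a sketch: the function name and the mutual-exclusion check are illustrative, and the settings keys follow the request example.

```python
def cede_import_request(folder_id, cede_schema_version, exposure_name, server_id,
                        exposure_set_uri=None, exposure_uri=None):
    """Build a Create Import Job body for a CEDE folder.

    Pass `exposure_set_uri` when importing exposure data (with or without
    reinsurance); pass `exposure_uri` when importing reinsurance data alone.
    """
    if (exposure_set_uri is None) == (exposure_uri is None):
        raise ValueError("specify exactly one of exposure_set_uri or exposure_uri")
    return {
        "importType": "CEDE",
        "resourceUri": exposure_set_uri or exposure_uri,
        "settings": {
            "folderId": folder_id,
            "cedeSchemaVersion": cede_schema_version,
            "exposureName": exposure_name,
            "serverId": server_id,
        },
    }
```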
Upload Reinsurance File
The new Upload Reinsurance to CEDE operation (PUT /import/{folderId}/{reinsuranceFileName}) uploads a BAK file of reinsurance data to the specified CEDE import folder.
Reference Data API
The Reference Data API now supports operations for defining and managing the rollup models used in rollup analysis.
Rollup analysis is the process of aggregating treaty-level losses for programs, program variations, and business hierarchies.
Rollup analysis is available to tenants that have the RI-EXPOSUREIQ entitlement.
Rollup Defaults
Rollup analysis settings are generally defined in a rollup model, a kind of model profile that defines settings and configurations for that analysis. These rollup models are based on a set of default settings.
The Update Rollup Defaults operation (POST /platform/referencedata/v1/update-rollup-defaults) updates default rollup job settings.
The rollup models created by a tenant's clients inherit these default settings.
The Get Rollup Defaults operation (GET /platform/referencedata/v1/rollup-defaults) returns a list of default rollup job settings.
Rollup Model Settings Sets
Every rollup model settings set is identified by a unique modelSettingsSetUuid. This modelSettingsSetUuid must be specified whenever the client initiates a rollup job using the Create Rollup Job operation.
The Create Model Setting Set operation (POST /platform/referencedata/v1/modelsettingssets) creates a model setting set.
The Search Model Setting Sets operation (GET /platform/referencedata/v1/modelsettingssets) returns a list of model setting sets.
The Get Model Setting Set operation (GET /platform/referencedata/v1/modelsettingssets/{modelsettingssetUuid}) returns the specified model setting set.
The Update Model Setting Set operation (PATCH /platform/referencedata/v1/modelsettingssets/{modelsettingssetUuid}) updates the specified model setting set.
The Delete Model Setting Set operation (DELETE /platform/referencedata/v1/modelsettingssets/{modelsettingssetUuid}) deletes the specified model setting set.
Rollup model sets
A rollup model set is a securable that represents a collection of rollup models. The rollup model set determines which groups have access rights to the rollup models controlled by that rollup model set.
The Create Rollup Model Set operation (POST /platform/referencedata/v1/modelsets) creates a rollup model set.
The Search Rollup Model Sets operation (GET /platform/referencedata/v1/modelsets) returns a list of rollup model sets.
The Get Rollup Model Set operation (GET /platform/referencedata/v1/modelsets/{modelsetsUuid}) returns the specified rollup model set.
The Update Rollup Model Set operation (PATCH /platform/referencedata/v1/modelsets/{modelsetsUuid}) updates the specified rollup model set.
The Delete Rollup Model Set operation (DELETE /platform/referencedata/v1/modelsets/{modelsetsUuid}) deletes the specified rollup model set.
Rollup models
The Create Rollup Model operation (POST /platform/referencedata/v1/models) creates a new rollup model.
A rollup model is a type of model profile that defines settings for a rollup analysis.
Every rollup model is defined by an event set and a regional hierarchy. Models can be analytical (e.g. DLM) or simulated (e.g. HD). Risk sources are associated with rollup models. Rollup models may be included in or excluded from a rollup at analysis time.
The Search Rollup Models operation (GET /platform/referencedata/v1/models) returns a list of rollup models.
The Get Rollup Model operation (GET /platform/referencedata/v1/models/{modelUuid}) returns the specified rollup model.
The Update Rollup Model operation (PATCH /platform/referencedata/v1/models/{modelUuid}) updates the specified rollup model.
The Delete Rollup Model operation (DELETE /platform/referencedata/v1/models/{modelUuid}) deletes the specified rollup model.
Tags
The Search Tags operation returns a list of data tags.
A data tag is a keyword or term assigned to an instance of a data entity (e.g. an account, database, profile, or treaty) which describes that entity and facilitates searching or automation. All users may assign tags to risk data entities.
Every data tag is defined by an isActive property, which identifies whether the tag is active or inactive. Inactive tags cannot be applied to data entities.
By default, this operation returns all tags where the value of the isActive property is true, and ignores inactive tags. Client applications can view inactive tags by selecting tags where the value of the isActive property is false:
curl --request GET \
  --url 'https://api-euw1.rms.com/platform/referencedata/v1/tags?filter=isActive%20%3D%20false&limit=5000' \
  --header 'accept: application/json'
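The filter value in the request above is simply the URL-encoded expression isActive = false. A quick Python sketch of building the same query string:

```python
from urllib.parse import quote

# Encode the filter expression used to select inactive tags.
expr = "isActive = false"
query = f"filter={quote(expr)}&limit=5000"
```

urllib.parse.quote percent-encodes the spaces and the = comparison operator while leaving the property name intact.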
The Update Data Tag operation now supports changing the activity status of the specified tag.
Risk Data API
Exposures
The Search EDMs operation returns a list of EDMs (exposure data modules). This operation now supports filtering exposures by exposureSetId.
An exposure set is a securable that represents a collection of exposures that are stored and managed in an exposure data module. The exposure set determines which groups have access rights to those exposures.
This parameter supports the following comparison and list operators:
Property | Comparison | List |
---|---|---|
exposureSetId | =, !=, >, <, >=, <= | IN, NOT IN |
To learn more, see Response Filtering.
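A sketch of building the exposureSetId filter with the operators above; the helper name is illustrative, and the unquoted numeric values follow the isActive example elsewhere in these notes:

```python
from urllib.parse import quote

def exposure_set_filter(ids):
    """Build a Search EDMs filter that matches any of the given exposure sets."""
    if len(ids) == 1:
        expr = f"exposureSetId = {ids[0]}"
    else:
        expr = "exposureSetId IN (" + ", ".join(str(i) for i in ids) + ")"
    return "filter=" + quote(expr)
```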
Risk Data Reports
The Create Risk Data Reports operation (POST /platform/riskdata/v1/reports) initiates a job that creates a downloadable report.
This operation now supports two new report types:
- The PORTFOLIO_ACCUMULATION_DETAILS report type returns detailed information about a portfolio accumulation. Reports can be downloaded as flat files in CSV format.
- The BUSINESS_HIERARCHY_ACCUMULATION_DETAILS report type returns detailed information about a business hierarchy accumulation. Reports can be downloaded as flat files in CSV format.
This operation now supports exposure summary reports (EXPOSURE_SUMMARY), analysis summary reports (ANALYSIS_SUMMARY), California DOI reports (EXPOSURE_DOI_REPORT), portfolio accumulation reports (PORTFOLIO_ACCUMULATION_DETAILS), and business hierarchy reports (BUSINESS_HIERARCHY_ACCUMULATION_DETAILS).
If the value of the reportType parameter is EXPOSURE_SUMMARY, this operation now accepts a custom currency object. The exposure summary report returns losses in the specified currency. If no currency object is specified, the exposure summary report calculates losses in USD.
{
  "perilList": [
    "EQ",
    "FL",
    "FR",
    "TR",
    "WS"
  ],
  "reportName": "NAEQ_NAHUwSurge_CA_WC MaxAtOneTime v18 USD",
  "currency": {
    "currencySchemeName": "RMS",
    "currencyVintageName": "RL24",
    "currencyCode": "USD"
  }
}
The currency object is optional and can be excluded from the request. If the currency object is included in the request, all three properties must be specified, or the platform returns an error.
Property | Type | Definition |
---|---|---|
currencySchemeName | String | Name of scheme that defines exchange rates for currencies of different currency vintages. |
currencyVintageName | String | Version of exchange rates in the currency scheme that are used in analysis. |
currencyCode | String | ISO code that identifies the currency. For example, USD. |
If the client does not provide these currency details, the report is generated using the default currency settings.
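Because a partial currency object causes an error, a client-side check can fail fast before the request is sent. A minimal sketch; the helper name is illustrative:

```python
REQUIRED_CURRENCY_KEYS = {"currencySchemeName", "currencyVintageName", "currencyCode"}

def validate_currency(currency):
    """Return the currency object if complete, or raise before the API would.

    The currency object is optional, but when present all three properties
    must be supplied together.
    """
    if currency is None:
        return None  # report falls back to the default currency (USD)
    missing = REQUIRED_CURRENCY_KEYS - currency.keys()
    if missing:
        raise ValueError(f"currency object is missing: {sorted(missing)}")
    return currency
```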
Report Views
The Search Report Views operation (GET /platform/riskdata/v1/reportviews) and Get Report View operation (GET /platform/riskdata/v1/reportviews/{reportViewId}) now return a metricTypes array that lists all metric types within the report.
The Search Report Views operation supports both the = and IN operators, enabling you to filter report views by metric type. For example, metricTypes IN ("EXPOSURE_SUMMARY", "ELT"):
curl --request GET \
  --url 'https://{host}/platform/riskdata/v1/reportviews?filter=metricTypes%20IN%20%28%22EXPOSURE_SUMMARY%22%2C%20%22ELT%22%29&limit=100&offset=0' \
  --header 'accept: application/json'
If the metricTypes query parameter is not specified in the request, report details are returned for all metric types.
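The encoded filter in the curl example is just the plain expression passed through standard percent-encoding; urllib.parse.quote reproduces it exactly:

```python
from urllib.parse import quote

# The human-readable filter expression and its percent-encoded form.
expr = 'metricTypes IN ("EXPOSURE_SUMMARY", "ELT")'
encoded = quote(expr)
# encoded == 'metricTypes%20IN%20%28%22EXPOSURE_SUMMARY%22%2C%20%22ELT%22%29'
```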
[
  {
    "reportViewId": 10267,
    "reportViewName": "xxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "exposureName": "rms_edm_marginal_impact_V23",
    "createdAt": "2024-11-08T21:30:25.586Z",
    "exposureId": 351649,
    "exposureResourceId": 5,
    "exposureResourceType": "ACCOUNT",
    "exposureResourceTypeId": 8019,
    "createdBy": "[email protected]",
    "notes": "",
    "metricTypes": [
      "STATS",
      "ELT",
      "EP",
      "EXPOSURE_SUMMARY",
      "POLICY_EP",
      "POLICY_STATS",
      "LOCATION_AAL",
      "MARGINAL_EP",
      "MARGINAL_STATS",
      "GROUP_EP",
      "GROUP_STATS",
      "GROUP_POLICY_EP",
      "GROUP_POLICY_STATS",
      "GROUP_ELT",
      "GROUP_MARGINAL_EP",
      "GROUP_MARGINAL_STATS"
    ]
  }
]
The Get Report View operation supports the = operator, enabling you to filter report views by metric type using a comma-separated list of metric types. For example, metricTypes=EXPOSURE_SUMMARY,EP,ELT:
curl --request GET \
  --url 'https://{host}/platform/riskdata/v1/reportviews/777777?metricTypes=EXPOSURE_SUMMARY,EP,ELT' \
  --header 'accept: application/json'
EXPOSURE_SUMMARY data is grouped into six sections that are included in the additionalInfo object:
tiv_summary
geocoding_summary
primary_building_characteristics_summary
hazard_summary
workers_compensation
location_coverage_summary
For example,
{
  "reportViewId": 9077,
  "reportViewName": "EQ_WS_WC_POL : NAEQ_NAHUwSurge_CA_WC MaxAtOneTime v18 USD",
  "exposureName": "RMS_EDM_UIQ_UIAutomation",
  "createdAt": "2024-11-05T21:29:24.547Z",
  "exposureId": 17202920,
  "exposureResourceId": 6556,
  "exposureResourceType": "ACCOUNT",
  "createdBy": "[email protected]",
  "notes": "",
  "metricTypes": [],
  "details": [
    {
      "metricType": "EXPOSURE_SUMMARY",
      "metricUrl": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
      "additionalInfo": {
        "sections": [
          {
            "url": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
            "name": "tiv_summary"
          },
          {
            "url": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
            "name": "geocoding_summary"
          },
          {
            "url": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
            "name": "primary_building_characteristics_summary"
          },
          {
            "url": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
            "name": "hazard_summary"
          },
          {
            "url": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
            "name": "workers_compensation"
          },
          {
            "url": "https://host.amazonaws.com/xxxxxxx/export/reports/xxxxxxx",
            "name": "location_coverage_summary"
          }
        ],
        "reportUuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
      }
    }
  ]
}
The additionalInfo object may return custom currency details:
{
"currencyCode": "USD",
"currencySchemeName": "RMS",
"currencyVintageName": "RL24"
}
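Given a report view response shaped like the example above, the per-section download URLs can be collected into a dictionary keyed by section name. A sketch; section_urls is an illustrative helper, not an API call:

```python
def section_urls(report_view):
    """Map EXPOSURE_SUMMARY section names to their download URLs."""
    urls = {}
    for detail in report_view.get("details", []):
        if detail.get("metricType") != "EXPOSURE_SUMMARY":
            continue  # other metric types do not carry the six sections
        for section in detail.get("additionalInfo", {}).get("sections", []):
            urls[section["name"]] = section["url"]
    return urls
```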
Rollup API
The Create Rollup Job operation (POST /platform/rollup/v1/jobs) now accepts two new body parameters, CorporateFormulaUuid and technicalFormulaUuid, when initiating a rollup analysis.
This operation supports the creation of three types of rollup reports: ProgramRollupRequest, ProgramVariationRollupRequest, and BusinessHierarchyRollupRequest. These parameters are accepted in the settings object specified in all requests.