What's Coming in 2026.01.c
Highlights
The 2026.01.c release introduces the Peril Converter API.
Moody's publishes preliminary information to inform stakeholders ahead of the targeted Intelligent Risk Platform update. Note that the features described here are not guaranteed for the next update or any subsequent update and may change without notice. The definitive list of features will be provided in the Changelogs at the time of the official release.
Auto Select API
Create Auto Select Task
The Create Auto Select Task operation (POST platform/autoselect/v1/tasks) creates a smart selection task that automatically identifies and applies catastrophe modeling configurations (e.g. model profiles) applicable to a specified exposure.
In this initial release, this operation can be used to automatically apply model profiles to accounts prior to catastrophe modeling.
A model profile is a set of pre-configured settings that enable catastrophe modeling based on a single peril- and region-specific model. Every Create Model Job request must specify a model profile and one or more exposures (aggregate portfolio, portfolio, or account resource). To model an exposure, the region and peril of covered locations must match the region and peril of the model profile.
This operation selects model profiles that can be used to model the specified account based on the region and peril coverage of the account's locations. It compares each model profile's region and peril with those of the specified account's location exposures and rejects model profiles that are not applicable to the account.
The request body accepts the resource URI of an account exposure and auto select task settings in the optional settings object, which can be used to specify a list of model profiles for possible selection:
{
"resourceType": "ACCOUNT",
"resourceUri": "/platform/riskdata/v1/exposures/123/accounts/12",
"settings": {
"taskName": "ACCOUNT_123",
"modelProfileIds": []
}
}
The request accepts the following parameters:
| Parameter | Type | Description |
|---|---|---|
resourceType | String | Type of resource. Current implementation supports ACCOUNT only. |
resourceUri | String | Resource URI of account resource. |
settings | Object | Configurations for the auto select task. The taskName parameter specifies the name of the task. The modelProfileIds array accepts the ID numbers of up to 100 model profiles. If unspecified, the task checks all model profiles. |
Invalid modelProfileIds are ignored.
If successful, returns the 201 Created HTTP status code and the URI of the AUTOSELECT task in the Location header: {host}/platform/autoselect/v1/tasks/{uuid}. The client can use this URI to poll the status of the task.
This operation is supported for tenants with the RI-UNDERWRITEIQ entitlement. The client must pass a valid resource group ID in the x-rms-resource-group-id header parameter. The client must belong to a group that has been assigned the UnderwriteIQ Admin or UnderwriteIQ User role.
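The full request, including the required header parameters, might look like the following sketch (the host follows examples elsewhere in this document; the resource group ID is a placeholder and authentication headers are omitted):
curl --request POST \
--url https://api-euw1.rms.com/platform/autoselect/v1/tasks \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--header 'x-rms-resource-group-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
--data '
{
"resourceType": "ACCOUNT",
"resourceUri": "/platform/riskdata/v1/exposures/123/accounts/12",
"settings": {
"taskName": "ACCOUNT_123",
"modelProfileIds": []
}
}
'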
Get Auto Select Task
The Get Auto Select Task operation (GET platform/autoselect/v1/tasks/{taskId}) returns the results of a specific auto select task.
This operation can be used to poll the status of an auto select task.
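For example, a client could poll the task URI returned in the Location header until the task reaches a terminal status. The following is a minimal sketch using curl and jq; the task UUID and host are placeholders and authentication headers are omitted:
# Poll the auto select task every 10 seconds until it completes or fails (sketch).
TASK_URI="https://api-euw1.rms.com/platform/autoselect/v1/tasks/488ea126-xxxx-47bd-85ea-e0fcebe1931f"
while true; do
  STATUS=$(curl --silent --request GET --url "$TASK_URI" --header 'accept: application/json' | jq -r '.status')
  echo "Task status: $STATUS"
  if [ "$STATUS" = "COMPLETED" ] || [ "$STATUS" = "FAILED" ]; then
    break
  fi
  sleep 10
done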
In the initial release, the response returns a list of the model profiles that are applicable to the exposure specified in the original job.
{
"taskUuid": "488ea126-xxxx-47bd-85ea-e0fcebe1931f",
"taskType": "AUTOSELECT",
"taskName": "ACCOUNT_123",
"status": "COMPLETED",
"createdAt": "2025-09-24T22:34:20.018Z",
"updatedAt": "2025-09-24T22:35:20.018Z",
"expiresAt": "2025-09-25T22:34:20.018Z",
"createdBy": "[email protected]"
"output": {
"errors": [],
"log": {
"modelProfileIds": [123, 456]
}
}
}
The response returns the following properties:
| Property | Type | Description |
|---|---|---|
taskUuid | String | UUID of task. |
taskType | String | Type of task, e.g. AUTOSELECT |
taskName | String | User-defined name of task. |
status | String | Status of task. One of COMPLETED, FAILED, IN PROGRESS, PENDING. |
createdAt | String | Time AUTOSELECT task was created in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z. |
updatedAt | String | Time AUTOSELECT task was last updated in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z. |
expiresAt | String | Time AUTOSELECT task expires in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z. |
createdBy | String | Login of principal that created task. |
output | Object | Object that returns information about the task, including an errors array and a log object that returns a list of applicable model profiles in the modelProfileIds array. |
This operation is supported for tenants with the RI-UNDERWRITEIQ entitlement. The client must belong to a group that has been assigned the UnderwriteIQ Admin or UnderwriteIQ User role.
Export API
Create Export Job
The Create Export Job operation now supports exporting statistics and metrics segmented by granularity to CSV or Parquet.
The RESULTS job type supports exporting analysis result data (loss tables, EP metrics, and statistics) to a flat file in CSV or PARQUET format. This operation now supports exporting results data grouped by output level.
An output level is a category that identifies the granularity of analysis result data, i.e. the resolution level used to aggregate computed losses. This operation now supports exporting loss details at the following output levels: Account, Admin1 by Cedant, Admin1 by LOB by Cedant, Admin1 by LOB, Admin1, Admin2 by Cedant, Admin2 by LOB by Cedant, Admin2 by LOB, Admin2, Cedant, City by LOB, City, Country by Cedant, Country by LOB by Cedant, Country by LOB, Country, Cresta by Cedant, Cresta by LOB by Cedant, Cresta by LOB, Cresta, District/Admin3 by LOB, District/Admin3, Facultative, LOB, Location, Other GeoID by LOB, Other GeoID, Policy, Portfolio, PostalCode by Cedant, PostalCode by LOB by Cedant, PostalCode by LOB, PostalCode, Treaty
The outputLevels parameter specifies a list of output levels that define the granularity of exported losses:
{
"analysisIds": ["292653"],
"exportFormat": "CSV",
"exportType": "RDM",
"lossDetails": [
{
"lossType": "EP",
"outputLevels": [
"Admin1",
"Admin1 by Cedant",
"Admin1 by LOB",
"Admin1 by LOB by Cedant",
"Admin2",
"Admin2 by Cedant",
"Admin2 by LOB",
"Admin2 by LOB by Cedant",
"Account",
"Cedant",
"City",
"City by LOB",
"Country",
"Country by Cedant",
"Country by LOB",
"Country by LOB by Cedant",
"Cresta",
"Cresta by Cedant",
"Cresta by LOB",
"Cresta by LOB by Cedant",
"District/Admin3",
"District/Admin3 by LOB",
"LOB",
"Location",
"Other GeoID",
"Other GeoID by LOB",
"Policy",
"Portfolio",
"PostalCode",
"PostalCode by Cedant",
"PostalCode by LOB",
"PostalCode by LOB by Cedant",
"Treaty",
"Facultative"
],
"perspectives": ["GU", "GR", "RL"]
}
],
"type": "ResultsExportInputV2",
"nonWeightedPlt": false
}
Estimate Export Job Size
The Create Estimate Export Job Size operation (POST /platform/export/v1/rdm-estimates/tasks) creates a job that estimates the size of exported RDM data.
The Intelligent Risk Platform no longer imposes a limit of 100 analysis results for RDM and RDM_DATABRIDGE export jobs.
This operation calculates the size of the data to be exported and can be used with the RDM and RDM_DATABRIDGE export types. It calculates and returns the estimated size of RDMs prior to export, so tenants can determine whether the export is too large or will take too long to process.
Because exports of analysis data are no longer limited to 100 analyses, the estimated RDM size is calculated from the data stored in the application as analyses are selected for export, so you know up front if your export might be too large or take a long time.
If the estimated export size exceeds a certain threshold, the job is prevented from running to avoid failures and wasted time. For exports to the Platform, the SQL Server disk size is checked and the export triggers a warning if it is large or is stopped if it is too big. For Data Bridge exports, the available space on your server is checked and the export triggers a warning or is blocked if there is not enough room.
These changes help you manage large exports efficiently and avoid unexpected failures.
To estimate the size of an RDM export job, the request accepts three parameters: exportType, resourceUris, and exportLossFormat (one of ELT or PLT):
{
"exportType": "RDM",
"resourceUris": ["/platform/riskdata/v1/analyses/183322"],
"exportLossFormat": "PLT"
}
To estimate the size of an RDM_DATABRIDGE export job, the request accepts three parameters: exportType, resourceUris, and settings:
{
"exportType": "RDM_DATABRIDGE",
"resourceUris": ["/platform/riskdata/v1/analyses/183322"],
"settings": { "serverId": 11, "databaseId": 23044712 }
}
If successful, returns a 201 status and creates an ESTIMATE_RDM job.
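For example, the following sketch submits an estimate request for an RDM export; it assumes the task URI is returned in the Location header, as with other task-based operations (the host and analysis ID reuse earlier examples, and authentication headers are omitted):
# Submit the estimate request and print the response headers, including the task URI (sketch).
curl --silent --include --request POST \
--url https://api-euw1.rms.com/platform/export/v1/rdm-estimates/tasks \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '
{
"exportType": "RDM",
"resourceUris": ["/platform/riskdata/v1/analyses/183322"],
"exportLossFormat": "PLT"
}
'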
Get Estimate Export Job Size
The Get Estimate Export Job Size operation (GET /platform/rdm-estimates/v1/{task_uuid}) returns an estimate of the size of RDM data.
The response returns estimates of the size of the exported results data. The data returned depends on the export type (RDM or RDM_DATABRIDGE) and the fileExtension of the exported data. RDM exports support exporting to BAK and MDF files. RDM_DATABRIDGE exports support exporting to PARQUET files.
{
"mdfEstimateSize": "72 MB",
"estimatedSizeBytes": 76408148,
"numberOfAnalyses": 1,
"totalTables": 9,
"totalFiles": 15,
"topTablesBreakdown": "rdm_policy:2 MB, rdm_metadata:1 MB, rdm_locstats:1 MB, rdm_anlsevent:137 KB, rdm_ratescheme:68 KB, rdm_anlspersp:61 KB, rdm_anlsregions:60 KB, rdm_analysis:50 KB, rdm_policystd:0 bytes",
"totalProcessingTimeSeconds": 11,
"validationStatus": "SUCCESS",
"validationWarnings": [],
"diskUsagePercentage": 0.0,
"currentAvailableSpacePercentage": 41.95,
"availableSpaceAfterExportPercentage": 41.95
}
The response object may include the following properties:
| Property | Type | Description |
|---|---|---|
mdfEstimateSize | String | Estimated size of MDF file. |
estimatedSizeBytes | Number | Estimated size in bytes. |
numberOfAnalyses | Number | Number of analysis results exported. |
totalTables | Number | Number of database tables exported. |
totalFiles | Number | Number of files exported. |
topTablesBreakdown | String | Comma-separated list of table names and sizes, e.g. (rdm_policy: 2 MB). |
totalProcessingTimeSeconds | Number | Time to estimate size of exported data. |
validationStatus | String | Status of estimation job, e.g. SUCCESS. |
validationWarnings | Array | Array of warnings. |
diskUsagePercentage | Number | Percentage of Data Bridge disk space in use. |
currentAvailableSpacePercentage | Number | Percentage of Data Bridge disk space still available. |
availableSpaceAfterExportPercentage | Number | Percentage of Data Bridge disk space still available after data export. |
Create Export Job
The Create Export Job operation creates and initiates export jobs in support of a variety of export workflows. Depending on the export type, this operation can export exposure data or result data to a variety of formats.
This operation now enables the client to select analysis result data to export to an RDM by metric type and output level.
The settings object accepts the lossDetails parameter that specifies an array of losses to export.
{
"exportType": "RDM",
"resourceType": "analyses",
"resourceUris": ["/platform/riskdata/v1/analyses/15165331"],
"settings": {
"fileExtension": "BAK",
"sqlVersion": 2019,
"rdmName": "rdm_327993",
"lossDetails": [
{
"metricType": "STATS",
"outputLevels": ["Policy", "Location"]
},
{
"metricType": "EP",
"outputLevels": ["Portfolio", "Account"]
},
{
"metricType": "LOSS_TABLES",
"outputLevels": ["Geographic", "Facultative"]
}
]
}
}
Each loss details object specifies the metricType and an array of outputLevels for that metric.
| Property | Type | Description |
|---|---|---|
metricType | String | The metric or statistic returned, e.g. STATS, EP, or LOSS_TABLES. |
outputLevels | Array | A list of output levels, e.g. Policy, Portfolio, Account. |
Create Export Job
This operation now accepts a schemaVersion parameter that specifies the schema version of the exposure or result data exported to an EDM or RDM database.
By default, exposure data exported to an on-premise EDM database or analysis result data exported to a new RDM database is exported to the latest version of the schema, Version 25.
The Export API now enables tenants to export exposure and analysis result data to on-premise EDM and RDM databases that are based on earlier versions of the database schema, including Version 18, Version 21, Version 22, Version 23, and Version 24.
The schemaVersion parameter can be specified in the body of Create Export Job requests with the EDM, RDM, and RDM_DATABRIDGE export types. The parameter accepts the following values: v18, v21, v22, v23, v24, v25:
{
"exportType": "EDM",
"resourceType": "exposure",
"settings": {
"fileExtension": "BAK",
"sqlVersion": "2019",
"schemaVersion": "v18"
"filters": {
"exposureResourceType": "ACCOUNTS",
"exposureResourceIds": [
555,
556,
557
]
},
"fileName": "myEDM"
},
"resourceUri": "/platform/riskdata/v1/exposures/5555"
Intelligent Risk Platform exposure and analysis result data are managed in EDM and RDM databases, respectively. These databases are defined by versioned EDM and RDM schemas. "A database schema defines how data is organized within a relational database; this is inclusive of logical constraints such as table names, fields, data types, and the relationships between these entities."
As new features are added to the Intelligent Risk Platform, Moody's releases new versions of the EDM database schema and RDM database schema that support the required tables, fields, and data types. The EDM and RDM database schemas are updated together. The most current version of these database schemas is Version 25.
Exporting to earlier database versions allows you to:
- Share data with teams and partners on older environments without changing your primary EDMs or RDMs.
- Keep integrations stable by exporting to the exact data version required by downstream tools.
- Confirm and trace which version is used for each export, supporting audits and troubleshooting.
The following rules apply:
- Exporting an EDM with schemaVersion v23 supports exporting to v23, v22, v21, and v18.
- Exporting analyses supports exporting to RDMs with versions v23, v22, v21, and v18.
- Exporting analyses to a new RDM on Data Bridge supports exporting to versions v23, v22, v21, and v18.
- Exporting analyses to an existing RDM hosted on Data Bridge must export in the schema version of the target RDM.
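The same parameter applies when exporting analysis results. The following request body is a minimal sketch of an RDM export to schema version v21, reusing the analysis ID and RDM name from the earlier lossDetails example:
{
"exportType": "RDM",
"resourceType": "analyses",
"resourceUris": ["/platform/riskdata/v1/analyses/15165331"],
"settings": {
"fileExtension": "BAK",
"sqlVersion": 2019,
"rdmName": "rdm_327993",
"schemaVersion": "v21"
}
}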
Get Export Job
The Get Export Job operation (GET /platform/export/v1/jobs/{jobId}) now returns information about the database schema version of exported EDM and RDM databases.
Every on-premise EDM or RDM conforms to a particular schema version, which defines how the data is organized within that database. The database schema defines the names of tables, fields, data types and the relationships between these entities in the database. As new features are added to the Intelligent Risk Platform, Moody's creates new versions of the EDM schema and RDM schema, which are updated in parallel.
This operation now returns the schema version of an exported EDM or RDM. The Create Export Job operation now enables client applications to specify the schema version of exported exposure and analysis result data in EDM, RDM, and DOWNLOAD_RDM export jobs; consequently, the Get Export Job operation returns this information in the response.
The schemaVersion property is returned in the output object for all EDM, RDM, and DOWNLOAD_RDM export jobs:
{
"jobId": "40481892",
"userName": "[email protected]",
"status": "FINISHED",
"submittedAt": "2026-01-12T18:46:00.383Z",
"startedAt": "2026-01-12T18:46:23Z",
"endedAt": "2026-01-12T18:48Z",
"name": "Single Analysis",
"type": "DOWNLOAD_RDM",
"progress": 100,
"entitlement": "RI-RISKMODELER",
"resourceGroupId": "0660550b-32fb-4360-b7ac-61e1b3761131",
"priority": "medium",
"details": {
"resources": [
{
"uri": "/platform/riskdata/v1/analyses/19797273"
}
],
"summary": "Export is successful"
},
"tasks": [
{
"guid": "affb312e-b53b-4f60-89c4-765d66e382c6",
"taskId": "1",
"jobId": "40481892",
"status": "Succeeded",
"submittedAt": "2026-01-12T18:46:02.500Z",
"createdAt": "2026-01-12T18:46:00.377Z",
"name": "DOWNLOAD_RDM",
"output": {
"summary": "Export is successful",
"errors": [],
"log": {
"analysisName": "Port_All_Acc",
"rdmName": "A1_Databridge_Local_3_UXRt",
"analysisIdMappings": "19797273->1",
"schemaVersion": "V18"
}
},
"percentComplete": 100
}
]
}
Grouping API
The Validate Grouping operation (POST /platform/grouping/v1/validate) validates the components of an analysis group.
This operation validates the specified analysis results, profiles, and simulation sets.
The request body accepts four parameters. If regionPerilSimulationSet is not specified, the operation uses the specified array of analysisIds and profileIds.
{
"analysisIds": [1, 2, 3],
"profileIds": [4, 5, 6],
"regionPerilSimulationSet": []
"skipTreaties": "true" //or false
}
The request accepts four parameters:
| Parameter | Type | Description |
|---|---|---|
analysisIds | Array | List of analysis IDs. |
profileIds | Array | List of profile IDs. |
regionPerilSimulationSet | Array | Optional list of simulation set IDs. |
skipTreaties | Boolean | If true, skips treaty validation. |
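A complete request might look like the following sketch (the host follows other examples in this document; the IDs are illustrative and authentication headers are omitted):
curl --request POST \
--url https://api-euw1.rms.com/platform/grouping/v1/validate \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '
{
"analysisIds": [1, 2, 3],
"profileIds": [4, 5, 6],
"regionPerilSimulationSet": [],
"skipTreaties": true
}
'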
The response returns valid, warning, or error messages.
Import API
The Create Import Folder operation (POST /platform/import/v1/folders) creates an import folder.
Peril Converter API
The Peril Converter API enables Intelligent Risk Platform tenants to add peril coverage in bulk to the location exposures associated with account or portfolio resources.
A location is a property, building, business, or other asset that may be damaged by catastrophe events. Each location exposure is defined by location coverage, which specifies the liability of the underwriter for damages to entities (buildings, building contents, businesses) at a specific location due to catastrophe events of a particular peril type, e.g. EARTHQUAKE, FIRE, FLOOD, TERRORISM, TORNADO, WINDSTORM, WORKERSCOMP.
Using the Peril Converter, the client can update location coverage terms in bulk for all of the location exposures associated with a particular account or portfolio. Thus, the Peril Converter can significantly reduce the effort required to update location coverage terms and reduce the risk of error.
Create Peril Converter Job
The Create Peril Converter Job operation (POST /platform/perilconverter/v1/jobs) creates a peril converter job that converts the perils specified in a list of account or portfolio resources to another peril.
A peril is a natural or man-made phenomenon that generates insurance loss. Perils include earthquake (EQ), flood (FL), fire (FR), severe convective storm (CS), terrorism (TR), windstorm/hurricane/cyclone/typhoon (WS), and winterstorm (WT). A primary peril is the principal modeled peril, associated with or caused by the modeled phenomenon responsible for causing loss. For example, ground shaking is the main cause of loss and thus the primary peril in earthquakes. A secondary peril (or sub-peril) is an additional modeled peril that is associated with or caused by the original primary modeled phenomenon responsible for causing loss.
A peril converter job updates a list of specified exposures (account and portfolio resources) by converting an existing peril (as defined in the sourcePeril parameter) into another peril or list of perils (as defined in the targetPerils parameter). Both source perils and target perils can be identified by causeOfLoss or by peril and newCauseOfLoss.
curl --request POST \
--url https://api-euw1.rms.com/platform/perilconverter/v1/jobs \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--header 'x-rms-resource-group-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
--data '
{
"resourceUris": [
"/platform/riskdata/v1/exposures/{exposureId}/portfolios/2",
"/platform/riskdata/v1/exposures/1002106/accounts/83",
],
"resourceTypes": ["portfolios", "accounts"],
"settings":
{
"sourcePeril":
{
"peril": 2,
"newCauseOfLoss": 27,
"causeOfLoss":
},
"targetPerils": [
{
"peril": 4,
"newCauseOfLoss": 28,
"causeOfLoss":
}
],
"countryCodesFilter": ["CA"],
"includeSubPolicyConditions": true,
"includePolicyReinsurance": true,
"overwriteExistingCoverage": false,
"createBackup": false
}
}
'
The request must pass a valid resource group ID in the required x-rms-resource-group-id header parameter.
All other parameters are specified in the body of the request.
| Parameter | Type | Description |
|---|---|---|
resourceUris | Array | List of resource URIs of multiple account or portfolio resources. |
resourceTypes | Array | List of resource types, e.g. accounts, portfolios. |
settings | Object | Peril converter service settings including sourcePeril, targetPerils, countryCodesFilter, includeSubPolicyConditions, includePolicyReinsurance, overwriteExistingCoverage, and createBackup. |
sourcePeril | Object | Object consists of causeOfLoss or peril and newCauseOfLoss. |
targetPerils | Array | List of objects. Each object consists of causeOfLoss or peril and newCauseOfLoss. |
countryCodesFilter | Array | List of ISO2A country codes, e.g. CA. |
includeSubPolicyConditions | Boolean | One of true or false. |
includePolicyReinsurance | Boolean | One of true or false. |
overwriteExistingCoverage | Boolean | One of true or false. |
createBackup | Boolean | One of true or false. |
If successful, returns 201 Created and adds a PERIL_CONVERTER non-model job to the job queue.
On failure, returns 400 Bad Request or another standard error code.
Search Peril Converter Jobs
The Search Peril Converter Jobs operation (GET /platform/perilconverter/v1/jobs) returns a list of peril converter jobs.
[
{
"jobId": "string",
"priority": "verylow",
"userName": "string",
"status": "QUEUED",
"submittedAt": "2020-01-01T00:00:00.000Z",
"startedAt": "2020-01-01T00:00:00.000Z",
"endedAt": "2020-01-01T00:00:00.000Z",
"name": "string",
"type": "string",
"progress": 0,
"details": {
"resources": [
{
"uri": "string" //inputted resourceUris
}
],
"summary": "string"
},
"tasks": [
{
"taskId": 0,
"guid": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"jobId": "string",
"status": "CANCELED",
"submittedAt": "2020-01-01T00:00:00.000Z",
"createdAt": "2020-01-01T00:00:00.000Z",
"name": "string",
"percentComplete": 0,
"priorTaskGuids": ["3fa85f64-5717-4562-b3fc-2c963f66afa6"],
"output": {
"summary": "string",
"errors": [
{
"message": "string"
}
],
"log": {
"newResources": [
{
"resourceUri": "string",
"exposureResourcename": "string"
}
],
"countryCodesFilter": "Not Available",
"includeSubPolicyConditions": true,
"includePolicyReinsurance": true,
"overwriteExistingCoverage": true,
"totalLocations": "6",
"totalPolicies": "6",
"sourcePeril": "Earthquake",
"targetPerils": "Flood"
}
}
}
]
}
]
Get Peril Converter Job
The Get Peril Converter Job operation (GET /platform/perilconverter/v1/jobs/{jobId}) returns information about a specific peril converter job.
{
"jobId": "string",
"priority": "verylow",
"userName": "string",
"status": "QUEUED",
"submittedAt": "2020-01-01T00:00:00.000Z",
"startedAt": "2020-01-01T00:00:00.000Z",
"endedAt": "2020-01-01T00:00:00.000Z",
"name": "string",
"type": "string",
"progress": 0,
"details": {
"resources": [
{
"uri": "string" //inputted resourceUris
}
],
"summary": "string"
},
"tasks": [
{
"taskId": 0,
"guid": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"jobId": "string",
"status": "CANCELED",
"submittedAt": "2020-01-01T00:00:00.000Z",
"createdAt": "2020-01-01T00:00:00.000Z",
"name": "string",
"percentComplete": 0,
"priorTaskGuids": ["3fa85f64-5717-4562-b3fc-2c963f66afa6"],
"output": {
"summary": "string",
"errors": [
{
"message": "string"
}
],
"log": {
"newResources": [
{
"resourceUri": "string",
"exposureResourcename": "string"
}
],
"countryCodesFilter": "Not Available",
"includeSubPolicyConditions": true,
"includePolicyReinsurance": true,
"overwriteExistingCoverage": true,
"totalLocations": "6",
"totalPolicies": "6",
"sourcePeril": "Earthquake",
"targetPerils": "Flood"
}
}
}
]
}
Update Peril Converter Job
The Update Peril Converter Job operation (PATCH /platform/perilconverter/v1/jobs/{jobId}) updates the status of a peril converter job.
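The request schema for this operation is not shown in this preview. As a sketch only, a cancellation request might look like the following; the status field and the CANCELED value are assumptions based on the job and task status values shown above:
curl --request PATCH \
--url https://api-euw1.rms.com/platform/perilconverter/v1/jobs/{jobId} \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '
{
"status": "CANCELED"
}
'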
Risk Data API
Get Loss Tables for PLT Risk Sources
The Get Loss Tables for PLT Risk Sources operation (GET /platform/riskdata/v1/risksources/{uuid}/imported-plt) returns a list of loss tables for the specified risk source.
A risk source is a representation of risk to a cedant including modeled losses that underlies a program (reinsurance program) or business hierarchy position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.
The period loss table is an output table that simulates event losses over the course of a time period, providing greater flexibility to evaluate loss metrics than the analytical calculations based on event loss tables (ELTs).
By simulating events through time, an HD model computes total loss as well as maximum event occurrence loss for each simulation period in the table, and generates loss statistics based on the distribution of losses across the large number of simulated periods. This methodology can calculate the impact of all contract terms, including terms with time-based features, such as contracts that are shorter (or longer) than a single year.
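For example, the following sketch retrieves the imported PLT for a risk source (the host follows other examples in this document, the UUID is a placeholder, and authentication headers are omitted):
curl --request GET \
--url https://api-euw1.rms.com/platform/riskdata/v1/risksources/3fa85f64-5717-4562-b3fc-2c963f66afa6/imported-plt \
--header 'accept: application/json'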
[
{
"periodId": 503,
"weight": 0.00002,
"eventId": 3508644,
"eventDate": "2020-08-07T00:00:00.000Z",
"lossDate": "2020-08-13T00:00:00.000Z",
"loss": 111642.35349968076,
"peril": "Unrecognized",
"region": "string"
}
]
The response object returns information for each loss table:
| Property | Type | Description |
|---|---|---|
eventDate | Date | Date of event, e.g. 2020-08-07T00:00:00.000Z. |
eventId | Number | ID of event, a representation of a peril that may produce catastrophe losses. |
exposureResourceNumber | Number | Number of exposure resource. |
lossDate | Date | Date of first policy payout, e.g. 2020-08-07T00:00:00.000Z. |
loss | Double | Expected sampled loss based on the position or financial perspective. |
peril | String | Natural or man-made phenomenon that generates insurance loss, e.g. Earthquake, Fire. |
periodId | Number | ID of simulation period. |
region | String | Model region of the analysis. |
weight | Double | Likelihood that a simulation period occurs relative to the other simulation periods, e.g. 2.0E-5. |
Tenant Data API
VPN for Data Bridge Product Launch
VPN for Data Bridge, a new product, allows Data Bridge customers to establish Data Bridge connectivity via a site-to-site VPN. This ensures that you can connect to Data Bridge just as you connect to your on-premises infrastructure. This capability was released to a limited number of customers in 2025 but is now available for purchase by all.
Data Bridge licensees can create, manage, and monitor site-to-site VPN connections for their Data Bridge instances. This streamlines T-SQL connectivity for your Data Bridge instance and makes the connection more secure. With VPN for Data Bridge, Data Bridge administrators no longer need to maintain CIDR block IPs.
Data Bridge admins can:
- Create, update, or delete VPN connections for their Data Bridge server instances.
- Switch the Data Bridge connectivity from CIDR block whitelisting to site-to-site VPN connection.
- View and manage VPN configurations and logs from the Admin Center.
To learn more about VPN for Data Bridge and how it can help your organization, contact Moody's Sales.
Create Encryption Keys
The Create Encryption Keys operation (POST /platform/tenantdata/v1/encryption-keys) creates an encryption key.
"An encryption key is a string of specifically organized bits designed to unscramble and decipher encrypted data. Each key is specific to a specific encryption code, therefore making each key unique and difficult to replicable." Data Bridge encryption keys are defined by a status (available, active).
Requests to this operation are idempotent.
To perform this operation, the client must belong to a group that has been assigned the Create Encryption Keys action.
The request supports the creation of encryption keys for the following encryption types: vpn, tde, or byok.
If successful, returns 201 Created HTTP status code and adds the encryption key to the AWS key store. Newly created encryption keys have a status of Available. For detailed information about the AWS key store, see Key Stores.
NOTE: When the VPN is deployed along with Data Bridge provisioning, the default settings remain the same, i.e. the user can continue to whitelist the CIDR block and run SQL traffic over the public internet. The user can then configure the VPN settings and turn the feature on.
VPN encryption keys
A VPN encryption key is a pre-shared key that is used to establish and secure the site-to-site VPN connection between the tenant's network and Data Bridge.
"VPN encryption is a secret language or code only you and your intended receiver can decipher or understand. A VPN encrypts your data, making it unusable for anyone trying to snoop into your online activities. This protection shields your personal and confidential data and keeps it safe from your ISPs, hackers, and cybercriminals."
Requests to create vpn encryption keys must specify the encryptionKeyType, encryptionKeySubType, encryptionKeyName, and encryptionKeyValue parameters.
{
"encryptionKeyType": "vpn",
"encryptionKeySubType": "pre-shared-key",
"encryptionKeyName": "202507VpnKey",
"encryptionKeyValue": "myPreSharedKey_EcnryptionKeyValue"
}
The request accepts the following parameters:
| Parameter | Type | Description |
|---|---|---|
encryptionKeyType | String | Type of encryption key, i.e. vpn. |
encryptionKeySubType | String | Subtype of vpn encryption key, i.e. pre-shared-key. |
encryptionKeyName | String | User-defined name of encryption key. |
encryptionKeyValue | String | User-defined string (between 8-32 characters in length) that defines the encryption key. |
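A complete request might look like the following sketch (the host follows other examples in this document and authentication headers are omitted):
curl --request POST \
--url https://api-euw1.rms.com/platform/tenantdata/v1/encryption-keys \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '
{
"encryptionKeyType": "vpn",
"encryptionKeySubType": "pre-shared-key",
"encryptionKeyName": "202507VpnKey",
"encryptionKeyValue": "myPreSharedKey_EncryptionKeyValue"
}
'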
TDE encryption keys
A TDE encryption key is a master key password that supports encryption of data at rest using Transparent Data Encryption (TDE).
"Transparent Data Encryption (TDE) is an essential security feature for databases, designed to encrypt data at rest—meaning the actual database files on disk. It is widely implemented in database management systems by major vendors such as Microsoft, IBM, and Oracle. TDE works by encrypting the storage of the entire database or specific critical files without changing how applications access the data. This encryption process is seamless to end-users and applications, providing an effective layer of security against unauthorized access to the physical files."
Requests to create tde encryption keys must specify the encryptionKeyType, encryptionKeySubType, encryptionKeyName, serverName, and encryptionKeyValue parameters.
{
"encryptionKeyType": "tde",
"encryptionKeySubType": "master-key-password",
"encryptionKeyName": "202507MasterKey",
"serverName": "sql-instance-1",
"encryptionKeyValue": "myMasterKeyPassword_EcnryptionKeyValue"
}
The request accepts the following parameters:
| Parameter | Type | Description |
|---|---|---|
encryptionKeyType | String | Type of encryption key, i.e. tde. |
encryptionKeySubType | String | Subtype of tde encryption key, i.e. master-key-password. |
encryptionKeyName | String | User-defined name of encryption key. |
serverName | String | Name of database server instance. |
encryptionKeyValue | String | User-defined string (between 8-32 characters in length) that defines the encryption key. |
BYOK encryption keys
A BYOK encryption key is a customer-uploaded key that supports encryption of data using Bring Your Own Key (BYOK) encryption.
"Bring Your Own Key (BYOK) is an encryption key management system that allows enterprises to encrypt their data and retain control and management of their encryption keys."
Requests to create byok encryption keys must specify the encryptionKeyType, encryptionKeySubType, encryptionKeyName, and encryptionKeyValue parameters.
{
"encryptionKeyType": "byok",
"encryptionKeySubType": "customer-uploaded-key",
"encryptionKeyName": "202507ByokKey",
"encryptionKeyValue": "myBYOK_EcnryptionKeyValue"
}
The request accepts the following parameters:
| Parameter | Type | Description |
|---|---|---|
encryptionKeyType | String | Type of encryption key, i.e. byok. |
encryptionKeySubType | String | Subtype of byok encryption key, i.e. customer-uploaded-key. |
encryptionKeyName | String | User-defined name of encryption key. |
encryptionKeyValue | String | User-defined string (between 8-32 characters in length) that defines the encryption key. |
Search Encryption Keys
The Search Encryption Keys operation (GET /platform/tenantdata/v1/encryption-keys) returns a list of encryption keys.
Get Encryption Key
The Get Encryption Key operation (GET /platform/tenantdata/v1/encryption-keys/{id}) returns the specified encryption key.
Search Tenant Data Jobs
The Search Tenant Data Jobs operation (GET /platform/tenantdata/v1/jobs) returns a list of tenant data jobs.
Get Tenant Data Job
The Get Tenant Data Job operation (GET /platform/tenantdata/v1/jobs/{jobId}) returns the specified tenant data job.
Configure VPN Connections
The Configure VPN Connections operation (POST platform/tenantData/v1/vpnconnections) enables the tenant to configure VPN connection settings.
To perform this operation, the client must belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.
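The request schema is not shown in this preview. As a sketch only, a request body might include the connection properties returned by the Search VPN Connections operation below (all values are illustrative and the exact set of writable properties is an assumption):
{
"customerGatewayIps": ["21.32.01.01/32", "25.32.01.01/32"],
"comments": "Public IP using natting",
"dnsResolverIps": ["56.01.01.01", "58.01.01.01"],
"bgpRouting": false,
"customerSubnetIps": ["10.01.01.01/16", "11.01.01.02/24", "12.01.01.03/8"],
"encryptionType": "preSharedKey",
"encryptionKeyResourceId": 123,
"enabledApps": ["DATA_BRIDGE"]
}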
Search VPN Connections
The Search VPN Connections operation (GET platform/tenantData/v1/vpnconnections) returns a list of VPN configuration settings.
This operation supports filtering VPN configurations by customerGatewayIp or bgpRouting. Responses can be sorted by vpnConnectionId.
To perform this operation, the client must belong to a group that has been assigned the Consumer, Contributor, RI Admin, or Data Bridge Admin roles.
This operation returns a list of VPN configurations.
[
{
"vpnConnectionId": 1,
"customerGatewayIps":
[
"21.32.01.01/32",
"25.32.01.01/32"
],
"comments": "Public IP using natting",
"dnsResolverIps":
[
"56.01.01.01",
"58.01.01.01",
],
"bgpRouting": false,
"customerSubnetIps":
[
"10.01.01.01/16",
"11.01.01.02/24",
"12.01.01.03/8",
],
"tunnelSettings":
[
{
"id": 1,
"tunnelOutsideIp": "21.32.01.01/32",
},
{
"id": 2,
"tunnelOutsideIp": "31.32.01.01/32",
}
],
"irpSubnetIps":
[
"20.01.01.01/16",
"21.01.01.02/24",
"22.01.01.03/8",
],
"encryptionType": "preSharedKey",
"encryptionKeyResourceId": {encryptionkeyId},
"enabledApps": ["DATA_BRIDGE"],
"vpnStatus": "off"
}
]
| Property | Type | Description |
|---|---|---|
vpnConnectionId | Integer | ID of VPN tunnel. |
customerGatewayIps | Array | Required. List of IP gateways. |
comments | String | Maximum 50 characters. |
dnsResolverIps | Array | List of DNS resolver IP addresses, to be confirmed by the client on their DNS resolver. |
bgpRouting | Boolean | One of true or false. |
customerSubnetIps | Array | List of private IP addresses used for NATing. |
tunnelSettings | Array | List of tunnel setting objects. Each object includes id and tunnelOutsideIp. |
irpSubnetIps | Array | List of private IP addresses used for NATing. |
encryptionType | String | Encryption type, i.e. preSharedKey. |
encryptionKeyResourceId | Object | ID of encryption key. |
enabledApps | Array | List of supported applications, i.e. DATA_BRIDGE. |
vpnStatus | String | Status of VPN. One of on or off. By default, off until the cloud services request is complete. |
View VPN Connection
The View VPN Connection operation (GET platform/tenantData/v1/vpnconnections/{vpnconnectionId}) returns detailed information about the specified VPN connection.
To perform this operation, the client must belong to a group that has been assigned the Consumer, Contributor, RI Admin, or Data Bridge Admin roles.
This operation returns detailed information about a single VPN connection.
{
"vpnConnectionId": 1,
"customerGatewayIps":
[
"21.32.01.01/32",
"25.32.01.01/32"
],
"comments": "Public IP using natting",
"dnsResolverIps":
[
"56.01.01.01",
"58.01.01.01",
],
"bgpRouting": false,
"customerSubnetIps":
[
"10.01.01.01/16",
"11.01.01.02/24",
"12.01.01.03/8",
],
"tunnelSettings":
[
{
"id": 1,
"tunnelOutsideIp": "21.32.01.01/32",
},
{
"id": 2,
"tunnelOutsideIp": "31.32.01.01/32",
}
],
"irpSubnetIps":
[
"20.01.01.01/16",
"21.01.01.02/24",
"22.01.01.03/8",
],
"encryptionType": "preSharedKey",
"encryptionKeyResourceId": {encryptionkeyId},
"enabledApps": ["DATA_BRIDGE"],
"vpnStatus": "off"
}
| Property | Type | Description |
|---|---|---|
vpnConnectionId | Integer | ID of VPN tunnel. |
customerGatewayIps | Array | Required. List of IP gateways. |
comments | String | Maximum 50 characters. |
dnsResolverIps | Array | List of DNS resolver IP addresses, to be confirmed by the client on their DNS resolver. |
bgpRouting | Boolean | One of true or false. |
customerSubnetIps | Array | List of private IP addresses used for NATing. |
tunnelSettings | Array | List of tunnel setting objects. Each object includes id and tunnelOutsideIp. |
irpSubnetIps | Array | List of private IP addresses used for NATing. |
encryptionType | String | Encryption type, i.e. preSharedKey. |
encryptionKeyResourceId | Object | ID of encryption key. |
enabledApps | Array | List of supported applications, i.e. DATA_BRIDGE. |
vpnStatus | String | Status of VPN. One of on or off. By default, off until the cloud services request is complete. |
Update VPN Connection
The Update VPN Connection operation (PATCH platform/tenantData/v1/vpnconnections/{vpnconnectionId}) updates the specified VPN connection.
This operation allows the user to change the values of the VPN configuration for the tenant.
To perform this operation, the client must belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.
Delete VPN Connection
The Delete VPN Connection operation (DELETE platform/tenantData/v1/vpnconnections/{vpnconnectionId}) deletes the specified VPN connection.
To perform this operation, the client must belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.