March 2025


Highlights

The March 2025 release introduces updates to the Risk Data API, Exposure API, Grouping API, and Import APIs.

  • The Create Marginal Impact Report operation supports the creation of marginal impact reports that measure the effect of adding accounts to an existing portfolio as differential losses.
  • The Bulk Replace Condition Criteria operation supports updating complex policy condition criteria in bulk.
  • The Delete Report View operation supports the deletion of specific report views.
  • The Create Import Folders and Create Import Job operations now support workflows that import exposure data from JSON files uploaded to AWS.

Learn More

Grouping API

The Create Grouping Job operation (POST /platform/grouping/v1/jobs) enables the client to create a job that creates an analysis group from DLM analysis results, HD analysis results, and other analysis groups.

This operation now limits the size of description parameter values to 80 characters.

{
    "resourceType": "analyses",
    "resourceUris": [
        "/platform/riskdata/v1/analyses/1776591",
        "/platform/riskdata/v1/analyses/1776517",
        "/platform/riskdata/v1/analyses/1776609"
    ],
    "settings": {
        "analysisName": "group_florida_grp_test_acc3",
        "description": "api_grouping NAEQ NAWS NAEQ NAWS NAEQ NAWS NAEQ NAWS NAEQ NAWS NAEQ NAWS NAEQ NAWS ",
        "currency": {
            "code": "USD",
            "scheme": "RMS",
            "asOfDate": "2020-03-01",
            "vintage": "RL18"
        },
        ...
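The new length limit can be checked client-side before submitting a grouping job. A minimal sketch in Python; the helper name and error handling are illustrative, not part of the API:

```python
MAX_DESCRIPTION_LENGTH = 80  # limit now enforced by Create Grouping Job

def check_description(settings: dict) -> dict:
    """Reject grouping-job settings whose description exceeds the 80-character limit."""
    description = settings.get("description", "")
    if len(description) > MAX_DESCRIPTION_LENGTH:
        raise ValueError(
            f"description is {len(description)} characters; "
            f"the limit is {MAX_DESCRIPTION_LENGTH}"
        )
    return settings
```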

The analysis group combines analysis results generated using the analytical DLM framework with those generated using the simulation-based HD framework to provide a unified loss result across multiple perils and regions.

Import API

Import Exposures as JSON Files

The Create Import Folders operation (POST /platform/import/v1/folders) and Create Import Job operation (POST /platform/import/v1/jobs) now enable workflows that import exposure data into EDMs from JSON files uploaded to a storage bucket on AWS.

The Create Import Folders operation supports the creation of EXPOSURE_BULK_EDIT import folders. These folders enable client applications to upload exposureBulkEditFile files to AWS.

{
  "folderType": "EXPOSURE_BULK_EDIT",
  "properties": {
    "fileExtension": "JSON/GZ",
    "fileTypes": ["exposureBulkEditFile"]
  }
}

Using AWS, the client can then upload a file of exposure data to the EXPOSURE_BULK_EDIT folder.

{
  "uploadDetails": {
    "exposureBulkEditFile": {
      "fileUri": "platform/import/v1/folders/{{folderId}}/files/{{fileId}}",
      "presignParams": {
        "accessKeyId": "xxxxxxxxxxxxxxxxxxxxxxxxxxx=",
        "secretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==",
        "sessionToken": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "path": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "region": "xxxxxxxxxxxx"
      },
      "uploadUrl": "https://xxx-xxxxxxx-xxx-xx-xxxx-x.s3.amazonaws.com/xxxxxxx/import/platform/exposure_bulk_edit/xxxxxx/xxxxxx-exposurebulkeditfile.json"
    }
  },
  "folderType": "EXPOSURE_BULK_EDIT",
  "folderId": "107275"
}
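The presigned upload URL in the response above can be consumed as follows. A sketch assuming the caller supplies its own HTTP PUT function (for example, one wrapping the requests library); the helper names are illustrative, not part of the API:

```python
def extract_upload_target(folder_response: dict, file_type: str = "exposureBulkEditFile") -> str:
    """Pull the presigned S3 upload URL for a file type from a Create Import Folders response."""
    details = folder_response["uploadDetails"][file_type]
    return details["uploadUrl"]

def upload_exposure_file(folder_response: dict, payload: bytes, http_put) -> int:
    """Upload the JSON exposure payload via the caller-supplied http_put(url, data) function."""
    url = extract_upload_target(folder_response)
    return http_put(url, payload)
```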

The Create Import Job operation now supports the creation of EXPOSURE_BULK_EDIT jobs.

The request package specifies an exposure resource by resourceUri and the folderId of an EXPOSURE_BULK_EDIT folder. The job ingests data from the folder and loads that data into the specified exposure resource.

{
  "importType": "EXPOSURE_BULK_EDIT",
  "resourceUri": "/platform/riskdata/v1/exposures/8613739",
  "settings": {
    "folderId": 103773
  }
}

If successful, the response returns a 201 Created HTTP response. The client can use the Get Import Job operation to poll the status of the job.
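A polling loop over Get Import Job can be kept generic. In this sketch the status-fetching function is injected, and the terminal status names are assumptions for illustration, not values confirmed by this release note:

```python
import time

TERMINAL_STATUSES = {"FINISHED", "FAILED"}  # assumed terminal job states

def poll_import_job(get_status, interval: float = 5.0, max_attempts: int = 60) -> str:
    """Poll get_status() until the job reaches a terminal state or attempts run out."""
    for attempt in range(max_attempts):
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        if attempt + 1 < max_attempts:
            time.sleep(interval)
    raise TimeoutError(f"job did not finish after {max_attempts} polls")
```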

OED Import Field Mapping

The Create Import Folders operation (POST /platform/import/v1/folders) now enables custom field mapping between columns in uploaded CSV data and columns in the OED schema during OED imports.

This release enables clients to specify mappingFile as a fileType in a request. This mapping file specifies mappings between the columns in an uploaded CSV file (accountFile, locationFile, reinsuranceFile, or reinsuranceScopeFile) and the columns in the OED schema.
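A folder request for an OED import with a mapping file might be assembled as follows. The fileTypes values come from this release note; the surrounding request shape and the helper name are illustrative assumptions:

```python
def build_oed_folder_request(file_types: list, include_mapping: bool = False) -> dict:
    """Assemble the fileTypes portion of an OED import folder request (shape assumed)."""
    allowed = {"accountFile", "locationFile", "reinsuranceFile", "reinsuranceScopeFile"}
    unknown = set(file_types) - allowed
    if unknown:
        raise ValueError(f"unsupported OED file types: {sorted(unknown)}")
    types = list(file_types)
    if include_mapping:
        types.append("mappingFile")  # new in this release: custom CSV-to-OED column mapping
    return {"properties": {"fileTypes": types}}
```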

Risk Data API

Report Views

Metadata in Report View

The Get Analyses by Report View operation (POST /platform/riskdata/v1/reportviews/{id}/analyses) now returns more information about the analyses in the report.

For each analysis in the report view, the response now returns:

  • analysisCurrency
  • analysisDescription
  • analysisFramework
  • analysisType
  • peril
  • referenceAnalysisDescription
  • region

For example,

[
  {
    "analysisId": 59836,
    "appAnalysisId": 2971,
    "analysisName": "BatchAPI_Lana_EDM_wRefData_PSC: ACCOUNT: Global Acct 100K v2",
    "analysisDescription": "RM2.0_API_JPEQ_V24",
    "analysisCurrency": {
      "name": "US Dollar",
      "code": "USD",
      "scheme": "RMS",
      "asOfDate": "2024-04-03T00:00:00Z",
      "vintage": "RL24"
    },
    "analysisType": "DLM",
    "analysisFramework": "ELT",
    "peril": "EQ",
    "region": "JP",
    "metricTypes": ["POLICY_EP", "EP", "STATS", "POLICY_STATS", "ELT"]
  },
  {
    "analysisId": 64940,
    "appAnalysisId": 3022,
    "analysisName": "MI_Marginal_profile_RM2_0_API_CHEQ_V24_policy",
    "analysisDescription": "Grouped Portfolio Impact",
    "analysisCurrency": {
      "name": "US Dollar",
      "code": "USD",
      "scheme": "RMS",
      "asOfDate": "2024-04-03T00:00:00Z",
      "vintage": "RL24"
    },
    "analysisType": "MARGINAL_IMPACT",
    "analysisFramework": "ELT",
    "peril": "ZZ",
    "region": "ZZ",
    "referenceAnalysisId": 41212,
    "referenceAppAnalysisId": 2214,
    "referenceAnalysisName": "Global Reference Analyses Group",
    "referenceAnalysisDescription": "",
    "metricTypes": ["MARGINAL_STATS", "MARGINAL_EP"]
  }
]

To view this information, the client must have the RI-UNDERWRITEIQ entitlement and pass an x-rms-resource-group-id header that identifies the UnderwriteIQ resource group.
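The new metadata makes it easy to organize a report view's analyses client-side, for example grouping analysis names by analysisType (a client-side sketch; the function name is illustrative):

```python
from collections import defaultdict

def group_by_analysis_type(analyses: list) -> dict:
    """Map each analysisType in a Get Analyses by Report View response to its analysis names."""
    groups = defaultdict(list)
    for analysis in analyses:
        groups[analysis.get("analysisType", "unrecognized")].append(analysis["analysisName"])
    return dict(groups)
```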

Delete Report View

The Delete Report View operation (DELETE /platform/riskdata/v1/reportviews/{id}) enables the client to delete the specified report view. The id path parameter identifies the report view to delete and the required x-rms-resource-group-id header identifies the UnderwriteIQ resource group.

A report view is a collection of reports that return exposure-specific metrics and statistics. The report view is generated automatically whenever an UnderwriteIQ client creates exposures in batch using the Exposure Bulk Edit operation.

This operation deletes the specified report view, but does not delete the underlying analysis results.

If successful, the response returns a 204 No Content HTTP response code.

Analyses

Analysis Results

The Search Analysis Results operation (GET /platform/riskdata/v1/analyses) and Get Analysis Result operation (GET /platform/riskdata/v1/analyses/{analysisId}) now return the analysisType in the response.

The analysisType property identifies the type of analysis. For DLM, ALM, and HD models, Platform APIs return the following probabilistic and deterministic analysis types.

Analysis types returned depend on the entitlements assigned to the client: EQ (RI-EXPOSUREIQ), RM (RI-RISKMODELER), or TQ (RI-TREATYIQ).

  • buildingLevel (RM): Returns accumulations by building, either for all buildings in the portfolio or for user-selected buildings. For this analysis type, buildings are locations present in the high-resolution ESDB HRB data with an ESDB Building ID. Building-level analyses consider only exposure data that has been successfully geocoded with a Building ID using ESDB HRB data, and are consequently applicable only in the U.S. Building-level matches always provide Building IDs. Coordinate-level matches return Building IDs if the latitude and longitude fall within a building outline. This is the only accumulation analysis type that requires building-level exposure data.
  • businessHierarchyRollup (TQ): Returns aggregated loss metrics for programs within a business hierarchy. Rollup analysis results include EP results, retrocession, scenarios, composition, and analysis details.
  • circleSpider (EQ, RM): Locates circular areas of a fixed diameter containing the highest level of exposure. If you license ExposureIQ, you can also apply band-specific damage factors to these circular areas in up to three damage bands (circles).
  • eventResponse (EQ, RM): Computes exposure concentrations defined by footprints representing real-world events. If you license ExposureIQ, you can also apply a set of damage factors to regions.
  • exceedanceprobability (RM, TQ): Runs a full probabilistic analysis on the exposure at risk, producing OEP and AEP curves, cumulative distributions that show the probability that losses will exceed a certain amount from either single or multiple occurrences.
  • footprintFile (RM): Runs recent events and analyzes losses based on the Moody's RMS best estimate of the scope and scale of a specific catastrophe event. Includes a temporal aspect, such as the highest flood depth, the strongest wind speed, or the strongest ground shaking.
  • geopoliticalSpider (EQ, RM): Locates geopolitical regions of a specified granularity that contain the highest exposure concentration within a broader region's boundaries.
  • geopolitical (EQ, RM): Computes exposure concentrations specified at some level of geographic granularity. If you license ExposureIQ, you can also apply a set of damage factors to regions.
  • hazard (EQ, RM): Computes exposure concentrations defined by hazard layers. If you license ExposureIQ, you can also apply a set of damage factors to regions.
  • historical (RM): Calculates loss based on parameters defined for one or more actual historical events from the event catalog.
  • maximumCredible (RM): Identifies the event among all possible events that would cause the worst damage to exposed locations for the selected financial perspective.
  • maximumHistorical (RM): Identifies the event among all historical events that would cause the worst damage to exposed locations for the selected financial perspective.
  • probablisticloss (RM)
  • programRollup (TQ): Calculates losses, pricing, and marginal metrics for a program and attached inward program treaties.
  • scenario (RM, TQ): Calculates loss based on parameters defined for one or more individual events from the stochastic event catalog.
  • specificArea (EQ, RM): Computes exposure concentrations by applying damage factors to locations around user-specified targets defined in custom map layers. If you license ExposureIQ, you can also apply damage factors to locations in different radii (i.e., damage bands).
  • spider (RM): Identifies the top areas that would generate the most loss as the result of an attack, enabling clients to evaluate whether to diversify their risks. The analysis finds the largest scenario losses for a portfolio for a given financial perspective based on a method of attack. Results are ranked by loss for the selected financial perspective.
  • terrorismSimple (RM): Accumulation analysis where you select a method of attack, similar to a Terrorism-Simple Footprint analysis. This analysis finds the largest scenario losses for a portfolio based on the selected method of attack. The damage calculation within a simple footprint depends on a location's position within the footprint, since the hazard tends to drop off quickly away from the center.
  • terrorismVrg (RM): Like the Terrorism-Simple Footprint analysis, except that the footprints are not simple circles but areas defined by the selected target, at a higher resolution (VRG) than simple footprints. A VRG (Variable Resolution Grid) is a grid superimposed over the footprint area. Unlike simple footprint analyses, where ground-up loss percentage and casualty distribution are calculated according to the location's placement within a damage ring, VRG footprint calculations are made according to the location's placement within a specific cell of the grid. The grid's cells provide finer resolution than the concentric damage rings.
  • unrecognized: Unknown analysis type.

The analysisType property can be used to filter responses returned by the Search Analysis Results operation.
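Where server-side filtering is not needed, the same effect can be had client-side. A sketch that filters a returned analyses array by analysisType (the function name is illustrative):

```python
def filter_by_analysis_type(analyses: list, analysis_type: str) -> list:
    """Keep only analyses whose analysisType matches the requested type."""
    return [a for a in analyses if a.get("analysisType") == analysis_type]
```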

Create Marginal Impact Report

The Calculate Marginal Impact operation (POST /platform/riskdata/v1/analyses/{analysis-id}/marginal-impact) generates a marginal impact report that measures the effect of adding additional accounts to an existing portfolio as differential losses.

Marginal impact analysis compares a portfolio-level analysis result with a new analysis that incorporates one or more account-level analysis results. The Intelligent Risk Platform calculates the difference between the reference analysis "group" and the new marginal impact "group".

The id of a portfolio-level analysis is specified in the required path parameter. This analysis serves as the reference analysis for the report.

The request body specifies an array of marginalImpactAnalysisIds, a currency object, outputLevel, and jobName:

{
  "marginalImpactAnalysisIds": [316345],
  "currency": {
    "currencyCode": "USD",
    "currencyScheme": "RMS",
    "currencyAsOfDate": "2023-12-31",
    "currencyVintage": "RL18.1"
  },
  "outputLevel": "ACCOUNT", //POLICY
  "jobName": "TEST JOB",
  "eventRateSchemeIds": [],
  "tagIds": []
}

The marginalImpactAnalysisIds, currency, and jobName body parameters are required.

  • marginalImpactAnalysisIds (Array): List of account-level analysis results.
  • currency (Object): The currencyCode, currencyScheme, currencyAsOfDate, and currencyVintage properties are all required.
  • outputLevel (String): Granularity of the report. If ACCOUNT, the report returns projected losses for all policies and locations grouped by peril. If POLICY, the report returns projected losses for all analyses grouped by policy.
  • jobName (String): Name of the GROUPING job.
  • eventRateSchemeIds (Array): List of event rate scheme IDs to be used in metric calculations. If specified, overrides the event rate schemes used in the original analysis results.
  • tagIds (Array): List of tag IDs assigned to this resource to facilitate searches.
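The required-parameter rules above can be checked before submitting the request. A client-side sketch; the helper name and message texts are illustrative:

```python
REQUIRED_CURRENCY_FIELDS = {"currencyCode", "currencyScheme", "currencyAsOfDate", "currencyVintage"}

def validate_marginal_impact_request(body: dict) -> list:
    """Return a list of problems found in a Calculate Marginal Impact request body."""
    problems = []
    if not body.get("marginalImpactAnalysisIds"):
        problems.append("marginalImpactAnalysisIds is required and must be non-empty")
    if not body.get("jobName"):
        problems.append("jobName is required")
    missing = REQUIRED_CURRENCY_FIELDS - set(body.get("currency") or {})
    if missing:
        problems.append(f"currency is missing: {sorted(missing)}")
    if body.get("outputLevel") not in (None, "ACCOUNT", "POLICY"):
        problems.append("outputLevel must be ACCOUNT or POLICY")
    return problems
```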

Policies

The Bulk Replace Condition Criteria operation (POST /platform/riskdata/v1/exposures/{exposureId}/policies/{policyId}/policyconditions/{conditionId}/criteria/bulk-replace) enables the client to replace existing policy condition criteria with newly defined complex policy condition criteria.

Policy condition criteria define a query that selects locations. Locations that match the specified criteria are assigned the specified policy condition. Alternatively, locations may be assigned policy conditions using location conditions.

The request defines an array of condition objects.

[
  {
    "closePar": "",
    "field": "COUNTRY",
    "logic": "",
    "openPar": "",
    "operator": "=",
    "value": "US"
  },
  {
    "closePar": "",
    "field": "COUNTRY",
    "logic": "AND",
    "openPar": "(",
    "operator": "=",
    "value": "US"
  },
  {
    "closePar": ")",
    "field": "COUNTRY",
    "logic": "OR",
    "openPar": "",
    "operator": "=",
    "value": "US"
  }
]

Within each object, the field, operator, and value parameters are required.

  • closePar (String): Closing parenthesis in a compound query. One of ), )), or ))).
  • openPar (String): Opening parenthesis in a compound query. One of (, ((, or (((.
  • field (String): Property to be searched in the query.
  • value (String): Value of the field property.
  • operator (String): Comparison operator in the query. One of =, >, <, or >=.
  • logic (String): Logical operator in a compound query. One of AND or OR.

If successful, the response returns a 201 Created HTTP response code.
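To sanity-check a criteria array before sending it, the client can render it to a readable expression and verify that the parentheses balance. A sketch; the rendering format is illustrative, not an API contract:

```python
def render_criteria(criteria: list) -> str:
    """Render a policy-condition criteria array as a single query expression."""
    parts = []
    for criterion in criteria:
        clause = (f"{criterion.get('openPar', '')}{criterion['field']} "
                  f"{criterion['operator']} '{criterion['value']}'"
                  f"{criterion.get('closePar', '')}")
        logic = criterion.get("logic", "")
        parts.append(f"{logic} {clause}" if logic else clause)
    expression = " ".join(parts)
    if expression.count("(") != expression.count(")"):
        raise ValueError("unbalanced parentheses in condition criteria")
    return expression
```

Applied to the three-object example above, this renders a compound query with one OR clause nested in parentheses.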