Pre-Release 2025.12.b


Highlights

The 2025.12.b release introduces the Autoselect API, updates to Admin Data API and Risk Data API, and the North America Severe Convective Storm HD models.

Moody's publishes preliminary information to inform stakeholders ahead of the targeted Intelligent Risk Platform update. The features described here are not guaranteed for the next update or any subsequent update and may be changed without notice. The definitive list of features will be provided in the Changelogs at the time of the official release.

Autoselect Task API

The Autoselect API enables UnderwriteIQ tenants to streamline their underwriting processes by means of smart selection, a process that automatically identifies the model profiles that can be used to model specific exposures.

Model profiles are the foundation of all catastrophe modeling workflows. Each model profile is defined at the region and peril level. Presently, no validation is performed before a modeling job is submitted to ensure that an account contains exposures that match the model profile's region and peril; mismatched jobs are accepted and then fail. This wastes compute, creates inefficiency for both clients and Moody's, and increases the mismatch between submitted and modeled locations.

In the initial release, the Autoselect API can only be used to identify appropriate model profiles for modeling account exposures. Future releases will support additional services, such as the identification of other appropriate settings (e.g. output profiles, hazard lookups) for other exposure types (account variations, aggregate portfolios, aggregate portfolio variations, portfolios, and portfolio variations).

Create Autoselect Task

The Create Autoselect Task operation (POST /platform/autoselect/v1/tasks) creates a smart selection task that automatically identifies and applies catastrophe modeling configurations (e.g. model profiles) applicable to a specified exposure.

In this initial release, this operation can be used to automatically apply model profiles to accounts prior to catastrophe modeling.

A model profile is a set of preconfigured settings that enable catastrophe modeling based on a single peril- and region-specific model. Every Create Model Job request must specify a model profile and one or more exposures (aggregate portfolio, portfolio, or account resources). To model an exposure, the region and peril of the covered locations must match the region and peril of the model profile.

This operation selects model profiles that can be used to model the specified account based on the region and peril coverage of the account's locations. It compares each model profile's region and peril with those of the specified account's location exposures. The response rejects model profiles that are not applicable to the account.

The request body accepts the resource URI of an account exposure and autoselect task settings in the optional settings object, which can be used to specify a list of model profiles for possible selection:

{
  "resourceType": "ACCOUNT",
  "resourceUri": "/platform/riskdata/v1/exposures/123/accounts/12",
  "settings": {
    "taskName": "ACCOUNT_123",
    "modelProfileIds": []
  }
}

The request accepts the following parameters:

Parameter | Type | Description
resourceType | String | Type of resource. The current implementation supports ACCOUNT only.
resourceUri | String | Resource URI of the account resource.
settings | Object | Configurations for the autoselect task. The taskName parameter specifies the name of the task. The modelProfileIds array accepts the IDs of up to 100 model profiles. If unspecified, the task checks all model profiles.

Invalid modelProfileIds are ignored.

If successful, this operation returns a 201 Created HTTP status code and the URI of the task in the Location header: {{host}}/platform/autoselect/v1/tasks/{uuid}. The client can use this URI to poll the status of the task.

This operation is supported for tenants with the RI-UNDERWRITEIQ entitlement. The client must pass a valid resource group ID in the x-rms-resource_group_id header parameter.
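As a minimal sketch of submitting a Create Autoselect Task request, assuming Python with the requests library, a placeholder host, and an API key passed in the Authorization header (the release notes do not prescribe a client, host, or authentication scheme):

import requests

BASE_URL = "https://api.example.com"  # placeholder host; use your tenant's API host
HEADERS = {
    "Authorization": "<api-key>",                      # authentication scheme is an assumption
    "x-rms-resource_group_id": "<resource-group-id>",  # required resource group ID
}

payload = {
    "resourceType": "ACCOUNT",
    "resourceUri": "/platform/riskdata/v1/exposures/123/accounts/12",
    "settings": {"taskName": "ACCOUNT_123", "modelProfileIds": []},  # empty list: check all profiles
}

response = requests.post(f"{BASE_URL}/platform/autoselect/v1/tasks",
                         json=payload, headers=HEADERS, timeout=30)
response.raise_for_status()                 # expect 201 Created
task_uri = response.headers["Location"]     # URI of the task, used for polling
print("Poll task at:", task_uri)

The returned Location header provides the task URI that is polled with the Get Autoselect Task operation described next.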

Get Autoselect Task

The Get Autoselect Task operation (GET /platform/autoselect/v1/tasks/{taskId}) returns the results of a specific autoselect task.

This operation can be used to poll the status of an autoselection task.

In the initial release, the response returns a list of the model profiles that are applicable to the exposure specified in the original job.

{
  "taskUuid": "488ea126-xxxx-47bd-85ea-e0fcebe1931f",
  "taskType": "AUTOSELECT",
  "taskName": "ACCOUNT_123",
  "status": "COMPLETED",
  "createdAt": "2025-09-24T22:34:20.018Z",
  "updatedAt": "2025-09-24T22:35:20.018Z",
  "expiresAt": "2025-09-25T22:34:20.018Z",
  "createdBy": "[email protected]",
  "output": {
    "errors": [],
    "log": {
      "modelProfileIds": [123, 456]
    }
  }
}

The response returns the following properties:

Property | Type | Description
taskUuid | String | UUID of the task.
taskType | String | Type of task, e.g. AUTOSELECT.
taskName | String | User-defined name of the task.
status | String | Status of the task. One of COMPLETED, FAILED, IN PROGRESS, PENDING.
createdAt | String | Time the task was created, in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z.
updatedAt | String | Time the task was last updated, in ISO 8601 format.
expiresAt | String | Time the task result expires, in ISO 8601 format.
createdBy | String | Login of the principal that created the task.
output | Object | Object that returns information about the task, including an errors array and a log object that returns the list of applicable model profiles in the modelProfileIds array.
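A simple polling loop against the Get Autoselect Task operation might look like the following sketch, again assuming Python with requests and reusing the task URI and headers from the previous example:

import time
import requests

def wait_for_autoselect_task(task_uri: str, headers: dict, interval: float = 5.0) -> dict:
    """Poll the Get Autoselect Task operation until the task completes or fails."""
    while True:
        response = requests.get(task_uri, headers=headers, timeout=30)
        response.raise_for_status()
        task = response.json()
        if task["status"] in ("COMPLETED", "FAILED"):
            return task
        time.sleep(interval)

# task = wait_for_autoselect_task(task_uri, HEADERS)
# applicable_profiles = task["output"]["log"]["modelProfileIds"]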

Export API

The Intelligent Risk Platform no longer imposes a limit of 100 analysis results for RDM and RDM_DATABRIDGE export jobs. This release introduces the Estimate Export Job Size operation, which calculates and returns the estimated size of RDMs prior to export. Tenants can use this information to determine whether an export is too large or whether the export job will take too long to process.

Exports of analysis data are no longer limited to 100 analyses. Instead, the platform estimates the size of the export based on the data stored in the application. As you select analyses for export, the estimated RDM size is calculated and displayed, so you know up front whether your export might be too large or take a long time.

If your estimated export size exceeds a certain threshold, we prevent the job from running to avoid failures and wasted time. For exports to the Platform, we check the SQL Server disk size and warn you if your export is large, or stop the export if it’s too big. For Data Bridge exports, we check available space on your server and warn you or block the export if there isn’t enough room.

These changes help you manage large exports efficiently and avoid unexpected failures.

Create Export Job

The Create Export Job operation creates an export job.

If the export type is RDM, this operation now accepts a lossDetails array in the settings object that specifies the loss detail data to include in the export.

{
  "exportType": "RDM",
  "resourceType": "analyses",
  "resourceUris": ["/platform/riskdata/v1/analyses/15165331"],
  "settings": {
    "fileExtension": "BAK",
    "sqlVersion": 2019,
    "rdmName": "rdm_327993",
    "lossDetails": [
      {
        "metricType": "STATS",
        "outputLevels": ["Policy", "Location"]
      },
      {
        "metricType": "EP",
        "outputLevels": ["Portfolio", "Account"]
      },
      {
        "metricType": "LOSS_TABLES",
        "outputLevels": ["Geographic", "Facultative"]
      }
    ]
  }
}

Each loss details object specifies the metricType and an array of outputLevels for that metric.

Property | Type | Description
metricType | String | The metric or statistic returned, e.g. STATS, EP, LOSS_TABLES.
outputLevels | Array | A list of output levels, e.g. Policy, Portfolio, Account.
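As an illustration, a Create Export Job request with lossDetails might be submitted as follows. The Python client, host, authentication header, and the job submission path are assumptions (the release notes do not state the Create Export Job endpoint path), so confirm them against the API reference:

import requests

BASE_URL = "https://api.example.com"            # placeholder host
HEADERS = {"Authorization": "<api-key>"}        # authentication scheme is an assumption
EXPORT_JOBS_URL = f"{BASE_URL}/platform/export/v1/jobs"  # hypothetical path; confirm in the API reference

export_request = {
    "exportType": "RDM",
    "resourceType": "analyses",
    "resourceUris": ["/platform/riskdata/v1/analyses/15165331"],
    "settings": {
        "fileExtension": "BAK",
        "sqlVersion": 2019,
        "rdmName": "rdm_327993",
        "lossDetails": [
            {"metricType": "STATS", "outputLevels": ["Policy", "Location"]},
            {"metricType": "EP", "outputLevels": ["Portfolio", "Account"]},
        ],
    },
}

response = requests.post(EXPORT_JOBS_URL, json=export_request, headers=HEADERS, timeout=30)
response.raise_for_status()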

Estimate Export Job Size

The Estimate Export Job Size operation (POST /platform/export/v1/rdm-estimates) returns an estimate of the size of exported RDM data.

This operation calculates the size of the data to be exported and can be used with the RDM and RDM_DATABRIDGE export types. Because exports of analysis data are no longer limited to 100 analyses, tenants can use the returned estimate to determine, before creating an export job, whether the export is too large or will take too long to process.

If the estimated export size exceeds a certain threshold, the job is prevented from running to avoid failures and wasted time. For exports to the Platform, the SQL Server disk size is checked; large exports trigger a warning and oversized exports are stopped. For Data Bridge exports, the available space on the tenant's server is checked and the export is blocked if there is not enough room.

The response returns estimates of the size of the exported results data. The data returned depends on the export type (RDM or RDM_DATABRIDGE) and the fileExtension of the exported data. RDM exports support exporting to BAK and MDF files. RDM_DATABRIDGE exports support exporting to PARQUET files.

{
  "mdfEstimateSize": "72 MB",
  "estimatedSizeBytes": 76408148,
  "numberOfAnalyses": 1,
  "totalTables": 9,
  "totalFiles": 15,
  "topTablesBreakdown": "rdm_policy:2 MB, rdm_metadata:1 MB, rdm_locstats:1 MB, rdm_anlsevent:137 KB, rdm_ratescheme:68 KB, rdm_anlspersp:61 KB, rdm_anlsregions:60 KB, rdm_analysis:50 KB, rdm_policystd:0 bytes",
  "totalProcessingTimeSeconds": 11,
  "validationStatus": "SUCCESS",
  "validationWarnings": [],
  "diskUsagePercentage": 0.0,
  "currentAvailableSpacePercentage": 41.95,
  "availableSpaceAfterExportPercentage": 41.95
}

The response object may include the following properties:

Property | Type | Description
mdfEstimateSize | String | Estimated size of the MDF file.
estimatedSizeBytes | Number | Estimated size in bytes.
numberOfAnalyses | Number | Number of analysis results exported.
totalTables | Number | Number of database tables exported.
totalFiles | Number | Number of files exported.
topTablesBreakdown | String | Comma-separated list of table names and sizes, e.g. rdm_policy: 2 MB.
totalProcessingTimeSeconds | Number | Time, in seconds, required to estimate the size of the exported data.
validationStatus | String | Status of the estimation job, e.g. SUCCESS.
validationWarnings | Array | Array of warnings.
diskUsagePercentage | Number | Percentage of Data Bridge disk space in use.
currentAvailableSpacePercentage | Number | Percentage of Data Bridge disk space still available.
availableSpaceAfterExportPercentage | Number | Percentage of Data Bridge disk space still available after the data export.
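A hedged sketch of calling the estimate operation before creating the export job, assuming a Python requests client and that the request body mirrors the Create Export Job request (the exact request schema is not shown above):

import requests

BASE_URL = "https://api.example.com"      # placeholder host
HEADERS = {"Authorization": "<api-key>"}  # authentication scheme is an assumption

def estimate_rdm_export(resource_uris, headers):
    """Estimate the size of an RDM export before creating the export job.

    The request body shape mirrors the Create Export Job request and is an
    assumption; confirm the exact schema in the API reference.
    """
    body = {"exportType": "RDM", "resourceType": "analyses", "resourceUris": resource_uris}
    response = requests.post(f"{BASE_URL}/platform/export/v1/rdm-estimates",
                             json=body, headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()

estimate = estimate_rdm_export(["/platform/riskdata/v1/analyses/15165331"], HEADERS)
if estimate["validationStatus"] != "SUCCESS" or estimate["validationWarnings"]:
    print("Export may be too large or slow:", estimate["validationWarnings"])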

Get Export Job

The Get Export Job operation returns information about the specified export job.

An export job is a system-defined process that supports the exporting of Intelligent Risk Platform resources (e.g. exposures, analysis results, EDMs).

This operation now returns information about the estimated size of RDMs.

If the export type is RESULTS, this operation now returns information about the size of the analysis results exported to the RDM. When multiple analyses are selected for export, the total estimated size is calculated and reported when the export job is kicked off. If the estimated total size exceeds a certain threshold, the job is prevented from executing.

{
  "result": {
    "number of analyses": "4",
    "totalProcessingTime": "5",
    "parquet estimate size": "4 GB",
    "hybrid mdf estimate size": "27 GB",
    "parquet_size_per_analysis": [
      { "506821": "3 MB" },
      { "507075": "4 GB" },
      { "507076": "239 MB" },
      { "507074": "43 MB" }
    ]
  }
}

The result object returns estimates of the RDM that results from exporting one or more analyses:

Property | Type | Description
number of analyses | Number | Number of analysis results exported.
totalProcessingTime | Number | Time needed to process the export job.
parquet estimate size | String | Estimated size of the exported Parquet file.
hybrid mdf estimate size | String | Estimated size of the exported MDF file.
parquet_size_per_analysis | Array | List of objects that map an analysis ID to the estimated size of the exported file.

Search Export Jobs

The Search Export Job operation returns a list of export jobs.

This operation now returns an estimate of the size of the resulting RDM from an export operation on one or more analyses.

Grouping API

The Validate Grouping operation (POST /platform/grouping/v1/validate) validates the components of an analysis group.

This operation validates the specified analysis results, profiles, and simulation sets.

The request body accepts four parameters. If regionPerilSimulationSet is not specified, the operation uses the specified array of analysisIds and profileIds.

{
  "analysisIds": [1, 2, 3],
  "profileIds": [4, 5, 6],
  "regionPerilSimulationSet": [],
  "skipTreaties": true
}

The request accepts four parameters:

Parameter | Type | Description
analysisIds | Array | List of analysis IDs.
profileIds | Array | List of profile IDs.
regionPerilSimulationSet | Array | Optional. List of simulation set IDs.
skipTreaties | Boolean | If true, treaty validation is skipped.

The response returns validation, warning, or error messages.
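A minimal Validate Grouping request, assuming a Python requests client, a placeholder host, and an API-key Authorization header:

import requests

BASE_URL = "https://api.example.com"      # placeholder host
HEADERS = {"Authorization": "<api-key>"}  # authentication scheme is an assumption

grouping_request = {
    "analysisIds": [1, 2, 3],
    "profileIds": [4, 5, 6],
    # "regionPerilSimulationSet": [...],  # optional; omit to validate by analysisIds and profileIds
    "skipTreaties": True,
}

response = requests.post(f"{BASE_URL}/platform/grouping/v1/validate",
                         json=grouping_request, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.json())  # validation, warning, or error messages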

Risk Data API

Get Imported Period Loss Tables

The Get Imported Period Loss Tables operation returns a list of period loss tables (PLTs).

The period loss table is an output table that simulates event losses over the course of a time period, providing greater flexibility to evaluate loss metrics than the analytical calculations based on event loss tables (ELTs).

By simulating events through time, an HD model computes total loss as well as maximum event occurrence loss for each simulation period in the table, and generates loss statistics based on the distribution of losses across the large number of simulated periods. This methodology can calculate the impact of all contract terms, including terms with time-based features, such as contracts that are shorter (or longer) than a single year.

[
  {
    "periodId": 503,
    "weight": 0.00002,
    "eventId": 3508644,
    "eventDate": "2020-08-07T00:00:00.000Z",
    "lossDate": "2020-08-13T00:00:00.000Z",
    "loss": 111642.35349968076,
    "peril": "Unrecognized",
    "region": "string",
    "exposureResource`Number`": "string"
  }
]

Get Loss Tables for ELT Risk Sources

The Get Loss Tables for ELT Risk Sources operation (GET /platform/riskdata/v1/risksources/{{uuid}}/imported-elt) returns a list of loss tables for the specified risk source.

A risk source is a representation of risk to a cedant including modeled losses that underlies a program (reinsurance program) or business hierarchy position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.

The event loss table (ELT) is an output table that contains information about the loss-causing events in a DLM analysis, including the mean loss, the standard deviation (split into an independent and a correlated piece), the exposure value, and the event rate. The ELT is the basis of all modeled losses for all financial perspectives at all exposure levels and is used in computing output statistics.

The response returns information about an array of ELTs:

[
  {
    "analysisId": 16738,
    "sourceId": 21477,
    "eventId": 2864907,
    "positionValue": 182961890.62228483,
    "perspectiveCode": "GR",
    "stdDevI": 13084002.9308518,
    "stdDevC": 97423142.35982497,
    "expValue": 3464996216.86613,
    "rate": 5.9357381587688e-7,
    "peril": "Windstorm",
    "region": "North America",
    "oepWUC": 0.000018740929793703565,
    "exposureResourceType": "PORTFOLIO"
  }
]

The response object returns information for each loss table:

Property | Type | Description
analysisId | Number | ID of the analysis result.
eventId | Number | ID of the event.
expValue | Double | Maximum loss that can be incurred for the event.
exposureResourceId | Number | ID of the exposure.
exposureResourceType | String | Type of exposure, e.g. PORTFOLIO.
oepWUC | Number | Annual probability of occurrence without secondary uncertainty.
peril | String | Name of the peril, e.g. Fire.
perspectiveCode | String | Financial perspective that determines how losses are calculated. See Financial Perspectives.
positionValue | Double | Mean loss incurred for the event, based on the granularity and financial perspective specified in the output profile.
rate | Double | Annual rate of event occurrence, i.e. primary uncertainty; the probability of the event occurring within a year.
region | String | Region of the exposure.
sourceId | Integer | ID of the source, a peril model-specific property that links related events, e.g. events of different magnitudes that share the same source ("Calaveras Fault").
stdDevC | Double | Correlated standard deviation from the mean loss value. Assumes that all locations are correlated, which implies that if losses are large for one location, they are likely to be large for the other location.
stdDevI | Double | Independent standard deviation from the mean loss value. Assumes that all locations are completely independent, which means that knowing the size of the loss at one location provides no information about the size of the loss at the other location.
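A sketch of retrieving the ELT rows for a risk source and deriving a rough average annual loss from them (mean event loss weighted by annual event rate); the host, authentication header, and UUID are placeholders, and the calculation is an illustration rather than an official metric:

import requests

BASE_URL = "https://api.example.com"      # placeholder host
HEADERS = {"Authorization": "<api-key>"}  # authentication scheme is an assumption
RISK_SOURCE_UUID = "488ea126-xxxx-47bd-85ea-e0fcebe1931f"  # placeholder UUID

response = requests.get(
    f"{BASE_URL}/platform/riskdata/v1/risksources/{RISK_SOURCE_UUID}/imported-elt",
    headers=HEADERS, timeout=30)
response.raise_for_status()
elt_rows = response.json()

# Rough illustration: average annual loss from an ELT is the sum over events of
# (mean event loss) x (annual event rate).
average_annual_loss = sum(row["positionValue"] * row["rate"] for row in elt_rows)
print(f"Approximate AAL: {average_annual_loss:,.0f}")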

Get Loss Tables for PLT Risk Sources

The Get Loss Tables for PLT Risk Sources operation (GET /platform/riskdata/v1/risksources/{{uuid}}/imported-plt) returns a list of loss tables for the specified risk source.

A risk source is a representation of risk to a cedant including modeled losses that underlies a program (reinsurance program) or business hierarchy position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.

The period loss table is an output table that simulates event losses over the course of a time period, providing greater flexibility to evaluate loss metrics than the analytical calculations based on event loss tables (ELTs).

By simulating events through time, an HD model computes total loss as well as maximum event occurrence loss for each simulation period in the table, and generates loss statistics based on the distribution of losses across the large number of simulated periods. This methodology can calculate the impact of all contract terms, including terms with time-based features, such as contracts that are shorter (or longer) than a single year.

[
  {
    "periodId": 503,
    "weight": 0.00002,
    "eventId": 3508644,
    "eventDate": "2020-08-07T00:00:00.000Z",
    "lossDate": "2020-08-13T00:00:00.000Z",
    "loss": 111642.35349968076,
    "peril": "Unrecognized",
    "region": "string"
  }
]

The response object returns information for each loss table:

Property | Type | Description
eventDate | Date | Date of the event, e.g. 2020-08-07T00:00:00.000Z.
eventId | Number | ID of the event, a representation of a peril that may produce catastrophe losses.
exposureResourceNumber | Number | Number of the exposure resource.
lossDate | Date | Date of the first policy payout, e.g. 2020-08-13T00:00:00.000Z.
loss | Double | Expected sampled loss based on the position or financial perspective.
peril | String | Natural or man-made phenomenon that generates insurance loss, e.g. Earthquake, Fire.
periodId | Number | ID of the simulation period.
region | String | Model region of the analysis.
weight | Double | Likelihood that a simulation period occurs relative to the other simulation periods, e.g. 2.0E-5.
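For illustration, PLT rows can be aggregated into an average annual loss by summing losses per simulation period and weighting each period by its relative likelihood. This sketch assumes period weights sum to one across the simulation and that loss-free periods contribute zero; it is not an official calculation:

from collections import defaultdict

def plt_average_annual_loss(plt_rows):
    """Illustrative aggregation of PLT rows into an average annual loss.

    Losses are summed per simulation period and weighted by the period's
    relative likelihood (the weight property).
    """
    period_loss = defaultdict(float)
    period_weight = {}
    for row in plt_rows:
        period_loss[row["periodId"]] += row["loss"]
        period_weight[row["periodId"]] = row["weight"]
    return sum(period_weight[p] * period_loss[p] for p in period_loss)

# Example with the sample row above:
print(plt_average_annual_loss([{"periodId": 503, "weight": 0.00002, "loss": 111642.35}]))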

Get Report View

The Get Report View operation returns the specified report view.

A report view is a collection of reports that return exposure-specific metrics and statistics. The report view is generated automatically whenever an UnderwriteIQ client creates exposures in batch using the Exposure Bulk Edit operation.

This operation now returns two new reports: ACCUMULATIONS_BY_GEOGRAPHY_LOB and ACCUMULATIONS_BY_TREATY.

{
  "reportViewId": 3202,
  "reportViewName": "DemoJB with Marginal Impact",
  "exposureName": "DemoNov2022",
  "createdAt": "2024-01-31T19:40:13.707Z",
  "exposureId": 28104,
  "exposureResourceId": 80,
  "exposureResourceType": "ACCOUNT",
  "createdBy": "[email protected]",
  "notes": "",
  "details": [
    {
      "metricType": "ACCUMULATIONS_BY_TREATY",
      "analysisId": 231774,
      "appAnalysisId": 20406,
      "metricUrl": "https://xxxxx/platform/riskdata/v1/analyses/{analysisId}/treaty-accumulations?eventId={eventId}&treatyId={treatyId}",
      "additionalInfo": {
        "analysisName": "DemoNov2022: ACCOUNT: DemoJB"
      }
    }
  ]
}

The operation supports filtering by accumulation analysis type and accumulation metric type.

Create Risk Data Report

The Create Risk Data Report operation creates a downloadable report.

This operation now supports the ACCOUNT_ACCUMULATION_DETAILS report type.

The request body accepts four required parameters:

{
  "reportType": "ACCOUNT_ACCUMULATION_DETAILS",
  "resourceUri": "/platform/riskdata/v1/analyses/5555",
  "resourceType": "analysis",
  "settings": {
    "fileExtension": "CSV",
    "fileName": "Test",
    "data": ["Locations"]
  }
}

The request must specify ACCOUNT_ACCUMULATION_DETAILS as the report type. The report is supported for accounts only.

Parameter | Type | Description
resourceUri | String | Required. URI of the accumulation analysis resource on which the report is based.
resourceType | String | Required. Type of resource, i.e. analysis.
reportType | String | Required. The report type to export, i.e. ACCOUNT_ACCUMULATION_DETAILS.
settings | Object | Required. Collection of required and optional parameters (e.g. fileExtension, fileName, data).

Create Report Job

The Create Risk Data Report operation (POST /platform/riskdata/v1/reports) creates a variety of risk data reports including portfolio accumulation reports for RI-EXPOSUREIQ tenants.

A portfolio accumulation report (PORTFOLIO_ACCUMULATION_DETAILS) is based on an existing accumulation analysis that is specified in the resourceUri of the request.

This operation now supports creating PORTFOLIO_ACCUMULATION_DETAILS reports that return event-level accumulation data.

The request now accepts Events as an optional value in the required data parameter:

{
  "reportType": "PORTFOLIO_ACCUMULATION_DETAILS",
  "settings": {
    "fileExtension": "CSV",
    "fileName": "multi_event_accumulation_export",
    "data": ["Events", "Locations", "Accounts"]
  },
  "resourceUri": "/platform/riskdata/v1/analyses/34324",
  "resourceType": "analysis"
}

The data parameter identifies the types of summary-level data to export in a PORTFOLIO_ACCUMULATION_DETAILS report. This parameter accepts five options: Accounts, Geographies, Locations, Policies, and Events. The report returns summary-level analysis data in multiple files, one for each data option.

The Events option cannot be specified along with the Geographies or Policies options. If the Events option is specified with either the Geographies or Policies options or both, the API returns a 400 Bad Request error. The Events option can be specified along with the Accounts and Locations options.

The Events option is supported in multiple-event reports only. The Events option cannot be specified if the parent settings object specifies an eventId parameter. If both the eventId parameter and the Events option are specified, the API returns a 400 Bad Request error.
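A hedged sketch of creating a PORTFOLIO_ACCUMULATION_DETAILS report with the Events option, including a client-side check of the constraint described above; the host and Authorization header are placeholders:

import requests

BASE_URL = "https://api.example.com"      # placeholder host
HEADERS = {"Authorization": "<api-key>"}  # authentication scheme is an assumption

def create_portfolio_accumulation_report(analysis_uri, data_options, headers):
    """Create a PORTFOLIO_ACCUMULATION_DETAILS report, checking the Events
    constraint client-side before submitting (the server enforces the same
    rule with a 400 Bad Request)."""
    if "Events" in data_options and {"Geographies", "Policies"} & set(data_options):
        raise ValueError("Events cannot be combined with Geographies or Policies")
    body = {
        "reportType": "PORTFOLIO_ACCUMULATION_DETAILS",
        "settings": {
            "fileExtension": "CSV",
            "fileName": "multi_event_accumulation_export",
            "data": list(data_options),
        },
        "resourceUri": analysis_uri,
        "resourceType": "analysis",
    }
    response = requests.post(f"{BASE_URL}/platform/riskdata/v1/reports",
                             json=body, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()

create_portfolio_accumulation_report("/platform/riskdata/v1/analyses/34324",
                                     ["Events", "Locations", "Accounts"], HEADERS)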

North America Severe Convective Storm HD Models

The new Moodyʼs RMS North America Severe Convective Storm HD Models (HDv1.0) represent a significant advancement in risk modeling for tornado, hail, and straight-line wind perils across the contiguous United States.

Version HDv1.0 includes the United States, and version HDv1.1 to be released in 2026 will expand coverage to southern Canada. These models address the rapidly increasing losses from severe convective storms, which have recently surpassed hurricanes as the leading cause of insured losses in North America. Key drivers of this trend include urban expansion, rising construction costs, the proliferation of vulnerable building components, and evolving claims practices. The HD models are designed to help insurers and reinsurers better understand and manage these escalating risks by providing more accurate, high-resolution assessments.

Leveraging the latest scientific data and advanced computational techniques, the HD models introduce innovations such as location-coverage level simulation, temporal modeling of hazard events, and a recalibrated vulnerability framework. The models incorporate extensive meteorological and claims data, enabling a more realistic representation of both frequent and severe events. By delivering transparent, granular risk insights, the HD models empower insurers to make informed decisions, optimize risk management strategies, and enhance resilience in the face of increasingly severe convective storm activity.