2026.01.c

Highlights

The 2026.01.c release introduces the Auto Select API and enhancements to the Export API, Import API, Risk Data API, and Tenant Data API.

  • The Auto Select API supports the creation and management of tasks that automatically identify catastrophe modeling configurations (e.g. model profiles) that are applicable to a specified exposure.
  • The Export API now supports exporting exposure and analysis result data in formats compatible with earlier versions of the EDM and RDM database schemas.
  • The Import API now supports workflows for importing ELT data and creating new risk sources for programs based on that data.
  • The Tenant Data API now supports the management of encryption keys and VPN connections in support of VPN for Data Bridge.

Learn More

Accumulation API

Create Accumulation Job

The Create Accumulation Job operation (POST /platform/accumulation/v1/jobs) now accepts the optional includeLossByTreaty parameter.

"additionalOutputOptions": {
  "includeLossByTreaty": true
}

If true, accumulation analysis returns loss breakdowns by treaty.
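
For orientation, a hedged sketch of where the option might sit in a full request body; the surrounding structure is an illustrative placeholder, not the operation's documented schema:

{
  "settings": {
    "additionalOutputOptions": {
      "includeLossByTreaty": true
    }
  }
}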

Auto Select API

Create Auto Select Task

The Create Auto Select Task operation (POST /platform/autoselect/v1/tasks) creates a smart selection task that automatically identifies catastrophe modeling configurations (e.g. model profiles) that are applicable to a specified exposure.

In this initial release, this operation can be used to automatically apply model profiles to accounts prior to catastrophe modeling.

A model profile is a set of preconfigured settings that enables catastrophe modeling based on a single peril- and region-specific model. Every Create Model Job request must specify a model profile and one or more exposures (aggregate portfolio, portfolio, or account resource). To model an exposure, the region and peril of covered locations must match the region and peril of the model profile.

This operation selects model profiles that can be used to model the specified account based on the region and peril coverage of the account's locations. It compares each model profile's region and peril with those of the specified account's location exposures. The response rejects model profiles that are not applicable to the account.

The request body accepts the resource URI of an account exposure and AUTOSELECT task settings in the optional settings object, which can be used to specify a list of model profiles for possible selection:

{
  "resourceType": "ACCOUNT",
  "resourceUri": "/platform/riskdata/v1/exposures/123/accounts/12",
  "settings": {
    "taskName": "ACCOUNT_123",
    "modelProfileIds": []
  }
}

The request accepts the following parameters:

Parameter | Type | Description
resourceType | String | Type of resource. The current implementation supports ACCOUNT only.
resourceUri | String | Resource URI of the account resource.
settings | Object | Configurations for the autoselect task. The taskName parameter specifies the name of the task. The modelProfileIds array accepts the ID numbers of up to 100 model profiles. If unspecified, the task checks all model profiles.

Invalid modelProfileIds are ignored.

If successful, returns a 201 Created HTTP status code and the URI of the job in the Location header: {host}/platform/autoselect/v1/tasks/{uuid}. The client can use this URL to poll the status of the job.

This operation is supported for tenants with the RI-UNDERWRITEIQ entitlement. The client must pass a valid resource group ID in the x-rms-resource-group-id header parameter.
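
A minimal polling sketch in Python, assuming a requests-based client; the host, bearer token, and resource group ID are placeholders, and the polling interval is arbitrary:

import time

import requests

HOST = "https://api.example.com"  # placeholder host
HEADERS = {
    "Authorization": "Bearer <token>",              # placeholder credentials
    "x-rms-resource-group-id": "<resource-group>",  # required resource group ID
}

# Submit the task; the task URI is returned in the Location header.
body = {
    "resourceType": "ACCOUNT",
    "resourceUri": "/platform/riskdata/v1/exposures/123/accounts/12",
    "settings": {"taskName": "ACCOUNT_123", "modelProfileIds": []},
}
response = requests.post(f"{HOST}/platform/autoselect/v1/tasks", json=body, headers=HEADERS)
response.raise_for_status()
task_url = response.headers["Location"]

# Poll the task until it reaches a terminal status.
while True:
    task = requests.get(task_url, headers=HEADERS).json()
    if task["status"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(2)  # arbitrary polling interval

print(task.get("output", {}).get("log", {}).get("modelProfileIds"))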

Get Auto Select Task

The Get Auto Select Task operation (GET /platform/autoselect/v1/tasks/{taskId}) returns the results of a specific autoselection task.

This operation can be used to poll the status of an autoselection task.

In the initial release, the response returns a list of the model profiles that are applicable to the exposure specified in the original job.

{
  "taskUuid": "488ea126-xxxx-47bd-85ea-e0fcebe1931f",
  "taskType": "AUTOSELECT",
  "taskName": "ACCOUNT_123",
  "status": "COMPLETED",
  "createdAt": "2025-09-24T22:34:20.018Z",
  "updatedAt": "2025-09-24T22:35:20.018Z",
  "expiresAt": "2025-09-25T22:34:20.018Z",
  "createdBy": "[email protected]"
  "output": {
                "errors": [],
                "log": {
                    "modelProfileIds": [123, 456]
                }
            }
}

The response returns the following properties:

Property | Type | Description
taskUuid | String | UUID of task.
taskType | String | Type of task, e.g. AUTOSELECT.
taskName | String | User-defined name of task.
status | String | Status of task. One of COMPLETED, FAILED, IN PROGRESS, PENDING.
createdAt | String | Time AUTOSELECT task started in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z.
updatedAt | String | Time AUTOSELECT task was last updated in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z.
expiresAt | String | Time AUTOSELECT task expires in ISO 8601 format, e.g. 2020-01-01T00:00:00.000Z.
createdBy | String | Login of principal that created task.
output | Object | Object that returns information about the task, including an errors array and a log object that returns a list of applicable model profiles in the modelProfileIds array.

Export API

SQL Server Updates

Microsoft plans to end extended support of SQL Server 2016 on July 14, 2026. As a result, the Intelligent Risk Platform™ will end support for SQL Server 2016 in June 2026. This change may affect Export API jobs that export data to EDM or RDM databases.

Create Export Job

The Create Export Job operation now supports exporting statistics and metrics segmented by granularity to CSV or Parquet.

The RESULTS job type supports exporting analysis result data (loss tables, EP metrics, and statistics) to a flat file in CSV or PARQUET format. This operation now supports exporting results data grouped by output level.

An output level is a category that identifies the granularity of analysis result data, i.e. the resolution level used to aggregate computed losses. This operation now supports exporting loss details at the following output levels: Account, Admin1 by Cedant, Admin1 by LOB by Cedant, Admin1 by LOB, Admin1, Admin2 by Cedant, Admin2 by LOB by Cedant, Admin2 by LOB, Admin2, Cedant, City by LOB, City, Country by Cedant, Country by LOB by Cedant, Country by LOB, Country, Cresta by Cedant, Cresta by LOB by Cedant, Cresta by LOB, Cresta, District/Admin3 by LOB, District/Admin3, Facultative, LOB, Location, Other GeoID by LOB, Other GeoID, Policy, Portfolio, PostalCode by Cedant, PostalCode by LOB by Cedant, PostalCode by LOB, PostalCode, and Treaty.

The outputLevels parameter specifies a list of output levels that define the granularity of the exported losses:

{
  "exportType": "RESULTS",
  "resourceType": "ANALYSES",
  "resourceUris": ["/platform/riskdata/v1/analyses/3855810"],
  "settings": {
    "fileExtension": "CSV",
    "lossDetails": [
      {
        "metricType": "EP",
        "outputLevels": [
          "Admin1",
          "Admin1 by Cedant",
          "Admin1 by LOB",
          "District by LOB",
          "District",
          "Admin1 by LOB by Cedant",
          "Admin2",
          "Admin2 by Cedant",
          "Admin2 by LOB",
          "Admin2 by LOB by Cedant",
          "Account",
          "Cedant",
          "City",
          "City by LOB",
          "Country",
          "Country by Cedant",
          "Country by LOB",
          "Country by LOB by Cedant",
          "Cresta",
          "Cresta by Cedant",
          "Cresta by LOB",
          "Cresta by LOB by Cedant",
          "District",
          "District by LOB",
          "LOB",
          "Location",
          "Other GeoID",
          "Other GeoID by LOB",
          "Policy",
          "Portfolio",
          "PostalCode",
          "PostalCode by Cedant",
          "PostalCode by LOB",
          "PostalCode by LOB by Cedant",
          "Treaty",
          "Facultative"
        ],
        "perspectiveCodes": ["GU"]
      }
    ]
  }
}

Create Export Job

The Create Export Job operation now enables the client to select analysis result data to export to an RDM by metric type and output level.

The settings object accepts the lossDetails parameter that specifies an array of "losses" to export. Each loss is defined by a metric type and an array of output levels:

{
  "exportType": "RDM",
  "resourceType": "analyses",
  "resourceUris": ["/platform/riskdata/v1/analyses/15165331"],
  "settings": {
    "fileExtension": "BAK",
    "sqlVersion": 2019,
    "rdmName": "rdm_327993",
    "lossDetails": [
      {
        "metricType": "STATS",
        "outputLevels": ["Policy", "Location"]
      },
      {
        "metricType": "EP",
        "outputLevels": ["Portfolio", "Account"]
      },
      {
        "metricType": "LOSS_TABLES",
        "outputLevels": ["Geographic", "Facultative"]
      }
    ]
  }
}

Each loss details object specifies the metricType and an array of outputLevels for that metric.

Property | Type | Description
metricType | String | The metric or statistic returned, e.g. STATS, EP, LOSS_TABLES.
outputLevels | Array | A list of output levels, e.g. Policy, Portfolio, Account.

Create Export Job

The Export API now enables Intelligent Risk Platform client applications to specify the database schema version of exported exposure (EDM) and analysis result (RDM) data. With this release, data can now be exported to on-premise EDMs and RDMs that use one of the following database schemas: 18, 21, 22, 23, 24, and 25. Previously, exposure and result data was always exported to the latest version of the database schema.

A database schema defines how data is organized within a relational database; it defines the table names, fields, data types, and the relationships between these entities. As new features are added to the Intelligent Risk Platform, Moody's releases new versions of the EDM database schema and RDM schema that support the required tables, fields, and data types. The most current version of these database schemas is Version 25.

The Create Export Job operation now accepts a schemaVersion parameter that specifies the version of the EDM or RDM database schema of the exported exposure or result data. The schemaVersion parameter can be specified in the body of Create Export Job requests with the EDM, EXPOSURE_RESOURCE, EXPOSURE_VARIATION, LOCATION_RESULTS, RDM, RDM_DATABRIDGE, RESULTS, and ROLLUP_RESULTS export types.

The schemaVersion parameter accepts the following values: v18, v21, v22, v23, v24, and v25.


{
  "exportType": "EDM",
  "resourceType": "exposure",
  "settings": {
    "fileExtension": "BAK",
    "sqlVersion": "2019",
    "schemaVersion": "v18"
    "filters": {
      "exposureResourceType": "ACCOUNTS",
      "exposureResourceIds": [
        555,
        556,
        557
      ]
    },
    "fileName": "myEDM"
  },
  "resourceUri": "/platform/riskdata/v1/exposures/5555"

This feature provides the following benefits:

  • Share data with teams and partners on older environments without changing your primary EDMs or RDMs.
  • Keep integrations stable by exporting to the exact data version required by downstream tools.
  • Confirm and trace which version is used for each export, supporting audits and troubleshooting.

The following rules apply:

  • EDM exports support exporting to schema versions v23, v22, v21, and v18.
  • Analysis exports to an RDM support schema versions v23, v22, v21, and v18.
  • Analysis exports to a new RDM on Data Bridge support schema versions v23, v22, v21, and v18.

Note

When you export results to an existing RDM in Data Bridge, the export process requires schema version 25 and will upgrade the RDM's schema to version 25.

Get Export Job

The Get Export Job operation (GET /platform/export/v1/jobs/{jobId}) now returns information about the database schema version of exported EDM and RDM databases.

Every on-premise EDM or RDM conforms to a particular schema version, which defines how the data is organized within that database. The database schema defines the names of tables, fields, data types and the relationships between these entities in the database. As new features are added to the Intelligent Risk Platform, Moody's creates new versions of the EDM schema and RDM schema, which are updated in parallel.

This operation now returns the schema version of an exported EDM or RDM. The Create Export Job operation now enables client applications to specify the schema version of exported exposure and analysis result data in EDM, RDM, and DOWNLOAD_RDM export jobs. Consequently, the Get Export Job operation returns this information in the response.

The schemaVersion property is returned in the output object for all EDM, RDM, and DOWNLOAD_RDM export jobs:

{
  "jobId": "40481892",
  "userName": "[email protected]",
  "status": "FINISHED",
  "submittedAt": "2026-01-12T18:46:00.383Z",
  "startedAt": "2026-01-12T18:46:23Z",
  "endedAt": "2026-01-12T18:48Z",
  "name": "Single Analysis",
  "type": "DOWNLOAD_RDM",
  "progress": 100,
  "entitlement": "RI-RISKMODELER",
  "resourceGroupId": "0660550b-32fb-4360-b7ac-61e1b3761131",
  "priority": "medium",
  "details": {
    "resources": [
      {
        "uri": "/platform/riskdata/v1/analyses/19797273"
      }
    ],
    "summary": "Export is successful"
  },
  "tasks": [
    {
      "guid": "affb312e-b53b-4f60-89c4-765d66e382c6",
      "taskId": "1",
      "jobId": "40481892",
      "status": "Succeeded",
      "submittedAt": "2026-01-12T18:46:02.500Z",
      "createdAt": "2026-01-12T18:46:00.377Z",
      "name": "DOWNLOAD_RDM",
      "output": {
        "summary": "Export is successful",
        "errors": [],
        "log": {
          "analysisName": "Port_All_Acc",
          "rdmName": "A1_Databridge_Local_3_UXRt",
          "analysisIdMappings": "19797273->1",
          "schemaVersion": "V18"
        }
      },
      "percentComplete": 100
    }
  ]
}

Create Export Job

The Create Export Job operation now returns a warning message in the response header of EDM and RDM export jobs regarding the versions of SQL Server that are supported by the Intelligent Risk Platform.

Currently, the Intelligent Risk Platform™ (IRP) applications and Data Bridge convert all uploaded EDM, RDM, and CEDE databases to SQL Server Version 2019 after upload. Similarly, downloaded databases are downloaded as SQL Server Version 2019 databases by default. Beginning in H2 of 2026, uploaded databases will be converted and stored as SQL Server Version 2022 databases and downloaded databases will be downloaded as SQL Server Version 2022 databases by default. For download, you will be able to select SQL Server Version 2019, but you should expect longer download times for this version.

Moodyʼs recommends upgrading your SQL Server Version to 2022 to experience the best performance.

Import API

Create Import Folder

The Create Import Folder operation (POST /platform/import/v1/folders) now supports the creation of RISK_SOURCE import folders.

An import folder is a temporary storage location and a logical path on Amazon S3. Once uploaded to the import folder, ELT data can be imported into the Intelligent Risk Platform as a risk source.

A risk source is a representation of risk to a cedant, including modeled losses, that underlies a program (reinsurance program) or business hierarchy position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.

The RISK_SOURCE import folder type accepts uploads of ELT data in CSV format. Once uploaded to the import folder, loss table data can be imported into the Intelligent Risk Platform as a risk source using the Create Import Job operation.

{
  "folderType": "RISK_SOURCE",
  "properties": {
    "fileExtension": "CSV",
    "fileTypes": ["lossTableFile"]
  }
}

All parameters are specified in the request body. The folderType and properties object are required.

PropertyTypeDescription
folderTypeStringType of the import folder, i.e. RISK_SOURCE.
propertiesObjectConfiguration settings for import folder including fileExtension and fileTypes.
fileExtensionStringFile extension of uploaded file.
fileTypesArrayType of uploaded file, i.e. lossTableFile.

If successful, creates a RISK_SOURCE import folder and returns a 200 OK HTTP status code and a response object that specifies the folderId, which identifies the import folder's location on Amazon S3. The response returns the information needed to upload a file to the import folder using the AWS API: fileUri, accessKeyId, secretAccessKey, sessionToken, path, region, and uploadUrl:

{
  "uploadDetails": {
    "lossTableFile": {
      "fileUri": "platform/import/v1/folders/31888/files/90730",
      "presignParams": {
        "accessKeyId": "xxxx",
        "secretAccessKey": "xxxx",
        "sessionToken": "xxx",
        "path": "xxxx",
        "region": "xxxx"
      },
      "uploadUrl": "https://rms-tenants-xxxxx.s3.amazonaws.com/4000540/import/platform/edm/31888/90730-exposurefile.mdf"
    }
  },
  "folderType": "RISK_SOURCE",
  "folderId": "31888"
}

The uploadDetails object provides information that will enable the tenant to upload loss table data to the import folder on AWS.

Property | Type | Description
lossTableFile | Object | Object that contains the upload details for the loss table file: fileUri, presignParams, and uploadUrl.
fileUri | String | Resource URI for the file, e.g. platform/import/v1/folders/31888/files/90730.
presignParams | Object | Contains security credentials for the file: accessKeyId, secretAccessKey, sessionToken, path, and region.

The presignParams can be used to upload CSV files to Amazon S3 using the Amazon API:

Property | Type | Description
accessKeyId | String | Base-64 encoded access key for file.
secretAccessKey | String | Base-64 encoded secret access key for file.
sessionToken | String | Base-64 encoded session token for file.
path | String | Base-64 encoded path for file.
region | String | Base-64 encoded AWS region for file.
uploadUrl | String | Amazon S3 URL for file.
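
As a sketch of the upload step in Python, assuming the decoded temporary credentials can be passed to a standard S3 client and that the bucket and object key can be derived from uploadUrl; your tenant's bucket layout may differ:

import base64
from urllib.parse import urlparse

import boto3

def upload_loss_table(upload_details: dict, csv_path: str) -> None:
    """Upload a loss table CSV using the presigned credentials from Create Import Folder."""
    file_info = upload_details["lossTableFile"]
    params = file_info["presignParams"]

    def decode(value: str) -> str:
        # The presignParams fields are Base-64 encoded.
        return base64.b64decode(value).decode()

    s3 = boto3.client(
        "s3",
        aws_access_key_id=decode(params["accessKeyId"]),
        aws_secret_access_key=decode(params["secretAccessKey"]),
        aws_session_token=decode(params["sessionToken"]),
        region_name=decode(params["region"]),
    )

    # Derive the bucket and object key from the uploadUrl,
    # e.g. https://<bucket>.s3.amazonaws.com/<key>.
    parsed = urlparse(file_info["uploadUrl"])
    bucket = parsed.netloc.split(".")[0]
    key = parsed.path.lstrip("/")
    s3.upload_file(csv_path, bucket, key)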

Create Import Job

The Create Import Job operation now supports importing unsegmented (portfolio-level) event loss data as a risk source for the specified programs.

A RISK_SOURCE import type creates an import job (IMPORT_RISKSOURCE) that imports unsegmented event loss table (ELT) data from the specified RISK_SOURCE import folder and applies that loss data as a risk source to the specified program.

A program is a collection of subjects and program treaties that represents a reinsurance program, i.e. a contract between an insurer (cedant) and a reinsurer (underwriter). The subjects (exposures and risk sources) identify the risk items and the program treaties define the terms of the contract for those subjects. A risk source is a representation of risk to a cedant, including modeled losses, that underlies the program position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.

The ID of a resource group must be specified in the x-rms-resource-group-id header parameter.

{
  "importType": "RISK_SOURCE",
  "resourceUri": "/platform/riskdata/v1/programs/432342",
  "settings": {
    "folderId": "56",
    "riskSourceName": "external risk source",
    "riskSourceDescription": "Risk source imported from external system",
    "riskSourceType": "ELT",
    "delimiter": "TAB",
    "jobname": "Test Job Name",
    "modelUuid": "550e8400-e29b-41d4-a716-446655440000",
    "currencyCode": "USD"
  }
}

The body of the request takes three required parameters:

Parameter | Type | Description
importType | String | Type of import job, i.e. RISK_SOURCE.
resourceUri | String | URI of program, e.g. /platform/riskdata/v1/programs/432342. Imported ELT data is applied as a risk source to this program.
settings | Object | Configuration settings for the RISK_SOURCE import job.

The settings object specifies configurations for the import job. Within the settings object, the folderId, riskSourceName, riskSourceType, delimiter, modelUuid, and currencyCode parameters are required.

Parameter | Type | Description
folderId | String | ID of import folder.
riskSourceName | String | Name of risk source to import.
riskSourceDescription | String | Description of risk source.
riskSourceType | String | Type of risk source, e.g. ELT. In the current release, only ELT data is supported.
delimiter | String | Delimiter used in the uploaded file. One of TAB, COMMA, or SEMICOLON.
jobname | String | Name of import job. Used to track the RISK_SOURCE import job.
modelUuid | String | UUID of model to associate with risk source.
currencyCode | String | Three-letter currency code in ISO 4217 format, e.g. USD. Loss table data is returned in the specified currency.

If successful, returns a 201 Created HTTP status code and adds an IMPORT_RISKSOURCE job to the job queue. The response returns the URI of the job in the Location header. The client can use this URL to poll the status of the job.

This operation requires the RI-TREATYIQ entitlement. The client must belong to a group that has been assigned the Treaty Underwriter or TreatyIQ Admin role.

Get Import Job

The Get Import Job operation now returns information about the specified IMPORT_RISKSOURCE import job.

The Import API now supports importing unsegmented (portfolio-level) ELTs as risk sources for programs using the Create Import Job operation. The Get Import Job operation can be used to poll the status of new IMPORT_RISKSOURCE jobs.

{
  "jobId": "22667023",
  "userName": "string",
  "status": "QUEUED",
  "submittedAt": "2025-09-18T00:54:15.835Z",
  "startedAt": "2025-09-18T00:54:15.835Z",
  "endedAt": "2025-09-18T00:54:15.835Z",
  "name": "Test Job Name",
  "type": "IMPORT_RISKSOURCE",
  "progress": 0,
  "details": {
    "resources": [
      {
        "uri": "/platform/riskdata/v1/programs/123"
      }
    ],
    "summary": "Risksource is imported successfully"
  },
  "tasks": [
    {
      "taskId": "string",
      "guid": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "jobId": "string",
      "status": "string",
      "submittedAt": "2025-09-18T00:54:15.835Z",
      "createdAt": "2025-09-18T00:54:15.835Z",
      "name": "Import_Risksource_Task", //new task name
      "percentComplete": 0,
      "priorTaskGuids": [],
      "output": {
        "summary": "Risksource is imported successfully",
        "errors": [
          {
            "message": "string" // only if the job is failed
          }
        ],
        "log": {
          "expiresAt": "string",
          "logUrl": "string",
          "riskSourceUuid": "string"
        }
      }
    }
  ]
}

Risk Data API

Get Report View

The Get Report View operation (GET /platform/riskdata/v1/reportviews/{id}) returns the specified report view.

A report view is a collection of reports that return exposure-specific metrics and statistics. The report view is generated automatically whenever an UnderwriteIQ client creates exposures in batch using the Exposure Bulk Edit operation.

This operation now returns the ACCUMULATIONS_BY_TREATY report.

{
  "reportViewId": 3202,
  "reportViewName": "DemoJB with Marginal Impact",
  "exposureName": "DemoNov2022",
  "createdAt": "2024-01-31T19:40:13.707Z",
  "exposureId": 28104,
  "exposureResourceId": 80,
  "exposureResourceType": "ACCOUNT",
  "createdBy": "[email protected]",
  "notes": "",
  "details": [
    {
      "metricType": "ACCUMULATIONS_BY_TREATY",
      "analysisId": 231774,
      "appAnalysisId": 20406,
      "metricUrl": "https://xxxxx/platform/riskdata/v1/analyses/{analysisId}/treaty-accumulations?eventId={eventId}&treatyId={treatyId}",
      "additionalInfo": {
        "analysisName": "DemoNov2022: ACCOUNT: DemoJB"
      }
    }
  ]
}

This operation requires the RI-UNDERWRITEIQ entitlement.

Get PLT Risk Source

The Get PLT Risk Source operation (GET /platform/riskdata/v1/risksources/{uuid}/imported-plt) returns a list of the period loss tables (PLTs) imported as program risk sources.

The period loss table is an output table that simulates event losses over the course of a time period, providing greater flexibility to evaluate loss metrics than the analytical calculations based on event loss tables (ELTs).

The Import API now enables RI-TREATYIQ tenants to import unsegmented (portfolio-level) PLTs into programs and use the PLT as a risk source for that program. To learn more, see the updates to the Create Import Folder and Create Import Job operations in this release.

A risk source is a representation of risk to a cedant, including modeled losses, that underlies a program (reinsurance program) or business hierarchy position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.

[
  {
    "periodId": 503,
    "weight": 0.00002,
    "eventId": 3508644,
    "eventDate": "2020-08-07T00:00:00.000Z",
    "lossDate": "2020-08-13T00:00:00.000Z",
    "loss": 111642.35349968076,
    "peril": "Unrecognized",
    "region": "string"
  }
]

The response object returns information for each loss table:

Property | Type | Description
eventDate | Date | Date of event, e.g. 2020-08-07T00:00:00.000Z.
eventId | Number | ID of event, a representation of a peril that may produce catastrophe losses.
exposureResourceNumber | Number | Number of exposure resource.
lossDate | Date | Date of first policy payout, e.g. 2020-08-07T00:00:00.000Z.
loss | Double | Expected sampled loss based on the position or financial perspective.
peril | String | Natural or man-made phenomenon that generates insurance loss, e.g. Earthquake, Fire.
periodId | Number | ID of simulation period.
region | String | Model region of the analysis.
weight | Double | Likelihood that a simulation period occurs relative to the other simulation periods, e.g. 2.0E-5.
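
Because each row carries a period weight and a sampled loss, a client can estimate the average annual loss (AAL) by summing weight × loss across the table; a minimal sketch, assuming the returned list is complete and the weights are per-period probabilities:

def average_annual_loss(plt_rows: list) -> float:
    """Estimate AAL from imported PLT rows as the weight-adjusted sum of sampled losses."""
    return sum(row["weight"] * row["loss"] for row in plt_rows)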

This operation requires the RI-TREATYIQ entitlement.

Get ELT Risk Source

The Get ELT Risk Source operation (GET /platform/riskdata/v1/risksources/{uuid}/imported-elt) returns a list of the event loss tables (ELTs) imported as program risk sources.

The event loss table (ELT) is an output table that contains information about the loss-causing events in a DLM analysis, including the mean loss, standard deviation (split into an independent and a correlated piece), exposure value, and event rate. The ELT is the basis of all modeled losses for all financial perspectives at all exposure levels and is used in computing output statistics.

The Import API now enables RI-TREATYIQ tenants to import unsegmented (portfolio-level) ELTs into programs and use the ELT as a risk source for that program. To learn more, see the updates to the Create Import Folder and Create Import Job operations in this release. A risk source is a representation of risk to a cedant, including modeled losses, that underlies a program (reinsurance program) or business hierarchy position. The risk source links the program or business hierarchy to EDMs that contain the portfolios or analysis results.

This operation returns detailed information about each loss table imported as a risk source:

[
  {
    "eventId": 2864907,
    "positionValue": 182961890.62228483,
    "stdDevI": 13084002.9308518,
    "stdDevC": 97423142.35982497,
    "expValue": 3464996216.86613
  }
]

The response object returns information for each loss table:

Property | Type | Description
eventId | Number | ID of event, a representation of a peril that may produce catastrophe losses.
positionValue | Number | Mean loss incurred for event. Based on the granularity and financial perspective specified in the output profile.
stdDevI | Number | Standard deviation from mean loss value. Independent standard deviation assumes that all locations are completely independent, which means that knowing the size of the loss at one location does not provide any information about the size of the loss at another location.
stdDevC | Number | Standard deviation from mean loss value. Correlated standard deviation assumes that all locations are correlated, which implies that if the losses are large for one location, they are likely to be large for another location.
expValue | Number | Maximum loss that can be incurred for the event.
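
A common convention is to recombine the two pieces by simple addition to recover an event's total standard deviation; this recombination rule is an assumption here, not something this release note states:

def total_std_dev(elt_row: dict) -> float:
    # Assumed convention: total standard deviation = independent piece + correlated piece.
    return elt_row["stdDevI"] + elt_row["stdDevC"]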

This operation requires the RI-TREATYIQ entitlement.

Tenant Data API

VPN for Data Bridge Product Launch

VPN for Data Bridge, a new product, allows Data Bridge customers to establish connectivity to Data Bridge via a site-to-site VPN. This ensures that you can connect to Data Bridge just as you connect to your on-premises infrastructure. The product was released to a limited number of customers in 2025 and is now available for purchase by all customers.

Data Bridge licensees can create, manage, and monitor site-to-site VPN connections for their Data Bridge instances. This streamlines T-SQL connectivity for your Data Bridge instance and makes the connection more secure. With VPN for Data Bridge, Data Bridge administrators no longer need to maintain CIDR block IP allowlists.

Data Bridge admins can:

  • Create, update, or delete VPN connections for their Data Bridge server instances.
  • Switch the Data Bridge connectivity from CIDR block whitelisting to site-to-site VPN connection.
  • View and manage VPN configurations and logs from the Admin Center.

To learn more about VPN for Data Bridge and how it can help your organization, contact Moody's Sales.

Search Tenant Data Jobs

The Search Tenant Data Jobs operation (GET /platform/tenantdata/v1/jobs) returns a list of tenant data jobs.

This operation now returns information about CREATE_VPN, UPDATE_VPN, and DELETE_VPN jobs.

This operation supports response filtering by API, job type, and entitlement. Clients can filter responses by job type:

https://<HOST>/platform/tenantdata/v1/jobs?filter=type=CREATE_VPN

The response returns information about the current status of the job:

[
  {
    "jobId": "40545922",
    "userName": "[email protected]",
    "status": "FINISHED",
    "submittedAt": "2026-01-14T16:03:36.010Z",
    "startedAt": "2026-01-14T16:03:38Z",
    "endedAt": "2026-01-14T16:09:15Z",
    "name": "UPDATE_VPN",
    "type": "UPDATE_VPN",
    "progress": 100,
    "entitlement": "IC-VPN",
    "details": {
      "resources": [],
      "summary": "UPDATE_VPN is successful",
      "log": {
        "tunnelId": "1",
        "customerGatewayIp": "18.116.78.48",
        "encryptionType": "preSharedKey",
        "encryptionKeyId": "113"
      }
    },
    "jobUri": "/platform/vpn/v1/jobs/40545922"
  },
  {
    "jobId": "40544513",
    "userName": "[email protected]",
    "status": "FINISHED",
    "submittedAt": "2026-01-14T15:00:03.378Z",
    "startedAt": "2026-01-14T15:00:06Z",
    "endedAt": "2026-01-14T15:06:47Z",
    "name": "CREATE_VPN",
    "type": "CREATE_VPN",
    "progress": 100,
    "entitlement": "IC-VPN",
    "details": {
      "resources": [],
      "summary": "CREATE_VPN is successful",
      "log": {
        "tunnelId": "1",
        "customerGatewayIp": "18.116.78.48",
        "encryptionType": "preSharedKey",
        "encryptionKeyId": "113"
      }
    },
    "jobUri": "/platform/vpn/v1/jobs/40544513"
  },
  {
    "jobId": "40579871",
    "userName": "[email protected]",
    "status": "FINISHED",
    "submittedAt": "2026-01-15T16:10:36.827Z",
    "startedAt": "2026-01-15T16:10:38Z",
    "endedAt": "2026-01-15T16:17:01Z",
    "name": "DELETE_VPN",
    "type": "DELETE_VPN",
    "progress": 100,
    "entitlement": "IC-VPN",
    "details": {
      "resources": [],
      "summary": "DELETE_VPN is successful",
      "log": {
        "tunnelId": "1",
        "customerGatewayIp": "18.116.78.49",
        "encryptionType": "preSharedKey",
        "encryptionKeyId": "141"
      }
    },
    "jobUri": "/platform/vpn/v1/jobs/40579871"
  }
]

The response returns the following properties for every Tenant Data job.

Property | Type | Description
jobId | String | ID of tenant data job.
userName | String | Login of principal that created the job.
status | String | Status of the job, e.g. FINISHED.
submittedAt | String | Time job was submitted in ISO 8601 format.
startedAt | String | Time job started in ISO 8601 format.
endedAt | String | Time job ended in ISO 8601 format.
name | String | User-defined name of tenant job.
type | String | Type of tenant job, e.g. CREATE_VPN, DELETE_VPN, UPDATE_VPN.
progress | Number | Progress of tenant data job.
entitlement | String | Entitlement of tenant data job, e.g. IC-VPN.
jobUri | String | Resource URI of tenant data job.
details | Object | Detailed information for the job, including a list of resources and a log object.

Once the status of the job is FINISHED, the log object returns key information about the VPN connection:

Property | Type | Description
tunnelId | String | ID of tunnel.
customerGatewayIp | String | Customer gateway IP address.
encryptionType | String | Type of encryption key, e.g. preSharedKey.
encryptionKeyId | String | ID of encryption key.

Create Encryption Key

The Create Encryption Key operation (POST /platform/tenantdata/v1/encryption-keys) creates an encryption key.

Requests to create VPN encryption keys must specify the encryptionKeyType, encryptionKeySubType, encryptionKeyName, and encryptionKeyValue parameters.

{
  "encryptionKeyType": "vpn",
  "encryptionKeySubType": "pre-shared-key",
  "encryptionKeyName": "202507VpnKey",
  "encryptionKeyValue": "myPreSharedKey_EcnryptionKeyValue"
}

The request accepts the following parameters:

Parameter | Type | Description
encryptionKeyType | String | Type of encryption key, i.e. vpn.
encryptionKeySubType | String | Subtype of vpn encryption key, i.e. pre-shared-key.
encryptionKeyName | String | User-defined name of encryption key.
encryptionKeyValue | String | User-defined string (between 8 and 32 characters in length) that defines the encryption key.

If successful, returns a 201 Created HTTP status code and adds the encryption key to the AWS key store. Newly created encryption keys have a status of Available. For detailed information about the AWS key store, see Key Stores.

The response returns the encryption key ID:

{
  "keyId": 140
}

Search Encryption Keys

The Search Encryption Keys operation (GET /platform/tenantdata/v1/encryption-keys) returns a list of encryption keys.

This operation supports response filtering based on property values. Depending on the property, you may use a combination of comparison operators, list operators, and logical operators.

Property | Type | Comparison | List
createdAt | Timestamp | =, !=, <, >, <=, >= |
createdBy | String | =, !=, LIKE, NOT LIKE | IN, NOT IN
encryptionKeyName | String | =, !=, LIKE, NOT LIKE | IN, NOT IN
encryptionKeySubType | String | =, != | IN, NOT IN
encryptionKeyType | String | =, != | IN, NOT IN
expirationDate | Timestamp | =, !=, <, >, <=, >= |
id | Number | =, !=, <, >, <=, >= | IN, NOT IN
startDate | Timestamp | =, !=, <, >, <=, >= |
status | String | =, != | IN, NOT IN
tenantId | String | =, != | IN, NOT IN

The encryptionKeyType and encryptionKeySubType parameters filter based on ENUM values. The encryptionKeyType query parameter supports filtering by the ENUM values VPN, TDE, and BYOK. Only VPN is supported in the current release. The encryptionKeySubType query parameter supports filtering by the ENUM values PRE-SHARED-KEY, MASTER-KEY-PASSWORD, and CUSTOMER-UPLOADED-KEY.

The startDate, createdAt, and expirationDate query parameters all accept strings that record timestamp values in ISO 8601 format. The startDate and expirationDate parameter values map to the activatedAt and expiredAt encryption key properties respectively.

The encryptionKeyName and createdBy query parameters support the LIKE and NOT LIKE comparison operators, which support use of the % wildcard character. For example, encryptionKeyName LIKE "VPN%".
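
For example, following the same query string convention as the Search Tenant Data Jobs operation (with URL encoding applied as needed):

https://<HOST>/platform/tenantdata/v1/encryption-keys?filter=encryptionKeyName LIKE "VPN%"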

All properties support the AND logical operator, e.g. encryptionKeyType="VPN" AND status="IN-USE". To learn more, see Response Filtering.

The response is a list of encryption key details.

[
  {
    "id": 131,
    "tenantId": "2001848",
    "encryptionKeyType": "VPN",
    "encryptionKeySubType": "PRE-SHARED-KEY",
    "encryptionKeyName": "Prem key",
    "encryptionKeyValue": "************_ABC",
    "status": "AVAILABLE",
    "createdBy": "[email protected]",
    "createdAt": "2026-01-13T18:02:55.541797Z",
    "activatedAt": null,
    "expiredAt": null
  },
  {
    "id": 125,
    "tenantId": "2001848",
    "encryptionKeyType": "VPN",
    "encryptionKeySubType": "PRE-SHARED-KEY",
    "encryptionKeyName": "Kembly PSK Key 3",
    "encryptionKeyValue": "************5678",
    "status": "IN-USE",
    "createdBy": "[email protected]",
    "createdAt": "2026-01-12T20:47:06.783020Z",
    "activatedAt": "2026-01-14T20:01:34.889114Z",
    "expiredAt": null
  },
  {
    "id": 122,
    "tenantId": "2001848",
    "encryptionKeyType": "VPN",
    "encryptionKeySubType": "PRE-SHARED-KEY",
    "encryptionKeyName": "TestKey09",
    "encryptionKeyValue": "************y123",
    "status": "REVOKED",
    "createdBy": "[email protected]",
    "createdAt": "2026-01-12T15:50:16.433609Z",
    "activatedAt": "2026-01-12T20:09:01.121550Z",
    "expiredAt": null
  }
]

The response returns the following properties for each encryption key.

Property | Type | Description
id | Integer | ID of encryption key.
tenantId | String | ID of tenant.
encryptionKeyType | String | Type of encryption key, i.e. VPN. VPN is the only encryption key type currently supported.
encryptionKeySubType | String | Subtype of encryption key, i.e. PRE-SHARED-KEY. One of PRE-SHARED-KEY, MASTER-KEY-PASSWORD, CUSTOMER-UPLOADED-KEY.
encryptionKeyName | String | Name of encryption key.
encryptionKeyValue | String | Value of encryption key.
status | String | Status of encryption key. One of AVAILABLE, REVOKED, and IN-USE.
createdBy | String | Login of principal that created the encryption key.
createdAt | String | Timestamp of encryption key creation date.
activatedAt | String | Timestamp of encryption key activation date.
expiredAt | String | Timestamp of encryption key expiration date.

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.

Get Encryption Key

The Get Encryption Key operation (GET /platform/tenantdata/v1/encryption-keys/{id}) returns the specified encryption key.

A VPN encryption key is a pre-shared key that enables clients to create a VPN connection. The VPN encryption key represents a shared secret that enables Intelligent Risk Platform tenants to connect to Data Bridge via a VPN connection.

In the current release, the Tenant Data API supports VPN encryption keys only. Every encryption key is of the VPN encryption key type and the PRE-SHARED-KEY encryption key subtype.

The required path parameter specifies the ID of a VPN encryption key.

{
  "id": 140,
  "tenantId": "2001848",
  "encryptionKeyType": "VPN",
  "encryptionKeySubType": "PRE-SHARED-KEY",
  "encryptionKeyName": "TestVpnKey",
  "encryptionKeyValue": "************alue",
  "status": "AVAILABLE",
  "createdBy": "[email protected]",
  "createdAt": "2026-01-15T01:59:33.113519Z",
  "activatedAt": null,
  "expiredAt": null
}

The response returns the following encryption key properties.

Property | Type | Description
id | Integer | ID of encryption key.
tenantId | String | ID of tenant.
encryptionKeyType | String | Type of encryption key, i.e. VPN.
encryptionKeySubType | String | Subtype of encryption key, i.e. PRE-SHARED-KEY.
encryptionKeyName | String | Name of encryption key.
encryptionKeyValue | String | Value of encryption key.
status | String | Status of encryption key. One of ACTIVE, AVAILABLE, EXPIRED, IN-USE, PENDING, and REVOKED.
createdBy | String | Login of principal that created the encryption key.
createdAt | String | Timestamp of encryption key creation date in ISO 8601 format.
activatedAt | String | Timestamp of encryption key activation date.
expiredAt | String | Timestamp of encryption key expiration date.

By default, newly created encryption keys are inactive; the status of the key is AVAILABLE. Once the encryption key is used to create a VPN connection, the status of the key is changed to IN-USE. Use the Update Encryption Key operation to change the status of the key to REVOKED.

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.

Update Encryption Key

The Update Encryption Key operation (PATCH /platform/tenantdata/v1/encryption-keys/{id}) updates the status of the specified encryption key.

A VPN encryption key is a pre-shared key that enables clients to create a VPN connection. The VPN encryption key represents a shared secret that enables Intelligent Risk Platform tenants to connect to Data Bridge via a VPN connection.

This operation enables the client to revoke the encryption key. Once revoked, the encryption key cannot be used to create a VPN connection.
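
A hedged sketch of a revocation request; the exact property name accepted by the PATCH body is an assumption, not confirmed by this release note:

{
  "status": "REVOKED"
}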

At any one time an encryption key is defined by one of three statuses: AVAILABLE, IN-USE, REVOKED.

Status | Description
AVAILABLE | The encryption key is not assigned to a VPN connection.
IN-USE | The encryption key is assigned to a VPN connection.
REVOKED | The encryption key cannot be assigned to a VPN connection.

The Create VPN Connection and Update VPN Connection operations enable the client to assign an encryption key to a new or existing VPN connection. An encryption key can be assigned to one and only one VPN connection at a time.

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.

Create VPN Connection

The Create VPN Connection operation (POST /platform/tenantData/v1/vpn-connections) creates and configures a VPN connection between the tenant and Data Bridge.

A VPN connection is a configuration that defines a secure, encrypted site-to-site tunnel that can be used to send data between a tenant's on-premise SQL Server instances and their managed server instances on Data Bridge.

{
  "customerGatewayIp": "18.116.78.XX",
  "comments": "Test VPN connection",
  "bgpRouting": false,
  "customerSubnetIps": ["10.0.0.0/24", "10.0.1.0/24"],
  "encryptionType": "preSharedKey",
  "encryptionKeyResourceId": "/platform/tenantdata/v1/encryption-keys/32",
  "enabledApps": ["DATA_BRIDGE"]
}

The customerGatewayIp parameter is required.
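
Because the other parameters are optional, a minimal request can specify the gateway IP alone:

{
  "customerGatewayIp": "18.116.78.XX"
}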

Property | Type | Description
bgpRouting | Boolean | If true, the VPN connection uses Border Gateway Protocol, a protocol for efficiently routing traffic.
comments | String | User-defined description of VPN connection.
customerGatewayIp | String | Required. Customer gateway IP address.
customerSubnetIps | Array | List of customer subnet IP addresses. Private IPs for NAT (Network Address Translation) are non-routable, RFC-1918 addresses used to conserve public IP addresses.
customerBgpAsn | String | ASN (Autonomous System Number) that identifies the tenant's network. Required if bgpRouting is true.
enabledApps | Array | List of Intelligent Risk Platform applications. VPN for Data Bridge currently supports DATA_BRIDGE only.
encryptionKeyResourceId | String | Resource URI of an AVAILABLE encryption key. Encryption keys with a status of IN-USE or REVOKED cannot be assigned.
encryptionType | String | Type of encryption, e.g. preSharedKey.
vpnStatus | String | Status of VPN connection, i.e. on or off. By default, off.

On success, this operation returns a 202 Accepted HTTP status code and adds a CREATE_VPN job to the workflow queue. Use the Search Tenant Data Jobs operation to poll the status of the CREATE_VPN job.

The client can use the Get VPN Connection operation to poll the status of the VPN connection. While the CREATE_VPN job is running, the vpnStatus of the VPN connection changes to creating. If the CREATE_VPN job fails, the vpnStatus is set to failed_create.

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the Admin or Data Bridge Admin roles.

Search VPN Connections

The Search VPN Connections operation (GET /platform/tenantData/v1/vpnconnections) returns a list of VPN connections.

A VPN connection is a set of configurations that define a secure, encrypted site-to-site connection between a tenant's on-premise SQL Server instances and their managed server instances on Data Bridge.

This operation supports filtering VPN configurations by customerGatewayIp or bgpRouting. Responses can be sorted by vpnConnectionId.

Parameter | Type | Comparison | List
awsSecurityGroupIds | Array | |
awsVpnConnectionId | String | =, != | IN, NOT IN
bgpRouting | String | =, != | IN, NOT IN
comments | String | =, !=, LIKE, NOT LIKE | IN, NOT IN
createdAt | Timestamp | =, !=, <, >, <=, >= |
createdBy | String | =, !=, LIKE, NOT LIKE | IN, NOT IN
customerBgpAsn | Number | =, !=, <, >, <=, >= | IN, NOT IN
customerGatewayIp | String | =, != | IN, NOT IN
customerSubnetIps | Array | |
dnsResolverIps | Array | |
enabledAt | Timestamp | =, !=, <, >, <=, >= |
encryptionKeyId | Number | =, !=, <, >, <=, >= | IN, NOT IN
encryptionType | String | =, != | IN, NOT IN
id | Number | =, !=, <, >, <=, >= | IN, NOT IN
irpBgpAsn | Number | =, !=, <, >, <=, >= | IN, NOT IN
irpSubnetIps | Array | |
status | String | =, != | IN, NOT IN
vpnConnectionName | String | =, !=, LIKE, NOT LIKE | IN, NOT IN

The bgpRouting and status parameters filter based on ENUM values. The bgpRouting query parameter supports filtering by enabled and disabled values. The status query parameter supports filtering by CREATING, ON, OFF, DELETING, FAILED_CREATE, FAILED_DELETE values.

The createdAt and enabledAt query parameters accept strings that record timestamp values in ISO 8601 format.

The comments, createdBy, and vpnConnectionName query parameters support the LIKE and NOT LIKE comparison operators, which support use of the % wildcard character. For example, vpnConnectionName LIKE "Boston%".

All properties support the AND logical operator, e.g. encryptionType="preSharedKey" AND status="ON". To learn more, see Response Filtering.

If successful, returns 200 OK HTTP status code and a list of VPN connections.

[
  {
    "vpnConnectionId": 114,
    "customerGatewayIps": ["18.116.78.49"],
    "comments": "Test Comments Updated",
    "dnsResolverIps": ["10.1.3.38", "10.1.9.10"],
    "bgpRouting": true,
    "customerBgpAsn": "64530",
    "irpBgpAsn": "64512",
    "customerSubnetIps": ["11.0.0.0/17"],
    "irpSubnetIps": ["10.1.0.0/22", "10.1.8.0/22", "10.1.16.0/22"],
    "irpTargetIps": ["10.1.1.215", "10.1.11.63", "10.1.17.226"],
    "encryptionType": "preSharedKey",
    "encryptionKeyResourceId": 125,
    "enabledApps": ["DATA_BRIDGE"],
    "vpnStatus": "off",
    "tunnelSettings": [
      {
        "id": 1,
        "tunnelOutsideIp": "52.48.1.169",
        "SourceIp": "169.254.120.58",
        "NeighborIp": "169.254.120.57"
      },
      {
        "id": 2,
        "tunnelOutsideIp": "108.128.158.18",
        "SourceIp": "169.254.42.170",
        "NeighborIp": "169.254.42.169"
      }
    ],
    "createdBy": "[email protected]",
    "createdAt": "2026-01-14T15:00:03.027502Z",
    "enabledAt": null
  }
]

Property | Type | Description
bgpRouting | Boolean | If true, the VPN connection uses Border Gateway Protocol, a protocol for efficiently routing traffic.
comments | String | Description of VPN connection configuration. Maximum 50 characters.
createdAt | String | Timestamp of VPN connection creation.
createdBy | String | Login of principal that created the VPN connection.
customerBgpAsn | String | ASN (Autonomous System Number) that identifies the tenant's network.
customerGatewayIps | Array | List of customer gateway IP addresses.
customerSubnetIps | Array | List of private IP addresses for NAT.
dnsResolverIps | Array | List of DNS resolver IP addresses.
enabledApps | Array | List of supported Intelligent Risk Platform applications. VPN for Data Bridge currently supports DATA_BRIDGE only.
enabledAt | String | Timestamp the VPN connection was enabled, or null.
encryptionKeyResourceId | Object | ID of encryption key.
encryptionType | String | Type of encryption, e.g. preSharedKey.
irpBgpAsn | String | ASN that identifies the Intelligent Risk Platform network.
irpSubnetIps | Array | List of private IP addresses for NAT.
tunnelSettings | Array | List of tunnel setting objects. Each includes id, tunnelOutsideIp, SourceIp, and NeighborIp.
vpnConnectionId | Integer | ID of VPN connection.
vpnStatus | String | Status of VPN connection. One of on or off. By default, off.

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the Admin, Consumer, Contributor, or Data Bridge Admin roles.

Get VPN Connection

The Get VPN Connection operation (GET /platform/tenantData/v1/vpnconnections/{vpnconnectionId}) returns detailed information about the specified VPN connection.

A VPN connection is a set of configurations that define a secure, encrypted site-to-site connection between a tenant's on-premise SQL Server instances and their managed server instances on Data Bridge.

This operation returns detailed information about a single VPN connection.

{
  "vpnConnectionId": 1,
  "customerGatewayIps": ["21.32.01.01/32", "25.32.01.01/32"],
  "comments": "Public IP using natting",
  "dnsResolverIps": ["56.01.01.01", "58.01.01.01"],
  "bgpRouting": false,
  "customerSubnetIps": ["10.01.01.01/16", "11.01.01.02/24", "12.01.01.03/8"],
  "tunnelSettings": [
    {
      "id": 1,
      "tunnelOutsideIp": "21.32.01.01/32"
    },
    {
      "id": 2,
      "tunnelOutsideIp": "31.32.01.01/32"
    }
  ],
  "irpSubnetIps": ["20.01.01.01/16", "21.01.01.02/24", "22.01.01.03/8"],
  "encryptionType": "preSharedKey",
  "encryptionKeyResourceId": 126,
  "enabledApps": ["DATA_BRIDGE"],
  "vpnStatus": "on"
}

The response returns the following information about the specified VPN connection:

Property | Type | Description
vpnConnectionId | Integer | ID of VPN connection.
customerGatewayIps | Array | List of customer gateway IP addresses.
comments | String | Description of VPN connection configuration. Maximum 50 characters.
dnsResolverIps | Array | List of DNS resolver IP addresses.
bgpRouting | Boolean | If true, the VPN connection uses Border Gateway Protocol.
customerSubnetIps | Array | List of private IP addresses for NAT.
tunnelSettings | Array | List of tunnel setting objects. Each includes id and tunnelOutsideIp.
irpSubnetIps | Array | List of private IP addresses for NAT.
encryptionType | String | Type of encryption, e.g. preSharedKey.
encryptionKeyResourceId | Object | ID of encryption key.
enabledApps | Array | List of supported Intelligent Risk Platform applications. VPN for Data Bridge currently supports DATA_BRIDGE only.
vpnStatus | String | Status of VPN connection. One of on, off, creating, deleting, failed_create, or failed_delete.

Polling VPN Status

This operation is particularly useful for retrieving the current vpnStatus of a VPN connection.

A VPN connection can only be used to send traffic between the tenant's network and the tenant's Data Bridge cluster when it has a vpnStatus of on. Use the Update VPN Connection operation to change the vpnStatus of the VPN connection.

At any one time the VPN connection is defined by one of the following VPN statuses: on, off, creating, deleting, failed_create, or failed_delete.

VPN Status | Description
on | VPN connection is available.
off | VPN connection is unavailable.
creating | A CREATE_VPN or UPDATE_VPN job is running.
deleting | A DELETE_VPN job is running.
failed_create | A CREATE_VPN or UPDATE_VPN job has failed.
failed_delete | A DELETE_VPN job has failed.

During a CREATE_VPN or UPDATE_VPN job, this operation may return the creating or failed_create values. During a DELETE_VPN job, it may return the deleting or failed_delete values.

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the Admin, Consumer, Contributor, or Data Bridge Admin roles.

Update VPN Connection

The Update VPN Connection operation (PATCH /platform/tenantData/v1/vpnconnections/{vpnconnectionId}) updates the specified VPN connection.

A VPN connection is a configuration that defines a secure, encrypted site-to-site tunnel that can be used to send data between a tenant's on-premise SQL Server instances and their managed server instances on Data Bridge.

The request can be used to update the following VPN connection properties: bgpRouting, comments, customerBgpAsn, customerGatewayIps, customerSubnetIps, enabledApps, encryptionKeyResourceId, encryptionType, and vpnStatus.

For example, the operation can be used to specify public IP addresses used to create the VPN connection tunnel:

{
  "customerGatewayIps": ["21.32.01.01/32", "25.32.01.01/32"],
  "comments": "Public IP using natting",
  "customerSubnetIps": ["10.01.01.01/16", "11.01.01.02/24", "12.01.01.03/8"]
}

This operation can also be used to change the encryption key used by the VPN connection:

{
  "encryptionType": "preSharedKey",
  "encryptionKeyResourceId": "/platform/tenantdata/v1/encryption-keys/35"
}

A common use case is toggling the vpnStatus of the VPN connection:

{
  "vpnStatus": "on"
}

By default, every new VPN connection has its vpnStatus set to off and cannot be used. If the vpnStatus is off, the encryption key becomes AVAILABLE and can be assigned to another VPN connection.

  • If on, the VPN connection can be used to connect on-premise SQL Server instances and managed servers on Data Bridge.
  • If off, the VPN connection is unavailable. Access to Data Bridge is managed by Data Bridge ACL. To learn more, see Administer Cluster Security in Data Bridge API.

The following VPN connection properties can be updated using this operation:

Property | Type | Description
bgpRouting | Boolean | If true, the VPN connection uses Border Gateway Protocol to efficiently route traffic.
comments | String | Description of VPN connection configuration. Limit of 50 characters.
customerBgpAsn | String | ASN (Autonomous System Number) that identifies the tenant's network. Required if bgpRouting is true.
customerGatewayIps | Array | List of customer gateway IP addresses.
customerSubnetIps | Array | List of customer subnet IP addresses. Private IPs for NAT (Network Address Translation) are non-routable, RFC-1918 addresses used to conserve public IP addresses.
enabledApps | Array | List of Intelligent Risk Platform applications. VPN for Data Bridge currently supports DATA_BRIDGE only.
encryptionKeyResourceId | String | Resource URI of an AVAILABLE encryption key. Encryption keys with a status of IN-USE or REVOKED cannot be assigned.
encryptionType | String | Type of encryption used to encrypt data traffic, i.e. preSharedKey.
vpnStatus | String | Status of VPN connection, i.e. on or off. By default, off.

On success, this operation generally adds an UPDATE_VPN job to the queue and returns a 202 Accepted HTTP status code in the response. Use the Search Tenant Data Jobs operation to poll the status of the job. If the only VPN connection property updated is the comments property, the operation returns a 200 OK HTTP status code in the response.

The client can use the Get VPN Connection operation to poll the status of the VPN connection. While the UPDATE_VPN job is running, the vpnStatus of the VPN connection changes to creating (as it does during a CREATE_VPN job). If the UPDATE_VPN job fails, the vpnStatus is set to failed_create.
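
A sketch of how a client might branch on the two success codes, assuming a requests-based Python client with placeholder host and credentials:

import time

import requests

HOST = "https://api.example.com"  # placeholder host
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

def update_vpn_connection(connection_id: int, patch_body: dict) -> None:
    """PATCH a VPN connection and, when a job is queued, wait for a terminal vpnStatus."""
    url = f"{HOST}/platform/tenantData/v1/vpnconnections/{connection_id}"
    response = requests.patch(url, json=patch_body, headers=HEADERS)
    response.raise_for_status()

    if response.status_code == 200:
        return  # comments-only update; no UPDATE_VPN job is queued

    # 202 Accepted: an UPDATE_VPN job is running; poll until the status settles.
    while True:
        vpn = requests.get(url, headers=HEADERS).json()
        if vpn["vpnStatus"] != "creating":
            break
        time.sleep(5)  # arbitrary polling interval
    if vpn["vpnStatus"] == "failed_create":
        raise RuntimeError("UPDATE_VPN job failed")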

To perform this operation, the client must have the IC-VPN entitlement and belong to a group that has been assigned the RI Admin or Data Bridge Admin roles.

Delete VPN Connection

The Delete VPN Connection operation (DELETE /platform/tenantData/v1/vpnconnections/{vpnconnectionId}) deletes the specified VPN connection.

A VPN connection defines a secure, encrypted VPN tunnel between a tenant's on-premise SQL Server instances and that tenant's Data Bridge cluster. The tenant can use this connection to send and receive data between on-premise SQL Server instances and managed server instances on Data Bridge.

This operation enables the tenant to delete a VPN connection.

To perform this operation, the client must belong to a group that has been assigned the Admin or Data Bridge Admin roles.

On success, this operation adds a DELETE_VPN job to the job queue and returns a 202 Accepted response. Use the Search Tenant Data Jobs operation to poll the status of the DELETE_VPN job.

The client can use the Get VPN Connection operation to poll the status of the VPN connection. While the DELETE_VPN job is running, the vpnStatus of the VPN connection changes to deleting. If the DELETE_VPN job fails, the vpnStatus is set to failed_delete.