Import Small Databases
Import small databases to Data Bridge
Overview
The Data Bridge Databases API facilitates the transfer of small databases from on-premise server instances to managed server instances on Data Bridge.
This workflow is best suited to transferring relatively small databases as part of one-off migrations. Prior to upload, databases should be saved as database artifacts in the BAK, DACPAC, or MDF formats. For best results, the database artifact files should be less than 5GB in size.
Data Bridge import processes leverage Amazon Web Services (AWS) APIs so that tenants can upload database artifacts to Amazon S3 buckets. The Databases API enables principals with the appropriate permissions to acquire a pre-signed URL for an S3 bucket. The client can then use the AWS APIs to upload the database artifact to that bucket. Once the artifact is uploaded to Amazon S3, the tenant can use Databases API operations to import the database into a managed server instance on Data Bridge.
Database Size Limits
Authenticated principals with the appropriate role-based permissions (the Admin and Contributor roles) may import databases to and export databases from Data Bridge.
Uploading databases using the Databases API is limited to databases no larger than 100GB. To upload and import larger files (greater than 5GB in size) in multiple parts, use the Import Upload API. To learn more, see Import Large Databases.
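If you script these steps, a quick file-size test can route the artifact to the appropriate API before you start. The following is a minimal shell sketch, not part of the Databases API; the 5GB threshold comes from the guidance above, the file name is hypothetical, and GNU stat is assumed (on macOS, use stat -f %z instead).
# Route the database artifact to the appropriate API based on its size (GNU stat assumed).
FILE="myDatabase.bak"
SIZE=$(stat -c %s "$FILE")
LIMIT=$((5 * 1024 * 1024 * 1024))   # 5GB threshold noted above
if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "Larger than 5GB: upload in multiple parts with the Import Upload API"
else
  echo "5GB or smaller: use the Databases API workflow described here"
fi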
Step 1: Create database artifact
Data Bridge enables you to import databases that have been saved or exported as database artifacts. A database artifact is a copy of a database that has been saved to a particular format for backup, storage, or data transfer. Data Bridge APIs support processes for importing and exporting database artifacts in the BAK, MDF, or DACPAC formats.
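For example, a BAK artifact can be created from an on-premise SQL Server database with a standard BACKUP DATABASE statement. This is a minimal sketch only; sqlcmd, Windows authentication, and the myOnPremServer, myDatabase, and backup path names are assumptions, not part of Data Bridge.
# Create a BAK database artifact from an on-premise SQL Server database (hypothetical names).
sqlcmd -S myOnPremServer -E -Q "BACKUP DATABASE myDatabase TO DISK = N'C:\backups\myDatabase.bak' WITH FORMAT"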
Data Bridge enables you to import databases from on-premise SQL Server instances. All database artifacts must meet the following requirements:
- The database artifact name must use the .bak, .dacpac, or .mdf file extension.
- The database artifact name is limited to 80 characters, and can include only 0-9, A-Z, a-z, underscore (_), hyphen (-), and space characters.
- Each database artifact must have a unique file name that is not used by any other uploaded database in Risk Modeler. Databases with non-unique file names will generate an error during upload.
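Before uploading, you may want to sanity-check an artifact's file name against these rules. The following bash sketch is illustrative only and assumes the 80-character limit applies to the name without its extension; the service remains the authority on what it accepts.
# Check a database artifact name against the naming rules above (illustrative bash sketch).
NAME="my Database-2024.bak"
BASE="${NAME%.*}"                      # name without the file extension
RE='^[0-9A-Za-z_ -]+$'                 # digits, letters, underscore, hyphen, and space
if [ "${#BASE}" -le 80 ] && [[ "$BASE" =~ $RE ]] \
   && [[ "$NAME" == *.bak || "$NAME" == *.dacpac || "$NAME" == *.mdf ]]; then
  echo "Artifact name looks valid"
else
  echo "Artifact name violates the naming rules"
fi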
If you need to import a Moody's RMS EDM or RDM database, note that certain restrictions do apply. Data Bridge enables you to import database artifacts representing EDMs and RDMs managed in RiskLink/RiskBrowser versions 13.0, 15.0, 16.0, 17.0, and 18.0. During import to Data Bridge, EDMs and RDMs are automatically updated to Microsoft SQL Server 2019.
Step 2: Get server instance
The Get server instances operation (GET /databridge/v1/sql-instances) returns high-level information about a tenant's managed SQL Server instances.
The client must specify the name of a managed server instance when it imports a database.
curl --location -g --request GET '{host}/databridge/v1/sql-instances' \
--header 'Authorization: {api_key}'
A successful response returns a 200 OK status code and an array of server instances. For each server instance, the service returns the name, unique ID, and URL.
[
{
"name": "myServerInstance",
"uid": "4600d1f1-843c-4394-XXXX-458c5fa0aa17",
"endpoint": "a9baf781fb8ce40ebd3ff4dab4d77d79.XXX.XXXXX"
}
]
You can now use the name of the SQL Server instance (myServerInstance) to identify where to create the database on Data Bridge.
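If you are scripting the workflow, you can capture the instance name directly from this response. The sketch below is a convenience, not part of the API: it assumes jq is installed and that the first returned instance is the one you want.
# Capture the name of the first managed SQL Server instance (assumes jq is installed).
INSTANCE_NAME=$(curl --silent --location --request GET '{host}/databridge/v1/sql-instances' \
--header 'Authorization: {api_key}' | jq -r '.[0].name')
echo "$INSTANCE_NAME"   # myServerInstance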
Step 3: Acquire S3 bucket URL
The Import database operation (GET /databridge/v1/sql-instances/{instanceName}/databases/{databaseName}/import) enables you to fetch a pre-signed URL to an S3 bucket.
The operation takes two required path parameters: instanceName and databaseName. The instanceName parameter specifies the name of the managed SQL Server instance (for example, myServerInstance). The databaseName parameter specifies the name of the uploaded database artifact (for example, myDatabase) and the name of the database to be created on the specified server instance.
curl --request GET \
--url 'https://{host}/databridge/v1/sql-instances/myServerInstance/Databases/myDatabase/import?importFrom=0' \
--header 'Authorization: XXXXXXXXXX' \
--header 'accept: text/plain'
The required importFrom query parameter specifies the format of the database artifact to be uploaded to the S3 bucket: 0 for MDF, 1 for BAK, and 2 for DACPAC.
If successful, the response returns a 200 OK HTTP status code and a pre-signed URL for an S3 bucket.
{
  "mdfUri": "https://XXX-XXXXXXX-XXX-XX-XXXX-1.s3.eu-west-1.amazonaws.com/XXXXXXX/import/databridge/tmp/myDatabasef2021-06-02-17-50-57.mdf?X-Amz-Expires=XXXXXX&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXXXXXXXXXXXXXXXXX/XXXXXXXX/eu-west-1/s3/aws4_request&X-Amz-Date=20210603T005124Z&X-Amz-SignedHeaders=content-type;host&X-Amz-Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
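In a script, the pre-signed URL can be captured from this response for use in Step 4. This is a minimal sketch that assumes jq is installed and an MDF artifact (importFrom=0), so the URL is read from the mdfUri attribute shown above.
# Fetch the pre-signed S3 URL for an MDF artifact and store it for the upload step (assumes jq).
PRESIGNED_URL=$(curl --silent --request GET \
--url 'https://{host}/databridge/v1/sql-instances/myServerInstance/Databases/myDatabase/import?importFrom=0' \
--header 'Authorization: {api_key}' | jq -r '.mdfUri')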
Step 4: Upload database artifact to S3 bucket
The Data Bridge API does not expose an operation for uploading database artifacts to S3 buckets on Amazon S3. Rather, application clients must use the AWS APIs to upload database artifacts to S3 buckets.
Once you have uploaded the database artifact to the S3 bucket on Amazon S3, you can use Data Bridge API operations to import the database into the managed server instance on Data Bridge.
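For a single file, one common approach is a plain HTTP PUT to the pre-signed URL; the AWS SDKs are an alternative. The sketch below is illustrative only: it assumes the URL was captured as PRESIGNED_URL in Step 3, and the Content-Type value is an assumption that must match whatever the service signed (the example URL's X-Amz-SignedHeaders includes content-type).
# Upload the database artifact to the pre-signed S3 URL acquired in Step 3.
# The Content-Type header must match the value signed into the URL (assumed here).
curl --request PUT \
--upload-file 'myDatabase.mdf' \
--header 'Content-Type: application/octet-stream' \
"$PRESIGNED_URL"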
Step 5: Import database
The Import database operation (POST {host}/databridge/v1/sql-instances/{instanceName}/Databases/{databaseName}/import) imports the database artifact that was uploaded to the S3 bucket into the specified server instance.
The optional importFrom query parameter specifies the format of the database artifact: 0 for MDF, 1 for BAK, and 2 for DACPAC. If no value is specified, a BAK file is assumed.
curl --request POST \
--url 'https://{host}/databridge/v1/sql-instances/{instanceName}/Databases/{databaseName}/import?importFrom=2' \
--header 'Authorization: {api_key}' \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '
{
"groupIds": [
"5ee3051a-5508-4800-8af4-88324e4180f6",
"8e676a8f-63ed-4cc0-8678-9a20ce41a80a"
]
}
'
The operation also accepts a groupIds array that identifies the groups with which the imported database and its exposure set are shared.
A successful response returns a 202 Accepted status code and the id of the job in the Location response header.
{
"id": "5ee3051a-5508-4800-8af4-88324e4180f6"
}
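When scripting, capture the job ID for the status check in Step 6. This is a minimal sketch of the same request shown above, assuming jq is installed and that the ID is read from the id attribute of the response body.
# Submit the import and capture the job ID from the response body (assumes jq).
JOB_ID=$(curl --silent --request POST \
--url 'https://{host}/databridge/v1/sql-instances/{instanceName}/Databases/{databaseName}/import?importFrom=2' \
--header 'Authorization: {api_key}' \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '{ "groupIds": ["5ee3051a-5508-4800-8af4-88324e4180f6", "8e676a8f-63ed-4cc0-8678-9a20ce41a80a"] }' | jq -r '.id')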
Step 6: Check import job status
The Get job status operation (/databridge/v1/Jobs/{jobId}) retrieves information about a specific workflow job.
The jobId path parameter specifies the ID number of a workflow job. Use the value of the id attribute returned in Step 5:
curl --location -g --request GET '{host}/databridge/v1/Jobs/5ee3051a-5508-4800-8af4-88324e4180f6' \
--header 'Authorization: {api_key}'
When the job is complete, the status is set to SUCCESS and the response returns the job ID number.
{
  "jobId": "5ee3051a-5508-4800-8af4-88324e4180f6"
}
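To automate the check, poll the operation until the job reports success. This sketch assumes jq, a status attribute in the job response as described above, and an arbitrary 30-second polling interval.
# Poll the job until its status is SUCCESS (assumes jq and a status attribute in the response).
JOB_ID='5ee3051a-5508-4800-8af4-88324e4180f6'
until [ "$(curl --silent --location --request GET "{host}/databridge/v1/Jobs/$JOB_ID" \
--header 'Authorization: {api_key}' | jq -r '.status')" = "SUCCESS" ]; do
  sleep 30   # arbitrary polling interval
done
echo "Import job $JOB_ID completed"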
Step 7: Confirm the database was created
The Get databases by instance operation (/databridge/v1/sql-instances/{instanceName}/databases) enables client applications to retrieve a list of databases on the specified server instance. In this step, we'll use this operation to verify that we have successfully created our database.
The instanceName path parameter specifies the name of a managed server instance. In this example, we need to verify that we have created a database on the myServerInstance server instance.
curl --location --request GET 'https://{host}/databridge/v1/sql-instances/myServerInstance/databases' \
--header 'Authorization: {api_key}'
The service returns a 200 OK status code and a list of the databases on the specified SQL Server instance:
[
{
"name": "newDbconsumerbak",
"sizeInMb": "12"
},
{
"name": "myDatabase",
"sizeInMb": "14"
}
]
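If you are scripting this verification, you can filter the returned list for the new database name. This is a minimal sketch assuming jq; the -e flag makes jq exit non-zero when no match is found.
# Confirm that myDatabase appears in the list of databases on the instance (assumes jq).
curl --silent --location --request GET 'https://{host}/databridge/v1/sql-instances/myServerInstance/databases' \
--header 'Authorization: {api_key}' \
| jq -e '.[] | select(.name == "myDatabase")' > /dev/null \
&& echo "myDatabase was created" \
|| echo "myDatabase was not found"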