Upscaling and Downscaling Azure SQL Databases



Azure SQL is an important part of our platform for delivering solutions. However, most of the apps and services we provide do not rely on a live connection to our SQL databases. Typically we use Power BI as the front end for our apps, with the data preloaded into datasets and transformed using DAX.

This means that some of our databases don’t need to be very high spec. For everyday operation we find that running them at S0 (10 DTUs) or S1 (20 DTUs) is sufficient. We do, however, have to import substantial amounts of new data into our databases at regular intervals, and we need these operations to complete as quickly as possible. For this we have to scale the databases up to a much higher spec of at least S6 (400 DTUs).

It’s easy enough to do this manually using the Azure Portal, but it’s cumbersome to have to remember to do it every time, and impractical because we want our importer utilities to run automatically on a schedule or at the press of a button by a non-technical user. It’s also all too easy to forget to scale the database back down afterwards and only notice a few days later, having run up a substantial bill for the high-spec configuration.

It is possible to automate the process of scaling up the database at the start of a data import run and scaling it back down on completion. This can be done with a number of different tools, including Azure Data Factory (ADF), Power Automate and Azure Functions written in C#.

Here is a simple pipeline in ADF. The pipeline consists of just a single Web activity which calls an Azure Management REST API endpoint to scale a database. The pipeline accepts six parameters: Subscription, ResourceGroup, Server, Database, Scale and Tier.

Once these are in place, the Web activity settings need to be configured to reference them. The URL setting uses the following expression:

@concat('https://management.azure.com/subscriptions/', pipeline().parameters.Subscription, '/resourceGroups/', pipeline().parameters.ResourceGroup, '/providers/Microsoft.Sql/servers/', pipeline().parameters.Server, '/databases/', pipeline().parameters.Database, '?api-version=2017-10-01-preview')
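At runtime this evaluates to a management URL of the following form (placeholder values shown in braces):

https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}?api-version=2017-10-01-preview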

You can see how this is building the URL using the Subscription, ResourceGroup, Server and Database pipeline parameters.

Similarly, the body settings can reference the Scale and Tier pipeline parameters:

@concat('{"sku":{"name":"', pipeline().parameters.Scale, '","tier":"', pipeline().parameters.Tier, '"}, "location": "UK South"}')

The only other settings required are:

  • Method: PUT
  • Headers: Content-Type application/json
  • Authentication: MSI
  • Resource: https://management.azure.com/
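
To illustrate what the Web activity is doing, here is a minimal C# sketch of the equivalent REST call, assuming the Azure.Identity NuGet package and an identity with permission to update the database; the DbScaler class and ScaleDatabaseAsync method are hypothetical names used for illustration, not part of the pipeline itself:

    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;
    using Azure.Core;
    using Azure.Identity;

    public static class DbScaler
    {
        public static async Task ScaleDatabaseAsync(string subscription, string resourceGroup,
            string server, string database, string scale, string tier)
        {
            // Acquire a token for the Azure management plane (the equivalent of
            // MSI authentication with Resource set to https://management.azure.com/).
            var credential = new DefaultAzureCredential();
            AccessToken token = await credential.GetTokenAsync(
                new TokenRequestContext(new[] { "https://management.azure.com/.default" }), default);

            // Build the same URL and body as the ADF expressions above.
            string url = $"https://management.azure.com/subscriptions/{subscription}" +
                         $"/resourceGroups/{resourceGroup}/providers/Microsoft.Sql/servers/{server}" +
                         $"/databases/{database}?api-version=2017-10-01-preview";
            string body = $"{{\"sku\":{{\"name\":\"{scale}\",\"tier\":\"{tier}\"}},\"location\":\"UK South\"}}";

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token.Token);

            // PUT the new sku to the database's management endpoint.
            HttpResponseMessage response = await client.PutAsync(
                url, new StringContent(body, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }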

You can test that it works by triggering the pipeline; for example, scaling a database called PredictiveAnalysis_Live down to S0.

Once you are satisfied that the pipeline is successfully scaling the specified database to the correct performance configuration, you can include it in your import process. Since one ADF pipeline can invoke another (using the Execute Pipeline activity), you can call your scaling pipeline at the start and end of any import pipeline, for example scaling the database up to S6 at the start and back down to S0 at the end.

Using the pipeline in Power Automate is also quite straightforward thanks to the Create a pipeline run action. You simply configure the name of the data factory, the name of the pipeline, and pass the required parameters in JSON format.
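
The parameters JSON might look something like this (the values shown are placeholders for illustration):

    {
      "Subscription": "{subscription-id}",
      "ResourceGroup": "{resource-group}",
      "Server": "{server-name}",
      "Database": "PredictiveAnalysis_Live",
      "Scale": "S6",
      "Tier": "Standard"
    }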

The only complication is that it is not sufficient to simply call the pipeline and then proceed to the other actions in the flow, because the scale up/down can take several minutes to complete. To handle this you can use a Do until control to check the status of the pipeline run every 30 seconds until it has completed.

The Get a pipeline run action uses the RunId returned by the earlier Create a pipeline run action, and the Status it returns is then used to check whether the run is complete (for example, whether it equals Succeeded).

You might also want to use this technique from Azure Functions. If you have created the flow above, you can trigger it with a new message in an Azure Storage queue and the flow will call the scaling pipeline. You can write a message to the queue in your function app (C# code below).

            // Requires the classic storage SDK (the WindowsAzure.Storage NuGet package):
            // using Microsoft.WindowsAzure.Storage;
            // using Microsoft.WindowsAzure.Storage.Auth;
            // using Microsoft.WindowsAzure.Storage.Queue;

            // Build storage credentials from the account name and access key,
            // then get a reference to the storage account (over HTTPS).
            var storageCredentials = new StorageCredentials("{my storage account}", "{my access key}");
            CloudStorageAccount cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);

            // Create the queue client.
            CloudQueueClient queueClient = cloudStorageAccount.CreateCloudQueueClient();

            // Retrieve a reference to the queue.
            CloudQueue queue = queueClient.GetQueueReference("db-scale-queue");

            // Create the queue if it doesn't already exist.
            queue.CreateIfNotExists();

            // Create a message containing the database name and target scale, and add it to the queue.
            string messageContent = string.Concat("{", $"\"database\": \"{database}\", \"scale\": \"{scale}\"", "}");
            CloudQueueMessage qmessage = new CloudQueueMessage(messageContent);
            queue.AddMessage(qmessage);
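
If you are using the newer Azure.Storage.Queues (v12) SDK instead, the equivalent is a little shorter. This is just a minimal sketch, with the connection string placeholder left as an assumption:

            // using Azure.Storage.Queues;

            // Connect to the queue with a connection string, create it if
            // necessary, and enqueue the same JSON message.
            var queue = new QueueClient("{my connection string}", "db-scale-queue");
            queue.CreateIfNotExists();
            queue.SendMessage($"{{\"database\": \"{database}\", \"scale\": \"{scale}\"}}");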