Storing compute logs in Azure Blob Storage/Azure Data Lake Storage#

In this guide, we'll walk through how to store compute logs in Azure Blob Storage or Azure Data Lake Storage. This guide assumes you have already set up an Azure Kubernetes Service (AKS) agent and deployed user code in Azure Container Registry (ACR).

This guide focuses on using Azure Blob Storage, but the same steps apply to Azure Data Lake Storage.

If you have not yet set up an AKS agent, you can follow the Deploy an Azure Kubernetes Service (AKS) agent guide. If you have not yet deployed user code in ACR, you can follow the Deploy user code in Azure Container Registry (ACR) guide.

Prerequisites#

To complete the steps in this guide, you'll need:

  • The Azure CLI installed on your machine. See Microsoft's Azure CLI installation documentation if you need to install it.
  • An Azure account with the ability to create resources in Azure Blob Storage or Azure Data Lake Storage.
  • An Azure container in Azure Blob Storage or Azure Data Lake Storage where you want to store logs. If you don't have one yet, see the example after this list.
  • Either the quickstart_etl module from the hybrid quickstart repo, or any other code location successfully imported, which contains at least one asset or job that will generate logs for you to test against.
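
If you still need a container, you can create one with the Azure CLI. This is a minimal sketch: <storage-account> and <container> are placeholders, and --auth-mode login assumes your Azure user has data-plane access to the storage account.

az storage container create --account-name <storage-account> --name <container> --auth-mode login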

Step 1: Give AKS agent access to blob storage account#

We need to ensure that the AKS agent has the necessary permissions to write logs to Azure Blob Storage or Azure Data Lake Storage. We'll do this with a few Azure CLI commands.

First, we'll enable workload identity and the OIDC issuer on the cluster. This will allow the AKS agent to use a managed identity to access Azure resources.

az aks update --resource-group <resource-group> --name <cluster-name> --enable-workload-identity --enable-oidc-issuer
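
You can confirm that the OIDC issuer is available before moving on; the federated credential step below depends on it. A quick check, assuming the same resource group and cluster name:

az aks show --resource-group <resource-group> --name <cluster-name> --query "oidcIssuerProfile.issuerUrl" -o tsv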

Then, we'll create a new managed identity for the AKS agent, and a new service account in our AKS cluster.

az identity create --resource-group <resource-group> --name agent-identity
kubectl create serviceaccount dagster-agent-service-account --namespace dagster-agent
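
Depending on your setup, the workload identity webhook may also expect the service account to be annotated with the managed identity's client ID so it knows which identity to exchange tokens for. A hedged sketch, reusing the names above:

# Look up the client ID of the managed identity created above
CLIENT_ID=$(az identity show --resource-group <resource-group> --name agent-identity --query clientId -o tsv)

# Annotate the Kubernetes service account with that client ID
kubectl annotate serviceaccount dagster-agent-service-account \
  --namespace dagster-agent \
  azure.workload.identity/client-id="$CLIENT_ID"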

Now we need to federate the managed identity with the service account.

az identity federated-credential create \
  --name dagster-agent-federated-id \
  --identity-name agent-identity \
  --resource-group <resource-group> \
  --issuer $(az aks show -g <resource-group> -n <aks-cluster-name> --query "oidcIssuerProfile.issuerUrl" -otsv) \
  --subject system:serviceaccount:dagster-agent:dagster-agent-service-account
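
Federation only establishes who the agent is; the managed identity still needs permission to write to the storage account. A hedged example, assuming the Storage Blob Data Contributor role is appropriate for your setup and using placeholder IDs:

az role assignment create \
  --assignee $(az identity show --resource-group <resource-group> --name agent-identity --query principalId -o tsv) \
  --role "Storage Blob Data Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>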

Finally, we'll edit our AKS agent deployment to use the new service account.

kubectl edit deployment <your-user-cloud-deployment> -n dagster-agent

In the deployment manifest, add the following lines:

spec:
  ...
  template:
    metadata:
      ...
      labels:
        ...
        azure.workload.identity/use: "true"
    spec:
      ...
      serviceAccountName: dagster-agent-service-account

If everything is set up correctly, you should be able to run the following command and see an access token returned:

kubectl exec -n dagster-agent -it <pod-in-cluster> -- bash
# in the pod
curl -H "Metadata:true" "http://169.254.169.254/metadata/identity/oauth2/token?resource=https://storage.azure.com/"

Step 2: Configure Dagster to use Azure Blob Storage#

Now, you need to update the Helm values for your user-cloud deployment to use Azure Blob Storage for logs.

Pull down the current values for your deployment:

helm get values user-cloud -n dagster-agent > current-values.yaml

Then, edit the current-values.yaml file to include the following lines:

computeLogs:
  enabled: true
  custom:
    module: dagster_azure.blob.compute_log_manager
    class: AzureBlobComputeLogManager
    config:
      storage_account: mystorageaccount
      container: mycontainer
      default_azure_credential:
        exclude_environment_credential: false
      prefix: dagster-logs-
      local_dir: "/tmp/cool"
      upload_interval: 30
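
Note that AzureBlobComputeLogManager is provided by the dagster-azure package, so the package needs to be importable wherever compute logs are captured (the agent image and, depending on your setup, your user code images). If it isn't already installed, a minimal sketch for a pip-based image:

pip install dagster-azure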

Finally, update your deployment with the new values:

helm upgrade user-cloud dagster-cloud/dagster-cloud-agent -n dagster-agent -f current-values.yaml
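
If you want to confirm the agent pods have restarted with the new configuration, you can watch the rollout, using the same deployment placeholder as earlier:

kubectl rollout status deployment/<your-user-cloud-deployment> -n dagster-agent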

Step 3: Verify logs are being written to Azure Blob Storage#

It's time to kick off a run in Dagster to test your new configuration. If following along with the quickstart repo, you should be able to kick off a run of the all_assets_job, which will generate logs for you to test against. Otherwise, use any job that emits logs. When you go to the stdout/stderr window of the run page, you should see a log file that directs you to the Azure Blob Storage container.

Azure Blob Storage logs in Dagster
Whether the URL is clickable depends on whether your logs are public or private. If they are private, clicking the link directly won't work; instead, use the Azure CLI or the Azure portal to access the logs at that URL.
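
For private containers, you can also confirm the upload from the command line. A hedged example using the sample storage account, container, and prefix from the configuration above:

az storage blob list \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --prefix dagster-logs- \
  --auth-mode login \
  --output table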