Example

This example demonstrates how to use this Action Agent to set up an Azure Data Lake and save Pump data to it.

Refer to Configuration for a description of all of this Agent's configuration options.

Step 1: Add the Agent

Drag the Azure Data Lake Action Agent onto the canvas, link the input endpoint to the pump data, and the output to the aggregation. Rename the Agent and save the Data Stream.

Step 2: Configure General

Select the Agent and click Configure. Keep the default Collection.

Step 3: Configure Upload Schedule

Set the interval to 60 seconds to upload a batch of data every minute.

Step 4: Configure Authentication

Enter the authentication details: the Subscription Id, Tenant Id, Client Id, and Client Secret.
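
The Agent handles authentication with these values once they are entered. If you would like to confirm the service principal works before configuring it here, a quick check can be done with the Azure SDK for Python. This is a minimal sketch only, assuming the azure-identity package is installed; the placeholder values are illustrative and not part of the Agent.

```python
# Minimal sketch (not part of the Agent): verify the service principal details
# before entering them, using the azure-identity package.
from azure.identity import ClientSecretCredential

tenant_id = "<tenant-id>"          # Azure AD tenant of the service principal
client_id = "<client-id>"          # application (client) Id of the service principal
client_secret = "<client-secret>"  # client secret generated for the service principal

credential = ClientSecretCredential(tenant_id, client_id, client_secret)

# Requesting a management-plane token confirms the credentials are valid.
token = credential.get_token("https://management.azure.com/.default")
print("Token acquired, expires at:", token.expires_on)
```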

Step 5: Create the Azure Storage Account (optional)

Refer to Azure resource creation for instructions on enabling this link, or skip to Step 7.

We will create a new storage account, rather than selecting a pre-existing one. Click 'Create a new Storage Account'.

Select a Resource Group and Resource Location, and enter a name for the new Storage Account. Click Create Storage Account. Be patient, as it may take several minutes to complete.

Once the operation succeeds, click Apply to populate the configuration page with the new Storage Account's settings.
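
For reference, the same Storage Account could be created outside XMPro with the Azure management SDK. The sketch below only approximates what the dialog does, assuming the azure-mgmt-storage package; the resource group, location, and account name are placeholders.

```python
# Minimal sketch, not the Agent's implementation: create a Storage Account with
# the azure-mgmt-storage package. All names below are placeholders.
from azure.identity import ClientSecretCredential
from azure.mgmt.storage import StorageManagementClient

credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
client = StorageManagementClient(credential, "<subscription-id>")

# Creation is a long-running operation; begin_create returns a poller.
poller = client.storage_accounts.begin_create(
    resource_group_name="my-resource-group",
    account_name="mypumpdatalake",  # must be globally unique, lowercase letters and digits
    parameters={
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
        "location": "eastus",
    },
)
account = poller.result()  # blocks until provisioning finishes, which can take minutes
print("Created storage account:", account.name)
```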

Step 6: Create the Blob Container (optional)

Refer to Azure resource creation for instructions on enabling this link, or skip to Step 7.

In this case, the newly created Storage Account does not yet contain any Blob Containers, so click 'Create a new Blob Container' to create one.

Enter a name, then click Create Container. Please be patient, as it may take several minutes to complete.

Once the operation succeeds, click Apply to select the new Container on the configuration page.
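
The equivalent operation outside XMPro is a single call with the Azure Storage SDK. The sketch below is illustrative only, assuming the azure-storage-blob package and a service principal with a data-plane role such as Storage Blob Data Contributor; the account URL and container name are placeholders.

```python
# Minimal sketch, not the Agent's implementation: create a Blob Container with
# the azure-storage-blob package. Account URL and container name are placeholders.
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
service = BlobServiceClient(
    account_url="https://mypumpdatalake.blob.core.windows.net",
    credential=credential,
)

# Container names must be lowercase letters, digits, and hyphens (3-63 characters).
container = service.create_container("pump-data")
print("Created container:", container.container_name)
```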

Step 7: Configure File Details

Select the columns to output. In this case, select all the columns.

Select the preferred file, date, and time formats to use when creating the destination files in Azure Data Lake. In this case:

  • File Format: JSON

  • Date Format: yyyy/MM/dd

  • Time Format: HHmmss

Enter a Custom File Name, in this case "pump_{pumpId}\{date}\{time}", and map the {pumpId} tag to the PumpID column.
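
As a rough illustration of how this pattern might resolve, the sketch below builds the path one pump event could be written under, assuming the Agent substitutes the mapped PumpID value for the {pumpId} tag and applies the date and time formats chosen above. The event values and column names are made up, and the exact layout is determined by the Agent, so treat this only as an approximation.

```python
# Illustrative sketch only: approximate how the Custom File Name
# "pump_{pumpId}\{date}\{time}" could resolve for a single JSON file,
# using Date Format yyyy/MM/dd and Time Format HHmmss.
from datetime import datetime, timezone

event = {"PumpID": "P-17", "Pressure": 3.2}   # placeholder event
received = datetime(2024, 5, 21, 9, 30, 15, tzinfo=timezone.utc)

pump_id = event["PumpID"]                  # mapped to the {pumpId} tag
date_part = received.strftime("%Y/%m/%d")  # yyyy/MM/dd
time_part = received.strftime("%H%M%S")    # HHmmss

path = f"pump_{pump_id}\\{date_part}\\{time_part}.json"
print(path)  # pump_P-17\2024/05/21\093015.json
```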

Step 8: Results

Apply the changes, save the Data Stream, and publish it.

Open the Live View and observe the data flowing through to the Azure Data Lake.

The single record printed by the Event Printer shows that 590 events have been processed.

Finally, let's inspect the Azure Storage Account. You will find the files in a nested folder in the specified Container, with a file name structure relating to the Stream Object's unique Id and the time the events were received.
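
If you prefer to check the destination from code rather than the Azure portal, listing the blobs in the Container is a quick way to do it. This is a minimal sketch, assuming the azure-storage-blob package; the account URL and container name are placeholders.

```python
# Minimal sketch: list the files the Agent has uploaded to the Container,
# using the azure-storage-blob package. Names below are placeholders.
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
service = BlobServiceClient(
    account_url="https://mypumpdatalake.blob.core.windows.net",
    credential=credential,
)
container = service.get_container_client("pump-data")

# Each uploaded batch appears as a blob within the nested folder structure.
for blob in container.list_blobs():
    print(blob.name, blob.size)
```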

Files

See the Import, Export, and Clone - XMPro article for steps to import a Data Stream.
