Azure Monitor is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. This service helps you maximize the availability and performance of your apps.
All data collected by Azure Monitor fits into one of two fundamental types: metrics and logs.
To programmatically analyze these data sources, use the Azure Monitor Query client library to:
Key links:
```
npm install @azure/monitor-query
```
See our support policy for more details.
You can use the Azure Portal or the Azure CLI to create an Azure Monitor resource.
Instructions:
`LogsQueryClient` and `MetricsQueryClient` authenticate using a service principal. To authenticate via service principal, install the `@azure/identity` package and use `DefaultAzureCredential`:

```javascript
const { DefaultAzureCredential } = require("@azure/identity");
const { LogsQueryClient, MetricsQueryClient } = require("@azure/monitor-query");

const credential = new DefaultAzureCredential();

const logsQueryClient = new LogsQueryClient(credential);
// or
const metricsQueryClient = new MetricsQueryClient(credential);
```

More information about `@azure/identity` can be found here.
Azure Monitor Logs collects and organizes log and performance data from monitored resources. Data from different sources can be consolidated into a single workspace. Examples of data sources include:
Data collected by Azure Monitor Logs is stored in one or more Log Analytics workspaces. The workspace defines the:
Data from the disparate sources can be analyzed together using Kusto Query Language (KQL)—the same query language used by Azure Data Explorer. Data is retrieved from a Log Analytics workspace using a KQL query—a read-only request to process data and return results. For more information, see Log queries in Azure Monitor.
The `LogsQueryClient` allows you to query logs using the Kusto Query Language. This data can be queried in the portal using tables like `AppEvents`, `AppDependencies`, and others.
Azure Monitor Metrics collects numeric data from monitored resources into a time series database. Metrics are collected at regular intervals and describe some aspect of a system at a particular time. Metrics in Azure Monitor are lightweight and can support near real-time scenarios. They're useful for alerting and fast detection of issues. Metrics can be:
Each set of metric values is a time series with the following characteristics:
The `MetricsQueryClient` allows you to query metrics.
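As a rough offline sketch of the time-series idea described above (the types and sample points here are invented for illustration and are not the library's own), a metric time series is just timestamped values, and a question like "what is the most recent value?" is plain array work:

```typescript
// Simplified shape of one metric data point. Loosely mirrors the idea of a
// metric value (timestamp + aggregated number); invented for this sketch.
interface SamplePoint {
  timeStamp: Date;
  average?: number;
}

// Return the most recent point in a series, or undefined for an empty series.
function latestPoint(series: SamplePoint[]): SamplePoint | undefined {
  return series.reduce<SamplePoint | undefined>(
    (best, p) => (!best || p.timeStamp > best.timeStamp ? p : best),
    undefined
  );
}

// Invented sample data: three one-minute samples, deliberately out of order.
const series: SamplePoint[] = [
  { timeStamp: new Date("2023-01-01T00:00:00Z"), average: 10 },
  { timeStamp: new Date("2023-01-01T00:02:00Z"), average: 30 },
  { timeStamp: new Date("2023-01-01T00:01:00Z"), average: 20 },
];

const latest = latestPoint(series);
```

Real responses from `MetricsQueryClient` carry this same shape nested inside `metrics[].timeseries[].data`.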
The `LogsQueryClient` can be used to query a Monitor workspace using the Kusto Query Language. The `timespan.duration` can be specified as a string in an ISO8601 duration format. You can use the `Durations` constants provided for some commonly used ISO8601 durations.
```javascript
const { LogsQueryClient, Durations } = require("@azure/monitor-query");
const { DefaultAzureCredential } = require("@azure/identity");

const azureLogAnalyticsWorkspaceId = "<the Workspace Id for your Azure Log Analytics resource>";
const logsQueryClient = new LogsQueryClient(new DefaultAzureCredential());

async function run() {
  const kustoQuery = "AppEvents | limit 1";
  const result = await logsQueryClient.query(azureLogAnalyticsWorkspaceId, kustoQuery, {
    duration: Durations.TwentyFourHours
  });
  const tablesFromResult = result.tables;

  if (tablesFromResult == null) {
    console.log(`No results for query '${kustoQuery}'`);
    return;
  }

  console.log(`Results for query '${kustoQuery}'`);

  for (const table of tablesFromResult) {
    const columnHeaderString = table.columnDescriptors
      .map((column) => `${column.name}(${column.type}) `)
      .join("| ");
    console.log("| " + columnHeaderString);

    for (const row of table.rows) {
      const columnValuesString = row.map((columnValue) => `'${columnValue}' `).join("| ");
      console.log("| " + columnValuesString);
    }
  }
}

run().catch((err) => console.log("ERROR:", err));
```
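Duration strings like `"P1D"` and the `Durations` constants are ISO8601 durations. As an offline illustration of how such strings are put together (this helper is hand-rolled for the example, not part of the library), days go before the `T` designator and time components after it:

```typescript
// Build a simple ISO8601 duration string from days, hours, and minutes.
// Examples of the format: "P1D" (one day), "PT1H30M" (90 minutes).
function toIso8601Duration(days: number, hours: number, minutes: number): string {
  let out = "P";
  if (days) out += `${days}D`;
  if (hours || minutes) {
    out += "T"; // time components must follow the "T" designator
    if (hours) out += `${hours}H`;
    if (minutes) out += `${minutes}M`;
  }
  return out;
}
```

Any string produced this way is valid as a `timespan.duration` value, alongside the predefined `Durations` constants.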
The `query` API for `LogsQueryClient` returns the `LogsQueryResult`.
Here is a hierarchy of the response:
```
LogsQueryResult
|---statistics
|---visualization
|---error
|---status ("Partial" | "Success" | "Failed")
|---tables (list of `LogsTable` objects)
    |---name
    |---rows
    |---columnDescriptors (list of `LogsColumn` objects)
        |---name
        |---type
```
So, to handle a response with tables:

```javascript
const tablesFromResult = result.tables;

for (const table of tablesFromResult) {
  const columnHeaderString = table.columnDescriptors
    .map((column) => `${column.name}(${column.type}) `)
    .join("| ");
  console.log("| " + columnHeaderString);

  for (const row of table.rows) {
    const columnValuesString = row.map((columnValue) => `'${columnValue}' `).join("| ");
    console.log("| " + columnValuesString);
  }
}
```
A full sample can be found here.
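Per the hierarchy above, `status` can be `"Partial"`, `"Success"`, or `"Failed"`, and a partial result may still carry tables alongside an error. A small guard can decide whether a result's tables are worth consuming; the `ResultLike` shape below is a mock of just the two fields involved, invented for this sketch:

```typescript
type QueryStatus = "Partial" | "Success" | "Failed";

// Minimal mock of the response fields this guard inspects.
interface ResultLike {
  status: QueryStatus;
  tables?: unknown[];
}

// Tables are usable for successes and (with caution) partial results,
// but never for outright failures or empty responses.
function hasUsableTables(result: ResultLike): boolean {
  return result.status !== "Failed" && (result.tables?.length ?? 0) > 0;
}
```

A caller might log `result.error` whenever `status` is `"Partial"` before iterating the tables it did receive.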
```typescript
// setting optional parameters
const queryLogsOptions: LogsQueryOptions = {
  // explicitly control the amount of time the server can spend processing the query.
  serverTimeoutInSeconds: 60
};

const result = await logsQueryClient.query(
  azureLogAnalyticsWorkspaceId,
  kustoQuery,
  { duration: Durations.TwentyFourHours },
  queryLogsOptions
);

const tablesFromResult = result.tables;
```
The following example demonstrates sending multiple queries at the same time using the batch query API. The queries can be represented as a list of `BatchQuery` objects.
```typescript
export async function main() {
  if (!monitorWorkspaceId) {
    throw new Error("MONITOR_WORKSPACE_ID must be set in the environment for this sample");
  }

  const tokenCredential = new DefaultAzureCredential();
  const logsQueryClient = new LogsQueryClient(tokenCredential);

  const kqlQuery = "AppEvents | project TimeGenerated, Name, AppRoleInstance | limit 1";
  const queriesBatch = [
    {
      workspaceId: monitorWorkspaceId,
      query: kqlQuery,
      timespan: { duration: "P1D" }
    },
    {
      workspaceId: monitorWorkspaceId,
      query: "AzureActivity | summarize count()",
      timespan: { duration: "PT1H" }
    },
    {
      workspaceId: monitorWorkspaceId,
      query:
        "AppRequests | take 10 | summarize avgRequestDuration=avg(DurationMs) by bin(TimeGenerated, 10m), _ResourceId",
      timespan: { duration: "PT1H" }
    },
    {
      workspaceId: monitorWorkspaceId,
      query: "AppRequests | take 2",
      timespan: { duration: "PT1H" },
      includeQueryStatistics: true
    }
  ];

  const result = await logsQueryClient.queryBatch(queriesBatch);

  if (result.results == null) {
    throw new Error("No response for query");
  }

  let i = 0;
  for (const response of result.results) {
    console.log(`Results for query: '${queriesBatch[i].query}'`);
    if (response.error) {
      console.log(` Query had errors:`, response.error);
    } else {
      if (response.tables == null) {
        console.log(`No results for query`);
      } else {
        console.log(
          `Printing results from query '${queriesBatch[i].query}' for '${queriesBatch[i].timespan.duration}'`
        );
        for (const table of response.tables) {
          const columnHeaderString = table.columnDescriptors
            .map((column) => `${column.name}(${column.type}) `)
            .join("| ");
          console.log(columnHeaderString);

          for (const row of table.rows) {
            const columnValuesString = row.map((columnValue) => `'${columnValue}' `).join("| ");
            console.log(columnValuesString);
          }
        }
      }
    }
    // next query
    i++;
  }
}
```
The `queryBatch` API returns the `LogsQueryBatchResult`.
Here is a hierarchy of the response:
```
LogsQueryBatchResult
|---results (list of following objects)
    |---statistics
    |---visualization
    |---error
    |---status ("Partial" | "Success" | "Failed")
    |---tables (list of `LogsTable` objects)
        |---name
        |---rows
        |---columnDescriptors (list of `LogsColumn` objects)
            |---name
            |---type
```
To handle a batch response:
```typescript
let i = 0;
for (const response of result.results) {
  console.log(`Results for query: '${queriesBatch[i].query}'`);
  if (response.error) {
    console.log(` Query had errors:`, response.error);
  } else {
    if (response.tables == null) {
      console.log(`No results for query`);
    } else {
      console.log(
        `Printing results from query '${queriesBatch[i].query}' for '${queriesBatch[i].timespan.duration}'`
      );
      for (const table of response.tables) {
        const columnHeaderString = table.columnDescriptors
          .map((column) => `${column.name}(${column.type}) `)
          .join("| ");
        console.log(columnHeaderString);

        for (const row of table.rows) {
          const columnValuesString = row.map((columnValue) => `'${columnValue}' `).join("| ");
          console.log(columnValuesString);
        }
      }
    }
  }
  // next query
  i++;
}
```
A full sample can be found here.
For information on request throttling at the Log Analytics service level, see Rate limits.
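When the service throttles requests, a common client-side mitigation is to retry with exponential backoff. The sketch below computes only the delay schedule; the base and cap values are invented for the example, and this logic is not built into the clients shown here:

```typescript
// Delay (in milliseconds) before retry attempt n (0-based), doubling from
// a base value and clamped to a cap so delays don't grow without bound.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 32000): number {
  return Math.min(capMs, baseMs * Math.pow(2, attempt));
}
```

In practice, a retry loop would `await` a timer for `backoffDelayMs(attempt)` between query attempts, often with random jitter added to avoid synchronized retries.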
The following example gets metrics for an Azure Metrics Advisor subscription. The resource URI must be that of the resource for which metrics are being queried. It's normally of the format `/subscriptions/<id>/resourceGroups/<rg-name>/providers/<source>/topics/<resource-name>`.

To find the resource URI, navigate to your resource's page in the Azure portal and copy the value of the `id` property from the JSON View of the Overview blade.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { Durations, Metric, MetricsQueryClient } from "@azure/monitor-query";
import * as dotenv from "dotenv";
dotenv.config();

const metricsResourceId = process.env.METRICS_RESOURCE_ID;

export async function main() {
  const tokenCredential = new DefaultAzureCredential();
  const metricsQueryClient = new MetricsQueryClient(tokenCredential);

  if (!metricsResourceId) {
    throw new Error("METRICS_RESOURCE_ID must be set in the environment for this sample");
  }

  const iterator = metricsQueryClient.listMetricDefinitions(metricsResourceId);
  const metricNames: string[] = [];

  for await (const result of iterator) {
    console.log(` metricDefinitions - ${result.id}, ${result.name}`);
    if (result.name) {
      metricNames.push(result.name);
    }
  }

  const [firstMetricName, secondMetricName] = metricNames;

  if (firstMetricName && secondMetricName) {
    console.log(`Picking an example metric to query: ${firstMetricName} and ${secondMetricName}`);
    const metricsResponse = await metricsQueryClient.query(
      metricsResourceId,
      [firstMetricName, secondMetricName],
      {
        granularity: "PT1M",
        timespan: { duration: Durations.FiveMinutes }
      }
    );

    console.log(
      `Query cost: ${metricsResponse.cost}, granularity: ${metricsResponse.granularity}, time span: ${metricsResponse.timespan}`
    );

    const metrics: Metric[] = metricsResponse.metrics;
    console.log(`Metrics:`, JSON.stringify(metrics, undefined, 2));
    const metric = metricsResponse.getMetricByName(firstMetricName);
    console.log(`Selected Metric: ${firstMetricName}`, JSON.stringify(metric, undefined, 2));
  } else {
    console.error(`Metric names are not defined - ${firstMetricName} and ${secondMetricName}`);
  }
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
  process.exit(1);
});
```
In the preceding sample, the ordering of results for the metrics in the `metricsResponse` will match the order in which the user specifies the metric names in the `metricNames` array argument for the `query` method. If the user specifies `[firstMetricName, secondMetricName]`, the result for `firstMetricName` will appear before the result for `secondMetricName` in the `metricsResponse`.
The metrics query API returns a `QueryMetricsResult` object. The `QueryMetricsResult` object contains properties such as a list of `Metric`-typed objects, `granularity`, `namespace`, and `timespan`. The `Metric` objects list can be accessed using the `metrics` property. Each `Metric` object in this list contains a list of `TimeSeriesElement` objects. Each `TimeSeriesElement` contains `data` and `metadataValues` properties. In visual form, the object hierarchy of the response resembles the following structure:
```
QueryMetricsResult
|---cost
|---timespan (of type `TimeInterval`)
|---granularity
|---namespace
|---resourceRegion
|---metrics (list of `Metric` objects)
    |---id
    |---type
    |---name
    |---unit
    |---displayDescription
    |---errorCode
    |---timeseries (list of `TimeSeriesElement` objects)
        |---metadataValues
        |---data (list of data points represented by `MetricValue` objects)
            |---timeStamp
            |---average
            |---minimum
            |---maximum
            |---total
            |---count
|---getMetricByName(metricName): Metric | undefined (convenience method)
```
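The `getMetricByName` convenience method amounts to a name lookup over the `metrics` list. An offline re-creation over a mocked metrics array (the data and the simplified `MetricLike` type are invented for this sketch) shows the idea:

```typescript
// Minimal stand-in for the `Metric` objects in the response.
interface MetricLike {
  name: string;
}

// Same idea as QueryMetricsResult.getMetricByName: exact-name lookup,
// returning undefined when the metric is absent.
function getMetricByName<T extends MetricLike>(metrics: T[], name: string): T | undefined {
  return metrics.find((m) => m.name === name);
}

// Invented sample metrics for the lookup.
const mockMetrics: MetricLike[] = [{ name: "Ingress" }, { name: "Egress" }];
```

The `Metric | undefined` return type in the hierarchy above is why callers should check the result before dereferencing it.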
```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { Durations, Metric, MetricsQueryClient } from "@azure/monitor-query";
import * as dotenv from "dotenv";
dotenv.config();

const metricsResourceId = process.env.METRICS_RESOURCE_ID;

export async function main() {
  const tokenCredential = new DefaultAzureCredential();
  const metricsQueryClient = new MetricsQueryClient(tokenCredential);

  if (!metricsResourceId) {
    throw new Error("METRICS_RESOURCE_ID must be set in the environment for this sample");
  }

  console.log(`Picking an example metric to query: MatchedEventCount`);
  const metricsResponse = await metricsQueryClient.query(
    metricsResourceId,
    ["MatchedEventCount"],
    {
      timespan: { duration: Durations.FiveMinutes },
      granularity: "PT1M",
      aggregations: ["Count"]
    }
  );

  console.log(
    `Query cost: ${metricsResponse.cost}, granularity: ${metricsResponse.granularity}, time span: ${metricsResponse.timespan}`
  );

  const metrics: Metric[] = metricsResponse.metrics;
  for (const metric of metrics) {
    console.log(metric.name);
    for (const timeseriesElement of metric.timeseries) {
      for (const metricValue of timeseriesElement.data!) {
        if (metricValue.count !== 0) {
          console.log(`There are ${metricValue.count} matched events at ${metricValue.timeStamp}`);
        }
      }
    }
  }
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
  process.exit(1);
});
```
A full sample can be found here.
The same log query can be executed across multiple Log Analytics workspaces. In addition to the KQL query, the following parameters are required:

- `workspaceId` - The first (primary) workspace ID.
- `additionalWorkspaces` - A list of workspaces, excluding the workspace provided in the `workspaceId` parameter.

For example, the following query executes in three workspaces:
```typescript
const queryLogsOptions: LogsQueryOptions = {
  additionalWorkspaces: ["<workspace2>", "<workspace3>"]
};

const kustoQuery = "AppEvents | limit 10";

const result = await logsQueryClient.query(
  azureLogAnalyticsWorkspaceId,
  kustoQuery,
  { duration: Durations.TwentyFourHours },
  queryLogsOptions
);
```
To view the results for each workspace, use the `TenantId` column to either order the results or filter them in the Kusto query.

Order results by `TenantId`:

```
AppEvents | order by TenantId
```

Filter results by `TenantId`:

```
AppEvents | filter TenantId == "<workspace2>"
```
A full sample can be found here.

For more samples, see here: samples.
Enabling logging may help uncover useful information about failures. To see a log of HTTP requests and responses, set the `AZURE_LOG_LEVEL` environment variable to `info`. Alternatively, logging can be enabled at runtime by calling `setLogLevel` from the `@azure/logger` package:
```typescript
import { setLogLevel } from "@azure/logger";

setLogLevel("info");
```
For more detailed instructions on how to enable logs, you can look at the @azure/logger package docs.
The following samples show you the various ways you can query your Log Analytics workspace:

- `logsQuery.ts` - Query logs in a Monitor workspace
- `logsQueryMultipleWorkspaces.ts` - Query logs in multiple workspaces
- `logsQueryBatch.ts` - Run multiple queries, simultaneously, with a batch in a Monitor workspace
- `metricsQuery.ts` - Query metrics in a Monitor workspace

More in-depth examples can be found in the samples folder on GitHub.
If you'd like to contribute to this library, please read the contributing guide to learn more about how to build and test the code.
This module's tests are a mixture of live and unit tests, which require you to have an Azure Monitor instance. To execute the tests, you'll need to:

1. Run `rush update`
2. Run `rush build -t @azure/monitor-query`
3. `cd` into `sdk/monitor/monitor-query`
4. Copy the `sample.env` file to `.env`
5. Open the `.env` file in an editor and fill in the values
6. Run `npm run test`

View our tests folder for more details.