Azure Monitor Query client library for Python
=============================================
The Azure Monitor Query client library is used to execute read-only queries against `Azure Monitor `_\ 's two data platforms:
* `Logs `_ - Collects and organizes log and performance data from monitored resources. Data from different sources, such as platform logs from Azure services, log and performance data from virtual machine agents, and usage and performance data from apps, can be consolidated into a single `Azure Log Analytics workspace `_. The various data types can be analyzed together using the `Kusto Query Language `_.
* `Metrics `_ - Collects numeric data from monitored resources into a time series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time. Metrics are lightweight and capable of supporting near real-time scenarios, making them particularly useful for alerting and fast detection of issues.
**Resources:**
* `Source code `_
* `Package (PyPI) `_
* `API reference documentation `_
* `Service documentation `_
* `Samples `_
* `Change log `_
*Disclaimer*
----------------
*Azure SDK Python packages support for Python 2.7 is ending 01 January 2022. For more information and questions, please refer to https://github.com/Azure/azure-sdk-for-python/issues/20691*
Getting started
---------------
Prerequisites
^^^^^^^^^^^^^
* Python 2.7, or 3.6 or later
* An `Azure subscription `_
* To query Logs, you need an `Azure Log Analytics workspace `_.
* To query Metrics, you need an Azure resource of any kind (Storage Account, Key Vault, Cosmos DB, etc.).
Install the package
^^^^^^^^^^^^^^^^^^^
Install the Azure Monitor Query client library for Python with `pip `_\ :

.. code-block:: bash

   pip install azure-monitor-query

Create the client
^^^^^^^^^^^^^^^^^
An authenticated client is required to query Logs or Metrics. The library includes both synchronous and asynchronous forms of the clients. To authenticate, create an instance of a token credential. Use that instance when creating a ``LogsQueryClient`` or ``MetricsQueryClient``. The following examples use ``DefaultAzureCredential`` from the `azure-identity `_ package.
Synchronous clients
~~~~~~~~~~~~~~~~~~~
Consider the following example, which creates synchronous clients for both Logs and Metrics querying:

.. code-block:: python

   from azure.identity import DefaultAzureCredential
   from azure.monitor.query import LogsQueryClient, MetricsQueryClient

   credential = DefaultAzureCredential()
   logs_client = LogsQueryClient(credential)
   metrics_client = MetricsQueryClient(credential)

Asynchronous clients
~~~~~~~~~~~~~~~~~~~~
The asynchronous forms of the query client APIs are found in the ``.aio``\ -suffixed namespace. For example:

.. code-block:: python

   from azure.identity.aio import DefaultAzureCredential
   from azure.monitor.query.aio import LogsQueryClient, MetricsQueryClient

   credential = DefaultAzureCredential()
   async_logs_client = LogsQueryClient(credential)
   async_metrics_client = MetricsQueryClient(credential)

Execute the query
^^^^^^^^^^^^^^^^^
For examples of Logs and Metrics queries, see the `Examples <#examples>`_ section.
Key concepts
------------
Logs query rate limits and throttling
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Log Analytics service applies throttling when the request rate is too high. Limits, such as the maximum number of rows returned, are also applied on the Kusto queries. For more information, see `Rate and query limits `_.
If you're executing a batch logs query, a throttled request will return a ``LogsQueryError`` object. That object's ``code`` value will be ``ThrottledError``.
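As a minimal, self-contained sketch of that check (a stand-in class mimics the ``code``/``message`` shape of ``LogsQueryError``, since a real ``query_batch`` call requires credentials; the class and variable names here are illustrative):

```python
from dataclasses import dataclass

# Stand-in mimicking the code/message shape of LogsQueryError; with the
# real library, inspect the items returned by LogsQueryClient.query_batch.
@dataclass
class FakeLogsQueryError:
    code: str
    message: str

batch_results = [
    "a successful LogsQueryResult would appear here",
    FakeLogsQueryError(code="ThrottledError", message="The request was throttled."),
]

# Pick out the throttled entries by their error code.
throttled = [
    r for r in batch_results
    if isinstance(r, FakeLogsQueryError) and r.code == "ThrottledError"
]
for err in throttled:
    print(f"Throttled request: {err.message}")
```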
Metrics data structure
^^^^^^^^^^^^^^^^^^^^^^
Each set of metric values is a time series with the following characteristics:
* The time the value was collected
* The resource associated with the value
* A namespace that acts like a category for the metric
* A metric name
* The value itself
* Some metrics may have multiple dimensions as described in multi-dimensional metrics. Custom metrics can have up to 10 dimensions.
Examples
--------
* `Logs query <#logs-query>`_
* `Specify timespan <#specify-timespan>`_
* `Handle logs query response <#handle-logs-query-response>`_
* `Batch logs query <#batch-logs-query>`_
* `Advanced logs query scenarios <#advanced-logs-query-scenarios>`_
* `Set logs query timeout <#set-logs-query-timeout>`_
* `Query multiple workspaces <#query-multiple-workspaces>`_
* `Metrics query <#metrics-query>`_
* `Handle metrics query response <#handle-metrics-query-response>`_
* `Example of handling response <#example-of-handling-response>`_
Logs query
^^^^^^^^^^
This example shows how to execute a logs query. To handle the response and view it in tabular form, the `pandas `_ library is used. See the `samples `_ if you choose not to use pandas.
Specify timespan
~~~~~~~~~~~~~~~~
The ``timespan`` parameter specifies the time duration for which to query the data. This value can be one of the following:
* a ``timedelta``
* a ``timedelta`` and a start datetime
* a start datetime/end datetime
For example:

.. code-block:: python

   import os
   import pandas as pd
   from datetime import datetime, timezone
   from azure.core.exceptions import HttpResponseError
   from azure.identity import DefaultAzureCredential
   from azure.monitor.query import LogsQueryClient, LogsQueryStatus

   credential = DefaultAzureCredential()
   client = LogsQueryClient(credential)

   query = """AppRequests | take 5"""

   start_time = datetime(2021, 7, 2, tzinfo=timezone.utc)
   end_time = datetime(2021, 7, 4, tzinfo=timezone.utc)

   try:
       response = client.query_workspace(
           workspace_id=os.environ['LOG_WORKSPACE_ID'],
           query=query,
           timespan=(start_time, end_time)
       )
       if response.status == LogsQueryStatus.PARTIAL:
           # A partial result still contains data, along with an error
           # describing why the query did not complete.
           error = response.partial_error
           data = response.partial_data
           print(error.message)
       elif response.status == LogsQueryStatus.SUCCESS:
           data = response.tables
       for table in data:
           df = pd.DataFrame(data=table.rows, columns=table.columns)
           print(df)
   except HttpResponseError as err:
       print("something fatal happened")
       print(err)

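For reference, the three ``timespan`` shapes listed above can be constructed with the standard library alone; the variable names here are illustrative:

```python
from datetime import datetime, timedelta, timezone

# 1. A bare duration: query the most recent two days.
span_duration = timedelta(days=2)

# 2. A start datetime paired with a duration.
start = datetime(2021, 7, 2, tzinfo=timezone.utc)
span_start_plus_duration = (start, timedelta(days=2))

# 3. An explicit start/end pair, as used in the example above.
end = datetime(2021, 7, 4, tzinfo=timezone.utc)
span_start_end = (start, end)
```

Any of these values can be passed as the ``timespan`` argument.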
Handle logs query response
~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``query_workspace`` API returns either a ``LogsQueryResult`` or a ``LogsQueryPartialResult`` object. The ``batch_query`` API returns a list that may contain ``LogsQueryResult``\ , ``LogsQueryPartialResult``\ , and ``LogsQueryError`` objects. Here's a hierarchy of the response:

.. code-block::

   LogsQueryResult
   |---statistics
   |---visualization
   |---tables (list of `LogsTable` objects)
          |---name
          |---rows
          |---columns
          |---column_types
   LogsQueryPartialResult
   |---statistics
   |---visualization
   |---partial_error (a `LogsQueryError` object)
          |---code
          |---message
          |---status
   |---partial_data (list of `LogsTable` objects)
          |---name
          |---rows
          |---columns
          |---column_types

As a convenience, ``LogsQueryResult`` can be iterated over directly to access its tables. For example, to handle a logs query response with tables and display it using pandas:

.. code-block:: python

   response = client.query_workspace(...)
   for table in response:
       df = pd.DataFrame(table.rows, columns=table.columns)

A full sample can be found `here `_.
In a similar fashion, to handle a batch logs query response:

.. code-block:: python

   for result in response:
       if result.status == LogsQueryStatus.SUCCESS:
           for table in result:
               df = pd.DataFrame(table.rows, columns=table.columns)
               print(df)

A full sample can be found `here `_.
Batch logs query
^^^^^^^^^^^^^^^^
The following example demonstrates sending multiple queries at the same time using the batch query API. The queries can either be represented as a list of ``LogsBatchQuery`` objects or a dictionary. This example uses the former approach.

.. code-block:: python

   import os
   from datetime import timedelta, datetime, timezone
   import pandas as pd
   from azure.identity import DefaultAzureCredential
   from azure.monitor.query import LogsQueryClient, LogsBatchQuery, LogsQueryStatus

   credential = DefaultAzureCredential()
   client = LogsQueryClient(credential)

   requests = [
       LogsBatchQuery(
           query="AzureActivity | summarize count()",
           timespan=timedelta(hours=1),
           workspace_id=os.environ['LOG_WORKSPACE_ID']
       ),
       LogsBatchQuery(
           query="""bad query""",
           timespan=timedelta(days=1),
           workspace_id=os.environ['LOG_WORKSPACE_ID']
       ),
       LogsBatchQuery(
           query="""let Weight = 92233720368547758;
           range x from 1 to 3 step 1
           | summarize percentilesw(x, Weight * 100, 50)""",
           workspace_id=os.environ['LOG_WORKSPACE_ID'],
           timespan=(datetime(2021, 6, 2, tzinfo=timezone.utc), datetime(2021, 6, 5, tzinfo=timezone.utc)),  # (start, end)
           include_statistics=True
       ),
   ]
   results = client.query_batch(requests)

   for res in results:
       if res.status == LogsQueryStatus.FAILURE:
           # this will be a LogsQueryError
           print(res.message)
       elif res.status == LogsQueryStatus.PARTIAL:
           # this will be a LogsQueryPartialResult
           print(res.partial_error.message)
           for table in res.partial_data:
               df = pd.DataFrame(table.rows, columns=table.columns)
               print(df)
       elif res.status == LogsQueryStatus.SUCCESS:
           # this will be a LogsQueryResult
           table = res.tables[0]
           df = pd.DataFrame(table.rows, columns=table.columns)
           print(df)

Advanced logs query scenarios
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Set logs query timeout
~~~~~~~~~~~~~~~~~~~~~~
The following example shows setting a server timeout in seconds. A gateway timeout is raised if the query takes longer than the specified timeout. The default is 180 seconds, and the timeout can be set to up to 10 minutes (600 seconds).

.. code-block:: python

   import os
   from azure.identity import DefaultAzureCredential
   from azure.monitor.query import LogsQueryClient

   credential = DefaultAzureCredential()
   client = LogsQueryClient(credential)

   response = client.query_workspace(
       os.environ['LOG_WORKSPACE_ID'],
       "range x from 1 to 10000000000 step 1 | count",
       timespan=None,
       server_timeout=1,
   )

Query multiple workspaces
~~~~~~~~~~~~~~~~~~~~~~~~~
The same logs query can be executed across multiple Log Analytics workspaces. In addition to the Kusto query, the following parameters are required:
* ``workspace_id`` - The first (primary) workspace ID.
* ``additional_workspaces`` - A list of workspaces, excluding the workspace provided in the ``workspace_id`` parameter. The parameter's list items may consist of the following identifier formats:
* Qualified workspace names
* Workspace IDs
* Azure resource IDs
For example, the following query executes in three workspaces:

.. code-block:: python

   client.query_workspace(
       <workspace_id>,
       query,
       additional_workspaces=['<workspace 2>', '<workspace 3>']
   )

A full sample can be found `here `_.
Metrics query
^^^^^^^^^^^^^
The following example gets metrics for an Event Grid subscription. The resource URI is that of an Event Grid topic.
The resource URI must be that of the resource for which metrics are being queried. It's normally of the format ``/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/<resource-provider>/<resource-type>/<resource-name>``.