azure.cognitiveservices.vision.computervision.models module
exception azure.cognitiveservices.vision.computervision.models.ComputerVisionErrorException(deserialize, response, *args)
Bases: msrest.exceptions.HttpOperationError
Server responded with exception of type: ‘ComputerVisionError’.
- Parameters
deserialize – A deserializer
response – Server response to be deserialized.
class azure.cognitiveservices.vision.computervision.models.AdultInfo(*, is_adult_content: bool = None, is_racy_content: bool = None, is_gory_content: bool = None, adult_score: float = None, racy_score: float = None, gore_score: float = None, **kwargs)
Bases: msrest.serialization.Model
An object describing whether the image contains adult-oriented content and/or is racy.
- Parameters
is_adult_content (bool) – A value indicating if the image contains adult-oriented content.
is_racy_content (bool) – A value indicating if the image is racy.
is_gory_content (bool) – A value indicating if the image is gory.
adult_score (float) – Score from 0 to 1 that indicates how much the content is considered adult-oriented within the image.
racy_score (float) – Score from 0 to 1 that indicates how suggestive the image is.
gore_score (float) – Score from 0 to 1 that indicates how gory the image is.
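Example – a minimal sketch of constructing an AdultInfo model directly; in normal use the service populates it as the adult attribute of an ImageAnalysis result, and the values below are purely illustrative.

from azure.cognitiveservices.vision.computervision.models import AdultInfo

# All constructor arguments are keyword-only and optional.
info = AdultInfo(is_adult_content=False, is_racy_content=False, is_gory_content=False,
                 adult_score=0.01, racy_score=0.05, gore_score=0.0)
print(info.adult_score)  # 0.01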
class azure.cognitiveservices.vision.computervision.models.AnalyzeResults(*, version: str, read_results, **kwargs)
Bases: msrest.serialization.Model
Analyze batch operation result.
All required parameters must be populated in order to send to Azure.
- Parameters
version (str) – Required. Version of schema used for this result.
read_results (list[ReadResult]) – Required. Text extracted from the input.
class azure.cognitiveservices.vision.computervision.models.AreaOfInterestResult(*, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
Result of AreaOfInterest operation.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
area_of_interest (BoundingRect) – A bounding box for an area of interest inside an image.
- Parameters
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
class azure.cognitiveservices.vision.computervision.models.BoundingRect(*, x: int = None, y: int = None, w: int = None, h: int = None, **kwargs)
Bases: msrest.serialization.Model
A bounding box for an area inside an image.
class azure.cognitiveservices.vision.computervision.models.Category(*, name: str = None, score: float = None, detail=None, **kwargs)
Bases: msrest.serialization.Model
An object describing identified category.
- Parameters
name (str) – Name of the category.
score (float) – Scoring of the category.
detail (CategoryDetail) – Details of the identified category.
class azure.cognitiveservices.vision.computervision.models.CategoryDetail(*, celebrities=None, landmarks=None, **kwargs)
Bases: msrest.serialization.Model
An object describing additional category details.
- Parameters
celebrities (list[CelebritiesModel]) – An array of celebrities, if any are identified.
landmarks (list[LandmarksModel]) – An array of landmarks, if any are identified.
class azure.cognitiveservices.vision.computervision.models.CelebritiesModel(*, name: str = None, confidence: float = None, face_rectangle=None, **kwargs)
Bases: msrest.serialization.Model
An object describing possible celebrity identification.
- Parameters
name (str) – Name of the celebrity.
confidence (float) – Confidence level for the celebrity recognition as a value ranging from 0 to 1.
face_rectangle (FaceRectangle) – Location of the identified face in the image.
class azure.cognitiveservices.vision.computervision.models.CelebrityResults(*, celebrities=None, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
Result of domain-specific classifications for the domain of celebrities.
- Parameters
celebrities (list[CelebritiesModel]) – List of celebrities recognized in the image.
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
class azure.cognitiveservices.vision.computervision.models.ColorInfo(*, dominant_color_foreground: str = None, dominant_color_background: str = None, dominant_colors=None, accent_color: str = None, is_bw_img: bool = None, **kwargs)
Bases: msrest.serialization.Model
An object providing additional metadata describing color attributes.
- Parameters
dominant_color_foreground (str) – Possible dominant foreground color.
dominant_color_background (str) – Possible dominant background color.
dominant_colors (list[str]) – An array of possible dominant colors.
accent_color (str) – Possible accent color.
is_bw_img (bool) – A value indicating if the image is black and white.
class azure.cognitiveservices.vision.computervision.models.ComputerVisionError(*, code, message: str, request_id: str = None, **kwargs)
Bases: msrest.serialization.Model
Details about the API request error.
All required parameters must be populated in order to send to Azure.
class azure.cognitiveservices.vision.computervision.models.DetectedBrand(**kwargs)
Bases: msrest.serialization.Model
A brand detected in an image.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
name (str) – Label for the brand.
confidence (float) – Confidence score of having observed the brand in the image, as a value ranging from 0 to 1.
rectangle (BoundingRect) – Approximate location of the detected brand.
class azure.cognitiveservices.vision.computervision.models.DetectedObject(*, object_property: str = None, confidence: float = None, parent=None, **kwargs)
Bases: msrest.serialization.Model
An object detected in an image.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
rectangle (BoundingRect) – Approximate location of the detected object.
- Parameters
object_property (str) – Label for the object.
confidence (float) – Confidence score of having observed the object in the image, as a value ranging from 0 to 1.
parent (ObjectHierarchy) – The parent object, from a taxonomy perspective. The parent object is a more generic form of this object. For example, a ‘bulldog’ would have a parent of ‘dog’.
class azure.cognitiveservices.vision.computervision.models.DetectResult(*, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
Result of a DetectImage call.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
objects (list[DetectedObject]) – An array of detected objects.
- Parameters
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
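Example – a sketch of obtaining a DetectResult from the client's detect_objects operation; the endpoint, key, and image URL are placeholders you must supply.

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("<endpoint>", CognitiveServicesCredentials("<subscription-key>"))

detect_result = client.detect_objects("<image-url>")  # returns DetectResult
for obj in detect_result.objects:                     # list of DetectedObject
    r = obj.rectangle                                  # BoundingRect
    print(obj.object_property, obj.confidence, (r.x, r.y, r.w, r.h))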
class azure.cognitiveservices.vision.computervision.models.DomainModelResults(*, result=None, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
Result of image analysis using a specific domain model including additional metadata.
- Parameters
result (object) – Model-specific response.
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
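Example – a sketch of a domain-specific analysis call returning a DomainModelResults; for the 'landmarks' model the model-specific result is a dict keyed by 'landmarks' (client constructed as in the sketch above; the image URL is a placeholder).

domain_result = client.analyze_image_by_domain("landmarks", "<image-url>")
for landmark in domain_result.result["landmarks"]:  # model-specific payload
    print(landmark["name"], landmark["confidence"])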
class azure.cognitiveservices.vision.computervision.models.FaceDescription(*, age: int = None, gender=None, face_rectangle=None, **kwargs)
Bases: msrest.serialization.Model
An object describing a face identified in the image.
- Parameters
age (int) – Possible age of the face.
gender (str or Gender) – Possible gender of the face. Possible values include: ‘Male’, ‘Female’
face_rectangle (FaceRectangle) – Rectangle in the image containing the identified face.
class azure.cognitiveservices.vision.computervision.models.FaceRectangle(*, left: int = None, top: int = None, width: int = None, height: int = None, **kwargs)
Bases: msrest.serialization.Model
An object describing face rectangle.
- Parameters
left (int) – X-coordinate of the top left point of the face, in pixels.
top (int) – Y-coordinate of the top left point of the face, in pixels.
width (int) – Width measured from the top-left point of the face, in pixels.
height (int) – Height measured from the top-left point of the face, in pixels.
class azure.cognitiveservices.vision.computervision.models.ImageAnalysis(*, categories=None, adult=None, color=None, image_type=None, tags=None, description=None, faces=None, objects=None, brands=None, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
Result of AnalyzeImage operation.
- Parameters
categories (list[Category]) – An array indicating identified categories.
adult (AdultInfo) – An object describing whether the image contains adult-oriented content and/or is racy.
color (ColorInfo) – An object providing additional metadata describing color attributes.
image_type (ImageType) – An object providing possible image types and matching confidence levels.
tags (list[ImageTag]) – A list of tags with confidence level.
description (ImageDescriptionDetails) – A collection of content tags, along with a list of captions sorted by confidence level, and image metadata.
faces (list[FaceDescription]) – An array of possible faces within the image.
objects (list[DetectedObject]) – Array of objects describing what was detected in the image.
brands (list[DetectedBrand]) – Array of brands detected in the image.
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
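Example – a sketch of how an ImageAnalysis result is typically obtained from the analyze_image operation with a list of VisualFeatureTypes values; endpoint, key, and URL are placeholders.

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("<endpoint>", CognitiveServicesCredentials("<subscription-key>"))
analysis = client.analyze_image(
    "<image-url>",
    visual_features=[VisualFeatureTypes.description,
                     VisualFeatureTypes.adult,
                     VisualFeatureTypes.objects])

print(analysis.description.captions[0].text)  # best ImageCaption
print(analysis.adult.is_adult_content)        # AdultInfo
print(len(analysis.objects))                  # list of DetectedObject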
class azure.cognitiveservices.vision.computervision.models.ImageCaption(*, text: str = None, confidence: float = None, **kwargs)
Bases: msrest.serialization.Model
An image caption, i.e. a brief description of what the image depicts.
class azure.cognitiveservices.vision.computervision.models.ImageDescription(*, tags=None, captions=None, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
A collection of content tags, along with a list of captions sorted by confidence level, and image metadata.
- Parameters
captions (list[ImageCaption]) – A list of captions, sorted by confidence level.
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
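Example – a sketch of retrieving an ImageDescription via describe_image; the max_candidates argument (assumed here from SDK samples) limits how many captions come back (client constructed as in the sketches above).

description = client.describe_image("<image-url>", max_candidates=3)
for caption in description.captions:  # list of ImageCaption
    print("{} (confidence {:.2f})".format(caption.text, caption.confidence))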
class azure.cognitiveservices.vision.computervision.models.ImageDescriptionDetails(*, tags=None, captions=None, **kwargs)
Bases: msrest.serialization.Model
A collection of content tags, along with a list of captions sorted by confidence level, and image metadata.
- Parameters
captions (list[ImageCaption]) – A list of captions, sorted by confidence level.
class azure.cognitiveservices.vision.computervision.models.ImageMetadata(*, width: int = None, height: int = None, format: str = None, **kwargs)
Bases: msrest.serialization.Model
Image metadata.
class azure.cognitiveservices.vision.computervision.models.ImageTag(*, name: str = None, confidence: float = None, hint: str = None, **kwargs)
Bases: msrest.serialization.Model
An entity observation in the image, along with the confidence score.
class azure.cognitiveservices.vision.computervision.models.ImageType(*, clip_art_type: int = None, line_drawing_type: int = None, **kwargs)
Bases: msrest.serialization.Model
An object providing possible image types and matching confidence levels.
class azure.cognitiveservices.vision.computervision.models.ImageUrl(*, url: str, **kwargs)
Bases: msrest.serialization.Model
ImageUrl.
All required parameters must be populated in order to send to Azure.
- Parameters
url (str) – Required. Publicly reachable URL of an image.
class azure.cognitiveservices.vision.computervision.models.LandmarkResults(*, landmarks=None, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
Result of domain-specific classifications for the domain of landmarks.
- Parameters
landmarks (list[LandmarksModel]) – List of landmarks recognized in the image.
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
class azure.cognitiveservices.vision.computervision.models.LandmarksModel(*, name: str = None, confidence: float = None, **kwargs)
Bases: msrest.serialization.Model
A landmark recognized in the image.
class azure.cognitiveservices.vision.computervision.models.Line(*, bounding_box, text: str, words, language: str = None, **kwargs)
Bases: msrest.serialization.Model
An object representing a recognized text line.
All required parameters must be populated in order to send to Azure.
- Parameters
language (str) – The BCP-47 language code of the recognized text line. Only provided where the language of the line differs from the page’s.
bounding_box (list[float]) – Required. Bounding box of a recognized line.
text (str) – Required. The text content of the line.
words (list[Word]) – Required. List of words in the text line.
class azure.cognitiveservices.vision.computervision.models.ListModelsResult(**kwargs)
Bases: msrest.serialization.Model
Result of the List Domain Models operation.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
models_property (list[ModelDescription]) – An array of supported models.
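Example – a sketch of listing the available domain-specific models; note that the array lives on the models_property attribute (client constructed as in the sketches above).

models = client.list_models()         # ListModelsResult
for model in models.models_property:  # list of ModelDescription
    print(model.name, model.categories)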
class azure.cognitiveservices.vision.computervision.models.ModelDescription(*, name: str = None, categories=None, **kwargs)
Bases: msrest.serialization.Model
An object describing a supported model by name and categories.
class azure.cognitiveservices.vision.computervision.models.ObjectHierarchy(*, object_property: str = None, confidence: float = None, parent=None, **kwargs)
Bases: msrest.serialization.Model
An object detected inside an image.
- Parameters
object_property (str) – Label for the object.
confidence (float) – Confidence score of having observed the object in the image, as a value ranging from 0 to 1.
parent (ObjectHierarchy) – The parent object, from a taxonomy perspective. The parent object is a more generic form of this object. For example, a ‘bulldog’ would have a parent of ‘dog’.
class azure.cognitiveservices.vision.computervision.models.OcrLine(*, bounding_box: str = None, words=None, **kwargs)
Bases: msrest.serialization.Model
An object describing a single recognized line of text.
- Parameters
bounding_box (str) – Bounding box of a recognized line. The four integers represent the x-coordinate of the left edge, the y-coordinate of the top edge, width, and height of the bounding box, in the coordinate system of the input image, after it has been rotated around its center according to the detected text angle (see textAngle property), with the origin at the top-left corner, and the y-axis pointing down.
words (list[OcrWord]) – An array of objects, where each object represents a recognized word.
class azure.cognitiveservices.vision.computervision.models.OcrRegion(*, bounding_box: str = None, lines=None, **kwargs)
Bases: msrest.serialization.Model
A region consists of multiple lines (e.g. a column of text in a multi-column document).
- Parameters
bounding_box (str) – Bounding box of a recognized region. The four integers represent the x-coordinate of the left edge, the y-coordinate of the top edge, width, and height of the bounding box, in the coordinate system of the input image, after it has been rotated around its center according to the detected text angle (see textAngle property), with the origin at the top-left corner, and the y-axis pointing down.
lines (list[OcrLine]) – An array of recognized lines of text.
class azure.cognitiveservices.vision.computervision.models.OcrResult(*, language: str = None, text_angle: float = None, orientation: str = None, regions=None, **kwargs)
Bases: msrest.serialization.Model
OcrResult.
- Parameters
language (str) – The BCP-47 language code of the text in the image.
text_angle (float) – The angle, in radians, of the detected text with respect to the closest horizontal or vertical direction. After rotating the input image clockwise by this angle, the recognized text lines become horizontal or vertical. In combination with the orientation property it can be used to overlay recognition results correctly on the original image, by rotating either the original image or recognition results by a suitable angle around the center of the original image. If the angle cannot be confidently detected, this property is not present. If the image contains text at different angles, only part of the text will be recognized correctly.
orientation (str) – Orientation of the text recognized in the image, if requested. The value (up, down, left, or right) refers to the direction that the top of the recognized text is facing, after the image has been rotated around its center according to the detected text angle (see textAngle property). If detection of the orientation was not requested, or no text is detected, the value is ‘NotDetected’.
regions (list[OcrRegion]) – An array of objects, where each object represents a region of recognized text.
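Example – a sketch of the printed-text OCR call that yields an OcrResult; each bounding_box is a comma-separated string of left, top, width, height. The parameter names url, detect_orientation, and language are taken from SDK samples and should be treated as assumptions (client constructed as in the sketches above).

from azure.cognitiveservices.vision.computervision.models import OcrLanguages

ocr = client.recognize_printed_text(url="<image-url>",
                                    detect_orientation=True,
                                    language=OcrLanguages.en)
for region in ocr.regions:     # OcrRegion
    for line in region.lines:  # OcrLine
        print(" ".join(word.text for word in line.words))  # OcrWord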
class azure.cognitiveservices.vision.computervision.models.OcrWord(*, bounding_box: str = None, text: str = None, **kwargs)
Bases: msrest.serialization.Model
Information on a recognized word.
- Parameters
bounding_box (str) – Bounding box of a recognized word. The four integers represent the x-coordinate of the left edge, the y-coordinate of the top edge, width, and height of the bounding box, in the coordinate system of the input image, after it has been rotated around its center according to the detected text angle (see textAngle property), with the origin at the top-left corner, and the y-axis pointing down.
text (str) – String value of a recognized word.
class azure.cognitiveservices.vision.computervision.models.ReadOperationResult(*, status=None, created_date_time: str = None, last_updated_date_time: str = None, analyze_result=None, **kwargs)
Bases: msrest.serialization.Model
OCR result of the read operation.
- Parameters
status (str or OperationStatusCodes) – Status of the read operation. Possible values include: ‘notStarted’, ‘running’, ‘failed’, ‘succeeded’
created_date_time (str) – Get UTC date time the batch operation was submitted.
last_updated_date_time (str) – Get last updated UTC date time of this batch operation.
analyze_result (AnalyzeResults) – Analyze batch operation result.
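Example – a sketch of the asynchronous Read workflow that produces a ReadOperationResult: submit the image with read, take the operation id from the Operation-Location response header, then poll get_read_result until the status leaves the in-progress states (client constructed as in the sketches above).

import time
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes

read_response = client.read("<image-url>", raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

while True:
    read_result = client.get_read_result(operation_id)  # ReadOperationResult
    if read_result.status not in (OperationStatusCodes.running,
                                  OperationStatusCodes.not_started):
        break
    time.sleep(1)

if read_result.status == OperationStatusCodes.succeeded:
    for page in read_result.analyze_result.read_results:  # one ReadResult per page
        for line in page.lines:                            # Line
            print(line.text)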
class azure.cognitiveservices.vision.computervision.models.ReadResult(*, page: int, angle: float, width: float, height: float, unit, lines, language: str = None, **kwargs)
Bases: msrest.serialization.Model
Text extracted from a page in the input document.
All required parameters must be populated in order to send to Azure.
- Parameters
page (int) – Required. The 1-based page number of the recognition result.
language (str) – The BCP-47 language code of the recognized text page.
angle (float) – Required. The orientation of the image in degrees in the clockwise direction. Range between [-180, 180).
width (float) – Required. The width of the image in pixels or the PDF in inches.
height (float) – Required. The height of the image in pixels or the PDF in inches.
unit (str or TextRecognitionResultDimensionUnit) – Required. The unit used in the Width, Height and BoundingBox. For images, the unit is ‘pixel’. For PDF, the unit is ‘inch’. Possible values include: ‘pixel’, ‘inch’
lines (list[Line]) – Required. A list of recognized text lines.
class azure.cognitiveservices.vision.computervision.models.TagResult(*, tags=None, request_id: str = None, metadata=None, **kwargs)
Bases: msrest.serialization.Model
The results of an image tag operation, including any tags and image metadata.
- Parameters
tags (list[ImageTag]) – A list of tags with confidence level.
request_id (str) – Id of the REST API request.
metadata (ImageMetadata) –
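Example – a sketch of tagging an image, which returns a TagResult (client constructed as in the sketches above).

tags_result = client.tag_image("<image-url>")  # TagResult
for tag in tags_result.tags:                   # list of ImageTag
    print("{} ({:.2f})".format(tag.name, tag.confidence))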
class azure.cognitiveservices.vision.computervision.models.Word(*, bounding_box, text: str, confidence: float, **kwargs)
Bases: msrest.serialization.Model
An object representing a recognized word.
All required parameters must be populated in order to send to Azure.
class azure.cognitiveservices.vision.computervision.models.Gender
An enumeration.
female = 'Female'
male = 'Male'
class azure.cognitiveservices.vision.computervision.models.OperationStatusCodes
An enumeration.
failed = 'failed'
not_started = 'notStarted'
running = 'running'
succeeded = 'succeeded'
class azure.cognitiveservices.vision.computervision.models.TextRecognitionResultDimensionUnit
An enumeration.
inch = 'inch'
pixel = 'pixel'
class azure.cognitiveservices.vision.computervision.models.DescriptionExclude
An enumeration.
celebrities = 'Celebrities'
landmarks = 'Landmarks'
class azure.cognitiveservices.vision.computervision.models.OcrLanguages
An enumeration.
ar = 'ar'
cs = 'cs'
da = 'da'
de = 'de'
el = 'el'
en = 'en'
es = 'es'
fi = 'fi'
fr = 'fr'
hu = 'hu'
it = 'it'
ja = 'ja'
ko = 'ko'
nb = 'nb'
nl = 'nl'
pl = 'pl'
pt = 'pt'
ro = 'ro'
ru = 'ru'
sk = 'sk'
sr_cyrl = 'sr-Cyrl'
sr_latn = 'sr-Latn'
sv = 'sv'
tr = 'tr'
unk = 'unk'
zh_hans = 'zh-Hans'
zh_hant = 'zh-Hant'
class azure.cognitiveservices.vision.computervision.models.VisualFeatureTypes
An enumeration.
adult = 'Adult'
brands = 'Brands'
categories = 'Categories'
color = 'Color'
description = 'Description'
faces = 'Faces'
image_type = 'ImageType'
objects = 'Objects'