azure.cognitiveservices.vision.face.models module

exception azure.cognitiveservices.vision.face.models.APIErrorException(deserialize, response, *args)[source]

Bases: msrest.exceptions.HttpOperationError

Server responded with exception of type: ‘APIError’.

Parameters
  • deserialize – A deserializer

  • response – Server response to be deserialized.
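
A minimal sketch of handling this exception around a client call. The FaceClient constructor, CognitiveServicesCredentials and the detect_with_url operation live outside this module and are used here only as assumed context; the endpoint, key and image URL are placeholders.

    from azure.cognitiveservices.vision.face import FaceClient
    from azure.cognitiveservices.vision.face.models import APIErrorException
    from msrest.authentication import CognitiveServicesCredentials

    client = FaceClient("https://<resource-name>.cognitiveservices.azure.com/",
                        CognitiveServicesCredentials("<key>"))

    try:
        faces = client.face.detect_with_url("https://example.com/photo.jpg")
    except APIErrorException as ex:
        # ex.error usually carries the deserialized APIError model (see APIError below).
        print("Face API call failed:", ex.message)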

class azure.cognitiveservices.vision.face.models.APIError(*, error=None, **kwargs)[source]

Bases: msrest.serialization.Model

Error information returned by the API.

Parameters

error (Error) –

class azure.cognitiveservices.vision.face.models.Accessory(*, type=None, confidence: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Accessory item and corresponding confidence level.

Parameters
  • type (str or AccessoryType) – Type of an accessory. Possible values include: ‘headWear’, ‘glasses’, ‘mask’

  • confidence (float) – Confidence level of an accessory

class azure.cognitiveservices.vision.face.models.AccessoryType(value)[source]

Bases: str, enum.Enum

An enumeration.

glasses = 'glasses'
head_wear = 'headWear'
mask = 'mask'
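
A short sketch showing that AccessoryType is a str-based enum, so its members compare equal to the raw wire values returned by the service; the Accessory instance is constructed by hand here purely for illustration.

    from azure.cognitiveservices.vision.face.models import Accessory, AccessoryType

    # Normally an Accessory arrives inside FaceAttributes.accessories; build one by hand.
    item = Accessory(type=AccessoryType.mask, confidence=0.92)

    # str-based enum members compare equal to their string values.
    assert item.type == "mask"
    if item.type == AccessoryType.mask and item.confidence > 0.9:
        print("high-confidence mask detected")
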
class azure.cognitiveservices.vision.face.models.ApplySnapshotRequest(*, object_id: str, mode='CreateNew', **kwargs)[source]

Bases: msrest.serialization.Model

Request body for applying snapshot operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • object_id (str) – Required. User specified target object id to be created from the snapshot.

  • mode (str or SnapshotApplyMode) – Snapshot applying mode. Currently only CreateNew is supported, which means the apply operation will fail if the target subscription already contains an object of the same type with the same objectId. Users can specify a different “objectId” in the request body to avoid such conflicts. Possible values include: ‘CreateNew’. Default value: “CreateNew”.

class azure.cognitiveservices.vision.face.models.Blur(*, blur_level=None, value: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing any presence of blur within the image.

Parameters
  • blur_level (str or BlurLevel) – An enum value indicating level of blurriness. Possible values include: ‘Low’, ‘Medium’, ‘High’

  • value (float) – A number indicating level of blurriness ranging from 0 to 1.

class azure.cognitiveservices.vision.face.models.BlurLevel(value)[source]

Bases: str, enum.Enum

An enumeration.

high = 'High'
low = 'Low'
medium = 'Medium'
class azure.cognitiveservices.vision.face.models.Coordinate(*, x: float, y: float, **kwargs)[source]

Bases: msrest.serialization.Model

Coordinates within an image.

All required parameters must be populated in order to send to Azure.

Parameters
  • x (float) – Required. The horizontal component, in pixels.

  • y (float) – Required. The vertical component, in pixels.

class azure.cognitiveservices.vision.face.models.DetectedFace(*, face_rectangle, face_id: Optional[str] = None, recognition_model='recognition_01', face_landmarks=None, face_attributes=None, **kwargs)[source]

Bases: msrest.serialization.Model

Detected Face object.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_id (str) –

  • recognition_model (str or RecognitionModel) – Possible values include: ‘recognition_01’, ‘recognition_02’, ‘recognition_03’, ‘recognition_04’. Default value: “recognition_01”.

  • face_rectangle (FaceRectangle) – Required.

  • face_landmarks (FaceLandmarks) –

  • face_attributes (FaceAttributes) –
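
A hedged sketch of reading DetectedFace results. It assumes a FaceClient named client has been constructed elsewhere; detect_with_url and its parameters belong to the client's face operation group, not to this models module.

    # Assumes `client` is an already-constructed FaceClient.
    detected = client.face.detect_with_url(
        url="https://example.com/group-photo.jpg",
        return_face_landmarks=True,
    )

    for face in detected:  # each item is a DetectedFace
        rect = face.face_rectangle
        print(face.face_id, rect.left, rect.top, rect.width, rect.height)
        if face.face_landmarks:
            print("left pupil at", face.face_landmarks.pupil_left.x,
                  face.face_landmarks.pupil_left.y)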

class azure.cognitiveservices.vision.face.models.DetectionModel(value)[source]

Bases: str, enum.Enum

An enumeration.

detection_01 = 'detection_01'
detection_02 = 'detection_02'
detection_03 = 'detection_03'
class azure.cognitiveservices.vision.face.models.Emotion(*, anger: Optional[float] = None, contempt: Optional[float] = None, disgust: Optional[float] = None, fear: Optional[float] = None, happiness: Optional[float] = None, neutral: Optional[float] = None, sadness: Optional[float] = None, surprise: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing facial emotion in form of confidence ranging from 0 to 1.

Parameters
  • anger (float) – Confidence of anger, ranging from 0 to 1.

  • contempt (float) – Confidence of contempt, ranging from 0 to 1.

  • disgust (float) – Confidence of disgust, ranging from 0 to 1.

  • fear (float) – Confidence of fear, ranging from 0 to 1.

  • happiness (float) – Confidence of happiness, ranging from 0 to 1.

  • neutral (float) – Confidence of a neutral expression, ranging from 0 to 1.

  • sadness (float) – Confidence of sadness, ranging from 0 to 1.

  • surprise (float) – Confidence of surprise, ranging from 0 to 1.

class azure.cognitiveservices.vision.face.models.Error(*, code: Optional[str] = None, message: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Error body.

Parameters
  • code (str) –

  • message (str) –

class azure.cognitiveservices.vision.face.models.Exposure(*, exposure_level=None, value: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing exposure level of the image.

Parameters
  • exposure_level (str or ExposureLevel) – An enum value indicating level of exposure. Possible values include: ‘UnderExposure’, ‘GoodExposure’, ‘OverExposure’

  • value (float) – A number indicating the level of exposure, ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) is good exposure. [0.75, 1] is over exposure.

class azure.cognitiveservices.vision.face.models.ExposureLevel(value)[source]

Bases: str, enum.Enum

An enumeration.

good_exposure = 'GoodExposure'
over_exposure = 'OverExposure'
under_exposure = 'UnderExposure'
class azure.cognitiveservices.vision.face.models.FaceAttributeType(value)[source]

Bases: str, enum.Enum

An enumeration.

accessories = 'accessories'
age = 'age'
blur = 'blur'
emotion = 'emotion'
exposure = 'exposure'
facial_hair = 'facialHair'
gender = 'gender'
glasses = 'glasses'
hair = 'hair'
head_pose = 'headPose'
makeup = 'makeup'
mask = 'mask'
noise = 'noise'
occlusion = 'occlusion'
quality_for_recognition = 'qualityForRecognition'
smile = 'smile'
class azure.cognitiveservices.vision.face.models.FaceAttributes(*, age: Optional[float] = None, gender=None, smile: Optional[float] = None, facial_hair=None, glasses=None, head_pose=None, emotion=None, hair=None, makeup=None, occlusion=None, accessories=None, blur=None, exposure=None, noise=None, mask=None, quality_for_recognition=None, **kwargs)[source]

Bases: msrest.serialization.Model

Face Attributes.

Parameters
  • age (float) – Age in years

  • gender (str or Gender) – Possible gender of the face. Possible values include: ‘male’, ‘female’

  • smile (float) – Smile intensity, a number between [0,1]

  • facial_hair (FacialHair) – Properties describing facial hair attributes.

  • glasses (str or GlassesType) – Glasses type of the face, if any. Possible values include: ‘noGlasses’, ‘readingGlasses’, ‘sunglasses’, ‘swimmingGoggles’

  • head_pose (HeadPose) – Properties indicating head pose of the face.

  • emotion (Emotion) – Properties describing facial emotion in form of confidence ranging from 0 to 1.

  • hair (Hair) – Properties describing hair attributes.

  • makeup (Makeup) – Properties describing the presence of makeup on a given face.

  • occlusion (Occlusion) – Properties describing occlusions on a given face.

  • accessories (list[Accessory]) – Properties describing any accessories on a given face.

  • blur (Blur) – Properties describing any presence of blur within the image.

  • exposure (Exposure) – Properties describing exposure level of the image.

  • noise (Noise) – Properties describing noise level of the image.

  • mask (Mask) – Properties describing the presence of a mask on a given face.

  • quality_for_recognition (str or QualityForRecognition) – Properties describing the overall image quality regarding whether the image being used in the detection is of sufficient quality to attempt face recognition on. Possible values include: ‘Low’, ‘Medium’, ‘High’
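
A sketch of requesting and reading these attributes. The detect_with_url call, its return_face_attributes parameter and the choice of recognition model are assumptions about the client; note that some attributes (for example qualityForRecognition) are only returned with the newer recognition models.

    from azure.cognitiveservices.vision.face.models import (
        FaceAttributeType, QualityForRecognition)

    # Assumes `client` is an already-constructed FaceClient.
    detected = client.face.detect_with_url(
        url="https://example.com/portrait.jpg",
        return_face_attributes=[FaceAttributeType.emotion,
                                FaceAttributeType.quality_for_recognition],
        recognition_model="recognition_04",
    )

    for face in detected:
        attrs = face.face_attributes  # FaceAttributes
        emotions = {
            "anger": attrs.emotion.anger, "contempt": attrs.emotion.contempt,
            "disgust": attrs.emotion.disgust, "fear": attrs.emotion.fear,
            "happiness": attrs.emotion.happiness, "neutral": attrs.emotion.neutral,
            "sadness": attrs.emotion.sadness, "surprise": attrs.emotion.surprise,
        }
        print("dominant emotion:", max(emotions, key=emotions.get))
        if attrs.quality_for_recognition == QualityForRecognition.high:
            print("good enough quality to enroll this face")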

class azure.cognitiveservices.vision.face.models.FaceLandmarks(*, pupil_left=None, pupil_right=None, nose_tip=None, mouth_left=None, mouth_right=None, eyebrow_left_outer=None, eyebrow_left_inner=None, eye_left_outer=None, eye_left_top=None, eye_left_bottom=None, eye_left_inner=None, eyebrow_right_inner=None, eyebrow_right_outer=None, eye_right_inner=None, eye_right_top=None, eye_right_bottom=None, eye_right_outer=None, nose_root_left=None, nose_root_right=None, nose_left_alar_top=None, nose_right_alar_top=None, nose_left_alar_out_tip=None, nose_right_alar_out_tip=None, upper_lip_top=None, upper_lip_bottom=None, under_lip_top=None, under_lip_bottom=None, **kwargs)[source]

Bases: msrest.serialization.Model

A collection of 27-point face landmarks pointing to the important positions of face components.

Parameters

Each of the 27 landmark parameters (pupil_left, pupil_right, nose_tip, mouth_left, mouth_right, eyebrow_left_outer, eyebrow_left_inner, eye_left_outer, eye_left_top, eye_left_bottom, eye_left_inner, eyebrow_right_inner, eyebrow_right_outer, eye_right_inner, eye_right_top, eye_right_bottom, eye_right_outer, nose_root_left, nose_root_right, nose_left_alar_top, nose_right_alar_top, nose_left_alar_out_tip, nose_right_alar_out_tip, upper_lip_top, upper_lip_bottom, under_lip_top, under_lip_bottom) is a Coordinate giving that point’s position within the image.

class azure.cognitiveservices.vision.face.models.FaceList(*, name: str, face_list_id: str, user_data: Optional[str] = None, recognition_model='recognition_01', persisted_faces=None, **kwargs)[source]

Bases: azure.cognitiveservices.vision.face.models._models_py3.MetaDataContract

Face list object.

All required parameters must be populated in order to send to Azure.

Parameters
  • name (str) – Required. User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

  • recognition_model (str or RecognitionModel) – Possible values include: ‘recognition_01’, ‘recognition_02’, ‘recognition_03’, ‘recognition_04’. Default value: “recognition_01”.

  • face_list_id (str) – Required. FaceListId of the target face list.

  • persisted_faces (list[PersistedFace]) – Persisted faces within the face list.
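
A hedged sketch of building a face list and reading it back. The face_list operation group and the create, add_face_from_url and get methods are assumptions about the FaceClient, not part of this models module; the returned objects are the FaceList and PersistedFace models documented here.

    # Assumes `client` is an already-constructed FaceClient.
    face_list_id = "my-face-list"
    client.face_list.create(face_list_id, name="My face list",
                            recognition_model="recognition_04")

    persisted = client.face_list.add_face_from_url(
        face_list_id, url="https://example.com/person1.jpg", user_data="person 1")
    print("stored face:", persisted.persisted_face_id)  # PersistedFace

    face_list = client.face_list.get(face_list_id)  # FaceList
    print(face_list.name, len(face_list.persisted_faces or []))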

class azure.cognitiveservices.vision.face.models.FaceRectangle(*, width: int, height: int, left: int, top: int, **kwargs)[source]

Bases: msrest.serialization.Model

A rectangle within which a face can be found.

All required parameters must be populated in order to send to Azure.

Parameters
  • width (int) – Required. The width of the rectangle, in pixels.

  • height (int) – Required. The height of the rectangle, in pixels.

  • left (int) – Required. The distance from the left edge of the image to the left edge of the rectangle, in pixels.

  • top (int) – Required. The distance from the top edge of the image to the top edge of the rectangle, in pixels.
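
A small self-contained sketch: the rectangle stores the top-left corner plus its size, so the opposite corner can be derived as a Coordinate.

    from azure.cognitiveservices.vision.face.models import Coordinate, FaceRectangle

    # A rectangle as it would come back on a DetectedFace.
    rect = FaceRectangle(width=120, height=140, left=60, top=30)

    # Derive the bottom-right corner from the top-left corner and the size.
    bottom_right = Coordinate(x=rect.left + rect.width, y=rect.top + rect.height)
    print((rect.left, rect.top), (bottom_right.x, bottom_right.y))  # (60, 30) (180, 170)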

class azure.cognitiveservices.vision.face.models.FacialHair(*, moustache: Optional[float] = None, beard: Optional[float] = None, sideburns: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing facial hair attributes.

Parameters
  • moustache (float) – A number ranging from 0 to 1 indicating the length of the moustache (0 for none, 1 for long or very thick).

  • beard (float) – A number ranging from 0 to 1 indicating the length of the beard (0 for none, 1 for long or very thick).

  • sideburns (float) – A number ranging from 0 to 1 indicating the length of the sideburns (0 for none, 1 for long or very thick).

class azure.cognitiveservices.vision.face.models.FindSimilarMatchMode(value)[source]

Bases: str, enum.Enum

An enumeration.

match_face = 'matchFace'
match_person = 'matchPerson'
class azure.cognitiveservices.vision.face.models.FindSimilarRequest(*, face_id: str, face_list_id: Optional[str] = None, large_face_list_id: Optional[str] = None, face_ids=None, max_num_of_candidates_returned: int = 20, mode='matchPerson', **kwargs)[source]

Bases: msrest.serialization.Model

Request body for find similar operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_id (str) – Required. FaceId of the query face. User needs to call Face - Detect first to get a valid faceId. Note that this faceId is not persisted and will expire at the time specified by faceIdTimeToLive after the detection call

  • face_list_id (str) – An existing user-specified unique candidate face list, created in Face List - Create a Face List. Face list contains a set of persistedFaceIds which are persisted and will never expire. Parameter faceListId, largeFaceListId and faceIds should not be provided at the same time.

  • large_face_list_id (str) – An existing user-specified unique candidate large face list, created in LargeFaceList - Create. Large face list contains a set of persistedFaceIds which are persisted and will never expire. Parameter faceListId, largeFaceListId and faceIds should not be provided at the same time.

  • face_ids (list[str]) – An array of candidate faceIds. All of them are created by Face - Detect and the faceIds will expire at the time specified by faceIdTimeToLive after the detection call. The number of faceIds is limited to 1000. Parameter faceListId, largeFaceListId and faceIds should not be provided at the same time.

  • max_num_of_candidates_returned (int) – The number of top similar faces returned. The valid range is [1, 1000]. Default value: 20.

  • mode (str or FindSimilarMatchMode) – Similar face searching mode. It can be “matchPerson” or “matchFace”. Possible values include: ‘matchPerson’, ‘matchFace’. Default value: “matchPerson”.
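
A hedged sketch of a find similar call. The find_similar operation belongs to the client's face operation group and query_face_id is assumed to come from an earlier detect call; exactly one of faceListId, largeFaceListId or faceIds should be supplied, as noted above.

    # Assumes `client` is an already-constructed FaceClient and that
    # `query_face_id` was returned by a previous detect call.
    similar = client.face.find_similar(
        face_id=query_face_id,
        large_face_list_id="my-large-face-list",  # only one candidate source at a time
        max_num_of_candidates_returned=5,
        mode="matchFace",
    )

    for match in similar:  # each item is a SimilarFace
        print(match.persisted_face_id, round(match.confidence, 3))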

class azure.cognitiveservices.vision.face.models.Gender(value)[source]

Bases: str, enum.Enum

An enumeration.

female = 'female'
male = 'male'
class azure.cognitiveservices.vision.face.models.GlassesType(value)[source]

Bases: str, enum.Enum

An enumeration.

no_glasses = 'noGlasses'
reading_glasses = 'readingGlasses'
sunglasses = 'sunglasses'
swimming_goggles = 'swimmingGoggles'
class azure.cognitiveservices.vision.face.models.GroupRequest(*, face_ids, **kwargs)[source]

Bases: msrest.serialization.Model

Request body for group request.

All required parameters must be populated in order to send to Azure.

Parameters

face_ids (list[str]) – Required. Array of candidate faceIds created by Face - Detect. The maximum is 1000 faces.

class azure.cognitiveservices.vision.face.models.GroupResult(*, groups, messy_group=None, **kwargs)[source]

Bases: msrest.serialization.Model

An array of face groups based on face similarity.

All required parameters must be populated in order to send to Azure.

Parameters
  • groups (list[list[str]]) – Required. A partition of the original faces based on face similarity. Groups are ranked by number of faces

  • messy_group (list[str]) – Array of faceIds for faces that have no similar faces among the original faces.
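
A hedged sketch of grouping detected faces; the group operation on the client is an assumption, while the returned GroupResult fields are the ones documented above.

    # Assumes `client` is an already-constructed FaceClient and `face_ids`
    # is a list of faceIds from earlier detect calls.
    result = client.face.group(face_ids)  # GroupResult

    for i, group in enumerate(result.groups):
        print("group", i, "has", len(group), "similar faces")
    print("faces with no similar face:", result.messy_group)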

class azure.cognitiveservices.vision.face.models.Hair(*, bald: Optional[float] = None, invisible: Optional[bool] = None, hair_color=None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing hair attributes.

Parameters
  • bald (float) – A number describing confidence level of whether the person is bald.

  • invisible (bool) – A boolean value describing whether the hair is visible in the image.

  • hair_color (list[HairColor]) – An array of candidate colors and confidence level in the presence of each.

class azure.cognitiveservices.vision.face.models.HairColor(*, color=None, confidence: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Hair color and associated confidence.

Parameters
  • color (str or HairColorType) – Name of the hair color. Possible values include: ‘unknown’, ‘white’, ‘gray’, ‘blond’, ‘brown’, ‘red’, ‘black’, ‘other’

  • confidence (float) – Confidence level of the color

class azure.cognitiveservices.vision.face.models.HairColorType(value)[source]

Bases: str, enum.Enum

An enumeration.

black = 'black'
blond = 'blond'
brown = 'brown'
gray = 'gray'
other = 'other'
red = 'red'
unknown = 'unknown'
white = 'white'
class azure.cognitiveservices.vision.face.models.HeadPose(*, roll: Optional[float] = None, yaw: Optional[float] = None, pitch: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties indicating head pose of the face.

Parameters
  • roll (float) – Roll angle of the face, in degrees.

  • yaw (float) – Yaw angle of the face, in degrees.

  • pitch (float) – Pitch angle of the face, in degrees.

class azure.cognitiveservices.vision.face.models.IdentifyCandidate(*, person_id: str, confidence: float, **kwargs)[source]

Bases: msrest.serialization.Model

All possible faces that may qualify.

All required parameters must be populated in order to send to Azure.

Parameters
  • person_id (str) – Required. Id of candidate

  • confidence (float) – Required. Identification confidence of the candidate, used to judge whether the face belongs to this person. The range of confidence is [0, 1].

class azure.cognitiveservices.vision.face.models.IdentifyRequest(*, face_ids, person_group_id: Optional[str] = None, large_person_group_id: Optional[str] = None, max_num_of_candidates_returned: int = 1, confidence_threshold: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Request body for identify face operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_ids (list[str]) – Required. Array of query faceIds, created by Face - Detect. Each of the faces is identified independently. The valid number of faceIds is between [1, 10].

  • person_group_id (str) – PersonGroupId of the target person group, created by PersonGroup - Create. Parameter personGroupId and largePersonGroupId should not be provided at the same time.

  • large_person_group_id (str) – LargePersonGroupId of the target large person group, created by LargePersonGroup - Create. Parameter personGroupId and largePersonGroupId should not be provided at the same time.

  • max_num_of_candidates_returned (int) – The range of maxNumOfCandidatesReturned is between 1 and 100 (default is 1). Default value: 1.

  • confidence_threshold (float) – Confidence threshold of identification, used to judge whether one face belongs to one person. The range of confidenceThreshold is [0, 1] (default specified by algorithm).

class azure.cognitiveservices.vision.face.models.IdentifyResult(*, face_id: str, candidates, **kwargs)[source]

Bases: msrest.serialization.Model

Response body for identify face operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_id (str) – Required. FaceId of the query face

  • candidates (list[IdentifyCandidate]) – Required. Identified person candidates for that face (ranked by confidence). Array size should be no larger than the input maxNumOfCandidatesReturned. If no person is identified, an empty array is returned.
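
A hedged sketch of an identify call and of walking the result. The identify operation and the pre-trained person group are assumptions about the client; the result objects are the IdentifyResult and IdentifyCandidate models documented here.

    # Assumes `client` is an already-constructed FaceClient, `face_ids` holds
    # up to ten faceIds from detect, and "my-person-group" is already trained.
    results = client.face.identify(
        face_ids=face_ids,
        person_group_id="my-person-group",
        max_num_of_candidates_returned=1,
        confidence_threshold=0.6,
    )

    for result in results:  # each item is an IdentifyResult
        if result.candidates:
            best = result.candidates[0]  # IdentifyCandidate, ranked by confidence
            print(result.face_id, "->", best.person_id, round(best.confidence, 2))
        else:
            print(result.face_id, "-> no match")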

class azure.cognitiveservices.vision.face.models.ImageUrl(*, url: str, **kwargs)[source]

Bases: msrest.serialization.Model

ImageUrl.

All required parameters must be populated in order to send to Azure.

Parameters

url (str) – Required. Publicly reachable URL of an image

class azure.cognitiveservices.vision.face.models.LargeFaceList(*, name: str, large_face_list_id: str, user_data: Optional[str] = None, recognition_model='recognition_01', **kwargs)[source]

Bases: azure.cognitiveservices.vision.face.models._models_py3.MetaDataContract

Large face list object.

All required parameters must be populated in order to send to Azure.

Parameters
  • name (str) – Required. User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

  • recognition_model (str or RecognitionModel) – Possible values include: ‘recognition_01’, ‘recognition_02’, ‘recognition_03’, ‘recognition_04’. Default value: “recognition_01”.

  • large_face_list_id (str) – Required. LargeFaceListId of the target large face list.

class azure.cognitiveservices.vision.face.models.LargePersonGroup(*, name: str, large_person_group_id: str, user_data: Optional[str] = None, recognition_model='recognition_01', **kwargs)[source]

Bases: azure.cognitiveservices.vision.face.models._models_py3.MetaDataContract

Large person group object.

All required parameters must be populated in order to send to Azure.

Parameters
  • name (str) – Required. User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

  • recognition_model (str or RecognitionModel) – Possible values include: ‘recognition_01’, ‘recognition_02’, ‘recognition_03’, ‘recognition_04’. Default value: “recognition_01”.

  • large_person_group_id (str) – Required. LargePersonGroupId of the target large person groups

class azure.cognitiveservices.vision.face.models.Makeup(*, eye_makeup: Optional[bool] = None, lip_makeup: Optional[bool] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing the presence of makeup on a given face.

Parameters
  • eye_makeup (bool) – A boolean value describing whether eye makeup is present on a face.

  • lip_makeup (bool) – A boolean value describing whether lip makeup is present on a face.

class azure.cognitiveservices.vision.face.models.Mask(*, type=None, nose_and_mouth_covered: Optional[bool] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing the presence of a mask on a given face.

Parameters
  • type (str or MaskType) – Mask type of the face, if any. Possible values include: ‘noMask’, ‘faceMask’, ‘otherMaskOrOcclusion’, ‘uncertain’

  • nose_and_mouth_covered (bool) – A boolean value indicating whether nose and mouth are covered.

class azure.cognitiveservices.vision.face.models.MaskType(value)[source]

Bases: str, enum.Enum

An enumeration.

face_mask = 'faceMask'
no_mask = 'noMask'
other_mask_or_occlusion = 'otherMaskOrOcclusion'
uncertain = 'uncertain'
class azure.cognitiveservices.vision.face.models.MetaDataContract(*, name: str, user_data: Optional[str] = None, recognition_model='recognition_01', **kwargs)[source]

Bases: azure.cognitiveservices.vision.face.models._models_py3.NonNullableNameAndNullableUserDataContract

A combination of user defined name, user specified data and recognition model name for largePersonGroup/personGroup, and largeFaceList/faceList.

All required parameters must be populated in order to send to Azure.

Parameters
  • name (str) – Required. User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

  • recognition_model (str or RecognitionModel) – Possible values include: ‘recognition_01’, ‘recognition_02’, ‘recognition_03’, ‘recognition_04’. Default value: “recognition_01”.

class azure.cognitiveservices.vision.face.models.NameAndUserDataContract(*, name: Optional[str] = None, user_data: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

A combination of user defined name and user specified data for the person, largePersonGroup/personGroup, and largeFaceList/faceList.

Parameters
  • name (str) – User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

class azure.cognitiveservices.vision.face.models.Noise(*, noise_level=None, value: Optional[float] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing noise level of the image.

Parameters
  • noise_level (str or NoiseLevel) – An enum value indicating level of noise. Possible values include: ‘Low’, ‘Medium’, ‘High’

  • value (float) – A number indicating the level of noise, ranging from 0 to 1. [0, 0.3) is low noise level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise level.

class azure.cognitiveservices.vision.face.models.NoiseLevel(value)[source]

Bases: str, enum.Enum

An enumeration.

high = 'High'
low = 'Low'
medium = 'Medium'
class azure.cognitiveservices.vision.face.models.Occlusion(*, forehead_occluded: Optional[bool] = None, eye_occluded: Optional[bool] = None, mouth_occluded: Optional[bool] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Properties describing occlusions on a given face.

Parameters
  • forehead_occluded (bool) – A boolean value indicating whether forehead is occluded.

  • eye_occluded (bool) – A boolean value indicating whether eyes are occluded.

  • mouth_occluded (bool) – A boolean value indicating whether the mouth is occluded.

class azure.cognitiveservices.vision.face.models.OperationStatus(*, status, created_time, last_action_time=None, resource_location: Optional[str] = None, message: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Operation status object. Operation refers to the asynchronous backend task including taking a snapshot and applying a snapshot.

All required parameters must be populated in order to send to Azure.

Parameters
  • status (str or OperationStatusType) – Required. Operation status: notstarted, running, succeeded, failed. If the operation is requested and waiting to run, the status is notstarted. If the operation is ongoing in the backend, the status is running. Status succeeded means the operation completed successfully; for a snapshot taking operation it means the snapshot has been taken and is ready to apply, and for a snapshot applying operation it means the target object has been created from the snapshot and is ready to use. Status failed is often caused by editing the source object while taking the snapshot, or editing the target object while applying the snapshot, before completion; see the field “message” for the failure reason. Possible values include: ‘notstarted’, ‘running’, ‘succeeded’, ‘failed’

  • created_time (datetime) – Required. A combined UTC date and time string that describes the time when the operation (take or apply a snapshot) is requested. E.g. 2018-12-25T11:41:02.2331413Z.

  • last_action_time (datetime) – A combined UTC date and time string that describes the last time the operation (take or apply a snapshot) is actively migrating data. The lastActionTime will keep increasing until the operation finishes. E.g. 2018-12-25T11:51:27.8705696Z.

  • resource_location (str) – When the operation succeeds, for a snapshot taking operation the snapshot id will be included in this field, and for a snapshot applying operation the path to get the target object will be returned in this field.

  • message (str) – Failure message when the operation fails (omitted when the operation succeeds).

class azure.cognitiveservices.vision.face.models.OperationStatusType(value)[source]

Bases: str, enum.Enum

An enumeration.

failed = 'failed'
notstarted = 'notstarted'
running = 'running'
succeeded = 'succeeded'
class azure.cognitiveservices.vision.face.models.PersistedFace(*, persisted_face_id: str, user_data: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

PersonFace object.

All required parameters must be populated in order to send to Azure.

Parameters
  • persisted_face_id (str) – Required. The persistedFaceId of the target face, which is persisted and will not expire, unlike the faceId created by Face - Detect, which will expire at the time specified by faceIdTimeToLive after the detection call.

  • user_data (str) – User-provided data attached to the face. The size limit is 1KB.

class azure.cognitiveservices.vision.face.models.Person(*, person_id: str, name: Optional[str] = None, user_data: Optional[str] = None, persisted_face_ids=None, **kwargs)[source]

Bases: azure.cognitiveservices.vision.face.models._models_py3.NameAndUserDataContract

Person object.

All required parameters must be populated in order to send to Azure.

Parameters
  • name (str) – User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

  • person_id (str) – Required. PersonId of the target person.

  • persisted_face_ids (list[str]) – PersistedFaceIds of registered faces in the person. These persistedFaceIds are returned from Person - Add a Person Face, and will not expire.

class azure.cognitiveservices.vision.face.models.PersonGroup(*, name: str, person_group_id: str, user_data: Optional[str] = None, recognition_model='recognition_01', **kwargs)[source]

Bases: azure.cognitiveservices.vision.face.models._models_py3.MetaDataContract

Person group object.

All required parameters must be populated in order to send to Azure.

Parameters
  • name (str) – Required. User defined name, maximum length is 128.

  • user_data (str) – User specified data. Length should not exceed 16KB.

  • recognition_model (str or RecognitionModel) – Possible values include: ‘recognition_01’, ‘recognition_02’, ‘recognition_03’, ‘recognition_04’. Default value: “recognition_01”.

  • person_group_id (str) – Required. PersonGroupId of the target person group.

class azure.cognitiveservices.vision.face.models.QualityForRecognition(value)[source]

Bases: str, enum.Enum

An enumeration.

high = 'High'
low = 'Low'
medium = 'Medium'
class azure.cognitiveservices.vision.face.models.RecognitionModel(value)[source]

Bases: str, enum.Enum

An enumeration.

recognition_01 = 'recognition_01'
recognition_02 = 'recognition_02'
recognition_03 = 'recognition_03'
recognition_04 = 'recognition_04'
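
A hedged sketch of pinning the detection and recognition models on a detect call; the detect_with_url parameters are assumptions about the client, while the enum values are the ones documented in this module.

    from azure.cognitiveservices.vision.face.models import DetectionModel, RecognitionModel

    # Assumes `client` is an already-constructed FaceClient.
    faces = client.face.detect_with_url(
        url="https://example.com/photo.jpg",
        detection_model=DetectionModel.detection_03,
        recognition_model=RecognitionModel.recognition_04,
        return_recognition_model=True,
    )
    for face in faces:
        print(face.face_id, face.recognition_model)
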
class azure.cognitiveservices.vision.face.models.SimilarFace(*, confidence: float, face_id: Optional[str] = None, persisted_face_id: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Response body for find similar face operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_id (str) – FaceId of the candidate face when finding by faceIds. faceId is created by Face - Detect and will expire at the time specified by faceIdTimeToLive after the detection call.

  • persisted_face_id (str) – PersistedFaceId of the candidate face when finding by faceListId. persistedFaceId in a face list is persisted and will not expire.

  • confidence (float) – Required. Similarity confidence of the candidate face. The higher the confidence, the more similar the faces. Range: [0, 1].

class azure.cognitiveservices.vision.face.models.Snapshot(*, id: str, account: str, type, apply_scope, created_time, last_update_time, user_data: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Snapshot object.

All required parameters must be populated in order to send to Azure.

Parameters
  • id (str) – Required. Snapshot id.

  • account (str) – Required. Azure Cognitive Service Face account id of the subscriber who created the snapshot by Snapshot - Take.

  • type (str or SnapshotObjectType) – Required. Type of the source object in the snapshot, specified by the subscriber who created the snapshot when calling Snapshot - Take. Currently FaceList, PersonGroup, LargeFaceList and LargePersonGroup are supported. Possible values include: ‘FaceList’, ‘LargeFaceList’, ‘LargePersonGroup’, ‘PersonGroup’

  • apply_scope (list[str]) – Required. Array of the target Face subscription ids for the snapshot, specified by the user who created the snapshot when calling Snapshot - Take. For each snapshot, only subscriptions included in the applyScope of Snapshot - Take can apply it.

  • user_data (str) – User specified data about the snapshot for any purpose. Length should not exceed 16KB.

  • created_time (datetime) – Required. A combined UTC date and time string that describes the created time of the snapshot. E.g. 2018-12-25T11:41:02.2331413Z.

  • last_update_time (datetime) – Required. A combined UTC date and time string that describes the last time when the snapshot was created or updated by Snapshot - Update. E.g. 2018-12-25T11:51:27.8705696Z.

class azure.cognitiveservices.vision.face.models.SnapshotApplyMode(value)[source]

Bases: str, enum.Enum

An enumeration.

create_new = 'CreateNew'
class azure.cognitiveservices.vision.face.models.SnapshotObjectType(value)[source]

Bases: str, enum.Enum

An enumeration.

face_list = 'FaceList'
large_face_list = 'LargeFaceList'
large_person_group = 'LargePersonGroup'
person_group = 'PersonGroup'
class azure.cognitiveservices.vision.face.models.TakeSnapshotRequest(*, type, object_id: str, apply_scope, user_data: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Request body for taking snapshot operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • type (str or SnapshotObjectType) – Required. User specified type for the source object to take snapshot from. Currently FaceList, PersonGroup, LargeFaceList and LargePersonGroup are supported. Possible values include: ‘FaceList’, ‘LargeFaceList’, ‘LargePersonGroup’, ‘PersonGroup’

  • object_id (str) – Required. User specified source object id to take snapshot from.

  • apply_scope (list[str]) – Required. User specified array of target Face subscription ids for the snapshot. For each snapshot, only subscriptions included in the applyScope of Snapshot - Take can apply it.

  • user_data (str) – User specified data about the snapshot for any purpose. Length should not exceed 16KB.
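
A self-contained sketch constructing the two snapshot request bodies from this module; how they are submitted (the snapshot operations on the client) is outside this module, and the subscription ids and object ids below are placeholders.

    from azure.cognitiveservices.vision.face.models import (
        ApplySnapshotRequest, SnapshotObjectType, TakeSnapshotRequest)

    # Body for taking a snapshot of an existing large person group and sharing
    # it with two target subscriptions.
    take_body = TakeSnapshotRequest(
        type=SnapshotObjectType.large_person_group,
        object_id="my-large-person-group",
        apply_scope=["00000000-0000-0000-0000-000000000001",
                     "00000000-0000-0000-0000-000000000002"],
        user_data="migration snapshot",
    )

    # Body the target subscription would use to materialize the snapshot under
    # a new object id; only CreateNew mode is currently supported.
    apply_body = ApplySnapshotRequest(object_id="my-migrated-group", mode="CreateNew")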

class azure.cognitiveservices.vision.face.models.TrainingStatus(*, status, created, last_action=None, last_successful_training=None, message: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Training status object.

All required parameters must be populated in order to send to Azure.

Parameters
  • status (str or TrainingStatusType) – Required. Training status: nonstarted, running, succeeded, failed. If the training is queued and has not yet started, the status is nonstarted. If the training is ongoing, the status is running. Status succeeded means this person group or large person group is ready for Face - Identify, or this large face list is ready for Face - Find Similar. Status failed is often caused by no person or no persisted face existing in the person group or large person group, or no persisted face existing in the large face list. Possible values include: ‘nonstarted’, ‘running’, ‘succeeded’, ‘failed’

  • created (datetime) – Required. A combined UTC date and time string that describes the created time of the person group, large person group or large face list.

  • last_action (datetime) – A combined UTC date and time string that describes the last modify time of the person group, large person group or large face list; may be null when the group has not been successfully trained.

  • last_successful_training (datetime) – A combined UTC date and time string that describes the last successful training time of the person group, large person group or large face list.

  • message (str) – Failure message when training fails (omitted when training succeeds).
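
A hedged sketch of polling training status for a person group. The train and get_training_status operations belong to the client's person_group operation group and are assumptions here; the returned object is the TrainingStatus model documented above.

    import time

    # Assumes `client` is an already-constructed FaceClient and that
    # "my-person-group" already contains persons with persisted faces.
    client.person_group.train("my-person-group")

    while True:
        status = client.person_group.get_training_status("my-person-group")  # TrainingStatus
        if status.status in ("succeeded", "failed"):
            break
        time.sleep(1)

    if status.status == "failed":
        print("training failed:", status.message)
    else:
        print("trained, last successful training at", status.last_successful_training)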

class azure.cognitiveservices.vision.face.models.TrainingStatusType(value)[source]

Bases: str, enum.Enum

An enumeration.

failed = 'failed'
nonstarted = 'nonstarted'
running = 'running'
succeeded = 'succeeded'
class azure.cognitiveservices.vision.face.models.UpdateFaceRequest(*, user_data: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Request to update face data.

Parameters

user_data (str) – User-provided data attached to the face. The size limit is 1KB.

class azure.cognitiveservices.vision.face.models.UpdateSnapshotRequest(*, apply_scope=None, user_data: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Request body for updating a snapshot, with a combination of user defined apply scope and user specified data.

Parameters
  • apply_scope (list[str]) – Array of the target Face subscription ids for the snapshot, specified by the user who created the snapshot when calling Snapshot - Take. For each snapshot, only subscriptions included in the applyScope of Snapshot - Take can apply it.

  • user_data (str) – User specified data about the snapshot for any purpose. Length should not exceed 16KB.

class azure.cognitiveservices.vision.face.models.VerifyFaceToFaceRequest(*, face_id1: str, face_id2: str, **kwargs)[source]

Bases: msrest.serialization.Model

Request body for face to face verification.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_id1 (str) – Required. FaceId of the first face, comes from Face - Detect

  • face_id2 (str) – Required. FaceId of the second face, comes from Face - Detect

class azure.cognitiveservices.vision.face.models.VerifyFaceToPersonRequest(*, face_id: str, person_id: str, person_group_id: Optional[str] = None, large_person_group_id: Optional[str] = None, **kwargs)[source]

Bases: msrest.serialization.Model

Request body for face to person verification.

All required parameters must be populated in order to send to Azure.

Parameters
  • face_id (str) – Required. FaceId of the face, comes from Face - Detect

  • person_group_id (str) – Using existing personGroupId and personId for fast loading a specified person. personGroupId is created in PersonGroup - Create. Parameter personGroupId and largePersonGroupId should not be provided at the same time.

  • large_person_group_id (str) – Using existing largePersonGroupId and personId for fast loading a specified person. largePersonGroupId is created in LargePersonGroup - Create. Parameter personGroupId and largePersonGroupId should not be provided at the same time.

  • person_id (str) – Required. Specify a certain person in a person group or a large person group. personId is created in PersonGroup Person - Create or LargePersonGroup Person - Create.

class azure.cognitiveservices.vision.face.models.VerifyResult(*, is_identical: bool, confidence: float, **kwargs)[source]

Bases: msrest.serialization.Model

Result of the verify operation.

All required parameters must be populated in order to send to Azure.

Parameters
  • is_identical (bool) – Required. True if the two faces belong to the same person or the face belongs to the person, otherwise false.

  • confidence (float) – Required. A number indicating the similarity confidence of whether two faces belong to the same person, or whether the face belongs to the person. By default, isIdentical is set to True if the similarity confidence is greater than or equal to 0.5. This is useful for advanced users to override “isIdentical” and fine-tune the result on their own data.
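
A hedged sketch of a face-to-face verification call; verify_face_to_face is an operation on the client's face operation group and the two faceIds are assumed to come from earlier detect calls. The returned object is the VerifyResult model documented above.

    # Assumes `client` is an already-constructed FaceClient and that
    # `face_id1` and `face_id2` were returned by detect calls.
    verification = client.face.verify_face_to_face(face_id1, face_id2)  # VerifyResult

    print("same person?", verification.is_identical,
          "confidence:", round(verification.confidence, 2))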