Properties

autoPause (Optional)
    Auto-pausing properties.

autoScale (Optional)
    Auto-scaling properties.

cacheSize (Optional)
    The cache size.

creationDate (Optional)
    The time when the Big Data pool was created.

customLibraries (Optional)
    List of custom libraries/packages associated with the spark pool.

defaultSparkLogFolder (Optional)
    The default folder where Spark logs will be written.

dynamicExecutorAllocation (Optional)
    Dynamic Executor Allocation.

id (Optional, Readonly)
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} NOTE: This property will not be serialized. It can only be populated by the server.

isComputeIsolationEnabled (Optional)
    Whether compute isolation is required or not.

lastSucceededTimestamp (Optional, Readonly)
    The time when the Big Data pool was updated successfully. NOTE: This property will not be serialized. It can only be populated by the server.

libraryRequirements (Optional)
    Library version requirements.

location
    The geo-location where the resource lives.

name (Optional, Readonly)
    The name of the resource. NOTE: This property will not be serialized. It can only be populated by the server.

nodeCount (Optional)
    The number of nodes in the Big Data pool.

nodeSize (Optional)
    The level of compute power that each node in the Big Data pool has.

nodeSizeFamily (Optional)
    The kind of nodes that the Big Data pool provides.

provisioningState (Optional)
    The state of the Big Data pool.

sessionLevelPackagesEnabled (Optional)
    Whether session-level packages are enabled.

sparkConfigProperties (Optional)
    Spark configuration file to specify additional properties.

sparkEventsFolder (Optional)
    The Spark events folder.

sparkVersion (Optional)
    The Apache Spark version.

tags (Optional)
    Resource tags.

type (Optional, Readonly)
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts". NOTE: This property will not be serialized. It can only be populated by the server.
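The documented shape can be sketched as a TypeScript object literal. The interface below is a minimal local re-declaration for illustration only (a subset of the properties listed above); the field names inside the nested autoPause and autoScale objects are assumptions and are not taken from this page.

```typescript
// Minimal sketch of the Big Data pool shape described above.
// Local re-declaration for illustration; nested field names are assumptions.
interface BigDataPoolInfoSketch {
  location: string;                  // geo-location where the resource lives (required)
  autoPause?: { enabled?: boolean; delayInMinutes?: number };
  autoScale?: { enabled?: boolean; minNodeCount?: number; maxNodeCount?: number };
  nodeCount?: number;                // number of nodes in the pool
  nodeSize?: string;                 // compute power of each node
  nodeSizeFamily?: string;           // kind of nodes the pool provides
  sparkVersion?: string;             // Apache Spark version
  tags?: Record<string, string>;     // resource tags
  readonly id?: string;              // server-populated; not serialized on requests
  readonly name?: string;            // server-populated; not serialized on requests
  readonly type?: string;            // server-populated; not serialized on requests
}

// Example request payload: only writable, non-readonly properties are set;
// id, name, and type are left for the server to populate.
const pool: BigDataPoolInfoSketch = {
  location: "eastus",
  sparkVersion: "3.4",
  nodeCount: 4,
  nodeSize: "Medium",
  nodeSizeFamily: "MemoryOptimized",
  autoPause: { enabled: true, delayInMinutes: 15 },
  tags: { env: "dev" },
};
```

Note that the readonly properties are omitted from the payload on purpose: per the NOTE text above, they are not serialized and can only be populated by the server.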
A Big Data pool.

Generated using TypeDoc