interface BatchTransformInputProperty
Language | Type name |
---|---|
.NET | Amazon.CDK.AWS.Sagemaker.CfnModelQualityJobDefinition.BatchTransformInputProperty |
Go | github.com/aws/aws-cdk-go/awscdk/v2/awssagemaker#CfnModelQualityJobDefinition_BatchTransformInputProperty |
Java | software.amazon.awscdk.services.sagemaker.CfnModelQualityJobDefinition.BatchTransformInputProperty |
Python | aws_cdk.aws_sagemaker.CfnModelQualityJobDefinition.BatchTransformInputProperty |
TypeScript | aws-cdk-lib » aws_sagemaker » CfnModelQualityJobDefinition » BatchTransformInputProperty |
Input object for the batch transform job.
Example
```ts
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { aws_sagemaker as sagemaker } from 'aws-cdk-lib';

const batchTransformInputProperty: sagemaker.CfnModelQualityJobDefinition.BatchTransformInputProperty = {
  dataCapturedDestinationS3Uri: 'dataCapturedDestinationS3Uri',
  datasetFormat: {
    csv: {
      header: false,
    },
    json: {
      line: false,
    },
    parquet: false,
  },
  localPath: 'localPath',

  // the properties below are optional
  endTimeOffset: 'endTimeOffset',
  inferenceAttribute: 'inferenceAttribute',
  probabilityAttribute: 'probabilityAttribute',
  probabilityThresholdAttribute: 123,
  s3DataDistributionType: 's3DataDistributionType',
  s3InputMode: 's3InputMode',
  startTimeOffset: 'startTimeOffset',
};
```
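To show where this property sits in practice, here is a minimal dependency-free sketch of how a `BatchTransformInputProperty`-shaped object nests inside the parent resource's `modelQualityJobInput`. The nesting mirrors the CloudFormation `AWS::SageMaker::ModelQualityJobDefinition` resource; the S3 URIs and paths are hypothetical placeholders, and the surrounding field names should be checked against the `CfnModelQualityJobDefinition` reference.

```typescript
// Sketch (not a full CDK app): where the batch transform input fits within
// the parent job definition's input. Field names mirror the CloudFormation
// ModelQualityJobInput shape and are an assumption to verify.
const batchTransformInput = {
  dataCapturedDestinationS3Uri: 's3://my-bucket/data-capture', // hypothetical URI
  datasetFormat: { csv: { header: true } },
  localPath: '/opt/ml/processing/input',
};

// A model-quality job consumes the captured batch transform output alongside
// the ground-truth labels it is compared against.
const modelQualityJobInput = {
  batchTransformInput,
  groundTruthS3Input: { s3Uri: 's3://my-bucket/ground-truth' }, // hypothetical URI
};
```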
Properties
Name | Type | Description |
---|---|---|
dataCapturedDestinationS3Uri | string | The Amazon S3 location being used to capture the data. |
datasetFormat | IResolvable \| DatasetFormatProperty | The dataset format for your batch transform job. |
localPath | string | Path to the filesystem where the batch transform data is available to the container. |
endTimeOffset? | string | If specified, monitoring jobs subtract this time from the end time. |
inferenceAttribute? | string | The attribute of the input data that represents the ground truth label. |
probabilityAttribute? | string | In a classification problem, the attribute that represents the class probability. |
probabilityThresholdAttribute? | number | The threshold for the class probability to be evaluated as a positive result. |
s3DataDistributionType? | string | Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. |
s3InputMode? | string | Whether Pipe or File is used as the input mode for transferring data for the monitoring job. |
startTimeOffset? | string | If specified, monitoring jobs subtract this time from the start time. |
dataCapturedDestinationS3Uri
Type:
string
The Amazon S3 location being used to capture the data.
datasetFormat
Type:
IResolvable | DatasetFormatProperty
The dataset format for your batch transform job.
localPath
Type:
string
Path to the filesystem where the batch transform data is available to the container.
endTimeOffset?
Type:
string
(optional)
If specified, monitoring jobs subtract this time from the end time.
For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
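As a concrete illustration of the offset format: monitoring offsets are expressed as negative ISO 8601 durations such as `'-PT1H'` (one hour) or `'-P7D'` (seven days). The validation helper below is a hand-rolled sketch covering the common day/hour/minute/second forms, not the full ISO 8601 grammar, and is not part of the CDK API.

```typescript
// Negative ISO 8601 duration, e.g. '-PT1H' or '-P7D'. Simplified pattern:
// an assumption for illustration, not the complete ISO 8601 grammar.
const OFFSET_PATTERN = /^-P(\d+D)?(T(\d+H)?(\d+M)?(\d+S)?)?$/;

function isPlausibleOffset(offset: string): boolean {
  // Reject a bare '-P' with no duration components.
  return OFFSET_PATTERN.test(offset) && offset !== '-P';
}
```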
inferenceAttribute?
Type:
string
(optional)
The attribute of the input data that represents the ground truth label.
probabilityAttribute?
Type:
string
(optional)
In a classification problem, the attribute that represents the class probability.
probabilityThresholdAttribute?
Type:
number
(optional)
The threshold for the class probability to be evaluated as a positive result.
s3DataDistributionType?
Type:
string
(optional)
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key.
Defaults to FullyReplicated.
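For reference, the two distribution strategies described above can be captured as a string-literal union. The value names below match the CloudFormation allowed values for this field, though that mapping is an assumption worth verifying against the resource specification.

```typescript
// Allowed values for s3DataDistributionType (assumed from the CloudFormation
// spec): copy the full dataset to every instance, or shard it by S3 key.
type S3DataDistributionType = 'FullyReplicated' | 'ShardedByS3Key';

const distribution: S3DataDistributionType = 'FullyReplicated'; // the documented default
```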
s3InputMode?
Type:
string
(optional)
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
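The Pipe-versus-File tradeoff above can be sketched as a small helper. The `'Pipe'` and `'File'` string values are the property's documented modes; the one-GiB threshold is an arbitrary illustration, not an AWS recommendation.

```typescript
// 'File' stages the dataset on disk before the job starts; 'Pipe' streams it.
// The size cutoff below is an illustrative assumption only.
type S3InputMode = 'Pipe' | 'File';

function chooseInputMode(datasetSizeBytes: number): S3InputMode {
  const ONE_GIB = 1024 ** 3;
  // Stream large datasets; small ones that fit in memory can use File mode.
  return datasetSizeBytes > ONE_GIB ? 'Pipe' : 'File';
}
```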
startTimeOffset?
Type:
string
(optional)
If specified, monitoring jobs subtract this time from the start time.
For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.