Create Batch Prediction
machinelearning_create_batch_prediction | R Documentation |
Generates predictions for a group of observations
Description
Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.
create_batch_prediction is an asynchronous operation. In response to create_batch_prediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.
You can poll for status updates by using the get_batch_prediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
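The polling pattern described above can be sketched in R as follows. This is a minimal illustration, not part of the package documentation: the client variable `svc` and the batch prediction ID are placeholders, and the sleep interval is an arbitrary choice.

```r
# Sketch of polling for batch prediction status (IDs are placeholders).
svc <- paws::machinelearning()

repeat {
  resp <- svc$get_batch_prediction(BatchPredictionId = "example-bp-id")
  if (resp$Status %in% c("COMPLETED", "FAILED")) break
  Sys.sleep(30)  # wait before checking the status again
}
# On COMPLETED, the results are in the S3 location given by OutputUri.
```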
Usage
machinelearning_create_batch_prediction(BatchPredictionId,
BatchPredictionName, MLModelId, BatchPredictionDataSourceId, OutputUri)
Arguments

BatchPredictionId: [required] A user-supplied ID that uniquely identifies the BatchPrediction.

BatchPredictionName: A user-supplied name or description of the BatchPrediction.

MLModelId: [required] The ID of the MLModel that will be used to generate the predictions.

BatchPredictionDataSourceId: [required] The ID of the DataSource that references the observations to process.

OutputUri: [required] The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the s3 key portion of the OutputUri field: ':', '//', '/./', '/../'. Amazon ML needs permissions to store and retrieve the logs on your behalf. For information about how to set permissions, see the Amazon Machine Learning Developer Guide.
Value
A list with the following syntax:
list(
BatchPredictionId = "string"
)
Request syntax
svc$create_batch_prediction(
BatchPredictionId = "string",
BatchPredictionName = "string",
MLModelId = "string",
BatchPredictionDataSourceId = "string",
OutputUri = "string"
)
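As a sketch of a complete call under assumed inputs: every ID and the S3 URI below are placeholders, and running this requires AWS credentials with Amazon ML permissions.

```r
# Sketch only: all IDs and the bucket URI are hypothetical placeholders.
svc <- paws::machinelearning()

resp <- svc$create_batch_prediction(
  BatchPredictionId = "example-bp-id",            # user-supplied unique ID
  BatchPredictionName = "Example batch prediction",
  MLModelId = "example-ml-model-id",              # existing MLModel
  BatchPredictionDataSourceId = "example-ds-id",  # DataSource with observations
  OutputUri = "s3://example-bucket/results/"      # where results are written
)

# The response is a list echoing the ID you supplied.
resp$BatchPredictionId
```

Because the operation is asynchronous, a successful return only means the request was accepted; the BatchPrediction starts in the PENDING state.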