Create Evaluation
machinelearning_create_evaluation | R Documentation

Creates a new Evaluation of an MLModel

Description
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effectively the MLModel functions on the test data.

Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.
create_evaluation is an asynchronous operation. In response to create_evaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the get_evaluation operation to check the progress of the evaluation during the creation operation.
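Because the call returns before the evaluation finishes, a caller typically polls for completion. The following is a minimal sketch, assuming a paws machinelearning client stored in svc and a hypothetical evaluation ID:

# Wait for Amazon ML to finish creating the Evaluation
# (status moves from PENDING/INPROGRESS to COMPLETED or FAILED).
status <- "PENDING"
while (status %in% c("PENDING", "INPROGRESS")) {
  Sys.sleep(30)
  status <- svc$get_evaluation(EvaluationId = "my-evaluation-id")$Status
}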
Usage
machinelearning_create_evaluation(EvaluationId, EvaluationName,
MLModelId, EvaluationDataSourceId)
Arguments

EvaluationId
[required] A user-supplied ID that uniquely identifies the Evaluation.

EvaluationName
A user-supplied name or description of the Evaluation.

MLModelId
[required] The ID of the MLModel to evaluate. The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.

EvaluationDataSourceId
[required] The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.
Value
A list with the following syntax:
list(
EvaluationId = "string"
)
Request syntax
svc$create_evaluation(
EvaluationId = "string",
EvaluationName = "string",
MLModelId = "string",
EvaluationDataSourceId = "string"
)
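For example, a sketch of a complete call, assuming a client created with paws::machinelearning() and hypothetical resource IDs; the returned list carries the same EvaluationId that was supplied:

library(paws)
svc <- machinelearning()

# Hypothetical IDs; replace with your own MLModel and test DataSource IDs.
resp <- svc$create_evaluation(
  EvaluationId = "my-evaluation-id",
  EvaluationName = "Evaluation of my-mlmodel on held-out data",
  MLModelId = "my-mlmodel-id",
  EvaluationDataSourceId = "my-test-datasource-id"
)
resp$EvaluationId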