Detect Moderation Labels
rekognition_detect_moderation_labels | R Documentation
Detects unsafe content in a specified JPEG or PNG format image
Description
Detects unsafe content in a specified JPEG or PNG format image. Use
detect_moderation_labels
to moderate images depending on your
requirements. For example, you might want to filter images that contain
nudity, but not images containing suggestive content.
To filter images, use the labels returned by detect_moderation_labels
to determine which types of content are appropriate.
For information about moderation labels, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.
You can specify an adapter to use when retrieving label predictions by providing a ProjectVersionArn to the ProjectVersion argument.
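As a minimal sketch (assuming a paws Rekognition client and a bucket and object name of your own; `my-bucket` and `photo.jpg` below are placeholders), a call that moderates an image stored in Amazon S3 might look like:

    # Hypothetical bucket and key; substitute your own S3 image.
    svc <- paws::rekognition()
    result <- svc$detect_moderation_labels(
      Image = list(
        S3Object = list(
          Bucket = "my-bucket",
          Name = "photo.jpg"
        )
      ),
      MinConfidence = 60.0
    )

Passing the image as an S3 reference avoids base64-encoding the bytes yourself; alternatively, supply raw image bytes in the Bytes field when calling through an SDK.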
Usage

rekognition_detect_moderation_labels(Image, MinConfidence, HumanLoopConfig,
  ProjectVersion)
Arguments
Image

[required] The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition Developer Guide.

MinConfidence
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.

If you don't specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.

HumanLoopConfig
Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.
ProjectVersion
Identifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
Value
A list with the following syntax:
list(
  ModerationLabels = list(
    list(
      Confidence = 123.0,
      Name = "string",
      ParentName = "string",
      TaxonomyLevel = 123
    )
  ),
  ModerationModelVersion = "string",
  HumanLoopActivationOutput = list(
    HumanLoopArn = "string",
    HumanLoopActivationReasons = list(
      "string"
    ),
    HumanLoopActivationConditionsEvaluationResults = "string"
  ),
  ProjectVersion = "string",
  ContentTypes = list(
    list(
      Confidence = 123.0,
      Name = "string"
    )
  )
)
Request syntax
svc$detect_moderation_labels(
  Image = list(
    Bytes = raw,
    S3Object = list(
      Bucket = "string",
      Name = "string",
      Version = "string"
    )
  ),
  MinConfidence = 123.0,
  HumanLoopConfig = list(
    HumanLoopName = "string",
    FlowDefinitionArn = "string",
    DataAttributes = list(
      ContentClassifiers = list(
        "FreeOfPersonallyIdentifiableInformation"|"FreeOfAdultContent"
      )
    )
  ),
  ProjectVersion = "string"
)
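To filter on the returned labels, as the description suggests (for example, rejecting nudity while allowing suggestive content), you can scan the Name field of each entry in ModerationLabels. This is a sketch assuming `result` holds a response from a prior detect_moderation_labels call; the label name checked here is an assumption based on the Amazon Rekognition moderation taxonomy:

    # Collect the name of every moderation label in the response.
    label_names <- vapply(
      result$ModerationLabels,
      function(label) label$Name,
      character(1)
    )

    # Flag the image only when an explicit-nudity label was detected.
    contains_nudity <- "Explicit Nudity" %in% label_names

A label's ParentName and TaxonomyLevel fields can be used the same way to filter on broader or narrower categories than a single label name.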