Detect Toxic Content
comprehend_detect_toxic_content | R Documentation
Performs toxicity analysis on the list of text strings that you provide as input
Description
Performs toxicity analysis on the list of text strings that you provide as input. The API response contains a results list that matches the size of the input list. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.
Usage
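comprehend_detect_toxic_content(TextSegments, LanguageCode)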
Arguments
TextSegments
[required] A list of up to 10 text strings. Each string has a maximum size of 1 KB, and the maximum size of the list is 10 KB.
LanguageCode
[required] The language of the input text. Currently, English is the only supported language.
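For illustration only, a sketch of building the TextSegments argument, assuming each segment is wrapped as list(Text = ...) following the underlying Amazon Comprehend TextSegment structure (the wrapping is an assumption, not stated above):

segments <- list(
  list(Text = "First comment to screen."),
  list(Text = "Second comment to screen.")
)
# Keep each string under 1 KB, at most 10 segments, and the whole list under 10 KB.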
Value
A list with the following syntax:
list(
  ResultList = list(
    list(
      Labels = list(
        list(
          Name = "GRAPHIC"|"HARASSMENT_OR_ABUSE"|"HATE_SPEECH"|"INSULT"|"PROFANITY"|"SEXUAL"|"VIOLENCE_OR_THREAT",
          Score = 123.0
        )
      ),
      Toxicity = 123.0
    )
  )
)
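A minimal end-to-end sketch, assuming AWS credentials and region are already configured and that the operation is reached through a paws Comprehend client as svc$detect_toxic_content; the response field names follow the Value syntax above:

library(paws)

# Create a Comprehend client (assumes credentials/region come from the environment).
svc <- comprehend()

resp <- svc$detect_toxic_content(
  TextSegments = list(
    list(Text = "First comment to screen."),   # each segment wrapped as list(Text = ...), an assumption
    list(Text = "Second comment to screen.")
  ),
  LanguageCode = "en"
)

# ResultList matches the input order; Toxicity is the overall score for a segment,
# and Labels holds the per-category scores.
for (result in resp$ResultList) {
  cat("Overall toxicity:", result$Toxicity, "\n")
  for (label in result$Labels) {
    cat("  ", label$Name, label$Score, "\n")
  }
}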