Update Recipe Job
Description
Modifies the definition of an existing DataBrew recipe job.
Usage
gluedatabrew_update_recipe_job(EncryptionKeyArn, EncryptionMode, Name,
  LogSubscription, MaxCapacity, MaxRetries, Outputs, DataCatalogOutputs,
  DatabaseOutputs, RoleArn, Timeout)
Arguments
EncryptionKeyArn
    The Amazon Resource Name (ARN) of an encryption key that is used to protect the job.

EncryptionMode
    The encryption mode for the job, which can be one of the following:

    - SSE-KMS - Server-side encryption with keys managed by KMS.
    - SSE-S3 - Server-side encryption with keys managed by Amazon S3.

Name
    [required] The name of the job to update.

LogSubscription
    Enables or disables Amazon CloudWatch logging for the job. If logging is enabled, CloudWatch writes one log stream for each job run.

MaxCapacity
    The maximum number of nodes that DataBrew can consume when the job processes data.

MaxRetries
    The maximum number of times to retry the job after a job run fails.

Outputs
    One or more artifacts that represent the output from running the job. A sketch showing how to build one of these output objects follows this list.

DataCatalogOutputs
    One or more artifacts that represent the Glue Data Catalog output from running the job.

DatabaseOutputs
    Represents a list of JDBC database output objects that define the output destinations for a DataBrew recipe job to write into.

RoleArn
    [required] The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role to be assumed when DataBrew runs the job.

Timeout
    The job's timeout in minutes. A job that attempts to run longer than this timeout period ends with a status of TIMEOUT.
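As an illustration of the Outputs argument described above, the following sketch assembles a single CSV output entry as a plain R list, matching the shape shown in the request syntax below. The bucket name, key prefix, and other values are hypothetical placeholders, not values from this page.

    # A minimal sketch of one entry for the Outputs argument.
    # The bucket, key, and delimiter are hypothetical placeholders.
    csv_output <- list(
      Format = "CSV",
      CompressionFormat = "GZIP",
      Location = list(
        Bucket = "my-databrew-results",   # hypothetical bucket
        Key = "jobs/sales-recipe-job/"    # hypothetical key prefix
      ),
      Overwrite = TRUE,
      FormatOptions = list(
        Csv = list(Delimiter = ",")
      )
    )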
Value
A list with the following syntax:

list(
  Name = "string"
)
Request syntax
svc$update_recipe_job(
  EncryptionKeyArn = "string",
  EncryptionMode = "SSE-KMS"|"SSE-S3",
  Name = "string",
  LogSubscription = "ENABLE"|"DISABLE",
  MaxCapacity = 123,
  MaxRetries = 123,
  Outputs = list(
    list(
      CompressionFormat = "GZIP"|"LZ4"|"SNAPPY"|"BZIP2"|"DEFLATE"|"LZO"|"BROTLI"|"ZSTD"|"ZLIB",
      Format = "CSV"|"JSON"|"PARQUET"|"GLUEPARQUET"|"AVRO"|"ORC"|"XML"|"TABLEAUHYPER",
      PartitionColumns = list(
        "string"
      ),
      Location = list(
        Bucket = "string",
        Key = "string",
        BucketOwner = "string"
      ),
      Overwrite = TRUE|FALSE,
      FormatOptions = list(
        Csv = list(
          Delimiter = "string"
        )
      ),
      MaxOutputFiles = 123
    )
  ),
  DataCatalogOutputs = list(
    list(
      CatalogId = "string",
      DatabaseName = "string",
      TableName = "string",
      S3Options = list(
        Location = list(
          Bucket = "string",
          Key = "string",
          BucketOwner = "string"
        )
      ),
      DatabaseOptions = list(
        TempDirectory = list(
          Bucket = "string",
          Key = "string",
          BucketOwner = "string"
        ),
        TableName = "string"
      ),
      Overwrite = TRUE|FALSE
    )
  ),
  DatabaseOutputs = list(
    list(
      GlueConnectionName = "string",
      DatabaseOptions = list(
        TempDirectory = list(
          Bucket = "string",
          Key = "string",
          BucketOwner = "string"
        ),
        TableName = "string"
      ),
      DatabaseOutputMode = "NEW_TABLE"
    )
  ),
  RoleArn = "string",
  Timeout = 123
)
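Examples

A minimal sketch of a call to this operation, reusing the csv_output object built in the Arguments section. The job name and role ARN are hypothetical placeholders; svc is assumed to be a DataBrew client created with paws::gluedatabrew().

    # A minimal sketch, assuming a DataBrew client from paws::gluedatabrew().
    # The job name and role ARN below are hypothetical.
    svc <- paws::gluedatabrew()
    resp <- svc$update_recipe_job(
      Name = "sales-recipe-job",                               # hypothetical job name
      RoleArn = "arn:aws:iam::111122223333:role/DataBrewRole", # hypothetical role
      Outputs = list(csv_output),
      LogSubscription = "ENABLE",
      MaxRetries = 1,
      Timeout = 2880
    )
    resp$Name  # the name of the job that was updated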