
Create Event Source Mapping

lambda_create_event_source_mapping R Documentation

Creates a mapping between an event source and a Lambda function

Description

Creates a mapping between an event source and a Lambda function. Lambda reads items from the event source and invokes the function.

For details about how to configure different event sources, see the following topics.

The following error handling options are available only for stream sources (DynamoDB and Kinesis):

  • BisectBatchOnFunctionError – If the function returns an error, split the batch in two and retry.

  • DestinationConfig – Send discarded records to an Amazon SQS queue or Amazon SNS topic.

  • MaximumRecordAgeInSeconds – Discard records older than the specified age. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

  • MaximumRetryAttempts – Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

  • ParallelizationFactor – Process multiple batches from each shard concurrently.
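A stream mapping that enables several of these error handling options might look like the following. This is a sketch only: it assumes a paws Lambda client stored in `svc`, and the stream and queue ARNs are placeholders, not real resources.

```r
# Assumes svc <- paws::lambda() was created with valid credentials.
# All ARNs below are placeholders.
svc$create_event_source_mapping(
  EventSourceArn = "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
  FunctionName = "MyFunction",
  StartingPosition = "LATEST",
  # Split a failing batch in two and retry each half.
  BisectBatchOnFunctionError = TRUE,
  # Give up on a record after 3 retries or once it is an hour old.
  MaximumRetryAttempts = 3,
  MaximumRecordAgeInSeconds = 3600,
  # Send discarded records to an SQS queue.
  DestinationConfig = list(
    OnFailure = list(
      Destination = "arn:aws:sqs:us-west-2:123456789012:my-dlq"
    )
  )
)
```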

For information about which configuration parameters apply to each event source, see the following topics.

Usage

lambda_create_event_source_mapping(EventSourceArn, FunctionName,
  Enabled, BatchSize, FilterCriteria, MaximumBatchingWindowInSeconds,
  ParallelizationFactor, StartingPosition, StartingPositionTimestamp,
  DestinationConfig, MaximumRecordAgeInSeconds,
  BisectBatchOnFunctionError, MaximumRetryAttempts,
  TumblingWindowInSeconds, Topics, Queues, SourceAccessConfigurations,
  SelfManagedEventSource, FunctionResponseTypes,
  AmazonManagedKafkaEventSourceConfig, SelfManagedKafkaEventSourceConfig,
  ScalingConfig, DocumentDBEventSourceConfig, KMSKeyArn)

Arguments

EventSourceArn

The Amazon Resource Name (ARN) of the event source.

  • Amazon Kinesis – The ARN of the data stream or a stream consumer.

  • Amazon DynamoDB Streams – The ARN of the stream.

  • Amazon Simple Queue Service – The ARN of the queue.

  • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).

  • Amazon MQ – The ARN of the broker.

  • Amazon DocumentDB – The ARN of the DocumentDB change stream.

FunctionName

[required] The name or ARN of the Lambda function.

Name formats

  • Function name – MyFunction.

  • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.

  • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.

  • Partial ARN – 123456789012:function:MyFunction.

The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

Enabled

When true, the event source mapping is active. When false, Lambda pauses polling and invocation.

Default: True

BatchSize

The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

  • Amazon Kinesis – Default 100. Max 10,000.

  • Amazon DynamoDB Streams – Default 100. Max 10,000.

  • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.

  • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.

  • Self-managed Apache Kafka – Default 100. Max 10,000.

  • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.

  • DocumentDB – Default 100. Max 10,000.

FilterCriteria

An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
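Each filter pattern is a JSON string. For instance, to invoke the function only for DynamoDB INSERT events (a sketch, assuming a paws Lambda client `svc` and a placeholder stream ARN):

```r
# Placeholder ARN; FilterCriteria drops non-matching records before invocation.
svc$create_event_source_mapping(
  EventSourceArn = "arn:aws:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-01-01T00:00:00.000",
  FunctionName = "MyFunction",
  StartingPosition = "TRIM_HORIZON",
  FilterCriteria = list(
    Filters = list(
      list(Pattern = '{"eventName": ["INSERT"]}')
    )
  )
)
```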

MaximumBatchingWindowInSeconds

The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. You can configure MaximumBatchingWindowInSeconds to any value from 0 seconds to 300 seconds in increments of seconds.

For Kinesis, DynamoDB, and Amazon SQS event sources, the default batching window is 0 seconds. For Amazon MSK, Self-managed Apache Kafka, Amazon MQ, and DocumentDB event sources, the default batching window is 500 ms. Note that because you can only change MaximumBatchingWindowInSeconds in increments of seconds, you cannot revert back to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

Related setting: For Kinesis, DynamoDB, and Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
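For example, an SQS mapping with a batch size above 10 must also set a batching window (a sketch, assuming a paws Lambda client `svc` and a placeholder queue ARN):

```r
# BatchSize > 10 on an SQS source requires
# MaximumBatchingWindowInSeconds >= 1.
svc$create_event_source_mapping(
  EventSourceArn = "arn:aws:sqs:us-west-2:123456789012:my-queue",
  FunctionName = "MyFunction",
  BatchSize = 100,
  MaximumBatchingWindowInSeconds = 5
)
```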

ParallelizationFactor

(Kinesis and DynamoDB Streams only) The number of batches to process from each shard concurrently.

StartingPosition

The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB Streams event sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams, Amazon DocumentDB, Amazon MSK, and self-managed Apache Kafka.

StartingPositionTimestamp

With StartingPosition set to AT_TIMESTAMP, the time from which to start reading. StartingPositionTimestamp cannot be in the future.

DestinationConfig

(Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Kafka only) A configuration object that specifies the destination of an event after Lambda processes it.

MaximumRecordAgeInSeconds

(Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is infinite (-1).

BisectBatchOnFunctionError

(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry.

MaximumRetryAttempts

(Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

TumblingWindowInSeconds

(Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.

Topics

The name of the Kafka topic.

Queues

(MQ) The name of the Amazon MQ broker destination queue to consume.

SourceAccessConfigurations

An array of authentication protocols or VPC components required to secure your event source.

SelfManagedEventSource

The self-managed Apache Kafka cluster to receive records from.

FunctionResponseTypes

(Kinesis, DynamoDB Streams, and Amazon SQS) A list of current response type enums applied to the event source mapping.

AmazonManagedKafkaEventSourceConfig

Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.

SelfManagedKafkaEventSourceConfig

Specific configuration settings for a self-managed Apache Kafka event source.

ScalingConfig

(Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.

DocumentDBEventSourceConfig

Specific configuration settings for a DocumentDB event source.

KMSKeyArn

The ARN of the Key Management Service (KMS) customer managed key that Lambda uses to encrypt your function's filter criteria. By default, Lambda does not encrypt your filter criteria object. Specify this property to encrypt data using your own customer managed key.

Value

A list with the following syntax:

list(
  UUID = "string",
  StartingPosition = "TRIM_HORIZON"|"LATEST"|"AT_TIMESTAMP",
  StartingPositionTimestamp = as.POSIXct(
    "2015-01-01"
  ),
  BatchSize = 123,
  MaximumBatchingWindowInSeconds = 123,
  ParallelizationFactor = 123,
  EventSourceArn = "string",
  FilterCriteria = list(
    Filters = list(
      list(
        Pattern = "string"
      )
    )
  ),
  FunctionArn = "string",
  LastModified = as.POSIXct(
    "2015-01-01"
  ),
  LastProcessingResult = "string",
  State = "string",
  StateTransitionReason = "string",
  DestinationConfig = list(
    OnSuccess = list(
      Destination = "string"
    ),
    OnFailure = list(
      Destination = "string"
    )
  ),
  Topics = list(
    "string"
  ),
  Queues = list(
    "string"
  ),
  SourceAccessConfigurations = list(
    list(
      Type = "BASIC_AUTH"|"VPC_SUBNET"|"VPC_SECURITY_GROUP"|"SASL_SCRAM_512_AUTH"|"SASL_SCRAM_256_AUTH"|"VIRTUAL_HOST"|"CLIENT_CERTIFICATE_TLS_AUTH"|"SERVER_ROOT_CA_CERTIFICATE",
      URI = "string"
    )
  ),
  SelfManagedEventSource = list(
    Endpoints = list(
      list(
        "string"
      )
    )
  ),
  MaximumRecordAgeInSeconds = 123,
  BisectBatchOnFunctionError = TRUE|FALSE,
  MaximumRetryAttempts = 123,
  TumblingWindowInSeconds = 123,
  FunctionResponseTypes = list(
    "ReportBatchItemFailures"
  ),
  AmazonManagedKafkaEventSourceConfig = list(
    ConsumerGroupId = "string"
  ),
  SelfManagedKafkaEventSourceConfig = list(
    ConsumerGroupId = "string"
  ),
  ScalingConfig = list(
    MaximumConcurrency = 123
  ),
  DocumentDBEventSourceConfig = list(
    DatabaseName = "string",
    CollectionName = "string",
    FullDocument = "UpdateLookup"|"Default"
  ),
  KMSKeyArn = "string",
  FilterCriteriaError = list(
    ErrorCode = "string",
    Message = "string"
  )
)

Request syntax

svc$create_event_source_mapping(
  EventSourceArn = "string",
  FunctionName = "string",
  Enabled = TRUE|FALSE,
  BatchSize = 123,
  FilterCriteria = list(
    Filters = list(
      list(
        Pattern = "string"
      )
    )
  ),
  MaximumBatchingWindowInSeconds = 123,
  ParallelizationFactor = 123,
  StartingPosition = "TRIM_HORIZON"|"LATEST"|"AT_TIMESTAMP",
  StartingPositionTimestamp = as.POSIXct(
    "2015-01-01"
  ),
  DestinationConfig = list(
    OnSuccess = list(
      Destination = "string"
    ),
    OnFailure = list(
      Destination = "string"
    )
  ),
  MaximumRecordAgeInSeconds = 123,
  BisectBatchOnFunctionError = TRUE|FALSE,
  MaximumRetryAttempts = 123,
  TumblingWindowInSeconds = 123,
  Topics = list(
    "string"
  ),
  Queues = list(
    "string"
  ),
  SourceAccessConfigurations = list(
    list(
      Type = "BASIC_AUTH"|"VPC_SUBNET"|"VPC_SECURITY_GROUP"|"SASL_SCRAM_512_AUTH"|"SASL_SCRAM_256_AUTH"|"VIRTUAL_HOST"|"CLIENT_CERTIFICATE_TLS_AUTH"|"SERVER_ROOT_CA_CERTIFICATE",
      URI = "string"
    )
  ),
  SelfManagedEventSource = list(
    Endpoints = list(
      list(
        "string"
      )
    )
  ),
  FunctionResponseTypes = list(
    "ReportBatchItemFailures"
  ),
  AmazonManagedKafkaEventSourceConfig = list(
    ConsumerGroupId = "string"
  ),
  SelfManagedKafkaEventSourceConfig = list(
    ConsumerGroupId = "string"
  ),
  ScalingConfig = list(
    MaximumConcurrency = 123
  ),
  DocumentDBEventSourceConfig = list(
    DatabaseName = "string",
    CollectionName = "string",
    FullDocument = "UpdateLookup"|"Default"
  ),
  KMSKeyArn = "string"
)