Errors during ingestion

For errors that occur during ingestion (2), the situation is different. The forwarder does know which N of the X total events failed; if it failed the whole function invocation, all X events would be processed again. While the N failed events could potentially succeed on retry, the remaining X-N would definitely fail, because the data streams are append-only (the function would attempt to recreate already ingested documents using the same document IDs).
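To illustrate why only the failed events are worth retrying, here is a minimal sketch of per-event failure handling with the Elasticsearch Python client. The index name, ID field, and event shape are assumptions for illustration, not the forwarder's actual implementation; the key point is that "create" actions with explicit IDs reject already ingested documents instead of duplicating them.

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://localhost:9200")  # placeholder endpoint

def ingest(events):
    """Bulk-create events with explicit IDs; return only the ones that failed."""
    actions = (
        {
            "_op_type": "create",               # append-only: never overwrite
            "_index": "logs-generic-default",   # placeholder data stream
            "_id": event["id"],                 # deterministic per-event ID
            "_source": event["payload"],
        }
        for event in events
    )
    failed = []
    # raise_on_error=False collects per-item failures instead of aborting
    # the whole batch; already ingested documents come back as conflicts.
    for ok, item in helpers.streaming_bulk(
        es, actions, raise_on_error=False, raise_on_exception=False
    ):
        if not ok:
            failed.append(item)
    return failed  # only these N events need to be replayed
```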

So the forwarder won’t return a failure for errors during ingestion; instead, the payload of each event that failed to be ingested is sent to a replay SQS queue, which is automatically set up during deployment. The replay SQS queue is not set as an Event Source Mapping for the function by default, which means you can investigate the messages and consume them (or not) as preferred.
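As a rough sketch of that behaviour, parking a failed event's payload on the replay queue could look like the following with boto3; the queue URL and message shape are placeholders, not the forwarder's actual code.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder URL of the replay queue created during deployment.
REPLAY_QUEUE_URL = (
    "https://sqs.eu-west-1.amazonaws.com/123456789012/forwarder-replay-queue"
)

def send_to_replay(event_payload: dict) -> None:
    """Park the payload of a failed event on the replay queue for later inspection."""
    sqs.send_message(
        QueueUrl=REPLAY_QUEUE_URL,
        MessageBody=json.dumps(event_payload),
    )
```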

You can temporarily set the replay SQS queue as an Event Source Mapping for the forwarder, which means messages in the queue will be consumed by the function and ingestion will be retried for transient failures. If a failure persists, the affected log entry is moved to a DLQ after three retries.
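A minimal sketch of temporarily enabling, and later disabling, such an Event Source Mapping with boto3 might look like this; the function name, queue ARN, and batch size are placeholder assumptions.

```python
import boto3

lambda_client = boto3.client("lambda")

# Wire the replay queue to the forwarder so queued payloads are retried.
mapping = lambda_client.create_event_source_mapping(
    FunctionName="elastic-serverless-forwarder",  # placeholder function name
    EventSourceArn="arn:aws:sqs:eu-west-1:123456789012:forwarder-replay-queue",
    Enabled=True,
    BatchSize=10,
)

# Once the backlog is drained, disable the mapping again.
lambda_client.update_event_source_mapping(UUID=mapping["UUID"], Enabled=False)
```

Disabling rather than deleting the mapping keeps its configuration around in case another replay is needed later.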

Every other error that occurs during the execution of the forwarder is silently ignored: it does not fail the function, but it is reported to the APM server if instrumentation is enabled.
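If instrumentation is enabled, that reporting might resemble the following sketch using the elastic-apm Python agent; the service name, server URL, and token are placeholders, and the forwarder's internal error handling may differ.

```python
import elasticapm

apm = elasticapm.Client(
    service_name="elastic-serverless-forwarder",  # placeholder
    server_url="https://apm.example.com:8200",    # placeholder
    secret_token="REDACTED",                      # placeholder
)

def safe_step(step):
    """Run one forwarder step; on failure, report the exception and continue."""
    try:
        step()
    except Exception:
        apm.capture_exception()  # sent to the APM server, execution continues
```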