Do not retry on 413 response codes #1199

Open
wants to merge 3 commits into main
Conversation

Contributor

@donoghuc donoghuc commented Jan 7, 2025

Previously, when Elasticsearch responded with a 413 (Payload Too Large) status,
the manticore adapter raised an error before the response could be processed
by the bulk_send error handling. This commit updates the adapter to pass
through the response code without raising an exception so that it can be
properly handled. NOTE: this only applies to code 413, even though there are
arguably other error codes that should not be retried. The scope of this work is
limited to 413, where we have explicit handling.
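The intent of the change can be sketched roughly as follows. This is an illustrative Ruby sketch only; `DO_NOT_RAISE_CODES`, `BadResponseCodeError`, and `handle_response` are assumed names for this example, not the plugin's actual code:

```ruby
# Status codes the adapter should pass through instead of raising on,
# so that downstream bulk_send error handling can inspect them.
DO_NOT_RAISE_CODES = [413].freeze

# Stand-in for the adapter's bad-response error (hypothetical class).
class BadResponseCodeError < StandardError
  attr_reader :response_code

  def initialize(code)
    @response_code = code
    super("Got response code '#{code}'")
  end
end

# Returns the response for success codes and for 413; raises otherwise.
def handle_response(code, body)
  if (200..299).cover?(code) || DO_NOT_RAISE_CODES.include?(code)
    { code: code, body: body }
  else
    raise BadResponseCodeError, code
  end
end
```

With this shape, a 413 reaches the bulk response handler as a normal response, while other error codes still raise as before.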

@donoghuc
Contributor Author

donoghuc commented Jan 8, 2025

Still manually testing this. I'm a bit confused by the results so far.

I configured ES to have a very low max message size and sent it a payload that exceeds that.

Without the change it appears the pipeline just fails:

[2025-01-07T15:18:51,117][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/Users/cas/elastic-repos/logstash/config.conf"], :thread=>"#<Thread:0x2e7b23fe /Users/cas/elastic-repos/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-01-07T15:18:51,319][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.2}
[2025-01-07T15:18:51,321][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2025-01-07T15:18:51,327][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2025-01-07T15:18:51,460][ERROR][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>264, :body=>""}
[2025-01-07T15:18:53,472][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker4
[2025-01-07T15:18:54,125][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2025-01-07T15:18:54,359][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2025-01-07T15:18:54,362][INFO ][logstash.runner          ] Logstash shut down.

With this change it appears to attempt the retries:

[2025-01-07T15:55:54,573][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/Users/cas/elastic-repos/logstash/config.conf"], :thread=>"#<Thread:0x36095668 /Users/cas/elastic-repos/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-01-07T15:55:54,784][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.21}
[2025-01-07T15:55:54,786][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2025-01-07T15:55:54,917][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:55:54,918][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:55:54,918][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T15:55:56,940][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:55:56,941][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:55:56,942][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T15:56:00,971][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:56:00,972][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:56:00,973][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T15:56:08,996][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:56:08,997][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:56:08,998][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T15:56:25,024][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:56:25,025][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:56:25,027][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T15:56:57,053][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:56:57,055][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:56:57,057][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
^C[2025-01-07T15:57:41,201][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2025-01-07T15:57:46,209][WARN ][logstash.runner          ] Received shutdown signal, but pipeline is still waiting for in-flight events
to be processed. Sending another ^C will force quit Logstash, but this may cause
data loss.
[2025-01-07T15:58:01,086][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:58:01,087][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:58:01,087][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T15:59:05,109][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T15:59:05,111][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T15:59:05,112][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T16:00:09,135][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T16:00:09,136][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T16:00:09,137][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2025-01-07T16:01:13,180][WARN ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Bulk request rejected: `413 Payload Too Large` {:action_count=>1, :content_length=>263}
[2025-01-07T16:01:13,181][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying failed action {:status=>413, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"event"=>{"original"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "sequence"=>0}, 
"message"=>"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "host"=>{"name"=>"cass-MacBook-Pro.local"}, "@timestamp"=>2025-01-07T23:55:54.792096Z, "@version"=>"1", "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"payload_too_large"}}
[2025-01-07T16:01:13,181][INFO ][logstash.outputs.elasticsearch][main][5db232a341251891960bff0f7ab91b6fc87b7c6c07d4f8e6065d04f3e6710a48] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}

@donoghuc
Contributor Author

donoghuc commented Jan 8, 2025

@edmocosta

it seems there’s a clear intention to handle 413 status here, which is not working properly due to the manticore's adapter logic. This adapter raises an error, making the http_client condition unreachable, leading to infinite retries and no options to DLQ those problematic events (dlq_custom_codes).

It seems like preventing the adapter from raising an error results in exponential backoff, which from your description is not desirable. Do we want to update the retry logic to not attempt to retry 413 errors? I'm not sure what expectations there are around the DLQ (now there is at least an event that will have error details).

Essentially my question is do we want to DLQ 413 errors in

if DOC_SUCCESS_CODES.include?(status)
  @document_level_metrics.increment(:successes)
  next
elsif DOC_CONFLICT_CODE == status
  @document_level_metrics.increment(:non_retryable_failures)
  @logger.warn "Failed action", status: status, action: action, response: response if log_failure_type?(error)
  next
elsif @dlq_codes.include?(status)
  handle_dlq_response("Could not index event to Elasticsearch.", action, status, response)
  @document_level_metrics.increment(:dlq_routed)
  next
else
  # only log what the user whitelisted
  @document_level_metrics.increment(:retryable_failures)
  @logger.info "Retrying failed action", status: status, action: action, error: error if log_failure_type?(error)
  actions_to_retry << action
even though the existing pattern appears to be at the doc level not the request level?
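For reference, the doc-level DLQ routing quoted above is user-extensible via the plugin's dlq_custom_codes option (mentioned earlier in this thread). Assuming a 413 becomes reachable by the doc-level handler rather than raising in the adapter, a user could opt those events into the DLQ with something like:

```
output {
  elasticsearch {
    hosts            => ["localhost:9200"]
    dlq_custom_codes => [413]
  }
}
```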

@yaauie
Contributor

yaauie commented Jan 8, 2025

AFAIK, we only route events to the DLQ in two situations:

  • before pushing to ES when the action cannot be composed
  • after a successful HTTP request that identified the event as a terminal failure

When we fail to make a successful HTTP request, we are supposed to retry indefinitely (but in your unpatched test, since the pipeline is being shut down due to all inputs closing it bails on the retries). By not raising the 413, the request is considered a "success" and doc-level handling gets invoked (we expect a valid bulk-API response with one entry per sent document).

BUT: what happens when we have a batch of two events, one of which exceeds the limit? Do we get an HTTP 413, or do we get an HTTP 2XX with doc-level 413's? I've seen situations where ES batch-of-one responses have the HTTP status of the doc-level response.
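The retry-indefinitely behaviour described above is typically driven by a capped exponential backoff between attempts. A minimal sketch of that schedule (illustrative names and defaults, not the plugin's actual API):

```ruby
# Illustrative sketch: request-level failures are retried forever, with
# the delay between attempts doubling up to a ceiling.
def backoff_intervals(initial: 2, max: 64)
  Enumerator.new do |y|
    delay = initial
    loop do
      y << delay
      delay = [delay * 2, max].min
    end
  end
end

# First few delays (seconds) a retry loop would sleep between attempts:
# backoff_intervals.first(6) #=> [2, 4, 8, 16, 32, 64]
```

Because the sequence never terminates, a single unprocessable 413 event keeps a worker retrying indefinitely, which is the behaviour this PR is trying to avoid.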

@donoghuc
Contributor Author

donoghuc commented Jan 8, 2025

From my testing it certainly looks like message size within a batch is not handled differently. If the batch is too large, it is rejected wholesale with a 413, even if the batch contains events that would individually have been under the size limit.

From this comment

# This is a constant instead of a config option because
# there really isn't a good reason to configure it.
#
# The criteria used are:
# 1. We need a number that's less than 100MiB because ES
# won't accept bulks larger than that.
# 2. It must be large enough to amortize the connection constant
# across multiple requests.
# 3. It must be small enough that even if multiple threads hit this size
# we won't use a lot of heap.
#
# We wound up agreeing that a number greater than 10 MiB and less than 100MiB
# made sense. We picked one on the lowish side to not use too much heap.
TARGET_BULK_BYTES = 20 * 1024 * 1024 # 20MiB
it seems like we generally expect ES to accept requests 20MiB in size (though I do see #941). So in the wild I suppose this would happen when messages that are explicitly over 20MiB are sent (the only way to exceed the batch size).

Assuming we are sticking with the 20MiB batch size (and that it is reasonable for all ES deployments to accept this size), the only time you would be dealing with 413 errors is when you are generating a single message that is too big. I don't think we would want to keep retrying that large message, but putting it in the DLQ would probably be useful for users to find the problematic messages.
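The batching behaviour referenced above (filling each request up to TARGET_BULK_BYTES, with an oversized single event still sent on its own) can be sketched roughly like this; names and structure are illustrative, not the plugin's actual implementation:

```ruby
TARGET_BULK_BYTES = 20 * 1024 * 1024 # 20MiB

# Illustrative sketch: group serialized bulk actions so each request body
# stays under the byte limit. A single action larger than the limit still
# forms a batch of its own -- which is why, in practice, only an
# individual oversized event can trigger a request-level 413.
def chunk_actions(serialized_actions, limit = TARGET_BULK_BYTES)
  batches = [[]]
  batch_bytes = 0
  serialized_actions.each do |action|
    if batch_bytes + action.bytesize > limit && !batches.last.empty?
      batches << []
      batch_bytes = 0
    end
    batches.last << action
    batch_bytes += action.bytesize
  end
  batches.reject(&:empty?)
end
```

Under this scheme a 10-byte event stream never produces a request near the limit, but one 30MiB event yields a batch of one that ES will reject outright.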

@jsvd
Member

jsvd commented Jan 9, 2025

We definitely have some issues with the BadResponseCodeError abstraction..
A perform_request for a bulk request or for template installation carries a different notion of "bad response". And we also have different layers making decisions on what's a BadResponseCodeError, like the manticore_adapter layer and the http_client layer above it.
And finally we have the bottom layer making decisions on what's a BadResponseCodeError for specific paths in the upper layers but ignoring others, namely carving an exception for the 404 because template installation expects it but not carving it for 413.

A suggestion is to move BadResponseCodeError a layer up so that the perform_request no longer raises BadResponseCodeError. This leaves the caller to make the decision on what to consider a bad response code error depending on the type of request:

  1. code performing template installation won't consider 404 a bad response code
  2. code performing a bulk request won't consider 413 a bad response code.

This way we can return the previous behaviour of allowing the http client to make the choice of reinterpreting the bulk level 413 as a document level error since we know there's only a single document in the bulk. Not sure if this change carries other consequences..
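The suggested refactor could look something like the following sketch, where perform_request returns the raw response and each caller declares which codes it tolerates (all names here are hypothetical, not the plugin's current classes):

```ruby
# Hypothetical sketch of moving bad-response decisions up a layer:
# the transport returns responses as-is, and each call site raises
# only for the codes it does not expect.
class BadResponseCodeError < StandardError
  attr_reader :response_code
  def initialize(code)
    @response_code = code
    super("Bad response code: #{code}")
  end
end

Response = Struct.new(:code, :body)

# Return the response if it is OK or explicitly tolerated; raise otherwise.
def raise_unless_tolerated!(response, tolerated_codes)
  return response if response.code < 400 || tolerated_codes.include?(response.code)
  raise BadResponseCodeError, response.code
end

# Template installation tolerates 404 (a missing template is expected):
#   raise_unless_tolerated!(resp, [404])
# Bulk requests tolerate 413 so doc-level handling can DLQ the events:
#   raise_unless_tolerated!(resp, [413])
```

This keeps the adapter dumb about semantics and lets the bulk path reinterpret a request-level 413 as a document-level error when the bulk contained a single document.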

@donoghuc
Contributor Author

donoghuc commented Jan 9, 2025

Been experimenting with refactoring the BadResponseCodeError handling. I think the surface area in the code itself won't be too large; the majority of the changes will be in ensuring the test suite mocking works with the refactor.

As I'm working on getting the tests together I'll push a commit showing what I'm thinking for the refactor: 8427d29. Take a look and let me know if that is way off base before I go too far down the test refactor :)

Instead of trying to handle this at the manticore adapter layer, let callers
decide how to handle response codes and raise or generate errors. This will allow us
to DLQ 413 errors without retrying, and handle 404 for template API interaction
without having to rely on catching generic errors.
@donoghuc
Contributor Author

Current State

The current change (82872d0) has:

  1. Refactor callers to be responsible for "BadResponseCode" error generation instead of the manticore adapter
  2. Added 413 to the DLQ codes, which results in not attempting exponential backoff.

Open questions

Is it desirable to not retry 413 and dump them to DLQ if possible? Is the refactor moving the BadResponseCode generation out of the adapter worthwhile?

With a config like:

input {
  generator {
    message => "aaaaaaaaaaaaaaaaaaaa"
    count => 100
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

And ES configured to have a very low message size (1kb)

➜  logstash git:(ae8ad28aaa) ✗ cat docker-compose.yml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.16.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    volumes:
      - ./elastic-config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
➜  logstash git:(ae8ad28aaa) ✗ cat ./elastic-config/elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.max_content_length: 1kb

We see

➜  logstash git:(ae8ad28aaa) ✗ bin/logstash -f config.conf
Using system java: /Users/cas/.jenv/shims/java
Sending Logstash logs to /Users/cas/elastic-repos/logstash/logs which is now configured via log4j2.properties
[2025-01-10T14:11:50,690][INFO ][logstash.runner          ] Log4j configuration path used is: /Users/cas/elastic-repos/logstash/config/log4j2.properties
[2025-01-10T14:11:50,692][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2025-01-10T14:11:50,692][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"9.0.0", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5 on 21.0.5 +indy +jit [arm64-darwin]"}
[2025-01-10T14:11:50,693][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2025-01-10T14:11:50,707][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000` (logstash default)
[2025-01-10T14:11:50,707][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000` (logstash default)
[2025-01-10T14:11:50,707][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-nesting-depth` configured to `1000` (logstash default)
[2025-01-10T14:11:50,718][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because command line options are specified
[2025-01-10T14:11:50,911][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2025-01-10T14:11:51,001][INFO ][org.reflections.Reflections] Reflections took 38 ms to scan 1 urls, producing 149 keys and 524 values
[2025-01-10T14:11:51,046][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2025-01-10T14:11:51,050][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2025-01-10T14:11:51,094][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2025-01-10T14:11:51,132][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2025-01-10T14:11:51,132][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.16.0) {:es_version=>8}
[2025-01-10T14:11:51,132][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2025-01-10T14:11:51,139][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[2025-01-10T14:11:51,145][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/Users/cas/elastic-repos/logstash/config.conf"], :thread=>"#<Thread:0x6eca5aa8 /Users/cas/elastic-repos/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-01-10T14:11:51,349][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.2}
[2025-01-10T14:11:51,351][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2025-01-10T14:11:51,359][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>9, :content_length=>332}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>324}
[2025-01-10T14:11:51,512][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>9, :content_length=>332}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>324}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>326}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>9, :content_length=>333}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>327}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>324}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>323}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>326}
[2025-01-10T14:11:51,513][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>8, :content_length=>323}
[2025-01-10T14:11:51,514][WARN ][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Bulk request rejected: `413 Payload Too Large` {:action_count=>9, :content_length=>333}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>21, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>21, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:51,520][ERROR][logstash.outputs.elasticsearch][main][e38bd1715df019f58ed109e91adbecda268c362b7e0a8eec6c816704463afbf0] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://localhost:9200/_bulk?filter_path=errors,items.*.error,items.*.status", :content_length=>22, :body=>""}
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker11
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker6
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker1
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker0
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker2
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker4
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker9
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker5
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker3
[2025-01-10T14:11:53,542][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker10
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker8
[2025-01-10T14:11:53,541][INFO ][org.logstash.execution.WorkerLoop][main] Received signal to abort processing current batch. Terminating pipeline worker [main]>worker7
[2025-01-10T14:11:54,157][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2025-01-10T14:11:54,397][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2025-01-10T14:11:54,406][INFO ][logstash.runner          ] Logstash shut down.

I would probably need to add some conditional logging to ensure that we are not claiming to retry when we are in fact not going to do that. Before I continue, I want to make sure this approach is desirable.
