
[BUG] - network timeout on large payload ask() #108

Open
moofone opened this issue Jan 5, 2025 · 0 comments

moofone commented Jan 5, 2025

When we extend the max response payload limit to allow larger data, we start to run into network timeouts. I believe the problem is here, in the libp2p request-response handler:

worker_streams: futures_bounded::FuturesMap::new(
    substream_timeout,
    max_concurrent_streams,
)

I tested my payload, which normally triggers this `OutboundFailure::Timeout`, with a hard-coded timeout of 300s, and that gives it sufficient time to return the large payload.
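To illustrate why a fixed substream timeout breaks as payloads grow, here is a minimal back-of-the-envelope sketch. The payload size, bandwidth, and 10s default are made-up illustrative numbers, not measurements from this issue; the point is only that transfer time grows linearly with payload size, so any fixed timeout is eventually exceeded:

```rust
use std::time::Duration;

/// Time to move `payload_bytes` over a link with the given throughput.
fn transfer_time(payload_bytes: u64, bandwidth_bytes_per_sec: u64) -> Duration {
    Duration::from_secs_f64(payload_bytes as f64 / bandwidth_bytes_per_sec as f64)
}

fn main() {
    let assumed_default_timeout = Duration::from_secs(10); // hypothetical default
    let raised_timeout = Duration::from_secs(300); // value that worked in my test

    // Example: a 1 GiB payload over a 5 MiB/s link takes ~205s,
    // well past a small fixed timeout but inside the raised one.
    let t = transfer_time(1u64 << 30, 5 * (1u64 << 20));
    assert!(t > assumed_default_timeout);
    assert!(t < raised_timeout);
    println!("transfer takes ~{}s", t.as_secs());
}
```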

I'm not sure whether `substream_timeout` is configurable in libp2p, but we'll also need to make this limit configurable (or raise it) in kameo to solve #106.
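For reference, a hedged sketch of what raising the limit might look like on the libp2p side, assuming a recent rust-libp2p where the request-response behaviour's `Config` exposes `with_request_timeout` (method names vary between versions, and kameo would still need to surface this setting to callers):

```rust
use std::time::Duration;
use libp2p::request_response;

// Sketch only: construct the request-response Config with a longer
// request timeout, so large payloads have time to stream back before
// the behaviour reports OutboundFailure::Timeout.
let cfg = request_response::Config::default()
    .with_request_timeout(Duration::from_secs(300));
```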

@moofone moofone added the bug Something isn't working label Jan 5, 2025