It seems that for regular socket send/recv operations, the current implementation still submits a poll op and a send/recv op as a pair, which is inefficient. The perf report below shows that io_poll_add() introduces noticeable overhead:
```
51.79% 0.31% IOUringEventLoo [kernel.kallsyms]
51.48% io_issue_sqe
32.98% io_write
12.26% io_read
4.35% io_poll_add
4.31% __io_arm_poll_handler
3.72% sock_poll
3.40% tcp_poll
2.54% _raw_spin_unlock_irqrestore
1.79% io_assign_file
```
I wonder whether we can use IORING_FEAT_FAST_POLL, or IORING_OP_POLL_ADD's IORING_POLL_ADD_MULTI flag, which would eliminate most io_poll_add calls.
IIRC the reason was to avoid committing memory for future, as-yet-unknown reads: this is beneficial on the slow path, but still necessary for a common use case, i.e. idle connection(s).
Any proposal on how to deal with this differently?