Enhance XCM Debugging with Log Capture in Unit Tests #7594
base: master
Conversation
Currently, there is no straightforward way to verify whether log messages are correctly output during unit tests. I initially explored existing options, but they didn't fully fit this use case. Given this, I've implemented a custom log capture mechanism within `sp_tracing`.
if this is simple, then sure, it's nice to have, otherwise I don't think we really need to verify this in tests - we have many devs acting as canaries 😆
Yeah, this isn’t meant to replace devs acting as canaries, but rather to make debugging more efficient when needed. The implementation isn’t difficult, and there’s a use case where rollback clears events, making it hard to assert them; capturing and asserting logs could be an easier alternative in such cases.
All GitHub workflows were cancelled due to the failure of one of the required jobs.
Personally I'm not familiar with the topic of capturing logs in tests. I'm surprised we don't do this in any other test yet. But anyway, it seems like a more complex topic, and that part of the code might take longer to merge.
I would suggest moving the XCM test changes without log capturing into a separate PR, which could be reviewed and merged immediately, and keeping the log capturing in its own PR. After the log capturing is merged, we can improve the previously added XCM tests with it.
@serban300 My changes to the XCM tests are meant to illustrate how the log capture mechanism can be used in practice.
Let me know what you think!
This is simple and clear enough, I have no objections
Ok, if the tests are only for showcasing the log capturing feature, we can keep the PR as it is.
Yes, showcasing only.
/cmd fmt
```rust
///
/// For more details, see [`tracing-test`](https://crates.io/crates/tracing-test).
#[cfg(feature = "std")]
pub use tracing_test;
```
I would move this inside the `test_log_capture` mod. Also I wouldn't add examples related to this. There should be examples in the `tracing-test` crate.
Yeah, that makes sense. Maybe we can remove `tracing-test` and implement our own log capture logic, as discussed here.
```rust
/// assert!(test_log_capture::logs_contain("test log message"));
/// ```
#[cfg(feature = "std")]
pub mod test_log_capture {
```
I would guard this under a `test` feature as well.
OK
```rust
);

// Ensure an error occurred
assert!(result.is_err(), "Expected an error due to invalid destination");
```
Nit: Could we move the `assert!(test_log_capture::logs_contain("XCM validate_send failed"));` after this line? I think it would make sense to check that the logged error is the one we expected right here, after we know that we got an error.
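Applied to the snippet above, the suggested ordering would look roughly like this (a sketch; `send_xcm_to_invalid_destination` is a hypothetical placeholder, not the PR's actual test code):

```rust
// Hypothetical test body illustrating the suggested assertion order.
let result = send_xcm_to_invalid_destination();

// First make sure an error occurred at all...
assert!(result.is_err(), "Expected an error due to invalid destination");
// ...then immediately verify that the logged error is the expected one.
assert!(test_log_capture::logs_contain("XCM validate_send failed"));
```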
```rust
pub fn capture_with_max_level<F: FnOnce()>(max_level: impl Into<LevelFilter>, f: F) {
	let log_capture = MockWriter::new(&global_buf());
```
Personally I think it would be a bit more flexible if we avoided using a closure and if we avoided using a global buffer. I would do something like:

```rust
pub fn init_buffer_logger(
	max_level: impl Into<LevelFilter>,
) -> (DefaultGuard, Arc<Mutex<Vec<u8>>>) {
	let buf = Arc::new(Mutex::new(Vec::new()));
	let subscriber = tracing_subscriber::fmt()
		.with_max_level(max_level)
		.with_writer(MockMakeWriter::new(buf.clone()))
		.finish();
	(subscriber::set_default(subscriber), buf)
}
```

(not sure about the naming)

The current `MockWriter` wouldn't work with this approach. We would have to use the `MockWriter` from tracing-subscriber or something similar. But it's really small, and this way we could remove the `tracing-test` dependency as well.
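For reference, the writer itself is tiny. A minimal sketch of what a hand-rolled replacement could look like, assuming `tracing-subscriber` 0.3's `MakeWriter` trait (names are illustrative, not the PR's actual code):

```rust
use std::{
	io,
	sync::{Arc, Mutex},
};
use tracing_subscriber::fmt::MakeWriter;

/// Appends formatted log output to a shared, test-inspectable buffer.
#[derive(Clone)]
struct BufferWriter(Arc<Mutex<Vec<u8>>>);

impl io::Write for BufferWriter {
	fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
		self.0.lock().unwrap().extend_from_slice(buf);
		Ok(buf.len())
	}

	fn flush(&mut self) -> io::Result<()> {
		Ok(())
	}
}

impl<'a> MakeWriter<'a> for BufferWriter {
	type Writer = Self;

	fn make_writer(&'a self) -> Self::Writer {
		self.clone()
	}
}
```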
Yes, this approach won’t work unless we implement our own `Writer`, similar to what we did here: log_capture_test.rs#L13. I'm okay with removing `tracing-test` completely if we go with this approach.
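With the `init_buffer_logger` suggestion above, a test could then read roughly as follows (a sketch assuming the function exists as suggested; the returned guard keeps the subscriber active for the current thread until dropped):

```rust
use tracing::Level;

#[test]
fn captures_expected_log() {
	// Install the buffering subscriber; `_guard` scopes it to this test.
	let (_guard, buf) = init_buffer_logger(Level::INFO);

	tracing::info!("XCM validate_send failed");

	// Inspect the captured output directly; no global buffer involved.
	let logs = String::from_utf8(buf.lock().unwrap().clone()).unwrap();
	assert!(logs.contains("XCM validate_send failed"));
}
```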
```rust
/// test_log_capture::capture_with_max_level(Level::INFO, || {
///     tracing::info!("This will be captured at INFO level");
///     tracing::debug!("This will be captured at DEBUG level");
/// });
///
/// assert!(test_log_capture::logs_contain("INFO level"));
/// assert!(!test_log_capture::logs_contain("DEBUG level"));
```
I played with this a bit, and in my experiments it indeed works when I do `tracing::info!`, but it doesn't work with `log::info!`. I think a lot of the logs are emitted using `log::` methods.
Maybe I'm doing something wrong. But just saying in case it's worth looking into.
I haven’t tested with `log`, but `tracing` seems to be the preferred way to log messages. To keep it simple, we can focus on `tracing` for now.
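For completeness: the standard way to make `log::` macros visible to a `tracing` subscriber is the `tracing-log` bridge crate, which installs a `log::Log` implementation that forwards records as `tracing` events. A sketch, assuming `tracing-log` is added as a dev-dependency:

```rust
// Forward `log` records into `tracing` so the same capture
// subscriber sees both. Must run before any `log::` call.
tracing_log::LogTracer::init().expect("failed to set `log` logger");

log::info!("emitted via log"); // now reaches the tracing subscriber
tracing::info!("emitted via tracing");
```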
```rust
let attempted_count = events
	.iter()
	.filter(|e| {
		matches!(
			e.event,
			relay_chain::RuntimeEvent::XcmPallet(pallet_xcm::Event::Attempted { .. })
		)
	})
	.count();
let sent_count = events
	.iter()
	.filter(|e| {
		matches!(
			e.event,
			relay_chain::RuntimeEvent::XcmPallet(pallet_xcm::Event::Sent { .. })
		)
	})
	.count();
```
If we plan to count matching events like this in other places as well, it would be worth deduplicating this logic.
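One possible shape for that deduplication, sketched as a hypothetical macro (not part of the PR):

```rust
/// Counts events in `events` whose `event` field matches the given pattern.
macro_rules! count_events_matching {
	($events:expr, $pattern:pat) => {
		$events
			.iter()
			.filter(|e| matches!(e.event, $pattern))
			.count()
	};
}

let attempted_count = count_events_matching!(
	events,
	relay_chain::RuntimeEvent::XcmPallet(pallet_xcm::Event::Attempted { .. })
);
let sent_count = count_events_matching!(
	events,
	relay_chain::RuntimeEvent::XcmPallet(pallet_xcm::Event::Sent { .. })
);
```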
OK
Description
This PR introduces a lightweight log-capturing mechanism for XCM unit tests, simplifying debugging by enabling structured log assertions. It partially addresses #6119 and #6125, offering an optional way to verify logs in tests while remaining unobtrusive in normal execution.
Key Changes
- Log capture support added to `sp_tracing`.

Review Notes
- `sp_tracing::init_for_tests()` can be used for log verification in automated tests.
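Based on the doc examples in the diff above, end-to-end usage in a unit test would look roughly like this (a sketch; the exact module paths and re-exports may differ in the merged code):

```rust
use sp_tracing::test_log_capture;
use tracing::Level;

#[test]
fn xcm_send_failure_is_logged() {
	// Capture everything at INFO and above while the closure runs.
	test_log_capture::capture_with_max_level(Level::INFO, || {
		tracing::info!("XCM validate_send failed");
		tracing::debug!("not captured at this level");
	});

	// Assert on the captured output after the closure returns.
	assert!(test_log_capture::logs_contain("XCM validate_send failed"));
	assert!(!test_log_capture::logs_contain("not captured"));
}
```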