Helper to identify sections of tests #54
base: v1
Conversation
Useful for scenarios such as:

```go
scenarios := []struct{ ... }{
	scenario1,
	scenario2,
}
for _, s := range scenarios {
	c.Section(s.name)
	...
}
```
I don't understand the idea. Why doesn't it simply:
?
Because that will log even if the test passes, making the output harder to decipher.
If that were true your own implementation wouldn't work either, right? You are simply calling c.log in your proposed change.
But it's called within
Maybe there's a better place to add this logging; you know the framework better, so I'm totally open to suggestions. :)
Can you please try my suggestion before arguing it doesn't work? :-)
I did, hence my PR. Using
c.Logf only logs if the test fails. |
And yes, you will have the history of prior scenarios, but anything is only ever shown if the test fails. If your test outputs "Scenario: foo" last before an assertion fails, that will be the one failing. If you want completely independent logs and test failure/success semantics, I suggest separating the tests. |
"If you want completely independent logs and test failure/success semantics, I suggest separating the tests." Having multiple examples in one test is very standard, and separating them often leads to repeated code without the benefit of increased readability. This patch provides a nice and simple add-on to support it. Yay or nay?
Yes, you have a point. The way it is done doesn't feel quite right, but let's find a better way. Let me get back to you on this later today.
Ping ;) Have you thought about a better way to support this?
Yes, sorry for the lack of feedback. I just haven't had a chance to cook a good-looking API yet, but we should have a way to add some sort of label while iterating over such tables, which is only visible next to an actual failure. Not next to the backtrace, though, but next to the error report itself.