Questions about Flacoco #107
Hi @danglotb ,
Nice!
If I understood correctly, what you are doing is essentially computing the code coverage of the tests that you implicitly fail. Correct?
You can use Flacoco to compute code coverage in this way; however, it might make more sense to just use a code coverage tool directly.
If you don't use the passing test cases, all executed lines will have a suspiciousness value of 1. If you include the passing test cases, the suspiciousness values will take them into account: lines predominantly executed by failing test cases will have higher values than others. Please note that we currently have a limitation (#57), which means we can only select test classes.
What do you mean by "fault localization accuracy"? An implicitly failed test case is counted as a failing test case like any other.
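To illustrate the point about suspiciousness values, here is a small standalone sketch of the Ochiai formula, a common spectrum-based fault localization metric (this is my own illustration, not Flacoco's code): with no passing tests, a line covered by the single failing test gets the maximal score, so ranking carries no information; adding passing tests differentiates the lines.

```java
// Standalone sketch of the Ochiai suspiciousness formula (a common
// spectrum-based fault localization metric); illustration only.
public class OchiaiSketch {
    // ef: failing tests covering the line, ep: passing tests covering it,
    // totalFailed: total number of failing tests in the suite.
    static double ochiai(int ef, int ep, int totalFailed) {
        if (ef == 0) return 0.0;
        return ef / Math.sqrt((double) totalFailed * (ef + ep));
    }

    public static void main(String[] args) {
        // With no passing tests (ep = 0) and one failing test, every
        // covered line gets the same maximal score:
        System.out.println(ochiai(1, 0, 1)); // 1.0
        // Passing tests that also cover the line lower its score:
        System.out.println(ochiai(1, 3, 1)); // 0.5
    }
}
```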
Hi @andre15silva, thank you for your answers.

Not really; what I want is to compute, for each modified line, a score corresponding to its suspiciousness with respect to the energy regression. This scenario can be seen as classical regression fault localization, where you want to find out which lines seem to be responsible for a bug detected by a failing test case.

What I want to know is whether the error that makes the test fail has an impact on the result of Flacoco.

Best
Ah okay. In that case you want to pass the entire test suite. The more tests you pass, the more accurate the results should be (depending on test quality, coverage, etc.).
I see. You shouldn't worry about that. Since the stack trace won't include any lines in the code that is being analyzed, no lines from it will be added to the result. We only add lines from the stack trace that correspond to lines in classes present in the JaCoCo coverage report.
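The filtering described above can be sketched as follows (my own illustration with hypothetical names, not the actual Flacoco implementation): keep only the stack-trace frames whose declaring class appears among the classes in the coverage report.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustration only: keep stack-trace frames whose declaring class is
// present in the coverage report (class names here are hypothetical).
public class StackTraceFilterSketch {
    static List<StackTraceElement> keepCovered(List<StackTraceElement> frames,
                                               Set<String> coveredClasses) {
        return frames.stream()
                .filter(f -> coveredClasses.contains(f.getClassName()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<StackTraceElement> frames = List.of(
                new StackTraceElement("com.example.Analyzed", "run", "Analyzed.java", 12),
                new StackTraceElement("org.junit.Assert", "fail", "Assert.java", 86));
        // Only frames from classes in the coverage report survive:
        System.out.println(keepCovered(frames, Set.of("com.example.Analyzed")));
    }
}
```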
Ok, thank you very much! This is much clearer. You can close this issue, since you answered all my questions, for now! 😃
Hello,
I'm using Flacoco in the context of detecting energy regressions, based on the energy consumption of the tests.
The idea is the following:
- First, we have a program in two versions, `P` and `P'`, where `P` is the program before applying a commit, and `P'` is the program after applying the commit.
- I take the tests that execute the modified lines and compute their respective Software Energy Consumption (`SEC`) on both versions of the program.
- Then, for each test `t` that has `SEC(t, P') - SEC(t, P) > 0`, meaning that the commit increases the `SEC` of the test `t`, I make it fail by inserting a `junit.framework.Assert.fail()` at the end of its body.
- Then, I give Flacoco only the tests that are now failing. I take the ranked list of lines and filter it, keeping only the modified lines.

In your opinion, is this a proper way to use Flacoco? Should I also use the "passing" tests in Flacoco? Is the `junit.framework.Assert.fail()` significant regarding the fault localization accuracy of Flacoco?

Thank you very much!
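The fail-injection step described above could be done at the source level along these lines (a minimal sketch of my own, assuming the last `}` in the method source closes the test body):

```java
// Sketch: append a junit.framework.Assert.fail() call just before the
// closing brace of a test method's source. Illustration only; assumes
// the last '}' in the snippet closes the method body.
public class FailInjectorSketch {
    static String injectFail(String testMethodSource) {
        int close = testMethodSource.lastIndexOf('}');
        if (close < 0) {
            throw new IllegalArgumentException("no method body found");
        }
        return testMethodSource.substring(0, close)
                + "    junit.framework.Assert.fail();\n"
                + testMethodSource.substring(close);
    }

    public static void main(String[] args) {
        String test = "@Test\npublic void testFoo() {\n    runWorkload();\n}";
        System.out.println(injectFail(test));
    }
}
```

A source-level rewrite like this (e.g. via a tool such as Spoon) keeps the original test body executing first, so the coverage Flacoco observes is unchanged; only the final verdict flips to failing.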