Memory issues using datasets #2
Comments
Hi, thanks! Do you know roughly how many events you have? Ed
Well, I am not sure of the exact number of events. I did look at the folder where the trace files are temporarily saved and noted that there were about 7 GB worth of files for SQLCover. This is only for a subset of my test suite; there would be a lot more if I ran all the tests. It is a rather complex database with more than 2,500 stored procedures and functions. If I get more specific information, I will add it.
I was able to get a count of the events: 822,500. As I said before, this is for only a subset of my tests (roughly 5%).
I'll bear it in mind for a future version, but there isn't a quick fix for it now.
I saw the same issue and played around with a solution locally that fixes the memory usage problem. Very cool project. I see a bunch of places, though, that could use optimizations that would speed it up and drastically reduce memory usage. I might do a pull request later for the optimizations.
Great! If you do, then one of the things to bear in mind is that we buffer all of the events. If there are lots of events we will use up the memory, so we will need to write to a file (possibly when we hit x events or something).
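The flush-on-threshold idea mentioned here can be sketched as follows. SQLCover itself is C#, so this Python sketch only illustrates the buffering pattern; the `EventBuffer` class, the `flush_threshold` parameter, and the JSON-lines file format are all hypothetical, not part of the project.

```python
import json
import tempfile


class EventBuffer:
    """Hypothetical buffer that holds trace events in memory and
    spills them to a file once a threshold is hit, instead of
    keeping every event resident until the trace ends."""

    def __init__(self, path, flush_threshold=10_000):
        self.path = path
        self.flush_threshold = flush_threshold
        self.buffer = []

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Append each buffered event as one JSON line, then clear
        # the in-memory buffer so memory use stays bounded.
        with open(self.path, "a", encoding="utf-8") as f:
            for event in self.buffer:
                f.write(json.dumps(event) + "\n")
        self.buffer.clear()


if __name__ == "__main__":
    path = tempfile.mktemp(suffix=".jsonl")
    buf = EventBuffer(path, flush_threshold=3)
    for i in range(7):
        buf.add({"object_id": i, "offset": i * 10})
    buf.flush()  # write any remainder
    with open(path) as f:
        print(sum(1 for _ in f))  # 7 events on disk, buffer empty
```

With this shape, peak memory is proportional to the flush threshold rather than to the total number of events, at the cost of re-reading the file when coverage is computed.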
Take a look at the fork I made with an initial proposal to fix the memory problem, which I think will help a lot. I tried not to edit your code too much and to make minimal changes. Basically, I don't load the entire trace into memory; I load one row at a time and only pull out the data that is needed to calculate the coverage. I have a more optimized version locally that further loads in only the records that are relevant, by making sure the objectIds exist in the call that loads the IEnumerable, which you can do earlier instead of later. I went down from 3 GB of memory usage to around 50 MB for the case I am using.
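The row-at-a-time approach with an early objectId filter can be sketched with Python's built-in sqlite3 as a stand-in for the C# SqlDataReader; the `trace_events` table, its columns, and the `covered_object_ids` set are invented for illustration. Iterating the cursor streams one row at a time, and pushing the filter into the query keeps irrelevant rows out of the client entirely.

```python
import sqlite3

# Tiny in-memory stand-in for the trace table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace_events (object_id INTEGER, offset INTEGER)")
conn.executemany(
    "INSERT INTO trace_events VALUES (?, ?)",
    [(obj, off) for obj in (1, 2, 3, 99) for off in (0, 10, 20)],
)

covered_object_ids = {1, 3}  # only the objects we actually instrument

# Filter in the WHERE clause so irrelevant rows are never fetched.
placeholders = ",".join("?" * len(covered_object_ids))
cursor = conn.execute(
    "SELECT object_id, offset FROM trace_events "
    f"WHERE object_id IN ({placeholders})",
    sorted(covered_object_ids),
)

# Stream one row at a time instead of materializing the result set,
# pulling out only the fields needed to compute coverage.
coverage = {}
for object_id, offset in cursor:
    coverage.setdefault(object_id, set()).add(offset)

print({k: sorted(v) for k, v in sorted(coverage.items())})
# {1: [0, 10, 20], 3: [0, 10, 20]}
```

Peak memory here is one row plus the coverage aggregate, regardless of how many rows the trace contains.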
Cool, 3 GB to 50 MB seems much better :) If you submit it as a PR I'll take it. I will change it to stop storing as XML so we don't need to parse it twice, but it is good - thanks! If you would like to add a contributors file to the root of the project and add your details, that would be great. Ed
@GoEddie I think this can be closed now, unless this is still an issue @Matt2702. I'm not seeing any differences from @aboatswain's work: master...aboatswain:fixMemoryUseIssue. I think @aboatswain can add himself as a contributor, since I see his changes in master.
Great tool! I have been testing it with our suite of DBFit tests. Everything works fine when I only run a small set of tests, but when the trace data gets large I see memory issues. I created a console app to test SQLCover in the debugger and see where the issue was. I am getting out-of-memory exceptions when the DataTable is loaded from the SqlDataReader (DatabaseGateway.GetRecords(), called by ReadTrace()). I am not sure what can easily be changed to address this, since DataSets are always loaded in memory.
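The difference between materializing the whole result set (what loading a DataTable from a reader does) and streaming it can be shown with Python's sqlite3 as a stand-in; the `trace` table and its single column are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace (event_id INTEGER)")
conn.executemany("INSERT INTO trace VALUES (?)", [(i,) for i in range(100_000)])

# Eager: the whole result set is materialized at once, analogous to
# filling a DataTable from a SqlDataReader. Memory grows with row count.
all_rows = conn.execute("SELECT event_id FROM trace").fetchall()
print(len(all_rows))  # 100000 rows held in memory simultaneously

# Lazy: iterate the cursor and keep only a running aggregate.
# Memory stays flat no matter how many events the trace holds.
total = 0
for (event_id,) in conn.execute("SELECT event_id FROM trace"):
    total += 1
print(total)  # 100000, but only one row alive at a time
```

Both loops see the same 100,000 rows; only the eager version pays for them all at once, which is the failure mode reported above.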