I'm not here to rant or vent, just to say that I'm fairly frustrated with the reliability of vulnz as a mirror. It runs OOM with 7G of heap, demanding more each time the NVD database grows (#253)
This is not surprising, considering that we hold all of the fetched data in memory for the entire duration of the fetch. That makes the problem far more serious, since fetching takes hours, not seconds. For me, fetching takes more than an hour (I cannot even tell how long, since it no longer finishes), and during that entire time we fetch JSON data and stuff it into memory. A single failure and we repeat from scratch.
It is thus no surprise that the app becomes ever more memory hungry as the NVD database grows.
@jeremylong I know that I already asked, but could you explain in more detail why holding the entire data set in memory is so important? Please elaborate on the transformations or aggregations needed (grouping by date?), and if so, why we could not simply put every result in its own file and perform the aggregation as the last step. This way we would have:
* partial results to continue from if something fails
* far lower memory usage
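A minimal sketch of the per-file approach proposed above (class, file names, and layout are all hypothetical, not the actual vulnz code): each fetched page is written to disk immediately and the aggregation is done as a final streaming step, so peak memory stays at one page.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch only: names and file layout are hypothetical, not the actual vulnz code.
public class PerFileCache {

    static String run() throws IOException {
        Path cacheDir = Files.createTempDirectory("nvd-cache");

        // Simulated "fetch": each page is persisted immediately, so it can be
        // garbage-collected before the next page is requested.
        for (int page = 0; page < 3; page++) {
            String json = "{\"page\":" + page + "}"; // stand-in for one NVD API response
            Files.writeString(cacheDir.resolve("page-" + page + ".json"), json);
        }

        // Aggregation happens once, as the last step: the partial files are
        // streamed into a single output, so only one page is in memory at a time.
        Path merged = cacheDir.resolve("merged.json");
        List<Path> parts;
        try (var stream = Files.list(cacheDir)) {
            parts = stream.filter(p -> p.getFileName().toString().startsWith("page-"))
                          .sorted()
                          .toList();
        }
        try (var out = Files.newBufferedWriter(merged)) {
            out.write("[");
            for (int i = 0; i < parts.size(); i++) {
                out.write(Files.readString(parts.get(i)));
                if (i < parts.size() - 1) out.write(",");
            }
            out.write("]");
        }
        return Files.readString(merged);
    }

    public static void main(String[] args) throws IOException {
        // The partial files would survive a crash, giving a point to resume from.
        System.out.println(run());
    }
}
```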
Thanks for considering this and sharing details.
* Reimplement caching to use a per-year scope
* Add forgiveness if a year fails, continue to the next one
* Reimplement how the lastUpdated date is used and stored per year
* Add lockfile
* fix: preserve modified entries if year fails
* polish docs, add exit code
---------
Co-authored-by: Jeremy Long <[email protected]>
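The per-year strategy listed in the changelog above could be sketched roughly as follows (all names are hypothetical and `fetchYear` is a stand-in, not the real vulnz fetcher): each year is processed independently, a failure in one year is recorded rather than aborting the run, `lastUpdated` is tracked per year, and a non-zero exit code reports the partial failure.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the per-year strategy; not the actual vulnz implementation.
public class PerYearMirror {

    static final Map<Integer, Instant> lastUpdated = new HashMap<>();
    static final List<Integer> failedYears = new ArrayList<>();

    // Simulated per-year NVD fetch; one year fails to show the "forgiveness" behaviour.
    static void fetchYear(int year) {
        if (year == 2004) throw new IllegalStateException("simulated API error");
    }

    static int run() {
        for (int year = 2002; year <= 2006; year++) {
            try {
                fetchYear(year);
                lastUpdated.put(year, Instant.now()); // track freshness per year, not globally
            } catch (RuntimeException e) {
                failedYears.add(year); // remember the failure, but continue with the next year
            }
        }
        System.out.println("failed years: " + failedYears);
        return failedYears.isEmpty() ? 0 : 2; // non-zero exit code signals a partial run
    }

    public static void main(String[] args) {
        System.exit(run());
    }
}
```

Only the failed or stale years need to be retried on the next run, which also gives the "partial results to continue from" behaviour the issue asked for.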