Allow for the configuration of the size of the fetch queue and workers #120
Conversation
The defaults should be fine in 99% of cases, but some may want to tune them down on more powerful machines.
Reviewer's Guide by Sourcery

This PR implements configuration options for controlling feed fetching parallelism by introducing two new settings: JobQueueDepth and Parallelism. The implementation adds these fields to the Config struct, sets sensible defaults based on CPU cores, and modifies the manifest priming function to use these configurable values instead of hardcoded ones.

Sequence diagram for manifest priming with configurable parallelism

sequenceDiagram
participant Main
participant Manifest
participant Worker
Main->>Manifest: Prime(config.Cache, config.Timeout.Duration, config.Parallelism, config.JobQueueDepth)
loop parallelism times
Manifest->>Worker: Start worker
Worker-->>Manifest: Fetch job
end
Manifest->>Main: Complete priming
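The flow above can be sketched as a bounded worker pool in Go. The names `primeFeeds`, `parallelism`, and `jobQueueDepth` echo the PR's terminology, but the body here is an illustrative assumption, not the project's actual `Prime` implementation:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// primeFeeds is a hypothetical sketch of the priming loop: a buffered
// channel acts as the job queue, and `parallelism` workers drain it.
// It returns the number of feeds processed.
func primeFeeds(feeds []string, parallelism, jobQueueDepth int) int {
	jobs := make(chan string, jobQueueDepth)
	var done atomic.Int64
	var wg sync.WaitGroup

	for i := 0; i < parallelism; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				// A real implementation would fetch and cache the feed here.
				done.Add(1)
			}
		}()
	}

	for _, f := range feeds {
		jobs <- f
	}
	close(jobs)
	wg.Wait()
	return int(done.Load())
}

func main() {
	fmt.Println(primeFeeds([]string{"a", "b", "c"}, 2, 4))
}
```

The buffered channel gives backpressure: once `jobQueueDepth` jobs are queued, the producer blocks until a worker frees a slot.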
Class diagram for updated Config struct

classDiagram
class Config {
string Name
string URL
string Owner
string Email
string FeedID
string Cache
utils.Duration Timeout
string themePath
fs.FS Theme
string Output
manifest.Feed[] Feeds
int ItemsPerPage
int MaxPages
int JobQueueDepth
int Parallelism
}
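In Go form, the struct above might look as follows. The field names come from the class diagram, while the placeholder `Duration` and `Feed` types stand in for `utils.Duration` and `manifest.Feed`, whose definitions are not shown in this thread:

```go
package main

import (
	"fmt"
	"io/fs"
)

// Duration is a stand-in for utils.Duration from the PR.
type Duration struct{}

// Feed is a stand-in for manifest.Feed.
type Feed struct{}

// Config mirrors the class diagram; field names are from the PR, but
// ordering and any struct tags are assumptions.
type Config struct {
	Name          string
	URL           string
	Owner         string
	Email         string
	FeedID        string
	Cache         string
	Timeout       Duration
	themePath     string
	Theme         fs.FS
	Output        string
	Feeds         []Feed
	ItemsPerPage  int
	MaxPages      int
	JobQueueDepth int
	Parallelism   int
}

func main() {
	c := Config{JobQueueDepth: 8, Parallelism: 4}
	fmt.Println(c.JobQueueDepth, c.Parallelism)
}
```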
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hey @kgaughan - I've reviewed your changes - here's some feedback:
Overall Comments:
- Please add documentation for the new `job_queue_depth` and `parallelism` TOML configuration options, explaining their purpose and typical usage scenarios.
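For illustration, the requested documentation might include a snippet like this; the option names follow the review comment, but the example values and their placement in the config file are assumptions:

```toml
# Both optional; the defaults scale with the number of CPU cores.
parallelism = 4       # number of concurrent fetch workers
job_queue_depth = 8   # buffered fetch jobs awaiting a worker
```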
Here's what I looked at during the review
- 🟡 General issues: 1 issue found
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
internal/config.go (Outdated)

@@ -37,6 +40,9 @@ func (c *Config) Load(path string) error {
	c.Output = "./output"
	c.ItemsPerPage = 10
	c.MaxPages = 5
	// These are both somewhat arbitrary
	c.JobQueueDepth = 2 * runtime.NumCPU()
suggestion (performance): Consider adding upper bounds to JobQueueDepth and Parallelism to prevent resource exhaustion on many-CPU systems
While lower bounds are enforced, unbounded values on many-CPU systems could lead to excessive resource usage. Consider adding reasonable upper limits like math.Min(runtime.NumCPU(), 32) for Parallelism.
Suggested implementation:

	"math"

	"github.com/BurntSushi/toml"

	// Cap values to prevent resource exhaustion on many-CPU systems
	maxCPUs := 32 // Maximum reasonable number of CPUs to utilize
	c.JobQueueDepth = int(math.Min(float64(2*runtime.NumCPU()), float64(2*maxCPUs)))
	c.Parallelism = int(math.Min(float64(runtime.NumCPU()), float64(maxCPUs)))
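Since `math.Min` operates on `float64`, the same cap can be expressed without the float round-trips using a small integer helper. This is a sketch under the assumption that the defaults are set in `Config.Load`, with `cpuLimit` as the arbitrary ceiling named later in the review:

```go
package main

import (
	"fmt"
	"runtime"
)

// cpuLimit is an arbitrary ceiling to keep defaults sane on many-core hosts.
const cpuLimit = 32

// min avoids converting to float64 just to compare two ints.
func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	// Parallelism tracks core count up to the cap; the queue holds
	// twice as many jobs so workers rarely starve.
	parallelism := min(runtime.NumCPU(), cpuLimit)
	jobQueueDepth := 2 * parallelism
	fmt.Println(parallelism, jobQueueDepth)
}
```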
@sourcery-ai review
Hey @kgaughan - I've reviewed your changes - here's some feedback:
Overall Comments:
- Consider adding a comment explaining the reasoning behind the cpuLimit=32 constant. Is this based on empirical testing or specific hardware considerations?
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Summary by Sourcery
Add configuration options for job queue depth and parallelism to allow tuning of fetch operations, with a cap on CPU usage to enhance performance.