
Bids 3236/ch for summary #657

Merged
merged 14 commits into from
Aug 2, 2024

Conversation

Eisei24 (Contributor) commented Aug 1, 2024:

No description provided.

@@ -732,62 +738,138 @@ func (d *DataAccessService) GetValidatorDashboardGroupSummary(ctx context.Contex
return ret, nil
}

func (d *DataAccessService) internal_getElClAPR(ctx context.Context, validators []t.VDBValidator, days int) (elIncome decimal.Decimal, elAPR float64, clIncome decimal.Decimal, clAPR float64, err error) {
Eisei24 (Contributor, Author):

The reason I had to change the whole method from a validator list (validators) to the dashboardId (plus group filter) is that I could not change the ClickHouse query to
WHERE validator_index IN ($1)

For a large dashboard, e.g. 250,000 validators, this fails with a "Max query size exceeded" error.
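The blow-up is easy to see with a toy sketch (the table and column names below are hypothetical, and the real code builds its queries with goqu): inlining 250,000 indices into an IN list produces a statement well over a megabyte, while the subquery form stays a constant few hundred bytes regardless of dashboard size.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// inListQuery inlines every validator index into the SQL text, so the
// statement grows linearly with the dashboard size.
func inListQuery(validators []uint64) string {
	parts := make([]string, len(validators))
	for i, v := range validators {
		parts[i] = strconv.FormatUint(v, 10)
	}
	return "SELECT SUM(rewards) FROM summary WHERE validator_index IN (" +
		strings.Join(parts, ",") + ")"
}

// subqueryQuery filters via a subquery on the dashboard mapping table
// instead, so the statement size is constant.
func subqueryQuery(dashboardID uint64) string {
	return fmt.Sprintf(`SELECT SUM(rewards) FROM summary
WHERE validator_index IN (
    SELECT validator_index FROM users_val_dashboards_validators WHERE dashboard_id = %d)`, dashboardID)
}

func main() {
	validators := make([]uint64, 250_000)
	for i := range validators {
		validators[i] = uint64(i)
	}
	fmt.Println("IN-list query bytes:", len(inListQuery(validators)))
	fmt.Println("subquery query bytes:", len(subqueryQuery(42)))
}
```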


cloudflare-workers-and-pages bot commented Aug 1, 2024

Deploying beaconchain with Cloudflare Pages

Latest commit: f5606e4
Status: 🚫 Build failed.

With("validators", goqu.L("(SELECT dashboard_id, group_id, validator_index FROM users_val_dashboards_validators WHERE dashboard_id = ?)", dashboardId.Id)).
Select(
goqu.L("ARRAY_AGG(r.validator_index) AS validator_indices"),
goqu.L("SUM(COALESCE(r.attestations_reward, 0) + COALESCE(r.blocks_cl_reward, 0) + COALESCE(r.sync_rewards, 0) + COALESCE(r.blocks_cl_slasher_reward, 0)) AS cl_rewards"),
Contributor:

Suggested change
goqu.L("SUM(COALESCE(r.attestations_reward, 0) + COALESCE(r.blocks_cl_reward, 0) + COALESCE(r.sync_rewards, 0) + COALESCE(r.blocks_cl_slasher_reward, 0)) AS cl_rewards"),
goqu.L("COALESCE(SUM(r.attestations_reward + r.blocks_cl_reward + r.sync_rewards + r.blocks_cl_slasher_reward), 0) AS cl_rewards"),

These columns aren't nullable in ClickHouse, so this should work.
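The reviewer's non-nullability caveat matters because the two forms only agree when no NULLs are involved. A toy Go model (nil standing in for SQL NULL; names hypothetical) shows where SUM(COALESCE(a, 0) + COALESCE(b, 0)) and COALESCE(SUM(a + b), 0) diverge:

```go
package main

import "fmt"

// row models one summary row with two nullable reward columns
// (nil stands in for SQL NULL).
type row struct {
	attReward, blockReward *int64
}

// sumOfCoalesced mirrors SUM(COALESCE(a, 0) + COALESCE(b, 0)):
// each NULL becomes 0 before the per-row addition, so no value is lost.
func sumOfCoalesced(rows []row) int64 {
	var total int64
	for _, r := range rows {
		if r.attReward != nil {
			total += *r.attReward
		}
		if r.blockReward != nil {
			total += *r.blockReward
		}
	}
	return total
}

// coalesceOfSum mirrors COALESCE(SUM(a + b), 0): in SQL, a + b is NULL
// if either operand is NULL, and SUM skips NULL rows entirely.
func coalesceOfSum(rows []row) int64 {
	var total int64
	for _, r := range rows {
		if r.attReward == nil || r.blockReward == nil {
			continue // a + b is NULL; SUM ignores the whole row
		}
		total += *r.attReward + *r.blockReward
	}
	return total
}

func main() {
	three, five, seven := int64(3), int64(5), int64(7)
	// With a NULL present the two forms disagree: 15 vs 8.
	mixed := []row{{&three, &five}, {nil, &seven}}
	fmt.Println(sumOfCoalesced(mixed), coalesceOfSum(mixed))
	// With non-nullable columns, as in ClickHouse here, they agree.
	clean := []row{{&three, &five}, {&seven, &three}}
	fmt.Println(sumOfCoalesced(clean), coalesceOfSum(clean))
}
```

The suggested form does one COALESCE per aggregate instead of one per column per row, which is why it is preferable once NULLs are ruled out.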

Eisei24 (Contributor, Author):

Changed.

if row.MaxEpochEnd > epochMax {
epochMax = row.MaxEpochEnd
}
}
// ------------------------------------------------------------------------------------------------------------------
// Get the EL rewards
Contributor:

validator_dashboard_view_execution_rewards_epoch is a proxied view that exists in ClickHouse and exposes value, proposer, and epoch. In theory this could even be combined into the query above.

But it's not important enough to block the PR.

Eisei24 (Contributor, Author):

I will keep it in mind for the future.

Comment on lines 780 to 782
rewardsDs = rewardsDs.
InnerJoin(goqu.L("users_val_dashboards_validators v"), goqu.On(goqu.L("r.validator_index = v.validator_index"))).
Where(goqu.L("v.dashboard_id = ?", dashboardId.Id))
Contributor:

This is probably slow, since it isn't using the WITH subquery optimization that is used above.

Eisei24 (Contributor, Author):

I changed it to the subquery here and also for the validator slashing list, so these should now all be ClickHouse queries.
However, I also removed the WHERE condition after the INNER JOIN since it was redundant. Or was there an unknown performance trick that needed it?

Contributor:

> However I also removed the WHERE condition after the INNER JOIN since it was redundant, or was there an unknown performance trick that needed it?

The ClickHouse query planner doesn't seem to be smart enough to optimize the join that way. Without the WHERE filter, the query always scans the complete FROM table.
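The planner behavior described here can be illustrated with a toy model (all names hypothetical; this only mimics scan counts, not ClickHouse itself): both plans return the same total, but without the pushed-down filter every row of the big FROM table is touched, while the "redundant" WHERE lets the scan be restricted to the dashboard's validators.

```go
package main

import "fmt"

type summaryRow struct {
	validatorIndex uint64
	reward         int64
}

type mappingRow struct {
	dashboardID, validatorIndex uint64
}

// fullScanJoin models the plan without the WHERE filter: the join is
// resolved by scanning every row of the summary (FROM) table.
func fullScanJoin(summary []summaryRow, mapping []mappingRow, dashboardID uint64) (total int64, scanned int) {
	members := map[uint64]bool{}
	for _, m := range mapping {
		if m.dashboardID == dashboardID {
			members[m.validatorIndex] = true
		}
	}
	for _, s := range summary {
		scanned++
		if members[s.validatorIndex] {
			total += s.reward
		}
	}
	return total, scanned
}

// filteredJoin models the plan with WHERE v.dashboard_id = ?: the scan is
// restricted to the dashboard's validators. byIndex stands in for a
// pre-built index on validator_index.
func filteredJoin(summary []summaryRow, mapping []mappingRow, dashboardID uint64) (total int64, scanned int) {
	byIndex := map[uint64]int64{}
	for _, s := range summary {
		byIndex[s.validatorIndex] += s.reward
	}
	for _, m := range mapping {
		if m.dashboardID != dashboardID {
			continue
		}
		scanned++
		total += byIndex[m.validatorIndex]
	}
	return total, scanned
}

func main() {
	summary := make([]summaryRow, 1000)
	for i := range summary {
		summary[i] = summaryRow{uint64(i), int64(i)}
	}
	mapping := []mappingRow{{7, 10}, {7, 20}, {8, 30}}
	t1, s1 := fullScanJoin(summary, mapping, 7)
	t2, s2 := filteredJoin(summary, mapping, 7)
	fmt.Println(t1, s1) // same total, whole table scanned
	fmt.Println(t2, s2) // same total, only the dashboard's rows touched
}
```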

Eisei24 (Contributor, Author):

Ah, I see. I reverted the removal of the WHERE.

LEFT JOIN relays_blocks rb ON blocks.exec_block_hash = rb.exec_block_hash
WHERE proposer = ANY($1) AND status = '1' AND slot >= (SELECT MIN(epoch_start) * $2 FROM %s WHERE validator_index = ANY($1));`
err = db.AlloyReader.GetContext(ctx, &elIncome, fmt.Sprintf(query, table), validators, utils.Config.Chain.ClConfig.SlotsPerEpoch)
err = d.alloyReader.GetContext(ctx, &elIncome, query, args...)
Contributor:

Same as above: this could be rewritten to use the proxy view in ClickHouse.

@invis-bitfly invis-bitfly (Contributor) left a review:

Only minor things found, nothing critical, so marking as approved.

@Eisei24 Eisei24 merged commit 4374d7f into staging Aug 2, 2024
1 check passed
@Eisei24 Eisei24 deleted the BIDS-3236/ch_for_summary branch August 2, 2024 12:45
3 participants