
add: columns to Eth2Processor and BlockProcessor #6862

Draft · wants to merge 22 commits into base: unstable
Conversation

@agnxsh (Contributor) commented Jan 20, 2025

No description provided.

@agnxsh marked this pull request as draft January 20, 2025 05:56

github-actions bot commented Jan 20, 2025

Unit Test Results

0 files (-15)       0 suites (-2 285)    0s ⏱️ (-1h 5m 22s)
0 tests (-5 358)    0 ✔️ (-5 011)    0 💤 (-347)    0 ❌ (±0)
0 runs  (-36 999)   0 ✔️ (-36 509)   0 💤 (-490)    0 ❌ (±0)

Results for commit a790995. ± Comparison against base commit e990a94.

♻️ This comment has been updated with latest results.

@@ -199,9 +245,62 @@ proc get_data_column_sidecars*(signed_beacon_block: electra.TrustedSignedBeaconB

sidecars

# Additional overload to perform reconstruction at the time of gossip

return err("DataColumnSidecar: Length should not be 0")

var
  columnCount = data_columns.len
Contributor: columnCount can/should be a let.


var
  columnCount = data_columns.len
  blobCount = data_columns[0].column.len
Contributor: blobCount also can/should be a let. A sketch of both suggestions applied follows.
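
A minimal sketch of the two suggestions applied together, using let since neither binding is mutated afterwards:

let
  columnCount = data_columns.len
  blobCount = data_columns[0].column.len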

for column_index in 0..<NUMBER_OF_COLUMNS:
  var
    column_cells: seq[KzgCell]
    column_proofs: seq[KzgProof]
Contributor: Both of these could use newSeqOfCap with cellsAndProofs.len.
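
A sketch of that suggestion, assuming cellsAndProofs is in scope as the review implies. newSeqOfCap allocates an empty seq with the requested capacity, so the later add calls avoid reallocations:

for column_index in 0..<NUMBER_OF_COLUMNS:
  var
    column_cells = newSeqOfCap[KzgCell](cellsAndProofs.len)
    column_proofs = newSeqOfCap[KzgProof](cellsAndProofs.len)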

@@ -130,6 +143,28 @@ proc routeSignedBeaconBlock*(
msg = res.error()
return err(res.error())

# May not be required as we are already
# kzg verifying the blobs once
Contributor: KZG is an acronym, or at least capitalized like one.


let kzgCommits =
  signedBlock.message.body.blob_kzg_commitments.asSeq
if dataColumns.len > 0 and kzgCommits.len > 0:
Contributor: This doesn't seem to actually use kzgCommits, though, aside from this length check?

(Also, if it's just checking len, it shouldn't need asSeq.)
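
A sketch of the suggested simplification: drop the intermediate binding and check the length directly on the SSZ list, which provides len without asSeq:

if dataColumns.len > 0 and
    signedBlock.message.body.blob_kzg_commitments.len > 0:
  # proceed with data column routing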

for dc in data_columns:
  if dc.index in custody_columns:
    final_columns.add dc
dataColumnRefs = Opt.some(final_columns.mapIt(newClone(it)))
Contributor: Why even create the non-ref final_columns to begin with, only to mapIt it immediately?

Once you know you're going to set dataColumnRefs = Opt.some ..., you can build it up in place.
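
A minimal sketch of building the ref seq in place, assuming the names from the diff (data_columns, custody_columns, dataColumnRefs):

var refs: seq[ref DataColumnSidecar]
for dc in data_columns:
  if dc.index in custody_columns:
    refs.add newClone(dc)  # clone straight into the ref seq
dataColumnRefs = Opt.some(refs)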

@@ -1269,6 +1303,12 @@ func getSyncCommitteeSubnets(node: BeaconNode, epoch: Epoch): SyncnetBits =

subnets + node.getNextSyncCommitteeSubnets(epoch)

func readCustodyGroupSubnets*(node: BeaconNode): uint64=
  var res = CUSTODY_REQUIREMENT.uint64
@tersec (Contributor) commented Jan 22, 2025:

This whole function can be written as

func readCustodyGroupSubnets(node: BeaconNode): uint64 =
  if node.config.peerdasSupernode:
    NUMBER_OF_CUSTODY_GROUPS.uint64
  else:
    CUSTODY_REQUIREMENT.uint64

without first setting a possibly incorrect value.

Contributor: Also, I'm not sure why it's exported. It's only used in the module in which it's defined:

beacon_chain/nimbus_beacon_node.nim
1306:func readCustodyGroupSubnets*(node: BeaconNode): uint64=
1349:    targetSubnets = node.readCustodyGroupSubnets()

@@ -79,6 +79,8 @@ const
int(ConsensusFork.Phase0) .. int(high(ConsensusFork))
BlobForkCodeRange =
  MaxForksCount .. (MaxForksCount + int(high(ConsensusFork)) - int(ConsensusFork.Deneb))
DataColumnForkCodeRange =
  MaxForksCount .. (MaxForksCount + int(high(ConsensusFork)) - int(ConsensusFork.Fulu))
Contributor: This range is incorrect because it overlaps with the blobs range. The idea is

[blocks range][blobs range][columns range] ... high(int)

The allocated range for blocks is [0 .. 16383] and the allocated range for blobs is [16384 .. 32767], so for columns you should allocate [32768 .. 49151].

Author (Contributor):

  DataColumnForkCodeRange =
    (BlobForkCodeRange.high + 1) .. (BlobForkCodeRange.high + 1 + int(high(ConsensusFork)) - int(ConsensusFork.Fulu))

Something around this value?

func getDataColumnForkCode(fork: ConsensusFork): uint64 =
  case fork
  of ConsensusFork.Fulu:
    uint64(MaxForksCount)
Contributor: Because of the invalid code range, this provides invalid codes that overlap with the blobs range.
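
A hedged sketch of how this could look once the columns range no longer overlaps blobs. It assumes MaxForksCount = 16384 (implied by the blocks range [0 .. 16383]), so the columns range starts at 2 * MaxForksCount = 32768, and that Fulu is the highest ConsensusFork here; the pre-Fulu branch is illustrative:

func getDataColumnForkCode(fork: ConsensusFork): uint64 =
  case fork
  of ConsensusFork.Fulu:
    # 32768: the first code past the blobs range [16384 .. 32767]
    uint64(2 * MaxForksCount)
  of ConsensusFork.Phase0 .. ConsensusFork.Electra:
    raiseAssert "data columns are not available before Fulu"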

@@ -132,6 +135,27 @@ proc getShortMap*[T](req: SyncRequest[T],
res.add('|')
res

proc getShortMap*[T](req: SyncRequest[T],
                     data: openArray[ref DataColumnSidecar]): string =
Contributor: It looks like this code is based on the old, ugly version of the blobs map, which was recently changed to a more suitable format. The idea is to show how blobs are distributed over the requested range, so the map can now look like

12..............................

meaning the first block in the range returned 1 blob, the second returned 2, and all other blocks have no blobs. It also looks like you are still using the MAX_BLOBS_PER_BLOCK constant (I'm not sure whether that is appropriate here); if so, please update this procedure to the new version.
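
For illustration only, a hedged sketch of that newer map style applied to columns: one character per requested slot, a digit for the number of sidecars returned for that slot and '.' for none. It assumes SyncRequest exposes slot and count as in the surrounding module; nothing here is taken from the PR itself:

proc getShortMap*[T](req: SyncRequest[T],
                     data: openArray[ref DataColumnSidecar]): string =
  var res = newStringOfCap(int(req.count))
  for i in 0'u64 ..< req.count:
    let slot = req.slot + i
    var count = 0
    for sidecar in data:
      if sidecar[].signed_block_header.message.slot == slot:
        inc count
    # digit capped at 9 for readability; '.' means no sidecars for this slot
    res.add(if count > 0: char(ord('0') + min(count, 9)) else: '.')
  res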
