add: columns to Eth2Processor and BlockProcessor #6862
base: unstable
Conversation
@@ -199,9 +245,62 @@ proc get_data_column_sidecars*(signed_beacon_block: electra.TrustedSignedBeaconB

  sidecars

  # Additional overload to perform reconstruction at the time of gossip
  return err("DataColumnSidecar: Length should not be 0")

  var
    columnCount = data_columns.len
`columnCount` can/should be `let`.
  var
    columnCount = data_columns.len
    blobCount = data_columns[0].column.len
`blobCount` also can/should be `let`.
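As a sketch of the two suggestions above combined (assuming, as the hunk suggests, that neither binding is reassigned afterwards):

```nim
# Neither binding is mutated after initialization in the shown hunk,
# so `let` is the idiomatic choice over `var`.
let
  columnCount = data_columns.len
  blobCount = data_columns[0].column.len
```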
  for column_index in 0..<NUMBER_OF_COLUMNS:
    var
      column_cells: seq[KzgCell]
      column_proofs: seq[KzgProof]
Both of these could use `newSeqOfCap` with `cellsAndProofs.len`.
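A minimal sketch of that suggestion, assuming `cellsAndProofs.len` is the right capacity for both sequences:

```nim
# Pre-allocate capacity up front instead of growing from an empty seq;
# newSeqOfCap returns a zero-length seq with the given capacity reserved.
var
  column_cells = newSeqOfCap[KzgCell](cellsAndProofs.len)
  column_proofs = newSeqOfCap[KzgProof](cellsAndProofs.len)
```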
@@ -130,6 +143,28 @@ proc routeSignedBeaconBlock*(
      msg = res.error()
    return err(res.error())

  # May not be required as we are already
  # kzg verifying the blobs once
KZG is an acronym, or at least capitalized like one
  let kzgCommits =
    signedBlock.message.body.blob_kzg_commitments.asSeq
  if dataColumns.len > 0 and kzgCommits.len > 0:
This doesn't seem to actually use `kzgCommits` aside from this check, though? (Also, if it's just checking `len`, it shouldn't need `asSeq`.)
  for dc in data_columns:
    if dc.index in custody_columns:
      final_columns.add dc
  dataColumnRefs = Opt.some(final_columns.mapIt(newClone(it)))
Why even create the non-`ref` `final_columns` to begin with, only to `mapIt` it immediately? Once you know you're going to set `dataColumnRefs = Opt.some ...`, you can build it up in place.
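A hedged sketch of building the refs in place (names taken from the hunk; the surrounding declarations are assumed):

```nim
# Append refs directly, skipping the intermediate non-ref final_columns
# seq and the extra mapIt pass over it.
var refs: seq[ref DataColumnSidecar]
for dc in data_columns:
  if dc.index in custody_columns:
    refs.add newClone(dc)
dataColumnRefs = Opt.some(refs)
```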
@@ -1269,6 +1303,12 @@ func getSyncCommitteeSubnets(node: BeaconNode, epoch: Epoch): SyncnetBits =

  subnets + node.getNextSyncCommitteeSubnets(epoch)

func readCustodyGroupSubnets*(node: BeaconNode): uint64=
  var res = CUSTODY_REQUIREMENT.uint64
This whole function can be written as

```nim
func readCustodyGroupSubnets(node: BeaconNode): uint64 =
  if node.config.peerdasSupernode:
    NUMBER_OF_CUSTODY_GROUPS.uint64
  else:
    CUSTODY_REQUIREMENT.uint64
```

without this setting of a maybe-incorrect value first.
Also, not sure why it's exported. It's only used in the same module in which it's defined:
beacon_chain/nimbus_beacon_node.nim
1306:func readCustodyGroupSubnets*(node: BeaconNode): uint64=
1349: targetSubnets = node.readCustodyGroupSubnets()
@@ -79,6 +79,8 @@ const
    int(ConsensusFork.Phase0) .. int(high(ConsensusFork))
  BlobForkCodeRange =
    MaxForksCount .. (MaxForksCount + int(high(ConsensusFork)) - int(ConsensusFork.Deneb))
  DataColumnForkCodeRange =
    MaxForksCount .. (MaxForksCount + int(high(ConsensusFork)) - int(ConsensusFork.Fulu))
This is an incorrect range because it overlaps with blobs; the idea is

[blocks range][blobs range][columns range] ... high(int)

The allocated range for blocks is [0 .. 16383] and the allocated range for blobs is [16384 .. 32767], so for data columns you should allocate [32768 .. 49151].
How about

```nim
DataColumnForkCodeRange =
  (BlobForkCodeRange.high + 1) .. (BlobForkCodeRange.high + 1 + int(high(ConsensusFork)) - int(ConsensusFork.Fulu))
```

or something around this value?
func getDataColumnForkCode(fork: ConsensusFork): uint64 =
  case fork
  of ConsensusFork.Fulu:
    uint64(MaxForksCount)
Because of the invalid code range it produces invalid codes, which overlap with the blobs range.
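Under the layout described above (blocks [0 .. 16383], blobs [16384 .. 32767], columns [32768 .. 49151]), a corrected sketch of the Fulu case would start the columns code space one past the blobs range rather than at `MaxForksCount`. This is an illustration of the reviewer's point, not the PR's actual code:

```nim
func getDataColumnForkCode(fork: ConsensusFork): uint64 =
  case fork
  of ConsensusFork.Fulu:
    # Start of the columns code space: one past the end of the blobs
    # range, i.e. 2 * MaxForksCount = 32768 in the layout above.
    uint64(2 * MaxForksCount)
```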
@@ -132,6 +135,27 @@ proc getShortMap*[T](req: SyncRequest[T],
    res.add('|')
  res

proc getShortMap*[T](req: SyncRequest[T],
                     data: openArray[ref DataColumnSidecar]): string =
Looks like this code is based on the old and ugly version of the blobs map, but that was recently changed to a more suitable map format. The idea is to show how blobs are distributed over the range request, so now it could look like `12..............................`, meaning that for the requested range 1 blob was returned for the first block and 2 blobs for the second, while all other blocks are without blobs. It looks like you are still using the `MAX_BLOBS_PER_BLOCK` constant (I'm not sure whether that is appropriate here), but if so, please change this procedure to the new version.
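A self-contained sketch of the newer map format described above (a hypothetical helper, not the actual `getShortMap` signature): one character per block in the requested range, a digit for the sidecar count and `'.'` for none.

```nim
# Hypothetical illustration: shortMapSketch(5, @[1, 2]) builds "12...",
# i.e. 1 sidecar for the first block, 2 for the second, none after that.
proc shortMapSketch(blockCount: int, counts: openArray[int]): string =
  for i in 0 ..< blockCount:
    if i < counts.len and counts[i] > 0:
      # Clamp to a single digit so the map stays one char per block.
      result.add(char(ord('0') + min(counts[i], 9)))
    else:
      result.add('.')
```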