Replies: 3 comments 4 replies
-
What do you mean by 'IPFS' here? :) I don't know that we want to run a separate go-ipfs process, but if we have a blockstore and Bitswap or GraphSync running on the existing libp2p host, we'll have the same effect.
-
So this would mean writing a
-
When processing advertisements, each advertisement and its multihash entry blocks are saved to IPFS in-line with the processing of that advertisement. This is done synchronously, rather than asynchronously, to avoid building up a large backlog of pending data waiting to be written to IPFS during ingestion spikes; it is acceptable for ingestion to lag significantly behind the announcement of a new advertisement. IPFS storage cannot grow indefinitely, so older index content must be removed to stay within storage limits. The indexer will maintain timestamps for the advertisement CIDs saved to IPFS. When storage must be reduced, the oldest advertisement and all of its entries will be unpinned, and IPFS GC will be run. Need to understand:
-
To avoid having multiple indexer nodes all downloading the same content from content publishers, one indexer can download the content and the other indexers can get that content from a shared block store. Would IPFS make a useful shared block store?
Also, if index content is generally available via IPFS, then any indexer can retrieve it without necessarily downloading it directly from the publisher each time.
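The lookup order suggested above (shared block store first, publisher as fallback) might look something like this sketch. Both `blockSource` and `mapStore` are hypothetical stand-ins for the real blockstore and publisher-fetch interfaces:

```go
package main

import (
	"errors"
	"fmt"
)

// blockSource is an illustrative interface for anything that can
// return a block by CID (shown as a string for brevity).
type blockSource interface {
	Get(cid string) ([]byte, error)
}

// mapStore is a toy in-memory block store standing in for the shared
// store and for the content publisher.
type mapStore map[string][]byte

func (m mapStore) Get(cid string) ([]byte, error) {
	if b, ok := m[cid]; ok {
		return b, nil
	}
	return nil, errors.New("not found: " + cid)
}

// getBlock prefers the shared store and falls back to the publisher,
// writing the fetched block back so other indexers can find it.
func getBlock(cid string, shared mapStore, publisher blockSource) ([]byte, error) {
	if b, err := shared.Get(cid); err == nil {
		return b, nil
	}
	b, err := publisher.Get(cid)
	if err != nil {
		return nil, err
	}
	shared[cid] = b // now available to the other indexers
	return b, nil
}

func main() {
	shared := mapStore{}
	publisher := mapStore{"ad1": []byte("entries")}

	// First fetch misses the shared store, hits the publisher,
	// and caches the block in the shared store.
	b, _ := getBlock("ad1", shared, publisher)
	fmt.Println(string(b), len(shared))
}
```

This is only a shape for the discussion: the point is that only the first indexer pays the publisher-download cost, after which the block is served peer-to-peer from the shared store.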