BIP draft: UTXO set sharing #2137
Conversation
| `merkle_root` | `uint256` | 32 | Root of the Merkle tree computed over chunk hashes. |

A requesting node MUST ignore entries whose `serialized_hash` does not match a known UTXO set hash for the corresponding height.
I think it would be better for a client that supports this mechanism to hardcode the merkle root instead of the straight serialized hash, and to drop serialized_hash from this message.
Hm, yeah, I think that makes sense. I was struggling to find the right path between building on top of the existing assumeutxo data we already have and extending it. I am adding the merkle root to the assumeutxo data in my PR, so checking the `serialized_hash` was belts and suspenders there; it makes sense to make this explicit here as well.
If you wanted to change the contents of the utxo dump as well, it would be nice to include the header chain for the block where the snapshot was taken. Then you could do an import without needing to connect to the network at all, I think.
> I think it would be better for a client that supports this mechanism to hardcode the merkle root instead of the straight serialized hash, and to drop `serialized_hash` from this message.
Dropped serialized_hash mentions from the BIP completely in this new version.
> it would be nice to include the header chain for the block where the snapshot was taken
Hm, I find this a bit odd. For the purposes of this BIP we are necessarily connected to the network, so we can just as well fetch the headers over the network in the way we usually do, and that should be simpler code-wise. For loading the file directly it would make more sense to me, but even then we are connecting to the network afterwards. I will need to think about this more, but thankfully we have the versioning of the dump file, so we can extend the format easily in the future, and it wouldn't even need to affect this BIP, I think.
| `block_hash` | `uint256` | 32 | Block hash this data corresponds to. |
| `chunk_index` | `uint32_t` | 4 | Zero-based index of this chunk. |
| `proof_length` | `compact_size` | 1–9 | Number of hashes in the Merkle proof. |
| `proof_hashes` | `uint256[]` | 32 × `proof_length` | Sibling hashes from leaf to root. |
Rather than a proof, it might be better to just request `getutxoset <height> <hash> 0xFFFFFFFF` once to get the full list of chunk hashes -- that should be about 74 kB for a 9 GB UTXO set, and should stay under 4 MB until the UTXO set is >450 GB. Then each chunk is just `<height> <hash> <number> <data>`.
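The back-of-envelope numbers above can be checked with a short calculation. This is only an illustrative sketch: the 3.9 MB chunk size comes from elsewhere in the draft, and the ~4 MB message budget is an assumption for the sake of the estimate.

```python
CHUNK_SIZE = 3_900_000   # bytes per chunk, per the draft
HASH_SIZE = 32           # SHA256d digest size
MAX_MSG = 4_000_000      # assumed ~4 MB message budget

def chunk_list_bytes(utxo_set_bytes: int) -> int:
    """Size of the full chunk-hash list for a serialized UTXO set."""
    num_chunks = -(-utxo_set_bytes // CHUNK_SIZE)  # ceiling division
    return num_chunks * HASH_SIZE

# A 9 GB UTXO set splits into 2,308 chunks, so the hash list is ~74 kB.
print(chunk_list_bytes(9 * 10**9))        # 73856

# A ~4 MB message holds 125,000 hashes, covering a set of ~487.5 GB.
print(MAX_MSG // HASH_SIZE * CHUNK_SIZE)  # 487500000000
```

The ~487 GB figure is slightly above the ">450 GB" quoted above, which is consistent given the rounding in the comment.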
Huh, interesting idea to get all the chunk hashes first, I didn't think of that. It might make sense to do this with a separate message type even, instead of hacking `getutxoset` as you described. I will have to think about it a little more.
Specified the fetching of the chunk-hash list now with the new `getutxotree`/`utxotree` message types. I liked this a lot more when combined with dropping the discovery approach and asking for the specific UTXO set directly. Let me know what you think.
I was going to complain that I haven’t seen a discussion about this proposal on the mailing list… but you did that already for me. If you already know that you should send it to the mailing list first, I don’t know why you opened the PR first, though. 😛
It wasn't clear to me that a ML post was a prerequisite to opening the PR. I just thought it was necessary to do this at some point before the BIP could get merged/assigned a number. I think that having a place for more detail-oriented commentary makes sense in addition to the high-level discussion happening on the ML, in case ML readers have such feedback but would rather use the more convenient inline commenting/suggestion features on GitHub. I was also looking for feedback on my assumeutxo-related question, e.g. can I assume knowledge of a feature that is only implemented in Bitcoin Core, or should I define this in the BIP? The ML doesn't seem like the right place to ask about this. I will close this for now and re-open when I have made the ML post and given it some reasonable time for discussion.
Thanks, and sorry, I might have come off as more gruff than intended — tone is hard in written text. Obviously, you’ve been around the block and your proposal reads well-considered, but we have been getting a lot of premature submissions out of the blue here, where then BIP Editors become the first line of feedback. Personally, it’s been taking a growing part of my work hours to even just get through all submissions to the repository. So, we have become a bit more insistent on proposals actually being posted to the list first, and I’d like to avoid giving the impression that I’m playing favorites. A hypothetical optimal order might be:
In your case it sounds like you’d be able to skip directly to 5, and we can of course reopen this PR when there has been an ML discussion.
1. The requesting node identifies peers advertising `NODE_UTXO_SET`.
2. The requesting node sends `getutxosetinfo` to one or more of these peers.
3. Each peer responds with `utxosetinfo`. The requesting node verifies that the advertised `serialized_hash` matches a known UTXO set hash, compares `merkle_root` values across peers, and selects a UTXO set whose Merkle root has agreement among multiple peers.
I think this process defeats the point of the service bit. Having "at least one" UTXO set only "works" while there are only a small number of UTXO set options to download. If it's not sufficient to just try peers until you find the UTXO set you want, then we probably need a way to advertise specific UTXO sets.
Thanks. I do think it should be sufficient to try peers until we find the UTXO set we want, and I dropped the discovery approach; see my other comment as well.
before initiating the process. Including the serialized hash in the advertisement lets the requester immediately filter out peers claiming a different UTXO set state before downloading any data.

**Discovery before download:** The `getutxosetinfo`/`utxosetinfo` exchange lets the requesting node
I'm not sure discovery is a useful feature. It enables a misguided implementation to blindly trust whatever nodes provide, and harms node privacy by adding a way to fingerprint nodes.
If you know what you'll accept in advance, you can simply request that snapshot, and the node will either provide it or (potentially) say it can't.
I dropped the discovery approach. Fingerprinting potential is still there with some probing, but it's still better than with discovery, and I think this also better matches the approach we generally follow in the network, as in BIP157 for example.
for buffering and verifying a single chunk. Smaller chunks would increase protocol overhead; larger chunks would increase memory pressure on constrained devices commonly used to run Bitcoin nodes. Together with the additional message overhead, the `utxoset` message including the chunk data also sits just below the theoretical maximum block size, which means any implementation should be able to
It also happens to sit just below the maximum P2P message size `MAX_PROTOCOL_MESSAGE_LENGTH`, so it may be clearer to refer to that instead of the block size.
Yeah, but this was actually a conscious decision. `MAX_PROTOCOL_MESSAGE_LENGTH` is a Bitcoin Core implementation-specific value; a different implementation may have a higher value for this. But every implementation will at least need to be able to receive the biggest possible block, so I think it's better to anchor it to that.
Sent to discover which UTXO sets a peer can serve. This message has an empty payload.

A node that has advertised `NODE_UTXO_SET` MUST respond with `utxosetinfo`. A node that has not
nit: I think it would be clearer to repeat here that we should disconnect a node that doesn't respond to our `getutxosetinfo`, as stated above:

> A node signaling `NODE_UTXO_SET` MUST respond to `getutxosetinfo` messages and MUST be capable of serving all UTXO sets it advertises in its `utxosetinfo` response. A node that fails to meet these obligations SHOULD be disconnected.
With the latest changes I felt it would be more appropriate to remove this general statement instead:
> A node that fails to meet these obligations SHOULD be disconnected.
Without discovery it is just harder for us to decide if the peer is misbehaving or not. It is more precise anyway to clarify this explicitly for each situation.
After all chunks have been received, the node MUST compute the serialized hash and compare it against a known UTXO set hash. If this check fails, the node MUST discard all data and SHOULD disconnect all peers that advertised the corresponding Merkle root.
I'm trying to understand why we "should" disconnect here, but we "must" disconnect if the peer sends us a utxoset message that fails the proof verification (L176). Is it because in the second case, the peer is surely trying to trick us, while in the first case, we probably have a bug on our end?
I'm trying to understand how it can happen that our chunks pass the individual verifications, but the final utxo set hash mismatches, and whether in this case it would be a problem on our end, or peers trying to trick us.
Yeah, the intention was to be stricter about a clear inconsistency in what the peer has given us vs. something that is probably a problem on our side. But I have revisited the disconnection policies and I think they are a lot more consistent now. If I got everything right: we MUST disconnect if the peer sent us something clearly inconsistent, we SHOULD disconnect if the peer sent us something we didn't ask for, and if the evidence points to the issue being on our side, I now say nothing about disconnecting.
Since this is already getting more discussion here than many submissions to the mailing list, I’m going to reopen it and turn it into a draft. Please set it to “Ready for Review” when you have posted to the mailing list.
Concept NACK. It's bad enough that nodes are formalizing this off network, but incorporating it into p2p is another level of awful.
murchandamus left a comment:
I gave this submission a first quick read. I noticed that there doesn’t seem to be a Backwards Compatibility section. Please add one.
Assigned: ?
Discussion: 2026-05-06: https://groups.google.com/g/bitcoindev/c/rThmyI8ZN3Q
Version: 0.2.0
License: BSD-2-Clause
The Headers have a fixed order, please update to:

> Assigned: ?
> License: BSD-2-Clause
> Discussion: 2026-05-06: https://groups.google.com/g/bitcoindev/c/rThmyI8ZN3Q
> Version: 0.2.0
This BIP defines a P2P protocol extension for sharing full UTXO sets between peers. It introduces a new service bit `NODE_UTXO_SET`, four new P2P messages (`getutxotree`, `utxotree`, `getutxoset`, `utxoset`), and a chunk-hash list anchored to a Merkle root known to the requesting node, enabling per-chunk verification. This allows nodes to bootstrap from a recent height by obtaining the
I assume that the nodes are bootstrapping from scratch to a recent height rather than from a recent height? Maybe:

> per-chunk verification. This allows bootstrapping nodes to leapfrog to a recent height by obtaining the
| Name | Bit | Description |
|------|-----|-------------|
| `NODE_UTXO_SET` | 12 (0x1000) | The node can serve complete UTXO set data for at least one height. |
This is in conflict with the Utreexo proposal, which allocates

- Bit 12 to `NODE_UTREEXO`
- Bit 13 to `NODE_UTREEXO_ARCHIVE`

per the BIP183 draft.
A node MUST NOT set this bit unless it has at least one full UTXO set available to serve. A node signaling `NODE_UTXO_SET` MUST be capable of responding to `getutxotree` and `getutxoset` requests for every UTXO set it is willing to serve, including the full chunk-hash list and every
- requests for every UTXO set it is willing to serve, including the full chunk-hash list and every
+ requests for every UTXO set that it is willing to serve, including the full chunk-hash list and every
last chunk contains the remaining bytes and may be smaller.

The leaf hash for each chunk is `SHA256d(chunk_data)`. The tree is built as a balanced binary tree. When the number of nodes at a level is odd, the last node is duplicated before hashing the next level.
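A minimal sketch of the construction described here, assuming the concatenation order and duplication semantics match Bitcoin's transaction Merkle tree (the excerpt does not spell these out):

```python
from hashlib import sha256

def sha256d(data: bytes) -> bytes:
    """Double-SHA256, used for both leaf and internal node hashes."""
    return sha256(sha256(data).digest()).digest()

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    """Balanced binary tree; an odd last node at a level is duplicated
    before hashing the next level."""
    assert leaf_hashes, "at least one chunk is required"
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Leaf hashes are SHA256d(chunk_data); the chunk payloads here are dummies.
leaves = [sha256d(chunk) for chunk in (b"chunk0", b"chunk1", b"chunk2")]
root = merkle_root(leaves)
```

Note that with this duplication rule the root for leaves `[a, b, c]` equals the root for `[a, b, c, c]`, which is the known malleability caveat raised in the comment below on this construction.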
I guess this is safe because no left_child and right_child should ever have the same data, but I dimly remember that this construction had some downsides in the Merkle tree for transactions in a block. (Some node implementation accepted a block with the same transaction repeated, or smth?) Would it perhaps be better to have a dedicated value to add into the hash instead of repeating the left_child, e.g., `SHA256d(left_child || SHA256d(odd_number_of_leaves_no_right_child))`?
| `num_chunks` | `compact_size` | 1–9 | Number of chunks the serialized UTXO set is split into. |
| `chunk_hashes` | `uint256[]` | 32 × `num_chunks` | The ordered list of chunk hashes. |

Upon receiving a `utxotree` message, the node MUST recompute the Merkle root from
Stumbled over this the first time, because “node” could apply to either side.

- Upon receiving a `utxotree` message, the node MUST recompute the Merkle root from
+ Upon receiving a `utxotree` message, the requesting node MUST recompute the Merkle root from
`chunk_hashes` and compare it against the Merkle root it knows for the corresponding UTXO set. If the roots do not match, the node MUST discard the response and MUST disconnect the peer.

#### `getutxoset`
`getutxoset` feels a bit odd, when the message actually requests a chunk. Also, Cluster Mempool makes extensive use of the term “chunk”, and I was wondering whether this overlap could cause confusion in the future.
Sent to request a single chunk of UTXO set data. The requesting node MUST have received a `utxotree` for the corresponding UTXO set before sending this message.
Should there perhaps also be an addition that the serving node must not reply if it has not previously sent a `utxotree` message to the requesting peer?
I also checked that the Utreexo BIPs don’t use the term "utxotree", but I worry that "utreexo" and "utxotree" are extremely similar and that will cause headaches in the future.
## Rationale

**Usage of service bit 12:** Service bits allow selective peer discovery through DNS seeds and addr relay. Bit 12 is chosen as the next unassigned bit after `NODE_P2P_V2` (bit 11, BIP 324).
As noted above, your service flag should perhaps be moved to bit 14, unless the Utreexo project is willing to move instead.
peer switching without data loss, and parallel download from multiple peers. The list itself is small (~80 KB for a ~10 GB set). The specified serialization is deterministic, so all honest nodes produce
If a node is expected to source chunks from multiple different peers, is it really necessary to receive the entire tree description of 80 KB from each of the peers?
The spec says “The requesting node sends `getutxotree` for the desired block hash to one or more of these peers.” so I think the answer is “no, it is not necessary to receive the entire tree description of 80 KB from each of the peers” -- you only send requests to the number of peers you want to receive responses from. Any attempt to give different responses will (should) result in them not hashing back to your known merkle root, so all valid descriptions will be identical, AIUI.
I read lines 133–134:

> Sent to request a single chunk of UTXO set data. The requesting node MUST have received a `utxotree` for the corresponding UTXO set before sending this message.

as the serving node not being permitted to respond to `getutxoset` calls for a specific tree unless it previously sent a `utxotree` message to the same peer, but maybe I misinterpreted that. It seems to me that both aspects of the question should be clarified:

- Must a peer send `getutxotree` before being eligible to responses to `getutxoset` for the same tree?
- Is it necessary to retrieve the `utxotree` from multiple peers before requesting chunks?
Perhaps it would be better to send `getutxotree` to one node (repeating until you get a valid response), and then send `getutxoset` to any nodes that support UTXO set sharing, with the response `utxoset <hash> <n> <empty>` indicating “I don't have that utxoset data”? So instead of `getutxotree`/`utxotree` to establish whether a peer has the data you want, you send `getutxoset`/`utxoset` and either get an explicit nope or data you actually want?
stickies-v left a comment:
I found the BIP pretty easy to grok and think it's well written. At this point, I do not support implementing this in Bitcoin Core, which afaik is the only current use case, but I think it does make sense to formalize these ideas into a BIP, given that a lot of people have thought about and discussed this and it is generally better to preserve that kind of knowledge.
My main conceptual concern with this BIP is that there's a non-trivial data availability problem. Being able to serve the UTXO set for a specific height will most likely incur a significant storage and/or computation overhead. I think this means the network would have to converge on a small subset of available heights (cfr the “blessed” heights in Bitcoin Core). Serving just the tip is another option, but given the size of the UTXO set and the frequency at which the tip changes, that seems problematic for nodes that aren't likely to be able to download the UTXO set within 10 minutes. As a “Peer Services” BIP, I'm not sure that concern can be resolved here, but perhaps it's worth at least mentioning?
|-------|------|------|-------------|
| `block_hash` | `uint256` | 32 | Block hash identifying the requested UTXO set. |

A node that has advertised `NODE_UTXO_SET` and can serve the requested UTXO set MUST respond with
nit:

- A node that has advertised `NODE_UTXO_SET` and can serve the requested UTXO set MUST respond with
+ A node that has advertised `NODE_UTXO_SET` and can serve the requested UTXO set SHOULD respond with
The serialized UTXO set (header + body) is split into chunks of exactly 3,900,000 bytes (3.9 MB). The last chunk contains the remaining bytes and may be smaller.
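The chunking rule just quoted is simple enough to sketch directly; this is an illustrative sketch, not part of the spec, and the dummy payload size is arbitrary:

```python
CHUNK_SIZE = 3_900_000  # bytes per chunk, per the draft

def split_chunks(serialized_utxo_set: bytes) -> list[bytes]:
    """Split the serialized set (header + body) into fixed-size chunks;
    only the last chunk may be smaller."""
    return [serialized_utxo_set[i:i + CHUNK_SIZE]
            for i in range(0, len(serialized_utxo_set), CHUNK_SIZE)]

chunks = split_chunks(bytes(8_000_000))  # a dummy 8 MB payload
print([len(c) for c in chunks])          # [3900000, 3900000, 200000]
```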
The leaf hash for each chunk is `SHA256d(chunk_data)`. The tree is built as a balanced binary tree. When
nit: when reading this, I was wondering if I had missed a section. A very brief (non-title) introduction of the tree we're building before talking about its leaves would read more naturally.
I don't believe this is a major problem; automatically storing and serving the UTXO sets at fixed heights, e.g. every 12,500 blocks (12 weeks / 3 months), and only keeping the most recent ~4, means that your node's storage requirements increase by ~50 GB (~7% of the 700 GB used for historical block data). It also means that you can expect to download up to about ~40 GB (a 12 GB UTXO set plus 12,500 blocks at 2 MB each) before you can validate blocks at the tip with a new install, assuming new point releases are made to update hardcoded assumeutxo data shortly after block x*12500 is mined.
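The estimate above decomposes as a quick sketch; all inputs (snapshot interval, per-set size, per-block size) are the comment's own assumed figures, not values from the BIP:

```python
SNAPSHOT_INTERVAL = 12_500  # blocks between snapshots (~12 weeks)
SNAPSHOTS_KEPT = 4          # most recent snapshots retained
UTXO_SET_GB = 12            # assumed serialized UTXO set size
BLOCK_MB = 2                # assumed worst-case block size

extra_storage_gb = SNAPSHOTS_KEPT * UTXO_SET_GB  # 48, i.e. "~50 GB"
worst_download_gb = UTXO_SET_GB + SNAPSHOT_INTERVAL * BLOCK_MB / 1000  # 37.0, "~40 GB"
print(extra_storage_gb, worst_download_gb)
```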
That's what I would consider as “blessed” heights in my comment, and I agree it's a solution. It still requires coordination as to which heights “the network” is able to serve, either through implementation details or BIP specification.
I was assuming that the UTXO set snapshot heights would correspond to the UTXO set root commitments shipped with the Bitcoin Core releases, and volunteering nodes would likely keep the last 2–3 of those on hand. Alternatively, I was wondering whether nodes might automatically keep UTXO set snapshots at, e.g., the latest couple of heights that are multiples of 20,000, on top of the ones shipped with the Bitcoin Core releases, and would be willing to serve those once they had been buried by, say, 4,000 blocks.
If you're storing the UTXO set yourself, you probably want to take the snapshot while it's still close to the tip, to avoid too much churn. Whether you start serving that immediately or wait is another matter though.
Yeah, snapshotting while it is on top is certainly the easiest, but I figured that it might not be great to start serving the UTXO set if it is too close to the tip.
This BIP draft describes the sharing of a full UTXO set via the p2p network.
Design summary:
The one part I am not so sure about yet: this references Bitcoin Core and its features or RPCs in a few places now. I am aware that this is not ideal for a specification that targets a wider audience, but the reality is that assumeutxo seems to be implemented only in Bitcoin Core, and mentioning RPCs from the workflow there seems the clearest way to describe this. But I am happy to generalize this further, and I would be very happy to receive some guidance on what level of referencing assumeutxo is acceptable here, since it is obviously the main current motivation. One concrete example: the Bitcoin Core PR that I will use as a reference implementation will rely on assumeutxo params rather than comparing multiple peer values of the merkle root. Is this discrepancy an issue?
Mailing List post: https://groups.google.com/g/bitcoindev/c/rThmyI8ZN3Q/m/TJBc1xRbAQAJ
Proposed implementation: bitcoin/bitcoin#35054