A Grin full node is already lighter than a Bitcoin full node because of its cut-through ability, but the current implementation of a full node is still too heavy to run on mobile devices. A mobile full node could be made possible by introducing a new flag `./grin --nimble` that would run a full node with pruning.
Most blockchains require storing large amounts of data to run a full node. This is why SPV nodes became the way to participate in the network from a mobile phone. The difference between a full node and an SPV node is that the SPV node only validates a subset of the rules, which means it requires more trust than a full node. Having the ability to fully verify the network state on a mobile phone would enable easier onboarding for many more users while keeping full validation.
The main idea is to reduce the size of the data by pruning rangeproofs and merging kernel commitments. Once a rangeproof has been validated, a node no longer needs it, so it can be pruned. Pruning the rangeproof reduces the size of a UTXO from ~700 bytes to 33 bytes. UTXOs should now be small enough, but we still have the long sequence of kernels that each transaction leaves behind. Here we use a trick similar to the one Mimblewimble uses to validate the supply. Given two signed curve points `y*G` and `z*G` that sum to `(y+z)*G`, we know that the private keys `y` and `z` are collectively known, because both points have been signed. This means that if we have two kernels with commitments `y*G` and `z*G`, once we have validated their signatures, we can sum their public keys to obtain a curve point `(y+z)*G` and pretend we have a signature for this point. Note that we can do this because we have validated the signatures of both points, so we only need to trust the validation process, which does not bring any new assumptions. This means that we can reduce two kernel commitments into a single one. Below we introduce a made-up kernel commitment we call a kernel commitment accumulator: a sum of multiple kernel commitments, each of which came with a valid signature. Even though we won't have a signature for the kernel commitment accumulator point, we can assume we do because we have seen the signatures for the points that sum up to the accumulator point.
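The merging idea can be sketched with a toy additive group standing in for real curve points. Everything below is illustrative, not Grin's actual API: integers mod a prime play the role of secp256k1 points, and signature validation is a placeholder.

```python
# Toy sketch of a kernel commitment accumulator.
# Integers mod a prime stand in for curve points; y*G becomes (y * G) % P.
# Real Grin kernels are secp256k1 points with Schnorr signatures.

P = 2**31 - 1        # toy modulus (a Mersenne prime)
G = 7                # toy generator

def commit(excess):
    """Toy stand-in for the kernel commitment excess*G."""
    return (excess * G) % P

def verify_signature(commitment, sig):
    """Hypothetical placeholder for checking a kernel's Schnorr signature."""
    return True

class KernelAccumulator:
    def __init__(self):
        self.point = 0   # starts at the identity, i.e. 0*G

    def absorb(self, commitment, sig):
        # Only commitments whose signature has been validated may be merged.
        if not verify_signature(commitment, sig):
            raise ValueError("invalid kernel signature")
        self.point = (self.point + commitment) % P

acc = KernelAccumulator()
y, z = 5, 11
acc.absorb(commit(y), sig=None)
acc.absorb(commit(z), sig=None)
# The accumulator now equals the commitment to y+z, i.e. (y+z)*G.
assert acc.point == commit(y + z)
```

The homomorphism is what makes this work: the sum of two commitments is the commitment to the sum of their excesses, so two validated kernels collapse into one point.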
We assume parallel IBD is implemented. We start off with a kernel commitment accumulator whose value is the curve point `0*G`. When syncing with parallel IBD, we obtain kernel chunks through `GetKernelChunk` calls, each of which gives us a subset of kernels that can be validated separately. Once we have validated a kernel chunk, we can add its commitments together to obtain a single kernel commitment and add that to our kernel commitment accumulator. For outputs we create a set of Pedersen commitments that starts off as an empty set. In the IBD phase we download the output chunks through `GetOutputChunk` calls, and once we have validated a chunk we can drop its rangeproofs and put the Pedersen commitments in our set. After parallel IBD our node is synced up to the sync horizon point. From that point on we receive each block separately and validate it.
NOTE: NRD kernels are special because their validation depends on another kernel instance. This means that we can't just sum NRD kernels, because we would no longer be able to validate them, so their validation needs to happen separately. A possible solution is to keep all NRD kernels intact and validate them with a moving-window NRD validation check after we have all of them. Those that have been validated (not necessarily all of them, because a relative NRD kernel could appear at the `horizon_point + 1` block) can be added to the kernel accumulator. There are better solutions for handling NRD kernel validation, but they could be implemented later on.
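One way to sketch the NRD special case is to partition kernels as they arrive: plain kernels fold into the accumulator immediately, while NRD kernels are kept intact until the moving-window check has passed for them. The function and field names below are hypothetical.

```python
# Hypothetical handling of NRD kernels: they cannot be summed before their
# relative-height check, so they are deferred and folded in only after the
# moving-window NRD validation succeeds for them.

def fold_plain_kernels(kernels, accumulator):
    """Fold non-NRD kernels into the accumulator; defer NRD kernels."""
    pending_nrd = []
    for k in kernels:
        if k["is_nrd"]:
            pending_nrd.append(k)   # waits for the moving-window check
        else:
            accumulator += k["commitment"]
    return accumulator, pending_nrd

acc, pending = fold_plain_kernels(
    [{"is_nrd": False, "commitment": 35},
     {"is_nrd": True, "commitment": 77}],
    accumulator=0,
)
assert acc == 35 and len(pending) == 1
```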
We define a Pedersen commitment set to be a set that is an evaluation of a sequence of set operations. We start off with the set that we obtained through IBD up to the horizon point. To compute the latest UTXO state, we loop through each block from the horizon onwards, remove the inputs that the block spent from the set, and add the outputs that the block introduced to the set. We need to keep the inputs and outputs of the blocks from the horizon point onwards in case of a reorg.
When a new block arrives, we validate it and check whether its set of inputs is a subset of our Pedersen commitment set (we know these inputs have valid rangeproofs because they are in our set). Once the block has been validated, we keep its contents but can prune the rangeproofs that came with its outputs and merge its kernel commitments.
For a block that has gone out of the horizon window, we remove its inputs from the starting Pedersen commitment set and add its outputs to the starting set.
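The set evolution described above can be sketched with plain Python sets, integers again standing in for Pedersen commitments. Both function names and the block representation are illustrative.

```python
# Sketch of the Pedersen commitment set as a sequence of set edits. Blocks
# inside the horizon window are kept verbatim so a reorg can be replayed;
# a block leaving the window is folded into the starting set for good.

def utxo_at_tip(start_set, window_blocks):
    """Evaluate the latest UTXO state from the horizon-point set."""
    s = set(start_set)
    for block in window_blocks:
        # A valid block only spends outputs that exist in our set.
        assert block["inputs"] <= s
        s -= block["inputs"]
        s |= block["outputs"]
    return s

def roll_horizon_forward(start_set, window_blocks):
    """Fold the oldest window block into the starting set once it falls
    out of the horizon window; its data is no longer kept for reorgs."""
    oldest = window_blocks.pop(0)
    return (start_set - oldest["inputs"]) | oldest["outputs"], window_blocks

start = {1, 2, 3}
window = [{"inputs": {1}, "outputs": {4}},
          {"inputs": {2, 4}, "outputs": {5}}]
assert utxo_at_tip(start, window) == {3, 5}
start, window = roll_horizon_forward(start, window)
assert start == {2, 3, 4} and len(window) == 1
```

A reorg inside the window is handled by re-evaluating from the starting set with the replacement blocks, which is why the window blocks must be kept intact.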
We need to make it clear that a nimble node is a different type of node that can't bootstrap other nodes, because it does not contain all the data needed for full validation. I also think that rangeproofs are needed to identify your own UTXOs, but I'm not sure about this part.
Given that many new nodes could come from mobile phones, the ratio of regular full nodes vs nimble nodes would probably start moving towards nimble nodes dominating over time. How does that impact the network stability/health? Are there any new sync attacks that are made possible through nimble nodes? What should the peer connection ratio of regular/nimble nodes look like for a mobile node and what for a regular full node?