@0xngmi
Created May 31, 2024 20:55
Aggregator with customized user risk profiles

Background

In the current market, yield farmers have only two options: either deposit into a yield aggregator that fully manages their money for them, making all the risk assessments and moving their money between protocols, or run everything on their own, continuously moving their money between protocols, at most using an auto-compounder on a single vault for some automation.

However, not everyone has the same risk profile, so fully managed aggregators won't fit everyone, while on the other hand there's a big market of people who don't like constantly having to manage their positions and shuffle their funds. These two options are the two extremes: either fully managed or DIY.

Because of this, I believe there's a market of users who want some automation while still being able to set their own risk profile.

Idea

I propose a protocol where users deposit funds along with policies on how those funds can be deployed, and then the protocol optimizes their position within those boundaries and automatically moves funds as needed.

As an example, a risk-averse investor might deposit DAI and only allow it to be deposited into Compound v2, sDAI or AAVE v2, so the protocol will only shift money between these battle-tested protocols and keep risk very low.

Meanwhile, someone with a higher risk tolerance could deposit ETH with the following policy:

  • Protocol must have had >10M TVL for over 2 months
  • Protocol must have been included in an approved list by 2 of these 3: @Dynamo_Patrick, @phtevenstrong and @0xGeeGee
  • Protocol must have been audited by an auditor from an approved list
  • Amount of funds in any given protocol needs to be lower than 20% of total (funds must be distributed across many protocols)
  • Or, protocol is a simple Sushi MasterChef fork

Along with these, users could use many other conditions to fine-tune their policy to match the risk they're willing to take, so they're able to fully customize their position.
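To make this concrete, here's a minimal sketch of how such a policy might be represented and checked by the offchain optimizer. Everything here is illustrative: the field names, types and `isEligible` helper are hypothetical and not part of the design above.

```typescript
// Hypothetical shape of the data the optimizer would work with; names are illustrative only.
interface ProtocolInfo {
  name: string;
  tvlUsd: number;
  tvlAboveThresholdSince: Date | null; // since when TVL has stayed above the policy's threshold
  approvedBy: string[];                // curators that have whitelisted the protocol
  auditors: string[];
  isSimpleMasterchefFork: boolean;
}

interface UserPolicy {
  minTvlUsd: number;
  minTvlAgeDays: number;
  approvals: { curators: string[]; threshold: number }; // e.g. 2 of 3 curators
  approvedAuditors: string[];
  maxShareOfFundsPerProtocol: number;  // e.g. 0.2 for 20%, enforced by the optimizer itself
}

// Returns true if a protocol is eligible under the policy (the per-protocol allocation
// cap is a constraint on the final allocation, so it's handled by the optimizer, not here).
function isEligible(p: ProtocolInfo, policy: UserPolicy, now: Date): boolean {
  if (p.isSimpleMasterchefFork) return true; // the "or" escape hatch in the example policy

  const tvlOk =
    p.tvlUsd >= policy.minTvlUsd &&
    p.tvlAboveThresholdSince !== null &&
    (now.getTime() - p.tvlAboveThresholdSince.getTime()) / 86_400_000 >= policy.minTvlAgeDays;

  const approvalCount = policy.approvals.curators.filter(c => p.approvedBy.includes(c)).length;
  const auditOk = p.auditors.some(a => policy.approvedAuditors.includes(a));

  return tvlOk && approvalCount >= policy.approvals.threshold && auditOk;
}
```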

Then the protocol will just crunch the data to optimize APY for users under these constraints, so effectively it's like building a yield aggregator specific to each user, giving them all the benefits of automation but without doing risk assessments for them; instead, those are done by each user individually.

Implementation

A really great benefit of this idea is that, since risk is no longer managed by the yield aggregator's team and allocations are objective, it becomes possible to make the protocol more trustless.

There are multiple ways to implement the system, but here's one way to achieve low gas costs while keeping it trust-minimized for depositors:

Once a day, the team will run an offchain script computing the optimal positions for each user, then calculate the final positions that all users will take in aggregate and the changes the protocol needs to make to get there, and post that onchain. It will also post a merkle tree with all the new user positions after the rebalance, which will take into account the profit/loss for each user.
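As a rough illustration of the commitment posted onchain, here's a minimal sketch of building a merkle root over post-rebalance user balances. The leaf encoding is hypothetical, and sha256 is used only to keep the example self-contained; an onchain version would more likely use keccak256 over ABI-encoded leaves.

```typescript
import { createHash } from "node:crypto";

// Illustrative leaf: one user's post-rebalance balance for a given policy/receipt token.
interface UserPosition {
  user: string;     // user address
  token: string;    // receipt token / policy id
  balance: bigint;  // balance after applying the rebalance's profit/loss
}

const sha256 = (data: Buffer): Buffer => createHash("sha256").update(data).digest();

const hashLeaf = (p: UserPosition): Buffer =>
  sha256(Buffer.from(`${p.user.toLowerCase()}:${p.token.toLowerCase()}:${p.balance}`));

// Builds a simple binary merkle tree (pairs hashed in sorted order) and returns
// the root that would be posted onchain together with the rebalance instructions.
function merkleRoot(positions: UserPosition[]): Buffer {
  let level = positions.map(hashLeaf).sort(Buffer.compare);
  if (level.length === 0) return sha256(Buffer.alloc(0));
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const a = level[i];
      const b = level[i + 1] ?? a; // duplicate the last node on odd-sized levels
      const [left, right] = Buffer.compare(a, b) <= 0 ? [a, b] : [b, a];
      next.push(sha256(Buffer.concat([left, right])));
    }
    level = next;
  }
  return level[0];
}
```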

Then, for a few hours, anyone else can run that script to verify that the results are accurate and match what was posted onchain.

In case there's a malicious update that doesn't match the results from the script, or data for the merkle tree is not available, there are multiple mechanisms that could be in place to veto it:

  • A multisig of trusted community members can veto it
  • A challenge with bonds can be created by anyone; this challenge can either be resolved by using an oracle like UMA or (much harder) by bringing the data onchain and running the full script there.
  • Individual users can force a withdrawal using the previous merkle tree; this will send the user their proportion of LP tokens from all their positions, which they will have to unwind themselves. If more than 20% of users force a withdrawal, then a veto is enforced.

Then, if there are 2 vetoes back to back, you could force an emergency withdrawal where no more updates are possible and users just get their proportional LP positions, or they can appoint a liquidator that will receive all their combined assets and can collectively unwind positions.[1]
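For the forced-withdrawal escape hatch above, a user would prove their balance under the last accepted merkle root. Reusing the `sha256` helper and sorted-pair hashing from the earlier sketch (again, purely illustrative), verification could look roughly like this:

```typescript
// Checks that a leaf hash is included under `root`, given the sibling hashes along its path.
// Because pairs are hashed in sorted order, the verifier needs no left/right flags.
function verifyProof(leaf: Buffer, proof: Buffer[], root: Buffer): boolean {
  let node = leaf;
  for (const sibling of proof) {
    const [a, b] = Buffer.compare(node, sibling) <= 0 ? [node, sibling] : [sibling, node];
    node = sha256(Buffer.concat([a, b]));
  }
  return node.equals(root);
}
```

Onchain, the same check against the previous root would gate the transfer of the user's proportional LP tokens.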

Then, after a few hours with no challenge/veto, all the movements described in the onchain update will be executed and positions will be rebalanced.

Users that want to deposit will just deposit their coins into a waiting pool and wait for the next rebalance, and the same goes for exiting users, although a small liquid pool can be maintained to facilitate small instant withdrawals.

This has two large benefits:

  • Security: A lot of aggregator exploits have been caused by mechanisms where deposits/withdrawals can force the pool to interact with underlying protocols, and if that interaction has some issue the pool gets hacked. By making those interactions completely controlled, this entirely removes that class of bugs, as it's no longer possible to, e.g., force 100x deposits and withdrawals into a Curve pool using flashloans.
  • Tracking losses: A big issue for aggregators is that if one of the strategies used by a pool suffers a loss, the pool can't know about it until some onchain action reports the loss to it, so as long as the strategy used for withdrawal is a different one, users can abuse this by withdrawing without taking the loss, making other users take an even bigger loss instead. This protocol solves this issue because withdrawals (excluding very small instant ones) will always happen after losses have been accounted for across the whole system.

Moat/Economies of Scale

By moving funds from a lot of user positions in a single transaction, the protocol can save a lot on gas costs compared to every user moving their funds on their own, reducing costs from O(n) to O(1).

Not just that, but many pools have some deposit/exit fee, which can be explicit or caused by slippage (e.g. a single-sided deposit into a Curve pool). By batching position movements across all users, it's possible to match users entering and exiting the pool in order to avoid those fees.
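As a rough sketch of that matching, the rebalancer could net every user's desired deposits and withdrawals per pool, so offsetting flows cancel out internally and only the residual amount touches the pool (paying the fee or slippage once). The data shapes here are hypothetical.

```typescript
// Desired change for one user's position in one pool: positive = deposit, negative = withdraw.
interface PositionDelta {
  pool: string;
  amount: bigint;
}

// Nets all user deltas per pool; only the residual per pool needs an onchain interaction.
function netFlows(deltas: PositionDelta[]): Map<string, bigint> {
  const net = new Map<string, bigint>();
  for (const d of deltas) {
    net.set(d.pool, (net.get(d.pool) ?? 0n) + d.amount);
  }
  return net;
}

// Example: 100 entering and 80 exiting the same pool results in a single net deposit of 20,
// so the exiting users' fee/slippage is avoided entirely.
const moves = netFlows([
  { pool: "curve-3pool", amount: 100n },
  { pool: "curve-3pool", amount: -80n },
]); // moves.get("curve-3pool") === 20n
```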

As the user base grows, gas costs per user will decrease and the chances of avoiding fees through Coincidence of Wants will increase, thus reducing costs and improving profits for users, which will create a moat derived from economies of scale that will be hard for others to replicate.

Composability

Initially it might look like this will kill composability compared to traditional yield aggregators, since in those you have a single token whereas here each user's position will be different.

However, it's possible to wrap all the positions that share the same policy under a receipt token, and this might actually help composability, because integrating protocols can choose exactly the type of risks they're willing to take. For example, lending protocols can integrate as collateral a token that has the specific risk profile they're ok with, instead of being forced into a binary yes/no decision as with traditional yield aggregators, where you either take all their risk or you don't list them at all.

It's true that this will lead to more fragmentation, as there will be lots of tokens instead of a single one; however, almost all users mint and redeem receipt tokens against the protocol directly, so DEX liquidity is not needed and thus liquidity fragmentation is not a problem. Furthermore, this type of fragmentation is also present in EigenLayer and it has not been an issue there.

Future ideas

Tranching

Once this setup is in place, it's extremely easy to extend it and build a tranching protocol on top.

In many cases, protocols don't experience full losses but only lose a portion of their TVL. In these cases, risk-on users could opt for a higher APY at the cost of losing their full position if there are protocol losses, while risk-averse users can get a lower APY with less risk by having those other users take the losses first.
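As a minimal illustration, assuming a simple two-tranche waterfall where the junior (risk-on) tranche absorbs losses before the senior (risk-averse) one; the numbers are made up:

```typescript
// Allocates a partial loss across two tranches: the junior tranche absorbs losses first,
// and the senior tranche only takes whatever exceeds the junior tranche's capital.
function allocateLoss(juniorTvl: number, seniorTvl: number, loss: number) {
  const juniorLoss = Math.min(loss, juniorTvl);
  const seniorLoss = Math.min(loss - juniorLoss, seniorTvl);
  return { juniorLoss, seniorLoss };
}

// Example: a pool loses 15 out of 100 TVL split 30 junior / 70 senior.
// The junior tranche loses half its capital; the senior tranche loses nothing.
allocateLoss(30, 70, 15); // { juniorLoss: 15, seniorLoss: 0 }
```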

This would allow the protocol to further extend risk customization for users, not just automating positions but providing risk levers that a user can't get on their own, thus making risk management even more extensive and extending the userbase from those that just want automation to sophisticated users that want advanced risk management tools.

Tranched products are huge in traditional finance, and it's a market that is almost non-existent in DeFi, so IMO it's very interesting and has huge potential for growth.

IMO the reasons we haven't seen these in DeFi are:

  • In many cases when there's a loss in DeFi it's due to a hack and then the protocol is fully drained, so tranching doesn't matter as everyone loses everything. This is already getting better as protocols mature and security improves across the ecosystem, and will likely keep getting better.
  • So far, every protocol that has tried to build tranching has built a protocol that adds tranching on top of another protocol, like Aave or Compound. However, the exploit risk that comes from the tranching protocol is much higher than the risk that Aave or Compound suffer a partial loss, such as some bad debt. So by taking a low-risk tranche in one of these protocols you're actually increasing your total risk compared to depositing directly into Aave. This shouldn't be an issue in our case, since users deposit into the protocol for the automation, so they're already taking that extra risk, and thus using tranches is risk-neutral for them and can actually be used to lower risk.

Notes

[1]: This system is just a prototype; it will need extra work to ensure that it can't be manipulated to force a shutdown or for other nefarious purposes, but it shows that it's possible to build the system to be completely permissionless, even for users that aren't actively monitoring it and checking the fraud proofs.
