Sidechain Oracles Security Audit Report (Contest)

1. Introduction
1.1 Disclaimer
The contest audit report and MixBytes make no statements or warranties about the utility of the code, the safety of the code, suitability of the business model, investment advice, endorsement of the platform or its products, regulatory regime for the business model, or any other statements about the fitness of the contracts to purpose, or their bug-free status. The contest documentation is for discussion and information purposes only.

MixBytes services offered are limited to contest hosting and contest conduct assistance on the MixBytes Camp platform and within the MixBytes Camp community. MixBytes does not provide audit services within this contest. MixBytes Camp community members are not employees, partners, agents, contractors, or subcontractors of MixBytes. MixBytes shall in no way be held liable or responsible for MixBytes Camp community members. MixBytes in no way controls or distributes the bounty fund. MixBytes does not conduct any KYC verification on its platform.
1.2 Objective
The audit contest is held to identify possible security vulnerabilities and flaws through independent security researchers who are members of the MixBytes Camp community.

The result of the contest is a unified report containing a list of all triaged vulnerabilities found during the contest, ranked according to the approved severity level.

Within the scope of the audit contest, there is no stage of re-audit after bug fixing on the Project's side.
Finding Severity breakdown
All vulnerabilities discovered during the audit contest are classified based on their potential severity and ranked in the following way:
Severity: Description
Critical: Bugs leading to asset theft, fund access locking, or any other loss of funds.
High: Bugs that can trigger a contract failure. Further recovery is possible only by manual modification of the contract state or contract replacement.
Medium: Bugs that can break the intended contract logic or expose it to DoS attacks, but do not cause a direct loss of funds.
1.3 Project Overview
Sidechain Oracles is a mechanism to provide a Uniswap V3 Pool's price history to chains where a pair may not be available or doesn't have healthy liquidity to rely on.

The set of contracts is designed to update and broadcast the oracle information every time some cooldown period has expired, or whenever the twap has changed significantly since the last update.
1.4 Project Dashboard
Project Summary
Sponsor: Wonderland
Project name: SideChain Oracles
Timeline: December 5, 2022 - December 16, 2022
Project Log
Date: December 5, 2022
Commit Hash: da7cf7d15fca848828f3a2c6e0e8c55e0dd76841
Note: Initial commit
Project Scope
The audit covered the following files:
File name
Link
1.5 Summary of findings
Critical: 1
High: 8
Medium: 14
2. Findings Report
2.1 Critical
1. Drain rewards via sendObservations replay attack
Description
sendObservations can be called by anyone, which can lead to all the funds in the DataFeed contract being spent by an attacker (depending on the implementation).

There is a sendObservations function - https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/DataFeed.sol#L27 - which is used to bridge tick data to the sidechains. It calls the bridgeObservations function, which performs an xcall. As stated before, xcall needs some ETH to be transferred with the call to pay for relayer expenses. If the funds of the DataFeed contract are used for this, anyone will be able to call the sendObservations function multiple times for the same _poolNonce, and all funds in the DataFeed contract could be spent by a malicious actor.
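The attack can be illustrated with a minimal Python model (all names and the fee accounting are hypothetical; it only assumes that the relayer fee is paid from the DataFeed balance and that _poolNonce is not checked for replay):

```python
RELAYER_FEE = 1  # illustrative fee (in ETH units) forwarded per xcall

class DataFeedModel:
    """Toy model of a DataFeed that pays bridge fees from its own balance."""

    def __init__(self, balance):
        self.balance = balance

    def send_observations(self, pool_nonce):
        # No access control and no replay protection on pool_nonce.
        if self.balance < RELAYER_FEE:
            raise RuntimeError("insufficient funds")
        self.balance -= RELAYER_FEE  # fee leaves the contract on every call

feed = DataFeedModel(balance=10)
# The attacker replays the same nonce until the contract is drained.
while feed.balance >= RELAYER_FEE:
    feed.send_observations(pool_nonce=5)

print(feed.balance)  # 0
```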
Recommendation
It's recommended to restrict the sendObservations function so that it can only be called by keepers or the governor, or to require the caller to send ETH with the call instead of using funds from the DataFeed contract.
Sponsor's commentary
Acknowledged: so far no ETH is sent to bridge the information, but a msg.value will be implemented to allow the value to be routed to the bridge. On purpose of this comment, the value will not be stored on DataFeed to be bridged, but rather could be stored in the StrategyJob responsible for coordinating and rewarding the txs.
2.2 High
1. The Connext bridge has its own limit for calldata size
Description
Recommendation
It's recommended to check the future calldata size in DataFeed.fetchObservations() and, if it exceeds the limits, restrict the number of observations fetched.
Sponsor's commentary
TODO #77: implement a better secondsAgos calculation to avoid overflowing bridge calldata size. Older time periods could be broader, while the latest time period should be of fixed size.
2. No payments for xcall
Description
The Connext Network docs state that a special fee should be paid to a relayer to execute a transaction on a destination chain, and this relayer fee should be paid when xcall is initiated. However, at line https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/bridges/ConnextSenderAdapter.sol#L29 there is no such logic, so xcall will work only on a testnet (where the relayer fee is zero) but not on the mainnet.
Recommendation
It's recommended to add a parameter to the bridgeObservations function here - https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/bridges/ConnextSenderAdapter.sol#L20 - reflecting the amount of ETH to be paid to a relayer. Also, at https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/bridges/ConnextSenderAdapter.sol#L29, the xcall itself should specify a value to be transferred.
Sponsor's commentary
TODO #68: implement handling of value to send to bridge adapters.
3. Nonce DoS via strategicFetchObservations backrunning
Description
It's possible to break the StrategyJob state and therefore stop price broadcasting.
Let's consider StrategyJob.sol, DataFeedStrategy.sol and DataFeed.sol.
In StrategyJob.sol we are interested in:

mapping(uint32 => mapping(bytes32 => uint24)) public lastPoolNonceBridged;

function work(
    uint32 _chainId,
    bytes32 _poolSalt,
    uint24 _poolNonce,
    IOracleSidechain.ObservationData[] memory _observationsData
) external upkeep {
    // TODO: change criteria for workable (if there's a new nonce, bridge)
    if (!_workable(_chainId, _poolSalt, _poolNonce)) revert NotWorkable();
    lastPoolNonceBridged[_chainId][_poolSalt] = _poolNonce;
    dataFeed.sendObservations(defaultBridgeSenderAdapter, _chainId, _poolSalt, _poolNonce, _observationsData);
}

function work(bytes32 _poolSalt, IDataFeedStrategy.TriggerReason _reason) external upkeep {
    dataFeedStrategy.strategicFetchObservations(_poolSalt, _reason);
}

function _workable(
    uint32 _chainId,
    bytes32 _poolSalt,
    uint24 _poolNonce
) internal view returns (bool _isWorkable) {
    uint24 _lastPoolNonceBridged = lastPoolNonceBridged[_chainId][_poolSalt];
    if (_lastPoolNonceBridged == 0) {
        (uint24 _lastPoolNonceObserved, , ,) = dataFeed.lastPoolStateObserved(_poolSalt);
        return _poolNonce == _lastPoolNonceObserved;
    } else {
        return _poolNonce == ++_lastPoolNonceBridged;
    }
}
In DataFeedStrategy.sol we are interested in:

function strategicFetchObservations(bytes32 _poolSalt, TriggerReason _reason) external {
    IDataFeed.PoolState memory _lastPoolStateObserved;
    (, _lastPoolStateObserved.blockTimestamp, _lastPoolStateObserved.tickCumulative, _lastPoolStateObserved.arithmeticMeanTick) = dataFeed.lastPoolStateObserved(_poolSalt);
    if (!_isStrategic(_poolSalt, _lastPoolStateObserved, _reason)) revert NotStrategic();
    uint32[] memory _secondsAgos = calculateSecondsAgos(_lastPoolStateObserved.blockTimestamp);
    dataFeed.fetchObservations(_poolSalt, _secondsAgos);
    emit StrategicFetch(_poolSalt, _reason);
}
Note, the function above can be called by anyone!

Steps which lead StrategyJob into an incorrect state:

  1. Everything is correctly initialized by a governor.
  2. Workers from Keep3rJob are available.
  3. A worker calls StrategyJob.work() with salt and TriggerReason (the second work() function from the snippet above).
    • In StrategyJob lastPoolNonceBridged will be initialized with pool and lastNonce from DataFeed.
    • In DataFeed nonce will be increased by 1.
  4. A worker calls StrategyJob.work() with calculated twaps (the first work() function from the snippet above).
    • Here, it is checked that the passed nonce equals lastPoolNonceBridged + 1 (or, on the first bridging, the lastPoolStateObserved nonce from DataFeed).
  5. A hacker observes the mempool and notices a transaction by a worker.
    • It's not guaranteed that the worker's transaction will be included immediately (for example, due to a low gas price).
    • The hacker inserts their own transaction into the next block. That transaction simply calls DataFeedStrategy.strategicFetchObservations() directly. (DataFeed's nonce + 1)
    • The worker's transaction appears in one of the following blocks. It calls StrategyJob.work() (the second one). If its gas price is low (or it arrives too late), it is still applied. (DataFeed's nonce + 1 again!)
  6. Now a worker calls StrategyJob.work() (the first one); StrategyJob has nonce N, but DataFeed's nonce is N + 2.
  7. This job will now always fail for that poolSalt.
Item 5 might happen in another way - workers from Keep3rJob are scripts or people that might be unavailable (due to network or power issues).
At this moment, an attacker might send a lot of strategicFetchObservations transactions, bypassing StrategyJob.

The consequence of such an exploit is that the work can no longer be outsourced due to Keep3rJob restrictions.
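The state desync can be sketched with a simplified Python model (hypothetical names; the check mirrors the _workable() logic quoted above):

```python
class DataFeedModel:
    """Tracks the mainnet-side observation nonce."""

    def __init__(self):
        self.last_observed_nonce = 0

    def fetch_observations(self):
        # Both StrategyJob.work(fetch) and a direct call to
        # DataFeedStrategy.strategicFetchObservations() end up here.
        self.last_observed_nonce += 1

class StrategyJobModel:
    def __init__(self, feed):
        self.feed = feed
        self.last_bridged = 0  # lastPoolNonceBridged

    def send_work(self, nonce):
        # Mirrors _workable(): the first bridged nonce must equal the
        # latest observed nonce; afterwards nonces must be sequential.
        if self.last_bridged == 0:
            workable = nonce == self.feed.last_observed_nonce
        else:
            workable = nonce == self.last_bridged + 1
        if not workable:
            raise RuntimeError("NotWorkable")
        self.last_bridged = nonce

feed = DataFeedModel()
job = StrategyJobModel(feed)

feed.fetch_observations()  # keeper: fetch work, observed nonce -> 1
feed.fetch_observations()  # attacker: strategicFetchObservations(), nonce -> 2

# The keeper can no longer bridge nonce 1: the job expects the latest nonce.
try:
    job.send_work(1)
except RuntimeError as err:
    print(err)  # NotWorkable
```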
Recommendation
It's recommended to add an onlyStrategyJob modifier to DataFeedStrategy.strategicFetchObservations().
Sponsor's commentary
Sponsor:
Implausible:
  • Whenever strategicFetchObservations can be called, it will update the mainnet state; whenever the first fetch is called, the second will most probably revert (as no cooldown time has passed, and the twap hasn't changed significantly)
  • Every strategic fetch is permissionlessly bridgeable, keepers are rewarded to bridge the lastBridgeNonce+1
  • Fetch and broadcast are separate processes (and have 2 separate tracked nonces)
Should lastBridgeNonce be less than lastObservedNonce, keepers are rewarded to bridge 1 by 1 each msg until both nonces match. Should they already match, keepers will have to wait until another observation to be rewarded for bridging.

Contest Manager:
Let's name two work functions from StrategyJob.sol as fetch_work and send_work.
Let's say a keeper calls fetch_work. This increases _lastPoolStateObserved.poolNonce.

Time passes.
The keeper calls fetch_work again. This increases _lastPoolStateObserved.poolNonce again.
Now we have two observations with nonce1 and nonce2 and they are both not yet sent through the bridge.
Can we now call send_work to send both observations through the bridge? This is answered by the _workable(), which checks the following:

  1. If _lastPoolNonceBridged == 0, then _poolNonce == _lastPoolNonceObserved is checked
  2. Otherwise, _poolNonce == ++_lastPoolNonceBridged is checked
In case (2), nonce1 is expected to be _lastPoolNonceBridged + 1. This is OK.
In case (1), nonce1 should be equal to _lastPoolStateObserved.poolNonce. This is not the case, because _lastPoolStateObserved.poolNonce is now equal to nonce2. So, in the initial stage, when we have not yet sent anything through the bridge, we cannot send nonce1. But can we send nonce2? It seems not, because on the other side of the bridge nonce1 is expected.

Sponsor:
Ok, there's an edge case where that could happen.

Should we redeploy the Job, the lastNonceBridged will be initialized to 0, if that situation you point happens, the keeper will be able to send any of the 2 nonces. If he sends the 2nd, then he will not be able to send the 1st (as the job initializes with the most recent one at the time of bridging).
keeper: sendWork(2) -> works on mainnet -> reverts on sidechain (missing nonce 1 yet)
keeper: sendWork(1) -> reverts on mainnet (old nonce)
keeper: sendWork(3) -> should work on mainnet
the case is, only when swapping jobs (redeploying), counting that the last one didn't yet broadcast nonce 1
when naturally initializes (job is the 1st one):
keeper: sendWork(2) -> works on mainnet -> works on sidechain (initializes pool at nonce 2)
keeper: sendWork(3) -> should work
in any case, the fetch and send job won't be stopped. What will be stopped is the sync between rewards in mainnet and sidechain, having to be manually fixed.
I'd put it under possible manual-fixable-out-of-service under operational change.
4. Nonce DoS via whitelistPipeline()
Description
There is a whitelistPipeline() function that can be used to add a pool to the whitelist and update the nonce of the pool.

  function _whitelistPipeline(uint32 _chainId, bytes32 _poolSalt) internal {
    (uint24 _lastPoolNonceObserved, , , ) = IDataFeed(address(this)).lastPoolStateObserved(_poolSalt);
    whitelistedNonces[_chainId][_poolSalt] = _lastPoolNonceObserved + 1;
    _whitelistedPools.add(_poolSalt);
    emit PipelineWhitelisted(_chainId, _poolSalt, _lastPoolNonceObserved + 1);
  }
There is no validation that the pool was successfully added by _whitelistedPools.add(_poolSalt), so whitelistPipeline() can be called a second time for the same pool.

Consider the following case: there is a pool with an already deployed oracle, meaning the oracle stores the current nonce passed from the DataFeed contract.
The strategy calls the fetchObservations() function, which increases the nonce in the _lastPoolStateObserved value.
If the governor now calls the whitelistPipeline() function, it will increment the nonce of the pool stored in PipelineManagement (whitelistedNonces[_chainId][_poolSalt]), so that the nonce becomes greater by two than the nonce stored in the oracle. From then on it is impossible to call sendObservations() with the next nonce for the oracle, because the validatePipeline function will revert:

if (_whitelistedNonce > _poolNonce) revert WrongNonce();
The deployed oracle for that pool can no longer be updated, because validatePipeline will only pass nonces that are greater by 2 than the nonce stored in the oracle, while the oracle's write() function cannot be called with a nonce greater by more than one.
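The lockout can be checked with a few lines of Python (hypothetical variable names; only the two nonce checks described above are modeled):

```python
# State after: whitelistPipeline(), one strategy fetch, whitelistPipeline() again.
oracle_nonce = 1                       # nonce stored on the sidechain oracle
last_observed = 2                      # DataFeed nonce after the strategy fetch
whitelisted_nonce = last_observed + 1  # second whitelisting sets it to 3

def can_send(nonce):
    # validatePipeline: reverts when _whitelistedNonce > _poolNonce
    return whitelisted_nonce <= nonce

def can_write(nonce):
    # the oracle's write() accepts only the next sequential nonce
    return nonce == oracle_nonce + 1

# The bridge demands a nonce >= 3 while the oracle demands exactly 2,
# so no nonce can pass both checks and the oracle is stuck.
print(any(can_send(n) and can_write(n) for n in range(100)))  # False
```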
Recommendation
Consider checking the return value like this:

if(_whitelistedPools.add(_poolSalt));
It'll prevent updating the nonce for the already whitelisted pool.
Sponsor's commentary
TODO: #61 Whitelisting the same pipeline can disable writing to the oracle.
5. Out of sync Connext
Description
Connext doesn't have transaction ordering guarantees and in some cases it may lead to DoS.

  1. You are working with a pool with address POOL.
  2. DataFeed.lastPoolStateObserved[POOL].poolNonce == 2 (sender's side).
  3. OracleSidechain.poolNonce == 2 (receiver's side).
  4. strategyCooldown seconds passed since the last observations sending, so a keeper calls StrategyJob.work(bytes32, IDataFeedStrategy.TriggerReason) with reason TriggerReason.TIME and you fetch all observations and fill DataFeed._observedKeccak.
  5. Then keepers call StrategyJob.work(uint32,bytes32,uint24,IOracleSidechain.ObservationData[]) and you send all observations to bridge ( TRANSACTION #1).
  6. You incremented DataFeed.lastPoolStateObserved[POOL].poolNonce and now it is equal to 3.
  7. Then suddenly the price changed a lot and a new keeper calls StrategyJob.work(bytes32, IDataFeedStrategy.TriggerReason) with reason TriggerReason.TWAP and after that calls StrategyJob.work(uint32,bytes32,uint24,IOracleSidechain.ObservationData[]) and you send all observations to bridge. (TRANSACTION #2)
  8. You incremented DataFeed.lastPoolStateObserved[POOL].poolNonce and now it is equal to 4.
  9. The DataReceiver receives TRANSACTION #2 first. It reverts because of this nonce check: https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/OracleSidechain.sol#L101.
  10. Then TRANSACTION #1 is successfully committed and OracleSidechain.poolNonce becomes 3. But DataFeed.lastPoolStateObserved[POOL].poolNonce is already 4, so all subsequent transactions will fail because of the same check: https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/OracleSidechain.sol#L101.

Even if anyone can call DataFeed.sendObservations() directly with old parameters and repeat the transaction, we consider the problem as High severity because it requires manual intervention to repair the protocol, and all other protocols that depend on the current one will suffer.
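The scenario can be replayed with a small Python model (hypothetical names; only the strict nonce check on the receiving side is modeled):

```python
class OracleSidechainModel:
    """Receiver that only accepts strictly sequential nonces."""

    def __init__(self, pool_nonce):
        self.pool_nonce = pool_nonce

    def receive(self, nonce):
        if nonce != self.pool_nonce + 1:
            raise RuntimeError("InvalidNonce")
        self.pool_nonce = nonce

receiver = OracleSidechainModel(pool_nonce=2)
sender_nonce = 4  # the sender has already bridged nonces 3 and 4

# Connext delivers TRANSACTION #2 (nonce 4) before TRANSACTION #1 (nonce 3):
for nonce in (4, 3):
    try:
        receiver.receive(nonce)
    except RuntimeError:
        pass  # the out-of-order message simply reverts

# The receiver is stuck at 3 while the sender is at 4, so the next
# send (nonce 5) will also revert until nonce 4 is manually replayed.
print(receiver.pool_nonce)  # 3
```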
Recommendation
As far as we know, Connext guarantees transaction ordering only if the transactions are in different batches (30-minute merkle roots).
So, a simple solution is to add a required delay of 30 minutes even for TriggerReason.TWAP.
Then all transactions on the sidechains will keep their order.
Sponsor's commentary
TODO: #76: add a minimum time period to be observed, higher than or equal to 30.
6. OutOfGas error in receiver's chain leads to DoS
Description
Generally, there are two sides: the sender's side (mainnet) and the receiver's side (sidechain).
The receiver's side (starting with ConnextReceiverAdapter.xReceive()) is more gas-consuming because observations are inserted into the Oracle storage.
Based on tests, we found it to be about 2 times more gas-expensive than the sender's side: StrategyJob.work(uint32,bytes32,uint24,IOracleSidechain.ObservationData[]) and StrategyJob.work(bytes32, IDataFeedStrategy.TriggerReason).

So, let's imagine the following situation:

  1. Mainnet block gas limit is 30_000_000.
  2. Sidechain is Polygon and it has the limit of 30_000_000 gas per block as well.
  3. You are working with a pool with address POOL.
  4. DataFeed.lastPoolStateObserved[POOL].poolNonce == 3 (sender's side).
  5. OracleSidechain.poolNonce == 3 (receiver's side).
  6. Your main keepers get stuck for a while, and before they are back to work, a lot of unsent observations have already accumulated.
  7. So, keepers call StrategyJob.work(bytes32, IDataFeedStrategy.TriggerReason) and you fetch all observations and fill DataFeed._observedKeccak.
  8. Then keepers call StrategyJob.work(uint32,bytes32,uint24,IOracleSidechain.ObservationData[]) and you send all stuck observations to sidechain in one transaction because in DataFeedStrategy.calculateSecondsAgos() the length of _secondsAgos is not limited.
  9. You incremented DataFeed.lastPoolStateObserved[POOL].poolNonce and now it is equal to 4.
  10. Then your DataReceiver gets a transaction with an enormous _observationsData[] length. Due to the large number of storage operations, an OutOfGas error occurs and the transaction reverts, so OracleSidechain.poolNonce is not incremented.
  11. Now any transaction to that receiver will fail, because the sender's nonce has already been incremented while the receiver's has not. Thus, every transaction will fail here: https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/OracleSidechain.sol#L101.

Even if anyone can call DataFeed.sendObservations() directly with old parameters and repeat the transaction, we consider the problem as High severity because it requires manual intervention to repair the protocol, and all other protocols that depend on the current one will suffer.
Recommendation
It is recommended to add a hard limit on the maximum _secondsAgos size in DataFeedStrategy.calculateSecondsAgos().
Please note that different networks may have different block gas limits.
Sponsor's commentary
TODO: #77: refactor _secondsAgos to avoid overflowing calldata size and gas.
7. Price changes in sidechain oracles are known a few blocks before the update
Description
If the keeper or anyone else calls DataFeed.sendObservations(), the request will first appear in the mempool and then in a newly mined block of the main chain. During all this time, everyone will know that after some (more or less determined) delay it will appear on the sidechain. If some protocol uses Sidechain Oracles to get the price, an attacker has a few blocks to exploit the old price before the update they already know about takes effect.
Recommendation
Consider other mechanics for transferring prices to the sidechain.
Sponsor's commentary
Ok.
8. Protocol-specific delays can lead to frontrunning on sidechains
Description
There will always be a delay when transferring tick data from the mainnet to a sidechain. Connext Network relayers need some time to process and transfer the price data, and keepers spend some time noticing that the data have been fetched recently and should be sent to a sidechain. All of these delays can cause sidechain price data to always be one step behind. Someone can extract profit from those delays - there is a window when the data are already on the mainnet but have not yet arrived on the sidechain. This enables price manipulations, opening/closing of positions, etc.

Recommendation
Since this issue is protocol-specific, it's recommended to state it in the protocol docs. Everyone who uses oracles on the sidechains should know that such delays are possible and should treat oracle data as historical prices, not real-time ones.
Sponsor's commentary
Acknowledged: TODO, add comments in docs about this.
2.3 Medium
1. _secondsPerLiquidityCumulativeX128s is not replicated from mainnet
Description
The public function OracleSidechain.observe() returns _secondsPerLiquidityCumulativeX128s as well as _tickCumulatives, and protocol clients may rely on both values.

But _secondsPerLiquidityCumulativeX128s is different from the one on mainnet, because observations[i].secondsPerLiquidityCumulativeX128 is calculated as follows:

secondsPerLiquidityCumulativeX128 = last.secondsPerLiquidityCumulativeX128 + ((uint160(delta) << 128) / (liquidity > 0 ? liquidity : 1))
But in this case liquidity is always 0, so secondsPerLiquidityCumulativeX128 will differ from the mainnet value.
So, oracle clients may suffer from it.

Uniswap: https://github.com/Uniswap/v3-core/blob/05c10bf6d547d6121622ac51c457f93775e1df09/contracts/libraries/Oracle.sol#L41
Your oracle: https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/OracleSidechain.sol#L129
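The divergence is easy to see numerically. A sketch of the accumulator update with the liquidity term clamped to 1 (as in OracleSidechain, which holds no liquidity) versus real mainnet liquidity (illustrative numbers):

```python
def next_accumulator(last, delta_seconds, liquidity):
    # secondsPerLiquidityCumulativeX128 update, following the formula above
    return last + ((delta_seconds << 128) // (liquidity if liquidity > 0 else 1))

delta = 12  # seconds between two observations
mainnet = next_accumulator(0, delta, liquidity=10**18)  # real pool liquidity
sidechain = next_accumulator(0, delta, liquidity=0)     # clamped to 1

print(sidechain == mainnet)  # False: the replicated value diverges immediately
```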
Recommendation
It is recommended to replicate secondsPerLiquidityCumulativeX128 as well as tick in ObservationData (function OracleSidechain.write() arguments).
Sponsor's commentary
Dismissed: secondsPerLiquidityCumulativeX128 is not being currently used by any oracle implementation, the design dismisses this metric as it's only focused on quote.
2. Absolute value of twapThreshold won't suit all categories of tokens
Description
The twapThreshold variable sets the maximal allowed deviation between real and average prices: when the deviation is greater than twapThreshold, anyone can trigger fetching of observations. However, since the variable is set in absolute units (ticks), it won't work equally well for pools with different volatility:

  1. Stablecoin pools have very low volatility, thus twapThreshold needs to be set to a small value to detect volatility in such pools;
  2. Non-stablecoin pools have medium to high volatility, which mostly depends on market conditions; for such pools, twapThreshold needs to be set to a higher value and re-adjusted based on the market situation.

Thus, twapThreshold can be configured to suit tokens only of a certain volatility. For other pools, it will only be possible to fetch observations using the time strategy.
Recommendation
Consider using a relative threshold (a percentage) that detects relative changes in prices. While this still won't work equally for all tokens (since stablecoins are much less volatile than other tokens), it should improve volatility detection on average by putting all price changes on the same scale.
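Since each tick is a price factor of 1.0001, a tick threshold and a percentage threshold can be converted into each other. A sketch of the conversion (helper names are illustrative):

```python
import math

def tick_delta_to_pct(delta_tick):
    # Relative price change corresponding to a tick delta: 1.0001^dt - 1
    return 1.0001 ** delta_tick - 1

def pct_to_tick_delta(pct):
    # Inverse mapping: number of ticks for a given relative price change
    return math.log(1 + pct) / math.log(1.0001)

# A 100-tick threshold corresponds to roughly a 1% price move:
print(round(tick_delta_to_pct(100) * 100, 2))  # 1.0 (percent)
print(round(pct_to_tick_delta(0.01)))          # 100 (ticks)
```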
Sponsor's commentary
Acknowledged: TODO #78, add twapThreshold[pool] at whitelisting (add setters).
3. Centralization issues
Description
All contracts have only one owner. If the owner's address is compromised, an attacker may change the route of the data, which would lead to fake prices in the sidechain oracle.
Recommendation
  1. Use multisig or DAO for the onlyGovernor operations.
  2. Consider adding the ability to revoke the governor or decreasing the number of governor-controlled variables.
Sponsor's commentary
Ok.
4. Fetching observations may revert for some pools with low cardinality and frequent observations
Description

The observe function of Uniswap V3 pools will revert when trying to fetch an observation that's older than the oldest observation of a pool (Oracle.sol#L226). However, the calculateSecondsAgos function doesn't check that the oldest time point in the secondsAgos array is not older than the oldest observation of a pool. This can cause a revert in two situations:

  1. When _timeSinceLastObservation is bigger than block.timestamp - pool's oldest observation timestamp (DataFeedStrategy.sol#L97). The first element in the secondsAgos array will point to a timestamp that's older than the oldest observation in the pool.
  2. When filling the secondsAgos array for a new pool: if block.timestamp - periodDuration turns out to be older than the oldest observation of the pool (DataFeedStrategy.sol#L169-L171).

When this happens, a keeper or project owners will have to either:
  1. wait until the pool's observations become old enough, which is not guaranteed since observations are only updated by swaps,
  2. or increase the cardinality of the pool, which will cost some transaction fees since increasing cardinality allocates storage slots.
Recommendation
When filling the secondsAgos array, consider checking the pool's oldest observation (see getOldestObservationSecondsAgo as an example implementation) and using its timestamp as the oldest time point in the array.
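A clamp of the requested time points against the pool's oldest observation could look like this (hypothetical helper; oldest_observation_seconds_ago would come from a getOldestObservationSecondsAgo-style lookup):

```python
def clamp_seconds_agos(seconds_agos, oldest_observation_seconds_ago):
    # Never request a time point older than the pool's oldest observation,
    # otherwise the pool's observe() call reverts.
    return [min(sa, oldest_observation_seconds_ago) for sa in seconds_agos]

# The pool only has ~20 minutes of history, so older points get clamped:
print(clamp_seconds_agos([3600, 1800, 600], 1200))  # [1200, 1200, 600]
```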
Sponsor's commentary
Acknowledged: TODO #79, add OLD reason for strategic data fetching.
5. Force fetch issues
Description
There is a forceFetchObservations function - https://github.com/defi-wonderland/sidechain-oracles/blob/77efc9aa8709382d70fd80c1f5fecf50cab9e841/solidity/contracts/DataFeedStrategy.sol#L54 - which allows the governor to fetch tick data for a specific timeframe. A malicious governor can deliberately skip some timeframes so that the tick data becomes fragmented. It won't be possible to fetch tick data from those missed timeframes in the future, because a new timeframe calculation relies on the last observation's blockTimestamp.
Recommendation
In order to keep the price data fetching process consistent and permissionless, it's recommended not to allow anyone to fetch (and then send) tick data for an arbitrary timeframe.
Sponsor's commentary
Acknowledged: TODO, add OLD reason for strategic data fetching.
6. Governor of PipelineManagement can set a poisoned BridgeSenderAdapter as whitelisted and manipulate the data
Description
The governor of PipelineManagement can set a poisoned BridgeSenderAdapter as whitelisted and manipulate the data.
Recommendation
It's recommended to secure the governor account properly, for example, by using MultiSigWallet.
Sponsor's commentary
Ok.
7. Large observation data out of gas error
Description
If the strategicFetchObservations function is not called for a long time, the next call can revert with an 'out of gas' error or run out of bounds of the Uniswap oracle cache. This can happen because there is a loop whose length depends on how long ago the last observation was made. After that, only forceFetchObservations can fix the situation.

https://github.com/defi-wonderland/sidechain-oracles/blob/da7cf7d15fca848828f3a2c6e0e8c55e0dd76841/solidity/contracts/DataFeedStrategy.sol#L47
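One way to bound the loop is a hard cap on the number of fetched periods (sketch; the constant and helper name are illustrative):

```python
MAX_PERIODS = 30  # illustrative cap; tune per target chain gas limits

def bounded_period_count(time_since_last, period_duration):
    # ceil(time_since_last / period_duration), capped so that calldata
    # and gas stay bounded no matter how long the strategy was idle
    wanted = -(-time_since_last // period_duration)
    return min(wanted, MAX_PERIODS)

print(bounded_period_count(10 * 86400, 3600))  # 30 (capped, instead of 240)
print(bounded_period_count(7200, 3600))        # 2
```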
Recommendation
It's recommended to limit the number of observations and also take into account the oldest observation point of the Uniswap V3 pool.
Sponsor's commentary
TODO #77: refactor _secondsAgo to avoid large calldatas.
8. Out of sync observation data
Description
The blockTimestamp field of observations is set to the timestamp of a previous observation, not the current one. In this loop:

for (; _i < _secondsAgosLength; ) {
  // Twap is calculated using the last recorded tickCumulative and time
  _tickCumulativesDelta = _tickCumulatives[_i] - _tickCumulative;
  _delta = _secondsAgo - _secondsAgos[_i];
  _arithmeticMeanTick = int24(_tickCumulativesDelta / int32(_delta));

  // Always round to negative infinity
  if (_tickCumulativesDelta < 0 && (_tickCumulativesDelta % int32(_delta) != 0)) --_arithmeticMeanTick;

  // Stores blockTimestamp and tick in observations array
  _observationsData[_observationsDataIndex++] = IOracleSidechain.ObservationData({
    blockTimestamp: _secondsNow - _secondsAgo,
    tick: _arithmeticMeanTick
  });

  // Updates state for next iteration calculation
  _secondsAgo = _secondsAgos[_i];
  _tickCumulative = _tickCumulatives[_i];

  unchecked {
    ++_i;
  }
}
  1. _arithmeticMeanTick is calculated on the difference between the current cumulative tick (_tickCumulatives[_i]) and the previous one ( _tickCumulative);
  2. _delta is the time difference between the current observation (_secondsAgos[_i]) and the previous one (_secondsAgo);
  3. However, blockTimestamp of the current observation is set to the timestamp of the previous one (_secondsNow - _secondsAgo).

This causes:
  1. missing observations in OracleSidechain because some new observations will be saved at timestamps of some old ones;
  2. overwriting of observations in OracleSidechain because existing observations will get overwritten with new observations that have the same timestamp.
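The shift can be reproduced with plain arithmetic (simplified; the numbers are illustrative):

```python
seconds_now = 1000
seconds_agos = [30, 20, 10]  # time points requested from the pool

prev = 40  # _secondsAgo value before entering the loop (the oldest point)
original, corrected = [], []
for sa in seconds_agos:
    original.append(seconds_now - prev)  # _secondsNow - _secondsAgo (previous!)
    corrected.append(seconds_now - sa)   # _secondsNow - _secondsAgos[_i]
    prev = sa

print(original)   # [960, 970, 980] - each tick stored at the *previous* time
print(corrected)  # [970, 980, 990] - timestamps of the current time points
```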
Recommendation
Consider this change:

--- a/solidity/contracts/DataFeed.sol
+++ b/solidity/contracts/DataFeed.sol
@@ -91,8 +91,8 @@ contract DataFeed is IDataFeed, PipelineManagement {
         if (_tickCumulativesDelta < 0 && (_tickCumulativesDelta % int32(_delta) != 0)) --_arithmeticMeanTick;

         // Stores blockTimestamp and tick in observations array
-        _observationsData[_observationsDataIndex++] = IOracleSidechain.ObservationData({
-          blockTimestamp: _secondsNow - _secondsAgo,
+        _observationsData[_observationsDataIndex++] = IOracleSidechain.ObservationData({
+          blockTimestamp: _secondsNow - _secondsAgos[_i],
           tick: _arithmeticMeanTick
         });
Sponsor's commentary
Dismissed: the data is inserted on the previous timestamp, as we only know that from timestamp a->b the twap was x. The way UniV3 processes observations, it means that tick(a) = x. What happens after b is extrapolation.
9. Old slot0.tick
Description
Link: https://github.com/defi-wonderland/sidechain-oracles/blob/da7cf7d15fca848828f3a2c6e0e8c55e0dd76841/solidity/contracts/OracleSidechain.sol#L128

At the time of writing a tick to OracleSidechain, the OLD tick value (from slot0) is stored with the time of the NEW tick (from _observationData.blockTimestamp). Due to this, the time of the tick and its value are out of sync.
Recommendation
It is recommended to replace slot0.tick with _observationData.tick.
Sponsor's commentary
Dismissed, that's how the design works.
10. sqrtPriceX96 cannot be recovered from a tick without losing precision
Description
The OracleSidechain contract restores the current √P from a tick, but in the majority of cases this cannot be done without a rounding error, since ticks are integers and √P is a Q64.96 number. In the worst case, the error can be very close to 1 basis point (0.01%). For example, these two √P values are mapped to the same tick:

TickMath.getTickAtSqrtRatio(83290069058676223003182343270) == 1000
TickMath.getTickAtSqrtRatio(83294233458021775794974331556) == 1000
This can affect third-party projects that integrate with an OracleSidechain contract and expect to read accurate current prices from the slot0.sqrtPriceX96 variable.
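The magnitude of this quantization error can be checked numerically. The sketch below (Python, floating-point approximation of TickMath's behaviour) uses the continuous-tick formula tick = log₁.₀₀₀₁((sqrtP / 2⁹⁶)²): both example values fall inside the interval [1000, 1001) that TickMath floors to tick 1000, and the price gap between them is about one basis point.

```python
import math

# The two Q64.96 sqrt-prices from the example above, both of which
# TickMath.getTickAtSqrtRatio maps to tick 1000.
a = 83290069058676223003182343270
b = 83294233458021775794974331556
Q96 = 2 ** 96

def continuous_tick(sqrt_price_x96):
    # Unfloored tick: log base 1.0001 of the price (sqrtP / 2^96)^2.
    # TickMath floors this, so every sqrtP in the tick-1000 interval
    # collapses to the same integer tick.
    return 2 * math.log(sqrt_price_x96 / Q96) / math.log(1.0001)

print(continuous_tick(a))      # ~1000: the lower edge of tick 1000
print(continuous_tick(b))      # just under 1001: the upper edge of tick 1000

# Relative price error between the two sqrt-prices sharing tick 1000:
price_error = (b * b - a * a) / (a * a)
print(f"{price_error:.6f}")    # prints "0.000100", i.e. ~1 basis point
```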
Recommendation
Consider renaming the sqrtPriceX96 key of Slot0 to reflect that it is not an accurate price (so that it is not confused with the one from Uniswap pools). Alternatively, consider removing the variable, since the users of OracleSidechain contracts can do the conversion on their side when needed. Another argument against keeping the variable is that there will always be a delay between the state of Uniswap pools on the mainnet and the state of OracleSidechain, so sqrtPriceX96 will never show the actual current price.
Sponsor's commentary
Acknowledged: TODO #75, add comments on docs explaining this behaviour.
11. Calling DataFeed.sendObservations directly breaks the StrategyJob contract
Description
The StrategyJob._workable function, which checks whether recently fetched observations can be sent, relies on the lastPoolNonceBridged inner parameter. This parameter is updated after every successful broadcast of the fetched data. However, it is possible to execute DataFeed.sendObservations directly, which updates the nonce of the oracles on other networks but not in StrategyJob. This can block all future calls to StrategyJob.

https://github.com/defi-wonderland/sidechain-oracles/blob/7a3a7b74479714158fdbb20534911bcfbcb2585d/solidity/contracts/StrategyJob.sol#L86
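The bookkeeping mismatch can be sketched with a toy Python model (class and method names are illustrative, not the project's actual API): a direct sendObservations call advances the oracle-side nonce, while StrategyJob's local lastPoolNonceBridged stays behind, so its workability check rejects the next fetched nonce.

```python
# Toy model of the nonce bookkeeping described in the finding.

class Oracle:
    def __init__(self):
        self.pool_nonce = 0

    def receive(self, nonce):
        # Receiving chain accepts only the next expected nonce.
        if nonce != self.pool_nonce + 1:
            raise ValueError("invalid nonce")
        self.pool_nonce = nonce

class DataFeed:
    def __init__(self, oracle):
        self.oracle = oracle
        self.pool_nonce = 0

    def fetch(self):
        self.pool_nonce += 1
        return self.pool_nonce

    def send_observations(self, nonce):
        self.oracle.receive(nonce)

class StrategyJob:
    def __init__(self, data_feed):
        self.data_feed = data_feed
        self.last_pool_nonce_bridged = 0

    def work(self, nonce):
        # _workable-style check: only bridge the nonce that directly
        # follows the last one this contract itself bridged.
        if nonce != self.last_pool_nonce_bridged + 1:
            raise ValueError("not workable")
        self.data_feed.send_observations(nonce)
        self.last_pool_nonce_bridged = nonce

oracle = Oracle()
feed = DataFeed(oracle)
job = StrategyJob(feed)

n1 = feed.fetch()
feed.send_observations(n1)   # direct call, bypassing StrategyJob

n2 = feed.fetch()
try:
    job.work(n2)             # job still believes nonce 1 was never bridged
except ValueError as e:
    print(e)                 # prints "not workable": local bookkeeping is stale
```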
Recommendation
It is recommended to restrict direct access to sendObservations().
Sponsor's commentary
Implausible: calls to StrategyJob are not blocked, since they would revert only on the receiving chain.
12. PipelineManagement.whitelistedPools is vulnerable to DoS
Description
The whitelistedPools view function returns the entire set of whitelisted pools in a single call. As the set grows, the cost of enumerating it grows without bound, so the call may eventually exceed gas limits and become unusable.
Recommendation
It is recommended to add parameters to this function (e.g. a start index and a maximum count) that bound the size of the returned result.
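As an illustration of the recommendation, here is a minimal Python sketch of a bounded getter (the function name and parameters are hypothetical, not the project's actual API): the caller, not the data size, controls how much is returned per call.

```python
# Illustrative sketch: a paginated view over a potentially unbounded
# whitelist, instead of returning the whole set in one call.

whitelisted_pools = [f"pool-{i}" for i in range(10_000)]

def get_whitelisted_pools(start_index, max_results):
    """Return at most `max_results` pools starting at `start_index`,
    so the caller bounds the size (and cost) of each response."""
    return whitelisted_pools[start_index:start_index + max_results]

page = get_whitelisted_pools(9_995, 100)
print(len(page), page[0], page[-1])  # prints "5 pool-9995 pool-9999"
```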
Sponsor's commentary
Acknowledged, the function is permissioned.
13. Volatility detection may fail to detect short volatility
Description

The volatility detection mechanism calculates and compares two average prices over the twapLength period. Since both are averages of the same asset's price, the deviation between them may stay low unless the volatility is both large and long-lasting. Because of this averaging, neither price may capture shorter bursts of volatility, which can still be critical to fetch and send to other chains. As a result, users of OracleSidechain contracts on other chains will see delayed prices precisely when there is a rapid spike in volatility.
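A quick numeric check (Python, with illustrative numbers) shows how much a short spike is diluted by window averaging: a 30-second, 5% spike inside a 600-second window moves the window average by only 0.25%, which can easily stay below a deviation threshold.

```python
# Toy numeric check: a short price spike barely moves a window average,
# so comparing two averaged prices can miss it entirely.

twap_length = 600            # seconds in the averaging window (illustrative)
base_price = 1000.0

# Per-second prices: flat, except a 30-second spike of +5%.
prices = [base_price] * twap_length
for t in range(300, 330):
    prices[t] = base_price * 1.05

spot_deviation = max(prices) / base_price - 1
twap = sum(prices) / len(prices)
twap_deviation = twap / base_price - 1

print(f"spot deviation: {spot_deviation:.4f}")   # prints "spot deviation: 0.0500"
print(f"TWAP deviation: {twap_deviation:.4f}")   # prints "TWAP deviation: 0.0025"
```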
Recommendation
Instead of comparing two average prices, consider comparing the current mainnet price against the averaged and inferred price on a sidechain. This will detect sudden spikes in volatility on the mainnet and allow them to be quickly sent to other chains.
Alternatively, consider calculating _oracleArithmeticMeanTick over a longer period that can be significantly bigger than twapLength (probably block.timestamp - _lastPoolStateObserved.blockTimestamp). This will smooth out spikes in volatility but will still allow more deviation between the prices, since they will be calculated over different time ranges.
Sponsor's commentary
Acknowledged.
14. Wrong stats on start
Description
In https://github.com/defi-wonderland/sidechain-oracles/blob/da7cf7d15fca848828f3a2c6e0e8c55e0dd76841/solidity/contracts/OracleSidechain.sol#L124, the old slot0.tick value is written first and only updated afterwards. However, at initialization of OracleSidechain, slot0.tick equals zero. If someone requests data from this time period, they will get badly distorted data, because the first written tick will affect the return value.
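The size of the distortion can be sketched numerically (Python toy model, illustrative numbers, assuming a piecewise-constant TWAP over the stored observations): a single zero-tick first observation can halve a TWAP computed over the earliest period.

```python
# Toy model: if the first write records the uninitialized slot0.tick (0)
# instead of realistic data, a TWAP over the earliest period is skewed.

def twap(observations, t_start, t_end):
    # Piecewise-constant TWAP: each observation's tick applies until
    # the next observation's timestamp.
    acc = 0
    for (t0, tick0), (t1, _) in zip(observations, observations[1:]):
        lo, hi = max(t0, t_start), min(t1, t_end)
        if hi > lo:
            acc += tick0 * (hi - lo)
    return acc / (t_end - t_start)

true_tick = 200_000  # realistic tick magnitude (illustrative)

# First stored observation carries tick 0 (the uninitialized slot0.tick).
obs_buggy = [(1_000, 0), (1_060, true_tick), (1_120, true_tick)]
# If slot0 were initialized before the first write, it would carry real data.
obs_init = [(1_000, true_tick), (1_060, true_tick), (1_120, true_tick)]

print(twap(obs_buggy, 1_000, 1_120))  # prints "100000.0": half the real value
print(twap(obs_init, 1_000, 1_120))   # prints "200000.0"
```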
Recommendation
It is recommended to initialize slot0 at the first _write call, before writing its tick.
Sponsor's commentary
Acknowledged: The deployment of a new oracle is linked to the insertion of new data; there is no time period in which the oracle would be without data. If someone requests data prior to the first datapoint, they will receive the OLD! error.
3. About MixBytes Camp
MixBytes Camp is a platform that connects blockchain projects and independent smart contracts auditors and security researchers.