Per-residual robust norm with ceres AutoDiff (hacky)
///////////////////////////////////////////////////////////////////////////////
//
// This is example code showing how one can implement per-residual robust
// norms in ceres while still grouping residuals into logical units of
// size > 1. Residual blocks of size > 1 may make sense to get them treated
// as one unit in the Schur complement, and their residuals may share a
// substantial amount of computation. This is for example true for a
// photometric reprojection residual where all pixels in one patch are
// projected with the same depth parameter.
//
// In general, one may also use the EvaluationCallback feature to share
// computation between residual blocks, but this is also tricky when working
// with AutoDiff (though it can probably be done).
//
// Note that this is pretty hacky, and while it may work for optimization, it
// may also fail in unexpected ways. E.g. for Problem::Evaluate, I assume
// disabling the loss function (EvaluateOptions::apply_loss_function == false)
// will not work. So use with care (or not at all). See the discussion:
// https://groups.google.com/d/topic/ceres-solver/iYseuqLWrd8/discussion
//
///////////////////////////////////////////////////////////////////////////////

#include <algorithm>  // std::max
#include <cmath>      // std::abs

#include <ceres/ceres.h>
#include <Eigen/Core>
#include <glog/logging.h>  // CHECK_NEAR, CHECK_GE
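///////////////////////////////////////////////////////////////////////////////
// Background (added note): ceres normally applies a LossFunction rho to the
// squared norm of an *entire* residual block, i.e. the block contributes
// 0.5 * rho(||r||^2) to the total cost. The trick below instead applies a
// Huber weight to each residual component individually inside the cost
// functor, and uses the LossFunction interface only to report the
// already-corrected total cost back to ceres.
///////////////////////////////////////////////////////////////////////////////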
///////////////////////////////////////////////////////////////////////////////
// Helpers to access the value part of ceres::Jet types, but beware:
//
// "only bad things can happen when you try and look underneath the template"
// -- Sameer Agarwal
///////////////////////////////////////////////////////////////////////////////
inline double drop_derivative(const double c) { return c; }

template <typename T, int N>
inline double drop_derivative(const ceres::Jet<T, N>& c) {
  return c.a;
}

template <class Derived>
inline auto drop_derivative(const Eigen::ArrayBase<Derived>& arr) {
  return arr.unaryExpr([](const auto& x) { return drop_derivative(x); });
}
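// Usage sketch (added illustration, not part of the original gist):
//
//   ceres::Jet<double, 2> j(3.0, 0);  // value 3.0, unit derivative in dim 0
//   double v = drop_derivative(j);    // v == 3.0, derivative part discarded
//
// Because the extracted values are plain doubles, the weights computed from
// them below carry no derivatives, i.e. AutoDiff treats them as constants.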
///////////////////////////////////////////////////////////////////////////////
// Cost functor that computes re-weighted residuals and also stores the actual
// cost. It additionally acts as a LossFunction to return the corrected cost.
///////////////////////////////////////////////////////////////////////////////
struct MyCostFunctor : ceres::LossFunction {
  template <class T>
  bool operator()(..., T* sresiduals) const {
    Eigen::Map<Eigen::Array<T, RESIDUAL_SIZE, 1>> residuals(sresiduals);

    // regular residual computation
    residuals = ...;

    // compute per-residual Huber weights
    Eigen::Array<double, RESIDUAL_SIZE, 1> weight_res;
    for (Eigen::Index i = 0; i < RESIDUAL_SIZE; ++i) {
      const double r = std::abs(drop_derivative(residuals(i)));
      if (r < HUBER_PARAMETER) {
        weight_res(i) = 1;
      } else {
        weight_res(i) = HUBER_PARAMETER / r;
      }
    }

    // compute the weight needed to correct the cost (specific to Huber)
    Eigen::Array<double, RESIDUAL_SIZE, 1> weight_cost =
        (2 - weight_res) * weight_res;
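    // Derivation (added note): for |r| > k with w = k/|r| (k = HUBER_PARAMETER),
    // the per-residual Huber cost is 2*k*|r| - k^2, and indeed
    //   (2 - w) * w * r^2 = (2*k/|r| - k^2/r^2) * r^2 = 2*k*|r| - k^2.
    // For |r| <= k we have w = 1, so (2 - w) * w * r^2 = r^2, the plain
    // squared cost.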
    // compute and remember the corrected cost; ceres will multiply it by 0.5
    cost_robustified_ =
        (weight_cost * drop_derivative(residuals).square()).sum();

    // actually re-weight the residuals (this also re-weights the Jacobians,
    // since residuals is an array of Jets); multiply by the square root of
    // the weight, since ceres will square the residuals
    weight_res = weight_res.sqrt();
    residuals *= weight_res;

    // cost_squared_norm_ is only needed for the sanity checks in Evaluate
    cost_squared_norm_ = drop_derivative(residuals).matrix().squaredNorm();

    return true;
  }
  // Hacky loss function interface
  void Evaluate(double s, double rho[3]) const override {
    // The first two checks are just sanity checks to validate that this hacky
    // workaround works as expected. The checks and cost_squared_norm_ could
    // be removed.

    // compare s to the saved squared norm to verify this is not invoked in
    // parallel with a race condition
    CHECK_NEAR(s, cost_squared_norm_, std::max(1e-10, s * 1e-10));

    // verify that the cost has been set
    CHECK_GE(cost_robustified_, 0.0);

    // ceres uses 0.5 * rho[0] as the cost
    rho[0] = cost_robustified_;

    // we apply the scaling manually in the residuals
    rho[1] = 1;

    // forces ceres to apply only the first-order correction (no second-order
    // term):
    rho[2] = 0;
  }
 private:
  // hacky way to pass the custom loss function computation from operator()
  // to Evaluate
  mutable double cost_squared_norm_ = -1;
  mutable double cost_robustified_ = -1;
};
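// Note (added): cost_squared_norm_ and cost_robustified_ are per-object
// state, so each residual block needs its own MyCostFunctor instance, and
// that same instance must serve as both cost functor and loss function for
// that one block only.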
///////////////////////////////////////////////////////////////////////////////
// How to set up the residual block: Use the functor as loss function.
///////////////////////////////////////////////////////////////////////////////
// Don't take ownership of the loss function, since it is the same object
// as the residual functor (owned by the cost function) and would otherwise
// get destroyed twice.
ceres::Problem::Options ceres_opt;
ceres_opt.loss_function_ownership = ceres::DO_NOT_TAKE_OWNERSHIP;
ceres::Problem problem(ceres_opt);

auto* functor = new MyCostFunctor();
auto* cost_function = new ceres::AutoDiffCostFunction<
    MyCostFunctor, RESIDUAL_SIZE, ...>(functor);

// pass functor as loss_function
problem.AddResidualBlock(cost_function, functor, ...);
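// Hypothetical instantiation (added illustration; the sizes and the single
// 3-dimensional parameter block are made up and not part of the original
// gist), shown commented out since the snippet above elides details with ...:
//
//   constexpr int RESIDUAL_SIZE = 4;         // e.g. a 2x2 pixel patch
//   constexpr double HUBER_PARAMETER = 1.0;  // Huber threshold k
//
//   double params[3] = {0.0, 0.0, 0.0};      // hypothetical parameter block
//   auto* functor = new MyCostFunctor();
//   auto* cost_function = new ceres::AutoDiffCostFunction<
//       MyCostFunctor, RESIDUAL_SIZE, 3>(functor);
//   problem.AddResidualBlock(cost_function, functor, params);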