@abhichou4
Created August 29, 2020 12:47
Google Summer of Code 2020: Final Report


Organisation: TensorFlow

Mentors:

Project

Current native support for forward-mode differentiation calls backward on a function twice. Executing this as a tf.function prevents retracing, and the extra backward pass gets pruned. Now it's a matter of optimizing the current API and improving the user experience. Integrating tf.vectorized_map to facilitate batching of tangents serves both goals. After this, Hessian matrices can be computed efficiently using both forward- and backward-mode differentiation, and workarounds like calculating gradients/Jacobians inside a GradientTape context can be avoided.
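
To make the forward-over-backward pattern concrete, here is a minimal sketch of a Hessian-vector product using the public tf.autodiff.ForwardAccumulator API; the function and values are made up for illustration and are not from the project itself:

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0])
v = tf.constant([1.0, 0.0])  # direction for the Hessian-vector product

# Forward-over-backward: a ForwardAccumulator (forward mode) wrapped
# around a GradientTape (backward mode).
with tf.autodiff.ForwardAccumulator(primals=x, tangents=v) as acc:
  with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.reduce_sum(tf.sin(x) ** 2)
  grad = tape.gradient(y, x)  # backward mode: dy/dx

hvp = acc.jvp(grad)  # forward mode through the gradient: H @ v
```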

Work Done

Challenges

  • Since any function inside the context of a ForwardAccumulator runs as a tf.function, tangents in multiple nested accumulators are passed through forward-function wrapping in function.py. Currently it passes individual tangents and needs to be changed to accommodate batched tangents (see the sketch after this list).
  • Some existing tests fail because of this.
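
For context, this is a sketch of the kind of batched-tangent usage the tf.vectorized_map integration aims to enable: recovering a full Jacobian by pushing several basis tangents through forward mode at once. It is an illustrative pattern, not the project's code, and whether it runs as-is depends on the batching support described above:

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])

def jvp(tangent):
  # One forward-mode pass per tangent; vectorized_map batches them.
  with tf.autodiff.ForwardAccumulator(primals=x, tangents=tangent) as acc:
    y = tf.sin(x) ** 2
  return acc.jvp(y)

# Basis tangents (rows of the identity) recover the full Jacobian.
jacobian = tf.vectorized_map(jvp, tf.eye(3))
```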

Work Left

  • Solving the placeholder issue mentioned above by extending support to tf.function.
  • Test _batch_accumulator extensively and include it in the public API.
  • Extend batching support to custom gradients (see the sketch below).
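
For reference, a minimal sketch of what the custom-gradients item involves, using log1pexp, the standard example from the tf.custom_gradient docs. Forward mode differentiates such a function via the double-backward trick, one tangent at a time, which is why batched tangents need explicit support here:

```python
import tensorflow as tf

@tf.custom_gradient
def log1pexp(x):
  # Numerically stable log(1 + exp(x)) with a hand-written gradient.
  e = tf.exp(x)
  def grad(upstream):
    return upstream * (1.0 - 1.0 / (1.0 + e))
  return tf.math.log(1.0 + e), grad

x = tf.constant(2.0)
with tf.autodiff.ForwardAccumulator(
    primals=x, tangents=tf.constant(1.0)) as acc:
  y = log1pexp(x)
jvp = acc.jvp(y)  # forward mode through the custom gradient
```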