TensorFlow Modeling in Swift and Compilers for Machine Learning

Google is an “AI first” company, and Google Brain is making major investments to build out infrastructure for existing accelerators such as Cloud TPUs and GPUs, as well as for the coming wave of “edge” accelerators for mobile, automotive, AIY, and other low-power applications. Our infrastructure is pervasively open source and covers every level of the stack - from compilers and programming languages, to high-performance runtimes, to production ML model development and support for multiple ML frameworks - and includes research on large-batch training and other novel applications of these accelerators.

We are looking to hire for a number of positions in the areas listed below. We love working with people who combine a passion for learning and growth with product focus and practical experience. We welcome applicants from traditionally-underrepresented groups in the field of computer science.

If you are interested in one of these roles, please apply to the corresponding job post and mention this ad. You can also contact Shanna Zhu (our recruiter) and/or email the mlir-hiring address, which (despite the name) is a general way to ask about any of these positions; it goes to the hiring managers for them.

TensorFlow Modeling in Swift

We’re seeking a talented Swift developer to join the TensorFlow Swift Modeling team. Our team is responsible for building and improving state-of-the-art reference models for TensorFlow. To do this, we stay on top of state-of-the-art machine learning algorithms and model architectures. We focus on large-scale training, performance optimization, and end-to-end solutions.

We are now looking for exceptional software engineers to build models in Swift for TensorFlow, an initiative that combines the performance of graphs with the flexibility and expressivity of eager execution. We plan to deploy all models to TPUs, our custom accelerator chips for machine learning, and to dramatically improve how machine learning research and products are built.
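
For a concrete sense of what this work looks like, below is a minimal, illustrative sketch of a model and one training step in Swift for TensorFlow, loosely following the public tutorials. The model, data, and hyperparameters are placeholders, and exact API names (e.g. callAsFunction, optimizer.update) have varied across toolchain versions.

    import TensorFlow

    // A small classifier written as ordinary Swift code. `Layer` and `Dense`
    // come from the Swift for TensorFlow deep learning library; the layer
    // sizes here are arbitrary and purely illustrative.
    struct Classifier: Layer {
        var hidden = Dense<Float>(inputSize: 4, outputSize: 10, activation: relu)
        var output = Dense<Float>(inputSize: 10, outputSize: 3)

        @differentiable
        func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
            return input.sequenced(through: hidden, output)
        }
    }

    var model = Classifier()
    let optimizer = SGD(for: model, learningRate: 0.01)

    // Placeholder batch of inputs and labels.
    let x = Tensor<Float>(randomNormal: [32, 4])
    let y = Tensor<Int32>(zeros: [32])

    // One training step: the compiler differentiates the Swift code directly,
    // so the model reads like eager code while remaining compilable to graphs.
    let (loss, grads) = valueWithGradient(at: model) { model -> Tensor<Float> in
        softmaxCrossEntropy(logits: model(x), labels: y)
    }
    optimizer.update(&model, along: grads)
    print("loss: \(loss)")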

You are expected to:

  • Understand models for tasks such as translation, recommendation, image recognition, and object detection, and implement them using Swift for TensorFlow.
  • Analyze the performance limitations of the models in training and inference.
  • Propose and work on solutions to improve the quality and performance of the models.

Minimum Qualifications:

  • Experience coding in the C/C++ and/or Swift programming languages.
  • Machine learning experience.

Useful Qualifications:

  • Experience writing Swift code, and familiarity with its API design guidelines.
  • Experience working with TensorFlow and machine learning models in Python.
  • Experience in applying machine learning models to production.

If interested, please apply here, and mention this post.

Compilers for Machine Learning Accelerators: MLIR and XLA

The MLIR and XLA projects are joint efforts building compiler and runtime infrastructure to support a very wide range of high-performance accelerators that underlie TensorFlow and other frameworks like it. MLIR is driven by the increased generality of accelerator hardware and programming models, as well as the need to enable rapid bring-up of new devices - sometimes with wildly different capabilities and target markets. This project heavily leverages LLVM technology, and we anticipate driving significant efforts in the broader LLVM community.

We are looking for outstanding engineers with experience in one or more of the following areas:

  • Mid-level compiler analyses, transformations and optimizations.
  • Code generation for GPUs or domain-specific high-performance accelerators.
  • Code generation for CPUs and traditional architectures.
  • Polyhedral compiler optimizations.
  • Parallelizing compilers and runtimes.
  • Low-level accelerator runtime implementation and optimization.

Preferred Qualifications:

  • Experience working with LLVM-family compiler technology, or ML compiler frameworks like XLA, Glow, TVM, etc.

If interested, please apply here, and mention this post.
