@fbukevin
Created March 11, 2019 18:10
Job Post: TensorFlow TPU, Swift for TensorFlow, and ML Compiler Infrastructure Teams


Google is an “AI first” company, and Google Brain is making major investments to build out infrastructure for existing accelerators such as Cloud TPUs and GPUs, as well as the coming wave of “edge” accelerators for mobile, automotive, AIY, and other low-power applications. Our infrastructure is pervasively open-source and spans every level of the stack, from compilers and programming languages to high-performance runtimes, production ML model development, and support for multiple ML frameworks, and includes research on large-batch training and other novel applications of these accelerators.

We are looking to hire for a significant number of positions in the areas listed below, including a number of technical leadership and management roles, up to the Director / Principal Engineer level. We love working with people who combine a passion for learning and growth with product focus and practical experience. We welcome applicants from groups traditionally underrepresented in the field of computer science.

If you are interested in one of these roles, please apply to the corresponding job post and mention this ad. Also feel free to get in touch with our recruiters Kristen Hofstetter and Ali Deloumi directly with any questions.

TensorFlow TPU Team

Google Tensor Processing Units (TPUs) are high-performance ML training and inference accelerators, the third generation of which can deliver more than 100 petaflops in a single “TPU Pod” system. TPUs are used pervasively by Google product teams and are available to the public as Cloud TPUs. Our team builds and improves TensorFlow infrastructure and Python APIs, and develops production ML models for TPUs. We drive the commercial success of Cloud TPUs through high-end customer engagements, support key internal teams, and collaborate with scientists across the greater Google AI team to bring their innovations to market.

We are looking for outstanding engineers with experience in one or more of the following areas:

  • Applied machine learning model development
  • TensorFlow C++ infrastructure and device support
  • Development of TensorFlow Python APIs like Keras
  • Large-scale training through large batch training and model parallelism
  • Data processing, data augmentation, and input performance optimization
  • High performance "close to the metal" runtime implementation and optimization
  • Infrastructure qualification, validation, and release engineering
  • Technical writing experience

If interested, please apply here, and mention this post.

Compiler Infrastructure for Machine Learning Accelerators

We are driving a (stealth mode) project to build key compiler and runtime infrastructure that supports a very wide range of high-performance accelerators that underlie TensorFlow and frameworks like it. This project is driven by the increased generality of accelerator hardware and programming models, as well as the need to enable rapid bringup of new devices - sometimes with wildly different capabilities and target markets. We expect to announce and open-source this large-scale project when the time is right in early 2019. This project heavily leverages LLVM technology, and we anticipate driving significant efforts in the broader LLVM community.

We are looking for outstanding engineers with experience in one or more of the following areas:

  • Mid-level compiler analyses, transformations and optimizations.
  • Code generation for GPUs or domain-specific high performance accelerators.
  • Code generation for CPUs and traditional architectures.
  • Polyhedral compiler optimizations.
  • Parallelizing compilers and runtimes.
  • Low level accelerator runtime implementation and optimization experience.

Preferred Qualifications:

  • Experience working with LLVM-family compiler technology, or ML compiler frameworks like XLA, Glow, TVM, etc.

If interested, please apply here, and mention this post.
