Laying the cultural and technical foundation for Big Rails
As applications built on Rails get larger and larger, and more and more engineers work in the same monolith, our community needs to think more about what sort of tooling and architectural changes will help us continue to scale. This talk shares ideas around a toolchain, and more importantly, the social and cultural programs needed to support that toolchain, that can be used to help engineers in an ever-growing Rails codebase continue to have high velocity, manage their complexity, and claim ownership over their own business subdomains.
ActiveRecord provides a great deal of flexibility and speed of implementation for developers making new apps. As our teams and codebase grow and our services need to continue to scale, some of the patterns we use can start to get in our way. We've seen a bit of that at GitHub, and as a result have been experimenting with some new ways to work with ActiveRecord queries, reduce N+1s, and isolate model details.
In this talk, I'll go over some of the problems we've been facing, cover how we've been addressing them so far, and show some new experiments & patterns I've been working through.
John Crepezzi, GitHub
NOTES
Rails introduced strict loading mode (`strict_loading`) to help avoid lazy loading: rails/rails#37400
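The note above refers to Active Record's strict loading mode. A minimal sketch of how it behaves, assuming a hypothetical `User` model with a `posts` association (this requires a Rails app with a database, so it's illustrative only):

```ruby
# Opt a query into strict loading; lazily loading an association off the
# result then raises instead of silently firing an N+1 query.
user = User.strict_loading.first
user.posts.to_a
# raises ActiveRecord::StrictLoadingViolationError

# Eager loading satisfies strict mode, since the association is
# fetched up front rather than lazily:
user = User.strict_loading.includes(:posts).first
user.posts.to_a # fine, already loaded
```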
JavaScript has async and Ruby has…what exactly does Ruby have?! What is the difference between parallel, concurrent, and asynchronous execution of programs? If you have asked yourself these questions, this talk is for you! Together, let’s dive into the world of concurrent execution of computer programs. We’ll explore the vocabulary, general concepts, and inner workings of a computer before we dive into Ruby’s specific implementations of concurrency in its various forms. Finally, we will look at performance improvement considerations and shed light on the pros and cons of concurrency.
Clara Morgeneyer, Apple
NOTES
Parallelism != Concurrency
Concurrency is using 1 resource and breaking tasks into subtasks to take advantage of time spent in I/O
Global Interpreter Lock
Only allows 1 thread to run Ruby code at a time (the lock is released while a thread is blocked on I/O)
Puppies analogy:
Parallelization - 4 puppies, 4 food bowls
Concurrency - 4 puppies, 1 food bowl. 1 puppy takes a bite, goes to the back of the line while chewing and lets the next take a bite
"Chewing" == blocking I/O
Anytime something is blocked on I/O is a good opportunity to reach for threads
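The puppies-and-food-bowl idea can be sketched in plain Ruby: when a thread is blocked (here simulated with `sleep` standing in for a network call), the GIL is released and another thread can take its "bite", so the waits overlap.

```ruby
require "benchmark"

# Simulate a blocking I/O call (e.g. an HTTP request) with sleep.
def fetch(id)
  sleep 0.2
  "result-#{id}"
end

# Sequential: each "request" blocks the single thread in turn (~0.8s).
sequential = Benchmark.realtime { 4.times { |i| fetch(i) } }

# Threaded: sleeping threads release the GIL, so the waits overlap (~0.2s).
threaded = Benchmark.realtime do
  threads = 4.times.map { |i| Thread.new { fetch(i) } }
  threads.each(&:join)
end

puts format("sequential: %.2fs, threaded: %.2fs", sequential, threaded)
```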
Careful of race conditions with threads. If multiple threads are updating the same data, things can get out of sync.
Can use a mutex to resolve this, but if the whole task runs inside the lock, it wipes out the benefits of threading by forcing each thread to finish before the next one starts
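A small sketch of the race-condition fix: several threads do a read-modify-write on shared state, and a `Mutex` serializes the critical section so no updates are lost. Note the lock only wraps the increment, not the whole task, so threads can still interleave around it.

```ruby
# Without synchronization, concurrent read-modify-write on shared state can
# interleave and lose updates (especially on JRuby/TruffleRuby, which have
# no GIL). A Mutex makes the increment atomic.
counter = 0
mutex = Mutex.new

threads = 8.times.map do
  Thread.new do
    1_000.times do
      mutex.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)

puts counter # => 8000; without the mutex, some increments could be lost
```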
The biggest difference between a mid-level engineer and a senior engineer is the scale and scope of the work they're responsible for. How do you dive into complex tasks, report progress to project leadership, and stay focused with so many unknowns?
These are the questions I've continued to ask myself as I grow in my career. In this session, we'll explore the tools that I and other senior-level individual contributors use to shape our work from project inception to delivery.