David Copeland (davetron5000)

{
  // ES5-style Angular lifecycle hook: subscribe to the route params,
  // coerce the id to a number, and load the matching hero.
  ngOnInit: function() {
    var self = this;
    this.sub = this.route.params.subscribe(function(params) {
      var id = +params['id'];
      self.heroService.getHero(id).then(function(hero) {
        self.hero = hero;
      });
    });
  },
// Pull in the runtime dependencies and the Angular ES5 UMD bundles,
// no TypeScript or module bundler required.
var coreJS = require('core-js');
var zoneJS = require('zone.js');
var reflectMetadata = require('reflect-metadata');

var ng = {};
ng.core = require("@angular/core");
ng.common = require("@angular/common");
ng.compiler = require("@angular/compiler");
ng.platformBrowser = require("@angular/platform-browser");
ng.platformBrowserDynamic = require("@angular/platform-browser-dynamic");

This is totally ripped-off from Rent The Runway's version

You, the new Stitch Fix engineer, should strive to be able to answer these questions on your own. If you don’t know the answers, we, the non-new engineers, have failed to help you.

Questions by end of your first week

  • Who should be the first person I ask help of?
  • Can I run Spectre and look at data I got from my database dump?
  • Am I up on multithreaded.stitchfix.com via a PR that I made?
  • What’s our general dev workflow? e.g. PRs, merging, CI, etc.
  • What are the high level problems my team is solving?

By default, Postgres does not enforce any timeouts on connections or locks. While this is a sensible default, if you are sharing a single database across many apps, as we are, it makes it possible for one application to take down the entire system.

What we've done instead is instrument Active Record's connection setup to enforce limits:

  • a hard connection timeout of 20 seconds (any connection lasting longer than this is killed)
  • a hard lock timeout of 19 seconds (any lock held longer than this is killed)

These limits are very high, but we had to start somewhere. They mean that if an application suddenly experiences poor performance, it cannot hold onto database connections for the duration of the outage.
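The enforcement itself lives in our Active Record connection setup, but the same idea can be sketched in plain JavaScript with node-postgres, assuming the two limits map onto Postgres's statement_timeout and lock_timeout session settings (the pool configuration here is illustrative, not our actual code):

var pg = require('pg');

// Illustrative pool; real connection settings come from the environment.
var pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

// Every time the pool establishes a fresh connection, cap how long any
// statement may run and how long it may wait on a lock.
pool.on('connect', function(client) {
  client.query("SET statement_timeout = '20s'");
  client.query("SET lock_timeout = '19s'");
});

Doing this at connection setup means every app sharing the database picks up the same limits without each query having to opt in.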

When a pull request is merged to master, the application gets deployed to production.

This motivates us to write good tests we can trust, but it also means we must be judicious about when we merge. How this happens depends on the team and the change.

For example, bugfixes are likely to go up immediately. New features can go up hidden under a feature flag, or they might need to wait for staff training or a sync with marketing.
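The flag mechanism itself doesn't need to be elaborate; as a rough sketch (the flag store, flag name, and render functions below are hypothetical, not our actual implementation), a feature flag is just a boolean checked before the new code path runs:

// Hypothetical flag store, loaded from config or a database at boot.
var featureFlags = {
  'new-checkout-flow': false   // flipped to true once training/marketing are ready
};

function isEnabled(flagName) {
  return featureFlags[flagName] === true;
}

// The new feature ships to production but stays dark until the flag flips.
if (isEnabled('new-checkout-flow')) {
  renderNewCheckout();   // hypothetical new code path
} else {
  renderOldCheckout();   // existing behavior
}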

If the code is ready well in advance of those other factors, the creator of the pull request is responsible for keeping it up to date with master.

To avoid making our business partners slog through a ticketing system, we capture each team's roadmap in a Google spreadsheet. Each team has one tab in this spreadsheet so a holistic view can be formed, though teams typically focus only on their own tab. This spreadsheet drives their weekly meetings.

The format of the team tabs varies, but they all tend to be very detailed for the near term (the next one or two months) and increasingly vague toward the end of the fiscal year. Team leads are responsible for keeping these in sync with whatever task management system their team uses (e.g. waffle.io).

As we have no formal product management (by design), each engineering team has a weekly meeting with its relevant business partners, who are the stakeholders for that part of the business. For example, the MOPS team meets with the VP of Warehouse Operations and their reports.

The format of the meeting differs by team, but generally it's a time to review the status of ongoing work, adjust priorities, and discuss any issues of the day. Teams typically employ a “Rolling Agenda,” a shared Google Doc that anyone can edit. Anyone can place an item on the agenda, and it is up to them to lead the discussion and collect next steps, if any.

Typically, the first meeting of the month is a deeper dive into the team's roadmap and a chance to set priorities for the month.

This deserves a longer post

We were having issues with using a "real" wiki, which I think is not uncommon: it was hard to navigate, changes were opaque, data went stale, and it was hard to search.

So, we adopted a different approach. We now keep all canonical developer documentation as a series of Markdown files in a repo on GitHub. This means that:

  • You can search via GitHub or grep.
  • All changes come with a Pull Request so there can be discussion and notification.
  • We can keep local copies on our computers for offline access.
  • Tweaks and typos can be fixed using GitHub's editor.

Keybase proof

I hereby claim:

  • I am davetron5000 on github.
  • I am davetron5000 (https://keybase.io/davetron5000) on keybase.
  • I have a public key whose fingerprint is FE49 0FEA 2F68 D0B9 2AC7 EBEB 7DE5 A739 E451 4828

To claim this, I am signing this object: