@foca
Last active July 27, 2016 13:38
It's Fine • A Tale of Terrible Misfortune

This is a talk I gave for the Hacker Paradise Summer '16 crew at Porto, Portugal. You can find the slides at SpeakerDeck. Photo Credits at the bottom.

I'm gonna tell you the story of a little startup. It started with an idea, as startups often do. Two passionate people saw that certain people on the internet had a problem, and they could solve it for them. For a small fee, of course.

They wanted to avoid venture capital and investors, so from the get-go they were fully bootstrapped, which meant working during their free time. But they were to be their own bosses, and finally be free and financially independent. They were excited! It was a time of joy.

They started small and lean. One of them a designer, the other a developer. They figured out the best way to get it done was to crank away, so they jumped into creating a prototype straight away.

Their small prototype flourished: it gained plenty of features and a great design. They made a great landing page to gather their future customers' emails, and they advertised it to all the people who had the problem they were solving. Lots of emails came in! Lots of future customers!

One of them was a long time Linux and UNIX user, so they figured it would be easy and cheap to set up a server somewhere for their initial launch, and grow from there.

Thus, they set up a server, tested the prototype running on it, and invited some friends to test it. After fixing a couple bugs, adding a few more features, and polishing a few things, they saw everything was fine and opened the gates to the public. And it was a success! Customers just rolled in.

I’m going to skip ahead a bit here.

They grew. It was a great time, they were able to stop their “day job” and focus solely on their product. The app scaled up (more servers! performance tuning! lots of fun stuff.) And more and more customers came in. And with them, came the sweet, sweet smell of money in the bank. It was glorious!

I did say this was a story of terrible misfortune, so don’t get too happy for them just yet.

As with everything that grows and becomes noticeable, it eventually attracted the attention of some of the more undesirable elements of the internet. And things got… interesting.

Their first DDoS wasn’t as terrible as it could’ve been. A friend of theirs pointed out that a lot of what they were serving could be cached, and that a Content Delivery Network could absorb that load for them, mitigating the DDoS.

So they did that. It took a couple of hours of configuration, but in the end the attackers got bored and moved on to attack something else. The CDN wasn’t cheap during the attack, but the product was running again. Whew.

Now, they were doing pretty good. Features kept being added. Design, copy, and usability kept being perfected. Customers were happy, and money kept rolling into the bank.

Bugs happened, customer support issues were submitted. They had a few other mild DDoS attacks, but nothing they couldn’t mitigate. Just “normal life” for a successful product.

Eventually they reached a point where it was too much work for just two people. They spent a lot of time switching between building the product, dealing with bugs, and doing customer support.

They needed to offload some of the work. First off was customer support. Next, they needed to bring someone else to help with code.

So a new part-time developer came in. He was onboarded into how everything worked, and was deploying to production within his first week.

And so they grew. They hired a couple more customer support people, a few more developers (and it became tradition that everyone had to be deploying to production during their first week.) They grew as a team, and as a product, and they were more and more successful.

And then our protagonist developer’s computer was stolen at a cafe. She walked to the bar to order something, and someone snatched it from the table and ran out with it. Ugh, what a pain in the ass. Right? “Oh well, it’s just money,” the team commiserated with her on Slack. “Don’t worry about it. It’s fine.”

She went to the store, got a new laptop, and went home and started installing and configuring all the things (if you haven’t broken or lost your computer, or had it stolen recently, you might not know how annoying and time consuming it can be to install everything from scratch. Trust me.)

And in the middle of that, she got a notification that the app was down. Ugh. She hit up the other developers on their team chat so they could look into the problem. They all had access to the servers, after all.

“Uh oh. I can’t get in. It’s telling me my key is invalid,” one said. “Um, I get the same thing.” They all tried, but none could get in. Panic ensued.

They were all locked out.

It wasn’t just a random computer thief, clearly. The first thing the attacker did was find the processes in charge of backing up their database and disable them; then he found the previous backups and deleted them. Next he took down the database itself and deleted it. And finally, he locked everyone out of the servers… for the lulz.

And this turned out to be fatal. With no backups and all their customer data gone, the business was crippled.

They lost a lot of customers over the next couple of days. Some understood and stuck around, but it was not enough to keep everything afloat. People had to be laid off. Much crying ensued. They limped for some time, but eventually went out of business.

As I said, terrible misfortune.

Will you get to the point, already?

So, there’s a few things that happened in the story that could’ve been prevented by taking a few precautions and being (slightly) paranoid.

A few thoughts about security

Most (web) developers tend to think security means escaping HTML or preventing CSRF attacks by passing a random string in form submissions. And sure, building a secure product requires doing these things. But building a secure product is more than this. Security is a mindset, and it applies to the code you write, the environment you work in, and the way you interact with your team.

So let’s look at things you can do as individuals, first.

It’s great to have your servers all secured behind firewalls, but if your team’s workstations are vulnerable, then you’re shit out of luck, as we’ve seen.

A couple obvious ones are to have a decent user password for your account, and to lock your computer whenever you stand up and go to the bathroom, or to get a drink, or whatever. Even when you’re among friends, it’s good to do it to build up the habit.

Computers get stolen every day, and if someone who opens the lid of your laptop is simply in, with access to everything needed to reach your production servers… well, that’s obviously bad.

Quick question: how many of you keep your computer logged in to your email across restarts? I mean, it’s so handy to not have to log in to gmail (or your provider of choice) every time you reboot your computer.

So, if someone for some reason steals your computer and gets to your user account, would they be able to get into your email just by typing gmail.com into the browser? How about GitHub?

It’s sometimes annoying, but setting everything to log out when you close the browser is a good practice. And if you use a password manager, then logging back in isn’t too painful.

Oh, yeah, password managers. You all use one, right? I personally recommend 1Password, which is the one I use, but LastPass or iCloud Keychain or whatever else should be good, as long as you use one. Please don’t type passwords by hand. You can have the manager generate very random, very hard to guess, very long passwords for most services, and automatically fill your forms for you.
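To give a sense of what a manager does for you when it generates a password, here’s a minimal sketch using only Python’s standard library (the length and alphabet here are arbitrary illustrative choices, not what any particular manager uses):

```python
import secrets
import string

def generate_password(length: int = 32) -> str:
    """Generate a long, uniformly random password, like a manager would."""
    # secrets draws from the OS's cryptographically secure randomness source,
    # unlike the random module, which is predictable and unsafe for secrets.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A 32-character password over that alphabet has far more entropy than anything you’d memorize, which is exactly why you let the manager remember it for you.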

And speaking of logging into and out of things. How many of you have two factor authentication enabled?
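Those six-digit codes aren’t magic, by the way. Most authenticator apps implement TOTP (RFC 6238): an HMAC over a shared secret and the current 30-second time window. A rough sketch of the algorithm in Python’s standard library (the HMAC-SHA1 variant; the secret shown is the RFC’s published test key, not anyone’s real one):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30, at: float = None) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's test key ("12345678901234567890" encoded as base32).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

The server runs the same computation, so a stolen password alone is no longer enough to log in.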

Now those were things you should do as an individual to have a more secure computer. I will mention a couple things that are important (and usually overlooked) that you can do as a team, or that your company can do, to be more secure.

For example, access control is a real (and important!) thing. Having every developer in your team have unlimited access to everything might make debugging certain things a bit easier. It also means that if any of the developers’ credentials are compromised, the attacker now has access to your entire stack.

For example: In our story, if the backup script only had permissions to create new backups instead of also deleting them, this attack would’ve been mitigated as they would have just lost a little bit of data and could restore from a recent backup.
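What that looks like in practice depends on your stack. As a hypothetical example: if backups land in an S3 bucket, the backup job’s IAM policy can grant write and list access while deliberately omitting any delete permission. The bucket name and policy below are illustrative, not from the story:

```python
import json

# Hypothetical least-privilege policy for a backup job: it may write new
# backup objects and list the bucket, but it is never granted
# s3:DeleteObject, so compromised backup credentials can't destroy history.
backup_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-backups",
                "arn:aws:s3:::example-backups/*",
            ],
        }
    ],
}

print(json.dumps(backup_policy, indent=2))
```

Pair that with versioning or object locks on the bucket and even an overwrite can’t silently replace an old backup.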

Not being able to roll credentials quickly is another big one most companies ignore. If you think a developer’s credentials might have been compromised, you should immediately lock that person out. It’s easier for that developer to spend a couple of hours configuring a new SSH key and resetting passwords than it is to lament the aftermath of an attack.

Had this company implemented something like this, they would’ve revoked the theft victim’s credentials immediately, and the attacker would never have gotten anywhere.

Obviously, having all this automated makes it repeatable, and thus less error prone. Forgetting to lock someone’s credentials out of one thing might be as fatal as doing nothing, if it just happens to be the “wrong” thing.

In particular, you want to lock out a person’s personal credentials (their password, access to their email account, to your GitHub organisation, your AWS console, your staging and production servers…)

Essentially, you want to pretend that that person doesn’t work with you for the few minutes or hours it takes to roll new credentials.
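The SSH part of that lockout boils down to filtering the compromised public key out of every server’s authorized_keys file. A toy sketch of just that core operation (in reality you’d push this with your configuration-management tool, and the key material and comments below are made up):

```python
def revoke_key(authorized_keys: str, revoked_marker: str) -> str:
    """Drop every authorized_keys line containing the revoked key's
    comment or fingerprint; all other keys pass through untouched."""
    kept = [line for line in authorized_keys.splitlines()
            if revoked_marker not in line]
    return "\n".join(kept) + "\n" if kept else ""

# Hypothetical file with two keys; "dev@stolen-laptop" is the one to revoke.
keys = (
    "ssh-ed25519 AAAAC3Nz...a alice@desktop\n"
    "ssh-ed25519 AAAAC3Nz...b dev@stolen-laptop\n"
)
print(revoke_key(keys, "dev@stolen-laptop"))
```

The point isn’t the string filtering, it’s that this runs across every server in one automated pass, so no box gets forgotten.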

Finally, you should get audited every now and then. No matter how great your firewall setup is, or how well you follow any security guidelines, you’ll probably miss something.

Even if you’re security-savvy, you’ll probably miss something (just as even a senior developer occasionally fails to spot things and introduces bugs into the code.) And missing certain security vulnerabilities, or failing to keep up with the latest releases, might leave you wide open to attackers and terrible misfortune.

Wrapping up

These are only a few things you can do. They aren’t particularly difficult to implement, and the value-for-effort is actually pretty decent.

I mean, you don’t want lack of security on your personal computer to be the reason many people get laid off and an entire company goes out of business, now, do you?

At the end of the day, this is a pretty extreme case. Your computer getting stolen might happen, but it’s unlikely it will end up in the hands of someone who knows what to do with it. Still, better safe than sorry.

Just be a little paranoid, and things will be fine.

Photo Credits
