@kiostark
Forked from jstray/gist:fe34b6b7079c6bf15dc1
Last active August 29, 2015 14:05


Some stories are far riskier than others. In a previous post [[link to first part]], we covered the digital security precautions that every journalist should take. If one of your colleagues uses weak passwords or clicks on a phishing link, more sophisticated efforts are wasted. But assuming that everyone you are working with is already up to speed on basic computer security practice, there's a lot more you can do to provide security for a specific, sensitive story.

The first step is thinking through what it is you have to protect, and from whom. This is called threat modeling, and it's the foundation of any security analysis. The goal is to construct a picture -- in some ways no more than an educated guess -- of what you're up against. Then you will be able to determine which software tools you'll need and how you'll need to use them. All of this leads to a security plan: a specific set of techniques and practices that everyone involved in the story must understand and follow.

Security doesn't come from software. It comes from understanding your security problem thoroughly, and successfully executing a plan to mitigate your specific risks. Think process, not technology.

Threat Modeling

There is no one-size-fits-all security. Maybe you're withholding the name of a source so their abusive ex doesn't track them down. Maybe you're concerned about preventing your social media accounts from being hacked. Or perhaps you've got a potential whistleblower who worries that their employer is monitoring their email.

Threat modeling is a general approach to thinking through your security needs and coming up with a plan. To make this concrete, throughout the rest of this post I'll refer to the following security scenarios. These are simplified versions of real situations that journalists have faced.

Syria War Photographer You are a photojournalist in Syria with digital images you want to get out of the country. Some of the images may identify people working with the rebels who could be targeted by the government. If the images are intercepted, those people could be arrested or worse.

Insider Trading Whistleblower You are reporting on insider trading at a large bank and talking secretly to a whistleblower who may give you documents. If they are identified before the story comes out, at the very least you will lose your source. The source might lose their job.

Police Misconduct You are reporting a story about local police misconduct. You have talked to sources including police officers and victims. You would prefer that the police commissioner not know of your story before it is ready to be published. If word gets out early, your sources could face retaliation and the department could move to block the story.

The threat modeling approach is based on the idea of asking what is threatened, by whom, how, and how dire the consequences would be. You can structure this thought process by asking the following questions.


What do you want to keep private?

It's often pretty clear what has to be kept secret. In the Syria scenario the photographs cannot be public until the people in them are safe elsewhere. But although the digital files containing the photographs cannot fall into enemy hands, it's not really the photographs that need to be secret, but the identities of the people in them.

Privacy is about protecting specific pieces of information. Aside from keeping someone's identity secret, you may need to protect someone's location, schedule, or address book. Address books or contact lists can be particularly sensitive, because they can reveal the identities of many people at once. The more specific you can be about what is secret, the better you can protect it.

In the Whistleblower scenario it's similarly important to protect your source's identity. But in this case you also have to communicate with your source to receive the documents they want to give you. Encryption tools will protect the content of your communication from prying eyes in the bank's IT department, but IT will still be able to tell who you are talking to, even if they don't know what you are saying.

In the Police Misconduct scenario, it's the existence of the story itself that needs protecting, along with your notes and the identities of the officers and victims who talked to you.

Who wants to know?

This is often the easiest question to answer. In security work, the entity who wants to break your security is called the adversary. In our scenarios the adversaries are the Syrian government, the bank, and the police department. Very often, the adversary is the subject of the story. But it's worth thinking about other adversaries. Who else would want what you have? Maybe a rival news organization would love to take a look at your juicy leak. Maybe there's a government who might take an interest in your work for intelligence purposes. Criminal organizations, political operatives, and obsessive individuals can all be adversaries too, depending on the story. The more specific you can be about potential threats, the better you can plan for them.

What can they do to get it?

Once you've worked out who your adversary is and what they want, you're ready to ask how they can get it.

It's kind of glamorous to imagine nefarious hacking or NSA snooping, but there are far more mundane methods. Different adversaries might search public materials for your traces, steal your laptop, file a subpoena, or call a new employee and ask for their password. There are many different kinds of "attacks" on your security.

  • A technical attack relies on hacking, intercepting your communications electronically, or breaking codes. But your adversary doesn't get any points for difficulty, and non-technical attacks can leave you just as compromised.
  • A legal attack might involve a lawsuit to stop publication or compel disclosure, or a subpoena to force you or a third party to reveal information. Or you or your source could be arrested or otherwise detained.
  • A social attack is a con of some sort, relying on trust and deception. Your adversary could mount a phishing campaign, brazenly walk into your office during lunch and sit down at your computer, or call with a fake emergency to ask for passwords.
  • A physical attack involves theft of computers or data storage devices, installing malware on someone's computer when they're not looking, or generally interfering with your hardware or your person. A determined adversary can also just beat someone until they talk -- a strategy which goes by the grim name of rubber-hose cryptanalysis when applied to "breaking" encryption.

cryptanalysis by wrench

You can't know for sure what your adversary can try, but you can make some educated guesses. In the police misconduct scenario, your adversary is likely to use legal tools against you, and maybe even arrest you or your sources. In the insider trading scenario the bank might file a lawsuit, but their IT department will use technical tools to determine if someone is using a work computer to leak proprietary information. The current Syrian government has used both sophisticated technical attacks and horrific torture.

What happens if they succeed?

Your security plan is incomplete if you haven't thought through what will happen when things go wrong.

To begin with, tracing through the consequences of a security failure can show you how to improve your protection. Good security plans often include multiple overlapping security measures, an important strategy known as defense in depth.

But thinking through the consequences of a security breach is also an important reminder of what's at stake. Security is never free: it costs time, money, and convenience. Suppose you have a hot piece of information but you're away from your laptop where your secure communications tools are installed. Can you get away with sending an unencrypted text message, just this one time? That depends. How important is it that you send the message before you get back to your laptop, as compared to the possible consequences of interception?

Your analysis of consequences may also lead to the conclusion that there is no safe way to do a story. The ethics of journalism security are still evolving, but I propose that "do no harm" should be a basic principle.

It's just a model

You won't be able to answer the above questions definitively, because you probably don't have solid information about your adversary's capabilities and intentions. That's why this is a threat "model." Your security planning can only be as good as your assumptions; you can only protect against risks you've thought of. These questions are designed to make your assumptions explicit.

But even though you cannot know what your adversary will do, there are two types of information that may help you guess. First, you can research what your specific adversary has done in the past, which may reveal intentions and capabilities. Second, you need to know what types of attacks are possible, and the difficulty or expense of each. You can see that someone could steal your laptop if they can get into your hotel room. Could someone unmask your source if they can get into your ISP? And if so, what could you do to stop them? This is why information security planning requires detailed technical knowledge. I'll cover some of the basics in the rest of this article, but no one can learn digital security in a day. Instead, my goal is to give you enough knowledge to understand what you will need to learn to keep safe.

Digital security concepts

The threat modeling questions above are designed to give you a clearer picture of your security needs. The next step is to translate these needs into an actual plan. To do that, you need to be familiar with some basic concepts related to how the Internet works. (If you already know how the Internet moves data around, treat this as a refresher or skip ahead to the tools.)

How communications travel over the Internet

The Internet got its name from being a "network of networks." Any communication between two points may travel through dozens of intermediate computers operated by different entities. Those computers belong to corporate networks, telecommunications companies, and technology companies that provide online services.

Suppose our insider trading source, alice@bigbank.com, sends an after-hours email from her desk at work to a reporter, bob@gmail.com, who is currently on his couch at home. The message first travels from Alice's computer to the BigBank email server, then to the telecommunications company (telco) that BigBank pays for Internet service. The email probably passes through several different telcos before it finds its way to Google's servers. When Bob checks his Gmail account from his couch, Google transmits the email from their server to Bob's web browser, via several more telcos, ending with whatever company Bob pays for internet service to his home.

How email really works

All of these intermediaries -- BigBank, the telcos, and Google -- may be able to read the contents of the email. The process is akin to passing a postcard from hand to hand over thousands of miles. Without encryption, everyone along the way can see what you wrote.

Fortunately there is already a lot of encryption built into the web: any time you go to a URL that starts with HTTPS you are using a secured connection between your computer and the server you are connecting to (the "S" in HTTPS stands for "secure"). When Bob logs into Gmail, Google automatically redirects the connection to an HTTPS address, which means that the connection between Bob's browser and Google's servers is encrypted. But Google can still read the email, and other parts of the path from Alice's office computer to Google may not be secure. For example, BigBank almost certainly keeps some record of the email.
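What makes HTTPS trustworthy is certificate verification: the browser refuses to talk to a server that can't prove its identity, which is what blocks an eavesdropper from silently sitting in the middle. You can see the same policy in Python's standard library; a small sketch:

```python
import ssl

# HTTPS security rests on certificate checks. Python's default TLS
# context behaves like a browser: it refuses servers whose
# certificates don't verify, blocking man-in-the-middle interception.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # certificate must check out
assert ctx.check_hostname                    # cert must match the hostname
```

A connection made with this context is encrypted between your machine and the server, but remember: the server itself can still read everything.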

Other messaging systems face similar issues. Your message passes from your computer or mobile device through dozens of different computers owned by different organizations. Some of those connections may be encrypted -- such as connections from a browser using HTTPS -- but some may not be, and usually any company which stores or processes your data has access to it. Do you trust every person working for every one of those companies to be honest? Your private data is only as secure as the creepiest employee. Do you trust your service providers to be secure against the attacks of your adversary? It may be difficult for you to know whether or not they are competent when it comes to security. And what will they do when served with a subpoena? In the United States, the Fourth Amendment does not protect information that you have given to someone else to store or process.

The only way to protect information transmitted over the internet is to encrypt it yourself before transmission, a practice called end-to-end encryption.
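To see what "encrypt it yourself" means in miniature, here is a toy one-time pad in Python. This is an illustration of the end-to-end principle, not a practical tool -- real systems use vetted libraries, key exchange, and authentication -- but the shape is the same: the sender encrypts, every intermediary sees only ciphertext, and only the key holder can decrypt.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a random key of equal length.
    # Toy illustration of end-to-end encryption; do not use in practice.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # shared secret, used exactly once
ciphertext = xor(message, key)           # this is all intermediaries see
assert xor(ciphertext, key) == message   # only the key holder can decrypt
```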

Privacy versus anonymity

Encryption will hide the contents of what you are saying, but not who you are saying it to.

Our Syrian reporter should be wary of any electronic communication with in-country sources. Even if she uses end-to-end encryption, anyone snooping on the network -- say, the Syrian intelligence agency -- will know who she is communicating with. If her sources' identities must remain secret, this is an instant catastrophe: anyone seen talking to a foreign journalist will immediately be suspected of working with the opposition.

Just as it is possible to read the address on an envelope even when you can't read the letter inside, there is a difference between the content of a communication -- say, an email -- and information about who sent it, who received it, when, using which server, and so on. This is the distinction between "content" and "metadata" that has been popularized by the recent NSA revelations.

address metadata

Think of this envelope as your message passing through the Internet, including equipment that belongs to the telecommunications company as well as the bank's corporate servers. All of these machines -- and any person who has access to them -- can read the address. They have to be able to read the address to deliver the message! But if you have sealed the envelope no one can read the contents of the letter. Encryption technology "seals the envelope," protecting the contents of your communication. It does not hide the address.
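The content/metadata split is visible in the structure of an email message itself. A sketch using Python's standard email library, with the addresses from our whistleblower scenario:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@bigbank.com"        # metadata: visible to every relay
msg["To"] = "bob@gmail.com"              # metadata: needed to route the mail
msg["Subject"] = "those documents"       # also metadata, often forgotten
msg.set_content("...encrypted blob...")  # content: the part encryption hides

# Encryption tools seal only the body; every header above stays readable.
print(msg["From"], "->", msg["To"])
```

Note that the subject line travels as a header, outside the sealed envelope -- a detail that trips up many first-time users of encrypted email.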

This is the distinction between privacy and anonymity. It may be enough to protect just the content of your communications, or you may need to keep the addresses secret as well.

Anonymity is best understood as the inability to link one identity to another, such as linking a pseudonym to a real name. There are different kinds of unlinkability which might be needed in different situations. For example, is it important that your adversary not know that your source is talking to you specifically, or is it a problem if they talk to any reporter at all?

We'll look at tools for both encryption and anonymity below.

Data at rest, data in motion

Data needs to be protected in two ways: when it's being transmitted from one place to another, and when it's stored somewhere. Your adversary could read your email by intercepting it as it is transmitted across the network, or they could steal your laptop and read the messages stored there.

The key to securing data at rest is to keep track of how many copies exist and where they are stored. In the paper era, intelligence agencies would number each physical copy of a classified document and keep records of its whereabouts. It's much easier to make copies of digital material, for better or worse, but the same logic applies. How many copies are there of that sensitive photograph? It might be obvious that there's a copy on your laptop, but what about your camera memory card? What about your laptop backup? USB sticks? Did you ever view the photo on your phone? And even on your laptop, how many copies of the same file do you have? Is there a copy in a temporary directory somewhere? Have you imported the data into different programs?

Your security plan needs to take into account all of the copies that need to be made, where each is stored, and how it is protected.
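Auditing for stray copies can be partly automated: since copies of a file are byte-identical, you can hash a known-sensitive file and sweep your drives for matches. A sketch in Python; the function names are illustrative:

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    # Hash a file in chunks so large photo files don't fill memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def find_copies(root: str, digest: str) -> list:
    # Walk a directory tree and report every byte-identical copy.
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if sha256_of(path) == digest:
                    hits.append(path)
            except OSError:
                pass  # unreadable file; skip it
    return hits
```

A sweep like this only finds exact copies on drives you can reach -- it won't see the resized thumbnail your photo app cached, or the copy on a backup server -- so it supplements, rather than replaces, the mental inventory.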

Each individual copy of the data can be secured in a variety of ways. Again, consider threats of all kinds: technical, physical, legal, social.

One of the simplest things you can do to secure your data is to use full disk encryption. As the name suggests, this encrypts everything on a drive, keyed to your password. Windows, Mac, and Linux all have built-in tools for such encryption, as do Apple and Android phones -- but you do have to turn it on. Disk encryption is much stronger than a mere login password; without disk encryption an adversary can read your data merely by connecting the drive to a different computer. You can also encrypt external drives and USB sticks, and you should.

Full disk encryption is free and essentially zero inconvenience. Like 2-step login, discussed in part 1, there is no excuse not to use it. Every journalist should turn it on, on every computer and every phone they use.

You will also need to understand secure erase techniques and tools. There's no use deleting a file just to have someone pull it out of the recycle bin or trash folder, and a dedicated adversary with access to your hardware can work wonders with appropriate data recovery technology. Be aware that secure erase is not foolproof: on solid-state drives and journaling file systems, overwritten data can survive in blocks the erase tool never touched, so treat it as risk reduction, not a guarantee. Physically destroying the drive is the only certain method.

secure empty trash
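The basic idea behind secure erase tools is simple: overwrite the file's bytes before unlinking it, so a recovery tool finds garbage rather than your data. A sketch in Python, with the SSD caveat repeated in the code because it matters:

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    # Overwrite the file's bytes with random data, then remove it.
    # Caveat: on SSDs and journaling or copy-on-write file systems
    # the original blocks may survive elsewhere on the disk, so this
    # reduces risk rather than eliminating it. Full disk encryption
    # is the stronger defense.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to disk
    os.remove(path)
```

In practice you would reach for a maintained tool (most operating systems ship one) rather than rolling your own, but knowing what the tool does tells you what it can and cannot promise.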

Mobile devices

Smartphones are a security disaster. Not only do they store huge amounts of personal data, but they are inherently a location tracking device. Plus, the security tools for mobile devices are less mature than their desktop counterparts.

Consider for a moment what is accessible through your phone. Certainly your email and social media accounts. Perhaps your phone also has stored passwords to your online data in various cloud services. And of course, your phone has a copy of your address book. At minimum you should be using a lock code to protect your privacy, but this won't stop a determined and technically savvy adversary from accessing your data, should they get their hands on your device. As noted above, you should also be encrypting all data on your Apple or Android phone.

Even worse, your phone reports your location continuously -- even if you have location services turned off. In order to stay connected to the mobile network, your phone constantly switches between signal towers, each of which serves a particular small area. The phone company keeps this data, as well as a record of every text message, call, and data transmission, for billing purposes. Many phones and apps also store an internal record of GPS coordinates and wifi hotspots, and often transmit this information to corporate servers. It's also possible for third parties to track the radio signals your phone emits using a device called an IMSI catcher, popular with both police and criminals.

In 2010, German politician Malte Spitz sued to obtain his location history data from the local phone company. Plotted on a map, and correlated with his public appearances and posts, the data paints an extremely detailed portrait of his life, activities, and associates, as an amazing visualization by Zeit Online shows.

Tell-all telephone

The data generated by your smartphone can be extraordinarily revealing, especially the location data. Our crooked police commissioner doesn't need to crack your anonymity scheme to figure out who your sources are, if they can just subpoena your phone records. Even if you didn't call anyone, the location data can reveal who you met with -- especially if it can be correlated with your source's phone location.

It is possible to use so-called "burner" phones but it is actually quite tricky to set up and correctly use a phone that is not connected to your identity. For example, you can't ever have your regular and burner phones turned on in the same place at the same time, and you can't ever call your regular contacts from your burner phone.

Consider simply not using your phone for any kind of sensitive communications. Or even leaving it at home. Is this level of paranoia really necessary? Sometimes. It all depends on who your adversary is and what they might do.

Endpoint Security

There's no need to intercept your email in transit if someone can just hack into your computer remotely and download your files. Unfortunately, this is not only possible, but there is a thriving market in tools to do just that. The problem of securing computers and the data on them -- as opposed to securing the communications transmitted between computers -- is known as endpoint security.

There are many ways a computer can be compromised. If the adversary can get a piece of software installed on your computer or mobile device without your knowledge, you lose. This can be accomplished with the unknowing cooperation of the user -- as in a phishing attack [[link to first part]] -- or by silently exploiting vulnerabilities remotely. Or they might find a way to get you to plug an infected USB stick into your computer; this is not an urban legend, as malware has occasionally turned up on promotional USB drives handed out at conferences. The goal of the adversary may be to install a remote administration tool, a piece of software that can do things like record keystrokes (to reveal passwords) or even secretly transmit your files on command.

The most basic defenses against remote hacking are anti-virus tools and up-to-date software. It's particularly important to keep your browser and operating system up to date. This will not protect you from all attacks, because not all known vulnerabilities have been disclosed and not all disclosed vulnerabilities have been fixed, but up to date software is generally going to be much safer than old versions.

If the adversary has physical access to your computer, anything is possible. They could install software, read files, or even install an inexpensive hardware device to record keystrokes. Generally, any device containing sensitive data or used for secure communication must be physically secured at all times. That usually means either on your person or locked up somewhere your adversary is unlikely to be able to get to.

Will your adversary really go through the trouble of using sophisticated malware to break into your computers, or even secretly tamper with your hardware while you're out? Only you can decide whether or not that's likely, based on your specific adversary and circumstance. The answer is a key part of your threat model. But note that remotely hacking into your computer is much easier than cracking properly encrypted data.

Endpoint security is a difficult problem, one that is impossible to fully solve at this time. If you need to be absolutely sure that a computer is not compromised, the most reliable approach is to buy a brand new computer. At the very least, you should wipe the drives and re-install the operating system, or boot from a secured operating system like TAILS (see below.) In the worst case, when you must keep data from a determined, well-resourced, and technically sophisticated adversary, the only answer may be to store sensitive files on a computer that is never connected to any network at all. This is called an air gap.

Digital security tools

Every security professional gets asked about tools constantly, but software is not what gives you security. By the time you are selecting tools you should already have developed a good threat model. You should know what you are protecting, and from whom, and how they might break your security.

Nonetheless tools are important. Each has quirks and flaws and nuances, and is easier or harder to use correctly -- and to get sources to use correctly. Here's a brief overview of some of the most common tools. New tools are being developed all the time, so this is the part of this post I expect to go out of date most rapidly.

Cryptocat: easy encrypted chat

Cryptocat is probably the easiest of all security tools to use, which makes it a good choice for sources who are new to secure communication -- that is, almost everyone. Simply enter your user name and the name of a chat room for instant secure group chat. You can also transmit files. Cryptocat is available as an extension for all major browsers, and as an iPhone app. It uses end-to-end encryption, so not even the Cryptocat servers (or anyone who can hack into them) can read your messages, and it does not log or store messages after they are transmitted.

Be aware that in-browser encryption is a relatively new concept, which means Cryptocat is less mature than other security tools. Previous versions had severe vulnerabilities. However, the code is open source and has since been audited by multiple people, which increases the likelihood that it is secure. It is a great tool for many people owing to its simplicity of use; however, more extensively vetted tools like OTR plus Pidgin or Adium (discussed below) are probably more appropriate against technically sophisticated adversaries.

GPG: encrypted email

GPG is the gold standard for end-to-end secure email. It's an open-source implementation of the PGP protocol, but you probably don't want to use it directly. Instead, use an email application which supports GPG such as Thunderbird or Apple Mail, or a browser extension such as Mailvelope or Google's forthcoming end-to-end Gmail encryption.

GPG is a powerful technology, but it has some serious drawbacks. Like all encryption technologies, GPG will not protect the identities of the people you are communicating with, only the message contents. It's also one of the more difficult technologies to set up and use correctly, because it requires manual key generation and management.

OTR: encrypted instant messaging

OTR stands for "Off the Record" instant messaging. Like PGP, it's a protocol rather than an application. It's supported by the Pidgin universal instant messaging app on all operating systems, Adium on the Mac, and various other clients on other platforms and devices. Note that this is completely different from the confusingly named "off the record" option in Google Chat and AIM, which only turns off chat logging and does not provide end-to-end encryption.

Secure instant messaging is a lot simpler to set up than secure email, and it can actually be more secure too. First, it is simpler to use so it's less likely that you'll make a bad mistake. Also, after the conversation is over the per-conversation encryption keys are destroyed so there is no way to recover what was said -- assuming neither party kept a log of the conversation. Don't keep chat logs; go into the settings on your IM software and make sure logging is turned off. This is probably the number one way that secure IM is compromised.

If your threat model includes man-in-the-middle attacks, you need to be sure to do the fingerprint verification step with each new contact. It's an easy, one-time process.
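A fingerprint is nothing mysterious: it's a short hash of a public key that two people can compare over a channel the adversary can't forge, such as reading it aloud in person or over the phone. An illustrative sketch in Python (the actual OTR fingerprint format differs):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Condense a key into a short string humans can compare out of band.
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i:i + 8] for i in range(0, 40, 8))

# If the fingerprint Alice computes from "Bob's key" matches the one
# Bob reads from his own screen, no man in the middle has swapped keys.
print(fingerprint(b"-----BEGIN PUBLIC KEY----- ..."))
```

If an attacker substitutes their own key to intercept the conversation, the fingerprints won't match -- which is exactly why the comparison must happen over a channel the attacker doesn't control.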

Tor: an anonymity building block

Tor provides anonymity online, or at least it helps. The name stands for "The Onion Router" and that's sort of what Tor does: it routes your browser traffic through multiple computers using multiple layers of encryption. The end result is that the computer you connect to does not know where you are connecting from, and in fact no single computer on the internet has knowledge of the path you are taking. You can use Tor by downloading the Tor Browser, a custom version of Firefox.

How Tor Works

Tor is a powerful technology, mature and well tested, and even the NSA has difficulty breaking it. However, it's extremely important to understand what Tor does not do. For example, if you log into any of your regular accounts over Tor, the server on the other end (and anyone who can intercept that connection) still knows it's you. That's an obvious example, but there are many other behavioral risks. Tor does not hide the fact that you are using it, which can itself be used to identify you, as in this case.

True online anonymity is quite difficult. We are all embedded in a huge web of online accounts, logins, contact lists, habits, locations, and associations that is very difficult to break free from without leaving traces. However there are a few recipes that are widely used. One key point: you can never reference or communicate with your non-anonymous identity from your anonymous identity. Ideally you should be accessing them from different computers, lest you get busted like Petraeus. But as usual, it all depends on your threat model: what is your adversary likely to do?

SecureDrop: anonymous submission

SecureDrop exists to solve a single problem: to allow people to submit material to journalists anonymously. It's designed so that the source must access the drop through the Tor Browser, meaning that not even the journalist knows who the source is. SecureDrop also supports simple two-way anonymized communication between the source and the journalist, which allows the journalist to ask questions about the material. This is the anonymous leaking model of Wikileaks, operationalized into a robust tool and a set of recommended procedures.

Guardian Project: secure mobile communications

You'll notice that all of the above tools run on desktop computers, not smartphones. The Guardian Project is trying to close that gap with an array of open source applications for various platforms, based on the same secure standards as their desktop cousins.

  • OrWeb: Tor on your phone
  • ChatSecure: OTR instant messaging
  • OSTel: secure voice communications
  • TextSecure: A replacement for standard text messaging.

Silent Circle: commercial secure telephony

There is good reason to believe that open-source software can be more secure than proprietary systems. Open source software can be widely reviewed for bugs, and it's hard for any one entity to introduce hidden back doors. On the other hand, usability, training, and support can be... variable. Silent Circle is a commercial secure voice, video, and text app for your phone. They offer a slick product with good support. And you can give one of their pre-paid "Ronin cards" to a source to get them started with secure communications, easily.

But can you trust Silent Circle? That's impossible to answer definitively, but at least the pedigree looks good. Silent Circle was co-founded by PGP creator Phil Zimmerman, which lends a certain amount of technical and ideological credibility.

TAILS: a secure operating system

There is no un-hackable operating system, but TAILS comes closer than anything else that mere mortals might use.

TAILS is secure because it starts completely fresh on every boot; it doesn't ever save anything to disk. You boot it from a DVD or USB stick inserted into almost any Intel-based computer (including Macs). Even if someone were able to hack into your computer while it was running TAILS, they would find no personal data there, and any back doors or malware they were able to install would simply disappear forever when you shut the computer off. Even better, everything you do online using TAILS is automatically routed through Tor.

TAILS is ideal if your threat model includes sophisticated attackers who might try to put malware on your computer... or if you need to make absolutely certain that your computer is not compromised, but you don't have the budget to buy a fresh laptop (note, though, that no operating system can protect against hardware tampering). It's also great for setting up an anonymous communication machine that has no trace of your other identities -- though as usual, be careful! There are a hundred ways you might accidentally reveal yourself.

Is this a plan everyone can follow?

By now you realize that security is complex.

The required software can be difficult to set up. The protection that security technologies offer is riddled with caveats. And for some goals, like anonymity, a single slip-up can blow everything. Meanwhile, you operate in the real world, under time pressure and sometimes even in physical danger. And even if you become a security expert, your sources also need to use secure tools. Will they be able to operate the software perfectly every time, under pressure? Can you even persuade them to attempt it?

Even experienced people frequently get security wrong. Wikileaks did not intend to release the full cache of 250,000 diplomatic cables, but through a complex series of events involving mistakes by both Wikileaks and The Guardian, they became public. If Assange can't get security right, what hope does anyone else have?

This is why asking whether everyone can really follow the plan has to be part of the plan. Once you've worked out your threat model and specified the tools and the process, you have to ask: where is someone likely to screw it up? And what happens when they do? Far too often, we are our own worst enemy when it comes to security.

The best plans are simple.

In the whistleblower scenario, you could choose not to use digital communication tools at all. Meet in person. Leave the phones at home. Take notes on paper. Keep the notes somewhere physically secure. In the Syria scenario, you could choose to delete any photo containing identifying information. Review the photos every evening when you get back to your hotel. Copy only "safe" photos to your laptop. Then wipe the camera memory card with a secure erase tool.

Putting it all together

This is a lot of information. There's even more you'll need to learn -- as I said, no one can learn digital security in an afternoon. But hopefully you've now got the beginnings of a conceptual framework. Start with these questions:

  • What do you want to keep private?
  • Who wants to know?
  • What can they do to find out?
  • What happens if they succeed?

Answering these questions involves building up a picture of the security problems you face: a threat model. Building a realistic model requires that you understand something about how digital technologies store and transfer information, which is where technical know-how and learning comes in. After that -- when you understand the security landscape you are operating in -- you can move on to selecting the specific tools that might meet your needs. But tools don't make security; good plans and good habits do. In the end, the simplest plan you can come up with might be the best.
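One lightweight way to keep those answers honest is to write them down in a fixed structure before choosing any tools, so nothing gets skipped. A sketch in Python; the field names and example values are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ThreatModel:
    private: list      # what do you want to keep private?
    adversaries: list  # who wants to know?
    attacks: list      # what can they do to find out?
    consequences: str  # what happens if they succeed?

# The whistleblower scenario, filled in as an example.
whistleblower = ThreatModel(
    private=["source's identity", "documents before publication"],
    adversaries=["the bank's IT department", "the bank's lawyers"],
    attacks=["email monitoring", "lawsuit or subpoena", "phishing"],
    consequences="the source is fired and you lose the story",
)
print(whistleblower.adversaries)
```

The point isn't the code; it's that every field must be filled in before the plan is done, and an empty field is a gap in your thinking.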

Resources

There's a lot more to learn. Fortunately, there is starting to be a lot of good material on journalism information security.
