
Edward Snowden at IETF 93

Edward Snowden answered questions after a showing of CITIZENFOUR at the IETF93 meeting; this is a transcript of the video recording.

For more information, see the Internet Society article.

Introduction

0:00:13

standing ovation

Mark Nottingham: So we have about half an hour for questions and answers. Do you guys wanna kick off?

Edward Snowden: Really quickly just before the questions kick off...

Mark Nottingham: Sure.

Edward Snowden: When this was all happening in June, the media was all over the place. They're obsessed with me, what was going on, what the politics were about – this wasn't about politics. This wasn't even really about surveillance. This was about democracy.

This was about the fact that we, collectively as the public, both nationally and internationally, had the world that we lived in change without our knowledge, without our consent, without our awareness, without our involvement. Technologists, particularly the IETF, you guys are the ones who have, I think, really led the charge and started to show to us that there's a path out of this. There's a way forward when these kinds of things happen.

Because when we think about the work of the IETF, when we think about the standards and principles that we all worked on, when we try to envision: what is the internet? To whom does it belong? Who does it work for? The internet doesn't belong to vendors. The internet doesn't belong to governments.

The internet belongs to the user, right?

So when we think about things like the end to end principle, for example -- the idea of a simple core and smart edges -- that's what we planned for. That's what we wanted. That's what we expected, but what happened in secret, over a very long period of time was this change to a very dumb edge and a deadly core.

And that's what's being changed now and you guys have done RFCs, the IAB has put out a couple of draft statements about it, and this is what we see moving forward. We see an opportunity to change this.

Because it's not about the United States, it's not about the NSA, it's not about the Russians, it's not about the Chinese, it's not about the British, it's not about any national government. It's about the world we have, the world we want to live in and the internet, the connections that we want to build between people, between worlds, between every point of presence, in every home, on every phone around the world.

And I think – if I were going to synthesise and kind of simplify for everybody who doesn't think about this stuff, who doesn't care about the politics – and I get that, because this isn't a political forum, it's very neutral, it's about the technology – it's to say: let's not demonise the NSA specifically, because they're one example of a global problem.

The lesson of 2013 is not that the NSA is evil. It's that the path is dangerous. The network path is something that we need to help users get across safely. Our job as technologists, our job as engineers, our job as anybody who cares about the internet in any way, who has any kind of personal or commercial involvement, is literally to armour the user, to protect the user, and to make sure that they can get from one end of the path to the other safely, without interference.

With that, can we have questions?

Scale of Pervasive Monitoring

0:04:02

Daniel Kahn Gillmor: Sure. Ed, this is Daniel. I'm behind the camera right now. I was wondering if you could share some insights about the scale and scope of the machinery that's doing this kind of monitoring, and, in terms of protocol design, what sort of things we should be aware of in terms of the power of correlative attacks and things like that?

Edward Snowden: laughs It's a good question. It's a big question.

If I could ask other people who have questions, if they could speak from the mic attached in front of the laptop, so I'm not getting the room audio, that would be really helpful because it's got a directional mic on.

When we think about scope and we think about scale, when we think about what I saw as an analyst, sitting in Hawaii, working with XKEYSCORE, working with the tools of mass surveillance – you wanna think of it as: you, the analyst, anybody who's doing this, they wanted to be able to do the equivalent of tsharking the internet, or wiresharking it, if you don't like the command line.
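An illustrative sketch of the passive capture being described – the tshark/wireshark model applied at a point of presence. A toy Python example using scapy; the interface name is an assumption, and real collection systems obviously operate at a very different scale:

```python
# Toy sketch of passive capture at a point of presence: the
# "tshark the internet" model. Requires scapy and root privileges;
# the interface name is an illustrative assumption.
from scapy.all import sniff

def log_packet(pkt):
    # A real collector would buffer and index everything;
    # here we just print a one-line summary per packet.
    print(pkt.summary())

sniff(iface="eth0", prn=log_packet, count=10)
```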

Every packet that crosses through your points of presence – and the NSA had a plan that they call the million points of presence. They kind of stopped when they got to about 100,000. They said, “It's good enough, we don't need any more.”

But the idea is that you can start to store everything, you can start to buffer everything, then you can select things, you can promote things – everything that looks like it's related to something that you're interested in, whether it's the plaintext of your email messages or your Yahoo chat, your Facebook chat before it was protected, or even after it's protected, if they're able to steal the keys from the service providers and things like that are put in inaudible or some other mechanism.

Things scale up in ways that when people are designing the protocols they cannot anticipate.

Typically, when we think as infosec people or security people about what the real threats are – back in the day, talking about somebody hunched over the backbone of the internet, watching everybody all the time, was screaming paranoia.

You thought about somebody in the coffee shop, you thought about somebody on the local network, you thought about some kind of insider threat or some remote threat that exploited some equipment on your network – but not that every point your traffic crossed was a problem.

Unfortunately, we're seeing that that is the case, and so it becomes a question of: how do you make sure nothing is in plaintext at any point where people can derive information directly through observation, or peripherally gain information through inference that works contrary to the interests of the user, even if it's encrypted?

One of the big things that I think we're likely to see – not immediately, not in five years, but ten years, 20 years down the road – as encryption becomes pervasive, which is the desired state, right? We need people to protect the confidentiality of users' communications from all points of presence at all times. We are likely to see governments, companies – basically everybody who sees some lever for increasing their own incentives, whether it's for business purposes or government purposes or whatever – collect information about association: what IP pairs are common.

When you think about, for example, Amazon.com and their book entries and things like that – one of the entries that they put on every book page is something called statistically improbable phrases, which normally refer to things that are unique to the author, something that the author produced themselves.

Now if content were encrypted but, for example, DNS requests were not, you can start to identify users by statistically improbable hostname pairings – things that you don't see throughout the giant, massive buffer associated with normal people. It's not who goes to Reddit, it's not who goes to Hacker News. It's who goes to Reddit, Hacker News, and then grandma's cookie site, because that's traffic from a very small number of individuals.
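A minimal sketch of that rare-pairing attack in Python; the log contents and the "seen only once" threshold are illustrative assumptions:

```python
# Sketch: de-anonymising users by rare hostname co-occurrence, as
# described above. Log format and threshold are assumptions.
from collections import Counter
from itertools import combinations

# user -> set of hostnames observed in their (unencrypted) DNS traffic
logs = {
    "user_a": {"reddit.com", "news.ycombinator.com", "grandmas-cookies.example"},
    "user_b": {"reddit.com", "news.ycombinator.com"},
    "user_c": {"reddit.com", "grandmas-cookies.example"},
}

pair_counts = Counter()
for hosts in logs.values():
    for pair in combinations(sorted(hosts), 2):
        pair_counts[pair] += 1

# Pairs seen for only one user act as a fingerprint for that user.
improbable = {p for p, n in pair_counts.items() if n == 1}
for user, hosts in logs.items():
    fingerprint = {p for p in improbable if set(p) <= hosts}
    if fingerprint:
        print(user, "is identifiable by", fingerprint)
```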

These are the kind of things that you need to anticipate, that you need to prepare for, when we think about proposals like DPRIVE and DANE and DNSSEC. When you combine these things, what you're really doing is creating the next generation of DNS, step by step, and that's what we need to do.
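For a concrete sense of what DPRIVE-style protection looks like, here is a sketch of a DNS-over-TLS query (RFC 7858) using dnspython; the library version and the choice of resolver address are assumptions:

```python
# Sketch: an encrypted DNS query (DNS over TLS, RFC 7858, the output
# of the DPRIVE working group). Assumes dnspython 2.x is installed
# and that 1.1.1.1 is an acceptable DoT resolver.
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
# Port 853, TLS-wrapped: an on-path observer sees the resolver's IP
# address, but not the hostname being looked up.
response = dns.query.tls(query, "1.1.1.1")
print(response.answer)
```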

We need to come together and think about the lessons that we've learned from the failures of architecture and build a new internet that will survive not just for the next two years, the next three years, but for the next 50 years, the next hundred years, the next generation.

Returning to the US

0:08:59

Patrick Linskey: Patrick Linskey from Cisco. Is anybody talking with you about a possible return to the US?

Edward Snowden: Yes. So, this is something that I can't discuss publicly because it's a private thing going on with the Department of Justice and everything like that.

I've made clear since 2013 that it's my intention, my desire, to return, but the DOJ has been stuck on the – basically, the point that they don't want to provide guarantees of a fair trial.

The act that I've been charged under is from 1918, I believe – the Espionage Act – which was intended to be used against basically saboteurs, infiltrators in society who were literally bombing rail lines; they were opening the gates to let enemy troops in and things like that.

It was never used against whistleblowers who were basically working in concert with journalists, in partnership, basically, to serve the public interest, and because of the way this Act is structured – because it's basically an old Act, an emergency Act, it wasn't inaudible to Congress – it provides no ability to make a public interest defence.

You cannot mention to the jury at all what your motivations were, what your goals were, why you did it, how you went about what you did, how you mitigated the harms, anything like that. You can basically make no defence at all. The government simply says, "Classified information was passed to journalists. Ask anybody," and then you go directly to jail for basically the rest of your life.

There were only two cases previously – well, actually, one case prior to, I believe, the Obama administration, where this law had ever been used against someone working with journalists and that was Daniel Ellsberg, the Pentagon Papers case.

In recent years, unfortunately, we've seen this happen again and again and again to people who were working precisely with journalists, because you don't really see a lot of actual spies. The thing back in the day of people selling out government secrets for money, for personal gain – that's really not something we see; we don't have a big clash of ideologies in the world today that motivates that kind of thing the way we used to back in the '30s and such.

Normally, you see people working with journalists not to hurt the country, but to help the country, and unfortunately, the government feels somewhat exposed by this dynamic. Rather than try to pass new laws, rather than try to create some new mechanism to resolve that – to provide any kind of new protections or structures by which to do this – they're basically trying to use a deterrent effect to say, "Look, we understand these are serious concerns."

For example, in the United States, both the courts and the Congress had decided that the NSA's programs were unlawful and inaudible, but they say, "Even though you did a good thing there, we're still gonna destroy you, because it's most important to us to maintain the stability of the system of controlling access to information."

Patrick Linskey: Thanks.

Preserving the Internet

0:12:25

Mark Nottingham: I have a couple of questions. I have a 13-year-old son, and he really took away from the movie your description of how the internet used to be before surveillance; he was quite taken by that description because he hasn't experienced it. Those values are very much at the core of the IETF to me, and I wonder: what can we do to preserve them? You talk a lot about end-to-end encryption, which we've taken on board. What else is there, both technically and as citizens?

Edward Snowden: One of the biggest challenges, I think, for a free internet, or an open internet, a safe internet for the users, is making sure that we have an ability to divorce ourselves from physical identity.

Now, one of the primary vulnerabilities of users around the world, in terms of their relationship to great powers – whether that's their national government, whether that's a foreign government, whether that's groups that don't like them very much – is the fact that they can be de-anonymised for their thoughts and then have some discriminatory action applied against them. That is very dangerous; I mean, it's a direct threat to their safety.

But even if we're not talking about people who are radical and are likely to receive some direct action against them, there's a very soft discrimination within otherwise inclusive communities that happens.

When you're on Facebook, for example, where you write under your real name, a 12-year-old's not really gonna be able to have a conversation with a professional engineer. They're not gonna be able to enter that community. They'd be recognised as somebody who's an outsider, as a child. They're not gonna be taken seriously.

It was quite different back when I was very young and engaged with the internet – I think many of the people in this room were the same way – where people didn't know who you were. If you asked a question, it would be answered at face value.

So the question becomes how do we restore that balance to some extent? And I think one of the big things that we need to do is we need to get away from true name payments on the internet.

The credit card payment system is one of the worst things that happened for the user in terms of being able to divorce their access from their identity. We need to think about how to develop systems that can tokenise identity or transfer identity – that can take the fact that somebody has paid for access, or has some ability to access the system, that they're an authenticated user in some way – and move away from this system of very targeted, very quantified identification of everyone that interacts with the system.
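A minimal sketch of that tokenised-access idea, assuming a hypothetical service that stores only a hash of each issued token: the server can verify "this holder paid" without keeping any link to who paid. A production design would add blind signatures (as in Privacy Pass) so even the issuer can't link issuance to redemption:

```python
# Sketch: tokenised access. After payment clears, the service issues
# a bearer token and stores only its hash, so the access credential
# carries no link back to the payer's identity. Names are illustrative.
import hashlib
import secrets

VALID_TOKEN_HASHES = set()

def issue_access_token() -> str:
    token = secrets.token_urlsafe(32)   # handed to the user
    VALID_TOKEN_HASHES.add(hashlib.sha256(token.encode()).hexdigest())
    return token                        # server keeps only the hash

def has_access(token: str) -> bool:
    return hashlib.sha256(token.encode()).hexdigest() in VALID_TOKEN_HASHES

t = issue_access_token()
assert has_access(t)   # "an authenticated user in some way", no true name
```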

We want to get away from globally unique identifiers, or universally unique identifiers, and instead have – at least as an available mechanism, even if it's not the default, even if it's not the standard – globally common identifiers: a fingerprint that matches everyone on the system, if they choose to use it.

Basically we need to divorce identity from persona in a lasting, enduring and reliable way for people who choose to do it.

Active Attacks

0:15:35

Mark Nottingham: Thank you. I would just ask one more very short technical question, hopefully.

We're designing some systems that rely upon attackers staying passive and my question is, is that a reasonable assumption to make with the systems we're talking about or are they apt to go active?

Edward Snowden: No. They've already gone active. I mean there's the QUANTUM system that NSA uses. Anybody can search for this, there's QUANTUMHAND, QUANTUMINSERT, things like that. It's a whole suite of tools and this is just the NSA, right?

There are governments all around the world, criminal groups, malicious hackers, who are developing the same toolkits or already have them, where they'll basically identify an unencrypted traffic stream and then manipulate it from the network path.

They'll inject an IFRAME into it that's malicious, they'll basically hijack the DNS request – they'll set up a race condition where they can respond before the actual authority responds, to send you to a server that they control – things like that, in order to deliver some sort of browser exploit, get on the box, and do whatever they want from there.
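The DNS race is easy to see in miniature. A toy sketch, with invented timings and addresses: a plain-UDP stub resolver takes the first matching reply, so an on-path attacker who answers before the real authority wins:

```python
# Toy model of the DNS race condition described above: the stub
# resolver accepts whichever reply arrives first. Timings, names,
# and addresses are illustrative assumptions.
import queue
import threading
import time

answers = queue.Queue()

def authority():
    time.sleep(0.05)                          # the real server is 50 ms away
    answers.put(("93.184.216.34", "authority"))

def on_path_attacker():
    answers.put(("10.66.66.66", "attacker"))  # sees the query, replies at once

threading.Thread(target=authority).start()
threading.Thread(target=on_path_attacker).start()

ip, source = answers.get()                    # first answer wins the race
print(f"client connects to {ip} (from {source})")
```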

But this is a question about where we start, right? We can't fix everything at once. We need to be able to basically armour everything in transit, because that makes things very difficult to manipulate. We need to provide some sort of proof that it hasn't been tampered with, whether that's through signatures or whatever, and then we need to think about how we actually harden the end. How do we get there? But it's a process – standard by standard.

Of course, the IETF – you guys are above the wire, but you're below the apps. So the endpoint is very difficult; that's really the hardest problem. But right now, the most dangerous problem is the path. It's the network path. The more middleboxes there are, the more attack surfaces there are, right? A lot of people are like, "Oh, middleboxes are the solution to our security." Middleboxes are actually the problem with security. If you...

applause

That's not to attack the middlebox vendors in the room – you guys just have to actually think about this: how does the adversary look at the path?

The simplest path – and by that I mean the one which has the least complexity, the least attack surface – is the safest path. The simplest path is the safest path.
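The arithmetic behind that claim is worth spelling out. A sketch, assuming a 1% per-device compromise probability and independence between devices:

```python
# Sketch of "the simplest path is the safest path": if each on-path
# device is independently compromised with probability p, the chance
# the whole path is clean decays with every box added. The 1% figure
# is an illustrative assumption.
p = 0.01
for hops in (1, 3, 10, 30):
    print(f"{hops:2d} middleboxes -> P(path clean) = {(1 - p) ** hops:.3f}")
# 1 -> 0.990, 3 -> 0.970, 10 -> 0.904, 30 -> 0.740
```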

And the idea here is, when you add each of these steps, each of these layers of complexity, you're adding another point of presence where the NSA or whoever can go, "Let me add this to my box. I have an exploit for a Juniper or whatever." Any kind of appliance that you drop on the path is a potential vulnerability, so those should only be added when you absolutely cannot avoid it.

In general, we wanna think about, again, making the path non-interfering. It's not necessarily a question of nothing at all being on the path, but we have to think about what the user's intent is, and how we fulfil the user's intent without observing it, analysing it, or in any inappropriate way interfering with it at all, if possible.

Mark Nottingham: Thanks. Just so you know we've got about six more people lined up to ask questions.

Edward Snowden: Sorry.

Mark Nottingham: No worries.

Human Rights

0:18:55

Niels ten Oever: First, I'd like to thank you for your courageous work. Over the past years in the IETF community, there has been an increasing trend toward privacy, which has been accelerated by your courageous work – for example, the statement on pervasive surveillance and the guidelines for privacy considerations. But do you think there could also be further work to protect other human rights on a practical level, such as freedom of expression?

Edward Snowden: Yeah. This is – so this is very much a delicate topic. It's a very important topic and it’s a big question, so I can't really get at it in depth.

I spoke a little bit about identity, and this really speaks to that. When we think about human rights on the internet, the ones that immediately come to everybody's mind are privacy and security, right? You can add confidentiality in there as a part of privacy, and the idea is: the Universal Declaration of Human Rights, the US Constitution, the International Covenant on Civil and Political Rights – all of these frameworks that say what human rights are – they all say we have to be protected in our communications against arbitrary interference, at any kind of scale, targeted or not, whatever they can justify.

Unfortunately, the internet has many ways of providing a very cheap and effective means of interfering with those rights.

So, yes, everybody knows about that, right? But when we think about identity, when we think about these other things, we need to think about other human rights principles which are very difficult to enforce through national mechanisms. We can pass the best human rights laws in the world in the United States, in the European Union, in Canada, wherever – it's not gonna help people in Russia, in China, in Brazil, anywhere else.

But if we create a fabric – if we create a network, through protocols, that provides means of access that can be guaranteed safe, where the internet that you get in France is just as good as the internet that you connect to from China – that's a net win for human rights.

When we think about access, we also need to think about things like non-discrimination. But again, how do you enforce non-discrimination, something like that, at the protocol level? It seems very difficult – until you think about the mechanisms through which discrimination occurs in the human rights context, and that's through affiliation, through association, again through identity.

By anonymising people, by allowing them to divorce themselves from their physical identity – whether they're a member of a minority group, a religious group or something like that, or they have a political affiliation that could get them jailed or killed in their country – if you allow them to divorce themselves through the technology, you're providing human rights ahead of the law, rights that the law will eventually recognise and support.

And this is something that I think people do not realise yet. Thinkers do, a lot of technologists do, but the broad public doesn't really think about the progress of human rights in the historic context.

Nobody was thinking about the speech rights of somebody when they were living in caves and running from tigers. That doesn't mean that human rights don't exist. It means that they are a product of civilisational progress, and we, in the technological community, are the vanguard of that progress today. We can provide new and enduring human rights that generations of people will enjoy, and that will be a net gain for humanity as a family. But even if we don't go that far, I think we have not just the right but the duty to ensure that the human rights that we ourselves inherited will be passed along to those who come behind us. We can use technology as the mechanism to achieve that.

Niels ten Oever: Thank you.

DNSSEC Centralization

0:22:59

???: First of all, I'm sad for the state of US law. Sorry.

But I would like to ask one technological/philosophical question. You mentioned DNSSEC, and yes, of course, it makes some things harder – it would allow us to eliminate some plaintext and so on from the wire – but on the other hand, it basically centralises everything in one place, which, if we consider such a powerful adversary as we know we have, may be problematic.

Of course, I'm interested in your thoughts about that, because I don't see a clear path out of this situation. But at the same time, I'm, yes, a DNSSEC promoter and implementer, so I have a personal problem with that: on one hand, it eliminates some plaintext on the wire, but on the other hand, it may just add a false sense of security, because we will think that it's secure, but we all know that it's not. What's your thought about it?

Edward Snowden: So, I agree with you and I mean this is what's important about the IETF. Just because I say it, doesn't mean it's gospel. I can be wrong about an incredible amount of things. Nobody should trust me. Nobody should grant any sort of outsized weight to what I say.

When I talked about DNSSEC, I mentioned it in conjunction with DANE and the DPRIVE initiative as well, because the whole idea is that, yes, providing some mechanism for authenticating the responses to DNS queries is valuable. It's not an end in itself.

We still have to be able to say, "Well, all right, the certificate that you're getting for a server is also reliable," and then we have to actually do more to armour the requests themselves, to make sure that they don't become a new vector, that they don't get manipulated.

Who knows – maybe eventually the DNS responses themselves that are provided through this become some sort of vulnerability because of the way they're parsed or whatever. But the whole idea is that we gotta start somewhere and then iterate from that point.

We've gotta begin building, and when I think about things like DNSSEC, I don't think it's a golden age where we can solve all of the problems, but I do think that it's a start. It's better than the status quo. It's better than what we have today.

And by getting the community thinking, by coming together and trying to develop some kind of solution, some kind of standard, we can start developing things that will allow us to build a bridge to the next generation of what we need to protect us against the next generation of coming attacks, and there's a lot of things that get in there. I mean cryptographic agility is one of the big hot things that we have to deal with as well.

???: Thank you very much. So, let's implement it.

Edward Snowden: Thank you.

Corporate Networks

0:26:00

Joe Hildebrand: When we're doing security work here, we often hear pushback on having default-secure kinds of modes because of things like corporate networks, where the corporation owns the network, they own the computers, and so they feel like they're justified in doing data-loss-prevention sorts of things. Those kinds of requirements act as a ratchet down on the security of the kinds of things that we can build.

I'm wondering if you thought at all about ways that we can do requirements analysis that would help us set the ratchet in the other direction?

Edward Snowden: So, one of the biggest things here – and I think this is really the central thing – is that it comes back, again, to who the internet belongs to, who it serves, who the IETF's ultimate customer is. And the corporations – they have significant influence, vendors have significant influence inaudible.

They're not the enemy. They serve; they make vital contributions. But we need to be careful about granting them an outsized influence, particularly when they have alternate means, other mechanisms, of achieving their goals.

When you think about things like DLP – Data Loss Prevention – and whatnot on the network, and the relationship of that to things like secure protocols, you have to go, "All right, the benefit that the vendors, that the corporate customers, that the enterprise gains in convenience from having a less secure protocol is being paid for at a cost that's a threat to the safety of users around the world. Which one of these should be weighed more heavily?"

And I think it's not too much of an ask to recognise that the user at scale – the internet as a body – has a greater interest in security than the enterprise has in convenience, because the user cannot necessarily mitigate their circumstances. They lack the resources. They lack the capability. They lack the technological sophistication to protect themselves.

However, the enterprise has root on all of their endpoints. If they want to implement any kind of DLP solution, they can do so. They can run their own homegrown system. They can implement a commercial system. They can put out their own certificate authority. They can do whatever they want because they have root on all the systems.

It's just like when you're thinking about the NSA. They've got the Tailored Access Operations office. These are the hackers; these are the guys who will go and get root on the boxes when we say, "I wanna know what that guy is doing. Get rid of it."

And whether they throw an exploit at it, whether they e-mail an exploit, or they inject something into the network traffic stream, or whatever – they get root. And then, no matter what kind of personal security products that person is using, what kind of encryption they're using, it doesn't help them, because the NSA is now their systems administrator.

The enterprise is their own system administrator. They can handle this through alternative mechanisms that don't have the same costs for the global inaudible.

Joe Hildebrand: I think that’s a useful construction. Thank you.

Mark Nottingham: So we’ve been going for about half an hour. I know you might be time constrained or –

Edward Snowden: Well, let’s extend a little bit longer; about ten more minutes, if that’s okay with you.

Mark Nottingham: That’s just fine.

Bitcoin

0:29:35

???: What scared me most about the documentary – and some of the comments you're making even right now – is not the data, which we can encrypt, but the metadata and the correlation. So the fact that you share our point of view is going to get you in real, real trouble.

laughter

Edward Snowden: Probably. I just moved a couple of steps up on the list after this discussion.

???: Exactly. So my question for you: the first example you gave was the credit card being so dangerous. You talked about how you can anonymise the endpoint, but then, if I'm talking to you, how do I know I'm talking to you, and how do we have a meaningful conversation without the NSA knowing who's talking to whom? And the big thing might be money.

So what about Bitcoin, or things like Bitcoin, that say at least the market transactions get anonymised when...

Edward Snowden: So, the Bitcoin thing – I mean, nobody really likes to talk about Bitcoin anymore, but there are interesting concepts there. Obviously, Bitcoin by itself is flawed. The protocol has a lot of weaknesses on the transaction side, and a lot of weaknesses that structurally make it vulnerable to people who are trying to own 50 percent of the network, and so on and so forth.

But when we think about the basic principles behind it, there are some very interesting things, particularly when we start to combine them with that earlier idea of tokenisation, or with concepts like proof of work.

Are there other means through which people can basically pay for access other than direct transfers of currency that originated with an association to their true name?

The other ones are inaudible mixing networks, for example, where we have multiple steps, just like Tor – they've got these mixers inaudible in the Bitcoin universe, where they tumble the transactions so that the Bitcoins that go in to pay for your purchase aren't the same Bitcoins that go out.
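A toy model of that tumbling idea in Python – deposits go into a pool and different coins come out, breaking the input-to-output link. Names and the three-user pool are illustrative; real mixers are far more involved (and bring their own trust problems):

```python
# Toy model of a mixer/tumbler: a pool takes in coins from many
# payers and pays out different coins, breaking the link between a
# deposit and the coin that comes back out. Purely illustrative.
import random

deposits = {"alice": "coin_1", "bob": "coin_2", "carol": "coin_3"}

pool = list(deposits.values())
random.shuffle(pool)                 # the "tumble"

payouts = dict(zip(deposits, pool))
for user, coin in payouts.items():
    print(f"{user} deposited {deposits[user]}, withdraws {coin}")
```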

But focusing too much on Bitcoin, I think, is a mistake. The real solution is, again: how do we get to a point where you don't have to have a direct link to your identity all of the time? You have personas. You have tokens that authenticate each person, and when you want to interact with people as your persona, in your true name, you can do so. When you want to switch to a persona – a common persona, an anonymous persona, a shared persona – you can do that. When you want to move to a pseudonymous persona, you can do that.

A lot of these are difficult problems, particularly when we talk about the metadata context, the signalling context. And there are actually some really bad proposals, I think – and this is no offence to anybody who works on these particular problem spaces – but again, it gets back to the middlebox space.

We've got proposals like SPUD, for example, where they wanna make UDP a new channel for leaking metadata about the user's intention. They want to be able to –

applause

I get the feeling that there are a lot of people in the audience who are concerned about middleboxes. I didn't know...

laughter.

All right. So the idea here is, we can all understand the incentives of these vendors. They want to be able to provide mechanisms for tiered pricing. They want to be able to provide prioritised service at increased rates. They want to be able to say, "Whatever – we'll kick you down a tier and we'll charge you less." And these things are great, but again, those are their incentives, right? Those are not the internet's problem sets. Those are the vendors' problem sets.

And when we think about the things they talk about – all right, they want to be able to innovate in the protocol space. Good – so does everybody, right? This isn't a thing where the vendor is against the IETF, or the vendor's against the technical community, the academic community, whatever.

We're all partners here, but we need to think about where the actual problems of this ossification in the protocol space originated, and it's actually not from the IETF. It's from internet access providers. It's from network service providers. It's from Level 3, Hurricane Electric – people in the middle, people running middleboxes, setting their firewall settings to a point where basically there's no space for innovation, because they go, "Oh, well, we don't recognise it. It must be malicious." They don't update it. They don't basically tend to the garden that we're all collaborating on. And so the question becomes: how do we try new mechanisms? How do we create new incentives for everybody to work together here?

And I think the first is to recognise that, when it comes to the global security problems we have with internet communications today, we have to look at each new proposal being put forward and ask, "Does this create more problems than it solves?"

And if it's creating more metadata associated with user preferences – metadata that can be intercepted, that can be manipulated, that can be injected into the stream – that is in general a very bad thing. We need to be reducing the amount of metadata that's leaked as part of a user's communications, invisibly to them, not increasing it.

And in general, I think we need to get to the point of intent. What is the user's intent as they interact with the internet, as they interact with their community, as they interact with the associations that they have with their friends, their connections, whatever?

And how do we ensure that our standards, our protocols, our technology, the systems that surround us every day, are working to support, to protect, and to armour the user's intent, rather than to betray it, or to monetise it, or to take advantage of it in some way that might not be the end of the world, might not be the worst thing in the universe, but is not compliant with the user's actual intention as they engage with it?

If you want to provide those mechanisms, that's fine, but in general, they should be transparent, they should be opt-in. They shouldn't be things that we're baking into protocols, particularly when there's no proof that the problem can't be solved by another mechanism – like simply exposing the firewall settings to the user.

???: Thank you.

Edward Snowden: Let's actually take another question.

ISPs

0:36:30

Lee Howard: Choice of three questions. You can say which one to answer.

Edward Snowden: laughs

Lee Howard: The first question was gonna be: is there any government in the world that has the surveillance or intelligence-gathering regulatory framework right? The second one was: Bruce Schneier gave us some very specific assignments at the IETF, things he said you guys need to go do. Is there anything that we should specifically work on?

And the third one is, I work at a large US ISP, what can I do? What should we be doing?

Edward Snowden: Phew! laughs Those are good questions. We need another half hour for that. The US ISP one is a good one. I would actually say that you're better informed to answer that than we are.

Because the reality is, a lot of people who work around these programs work in the political spaces, in the political context of them, or in the academic contexts – we can understand the academic realities of these problems in the security context, the user context and the political context, but we don't have the data on the practical context.

We don't know what the actual cost is of, for example, losing middleboxes, or of putting in the middleboxes – all of that stuff that everybody in the room really hates right now, as we've seen. If you could help us understand that a little bit better, that would be valuable, because it'd allow us to have a more nuanced conversation.

But I won't get too far into that, because again, I think it's pretty well established that everybody in the room has views on middleboxes that are probably a lot more developed than my own.

You mentioned Bruce Schneier's marching orders. When I think about the next five years, ten years, 20 years, 50 years of the internet, I think in particular about the encrypted space – I don't know if anybody in the room is from the CFRG.

By the way, the CrypTech project is awesome. I love what you guys are doing. I hope to see more of that.

But the CFRG specifically looked at elliptic curves – that's important work, that's valuable work – but something that I don't see a lot of being done – and I'm not up on the literature here, I could be totally wrong – is actually quantum-resistant cryptography.

We need to be able to start moving away from algorithms, at least in the asymmetric space, that are reliant on prime factorisation or discrete log problems. Because when you think about the reality of encrypted blobs of data, right – when we've got ciphertext out there – and when you think about ephemeral communications, right, if they've got perfect forward secrecy, that's one thing.

But when we think about people collecting everything off the lines, when we think about people being able to collect everything at service providers, building up long-lived data sets, the problem looks very different.

When we have, for example, encrypted blobs that have been exfiltrated through one means or another – they've been copied, they've been stolen, but they can't be broken today – they'll still be on disk in ten years. We need to have algorithms standardised that will be able to basically survive quantum cryptanalysis, because god knows how long it takes to actually adopt these things, get them in front of the users, and get them being used by default.
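One widely discussed answer to this "harvest now, decrypt later" problem is hybrid key exchange. A sketch using the Python cryptography library for the classical half; the post-quantum half is a labelled stand-in, since a real design would use a standardised KEM such as ML-KEM via a library that provides one:

```python
# Sketch: hybrid key exchange. Mix a classical X25519 secret with a
# post-quantum KEM secret so that breaking either one alone is not
# enough. The PQ half below is a placeholder, not a real KEM.
import hashlib
import secrets

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Classical half: X25519 Diffie-Hellman.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())

# Post-quantum half: stand-in for a real KEM encapsulation (e.g. ML-KEM).
pq_secret = secrets.token_bytes(32)

# The session key depends on BOTH secrets.
session_key = hashlib.sha256(classical_secret + pq_secret).digest()
```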

And I would say that is probably the most important problem that I don't see getting a lot of press, that I don't see getting enough attention.

Lee Howard: Thank you.

Mark Nottingham: So how are you going on time?

Edward Snowden: Well, I think we can take one more actually.

MitM

0:40:19

Alan Johnston: Mr Snowden, thank you for taking the time to talk to us. It's really an honour to have you here.

A lot of us are concerned about man-in-the-middle attacks on the protocols that we design. It's great encrypting your traffic, but if you encrypt it right to your – right to an attacker, then that's not so good from a privacy perspective, and we're looking at techniques for preventing that – for example, for protocols and technologies like WebRTC, which we're using right now to communicate – and for doing it in ways that don't involve third parties or middleboxes. I just wondered if you had any advice for us on whether we should be trying to solve this problem, or what we should be doing about it.

Edward Snowden: So, that's a hard one for me to get to, because it's not really my central point of expertise. Normally, when I think about how you authenticate that you're not being man-in-the-middled, I actually think in traditional tradecraft terms; I think of out-of-band authentication.

But of course, that doesn't work inaudible. Nobody's gonna be scanning QR codes in real life. Nobody's gonna be – just as a fact of the working process – using challenge/response, even though they should, even though that's best practice. We do need better means to do it.
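The traditional out-of-band check he refers to can be sketched as a short authentication string, as in ZRTP: both ends derive a few words from the session key and the humans compare them over a channel the attacker can't rewrite. The word list and string length here are assumptions:

```python
# Sketch: out-of-band MitM detection via a short authentication
# string. Each side derives words from what it believes is the shared
# session key; a MitM holds different keys with each side, so the
# spoken strings will not match. Word list and lengths are assumptions.
import hashlib

WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliett", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def short_auth_string(key_material: bytes) -> str:
    digest = hashlib.sha256(key_material).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:4])

# Both parties read their string aloud; a mismatch reveals the MitM.
print(short_auth_string(b"session-key-as-seen-by-alice"))
print(short_auth_string(b"session-key-as-seen-by-bob"))
```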

Unfortunately, I don't know of anybody who has a better solution, but if somebody can develop it and you wanna go and get the equivalent of the Nobel Prize from the Snowden community, you can.

laughter

But there is something that this touches on more broadly when we think about man-in-the-middle – and I know this has been controversial recently – which is that people think about actual subversion of the standardisation process itself.

We think about NIST, we think about Google, we see the CFRG. We think about the recent allegations that have been made about community members – people who are associated with the NSA and things like that – and these are real concerns. At the same time, I think we have to be really careful about basically swinging inaudible.

Just like we've got a presumption of innocence in courts, we should have that in the community as well. We do have to be careful. We do have to be suspicious. But we should really evaluate people on the basis of their work, as opposed to whisper campaigns and things like that.

I can tell you as a inaudible – I worked at the CIA before the NSA, right? I worked with the NSA too. I have seen actual operations where the CIA has sent human agents into organisations overseas – big organisations, not standards organisations specifically, but big infrastructure companies – because they wanted to be able to penetrate them.

And the idea that you can have a mole in your organisation is terrifying.

At the same time, I know from this kind of experience that if you think the guy working on your standards process who's sitting right next to you is this evil, moustache-twirling kind of villain who's trying to destroy everything, and he's laughing inaudible – it doesn't work like that.

The NSA is not gonna tell the guy who's sitting on the standards forum with you, "All right, you need to push this change because it's gonna weaken encryption here. We're gonna be able to pwn everybody because of it, and you'll have destroyed internet security, and we'll all laugh about it – it's great." These are people who are basically doing bad things for good reasons. They're trying to inaudible.

Now you can argue about intentions and everything like that and that's fair. You can say it doesn't matter what their intentions are, the ends don't justify the means, and I agree with that.

But at the same time when you think about how this would work, the NSA is not gonna want some guy sitting in an IETF meeting to slip up, to say the wrong thing and reveal something about their quantum cryptanalysis capabilities.

Instead, they're simply going to tell this person, "Look, you're on this forum. You need to steer away from this, because it's insecure" – even if, in reality, it's actually very secure – "and you need to steer toward this, because it is secure" – even if, in reality, it's actually a backdoor.

So, we need to take some of the fear out of the community and instead focus on vetting everything to the best of our ability and recognising the work.

When we're dealing with top-level adversaries, when we're dealing with people who have nearly infinite resources, they're going to be able to interfere with processes through one means or another. They're going to be able to find some method of subversion and some method of access, because that's what they do. They're gonna be there tomorrow, they're gonna be there the next day, and all around the world, every organisation is doing this.

The real way to defend against that is to focus on the principles: focus on basically enabling the user's intent, focus on protecting it at all costs, and don't carve out exceptions – when people talk about lawful access and things like that, we need to think about what happens when China knocks on the door and asks for the same thing – and basically vet things as best as you can.

If there's someone who appears to be suspicious and is making suspicious contributions in a working group, don't necessarily say this person's an enemy agent or something like that. Simply scrutinise the work and find the weaknesses in it, if at all possible. That's the best you can do, and it's the only way we can remain an open and inclusive community that's not basically trying to saw its own arms off because we're afraid something might be compromised.

applause

Mark Nottingham: So, we have a couple more people lined up, but I'm sensing that it's getting late for you?

Edward Snowden: Yeah. It's pretty late, but all right, let's take one last one.

laughter

applause

Wireless

0:46:08

The question was about wireless security; e.g., 802.11

Edward Snowden: Yeah. So wireless is one of the biggest, most important, most difficult problem spaces to resolve, because again, it gets back to the same thing: we don't want domain conflicts when it comes to addresses, like MAC addresses and stuff like that.

I'm not sure that the system that we use today is actually the best system – this whole thing where we've got burned-in, globally unique MAC addresses.

These identifiers are extremely dangerous. If you go to your search engine and search for NSA and SHENANIGANS – the program is literally called SHENANIGANS.

They set up an airplane that basically had a big Stingray on the bottom of it that just listens for all of the ARPs, all of the basic wireless logins, from everything that's on the RF spectrum out in the wild, and then they fly this over cities.

Now this initially started in the war zones. They were doing it in Yemen, in places like that, because they were looking for particular handsets and then they were aiming missiles at them.

They were literally – when we talk about the metadata problem there – we've got the former director of the National Security Agency, Michael Hayden, who said, "We kill people based on metadata," and he's not lying.

We don't know who's holding it at the time we launch the missile. We just know, "Hey, look, that's the IMSI, that's the IMEI pair. Let's send something at it and see what we get," and that's how inaudible get hit.

At the same time, legitimate targets get hit by this occasionally too – one would probably say more often than not; one would hope more often than not – but that's the reality of that.

So the question becomes how do we resolve that? How do we fix that? Is there any other way of doing this?

And the question becomes: is there some way that we can survive having, for example, domain conflicts in the MAC space, or any other kind of global addressing space, and de-conflict those in software? De-conflict those in some kind of identity space that the user can change, something keyed to what they do and where they go: "All right, whatever I did with my phone last night when I called my girlfriend, I wanna change that, because of whatever," or "I'm going to a political rally. I don't want the local police to know that my handset was at this protest, because I don't want to end up on a list – and then I'll roll back to my identity after the protest."
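That rotation is straightforward to sketch: generate a fresh, locally administered MAC address on demand, the same mechanism modern phones now use to blunt this kind of RF collection. A minimal Python example:

```python
# Sketch: user-controlled identifier rotation. Generate a random,
# locally administered, unicast MAC address that can replace the
# burned-in one before (say) attending a protest.
import secrets

def random_mac() -> str:
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE  # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

print(random_mac())  # rotate before the rally, roll back afterwards
```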

These are things that should be exposed to the user, because here's the problem. When we think about launching drone strikes at actual terrorists – people who are making bombs, planning to place bombs in civilian areas, who have actually killed people – we want to have effective means of preventing that. We want to be able to save lives. We want to be able to keep people safe.

But the problem is, this SHENANIGANS program is now being used in the United States by the US Marshals Service, by the FBI, under a different name, where they're trying to find the same sort of inaudible IDs for common criminals.

There's a trickle-down problem when it comes to the development of intelligence programs. These agencies have extraordinary resources, extraordinary capabilities. These resources get funnelled into new capabilities to subvert the global infrastructure that we all rely upon, but then, eventually, the government becomes very accustomed to using these capabilities in one problem space.

The people who developed that solution then move to a different agency, or they move to a different area. They start to cross-pollinate government agencies, and internationally. They create private companies that commercialise these products, and things like that, and suddenly we see things that were used in very narrow circumstances – where there was literally no alternate mechanism for addressing what they claim as a legitimate governmental need for intervention – being used against everybody, in their daily lives, as a matter of course.

If the idea of an airplane circling a city – one that's manned and has a limited flight time – figuring out what building every handset lives in, and then associating with it what other handsets spend the night in that building, scares you, think about 20 years from now, when there are drones that are powered by solar, in a persistent orbit, doing this 24/7 over every major point of presence. Even if you're not connected to the internet, they can see that your handset ID – that MAC address on your MacBook or your iPhone or whatever – has moved from city A to city B to country C to conference D.

These are not things that are crazy. These are not things that are extremely paranoid. These are obvious solutions that are being developed today because they're cheap and they're easy. This is something you don't even need a grad student to do. This is something a couple of bright undergraduates could do in a couple of months.

And if we can do it that simply – minus the whole inaudible flight thing – think about what somebody with 75 billion dollars a year to spend on this stuff can do. That's the combined intelligence budget of the United States Intelligence Community per year, including military spending.

So we really have to figure out not just what the problems are today, but how we fix things on an enduring basis for the future. How do we anticipate where things are likely to go, and resolve things at the infrastructural, fundamental level, so we can go, "Look, we don't want to put ourselves in a position where we are vulnerable to this"?

And that's the reality today. We've got governments in the United Kingdom that are saying, "We want to be able to ban encryption," or "Okay. Maybe banning encryption isn't reasonable. We want to demand that service providers backdoor their encryption. We're gonna say if you wanna sell a device here, you have to be able to provide key material to us. We want to basically weaken security standards in this nation or that nation or this jurisdiction or whatever jurisdiction."

The reality is the United Kingdom does not own the internet. The United States does not own the internet. The public owns the internet.

They are our customer and the way we solve this, the way we protect this, the way we protect the global constituency of users that we represent is to make everybody safe all of the time. Otherwise, we're trying to pick good guys and bad guys and we're making ourselves vulnerable to changes in government of the good guys.

They might be the most enlightened people in the world today, but they might not be in five years, in ten years, and the only safe political principle for dealing with that is to focus on safety first.

Don't make compromises when it comes to your rights. The only rights that anybody enjoys, in any country in the world, at any time, in any place, are not the rights that you say exist, not the rights that you believe in, but the rights that you stand up for, that you assert and defend. And I think that's what we should do.

applause

Mark Nottingham: So this is the last question, can we impose upon your time a bit more?

laughter

Edward Snowden: All right. I'm gonna have to keep this one short for real, but all right.

laughter

Technology and Democracy

0:53:50

Monika Ermert: I'm a journalist. I have a long list of questions, but I picked one for you.

laughter

So hi-tech is likely always going to be a domain where a few experts have the knowledge to understand and control the systems. Could it be that the use of hi-tech fundamentally undermines democratic society? Or, kind of – how can we do something to educate users, or...

Edward Snowden: I think actually the IETF is a great example of why it's not the threat that people might be concerned about. There are contexts, there are circumstances, where people could think about that, and that's a real danger to the standard control of governments. However, when you look at the IETF, they literally don't make a decision unless it's based on consensus.

There are no requirements. There are no academic standards or qualifications that anybody has to meet before they can be involved in a working group.

Literally, anyone can join, anyone can participate in the process, anyone can make themselves heard, anyone can influence the standards that we develop, put forth, and decide on.

I would say that the direction of the internet's future on a policy basis, how the technology is going to work and the kind of internet that we will all enjoy, the technology that we will all rely upon in the future is a more open system today than it ever has been in the past.

It's a more inclusive community than it ever has been before, and the reality is, if the internet and technology do become a danger to us in the future, it's our own fault, because we decided not to participate, and we let other groups and other influences decide for us, rather than being part of it and shaping it for our own needs.

Thank you. I really have to go.

standing ovation
