Tor Holes: Learning how to teach Tor from the Harvard bomb threat

In education, we talk about the “mental models” students have of the subjects they are learning: the understanding they have of a system. In computing education, this encompasses quite a bit — models of how a computer works, of its current state, of what a computer language looks like, of the shape of a network, etc.

As it happens, the bomb threat sent in to Harvard recently presents us with an opportunity to compare one user’s mental model with the actual threat model — encompassing the school network, the people and agencies able to access its traffic logs, and the tools the threat-sender used.

From what we think we know about the story so far, the guy sending the threat took precautions which he thought would protect him. He used Guerrilla Mail, a service which provides disposable one-time email addresses. And he used Tor, which disguises where the sender’s message is coming from.

Unfortunately for the guy sending the threat (and fortunately for the rest of us who aren’t fond either of bombs or of students who make unreasonable attempts to escape from final exams), the choices he made left him vulnerable to the best-known attack against someone trying to hide behind Tor: a timing attack. If you have 1) a record of who’s using Tor on your campus and when, 2) the knowledge that a message reached your machine through Tor, and 3) the time stamp on that message, it is not too hard to tell from the timing which user sent it.

Most of the time, Tor users are somewhat protected by the fact that the place they’re using the Internet from (the local Internet cafe, their own Internet service provider) and the place they’re sending a message to (I dunno, someone else’s Gmail account) are not under the control of the same people. When you put both ends in the hands of the same Internet service provider, it gets much easier to figure out that the person whose traffic goes into Tor at time X is the same person whose traffic comes out of Tor a short time later.
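To make that concrete, here is a rough Python sketch of what the correlation boils down to. Everything in it is invented for illustration: the usernames, the log format, the times, and the ten-minute window. The point is just that once the same party holds both the “who was on Tor” log and the message timestamp, the attack is little more than a filter.

```python
from datetime import datetime, timedelta

# Hypothetical campus log of which accounts connected to Tor and when.
# Usernames and times are made up for illustration.
tor_connections = [
    ("user_a", datetime(2013, 12, 16, 8, 10)),
    ("user_b", datetime(2013, 12, 16, 8, 29)),
    ("user_c", datetime(2013, 12, 16, 11, 45)),
]

# Time stamp on the message that arrived through the Tor network.
message_sent = datetime(2013, 12, 16, 8, 30)
window = timedelta(minutes=10)

# The "attack" is just a filter: who was on Tor around the time
# the message went out?
suspects = [
    user
    for user, connected in tor_connections
    if abs(message_sent - connected) <= window
]
print(suspects)  # ['user_b'] -- the only account on Tor near send time
```

A real investigation would be working from messier flow logs rather than a tidy list of usernames, but the arithmetic being done is no more complicated than this.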

When I mentioned there was something missing from the bomb-threatener’s mental model of Tor, a friend commented, “seems like this is basically ‘don’t box from your own phone’, which is the original rule of mischief.” “Boxing” here refers to “blue boxing” — an old, pre-Web phone phreaker technique for making free long-distance calls. You wouldn’t do that from your own phone, because the phone company can track who’s making which calls. Anyone who had dabbled in this old hacker art would know that basic rule: the people who own the network can tell what’s going on on the network. They have always kept logs, and not just to track you: also to bill you.

So it seems the bomb-threatener was missing some key knowledge about the network and how it interacts with Tor. Did he not know that Harvard keeps logs of all network activity (most colleges do, since they act as Internet service providers for their students), and that those logs include information about which computers are using Tor? Was he not aware that messages coming out of Tor may be identifiable as having gone through the Tor network? Did he think that Tor was some sort of magic cloak that would cover everything, from start to finish, even between him and the Harvard-controlled router he may have had to log in to before he could use the Internet at all?
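On that second question, the answer is yes: the addresses of Tor exit relays are public (the Tor Project publishes them), which is exactly what lets a recipient notice that a message came out of Tor. Here is a minimal sketch of that check, assuming you have saved a copy of the published exit list to a local file; the file name and the sample IP address below are invented.

```python
def load_exit_ips(path="tor_exit_ips.txt"):
    """Load a saved copy of the public Tor exit-relay list, one IP per line.
    The file name is hypothetical; it stands in for the published list."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def came_via_tor(source_ip, exit_ips):
    """A message whose apparent source IP is a known exit relay came out of Tor."""
    return source_ip in exit_ips

exit_ips = load_exit_ips()
# 203.0.113.55 is a documentation-range address standing in for the
# source IP the receiving mail server saw on the message.
print(came_via_tor("203.0.113.55", exit_ips))
```

So “went through Tor” is not hidden from the recipient at all; what Tor hides is which user, at which real address, was on the other end of the circuit.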

Not that we need to ask that guy in particular. But as we help teach people about Tor and other tools, being aware that their mental models may be prone to holes shaped like these — a client-to-router hole, a “Tor can’t be identified as Tor” hole, a “why would the network keep track of traffic?” hole, a “Tor is just magic” hole — can help us anticipate and address these kinds of confusion.

My own current mental model of Tor, btw, is very much informed by the EFF’s neat interactive graphic on Tor and HTTPS. I’ll write more about the strengths of that graphic later, and more on mental models soon.
