I’ve been attending trainings on using Tor, encryption, and other security/privacy tools lately, as OpenITP is exploring what we can do to help along these lines. From an educator’s standpoint, a lot of good work is being done already; trainings have a lot of hands-on components, are very responsive to student questions, and use good metaphors. But there’s always room for improvement. I wanted to share a few ways of thinking about training, from the toolbox I gained as a student at Teachers College, in hopes of starting a discussion about best practices in teaching digital security. Three tools for thinking about teaching come to mind: mental models, fragile knowledge, and “a time for telling.”
Mental Models: Find out what your learners know (or think they know)
As I mentioned in my post about Tor and the recent Harvard bomb scare: in education, we talk about students’ “mental models,” meaning their understanding (however faulty) of how something works. In computing education, this encompasses quite a bit — models of a computer’s current state, of what computer language looks like, of the shape of a network, etc.
Mental models are important to reckon with. Learners are not blank slates: what they already know matters a lot to how they build new knowledge. A big challenge is that learners may come into a lesson with a pre-existing mental model of the domain, and that model may be incomplete or incorrect. Learners may then build faulty new mental models on top of these bad foundations.1
While journalists and activists tend to be more tech-savvy than most, there are abundant examples in the general populace of misunderstandings about what is public, what is private, whether information resides on (or passes through) your own machine or someone else’s, and how network traffic flows. For a related example, consider the video above, in which a kid has cobbled together a number of half-grasped ideas about Internet traffic and is now dispensing advice on IP addresses to others. Or the bomb-threatener at Harvard, who didn’t realize that Tor couldn’t protect him from his own sysadmins. A little knowledge can be a dangerous thing. If we let learners graft new ideas onto misunderstandings like these without first correcting those misunderstandings, we may put learners at risk.
So it might be a good idea, at the beginning of a security training, to survey the room and get a sense of attendees’ conceptions of the security and vulnerabilities of the everyday tools they use (Facebook, Gmail, text messages), as well as their awareness of tools like Tor or email encryption and what they believe those tools do. We want to catch misunderstandings and correct for them over the course of our trainings, before learners build on them or act on them.
Generally, as a teacher you want to engage not only with what students already know but also with what they know they want to know — what we call “inquiry-driven learning.” The more you can address the places where students have identified that they don’t know something and want to know more, the more likely they are to remember what you say and connect it to their lives.
My favorite technique for engaging learners’ questions, preconceptions, and goals is a super-low-tech one I learned from Lalitha Vasudevan: use post-its or index cards. Hand ’em out at the beginning of a session, in the middle, at the end. Ask learners: what do you want to know? What do you feel like you still don’t get? What do you feel is going to be most important to you as a journalist/activist? Sort them out based on common questions, concerns, or misconceptions; respond to them in the course of your lectures. Point out misconceptions about privacy or network structure when you get to that point in your talk. The greatest thing about this technique is how much more feedback you get from the quietest participants, some of whom may be feeling shy about how little they know.
If trainers can pool knowledge on misconceptions, we can develop new materials that specifically address sticky points, or improve existing materials to discourage misconceptions. I’ve got some thoughts on existing EFF and other graphics, and I think a subsequent post will be an analysis of the relative strengths and weaknesses of various graphics.2
Fragile Knowledge: Teach concepts, not just steps
Most of us have probably had the misfortune, at some point, of being in a computer training so focused on step-by-step, click-this-button-open-that-enter-this instructions that we just wanted to bash our heads against the wall. The great thing I’ve seen so far in most security trainings I’ve attended and materials I’ve looked at is that this kind of mindless training isn’t happening: people are doing a good job of explaining concepts, using metaphors in hands-on activities, and connecting those concepts to practice. Most of us learned from the Internet, so we’re not likely to fall back on rote instruction as a primary means of teaching; we’re used to learning by doing.
Overly specific procedural instructions can lead to what we call “fragile knowledge”: an understanding that doesn’t support complexity or problem solving, and that tends to fall apart and leave the learner helpless when the context of a problem changes. Fragile knowledge can include “ritual knowledge” (“if I press these buttons in this order, the thing works, right?”), “naive knowledge” (partial or simplified conceptualizations of a problem which generally produce the right results, but may not always work), or “inert knowledge” (“OK, I’ve memorized how the Tor system works. Now, how do I set it up again…?”).
So long as we keep connecting the big picture of network communication and possible vulnerabilities to how the software can be set up and which tools to use when, we should still see people coming out of trainings able to adapt to new threats.
As those of us closer to the sources of this software start passing materials and training on to trainers further afield, however, it may be important to emphasize directly that simply lecturing, or teaching only a series of steps for setting up a program, may not help learners as much as having them work hands-on and soliciting their questions and (mis)conceptions. Teachers tend to teach as they themselves were taught. That could cut both ways: we could see trainers adapting the strong methods we use to teach them, or we could see them fall back on less successful methods that may be prevalent in their local schools or colleges. Being explicit about teaching methods — both what we expect will work and what trainers see working best in their own communities — will help ensure that none of us falls back on a method simply because it feels convenient.
A Time For Telling: Lectures shouldn’t come first
My dad worked at Caltech while I was growing up, and tended to have gadgets lying around his office which had been abandoned by various professors. One of these was a putty-colored thing in a velvet-lined case. It looked like a hearing aid, but when you hooked it on the back of your ear and tilted your head forward, it let out a shrill, annoying whine. It was supposed to keep the wearer — and anyone nearby! — from nodding off during lectures.
Everyone at security trainings generally seems highly motivated to learn, for one reason or another. But even when you’re eager to learn, being talked at for a long time isn’t always the easiest or most effective way to do it. While there is a lot of really engaging teaching happening at these training sessions, those of us who are used to giving presentations at conferences and the like do still sometimes fall into lecture mode. I should say that everyone has been great about stopping to answer questions, which is ideal; Seamus has told me he thinks the questions were what made his explanation of networks great.
One cool thing I learned at Teachers College is that there’s a time and a place for lectures, when they really are one of the most effective ways to convey information to learners. (I knew there was a reason I liked some lecture-prone teachers better than others… and why MOOCs, as they’re being sold, sound like a pedagogical nightmare.) The classic cognition paper A Time For Telling [pdf] suggests that students learn best from lectures when they’ve actively worked through a problem first. In the paper’s original experiment, the students compared two contrasting cases before hearing a lecture.
Comparing similar situations could fit really well into security trainings, and in fact I’m already seeing trainings shaped that way — Quinn runs role-plays and a series of hands-on activities about encryption, with slight changes to how messages are sent; Susan had students researching the challenges to journalists’ security in different countries. When planning a training, then, activities like these, and the discussions that follow them, may be most effective when they come before the slides and talks. (There’s a balance, of course, in ensuring students have enough information to work with beforehand, and I’m wondering how much we need to build up their knowledge of networks first.)
That’s it for now — next up will be a review of what’s working and what’s not working in graphics which illustrate how Tor works.
1 A classic bad preconception in science (which I only realized I had wrong when someone pointed it out to me in grad school!) is that the weather is colder in winter in the Northern Hemisphere because we are farther from the sun. In fact, it’s colder because we’re hit with more diffuse, less direct rays from the sun. It’s thought that this misconception may actually arise from well-intentioned science lessons — say, with graphics emphasizing the tilt of the globe’s axis — which just don’t get the point across. Pointing an angled flashlight at a globe and drawing attention to how the light beam is more diffuse around the edges can help students build more accurate mental models.
2 Of course, there may be times when it might be better to change a security or privacy tool’s interface, rather than having to bash users’ mental models into shape. OpenITP will be working on that over the next few months in a series of UX/UI hackathons and more sustained efforts to support the usability of privacy and security tools. Meanwhile, it’s useful to check out what usability guru Jakob Nielsen has to say about how to accommodate users’ mental models.