Working notes on expert users and mental models of the Internet

Researching users’ mental models of aspects of the Internet is one of the things I’m supposed to do for my fellowship this year. I’ve done some work on mental models myself, both informally and as the secret pilot for my dissertation. I’ve been following the work of my colleague Arne Renkema-Padmos on the same topic with interest.

So I was pleased to see that a paper on mental models of the Internet was the winning paper at SOUPS this year. Like many attempts at eliciting users’ mental models of the Internet, its basic finding is that non-technical users are pretty unclear on the details of Internet infrastructure, and tend to focus more on surface features like graphics.

But another SOUPS paper, on expert versus nonexpert advice for maintaining security, also caught my eye as a potential indicator of mental models. (This has been published in a more accessible form as a Google Online Security blog post.) A comparison of these two SOUPS papers highlights some of the assumptions of research on mental models of the Internet and security to date, and suggests possible other topics for research attention.

First, check out this infographic from the Google Online Security post — it’s a striking visual of what security professionals believe users should be doing, and how it differs from what users think they should do:

For a little more granularity, here’s how the above appears as a graph in Ion et al’s expert-versus-nonexpert paper.

It’s pretty clear that nonexperts differ from experts in their mental models of what they need to do to stay safe. Their models make different assumptions about which sites of attack are most common or most vulnerable. I have grouped the pieces of advice by location of vulnerability below:

Password
Two-factor authentication
Use unique passwords
Use strong passwords
Use password manager
Change password
Don’t enter password in a link clicked from email

Data in transit
Check if HTTPS
Visit only known websites

User’s system
Update system
Use Linux

Software
Use verified software
Use antivirus

Websites
Visit only known websites
Delete cookies
Don’t share info
Don’t enter password in a link clicked from email

User’s failings
Use password manager
Don’t share info
Be suspicious of everything

The advice from Ion et al backs up the finding from Kang et al that experts are more likely to secure their connections, while non-experts are more likely to clear their traces. As in Kang et al, the non-experts in Ion et al believed clearing cookies and not sharing information were good advice, while experts were more likely to check whether a connection was using HTTPS.
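As an aside on what “check if HTTPS” actually buys you: the scheme in the URL only signals intent; the real protection comes from certificate validation during the TLS handshake. Here is a minimal Python sketch of both checks — my own illustration, not something from either paper:

```python
import socket
import ssl
from urllib.parse import urlparse

def check_https(url, timeout=5.0):
    """Return True only if the URL uses the https scheme AND the server
    presents a certificate that validates against the system trust store."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    # create_default_context() verifies the certificate chain and hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((parsed.hostname, parsed.port or 443),
                                  timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
            return tls.version() is not None  # handshake succeeded
```

A plain `http://` URL fails immediately, without any network traffic; an `https://` URL with a bad certificate raises an `ssl.SSLError` during the handshake.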

It is notable that encrypting data in transit barely appears on the list of recommendations in Ion et al (experts’ recommendation of HTTPS is the only thing that comes close). Email-related recommendations also did not make up a significant portion of experts’ top three pieces of advice, but Ion et al note that non-experts, in interviews, were likely to mention not entering your password via a link clicked in an email (i.e. not falling for “phishing”). The absence of email advice from the top-ranked advice seems to point to a shortcoming of Ion et al’s method: non-experts were first asked questions about advice from experts, and only then was their own advice solicited in interviews.

Taken in sum, the lack of focus on encrypting data in transit and the lack of attention to email as a site of attack underscore a fact about security: encrypting communications primarily offers security against sophisticated attackers, such as organized crime groups, government agencies, or police forces. At the moment, the majority of individuals don’t need to worry about having their individual communications targeted by these groups.

By contrast, the bulk of the protections recommended by experts in Ion et al would help everyday users to protect their individual security from mass, lower-level criminal activity: avoiding social engineering attacks and the near-unavoidable weaknesses of passwords, keeping their systems from being ensnared in botnets or otherwise compromised by malware, not using compromised software. Encrypting email is not recommended by these experts because the study asked for recommendations that are both important and achievable by the average user; the people who could be hurt by attacks on email right now are a sensitive few, and encrypting email is still hard to do.

The comparison of these two papers led me to step back and contemplate some higher-level questions:

What are we looking for when we do mental models studies? The information these studies elicit is incredibly rich, particularly when coupled with interviews: they can illuminate lack of awareness of specific concepts, the complexity or simplicity of models, and comparisons between different groups of users including experts and novices; they can be subject to network analyses as well as semantic ones; etc. A few of the papers using mental models methods, including Kang et al and Renkema-Padmos’s work, only begin to scratch the surface of what could be analyzed. Renkema-Padmos’s group had each participant run through a free-drawing condition as well as a condition where they were provided stickers labeled with concepts like “server” and “firewall,” but did not have time to compare the “scaffolded” condition with the free-drawing condition — which could give us a sense of users’ “best guesses” and misconceptions when prompted to think of more details than they would come up with on their own.

My current inclination, in the mental-models study Renkema-Padmos and I are about to embark on, is to try to home in a little bit on one aspect of Internet use — possibly where users think they can be attacked, or one type of attack (phishing seems like a good candidate). Ion et al’s paper suggests that mental models of password attacks might specifically be of interest, as passwords are a central subject of expert concern. Previously, I had been thinking I would try to focus on where users think their data is at different moments in the use of a particular tool like email or Facebook; that still strikes me as a likely gap in user understanding, as various studies have demonstrated that users focus on surface features and not Internet infrastructure.

I’m also mulling whether additional changes need to be made to the methodology. What would happen if users drew out their models, and then did a talk-aloud while they navigated a pre-defined task online, noting for us where they thought their data was on their own maps at each new screen? This might make for cognitive overload. Or it might elicit conflicts and “aha” moments that a simple drawing of the mental model might not — though stickers might overdetermine the models users produce.

Will continue to ponder this as I read a few more articles. Stay tuned…

I also did some more detailed thinking-through of the assumptions of the advice from Ion et al:

Update system: it is the local system that has fatal vulnerabilities

Use unique passwords: attackers try your passwords across sites

Two-factor auth: verification is better when you have two proofs of your own identity; passwords are a vector
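The second proof in two-factor authentication is typically a short-lived code from an authenticator app, generated with TOTP (RFC 6238): an HMAC over a shared secret and the current 30-second time step. A minimal sketch — the secret below is the RFC test key, not anything real:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current time-step counter,
    dynamically truncated (RFC 4226) to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32)
code = totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ")
```

Because the code rolls over every 30 seconds, a stolen password alone no longer suffices — the attacker also needs the device holding the secret.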

Use strong passwords: attackers have a password-guessing tool
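The guessing-tool assumption can be made concrete with back-of-the-envelope arithmetic: keyspace size divided by an assumed guessing rate. The 10^10 guesses/second figure below is a rough, hypothetical number for an offline attack, chosen just for illustration:

```python
def seconds_to_exhaust(length, alphabet_size, guesses_per_sec=1e10):
    """Worst-case seconds to try every password of a given length and
    alphabet, at an assumed offline guessing rate."""
    return alphabet_size ** length / guesses_per_sec

# 8 lowercase letters (26 symbols): exhausted in under a minute at this rate
weak = seconds_to_exhaust(8, 26)
# 12 mixed-case letters and digits (62 symbols): on the order of 10,000 years
strong = seconds_to_exhaust(12, 62)
```

The point of “strong” advice is that each added character multiplies the attacker’s work by the alphabet size, so length and variety beat cleverness.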

Use password manager: your own memory is the flaw in the system

Check if HTTPS: the protocol is what makes communication safe

Don’t share info: personal disclosure is a vector; some websites are malicious

Use antivirus: the problem is not the local system, but malicious programs (there’s no indication where they come from)

Use Linux: it is the local system that is flawed; the most-attacked machines are the most popular ones; open-source means more eyeballs on potential vulnerabilities

Use verified software: software is a vector, it can be compromised

Be suspicious of everything: anything could be the vector, particularly things which look “too good to be true”

Visit only known websites: websites are a vector; they are distinct from each other (as opposed to serving third-party content); users actually keep track of which websites they visit

Change password: password “staleness” is a vector

Delete cookies: websites and the cookies they leave are the vector; privacy and security are treated as the same thing

None of these really reckon with state or legal actors.
