Exploring the world of digital detoxing
https://ensr.oii.ox.ac.uk/exploring-the-world-of-digital-detoxing/ (Thu, 02 Mar 2017)

As our social interactions become increasingly entangled with the online world, there are some who insist on the benefits of disconnecting entirely from digital technology. These advocates of “digital detoxing” view digital communication as eroding our ability to concentrate, to empathise, and to have meaningful conversations.

A 2016 survey by OnePoll found that 40% of respondents felt they had “not truly experienced valuable moments such as a child’s first steps or graduation” because “technology got in the way”, and Ofcom’s 2016 survey showed that 15 million British Internet users (a third of those online) have already tried a digital detox. In recent years there have been moves in America to pathologise a perceived over-use of digital technology as “Internet addiction”. While the term is not recognised by the DSM, the standard American psychiatric diagnostic manual, the idea is commonly used in media rhetoric and forms an important backdrop to digital detoxing.

The article Disconnect to reconnect: The food/technology metaphor in digital detoxing (First Monday) by Theodora Sutton presents a short ethnography of the digital detoxing community in the San Francisco Bay Area. Her informants attend Camp Grounded, an annual four-day digital detox and summer camp for adults held in a Californian forest. She attended two Camp Grounded sessions in 2014, following up with semi-structured interviews with eight detoxers.

We caught up with Theodora to examine the implications of the study and to learn more about her PhD research, which focuses on the same field site.

Ed.: In your forthcoming article you say that Camp Grounded attendees used food metaphors (and words like “snacking” and “nutrition”) to understand their own use of technology and behaviour. How useful is this as an analogy?

Theodora: The food/technology analogy is an incredibly neat way to talk about something we think of as immaterial in a more tangible way. We know that our digital world relies on physical connections, but we forget that all the time. In lending a dietary connotation, the analogy also implies that we should regulate our consumption of digital media: that there are healthy and unhealthy, or appropriate and inappropriate, ways of using it.

I explore more pros and cons of the analogy in the paper, but the biggest con, in my opinion, is that while it’s neat, it’s often used to make value judgements about technology use. For example, saying that online sociality is like processed food implies that it lacks authenticity. So the food analogy is a really useful way to understand how people are interpreting technology culturally, but it’s important to be aware of how it’s used.

Ed.: How do people rationalise ideas of the digital being somehow “less real” or “genuine” (less “nourishing”), despite the fact that it obviously is all real: just different? Is it just a peg to blame an “other” and excuse their own behaviour .. rather than just switching off their phones and going for a run / sail etc. (or any other “real” activity..).

Theodora: The idea of new technologies being somehow less real or less natural is a pretty established Western concept, and it’s been fundamental in moral panics following new technologies. That digital sociality is different, not lesser, is something we can academically agree on, but people very often believe otherwise.

My personal view is that figuring out what kind of digital usage suits you and then acting in moderation is ideal, without the need to go to extreme lengths, but in reality moderation can be quite difficult to achieve. And the thing is, we’re not just talking about choosing to text rather than meet in person, or to read a book instead of going on Twitter. We’re talking about digital activities that are increasingly inescapable parts of life, like work e-mail or government services being moved online.

Going for a run or sailing are, again, privileged activities for people with free time. Many people think getting back to nature or meeting in person are really important for human needs. But increasingly, not everyone is able to get away from their devices, especially if you don’t have enough money to visit friends or travel to a forest, or you’re just too tired from working all the time. So Camp Grounded is part of what its organisers feel is an urgent conversation about whether the technology we design addresses human, emotional needs.

Ed.: You write in the paper that “upon arrival at Camp Grounded, campers are met with hugs and milk and cookies” .. not to sound horrible, but isn’t this replacing one type of (self-focused) reassurance with another? I mean, it sounds really nice (as does the rest of the Camp), but it sounds a tiny bit like their “problem” is being fetishised / enjoyed? Or maybe that their problem isn’t to do with technology, but rather with confidence, anxiety etc.

Theodora: The people who run Camp Grounded would tell you themselves that digital detoxing is not really about digital technology. That’s just the current scapegoat for all the alienating aspects of modern life. They also take away real names, work talk, watches, and alcohol. One of the biggest things Camp Grounded tries to do is build up attendees’ confidence to be silly and playful, and to have their identities less tied to their work personas, which is a bit of a backlash against Silicon Valley’s intense work ethic. The milk and cookies come from childhood, or from America’s summer camps, which many attendees went to as children; it’s one little thing they do to help you transition into that more relaxed and childlike way of behaving.

I’m not sure about “fetishised,” but Camp Grounded really jumps on board with the technology idea, using ironic things like an analogue dating service called “embers,” a “human-powered search” where you pin questions to a big noticeboard and other people answer them, and an “inbox” where people leave you letters.

And you’re right, there is an aspect of digital detoxing which is very much a “middle-class ailment,” in that it can seem rather surface-level and indulgent, and tickets are pretty pricey, making it quite a privileged activity. But at the same time I think it is a genuine conversation starter about our relationship with technology and how it’s designed. I think a digital detox is more than just escapism or reassurance; for attendees it’s about testing a different lifestyle, seeing what works best for them, and learning from that.

Ed.: Many of these technologies are designed to be “addictive” (to use the term loosely: maybe I mean “seductive”) in order to drive engagement and encourage retention: is there maybe an analogy here with foods that are too sugary, salty, fatty (i.e. addictive) for us? I suppose the line between genuine addiction and free choice / agency is a difficult one; and one that may depend largely on the individual. Which presumably makes any attempts to regulate (or even just question) these persuasive digital environments particularly difficult? Given the massive outcry over perfectly rational attempts to tax sugar, fat etc.

Theodora: The analogy between sugary, salty, or fatty foods and seductive technologies is drawn a lot — it was even made by danah boyd in 2009. Digital detoxing comes from a standpoint that tech companies aren’t necessarily working to enable meaningful connection, and are instead aiming to “hook” people in. That’s often compared to food companies that exist to make a profit rather than improve your individual nutrition, using whatever salt, sugar, flavourings, or packaging they have at their disposal to make you keep coming back.

There are two different ways of “fixing” perceived problems with tech. There are technical fixes, which might only let you use a site for a certain amount of time, or re-design it so that it’s less seductive. And there are normative fixes, which could operate at the individual level, like deciding to make a change, or even society-wide, like the French labour law giving workers the “right to disconnect” from work emails on evenings and weekends.

One initiative that embodies both of these is the Time Well Spent project, run by Tristan Harris and the OII’s James Williams. They suggest different metrics for tech platforms, such as how well they enable good experiences away from the computer altogether. They’ve also suggested putting a stamp, rather like an organic food sticker, on websites whose companies adopt these metrics. That could encourage people to demand better online experiences, and encourage tech companies to design accordingly.

So that’s one way that people are thinking about regulating it, but I think we’re still in the stages of sketching out what the actual problems are and thinking about how we can regulate or “fix” them. At the moment, the issue seems to depend on what the individual wants to do. I’d be really interested to know what other ideas people have had to regulate it, though.

Ed.: Without getting into the immense minefield of evolutionary psychology (and whether or not we are creating environments that might be detrimental to us mentally or socially: just as the Big Mac and Krispy Kreme are not brilliant for us nutritionally) — what is the lay of the land — the academic trends and camps — for this larger question of “Internet addiction” .. and whether or not it’s even a thing?

Theodora: In my experience academics don’t consider it a real thing, just as you wouldn’t say someone had an addiction to books. But again, that doesn’t mean it isn’t used all the time as a shorthand. And there are some academics who use it, like Kimberly Young, who proposed the term in the 1990s. She still runs an Internet addiction treatment centre in New York, and there’s another in Fall City, Washington state.

The term certainly isn’t going away any time soon and the centres treat people who genuinely seem to have a very problematic relationship with their technology. People like the OII’s Andrew Przybylski (@ShuhBillSkee) are working on untangling this kind of problematic digital use from the idea of addiction, which can be a bit of a defeatist and dramatic term.

Ed.: As an ethnographer working at the Camp according to its rules (hand-written notes, analogue camera) .. did it affect your thinking or subsequent behaviour / habits in any way?

Theodora: Absolutely. In a way that’s a struggle, because I never felt that I wanted or needed a digital detox, yet having been to camp three times now I can see the benefits. Going made a strong case for being more careful with my technology use, for example not checking my phone mid-conversation, and I’ve been much more aware of it since. For me, that’s been part of an ongoing debate I have in my own life, which I think is really useful fuel for continuing to unravel this topic in my studies.

Ed.: So what are your plans now for your research in this area — will you be going back to Camp Grounded for another detox?

Theodora: Yes — I’ll be doing an ethnography of the digital detoxing community again this summer for my PhD, and that will include attending Camp Grounded again. So far I’ve essentially done preliminary fieldwork and visited to touch base with my informants. It’s easy to listen to the rhetoric around digital detoxing, but I think what’s been missing is someone spending time with detoxers to really understand their point of view, especially their values, which you can’t always capture in a survey or in interviews.

In my PhD I hope to understand things like how digital detoxers think about technology, what kind of strategies they have for using it appropriately once they return from a detox, and how metaphor and language work in talking about the need to “unplug.” The food analogy is just one preliminary finding that shows how fascinating the topic becomes as soon as you start scratching the surface.

Read the full article: Sutton, T. (2017) Disconnect to reconnect: The food/technology metaphor in digital detoxing. First Monday 22 (6).


OII DPhil student Theodora Sutton was talking to blog editor David Sutcliffe.

Staying free in a world of persuasive technologies
https://ensr.oii.ox.ac.uk/staying-free-in-a-world-of-persuasive-technologies/ (Mon, 29 Jul 2013)

We’re living through a crisis of distraction. Image: “What’s on my iPhone” by Erik Mallinson

Ed: What persuasive technologies might we routinely meet online? And how are they designed to guide us towards certain decisions?

There’s a broad spectrum, from the very simple to the very complex. A simple example would be something like Amazon’s “one-click” purchase feature, which compresses the entire checkout process down to a split-second decision. This uses a persuasive technique known as “reduction” to minimise the perceived cost to a user of going through with a purchase, making it more likely that they’ll transact. At the more complex end of the spectrum, you have the whole set of systems and subsystems that is online advertising. As it becomes easier to measure people’s behaviour over time and across media, advertisers are increasingly able to customise messages to potential customers and guide them down the path toward a purchase.

It isn’t just commerce, though: mobile behaviour-change apps have seen really vibrant growth in the past couple of years, particularly in health and fitness. Products like Nike+, Map My Run, and Fitbit let you monitor your exercise, share your performance with friends, use social motivation to help you define and reach your fitness goals, and so on. One interesting example I came across recently is called “Zombies, Run!”, which motivates by fright, spawning virtual zombies to chase you down the street while you’re on your run.

As one final example, if you’ve ever tried to deactivate your Facebook account, you’ve probably seen a good example of social persuasive technology: the screen that says, “If you leave Facebook, these people will miss you,” and then shows you pictures of your friends. Broadly speaking, most of the online services we think we’re using for “free” — that is, the ones we’re paying for with the currency of our attention — have some sort of persuasive design goal. And this can be particularly apparent when people are entering or exiting the system.

Ed: Advertising has been around for centuries, so we might assume that we have become clever about recognizing and negotiating it — what is it about these online persuasive technologies that poses new ethical questions or concerns?

The ethical questions themselves aren’t new, but the environment in which we’re asking them makes them much more urgent. There are several important trends here. For one, the Internet is becoming part of the background of human experience: devices are shrinking, proliferating, and becoming more persistent companions through life. In tandem with this, rapid advances in measurement and analytics are enabling us to more quickly optimise technologies to reach greater levels of persuasiveness. That persuasiveness is further augmented by applying knowledge of our non-rational psychological biases to technology design, which we are doing much more quickly than in the design of slower-moving systems such as law or ethics. Finally, the explosion of media and information has made it harder for people to be intentional or reflective about their goals and priorities in life. We’re living through a crisis of distraction. The convergence of all these trends suggests that we could increasingly live our lives in environments of high persuasive power.

To me, the biggest ethical questions are those that concern individual freedom and autonomy. When, exactly, does a “nudge” become a “push”? When we call these types of technology “persuasive,” we’re implying that they shouldn’t cross the line into being coercive or manipulative. But it’s hard to say where that line is, especially when it comes to persuasion that plays on our non-rational biases and impulses. How persuasive is too persuasive? Again, this isn’t a new ethical question by any means, but it is more urgent than ever.

These technologies also remind us that the ethics of attention is just as important as the ethics of information. Many important conversations are taking place across society about the tracking and measurement of user behaviour. But that information is valuable largely because it can be used to inform some sort of action, which is often persuasive in nature. Yet we don’t talk nearly as much about the ethics of the persuasive act as we do about the ethics of the data. If we did, we might decide, for instance, that some companies have a moral obligation to collect more of a certain type of user data, because it’s the only way they could know whether they were persuading a person to do something contrary to their well-being, values, or goals. Knowing a person better can be the basis not only for acting more wrongly toward them, but also more rightly.

As users, then, persuasive technologies require us to be more intentional about how we define and express our own goals. The more persuasion we encounter, the clearer we need to be about what it is we actually want. If you ask most people what their goals are, they’ll say things like “spending more time with family,” “being healthier,” “learning piano,” etc. But we don’t all accomplish the goals we have — we get distracted. The risk of persuasive technology is that we’ll have more temptations, more distractions. But its promise is that we can use it to motivate ourselves toward the things we find fulfilling. So I think what’s needed is more intentional and habitual reflection about what our own goals actually are. To me, the ultimate question in all this is how we can shape technology to support human goals, and not the other way around.

Ed: What if a persuasive design or technology is simply making it easier to do something we already want to do: isn’t this just ‘user-centered design’ (i.e. a good thing)?

Yes, persuasive design can certainly help motivate a user toward their own goals. In these cases it generally resonates well with user-centered design. The tension really arises when the design leads users toward goals they don’t already have. User-centered design doesn’t really have a good way to address persuasive situations, where the goals of the user and the designer diverge.

To reconcile this tension, I think we’ll probably need to get much better at measuring people’s intentions and goals than we are now. Longer-term, we’ll probably need to rethink notions like “design” altogether. When it comes to online services, it’s already hard to talk about “products” and “users” as though they were distinct entities, and I think this will only get harder as we become increasingly enmeshed in an ongoing co-evolution.

Ed: Governments and corporations are increasingly interested in “data-driven” decision-making: isn’t that a good thing? Particularly if the technologies now exist to collect ‘big’ data about our online actions (if not intentions)?

I don’t think data ever really drives decisions. It can definitely provide an evidentiary basis, but any data is ultimately still defined and shaped by human goals and priorities. We too often forget that there’s no such thing as “pure” or “raw” data — that any measurement reflects, before anything else, evidence of attention.

That being said, data-based decisions are certainly preferable to arbitrary ones, provided that you’re giving attention to the right things. But data can’t tell you what those right things are. It can’t tell you what to care about. This point seems to be getting lost in a lot of the fervour about “big data,” which as far as I can tell is a way of marketing analytics and relational databases to people who are not familiar with them.

The psychology of that term, “big data,” is actually really interesting. On one hand, there’s a playful simplicity to the word “big” that suggests a kind of childlike awe where words fail. “How big is the universe? It’s really, really big.” It’s the unknown unknowns at scale, the sublime. On the other hand, there’s a physicality to the phrase that suggests an impulse to corral all our data into one place: to contain it, mould it, master it. Really, the term isn’t about data abundance at all – it reflects our grappling with a scarcity of attention.

The philosopher Luciano Floridi likens the “big data” question to being at a buffet where you can eat anything, but not everything. The challenge comes in the choosing. So how do you choose? Whether you’re a government, a corporation, or an individual, it’s your ultimate aims and values — your ethical priorities — that should ultimately guide your choosiness. In other words, the trick is to make sure you’re measuring what you value, rather than just valuing what you already measure.


James Williams is a doctoral student at the Oxford Internet Institute. He studies the ethical design of persuasive technology. His research explores the complex boundary between persuasive power and human freedom in environments of high technological persuasion.

James Williams was talking to blog editor Thain Simon.
