Interviews – The Policy and Internet Blog
https://ensr.oii.ox.ac.uk – Understanding public policy online

Introducing Martin Dittus, Data Scientist and Darknet Researcher
https://ensr.oii.ox.ac.uk/introducing-martin-dittus-data-scientist-and-darknet-researcher/ (13 September 2017)

We’re sitting upstairs, hunched over a computer, and Martin is showing me the darknet. I guess I have as good an idea as most people of what the darknet is, i.e. not much. We’re looking at the page of someone claiming to be in the UK who’s selling “locally produced” cannabis, and Martin is wondering if there’s any way of telling whether it’s blood cannabis. How would you go about determining this? Much of what is sold on these markets is illegal, and buying or selling can lead to prosecution, as with any market for illegal products.

But we’re not buying anything, just looking. The stringent ethics process governing his research means he currently can’t even contact anyone on the marketplace.

[Read more: Exploring the Darknet in Five Easy Questions]

Martin Dittus is a Data Scientist at the Oxford Internet Institute, and I’ve come to his office to find out about the OII’s investigation (undertaken with Mark Graham and Joss Wright) of the economic geographies of illegal economic activities in anonymous Internet marketplaces, or more simply: “mapping the darknet”. Basically: what’s being sold, by whom, from where, to where, and what’s the overall value?

Between 2011 and 2013, the Silk Road marketplace attracted hundreds of millions of dollars’ worth of bitcoin-based transactions before being closed down by the FBI, but relatively little is known about the geography of this global trade. The darknet throws up lots of interesting research topics: traffic in illegal wildlife products, the effect of healthcare policies on demand for illegal prescription drugs, whether law enforcement has (or can have) much of an impact, questions around the geographies of trade (e.g. sites of production and consumption), and the economics of these marketplaces — as well as the ethics of researching all this.

OII researchers tend to come from very different disciplinary backgrounds, and I’m always curious about what brings people here. A computer scientist by training, Martin first worked as a software developer for Last.fm, an online music community that built some of the first pieces of big data infrastructure, “because we had a lot of data and very little money.” In terms of professional experience, he says it showed him how far you can get by being passionate about your work — and the importance of resourcefulness: “that a good answer is not to say, ‘No, we can’t do that,’ but to say: ‘Well, we can’t do it this way, but here are three other ways we can do it instead.’”

Resourcefulness is certainly something you need when researching darknet marketplaces. Two very large marketplaces (AlphaBay and Hansa) were recently taken down by the FBI, DEA and Dutch National Police, part-way through Martin’s data collection. Having your source suddenly disappear is a worry for any long-term data scraping process. In this case, however, it presents an opportunity to move beyond a simple observational study to a quasi-experiment: the disruption allows researchers to observe what happens in the overall marketplace after the external intervention — does trade actually go down, or simply move elsewhere? How resilient are these marketplaces to interference by law enforcement?
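To give a sense of what such a quasi-experimental comparison involves, here is a minimal sketch (our own illustration, not the project’s actual analysis) of a before/after look at marketplace activity around a takedown date, assuming a scraped dataset of daily listing counts; the file name, column names, and intervention date are all hypothetical placeholders.

```python
# Minimal sketch of a before/after (quasi-experimental) comparison of
# marketplace activity around a law-enforcement takedown.
# Assumes a CSV of scraped daily listing counts with hypothetical columns
# 'date', 'marketplace', and 'listings'.
import pandas as pd

TAKEDOWN = pd.Timestamp("2017-07-05")  # placeholder intervention date

df = pd.read_csv("listings.csv", parse_dates=["date"])
df["period"] = df["date"].apply(lambda d: "post" if d >= TAKEDOWN else "pre")

# Average daily listings per marketplace before and after the intervention
summary = (df.groupby(["marketplace", "period"])["listings"]
             .mean()
             .unstack("period"))
summary["change_pct"] = 100 * (summary["post"] - summary["pre"]) / summary["pre"]
print(summary.sort_values("change_pct"))
```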

Having originally worked in industry for a few years, Martin completed a Master’s programme at UCL’s Centre for Advanced Spatial Analysis, which included training in cartography. The first time I climbed the three long flights of stairs to his office to say hello we quickly got talking about crisis mapping platforms, something he’d subsequently worked on during his PhD at UCL. He’s particularly interested in the historic context for the recent emergence of these platforms, where large numbers of people come together over a shared purpose: “Platforms like Wikipedia, for example, can have significant social and economic impact, while at the same time not necessarily being designed platforms. Wikipedia is something that kind of emerged, it’s the online encyclopaedia that somehow worked. For me that meant that there is great power in these platform models, but very little understanding of what they actually represent, or how to design them; even how to conceptualise them.”

“You can think of Wikipedia as a place for discourse, as a community platform, as an encyclopaedia, as an example of collective action. There are many theoretical ways to interpret it, and I think this makes it very powerful, but also very hard to understand what Wikipedia is; or indeed any large and complex online platform, like the darknet markets we’re looking at now. I think we’re at a moment in history where we have this new superpower that we don’t fully understand yet, so it’s a time to build knowledge.” Martin claims to have become “a PhD student by accident” while looking for a way to participate in this knowledge building — and found that doing a PhD was a great way to do so.

Whether discussing Wikipedia, crisis-mapping, the darknet, or indeed data infrastructures, it’s great to hear people talking about having to study things from many different angles — because that’s what the OII, as a multidisciplinary department, does in spades. It’s what we do. And Martin certainly agrees: “I feel incredibly privileged to be here. I have a technical background, but these are all intersectional, interdisciplinary, highly complex questions, and you need a cross-disciplinary perspective to look at them. I think we’re at a point where we’ve built a lot of the technological building blocks for online spaces, and what’s important now are the social questions around them: what does it mean, what are those capacities, what can we use them for, and how do they affect our societies?”

Social questions around darknet markets include the development of trust relationships between buyers and sellers (despite the explicit goal of law enforcement agencies to fundamentally undermine trust between them); identifying societal practices like consumption of recreational drugs, particularly when transplanted into a new online context; and the nature of market resilience, like when markets are taken down by law enforcement. “These are not, at core, technical questions,” Martin says. “Technology will play a role in answering them, but fundamentally these are much broader questions. What I think is unique about the OII is that it has a strong technical competence in its staff and research, but also a social, political, and economic science foundation that allows a very broad perspective on these matters. I think that’s absolutely unique.”

There were only a few points in our conversation where Martin grew awkward, a few topics he said he “would kind of dance around” rather than provide on-record chat for a blog post. He was keen not to inadvertently provide a how-to guide for obtaining, say, fentanyl on the darknet; there are tricky unanswered questions of class (do these marketplaces allow a gentrification of illegal activities?) and the whitewashing of the underlying violence and exploitation inherent to these activities (thinking again about blood cannabis); and other areas where there’s simply not yet enough research to make firm pronouncements.

But we’ll certainly touch on some of these areas as we document the progress of the project over the coming months, exploring some maps of the global market as they are released, and also diving into the ethics of researching the darknet; so stay tuned!

Until then, Martin Dittus can be found at:

Web: https://www.oii.ox.ac.uk/people/martin-dittus/
Email: martin.dittus@oii.ox.ac.uk
Twitter: @dekstop

Follow the darknet project at: https://www.oii.ox.ac.uk/research/projects/economic-geog-darknet/

Twitter: @OiiDarknet

Cyberbullying is far less prevalent than offline bullying, but still needs addressing
https://ensr.oii.ox.ac.uk/cyberbullying-is-far-less-prevalent-than-offline-bullying-but-still-needs-addressing/ (12 July 2017)

Bullying is a major public health problem, with systematic reviews supporting an association between adolescent bullying and poor mental wellbeing outcomes. In their Lancet article “Cyberbullying and adolescent well-being in England: a population-based cross-sectional study”, Andrew Przybylski and Lucy Bowes report the largest study to date on the prevalence of traditional and cyberbullying, based on a nationally representative sample of 120,115 adolescents in England.

While nearly a third of the adolescent respondents reported experiencing significant bullying in the past few months, cyberbullying was much less common, with around five percent of respondents reporting recent significant experiences. Both traditional and cyberbullying were independently associated with lower mental well-being, but only the relation between traditional bullying and well-being was robust. This supports the view that cyberbullying is unlikely to provide a source for new victims, but rather presents an avenue for further victimisation of those already suffering from traditional forms of bullying.

This stands in stark contrast to media reports and the popular perception that young people are now more likely to be victims of cyberbullying than traditional forms. The results also suggest that interventions to address cyberbullying will only be effective if they also consider the dynamics of traditional forms of bullying, supporting the urgent need for evidence-based interventions that target *both* forms of bullying in adolescence. That said, as social media and Internet connectivity become an increasingly intrinsic part of modern childhood, initiatives fostering resilience in online and everyday contexts will be required.

We caught up with Andy and Lucy to discuss their findings:

Ed.: You say that given “the rise in the use of mobile and online technologies among young people, an up to date estimation of the current prevalence of cyberbullying in the UK is needed.” Having undertaken that—what are your initial thoughts on the results?

Andy: I think a really compelling thing we learned in this project is that researchers and policymakers have to think very carefully about what constitutes a meaningful degree of bullying or cyberbullying. Many of the studies and reports we reviewed were really loose on details here while a smaller core of work was precise and informative. When we started our study it was difficult to sort through the noise but we settled on a solid standard—at least two or three experiences of bullying in the past month—to base our prevalence numbers and statistical models on.
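As a toy illustration of how a prevalence standard like this maps onto survey data (our own sketch, not the paper’s code), a threshold can be applied directly to respondents’ reported counts of recent experiences; the file and column names below are hypothetical.

```python
# Toy illustration: count a respondent as recently bullied only if they
# report at least the threshold number of experiences in the past month.
# Dataset and column names are hypothetical.
import pandas as pd

survey = pd.read_csv("wellbeing_survey.csv")  # one row per adolescent respondent

bullied = survey["bullying_experiences_past_month"] >= 2
cyberbullied = survey["cyberbullying_experiences_past_month"] >= 2

print(f"Traditional bullying prevalence: {bullied.mean():.1%}")
print(f"Cyberbullying prevalence: {cyberbullied.mean():.1%}")
```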

Lucy: One of the issues here is that studies often use different measures, so it is hard to compare like for like, but in general our study supports other recent studies indicating that relatively few adolescents report being cyberbullied only—one study by Dieter Wolke and colleagues, which collected data between 2014 and 2015, found that whilst 29% of school students reported being bullied, only 1% of 11-16 year olds reported only cyberbullying. Whilst that study was only in a handful of schools in one part of England, the findings are strikingly similar to our own. In general then it seems that rates of cyberbullying are not increasing dramatically; though it is concerning that prevalence rates of both forms of bullying—particularly traditional bullying—have remained unacceptably high.

Ed.: Is there a policy distinction drawn between “bullying” (i.e. young people) and “harassment” (i.e. the rest of us, including in the workplace)—and also between “bullying” and “cyber-bullying”? These are all basically the same thing, aren’t they—why distinguish?

Lucy: I think this is a good point; people do refer to ‘bullying’ in the workplace as well. Bullying, at its core, is defined as intentional, repeated aggression targeted against a person who is less able to defend him or herself—for example, a younger or more vulnerable person. Cyberbullying has the additional definition of occurring only in an online format—but I agree that this is the same action or behaviour, just taking place in a different context. Whilst in practice bullying and harassment have very similar meanings and may be used interchangeably, harassment is unlawful under the Equality Act 2010, whilst bullying actually isn’t a legal term at all. However certain acts of bullying could be considered harassment and therefore be prosecuted. I think this really just reflects the fact that we often ‘carve up’ human behaviour and experience according to our different policies, practices and research fields—when in reality they are not so distinct.

Ed.: I suppose online bullying of young people might be more difficult to deal with, given it can occur under the radar, and in social spaces that might not easily admit adults (though conversely, leave actual evidence, if reported..). Why do you think there’s a moral panic about cyberbullying — is it just newspapers selling copy, or does it say something interesting about the Internet as a medium — a space that’s both very open and very closed? And does any of this hysteria affect actual policy?

Andy: I think our concern arises from the uncertainty and unfamiliarity people have about the possibilities the Internet provides. Because it is full of potential—for good and ill—and is always changing, wild claims about it capture our imagination and fears. That said, the panic absolutely does affect policy and parenting discussions in the UK. Statistics and figures coming from pressure groups and well-meaning charities do put the prevalence of cyberbullying at terrifying, and unrealistically high, levels. This certainly has affected the way parents see things. Policy makers tend to seize on the worst-case scenario and interpret things through this lens. Unfortunately this can be a distraction when there are known health and behavioural challenges facing young people.

Lucy: For me, I think we do tend to panic and highlight the negative impacts of the online world—often at the expense of the many positive impacts. That said, there was—and remains—a worry that cyberbullying could have the potential to be more widespread, and to be more difficult to resolve. The perpetrator’s identity may be unknown, the bullying may follow the child home from school, and it may be persistent—in that it may be difficult to remove hurtful comments or photos from the Internet. It is reassuring that our findings, as well as others’, suggest that cyberbullying may not be associated with as great an impact on well-being as people have suggested.

Ed.: Obviously something as deeply complex and social as bullying requires a complex, multivalent response: but (that said), do you think there are any low-hanging interventions that might help address online bullying, like age verification, reporting tools, more information in online spaces about available help, more discussion of it as a problem (etc.)?

Andy: No easy ones. Understanding that cyber- and traditional bullying aren’t dissimilar, parental engagement and keeping lines of communication open are key. This means parents should learn about the technology their young people are using, and that kids should know they can safely disclose when something scary or distressing happens.

Lucy: Bullying is certainly complex; school-based interventions that have been successful in reducing more traditional forms of bullying have tended to involve those students who are not directly involved but who act as ‘bystanders’—encouraging them to take a more active stance against bullying rather than remaining silent and implicitly suggesting that it is acceptable. There are online equivalents being developed, and greater education that discourages people (both children and adults) from sharing negative images or words, or that encourages them to actively ‘dislike’ such negative posts, shows promise. I also think it’s important that targeted advice and support for those directly affected is provided.

Ed.: Who’s seen as the primary body responsible for dealing with bullying online: is it schools? NGOs? Or the platform owners who actually (if not-intentionally) host this abuse? And does this topic bump up against wider current concerns about (e.g.) the moral responsibilities of social media companies?

Andy: There is no single body that takes responsibility for this for young people. Some charities and government agencies, like the Child Exploitation and Online Protection command (CEOP), are doing great work. They provide a forum of information for parents, professionals and kids, stratified by age, and easy-to-complete forms that young people or carers can use to get help. Most industry-based solutions require users to report and flag offensive content, and they’re pretty far behind the ball on this because we don’t know what works and what doesn’t. At present cyberbullying consultants occupy the space and the services they provide are of dubious empirical value. If industry and the government want to improve things on this front they need to make direct investments in supporting robust, open, basic scientific research into cyberbullying and trials of promising intervention approaches.

Lucy: There was an interesting discussion by the NSPCC about this recently, and it seems that people are very mixed in their opinions—some would also say parents play an important role, as well as Government. I think this reflects the fact that cyberbullying is a complex social issue. It is important that social media companies are aware, and work with government, NGOs and young people to safeguard against harm (as many are doing), but equally schools and parents play an important role in educating children about cyberbullying—how to stay safe, how to play an active role in reducing cyberbullying, and who to turn to if children are experiencing cyberbullying.

Ed.: You mention various limitations to the study; what further evidence do you think we need, in order to more completely understand this issue, and support good interventions?

Lucy: I think we need to know more about how to support children directly affected by bullying, and more work is needed in developing effective interventions for cyberbullying. There are some very good school-based interventions with a strong evidence base to suggest that they reduce the prevalence of at least traditional forms of bullying, but they are not being widely implemented in the UK, and this is a missed opportunity.

Andy: I agree—a focus on flashy cyberbullying headlines presents the real risk of distracting us from developing and implementing evidence-based interventions. The Internet cannot be turned off and there are no simple solutions.

Ed.: You say the UK is ranked 20th of 27 EU countries on the mental well-being index, and also note the link between well-being and productivity. Do you think there’s enough discussion and effort being put into well-being, generally? And is there even a general public understanding of what “well-being” encompasses?

Lucy: I think the public understanding of well-being is probably pretty close to the research definition—people have a good sense that this involves more than not having psychological difficulty, for example, and that it refers to friendships, relationships, and doing well; one’s overall quality of life. Both research and policy are placing more of an emphasis on well-being—in part because large international studies have suggested that the UK may score particularly poorly on measures of well-being. This is very important if we are going to raise standards and improve people’s quality of life.


Read the full article: Andrew Przybylski and Lucy Bowes (2017) Cyberbullying and adolescent well-being in England: a population-based cross-sectional study. The Lancet Child & Adolescent Health.

Andrew Przybylski is an experimental psychologist based at the Oxford Internet Institute. His research focuses on applying motivational theory to understand the universal aspects of video games and social media that draw people in, the role of game structure and content on human aggression, and the factors that lead to successful versus unsuccessful self-regulation of gaming contexts and social media use. @ShuhBillSkee

Lucy Bowes is a Leverhulme Early Career Research Fellow at Oxford’s Department of Experimental Psychology. Her research focuses on the impact of early life stress on psychological and behavioural development, integrating social epidemiology, developmental psychology and behavioural genetics to understand the complex genetic and environmental influences that promote resilience to victimization and early life stress. @DrLucyBowes

Andy Przybylski and Lucy Bowes were talking to the Oxford Internet Institute’s Managing Editor, David Sutcliffe.

Does Twitter now set the news agenda?
https://ensr.oii.ox.ac.uk/does-twitter-now-set-the-news-agenda/ (10 July 2017)

The information provided in the traditional media is of fundamental importance for the policy-making process, signalling which issues are gaining traction, which are falling out of favour, and introducing entirely new problems for the public to digest. But the traditional media’s monopoly as a vehicle for disseminating information about the policy agenda is being eroded by social media, with Twitter in particular used by politicians to influence traditional news content.

In their Policy & Internet article, “Politicians and the Policy Agenda: Does Use of Twitter by the U.S. Congress Direct New York Times Content?” Matthew A. Shapiro and Libby Hemphill examine the extent to which the traditional media is influenced by politicians’ Twitter posts. They draw on indexing theory, which states that media coverage and framing of key policy issues will tend to track elite debate. To understand why the newspaper covers an issue, daily New York Times content is modelled as a function of the previous day’s coverage of each policy issue area, together with the previous day’s Twitter posts by Democrats and Republicans about each of those areas.
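A rough sketch of this kind of lagged model (under assumptions of our own, not the authors’ code) might look like the following, predicting today’s NYT attention to one issue from yesterday’s coverage and congressional Twitter activity across all issues; the file and column names are hypothetical.

```python
# Sketch of a lagged "indexing" model: today's NYT attention to one policy
# issue as a function of yesterday's NYT coverage and yesterday's
# congressional tweets, by party, across all issue areas.
import pandas as pd
import statsmodels.api as sm

# Daily counts per issue, e.g. hypothetical columns 'nyt_immigration',
# 'dem_immigration', 'rep_immigration', 'nyt_economy', ...
daily = pd.read_csv("daily_counts.csv", parse_dates=["date"]).set_index("date")

target = "nyt_immigration"            # the issue whose coverage is predicted
lagged = daily.shift(1).dropna()      # every series lagged by one day
y = daily[target].loc[lagged.index]

model = sm.OLS(y, sm.add_constant(lagged)).fit()
print(model.summary())
```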

They ask to what extent the agenda-setting efforts of members of Congress are acknowledged by the traditional media; whether one party gains any advantage over the other, measured by the traditional media’s increased attention; and whether there is any variance across different policy issue areas. They find that Twitter is a legitimate political communication vehicle for US officials, that journalists consider Twitter when crafting their coverage, and that Twitter-based announcements by members of Congress are a valid substitute for the traditional communiqué in journalism, particularly for issues related to immigration and marginalized groups, and issues related to the economy and health care.

We caught up with the authors to discuss their findings:

Ed.: Can you give a quick outline of media indexing theory? Does it basically say that the press reports whatever the elite are talking about? (i.e. that press coverage can be thought of as a simple index, which tracks the many conversations that make up elite debate).

Matthew: Indexing theory, in brief, states that the content of media reports reflects the degree to which elites – politicians and leaders in government in particular – are in agreement or disagreement. The greater the level of agreement or consensus among elites, the less news there is to report in terms of elite conflict. This is not to say that a consensus among elites is not newsworthy; indexing theory conveys how media reporting is a function of the multiple voices that exist when there is elite debate.

Ed.: You say Twitter seemed a valid measure of news indexing (i.e. coverage) for at least some topics. Could it be that the NYT isn’t following Twitter so much as Twitter (and the NYT) are both following something else, i.e. floor debates, releases, etc.?

Matthew: We can’t test for whether the NYT is following Twitter rather than floor debates/press releases without collecting data for the latter. But even if the House and Senate Press Galleries are indexing the news based on House and Senate debates, and Twitter posts by members of Congress reflect those discussions, we could still argue that Twitter remains significant because there are no limits on the amount of discussion – i.e. the boundaries of the House and Senate floors no longer exist – and the media are increasingly reliant on politicians’ use of Twitter to communicate to the press. In any case, the existing research shows that journalists are increasingly relying on Twitter posts for updates from elites.

Ed.: I’m guessing that indexing theory only really works for non-partisan media that follow elite debates, like the NYT? Or does it also work for tabloids? And what about things like Breitbart (and its ilk) .. which I’m guessing appeals explicitly to a populist audience, rather than particularly caring what the elite are talking about?

Matthew: If a study similar to ours were done to examine the indexing tendencies of tabloids, Breitbart, or a similar type of media source, the first step would be to determine what is being discussed regularly in these outlets. Assuming, for example, that there isn’t much discussion about marginalized groups in Breitbart, in the context of indexing theory it would not be relevant to examine the pool of congressional Twitter posts mentioning marginalized groups. Those posts are effectively off Breitbart’s radar. But, generally, indexing theory breaks down if partisanship and bias drive the reporting.

Ed.: Is there any sense in which Trump’s “Twitter diplomacy” has overturned or rendered moot the recent literature on political uses of Twitter? We now have a case where a single (personal) Twitter account can upset the stock market — how does one theorise that?

Matthew: In terms of indexing theory, we could argue that Trump’s Twitter posts themselves generate a response from Democrats and Republicans in Congress and thus muddy the waters by conflating policy issues with other issues like his personality, ties to Russia, his fact-checking problems, etc. This is well beyond our focus in the article, but we speculate that Trump’s early-dawn use of Twitter is primarily for marketing, damage control, and deflection. There are really many different ways to study this phenomenon. One could, for example, examine the function of unfiltered news from politicians to the public and compare it with the news that is simultaneously reported in the media. We would also be interested in understanding why Trump and politicians like Trump frame their Twitter posts the way they do, what effect these posts have on their devoted followers as well as their fence-sitting followers, and how this mobilizes Congress both online (i.e. on Twitter) and when discussing and voting on policy options on the Senate and House floors. These areas of research would all build upon rather than render moot the extant literature on the political uses of Twitter.

Ed.: Following on: how does Indexing theory deal with Trump’s populism (i.e. avowedly anti-Washington position), hatred and contempt of the media, and apparent aim of bypassing the mainstream press wherever possible: even ditching the press pool and favouring populist outlets over the NYT in press gaggles. Or is the media bigger than the President .. will indexing theory survive Trump?

Matthew: Indexing theory will of course survive Trump. What we are witnessing in the media, however, is an inability to limit gaper’s block, in the sense that the media focus on the more inflammatory and controversial aspects of Trump’s Twitter posts – unfortunately on a daily basis – rather than reporting the policy implications. The media have to report what is news, and Presidential Twitter posts are now newsworthy, but we would argue that we are reaching a point where anything but the meat of the policy implications must be effectively filtered. Until we reach a point where the NYT ignores the inflammatory nature of Trump’s Twitter posts, it will be challenging to test indexing theory in the context of the policy agenda setting process.

Ed.: There are recent examples (Brexit, Trump) of the media apparently getting things wrong because they were following the elites and not “the forgotten” (or deplorable) .. who then voted in droves. Is there any sense in the media industry that it needs to rethink things a bit — i.e. that maybe the elite is not always going to be in control of events, or even be an accurate bellwether?

Matthew: This question highlights an omission from our article, namely that indexing theory marginalizes the role of non-elite voices. We agree that the media could do a better job reporting on certain things; for instance, relying extensively on weather vanes of public opinion that do not account for inaccurate self-reporting (i.e. people not accurately representing themselves when being polled about their support for Trump, Brexit, etc.) or understanding why disenfranchised voters might opt to stay home on Election Day. When it comes to setting the policy agenda, which is the focus of our article, we stand by indexing theory given our assumption that the policy process itself is typically directed from those holding power. On that point, and regardless of whether it is normatively appropriate, elites are accurate bellwethers of the policy agenda.

Read the full article: Shapiro, M.A. and Hemphill, L. (2017) Politicians and the Policy Agenda: Does Use of Twitter by the U.S. Congress Direct New York Times Content? Policy & Internet 9 (1) doi:10.1002/poi3.120.


Matthew A. Shapiro and Libby Hemphill were talking to blog editor David Sutcliffe.

We should pay more attention to the role of gender in Islamist radicalization
https://ensr.oii.ox.ac.uk/we-should-pay-more-attention-to-the-role-of-gender-in-islamist-radicalization/ (4 July 2017)

One of the key current UK security issues is how to deal with British citizens returning from participation in ISIS in Syria and Iraq. Most of the hundreds fighting with ISIS were men and youths, but dozens of British women and girls also travelled to join Islamic State in Syria and Iraq. For some, online recruitment appeared to be an important part of their radicalization, and many took to the Internet to praise life in the new Caliphate once they arrived there. These cases raised concerns about female radicalization online, and put the issue of women, terrorism, and radicalization firmly on the policy agenda. This was not the first time such fears had been raised. In 2010, the university student Roshonara Choudhry stabbed her Member of Parliament, after watching YouTube videos of the radical cleric Anwar Al Awlaki. She is the first and only British woman so far convicted of a violent Islamist attack.

In her Policy & Internet article “The Case of Roshonara Choudhry: Implications for Theory on Online Radicalization, ISIS Women, and the Gendered Jihad”, Elizabeth Pearson explores how gender might have factored in Roshonara’s radicalization, in order to present an alternative to existing theoretical explanations. First, gender limitations in the physical world, by precluding a real-world engagement with Islamism on her own terms, might have pushed her to the Internet. Here, a lack of religious knowledge made her particularly vulnerable to extremist ideology; a susceptibility only increased through Internet socialization and exposure to an active radical milieu. Finally, these factors might have created a dissonance between her online identity and her multiple “real” gendered identities, resulting in violence.

As yet, there is no adequately proven link between online material and violent acts. But given the current reliance of terrorism research on the online environment, and the reliance of policy on terrorism research, the relationship between the virtual and offline domains must be better understood. So too must the process of “radicalization” — which still lacks clarity, and relies on theorizing that is rife with assumptions. Whatever the challenges, understanding how men and women become violent radicals, and the differences there might be between them, has never been more important.

We caught up with Elizabeth to discuss her findings:

Ed.: You note “the Internet has become increasingly attractive to many women extremists in recent years” — do these extremist views tend to be found on (general) social media or on dedicated websites? Presumably these sites are discoverable via fairly basic search?

Elizabeth: Yes and no. Much content is easily found online. ISIS has been very good at ‘colonizing’ popular social media platforms with supporters, and in particular, Twitter was for a period the dominant site. It was ideal as it allowed ISIS fans to find one another, share material, and build networks and communities of support. In the past 18 months Twitter has made a concerted – and largely successful – effort to ‘take down’ or suspend accounts. This may simply have pushed support elsewhere. We know that Telegram is now an important channel for information, for example. Private groups, the dark web and hidden net resources exist alongside open source material on sites such as Facebook, familiar to everyone. Given the illegality of much of this content, there has been huge pressure on companies to respond. Still there is criticism from bodies such as the Home Affairs Select Committee that they are not responding quickly or efficiently enough.

Ed.: This case seemed to represent a collision not just of “violent jihadists vs the West” but also “Salafi-Jihadists vs women” (as well as “Western assumptions of Muslim assumptions of acceptable roles for women”) .. were these the main tensions at play here?

Elizabeth: One of the key aspects of Roshonara’s violence was that it was transgressive. Violent Jihadist groups tend towards conservatism regarding female roles. Although there is no theological reason why women should not participate in the defensive Jihad, they are not encouraged to do so. ISIS has worked hard in its propaganda to keep female roles domestic – yet ideologically so. Roshonara appears to have absorbed Al Awlaki’s messaging regarding the injustices faced by Muslims, but only acted when she saw a video by Azzam, a key scholar for Al Qaeda supporters, which she understood as justifying female violence. Hatred of western foreign policy, and her MP’s support for the intervention in Iraq, appeared to be the motivation for her attack; a belief that women could also fight is what prompted her to carry this out herself.

Ed.: Does this struggle tend to be seen as a political struggle about land and nationhood; or a supranational religious struggle — or both? (with the added complication of Isis conflating nation and religion..)

Elizabeth: Nobody yet understands exactly why people ‘radicalize’. It’s almost impossible to profile violent radicals beyond saying they tend to be mainly male – and as we know, that is not a hard and fast rule either. What we can say is that there are complex factors, and a variety of recurrent themes cited by violent actors, and found in propaganda and messaging. One narrative is about political struggle on behalf of Muslims, who face injustice, particularly from the West. ISIS has made this struggle about the domination of land and nationhood, a development of Al Qaeda’s message. Religion is also important to this. Despite different levels of knowledge of Islam, supporters of the violent Jihad share commitment to battle as justified in the Quran. They believe that Islam is the way, the only way, and they find in their faith an answer to global issues, and whatever is happening personally to them. It is not possible, in my view, to ignore the religious component declared in this struggle. But there are other factors too. That’s what makes this so difficult and complex.

Ed.: You say that Roshonara “did not follow the path of radicalization set out in theory”. How so? But also .. how important and grounded is this “theory” in the practice of counter-radicalization? And what do exceptions like Roshonara Choudhry signify?

Elizabeth: Theory — based on empirical evidence — suggests that violence is a male preserve. Violent Jihadist groups also generally restrict their violence to men, and men only. Theory also tells us that actors rarely carry out violence alone. Belonging is an important part of the violent Jihad and ‘entrance’ to violence is generally through people you know, friends, family, acquaintances. Even where we have seen young women for example travel to join ISIS, this has tended to be facilitated through friends, or online contacts, or family. Roshonara, as a woman acting alone in this time before ISIS, is therefore something quite unusual. She signifies – through her somewhat unique case – just how transgressive female violence is, and just how unusual solitary action is. She also throws into question the role of the internet. The internet alone is not usually sufficient for radicalization; offline contacts matter. In her case there remain some questions of what other contacts may have influenced her violence.

I’m not entirely sure how joined up counter-radicalization practices and radicalization theory are. The Prevent strategy aside, there are many different approaches, in the UK alone. The most successful that I have seen are due to committed individuals who know the communities they are based in and are trusted by them. It is relationships that seem to count, above all else.

Ed.: Do you think her case is an interesting outlier (a “lone wolf” as people commented at the time), or do you think there’s a need for more attention to be paid to gender (and women) in this area, either as potential threats, or solutions?

Elizabeth: Roshonara is a young woman, still in jail for her crime. As I wrote this piece I thought of her as a student at King’s College London, as I am, and I found it therefore all the more affecting that she did what she did. There is a connection through that shared space. So it’s important for me to think of her in human terms, in terms of what her life was like, who her friends were, what her preoccupations were and how she managed, or did not manage, her academic success, her transition to a different identity from the one her parents came from. She is interesting to me because of this, and because she is an outlier. She is an outlier who reveals certain truths about what gender means in the violent Jihad. That means women, yes, but also men, ideas about masculinity, male and female roles. I don’t think we should think of young Muslim people as either ‘threats’ or ‘solutions’. These are not the only possibilities. We should think about society, and how gender works within it, and within particular communities within it.

Ed.: And is gender specifically “relevant” to consider when it comes to Islamic radicalization, or do you see similar gender dynamics across all forms of political and religious extremism?

Elizabeth: My current PhD research considers the relationship between the violent Jihad and the counter-Jihad – cumulative extremism. To me, gender matters in all study. It’s not really anything special or extra, it’s just a recognition that if you are looking at groups you need to take into account the different ways that men and women are affected. To me that seems quite basic, because otherwise you are not really seeing a whole picture. Conservative gender dynamics are certainly also at work in some nationalist groups. The protection of women, the function of women as representative of the honour or dishonour of a group or nation – these matter to groups and ideologies beyond the violent Jihad. However, the counter-Jihad is in other ways progressive, for example promoting narratives of protecting gay rights as well as women’s rights. So women for both need to be protected – but what they need to be protected from and how differs for each. What is important is that the role of women, and of gender, matters in consideration of any ‘extremism’, and indeed in politics more broadly.

Ed.: You’re currently doing research on Boko Haram — are you also looking at gender? And are there any commonalities with the British context you examined in this article?

Elizabeth: Boko Haram interests me because of the ways in which it has transgressed some of the most fundamental gender norms of the Jihad. Since 2014 they have carried out hundreds of suicide attacks using women and girls. This is highly unusual and in fact unprecedented in terms of numbers. How this impacts on their relationship with the international Jihad – and, since 2015, with ISIS, to whom their leader gave a pledge of allegiance – is something I have been thinking about.

There are many local aspects of the Nigerian conflict that do not translate – poverty, the terrain, oral traditions of preaching, human rights violations, Sharia in northern Nigerian states, forced recruitment.. In gender terms however, the role of women, the honour/dishonour of women, and gender-based violence translate across contexts. In particular, women are frequently instrumentalized by movements for a greater cause. Perhaps the greatest similarity is the resistance to the imposition of Western norms, including gender norms, free-mixing between men and women and gender equality. This is a recurrent theme for violent Jihadists and their supporters across geography. They wish to protect the way of life they understand in the Quran, as they believe this is the word of God, and the only true word, superseding all man-made law.

Read the full article: Pearson, E. (2016) The Case of Roshonara Choudhry: Implications for Theory on Online Radicalization, ISIS Women, and the Gendered Jihad. Policy & Internet 8 (1) doi:10.1002/poi3.101.


Elizabeth Pearson was talking to blog editor David Sutcliffe.

Our knowledge of how automated agents interact is rather poor (and that could be a problem)
https://ensr.oii.ox.ac.uk/our-knowledge-of-how-automated-agents-interact-is-rather-poor-and-that-could-be-a-problem/ (14 June 2017)

Recent years have seen a huge increase in the number of bots online — including search engine Web crawlers, online customer service chat bots, social media spambots, and content-editing bots in online collaborative communities like Wikipedia. (Bots are important contributors to Wikipedia, completing about 15% of all Wikipedia edits overall in 2014, and more than 50% in certain language editions.)

While the online world has turned into an ecosystem of bots (by which we mean computer scripts that automatically handle repetitive and mundane tasks), our knowledge of how these automated agents interact with each other is rather poor. But since bots are automata without capacity for emotions, meaning-making, creativity, or sociality, we might expect their interactions to be relatively predictable and uneventful.

In their PLOS ONE article “Even good bots fight: The case of Wikipedia“, Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri analyze the interactions between bots that edit articles on Wikipedia. They track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction outcomes. Although Wikipedia bots are intended to support the encyclopaedia — identifying and undoing vandalism, enforcing bans, checking spelling, creating inter-language links, importing content automatically, mining data, identifying copyright violations, greeting newcomers, etc. — the authors find they often undid each other’s edits, with these sterile “fights” sometimes continuing for years.
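As an illustration of how such reverts might be counted (a simplified sketch of our own, not the authors’ pipeline), a common heuristic treats an edit that restores the exact content of an earlier revision — the same SHA1 checksum — as reverting everything in between; the input file and column names below are hypothetical.

```python
# Minimal sketch of counting bot-on-bot reverts from a Wikipedia revision
# history, using the identity-revert heuristic: an edit whose content
# matches an earlier revision's SHA1 undoes every edit in between.
# Input columns ('page_id', 'timestamp', 'bot', 'sha1') are hypothetical.
from collections import Counter
import pandas as pd

revs = pd.read_csv("bot_revisions.csv", parse_dates=["timestamp"])
revert_pairs = Counter()  # (reverting bot, reverted bot) -> count

for _, page in revs.sort_values("timestamp").groupby("page_id"):
    rows = page.reset_index(drop=True)
    seen = {}  # sha1 -> index of the last revision with that content
    for i, row in rows.iterrows():
        if row["sha1"] in seen:
            # Every edit between the restored revision and this one was undone
            for j in range(seen[row["sha1"]] + 1, i):
                revert_pairs[(row["bot"], rows.loc[j, "bot"])] += 1
        seen[row["sha1"]] = i

# Pairs of bots that most often undo each other's edits
print(revert_pairs.most_common(10))
```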

They suggest that even relatively “dumb” bots may give rise to complex interactions, carrying important implications for Artificial Intelligence research. Understanding these bot-bot interactions will be crucial for managing social media, providing adequate cyber-security, and designing autonomous vehicles (that don’t crash..).

We caught up with Taha Yasseri and Luciano Floridi to discuss the implications of the findings:

Ed.: Is there any particular difference between the way individual bots interact (and maybe get bogged down in conflict), and lines of vast and complex code interacting badly, or having unforeseen results (e.g. flash-crashes in automated trading): i.e. is this just (another) example of us not always being able to anticipate how code interacts in the wild?

Taha: There are similarities and differences. The most notable difference is that here the bots are not competing. They all work based on the same rules and, more importantly, towards the same goal: to increase the quality of the encyclopaedia. Considering these features, the rather antagonistic interactions between the bots come as a surprise.

Ed.: Wikipedia have said that they know about it, and that it’s a minor problem: but I suppose Wikipedia presents a nice, open, benevolent system to make a start on examining and understanding bot interactions. What other bot-systems are you aware of, or that you could have looked at?

Taha: In terms of content-generating bots, Twitter bots have turned out to be very important in terms of online propaganda. Crawler bots that collect information from social media or the web (such as personal information or email addresses) are also being heavily deployed. In fact, we have come up with a first typology of Internet bots, based on their type of action and their intentions (benevolent vs malevolent), which is presented in the article.

Ed.: You’ve also done work on human collaborations (e.g. in the citizen science projects of the Zooniverse) — is there any work comparing human collaborations with bot collaborations — or even examining human-bot collaborations and interactions?

Taha: In the present work we do compare bot-bot interactions with human-human interactions to observe similarities and differences. The most striking difference is in the dynamics of negative interactions. While human conflicts heat up very quickly and then disappear after a while, bots undoing each other’s contributions come as a steady flow that might persist over years. In the HUMANE project, we discuss the co-existence of humans and machines in the digital world from a theoretical point of view, and there we discuss such ecosystems in detail.

Ed.: Humans obviously interact badly, fairly often (despite being a social species) .. why should we be particularly worried about how bots interact with each other, given humans seem to expect and cope with social inefficiency, annoyances, conflict and break-down? Isn’t this just more of the same?

Luciano: The fact that bots can be as bad as humans is far from reassuring. The fact that this happens even when they are programmed to collaborate is more disconcerting than what happens among humans when these compete, or fight each other. Here, very elementary mechanisms generate, through simple interactions, messy and conflictual outcomes. One may hope this is not evidence of what may happen when more complex systems and interactions are in question. The lesson I learnt from all this is that without rules or some kind of normative framework that promotes collaboration, not even good mechanisms ensure a good outcome.

Read the full article: Tsvetkova M, Garcia-Gavilanes R, Floridi, L, Yasseri T (2017) Even good bots fight: The case of Wikipedia. PLoS ONE 12(2): e0171774. doi:10.1371/journal.pone.0171774


Taha Yasseri and Luciano Floridi were talking to blog editor David Sutcliffe.

Social media and the battle for perceptions of the U.S.–Mexico border
https://ensr.oii.ox.ac.uk/social-media-and-the-battle-for-perceptions-of-the-u-s-mexico-border/ (7 June 2017)

The US–Mexico border region is home to approximately 12 million people, and the border is the most-crossed international border in the world. Unlike the current physical border, the image people hold of “the border” is not firmly established, and can be modified. One way is via narratives (or stories), which are a powerful tool for gaining support for public policies. Politicians’ narratives about the border have historically been perpetuated by the traditional media, particularly when this allows them to publish sensational and attention-grabbing news stories.

However, new social media, including YouTube, provide opportunities for less-mainstream narratives of cooperation. In their Policy & Internet article “Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube”, Donna L. Lybecker, Mark K. McBeth, Maria A. Husmann, and Nicholas Pelikan find that YouTube videos about the U.S.–Mexico border focus (perhaps unsurprisingly) on mainstream, divisive issues such as security and violence, immigration, and drugs. However, the videos appear to construct more favourable perspectives of the border region than traditional media, with around half constructing a sympathetic view of the border, and the people associated with it.

The common perceptions of the border generally take two distinct forms. One holds the U.S.–Mexico border to be the location of a legal flow of economic trade worth $300 billion each year, a line which millions of people legally cross annually, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. An alternative perspective (particularly common since 9/11) focuses less on economic trade and legal crossing and more on undocumented immigration, violence and drug wars, and a U.S.-centric view of “us versus them”.

In order to garner public support for their “solutions” to these issues, politicians often define the border using one of these perspectives. Acceptance of the first view might well allow policymakers to find cooperative solutions to joint problems. Acceptance of the second creates a policy problem that is more value-laden than empirically based and that creates distrust and polarization among stakeholders and between the countries. The U.S.–Mexico border is clearly a complex region encompassing both positives and negatives — but understanding these narratives could have a real-world impact on policy along the border; possibly creating the greater cooperation we need to solve many of the urgent problems faced by border communities.

We caught up with the authors to discuss their findings:

Ed.: Who created the videos you studied: were they created by the public, or were they also produced by perhaps more progressive media outlets? i.e. were you able to disentangle the effect of the media in terms of these narratives?

Mark / Donna: For this study, we studied YouTube videos, using the “relevance” filter. Thus, the videos were ordered by most related to our topic and by most frequently viewed. With this selection method we captured videos produced by a variety of sources; some that contained embedded videos from mainstream media, others created by non-profit groups and public television groups, but also videos produced by interested citizens or private groups. The non-profit and media groups more often discuss the beneficial elements of the border (trade, shared environmental protection, etc.), while individual citizens or groups tended to post the more emotional and narrative-driven videos more likely to construct the border residents in a non-deserving sense.

Ed.: How influential do you think these videos are? In a world of extreme media concentration (where even the US President seems to get his news from Fox headlines and the 42 people he follows on Twitter) .. how significant is “home grown” content, which after all may have better, or at least more locally-representative, information than certain parts of the national media?

Mark / Donna: Today’s extreme media world supplies us with constant and fast-moving news. YouTube is part of the media mix, frequently mentioned as the second largest search engine on the web, and as such is influential. Media sources report that a large number of diverse people use YouTube, thus the videos encompass a broad swath of international, domestic and local issues. That said, as with most news sources today, some individuals gravitate to the stories that represent their point of view, and YouTube makes it possible for individuals to do just this. In other words, if a person perceives the US-Mexico border as a horrible place, they can use key words to search YouTube videos that represent that point of view.

However, we believe YouTube to be more influential than some other sources precisely because it encompasses diversity; thus, even when searching using specific terms, there will likely be a few videos included in search results that provide a different point of view. Furthermore, we did find some local, “home grown” content included in search results, again adding to the diversity presented to the individual watching YouTube. That said, we found less homegrown content than initially expected. Overall, there is selectivity bias with YouTube, like any type of media, but YouTube’s greater diversity of postings and viewers and broad distribution may increase both exposure and influence.

Ed.: Your article was published pre-Trump. How do you think things might have changed post-election, particularly given the uncertainty over “the wall“ and NAFTA — and Trump’s rather strident narratives about each? Is it still a case of “negative traditional media; equivocal social media”?

Mark / Donna: Our guess is that anti-border forces are more prominent on YouTube since Trump’s election and inauguration. Unless there is an organized effort to counter discussion of “the wall” and produce positive constructions of the border, we expect that YouTube videos posted over the past few months lean more toward non-deserving constructions.

Ed.: How significant do you think social media is for news and politics generally, i.e. its influence in this information environment — compared with (say) the mainstream press and party-machines? I guess Trump’s disintermediated tweeting might have turned a few assumptions on their heads, in terms of the relation between news, social media and politics? Or is the media always going to be bigger than Trump / the President?

Mark / Donna: Social media, including YouTube and Twitter, is interactive and thus allows anyone to bypass traditional institutions. President Trump can bypass institutions of government, media institutions, even his own political party and staff and communicate directly with people via Twitter. Of course, there are advantages to that, including hearing views that differ from the “official lines,” but there are also pitfalls, such as minimized editing of comments.

We believe people see both the strengths and the weakness with social media, and thus often read news from both traditional media sources and social media. Traditional media is still powerful and connected to traditional institutions, thus, remains a substantial source of information for many people — although social media numbers are climbing, particularly with the President’s use of Twitter. Overall, both types of media influence politics, although we do not expect future presidents will necessarily emulate President Trump’s use of social media.

Ed.: Another thing we hear a lot about now is “filter bubbles” (and whether or not they’re a thing). YouTube filters viewing suggestions according to what you watch, but still presents a vast range of both good and mad content: how significant do you think YouTube (and the explosion of smartphone video) content is in today’s information / media environment? (And are filter bubbles really a thing..?)

Mark / Donna: Yeah, we think that the filter bubbles are real. Again, we think that social media has a lot of potential to provide new information to people (and still does); although currently social media is falling into the same selectivity bias that characterizes the traditional media. We encourage our students to use online technology to seek out diverse sources; sources that both mirror their opinions and that oppose their opinions. People in the US can access diverse sources on a daily basis, but they have to be willing to seek out perspectives that differ from their own view, perspectives other than their favoured news source.

The key is getting individuals to want to challenge themselves and to be open to cognitive dissonance as they read or watch material that differs from their belief systems. Technology is advanced, but humans still suffer the cognitive limitations from which they have always suffered, and the political system in the US, and likely in other places, encourages this selectivity. Ultimately, individuals need to be willing to listen to views unlike their own.

Read the full article: Lybecker, D.L., McBeth, M.K., Husmann, M.A., and Pelikan, N. (2015) Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube. Policy & Internet 7 (4). DOI: 10.1002/poi3.94.


Mark McBeth and Donna Lybecker were talking to blog editor David Sutcliffe.

Using Open Government Data to predict sense of local community https://ensr.oii.ox.ac.uk/using-open-government-data-to-predict-sense-of-local-community/ Tue, 30 May 2017 09:31:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4137 Community-based approaches are widely employed in programmes that monitor and promote socioeconomic development. And building the “capacity” of a community — i.e. the ability of people to act individually or collectively to benefit the community — is key to these approaches. The various definitions of community capacity all agree that it comprises a number of dimensions — including opportunities and skills development, resource mobilization, leadership, participatory decision making, etc. — all of which can be measured in order to understand and monitor the implementation of community-based policy. However, measuring these dimensions (typically using surveys) is time consuming and expensive, and the absence of such measurements is reflected in a greater focus in the literature on describing the process of community capacity building, rather than on describing how it’s actually measured.

A cheaper way to measure these dimensions, for example by applying predictive algorithms to existing secondary data like socioeconomic characteristics, socio-demographics, and condition of housing stock, would certainly help policy makers gain a better understanding of local communities. In their Policy & Internet article “Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data”, Alessandro Piscopo, Ronald Siebes, and Lynda Hardman employ a machine-learning technique (“Random Forests”) to evaluate an estimate of community capacity derived from open government data, and determine the most important predictive variables.

The resulting models were found to be more accurate than those based on traditional statistics, demonstrating the feasibility of the Random Forests technique for this purpose — being accurate, able to deal with small data sets and nonlinear data, and providing information about how each variable in the dataset contributes to predictive accuracy.
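To make the approach more concrete, here is a minimal sketch of this kind of model using scikit-learn, assuming a neighbourhood-level table assembled from open government data; the file name, predictor columns, and outcome column are hypothetical stand-ins rather than the authors' actual variables or code.

```python
# Minimal sketch (not the authors' code): predict a survey-derived
# "sense of community" score from open-data predictors with a Random Forest,
# then inspect which variables contribute most to predictive accuracy.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical neighbourhood-level dataset: one row per area, one survey-based
# outcome column plus predictors derived from open government data.
df = pd.read_csv("neighbourhood_indicators.csv")
predictors = ["median_age", "intermediate_occupation", "ethnic_fragmentation",
              "store_accessibility", "recent_migrants", "weekly_hours_worked"]
X, y = df[predictors], df["sense_of_community"]

# Random Forests cope with small, nonlinear datasets and need little tuning.
model = RandomForestRegressor(n_estimators=500, random_state=0)

# Out-of-sample accuracy via 5-fold cross-validation (R^2 as one possible measure).
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.2f}")

# Fit on all data and rank variables by impurity-based importance.
model.fit(X, y)
for name, importance in sorted(zip(predictors, model.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name:25s} {importance:.3f}")
```

Permutation importance (via sklearn.inspection.permutation_importance) is a common alternative to the impurity-based ranking shown here.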

We caught up with the authors to discuss their findings:

Ed.: Just briefly: how did you do the study? Were you essentially trying to find which combinations of variables available in Open Government Data predicted “sense of community and participation” as already measured by surveys?

Authors: Our research stemmed from an observation about the measures of social characteristics available. These are generally obtained through expensive surveys, so we asked ourselves: "how could we generate them in a more economical and efficient way?" In recent years, the UK government has openly released a wealth of datasets, which could be used to provide information for purposes other than those for which they had been created — in our case, providing measures of sense of community and participation. We started our work by consulting papers from the social science domain to understand which factors were associated with sense of community and participation. Afterwards, we matched the factors most commonly mentioned in the literature with "actual" variables found in UK Open Government Data sources.

Ed.: You say “the most determinant variables in our models were only partially in agreement with the most influential factors for sense of community and participation according to the social science literature” — which were they, and how do you account for the discrepancy?

Authors: We observed two types of discrepancy. The first was the case of variables that had roughly the same level of importance in our models and in others previously developed, but with a different rank. For instance, median age was by far the most determinant variable in our model for sense of community. This variable was not ranked among the top five variables in the literature, although it was listed among the significant variables.

The second type of discrepancy concerned variables that were highly important in our models but not influential in others, or vice versa. An example is the socioeconomic status of residents of a neighbourhood, which appeared to have no effect on participation in prior studies, but was the top-ranking variable in our participation model (operationalised as the number of people in intermediate occupation).

We believe that there are multiple explanations for these phenomena, all of which deserve further investigation. First, highly determinant predictors in conventional statistical models have been proven to have little or no importance in ensemble algorithms, such as the one we used [1]. Second, factors influencing sense of community and civic participation may vary according to the context (e.g. different countries; see [3] about sense of community in China for an example). Finally, different methods may measure different aspects related to a socially meaningful concept, leading to different partial explanations.

Ed.: What were the predictors for “lack of community” — i.e. what would a terrible community look like, according to your models?

Authors: Our work did not really focus on finding "good" and "bad" communities. However, we did notice some characteristics that were typical of communities with low sense of community or participation in our dataset. For example, sense of community had a strong negative correlation with the accessibility of work and stores, with ethnic fragmentation, and with the number of people living in the UK for less than 10 years. On the other hand, it was positively correlated with the age of residents. Participation, instead, was negatively correlated with household composition and the occupation of residents, whilst it had a positive relation with their level of education and weekly hours worked. Of course, these data would need to be interpreted by a social scientist, in order to properly contextualise and understand them.

Ed.: Do you see these techniques as being more useful to highlight issues and encourage discussion, or actually being used in planning? For example, I can see it might raise issues if machine-learning models “proved” that presence of immigrant populations, or neighbourhoods of mixed economic or ethnic backgrounds, were less cohesive than homogeneous ones (not sure if they are?).

Authors: How machine learning algorithms work is not always clear, even to specialists, and this has led some people to describe them as “black boxes”. We believe that models like those we developed can be extremely useful to challenge existing perspectives based on past data available in the social science literature, e.g. they can be used to confirm or reject previous measures in the literature. Additionally, machine learning models can serve as indicators that can be more frequently consulted: they are cheaper to produce, we can use them more often, and see whether policies have actually worked.

Ed.: It’s great that existing data (in this case, Open Government Data) can be used, rather than collecting new data from scratch. In practice, how easy is it to repurpose this data and build models with it — including in countries where this data may be more difficult to access? And were there any variables you were interested in that you couldn’t access?

Authors: Identifying relevant datasets and getting hold of them was a lengthy process, even in the UK, where plenty of work has been done to make government data openly available. We had to retrieve many datasets from the pages of the government department that produced them, such as the Department for Work and Pensions or the Home Office, because we could not find them through the portal data.gov.uk. In addition, the ONS website was another very useful resource, which we used to get census data.

The hurdles encountered in gathering the data led us to recommend the development of methods that would be able to more automatically retrieve datasets from a list of sources and select the ones that provide the best results for predictive models of social dimensions.
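As a rough illustration of what such automated retrieval could look like, the sketch below queries a CKAN-style catalogue for a list of topics and lists the datasets it finds. It assumes data.gov.uk exposes the standard CKAN package_search endpoint; the URL, search terms, and field handling are assumptions for illustration, not part of the paper.

```python
# Illustrative sketch: automated dataset discovery against a CKAN catalogue.
# Assumes data.gov.uk exposes the standard CKAN action API; adjust the URL
# and queries for other portals.
import requests

CKAN_SEARCH = "https://data.gov.uk/api/3/action/package_search"
TOPICS = ["census occupation", "housing condition", "benefit claimants"]

def find_datasets(query, rows=5):
    """Return (title, resource URLs) pairs for datasets matching a query."""
    resp = requests.get(CKAN_SEARCH, params={"q": query, "rows": rows}, timeout=30)
    resp.raise_for_status()
    results = resp.json()["result"]["results"]
    return [(ds["title"], [r.get("url") for r in ds.get("resources", [])])
            for ds in results]

for topic in TOPICS:
    print(f"== {topic} ==")
    for title, urls in find_datasets(topic):
        print(f"- {title} ({len(urls)} downloadable resources)")
```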

Ed.: The OII has done some similar work, estimating the local geography of Internet use across Britain, combining survey and national census data. The researchers said the small-area estimation technique wasn’t being used routinely in government, despite its power. What do you think of their work and discussion, in relation to your own?

Authors: One of the issues we were faced with in our research was the absence of nationwide data about sense of community and participation at a neighbourhood level. The small area estimation approach used by Blank et al., 2017 [2] could provide a suitable solution to the issue. However, the estimates produced by their approach understandably incorporate a certain amount of error. In order to use estimated values as training data for predictive models of community measures it would be key to understand how this error would be propagated to the predicted values.
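One simple way to probe that question is a Monte Carlo check: repeatedly perturb the estimated outcome values with noise of the assumed estimation error, refit the model, and see how much the predictions move. The sketch below does this on synthetic data with an assumed error level; it is purely illustrative and not taken from either paper.

```python
# Illustrative Monte Carlo check of how label (estimation) error propagates
# into model predictions. Data, noise level, and model settings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_areas, n_predictors = 300, 6
X = rng.normal(size=(n_areas, n_predictors))            # stand-in open-data predictors
y_estimated = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.2, size=n_areas)
label_error_sd = 0.15                                   # assumed small-area estimation error

predictions = []
for _ in range(50):                                     # resample the label noise
    y_noisy = y_estimated + rng.normal(scale=label_error_sd, size=n_areas)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_noisy)
    predictions.append(model.predict(X))

instability = np.std(predictions, axis=0)               # per-area spread of predictions
print(f"Median prediction SD across areas: {np.median(instability):.3f}")
```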

[1] Berk, R. (2006) An Introduction to Ensemble Methods for Data Analysis. Sociological Methods & Research 34 (3): 263–295.
[2] Blank, G., Graham, M., and Calvino, C. (2017) Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.
[3] Xu, Q., Perkins, D.D., and Chow, J.C.C. (2010) Sense of community, neighboring, and social capital as predictors of local political participation in China. American Journal of Community Psychology 45 (3-4): 259–271.

Read the full article: Piscopo, A., Siebes, R. and Hardman, L. (2017) Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data. Policy & Internet 9 (1). DOI: 10.1002/poi3.145.


Alessandro Piscopo, Ronald Siebes, and Lynda Hardman were talking to blog editor David Sutcliffe.

Should adverts for social casino games be covered by gambling regulations? https://ensr.oii.ox.ac.uk/should-adverts-for-social-casino-games-be-covered-by-gambling-regulations/ Wed, 24 May 2017 07:05:19 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4108 Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry — social casino game revenues grew 97 percent between 2012 and 2013, with a US$3.5 billion market size by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free-to-play, although they may include some optional monetized features. The size of the market and users' demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators.

Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points, the two are indistinguishable.

However, content analysis of game content and advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap. In their Policy & Internet article "Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?", Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that the advertisements typically feature imagery likely to appeal to young adults, with message themes including the glamorization and normalization of gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements, despite the latter containing much gambling-themed content designed to attract consumers. Given the receptivity of young people to messages that encourage gambling, the authors recommend that gaming companies embrace corporate social responsibility standards, including adding warning messages to advertisements for gambling-themed games. They hope that their qualitative research may complement existing quantitative findings, and facilitate discussions about appropriate policies for advertisements for social casino games and other gambling-themed games.

We caught up with Brett to discuss their findings:

Ed.: You say there are no policies related to the advertising of social casino games — why is this? And do you think this will change?

Brett: Social casino games are regulated under general consumer regulations, but there are no specific regulations for these types of games and they do not fall under gambling regulation. Although several gambling regulatory bodies have considered these games, because they do not require payment to play and prizes have no monetary value they are not considered gambling activities. Where the games include branding for gambling companies or are considered advertising, they may fall under relevant legislation. Currently it is up to individual consumers to consider if they are relevant, which includes parents considering their children's use of the games.

Ed.: Is there work on whether these sorts of games actually encourage gambling behaviour? As opposed to gambling behaviour simply pre-existing — i.e. people are either gamblers or not, susceptible or not.

Brett: We have conducted previous research showing that almost one-fifth of adults who played social casino games had gambled for money as a direct result of these games. Research also found that two-thirds of adolescents who had paid money to play social casino games had gambled directly as a result of these games. This builds on other international research suggesting that there is a pathway between games and gambling. For some people, the games are perceived to be a way to ‘try out’ or practice gambling without money and most are motivated to gamble due to the possibility of winning real money. For some people with gambling problems, the games can trigger the urge to gamble, although for others, the games are used as a way to avoid gambling in an attempt to cut back. The pathway is complicated and needs further specific research, including longitudinal studies.

Ed.: Possibly a stupid question: you say social games are a huge and booming market, despite being basically free to play. Where does the revenue come from?

Brett: Not a stupid question at all! When something is free, of course it makes sense to question where the money comes from. The revenue in these business models comes from advertisements and players. The advertisement revenue model is similar to other revenue models, but the player revenue model, which is based largely on micropayments, is a major component of how these games make money. Players can typically play for free, and micropayments are voluntary. However, when they run out of free chips, players have to wait to continue to play, or they can purchase additional chips.

The micropayments can also improve the game experience, such as obtaining in-game items, gaining a temporary boost in the game, adding lives/strength/health to an avatar or game session, or unlocking the next stage in the game. In social casino games, for example, micropayments can be made to acquire more virtual chips with which to play the slot game. Our research suggests that only a small fraction of the player base actually makes micropayments, and a smaller fraction of these pay very large amounts. Since many of these games are free to play, but one can pay to advance through the game in certain ways, they have colloquially been referred to as "freemium" games.

Ed.: I guess social media (like Facebook) are a gift to online gambling companies: i.e. being able to target (and A/B test) their adverts to particular population segments? Are there any studies on the intersection of social media, gambling and behavioural data / economics?

Brett: There is a reasonable cross-over between social casino game players and gamblers – our Australian research found 25% of Internet and 5% of land-based gamblers used social casino games, and US studies show around one-third of social casino gamers visit land-based casinos. Many of the most popular and successful social casino games are owned by companies that also operate gambling, in venues and online. Some casino companies offer social casino games to continue to engage with customers when they are not in the venue and may offer prizes that can be redeemed in venues. Games may allow gambling companies to test out how popular games will be before they put them in venues. However, as most players do not pay to play social casino games, they may engage with these differently from gambling products.

Ed.: We’ve seen (with the “fake news” debate) social media companies claiming to simply be a conduit to others’ content, not content providers themselves. What do they say in terms of these social games: I’m assuming they would either claim that they aren’t gambling, or that they aren’t responsible for what people use social media for?

Brett: We don't want to speak for the social media companies themselves, and they appear to leave quite a bit up to the game developers. Advertising standards have become more lax on gambling games – the example we give in our article is Google, which had a strict policy against advertisements for gambling-related content in the Google Play store but in February 2015 began beta testing advertisements for social casino games. In some markets where online gambling is restricted, online gambling sites offer 'free' social casino games that link to real money sites as a way to reach these markets.

Ed.: I guess this is just another example of the increasingly attention-demanding, seductive, sexualised, individually targeted, ubiquitous, behaviourally attuned, monetised environment we (and young children) find ourselves in. Do you think we should be paying attention to this trend (e.g. noticing the close link between social gaming and gambling) or do you think we’ll all just muddle along as we’ve always done? Is this disturbing, or simply people doing what they enjoy doing?

Brett: We should certainly be paying attention to this trend, but we don't think the activity of social casino games is disturbing. A big part of the goal here is awareness, followed by conscious action. We would encourage companies to take more care in controlling who accesses their games and to whom their advertisements are targeted. As you note, David, we are in such a highly-targeted, specified state of advertising. As a result, we should, theoretically, be able to avoid marketing games to young kids. Companies should also certainly be mindful of the potential effect of cartoon games. We don't automatically assign a sneaky, underhanded motive to the industry, but at the same time there is a percentage of the population that is at risk for gambling problems and we don't want to exacerbate the situation by inadvertently advertising to young people, who are more susceptible to this type of messaging.

Read the full article: Abarbanel, B., Gainsbury, S.M., King, D., Hing, N., and Delfabbro, P.H. (2017) Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults? Policy & Internet 9 (2). DOI: 10.1002/poi3.135.


Brett Abarbanel was talking to blog editor David Sutcliffe.

How useful are volunteer crisis-mappers in a humanitarian crisis? https://ensr.oii.ox.ac.uk/how-useful-are-volunteer-crisis-mappers-in-a-humanitarian-crisis/ Thu, 18 May 2017 09:11:30 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4129 User-generated content can provide a useful source of information during humanitarian crises like armed conflict or natural disasters. With the rise of interactive websites, social media, and online mapping tools, volunteer crisis mappers are now able to compile geographic data as a humanitarian crisis unfolds, allowing individuals across the world to organize as ad hoc groups to participate in data collection. Crisis mappers have created maps of earthquake damage and trapped victims, analyzed satellite imagery for signs of armed conflict, and cleaned Twitter data sets to uncover useful information about unfolding extreme weather events like typhoons.

Although these volunteers provide useful technical assistance to humanitarian efforts (e.g. when maps and records don't exist or are lost), their lack of affiliation with "formal" actors, such as the United Nations, and the very fact that they are volunteers, make them a dubious data source. Indeed, concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put. Most of these concerns assume that volunteers have no professional training. And herein lies the contradiction: by doing the work for free and of their own will, the volunteers make these efforts possible and innovative, but this is also why crisis mapping is doubted and questioned by experts.

By investigating crisis-mapping volunteers and organizations, Elizabeth Resor’s article “The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers” published in Policy & Internet presents evidence of a more professional cadre of volunteers and a means to distinguish between different types of volunteer organizations. Given these organizations now play an increasingly integrated role in humanitarian responses, it’s crucial that their differences are understood and that concerns about the volunteers are answered.

We caught up with Elizabeth to discuss her findings:

Ed.: We have seen from Citizen Science (and Wikipedia) that large crowds of non-professional volunteers can produce work of incredible value, if projects are set up right. Are the fears around non-professional crisis mappers valid? For example, is this an environment where everything “must be correct”, rather than “probably mostly correct”?

Elizabeth: Many of the fears around non-professional crisis mappers come from a lack of understanding about who the volunteers are and why they are volunteering. As these questions are answered and professional humanitarian actors become more familiar with the concept of volunteer humanitarians, I think many of these fears are diminishing.

Due to the fast-paced and resource-constrained environments of humanitarian crises, traditional actors, like the UN, are used to working with "good enough" data, or data that are "probably mostly correct". And as you point out, volunteers can often produce very high quality data. So when you combine these two facts, it stands to reason that volunteer crisis mappers can contribute necessary data that is most likely as good as (if not better than) the data that humanitarian actors are used to working with. Moreover, in my research I found that most of these volunteers are not amateurs in the full sense because they come from related professional fields (such as GIS).

Ed.: I suppose one way of assuaging fears is to maybe set up an umbrella body of volunteer crisis mapping organisations, and maybe offer training opportunities and certification of output. But then I suppose you just end up as professionals. How blurry are the lines between useful-not useful / professional-amateur in crisis mapping?

Elizabeth: There is an umbrella group for volunteer organizations set up exactly for that reason! It’s called the Digital Humanitarian Network. At the time that I was researching this article, the DHN was very new and so I wasn’t able to ask if actors were more comfortable working with volunteers contacted through the DHN, but that would be an interesting issue to look into.

The two crisis mapping organizations I researched — the Standby Task Force and the GIS Corps — both offer training and some structure to volunteer work. They take very different approaches to the volunteer work — the Standby Task Force work can include very simple micro-tasks (like classifying photographs), whereas the GIS Corps generally provides quite specialised technical assistance (like GIS analysis). However, both of these kinds of tasks can produce useful and needed data in a crisis.

Ed.: Another article in the journal examined the effective take-over of a Russian crisis volunteer website by the Government: by professionalising (and therefore controlling) the site and volunteer details, they gained control over who did / didn't turn up in disaster areas (effectively meaning nonprofessionals were kept out). How do humanitarian organisations view volunteer crisis mappers: as useful organizations to be worked with in parallel, or as something to be controlled?

Elizabeth: I have seen examples of humanitarian and international development agencies trying to lead or create crowdsourcing responses to crises (for example, USAID “Mapping to End Malaria“). I take this as a sign that these agencies understand the value in volunteer contributions — something they wouldn’t have understood without the initial examples created by those volunteers.

Still, humanitarian organizations are large bureaucracies, and even in a crisis they function as bureaucracies, while volunteer organizations take a nimble and flexible approach. This structural difference is part of the value that volunteers can offer humanitarian organizations, so I don’t believe that it would be in the best interest of the humanitarian organizations to completely co-opt or absorb the volunteer organizations.

Ed.: How does liability work? E.g. if crisis workers in a conflict zone are put in danger by their locations being revealed by well-meaning volunteers? Or mistakes being made on the ground because of incorrect data — perhaps injected by hostile actors to create confusion (thinking of our current environment of hybrid warfare..).

Elizabeth: Unfortunately, all humanitarian crises are dangerous and involve threats to “on the ground” response teams as well as affected communities. I’m not sure how liability is handled. Incorrect data or revealed locations might not be immediately traced back to the source of the problem (i.e. volunteers) and the first concern would be minimizing the harm, not penalizing the cause.

Still, this is the greatest challenge to volunteer crisis mapping that I see. Volunteers don’t want to cause more harm than good, and to do this they must understand the context of the crisis in which they are getting involved (even if it is remotely). This is where relationships with organizations “on the ground” are key. Also, while I found that most volunteers had experience related to GIS and/or data analysis, very few had experience in humanitarian work. This seems like an area where training can help volunteers understand the gravity of their work, to ensure that they take it seriously and do their best work.

Ed.: Finally, have you ever participated as a volunteer crisis mapper? And also: how do you the think the phenomenon is evolving, and what do you think researchers ought to be looking at next?

Elizabeth: I haven’t participated in any active crises, although I’ve tried some of the tools and trainings to get a sense of the volunteer activities.

In terms of future research, you mentioned hybridized warfare and it would be interesting to see how this change in the location of a crisis (i.e. in online spaces as well as physical spaces) is changing the nature of volunteer responses. For example, how can many dispersed volunteers help monitor ISIS activity on YouTube and Twitter? Or are those tasks better suited for an algorithm? I would also be curious to see how the rise of isolationist politicians in Europe and the US has influenced volunteer crisis mapping. Has this caused more people to want to reach out and participate in international crises or is it making them more inward-looking? It’s certainly an interesting field to follow!

Read the full article: Resor, E. (2016) The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers. Policy & Internet 8 (1). DOI: 10.1002/poi3.112.

Elizabeth Resor was talking to blog editor David Sutcliffe.

Did you consider Twitter’s (lack of) representativeness before doing that predictive study? https://ensr.oii.ox.ac.uk/did-you-consider-twitters-lack-of-representativeness-before-doing-that-predictive-study/ Mon, 10 Apr 2017 06:12:36 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4062 Twitter data have many qualities that appeal to researchers. They are extraordinarily easy to collect. They are available in very large quantities. And with a simple 140-character text limit they are easy to analyze. As a result of these attractive qualities, over 1,400 papers have been published using Twitter data, including many attempts to predict disease outbreaks, election results, film box office gross, and stock market movements solely from the content of tweets.

The easy availability of Twitter data links nicely to a key goal of computational social science: if researchers can find ways to impute user characteristics from social media, then the capabilities of computational social science would be greatly extended. However, few papers consider the digital divide among Twitter users, and the question of who uses Twitter has major implications for research attempts to use the content of tweets for inference about population behaviour. Do Twitter users share identical characteristics with the population of interest? For what populations are Twitter data actually appropriate?

A new article by Grant Blank published in Social Science Computer Review provides a multivariate empirical analysis of the digital divide among Twitter users, comparing Twitter users and nonusers with respect to their characteristic patterns of Internet activity and to certain key attitudes. It thereby fills a gap in our knowledge about an important social media platform, and it joins a surprisingly small number of studies that describe the population that uses social media.

Comparing British (OxIS survey) and US (Pew) data, Grant finds that generally, British Twitter users are younger, wealthier, and better educated than other Internet users, who in turn are younger, wealthier, and better educated than the offline British population. American Twitter users are also younger and wealthier than the rest of the population, but they are not better educated. Twitter users are disproportionately members of elites in both countries. Twitter users also differ from other groups in their online activities and their attitudes.

Under these circumstances, any collection of tweets will be biased, and inferences based on analysis of such tweets will not match the population characteristics. A biased sample can't be corrected by collecting more data, and these biases have important implications for research based on Twitter data, suggesting that Twitter data are not suitable for research where representativeness is important, such as forecasting elections or gaining insight into attitudes, sentiments, or activities of large populations.
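That point can be illustrated with a toy simulation (all numbers below are invented): if tweets are drawn only from a younger subpopulation whose views differ from the rest, collecting more of them simply makes the estimate converge more tightly on the wrong value.

```python
# Toy illustration: sampling bias does not shrink with sample size.
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000
age = rng.integers(18, 90, size=N)
# Invented attitude: older people are more likely to "support" some policy.
supports = rng.random(N) < (0.3 + 0.4 * (age > 50))
print(f"True population support: {supports.mean():.3f}")

twitter_users = np.flatnonzero(age < 40)      # crude stand-in for a young-skewed platform
for n in (1_000, 10_000, 100_000):
    sample = rng.choice(twitter_users, size=n, replace=False)
    print(f"n={n:>7}: estimate from biased sample = {supports[sample].mean():.3f}")
# The biased estimates converge, but to the under-40 rate, not the population rate.
```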

We caught up with Grant to explore the implications of the findings:

Ed.: Despite your cautions about lack of representativeness, you mention that the bias in Twitter could actually make it useful to study (for example) elite behaviours: for example in political communication?

Grant: Yes. If you want to study elites and channels of elite influence then Twitter is a good candidate. Twitter data could be used as one channel of elite influence, along with other online channels like social media or blog posts, and offline channels like mass media or lobbying. There is an ecology of media and Twitter is one part.

Ed.: You also mention that Twitter is actually quite successful at forecasting certain offline, commercial behaviours (e.g. box office receipts).

Grant: Right. Some commercial products are disproportionately used by wealthier or younger people. That certainly would include certain forms of mass entertainment like cinema. It also probably includes a number of digital products like smartphones, especially more expensive phones, and wearable devices like a Fitbit. If a product is disproportionately bought by the same population groups that use Twitter then it may be possible to forecast sales using Twitter data. Conversely, products disproportionately used by poorer or older people are unlikely to be predictable using Twitter.

Ed.: Is there a general trend towards abandoning expensive, time-consuming, multi-year surveys and polling? And do you see any long-term danger in that? i.e. governments and media (and academics?) thinking “Oh, we can just get it off social media now”.

Grant: Yes and no. There are certainly people who are thinking about it and trying to make it work. The ease and low cost of social media is very seductive. However, that has to be balanced against major weaknesses. First, the population using Twitter (and other social media) is unclear, but it is not a random sample. It is just a population of Twitter users, which is not a population of interest to many.

Second, tweets are even less representative. As I point out in the article, over 40% of people with a Twitter account have never sent a tweet, and the top 15% of users account for 85% of tweets. So tweets are even less representative of any real-world population than Twitter users. What these issues mean is that you can’t calculate measures of error or confidence intervals from Twitter data. This is crippling for many academic and government uses.

Third, Twitter's limited message length and simple interface tend to give it advantages on devices with restricted input capability, like phones. It is well-suited for short, rapid messages. These characteristics tend to encourage Twitter use for political demonstrations, disasters, sports events, and other live events where reports from an on-the-spot observer are valuable. This suggests that Twitter usage is not like other social media or like email or blogs.

Fourth, researchers attempting to extract the meaning of words have 140 characters to analyze and they are littered with abbreviations, slang, non-standard English, misspellings and links to other documents. The measurement issues are immense. Measurement is hard enough in surveys when researchers have control over question wording and can do cognitive interviews to understand how people interpret words.

With Twitter (and other social media) researchers have no control over the process that generated the data, and no theory of the data generating process. Unlike surveys, social media analysis is not a general-purpose tool for research. Except in limited areas where these issues are less important, social media is not a promising tool.

Ed.: How would you respond to claims that for example Facebook actually had more accurate political polling than anyone else in the recent US Election? (just that no-one had access to its data, and Facebook didn’t say anything)?

Grant: That is an interesting possibility. The problem is matching Facebook data with other data, like voting records. Facebook doesn’t know where people live. Finding their location would not be an easy problem. It is simpler because Facebook would not need an actual address; it would only need to locate the correct voting district or the state (for the Electoral College in US Presidential elections). Still, there would be error of unknown magnitude, probably impossible to calculate. It would be a very interesting research project. Whether it would be more accurate than a poll is hard to say.

Ed.: Do you think social media (or maybe search data) scraping and analysis will ever successfully replace surveys?

Grant: Surveys are such versatile, general purpose tools. They can be used to elicit many kinds of information on all kinds of subjects from almost any population. These are not characteristics of social media. There is no real danger that surveys will be replaced in general.

However, I can see certain specific areas where analysis of social media will be useful. Most of these are commercial areas, like consumer sentiments. If you want to know what people are saying about your product, then going to social media is a good, cheap source of information. This is especially true if you sell a mass market product that many people use and talk about; think: films, cars, fast food, breakfast cereal, etc.

These are important topics to some people, but they are a subset of things that surveys are used for. Too many things are not talked about, and some are very important. For example, there is the famous British reluctance to talk about money. Things like income, pensions, and real estate or financial assets are not likely to be common topics. If you are a government department or a researcher interested in poverty, the effect of government assistance, or the distribution of income and wealth, you have to depend on a survey.

There are a lot of other situations where surveys are indispensable. For example, if the OII wanted to know what kind of jobs OII alumni had found, it would probably have to survey them.

Ed.: Finally .. 1400 Twitter articles in .. do we actually know enough now to say anything particularly useful or concrete about it? Are we creeping towards a Twitter revelation or consensus, or is it basically 1400 articles saying “it’s all very complicated”?

Grant: Mostly researchers have accepted Twitter data at face value. Whatever people write in a tweet, it means whatever the researcher thinks it means. This is very easy and it avoids a whole collection of complex issues. All the hard work of understanding how meaning is constructed in Twitter and how it can be measured is yet to be done. We are a long way from understanding Twitter.

Read the full article: Blank, G. (2016) The Digital Divide Among Twitter Users and Its Implications for Social Research. Social Science Computer Review. DOI: 10.1177/0894439316671698


Grant Blank was talking to blog editor David Sutcliffe.

Internet Filtering: And Why It Doesn’t Really Help Protect Teens https://ensr.oii.ox.ac.uk/internet-filtering-and-why-it-doesnt-really-help-protect-teens/ Wed, 29 Mar 2017 08:25:06 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4035 Young British teens (between 12-15 years) spend nearly 19 hours a week online, raising concerns for parents, educators, and politicians about the possible negative experiences they may have online. Schools and libraries have long used Internet-filtering technologies as a means of mitigating adolescents’ experiences online, and major ISPs in Britain now filter new household connections by default.

However, a new article by Andrew Przybylski and Victoria Nash, “Internet Filtering Technology and Aversive Online Experiences in Adolescents”, published in the Journal of Pediatrics, finds equivocal to strong evidence that household-level Internet filtering does not reduce the chance of adolescents having recent aversive online experiences. The authors analysed data from 1030 in-home interviews conducted with early adolescents as part of Ofcom’s Children and Parents Media Use and Attitudes Report.

The Internet is now a central fixture of modern life, and the positives and negatives of online Internet use need to be balanced by caregivers. Internet filters have been adopted as a tool for limiting the negatives; however, evidence of their effectiveness is dubious. They are expensive to develop and maintain, and also carry significant informational costs: even sophisticated filters over-block, which is onerous for those seeking information about sexual health, relationships, or identity, and might have a disproportionate effect on vulnerable groups. Striking the right balance between protecting adolescents and respecting their rights to freedom of expression and information presents a formidable challenge.

In conducting their study to address this uncertainty, the authors found convincing evidence that Internet filters were not effective at shielding early adolescents from aversive experiences online. Given this finding, they propose that evidence derived from a randomized controlled trial and registered research designs are needed to determine how far Internet-filtering technology supports or thwarts young people online. Only then will parents and policymakers be able to make an informed decision as to whether their widespread use justifies their costs.

We caught up with Andy and Vicki to discuss the implications of their study:

Ed.: Just this morning when working from home I tried to look up an article’s author and was blocked, Virgin Media presumably having decided he might be harmful to me. Where does this recent enthusiasm for default-filtering come from? Is it just that it’s a quick, uncomplicated (technological) fix, which I guess is what politicians / policy-people like?

Vicki: In many ways this is just a typical response to the sorts of moral panic which have long arisen around the possible risks of new technologies. We saw the same concerns arise with television in the 1960s, for example, and in that case the UK’s policy response was to introduce a ‘watershed’, a daily time after which content aimed at adults could be shown. I suppose I see filtering as fulfilling the same sort of policy gap, namely recognising that certain types of content can be legally available but should not be served up ‘in front of the children’.

Andy: My reading of the psychological and developmental literature suggests that filters provide a way of creating a safe walled space in schools, libraries, and homes for young people to use the internet. This of course does not mean that reading our article will be harmful!

Ed.: I suppose that children desperate to explore won’t be stopped by a filter; those who aren’t curious probably wouldn’t encounter much anyway — what is the profile of the “child” and the “harm-filtering” scenario envisaged by policy-makers? And is Internet filtering basically just aiming at the (easy) middle of the bell-curve?

Vicki: This is a really important point. Sociologists recognised many years ago that the whole concept of childhood is socially constructed, but we often forget about this when it comes to making policy. There’s a tendency for politicians, for example, either to describe children as inherently innocent and vulnerable, or to frame them as expert ‘digital natives’, yet there’s plenty of academic research which demonstrates the extent to which children’s experiences of the Internet vary by age, education, income and skill level.

This matters because it suggests a ‘one-size-fits-all’ approach may fail. In the context of this paper, we specifically wanted to check whether children with the technical know-how to get around filters experienced more negative experiences online than those who were less tech-savvy. This is often assumed to be true, but interestingly, our analysis suggests this factor makes very little difference.

Ed.: In all these discussions and policy decisions: is there a tacit assumption that these children are all growing up in a healthy, supportive (“normal”) environment — or is there a recognition that many children will be growing up in attention-poor (perhaps abusive) environments and that maybe one blanket technical “solution” won’t fit everyone? Is there also an irony that the best protected children will already be protected, and the least protected, probably won’t be?

Andy: Yes, this is an ironic and somewhat tragic dynamic. Unfortunately, because the evidence for filtering effectiveness is at such an early stage, it's not possible to know which young people (if any) are any more or less helped by filters. We need to know how effective filters are in general before moving on to see the young people for whom they are more or less helpful. We would also need to be able to explicitly define what would constitute an 'attention-poor' environment.

Vicki: From my perspective, this does always serve as a useful reminder that there's a good reason why policy-makers turn to universalistic interventions, namely that this is likely to be the only way of making a difference for the hardest-to-reach children whose carers might never act voluntarily. But admirable motives are no replacement for efficacy, so as Andy notes, it would make more sense to find evidence, first, that household internet filtering is effective, and second, that it can be effective for this vulnerable group, before imposing default-on filters on all.

Ed.: With all this talk of potential “harm” to children posed by the Internet .. is there any sense of how much (specific) harm we’re talking about? And conversely .. any sense of the potential harms of over-blocking?

Vicki: No, you are right to see that the harms of Internet use are quite hard to pin down. These typically take the form of bullying, or self-harm horror stories related to Internet use. The problem is that it’s often easier to gauge how many children have been exposed to certain risky experiences (e.g. viewing pornography) than to ascertain whether or how they were harmed by this. Policy in this area often abides by what’s known as ‘the precautionary principle’.

This means that if you lack clear evidence of harm but have good reason to suspect public harm is likely, then the burden of proof is on those who would prove it is not likely. This means that policies aimed at protecting children in many contexts are often conservative, and rightly so. But it also means that it’s important to reconsider policies in the light of new evidence as it comes along. In this case we found that there is not as much evidence that Internet filters are effective at preventing exposure to negative experiences online as might be hoped.

Ed.: Stupid question: do these filters just filter “websites”, or do they filter social media posts as well? I would have thought young teens would be more likely to find or share stuff on social media (i.e. mobile) than “on a website”?

Andy: My understanding is that there are continually updated 'lists' of websites that contain certain kinds of content such as pornography, piracy, gambling, or drug use (see this list on Wikipedia, for example), as these categories vary by UK ISP.

Vicki: But it’s not quite true to say that household filtering packages don’t block social media. Some of the filtering options offered by the UK’s ‘Big 4’ ISPs enable parents and carers to block social media sites for ‘homework time’ for example. A bigger issue though, is that much of children’s Internet use now takes place outside the home. So, household-level filters can only go so far. And whilst schools and libraries usually filter content, public wifi or wifi in friends’ houses may not, and content can be easily exchanged directly between kids’ devices via Bluetooth or messaging apps.

Ed.: Do these blocked sites (like the webpage of that journal author I was trying to access) get notified that they have been blocked and have a chance to appeal? Would a simple solution to over-blocking simply be to allow (e.g. sexual health, gender-issue, minority, etc.) sites to request that they be whitelisted, or apply for some "approved" certification?

Vicki: I don’t believe so. There are whitelisted sites, indeed that was a key outcome of an early inquiry into ‘over-blocking’ by the UK Children’s Council on Internet Safety. But in order for this to be a sufficient response, it would be necessary for all sites and apps that are subject to filtering to be notified, to allow for possible appeal. The Open Rights Group provide a tool that allows site owners to check the availability of their sites, but there is no official process for seeking whitelisting or appeal.

Ed.: And what about age verification as an alternative? (however that is achieved / validated), i.e. restricting content before it is indexed, rather than after?

Andy: To evaluate this we would need to conduct a randomised controlled trial where we tested how the application of age verification for different households, selected at random, would relate (or not) to young people encountering potentially aversive content online.

Vicki: But even if such a study could prove that age verification tools were effective in restricting access to underage Internet users, it’s not clear this would be a desirable scenario. It makes most sense for content that is illegal to access below a certain age, such as online gambling or pornography. But if content is age-gated without legal requirement, then it could prove a very restrictive tool, removing the possibility of parental discretion and failing to make any allowances for the sorts of differences in ability or maturity between children that I pointed out at the beginning.

Ed.: Similar to the arguments over Google making content-blocking decisions (e.g. over the "right to be forgotten"): are these filtering decisions left to the discretion of ISPs / the market / the software providers, or to some government dept / NGO? Who's ultimately in charge of who sees what?

Vicki: Obviously when it comes to content that is illegal for children or adults to access, broad decisions about the delineation of what is illegal fall to governments and are then interpreted and applied by private companies. But when it comes to material that is not illegal, but just deemed harmful or undesirable, then ISPs and social media platforms are left to decide for themselves how to draw the boundaries and then how to apply their own policies. This increasing self-regulatory role for what Jonathan Zittrain has called 'private sheriffs' is often seen as a flexible and appropriate response, but it does bring reduced accountability and transparency.

Ed.: I guess it’s ironic with all this attention paid to children, that we now find ourselves in an information environment where maybe we should be filtering out (fake) content for adults as well (joke..). But seriously: with all these issues around content, is your instinct that we should be using technical fixes (filtering, removing from indexes, etc.) or trying to build reflexivity, literacy, resilience in users (i.e. coping strategies). Or both? Both are difficult.

Andy: It is as ironic as it is tragic. When I talk to parents (both Vicki and I are parents) I hear that they have been let down by the existing advice which often amounts to little more than ‘turn it off’. Their struggles have nuance (e.g. how do I know who is in my child’s WhatsApp groups? Is snapchat OK if they’re just using it amongst best friends?) and whilst general broad advice is heard, this more detailed information and support is hard for parents to find.

Vicki: I agree. But I think it’s inevitable that we’ll always need a combination of tools to deal with the incredible array of content that develops online. No technical tool will ever be 100% reliable in blocking content we don’t want to see, and we need to know how to deal with whatever gets through. That certainly means having a greater social and political focus on education but also a willingness to consider that building resilience may mean exposure to risk, which is hard for some groups to accept.

Every element of our strategy should be underpinned by whatever evidence is available. Ultimately, we also need to stop thinking about these problems as technology problems: fake news is just as much a feature of increasing political extremism and alienation as online pornography is a feature of a heavily sexualised mainstream culture. And we can be certain: neither of these broader social trends will be resolved by simple efforts to block out what we don't wish to see.

Read the full article: Przybylski, A. and Nash, V. (2017) Internet Filtering Technology and Aversive Online Experiences in Adolescents. Journal of Pediatrics. DOI: 10.1016/j.jpeds.2017.01.063


Andy Przybylski and Vicki Nash were talking to blog editor David Sutcliffe.

Tackling Digital Inequality: Why We Have to Think Bigger https://ensr.oii.ox.ac.uk/tackling-digital-inequality-why-we-have-to-think-bigger/ Wed, 15 Mar 2017 11:42:25 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3988 Numerous academic studies have highlighted the significant differences in the ways that young people access, use and engage with the Internet and the implications it has in their lives. While the majority of young people have some form of access to the Internet, for some their connections are sporadic, dependent on credit on their phones, an available library, or Wi-Fi open to the public. Qualitative data in a variety of countries has shown such limited forms of access can create difficulties for these young people as an Internet connection becomes essential for socialising, accessing public services, saving money, and learning at school.

While the UK government has financed technological infrastructure and invested in schemes to address digital inequalities, the outcomes of these schemes are rarely uniformly positive or transformative for the people involved. This gap between expectation and reality demands theoretical attention, with more attention placed on the cultural, political and economic contexts of the digitally excluded, and the various attempts to "include" them.

Focusing on a two-year digital inclusion scheme for 30 teenagers and their families initiated by a local council in England, a qualitative study by Huw C. Davies, Rebecca Eynon, and Sarah Wilkin analyses why, despite the good intentions of the scheme's stakeholders, it fell short of its ambitions. It also explains how the neoliberal systems of governance that increasingly shape the cultures and behaviours of Internet service providers and schools — incentivising action that is counterproductive to addressing digital inequality — cannot solve the problems they create.

We caught up with the authors to discuss the study’s findings:

Ed.: It was estimated that around 10% of 13 year olds in the study area lacked dependable access to the Internet, and had no laptop or PC at home. How does this impact educational outcomes?

Huw: It’s impossible to disaggregate technology from everything else that can affect a young person’s progress through school. However, one school in our study had transferred all its homework and assessments online while the other schools were progressing to this model. The students we worked with said doing research for homework is synonymous with using Google or Wikipedia, and it’s the norm to send homework and coursework to teachers by email, upload it to Virtual Learning Environments, or print it out at home. Therefore students who don’t have access to the Internet have to spend time and effort finding work-arounds such as using public libraries. Lack of access also excludes such students from casual learning from resources online or pursuing their own interests in their own time.

Ed.: The digital inclusion scheme was designed as a collaboration between a local council in England (who provided Internet services) and schools (who managed the scheme) in order to test the effect of providing home Internet access on educational outcomes in the area. What was your own involvement, as researchers?

Huw: Initially, we were the project’s expert consultants: we were there to offer advice, guidance and training to teachers and assess the project’s efficacy on its conclusion. However, as it progressed we took on the responsibility of providing skills training to the scheme’s students and technical support to their families. When it came to assessing the scheme, by interviewing young people and their families at their homes, we were therefore able to draw on our working knowledge of each family’s circumstances.

Ed.: What was the outcome of the digital inclusion project — i.e. was it “successful”?

Huw: As we discuss in the article, defining success in these kinds of schemes is difficult. Subconsciously, many people involved in such schemes expect technology to be transformative for the young people involved, yet in reality the changes you see are more nuanced and subtle. Some of the scheme’s young people found apprenticeships or college courses, taught themselves new skills, used social networks for the first time and spoke to friends and relatives abroad by video for free. These success stories definitely made the scheme worthwhile. However, despite the significant good will of the schools, local council, and families to make the scheme a success, there were also frustrations and problems. In the article we talk about these problems and argue that the challenges the scheme encountered are not just practical issues to be resolved, but systemic issues that need to be explicitly recognised in future schemes of this kind.

Ed.: And in the article you use neoliberalism as a frame to discuss these issues..?

Huw: Yes. But we recognise in the article that this is a concept that needs to be used with care. It’s often used pejoratively and/or imprecisely. We have taken it to mean a set of guiding principles that are intended to produce a better quality of services through competition, targets, results, incentives and penalties. The logic of these principles, we argue, influences the way organisations treat individual users of their services.

For example, for Internet Service Providers (ISPs) the logic of neoliberalism is to subcontract out the constituent parts of an overall service provision to create mini internal markets that (in theory) promote efficiency through competition. Yet this logic only really works if everyone comes to the market with similar resources and abilities to make choices. If their customers are well informed and wealthy enough to remind companies that they can take their business elsewhere, these companies will have a strong incentive to improve their services and reduce their costs. If customers are disempowered by lack of choice, the logic of neoliberalism tends to marginalise or ignore their needs. These were low-income families with little or no experience of exercising consumer choice and rights. For them, therefore, these mini markets didn’t work.

In the schools we worked with, the logic of neoliberalism meant staff and students felt under pressure to meet certain targets — they all had to prioritise things that were measured and measurable. Failure to meet these targets would mean having to account for what went wrong, losing out on a reward, or facing disciplinary action. It therefore becomes much more difficult for schools to devote time and energy to schemes such as this.

Ed.: Were there any obvious lessons that might lead to a better outcome if the scheme were to be repeated: or are the (social, economic, political) problems just too intractable, and therefore too difficult and expensive to sort out?

Huw: Many of the families told us that access to the Internet was becoming ever more vital. This was not just for homework but also for access to public and health services (which are increasingly delivered online) and for finding the best deals online for consumer services. They often told us, therefore, that they would do whatever it took to keep their connection after the two-year scheme ended. This often meant paying for broadband out of their social security benefits or income that was too low to be taxable: income that could otherwise have been spent on, for example, food and clothing. Given its necessity, we should have a national conversation about providing this service to low-income families for free.

Ed.: Some of the families included in the study could be considered “hard to reach”. What were your experiences of working with them?

Huw: There are many practical and ethical issues to address before these sorts of schemes can begin. These families often face multiple intersecting problems that involve many agencies (who don’t necessarily communicate with each other) intervening in their lives. For example, some of the scheme’s families were dealing with mental illness, disability, poor housing, and debt all at the same time. It is important that such schemes are set up with an awareness of this complexity. We are very grateful to the families that took part in the scheme, and for the insights they gave us into how such schemes should be run in the future.

Ed.: Finally, how do your findings inform all the studies showing that “digital inclusion schemes are rarely uniformly positive or transformative for the people involved”? Are these studies gradually leading to improved knowledge (and better policy intervention), or simply showing the extent of the problem without necessarily offering “solutions”?

Huw: We have tried to put this scheme into a broader context to show that such policy interventions have to be much more ambitious, intelligent, and holistic. We never assumed digital inequality is an isolated problem that can be fixed with a free broadband connection; when people are unable to afford the Internet, it is an indication of other forms of disadvantage that have to be addressed simultaneously, in a sympathetic and coordinated way. Hopefully, we have contributed to the growing awareness that such attempts to ameliorate the symptoms may offer some relief but should never be considered a cure in themselves.

Read the full article: Huw C. Davies, Rebecca Eynon, Sarah Wilkin (2017) Neoliberal gremlins? How a scheme to help disadvantaged young people thrive online fell short of its ambitions. Information, Communication & Society. DOI: 10.1080/1369118X.2017.1293131

The article is an output of the project “Tackling Digital Inequality Amongst Young People: The Home Internet Access Initiative”, funded by Google.

Huw Davies was talking to blog editor David Sutcliffe.

Exploring the world of digital detoxing https://ensr.oii.ox.ac.uk/exploring-the-world-of-digital-detoxing/ Thu, 02 Mar 2017 10:50:06 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3973 As our social interactions become increasingly entangled with the online world, there are some who insist on the benefits of disconnecting entirely from digital technology. These advocates of “digital detoxing” view digital communication as eroding our ability to concentrate, to empathise, and to have meaningful conversations.

A 2016 survey by OnePoll found that 40% of respondents felt they had “not truly experienced valuable moments such as a child’s first steps or graduation” because “technology got in the way”, and Ofcom’s 2016 survey showed that 15 million British Internet users (representing a third of those online) have already tried a digital detox. In recent years, America has sought to pathologise a perceived over-use of digital technology as “Internet addiction”. While the term is not recognised by the DSM, the idea is commonly used in media rhetoric and forms an important backdrop to digital detoxing.

The article Disconnect to reconnect: The food/technology metaphor in digital detoxing (First Monday) by Theodora Sutton presents a short ethnography of the digital detoxing community in the San Francisco Bay Area. Her informants attend an annual four-day digital detox and summer camp for adults in the Californian forest called Camp Grounded. She attended two Camp Grounded sessions in 2014, and followed up with semi-structured interviews with eight detoxers.

We caught up with Theodora to examine the implications of the study and to learn more about her PhD research, which focuses on the same field site.

Ed.: In your forthcoming article you say that Camp Grounded attendees used food metaphors (and words like “snacking” and “nutrition”) to understand their own use of technology and behaviour. How useful is this as an analogy?

Theodora: The food/technology analogy is an incredibly neat way to talk about something we think of as immaterial in a more tangible way. We know that our digital world relies on physical connections, but we forget that all the time. Another thing it does, in lending a dietary connotation, is to imply that we should regulate our digital consumption; that there are healthy and unhealthy or inappropriate ways of using it.

I explore more pros and cons to the analogy in the paper, but the biggest con in my opinion is that while it’s neat, it’s often used to make value judgments about technology use. For example, saying that online sociality is like processed food is implying that it lacks authenticity. So the food analogy is a really useful way to understand how people are interpreting technology culturally, but it’s important to be aware of how it’s used.

Ed.: How do people rationalise ideas of the digital being somehow “less real” or “genuine” (less “nourishing”), despite the fact that it obviously is all real: just different? Is it just a peg to blame an “other” and excuse their own behaviour .. rather than just switching off their phones and going for a run / sail etc. (or any other “real” activity..).

Theodora: The idea of new technologies being somehow less real or less natural is a pretty established Western concept, and it’s been fundamental in moral panics following new technologies. That digital sociality is different, not lesser, is something we can academically agree on, but people very often believe otherwise.

My personal view is that figuring out what kind of digital usage suits you and then acting in moderation is ideal, without the need for extreme lengths, but in reality moderation can be quite difficult to achieve. And the thing is, we’re not just talking about choosing to text rather than meet in person, or read a book instead of go on Twitter. We’re talking about digital activities that are increasingly inescapable and part of life, like work e-mail or government services being moved online.

Going for a run or sailing is, again, a privileged activity for people with free time. Many people think getting back to nature or meeting in person are really important for human needs. But increasingly, not everyone has the ability to get away from devices, especially if you don’t have enough money to visit friends or travel to a forest, or you’re just too tired from working all the time. So Camp Grounded is part of what they feel is an urgent conversation about whether the technology we design addresses human, emotional needs.

Ed.: You write in the paper that “upon arrival at Camp Grounded, campers are met with hugs and milk and cookies” .. not to sound horrible, but isn’t this replacing one type of (self-focused) reassurance with another? I mean, it sounds really nice (as does the rest of the Camp), but it sounds a tiny bit like their “problem” is being fetishised / enjoyed a little bit? Or maybe that their problem isn’t to do with technology, but rather with confidence, anxiety etc.

Theodora: The people who run Camp Grounded would tell you themselves that digital detoxing is not really about digital technology. That’s just the current scapegoat for all the alienating aspects of modern life. They also take away real names, work talk, watches, and alcohol. One of the biggest things Camp Grounded tries to do is build up attendees’ confidence to be silly and playful and have their identities less tied to their work persona, which is a bit of a backlash against Silicon Valley’s intense work ethic. Milk and cookies comes from childhood, or America’s summer camps which many attendees went to as children, so it’s one little thing they do to get you to transition into that more relaxed and childlike way of behaving.

I’m not sure about “fetishised,” but Camp Grounded really jumps on board with the technology idea, using ironic things like an analog dating service called “embers,” a “human powered search” where you pin questions on a big noticeboard and other people answer, and an “inbox” where people leave you letters.

And you’re right, there is an aspect of digital detoxing which is very much a “middle class ailment” in that it can seem rather surface-level and indulgent, and tickets are pretty pricey, making it quite a privileged activity. But at the same time I think it is a genuine conversation starter about our relationship with technology and how it’s designed. I think a digital detox is more than just escapism or reassurance, for them it’s about testing a different lifestyle, seeing what works best for them and learning from that.

Ed.: Many of these technologies are designed to be “addictive” (to use the term loosely: maybe I mean “seductive”) in order to drive engagement and encourage retention: is there maybe an analogy here with foods that are too sugary, salty, fatty (i.e. addictive) for us? I suppose the line between genuine addiction and free choice / agency is a difficult one; and one that may depend largely on the individual. Which presumably makes any attempts to regulate (or even just question) these persuasive digital environments particularly difficult? Given the massive outcry over perfectly rational attempts to tax sugar, fat etc.

Theodora: The analogy between sugary, salty, or fatty foods and seductive technologies is drawn a lot — it was even made by danah boyd in 2009. Digital detoxing comes from a standpoint that tech companies aren’t necessarily working to enable meaningful connection, and are instead aiming to “hook” people in. That’s often compared to food companies that exist to make a profit rather than improve your individual nutrition, using whatever salt, sugar, flavourings, or packaging they have at their disposal to make you keep coming back.

There are two different ways of “fixing” perceived problems with tech: there are technical fixes, such as only letting you use a site for certain amounts of time, or re-designing it so that it’s less seductive; then there are normative fixes, which could be an individual deciding to make a change, or even society-wide measures, like the French labour law giving the “right to disconnect” from work emails on evenings and weekends.

One that sort of embodies both of these is The Time Well Spent project, run by Tristan Harris and the OII’s James Williams. They suggest different metrics for tech platforms, such as how well they enable good experiences away from the computer altogether. Like organic food stickers, they’ve suggested putting a stamp on websites whose companies have these different metrics. That could encourage people to demand better online experiences, and encourage tech companies to design accordingly.

So that’s one way that people are thinking about regulating it, but I think we’re still in the stages of sketching out what the actual problems are and thinking about how we can regulate or “fix” them. At the moment, the issue seems to depend on what the individual wants to do. I’d be really interested to know what other ideas people have had to regulate it, though.

Ed.: Without getting into the immense minefield of evolutionary psychology (and whether or not we are creating environments that might be detrimental to us mentally or socially: just as the Big Mac and Krispy Kreme are not brilliant for us nutritionally) — what is the lay of the land — the academic trends and camps — for this larger question of “Internet addiction” .. and whether or not it’s even a thing?

Theodora: In my experience academics don’t consider it a real thing, just as you wouldn’t say someone had an addiction to books. But again, that doesn’t mean it isn’t used all the time as a shorthand. And there are some academics who use it, like Kimberly Young, who proposed it in the 1990s. She still runs an Internet addiction treatment centre in New York, and there’s another in Fall City, Washington state.

The term certainly isn’t going away any time soon and the centres treat people who genuinely seem to have a very problematic relationship with their technology. People like the OII’s Andrew Przybylski (@ShuhBillSkee) are working on untangling this kind of problematic digital use from the idea of addiction, which can be a bit of a defeatist and dramatic term.

Ed.: As an ethnographer working at the Camp according to its rules (hand-written notes, analogue camera) .. did it affect your thinking or subsequent behaviour / habits in any way?

Theodora: Absolutely. In a way that’s a struggle, because I never felt that I wanted or needed a digital detox, yet having been to it three times now I can see the benefits. Going to camp made a strong case for being more careful with my technology use, for example not checking my phone mid-conversation, and I’ve been much more aware of it since. For me, that’s been part of an on-going debate that I have in my own life, which I think is really useful fuel for continuing to unravel this topic in my studies.

Ed.: So what are your plans now for your research in this area — will you be going back to Camp Grounded for another detox?

Theodora: Yes — I’ll be doing an ethnography of the digital detoxing community again this summer for my PhD and that will include attending Camp Grounded again. So far I’ve essentially done just preliminary fieldwork and visited to touch base with my informants. It’s easy to listen to the rhetoric around digital detoxing, but I think what’s been missing is someone spending time with them to really understand their point of view, especially their values, that you can’t always capture in a survey or in interviews.

In my PhD I hope to understand things like: how digital detoxers even think about technology, what kind of strategies they have to use it appropriately once they return from a detox, and how metaphor and language work in talking about the need to “unplug.” The food analogy is just one preliminary finding that shows how fascinating the topic is as soon as you start scratching away the surface.

Read the full article: Sutton, T. (2017) Disconnect to reconnect: The food/technology metaphor in digital detoxing. First Monday 22 (6).


OII DPhil student Theodora Sutton was talking to blog editor David Sutcliffe.

Estimating the Local Geographies of Digital Inequality in Britain: London and the South East Show Highest Internet Use — But Why? https://ensr.oii.ox.ac.uk/estimating-the-local-geographies-of-digital-inequality-in-britain/ Wed, 01 Mar 2017 11:39:54 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3962 Despite the huge importance of the Internet in everyday life, we know surprisingly little about the geography of Internet use and participation at sub-national scales. A new article on Local Geographies of Digital Inequality by Grant Blank, Mark Graham, and Claudio Calvino published in Social Science Computer Review proposes a novel method to calculate the local geographies of Internet usage, employing Britain as an initial case study.

In the first attempt to estimate Internet use at any small-scale level, they combine data from a sample survey, the 2013 Oxford Internet Survey (OxIS), with the 2011 UK census, employing small area estimation to estimate Internet use in small geographies in Britain. (Read the paper for more on this method, and discussion of why there has been little work on the geography of digital inequality.)
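
In rough outline, survey-plus-census small area estimation of this kind fits a model of Internet use on the survey’s demographic variables, applies that model to census counts of the same demographic groups within each small area, and then aggregates the predictions up to an area-level rate. The sketch below illustrates that logic only: the file and column names (oxis_survey.csv, census_cells.csv, age_band, education, income_band, area_code, population) are hypothetical, and the published study uses its own model specification and validation rather than this simplified version.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Survey data (hypothetical columns): rich covariates plus a 0/1 Internet-use outcome.
survey = pd.read_csv("oxis_survey.csv")

# Fit a simple logistic model of Internet use on demographic characteristics.
model = smf.logit(
    "internet_use ~ C(age_band) + C(education) + C(income_band)",
    data=survey,
).fit()

# Census data (hypothetical columns): one row per small area x demographic cell,
# with a population count but no Internet-use variable.
cells = pd.read_csv("census_cells.csv")

# Predict the probability of use for each cell, then aggregate to an
# estimated Internet-use rate for every small area.
cells["p_use"] = model.predict(cells)
area = (
    cells.assign(users=cells["p_use"] * cells["population"])
         .groupby("area_code")[["users", "population"]]
         .sum()
)
area["internet_use_rate"] = area["users"] / area["population"]
print(area.sort_values("internet_use_rate").head())
```

The design point is that the survey supplies the relationship between demographics and Internet use, while the census supplies where those demographic groups actually live; the resulting estimates are only as good as the assumption that this relationship holds across areas.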

There are two major reasons to suspect that geographic differences in Internet use may be important: apparent regional differences and the urban-rural divide. The authors do indeed find a regional difference: the area with least Internet use is in the North East, followed by central Wales; the highest is in London and the South East. But interestingly, geographic differences become non-significant after controlling for demographic variables (age, education, income etc.). That is, demographics matter more than simply where you live, in terms of the likelihood that you’re an Internet user.

Britain has one of the largest Internet economies in the developed world, and the Internet contributes an estimated 8.3 percent to Britain’s GDP. By reducing a range of geographic frictions and allowing access to new customers, markets and ideas it strongly supports domestic job and income growth. There are also personal benefits to Internet use. However, these advantages are denied to people who are not online, leading to a stream of research on the so-called digital divide.

We caught up with Grant Blank to discuss the policy implications of this marked disparity in (estimated) Internet use across Britain.

Ed.: The small-area estimation method you use combines the extreme breadth but shallowness of the national census, with the relative lack of breadth (2000 respondents) but extreme richness (550 variables) of the OxIS survey. Doing this allows you to estimate things like Internet use in fine-grained detail across all of Britain. Is this technique in standard use in government, to understand things like local demand for health services etc.? It seems pretty clever..

Grant: It is used by the government, but not extensively. It is complex and time-consuming to use well, and it requires considerable statistical skills. These have hampered its spread. It probably could be used more than it is — your example of local demand for health services is a good one.

Ed.: You say this method works for Britain because OxIS collects information based on geographic area (rather than e.g. randomly by phone number) — so we can estimate things geographically for Britain that can’t be done for other countries in the World Internet Project (including the US, Canada, Sweden, Australia). What else will you be doing with the data, based on this happy fact?

Grant: We have used a straightforward measure of Internet use versus non-use as our dependent variable. Similar techniques could predict and map a variety of other variables. For example, we could take a more nuanced view of how people use the Internet. The patterns of mobile use versus fixed-line use may differ geographically and could be mapped. We could separate work-only users, teenagers using social media, or other subsets. Major Internet activities could be mapped, including such things as entertainment use, information gathering, commerce, and content production. In addition, the amount of use and the variety of uses could be mapped. All these are major issues and their geographic distribution has never been tracked.

Ed.: And what might you be able to do by integrating into this model another layer of geocoded (but perhaps not demographically rich or transparent) data, e.g. geolocated social media / Wikipedia activity (etc.)?

Grant: The strength of the data we have is that it is representative of the UK population. The other examples you mention, like Wikipedia activity or geolocated social media, are all done by smaller, self-selected groups of people, who are not at all representative. One possibility would be to show how and in what ways they are unrepresentative.

Ed.: If you say that Internet use actually correlates with the “usual” demographics, i.e. education, age, income — is there anything policy-makers can realistically do with this information? i.e. other than hope that people go to school, never age, and get good jobs?

Grant: The demographic characteristics are things that don’t change quickly. These results point to the limits of the government’s ability to move people online. They suggest that the UK will never have 100% of its population online. This raises the question: what are realistic expectations for online activity? I don’t know the answer to that, but it is an important question that is not easily addressed.

Ed.: You say that “The first law of the Internet is that everything is related to age”. When are we likely to have enough longitudinal data to understand whether this is simply because older people never had the chance to embed the Internet in their lives when they were younger, or whether it is indeed the case that older people inherently drop out? Will this age-effect eventually diminish or disappear?

Grant: You ask an important but unresolved question. In the language of the social sciences: is the decline in Internet use with age an age-effect or a cohort-effect? An age-effect means that the Internet becomes less valuable as people age, and so the decline in use with age is just a reflection of the declining value of the Internet. If this explanation is true then the age-effect will persist into the indefinite future. A cohort-effect implies that the reason older people tend to use the Internet less is that fewer of them learned to use the Internet in school or work. They will eventually be replaced by active Internet-using people, and Internet use will no longer be associated with age; the decline with age will eventually disappear. We can address this question using data from the Oxford Internet Survey, but it is not a small area estimation problem.
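
One simple way to probe the difference with repeated cross-sectional data is to follow birth cohorts across survey waves: if each cohort’s usage rate stays roughly stable as it ages while younger cohorts start (and stay) higher, the cross-sectional age gradient reads as a cohort-effect; if a given cohort’s usage falls as it gets older, that reads as an age-effect. The sketch below is an illustration only, with a hypothetical file and column names (internet_use_waves.csv containing year, age, internet_use); it is not the OxIS analysis itself.

```python
import pandas as pd

# Hypothetical repeated cross-sections: one row per respondent, with the
# survey year, the respondent's age, and a 0/1 Internet-use indicator.
waves = pd.read_csv("internet_use_waves.csv")
waves["birth_cohort"] = waves["year"] - waves["age"]

age_bands = pd.cut(waves["age"], bins=range(15, 95, 10))
cohort_bands = pd.cut(waves["birth_cohort"], bins=range(1920, 2010, 10))

# Cross-sectional (age-effect) view: within each survey year, mean use by age band.
by_age = (
    waves.groupby(["year", age_bands], observed=True)["internet_use"]
         .mean()
         .unstack("year")
)

# Cohort view: follow the same birth cohorts across survey years. Stable rows
# alongside a steep cross-sectional age gradient would point to a cohort-effect.
by_cohort = (
    waves.groupby([cohort_bands, "year"], observed=True)["internet_use"]
         .mean()
         .unstack("year")
)

print(by_age)
print(by_cohort)
```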

Read the full article: Blank, G., Graham, M., and Calvino, C. 2017. Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.

This work was supported by the Economic and Social Research Council [grant ES/K00283X/1]. The data have been deposited in the UK Data Archive under the name “Geography of Digital Inequality”.


Grant Blank was speaking to blog editor David Sutcliffe.

The economic expectations and potentials of broadband Internet in East Africa https://ensr.oii.ox.ac.uk/the-economic-expectations-and-potentials-of-broadband-internet-in-east-africa/ Thu, 13 Mar 2014 09:39:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2603 Ed: There has been a lot of excitement about the potential of increased connectivity in the region: where did this come from? And what sort of benefits were promised?

Chris: Yes, at the end of the 2000s, when the first fibre cables landed in East Africa, there was much anticipation about what this new connectivity would mean for the region. I remember I was in Tanzania at the time, and people were very excited about this development – being tired of the slow and expensive satellite connections where even simple websites could take a minute to load. The perception, both in the international press and from East African politicians, was that the cables would be a game changer. Firms would be able to market and sell more directly to customers and reduce inefficient ‘intermediaries’. Connectivity would allow new types of digital-driven business, and it would provide opportunities for small and medium firms to become part of the global economy. We wanted to revisit this discussion. Were firms adopting the Internet as it became cheaper? Had this new connectivity had the effects that were anticipated, or was it purely hype?

Ed:  So what is the current level and quality of broadband access in Rwanda? ie how connected are people on the ground?

Chris: Internet access has greatly improved over the previous few years, and the costs of bandwidth have declined markedly. The government has installed a ‘backbone’ fibre network, and in the private sector there has also been a growth in the number of firms providing Internet service. There are still some problems though. Prices are still quite high, particularly for dedicated broadband connections, and in the industries we looked at (tea and tourism) many firms couldn’t afford it. Secondly, we heard a lot of complaints that lower bandwidth connections – WiMax and mobile internet – are unreliable and become saturated at peak times. So, Rwanda has come a long way, but we expect there will be more improvements in the future.

Ed: How much impact has the Internet had on Rwanda’s economy generally? And who is it actually helping, if so?

Chris: Economists at the World Bank have calculated that in developing economies a 10% improvement in Internet access leads to an increase in growth of 1.3%, so the effects should be taken seriously. In Rwanda, it’s too early to concretely see the effects in bottom-line economic growth. In this work we wanted to examine the effect on already established sectors to get insight into Internet adoption and use. In general, we can say that firms are increasingly adopting Internet connectivity in some form, and that firms have been able to take advantage of it and improve operations. However, it seems that the wider transformational effects of connectivity have so far been limited.

Ed: And specifically in terms of the Rwandan tea and tourism industries: has the Internet had much effect?

Chris: The global tourism industry is driven by Internet use, and so tour firms, guides and hotels in Rwanda have been readily adopting it. We can see that the Internet has been beneficial, particularly for those firms coordinating tourism in Rwanda, who can better handle volumes of tourists. In the tea industry, adoption is a little lower but the Internet is used in similar ways – to coordinate the movement of tea from production to processing to selling, and this simplifies management for firms. So, connectivity has had benefits by improvements in efficiency, and this complements the fact that both sectors are looking to attract international investment and become better integrated into markets. In that sense, one can say that the growth in Internet connectivity is playing a significant role in strategies of private sector development.

Ed: The project partly focuses on value chains: ie where value is captured at different stages of a chain, leading (for example) from Rwandan tea bush to UK Tesco shelf. How have individual actors in the chain been affected? And has there been much in the way of (the often promised) disintermediation — ie are Rwandan tea farmers and tour operators now able to ‘plug directly’ into international markets?

Chris: Value chains allow us to pay more attention to who the winners (and losers) of the processes described above are, and particularly to see whether these processes benefit Rwandan firms that are linked into global markets. One of the potential benefits originally discussed around new connectivity was that, with the growth of online channels and platforms — and through social media — firms, as they became connected, would have a more direct link to large markets and be able to disintermediate and improve the benefits they received. Generally, we can say that such disintermediation has not happened, for different reasons. In the tourism sector, many tourists are still reluctant to go directly to Rwandan tourist firms, for reasons related to trust (particularly around payment for holidays). In the tea sector, the value chains are very well established, and with just a few retailers in the end-markets, direct interaction with markets has simply not materialised. So, the hope of connectivity driving disintermediation in value chains has been limited by the market structure of both these sectors.

Ed: Is there any sense that the Internet is helping to ‘lock’ Rwanda into global markets and institutions: for example international standards organisations? And will greater transparency mean Rwanda is better able to compete in global markets, or will it just allow international actors to more efficiently exploit Rwanda’s resources — ie for the value in the chain to accrue to outsiders?

Chris: One of the core activities around the Internet that we found for both tea and tourism was firms using connectivity as a way to integrate themselves into logistic tracking, information systems, and quality and standards; whether this be automation in the tea sector or using global booking systems in the tourism sector. In one sense, this benefits Rwandan firms in that it’s crucial to improving efficiency in global markets, but it’s less clear that benefits of integration always accrue to those in Rwanda. It also moves away from the earlier ideas that connectivity would empower firms, unleashing a wave of innovation. To some of the firms we interviewed, it felt like this type of investment in the Internet was simply a way for others to better monitor, define and control every step they made, dictated by firms far away.

Ed: How do the project findings relate to (or comment on) the broader hopes of ICT4D developers? ie does ICT (magically) solve economic and market problems — and if so, who benefits?

Chris: For ICT developers looking to support development, there is often a tendency to build for actors who are struggling to find markets for their goods and services (such as apps linking buyers and producers, or market pricing information). But the industries we looked at are quite different — actors (even farmers) are already linked via value chains to global markets, and so these types of application were less useful. In interviews, we found other informal uses of the Internet amongst lower-income actors in these sectors, which point the way towards new ICT applications: sectoral knowledge building, systems adapted to allow smallholders to better understand their costs, and systems to allow better links amongst cooperatives. More generally, for those interested in ICT and development, this work highlights that changes in economies are not solely driven by connectivity, particularly in industries where rewards are already skewed towards larger global firms over those in developing countries. This calls for a context-dependent analysis of policy and structures, something that can be missed when more optimistic commentators discuss connectivity and the digital future.


Christopher Foster was talking to blog editor David Sutcliffe.
