Huw Davies – The Policy and Internet Blog (https://ensr.oii.ox.ac.uk) – Understanding public policy online

From private profit to public liabilities: how platform capitalism’s business model works for children
https://ensr.oii.ox.ac.uk/from-private-profit-to-public-liabilities-how-platform-capitalisms-business-model-works-for-children/
Thu, 14 Sep 2017

Two concepts have recently emerged that invite us to rethink the relationship between children and digital technology: the “datafied child” (Lupton & Williamson, 2017) and children’s digital rights (Livingstone & Third, 2017). The concept of the datafied child highlights the amount of data that is being harvested about children during their daily lives, and the children’s rights agenda includes a response to the ethical and legal challenges that the datafied child presents.

Children have never been afforded the full sovereignty of adulthood (Cunningham, 2009), but both these concepts suggest that children have become the points of application for new forms of power that have emerged from the digitisation of society. The most dominant form of this power is called “platform capitalism” (Srnicek, 2016). As a result of platform capitalism’s success, there has never been a stronger association between data, young people’s private lives, their relationships with friends and family, their life at school, and the broader political economy. In this post I will define platform capitalism, outline why it has come to dominate children’s relationship to the internet, and suggest two reasons in particular why this is problematic.

Children predominantly experience the Internet through platforms

‘At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects’ (Srnicek, 2016, p. 43). Examples of platform capitalism include the technology superpowers – Google, Apple, Facebook, and Amazon. There are, however, many other platforms relevant to children and young people: platforms for socialising, platforms for audio-visual content, platforms that communicate with smart devices and toys, platforms for games and sports franchises, and platforms that provide services (including within the public sector) that children or their parents use.

Young people choose to use platforms for play, socialising and expressing their identity. Adults have also introduced platforms into children’s lives: for example, Capita SIMS is a platform used by over 80% of schools in the UK for assessment and monitoring (over the coming months at the Oxford Internet Institute we will be studying such platforms, including SIMS, for the Oak Foundation). Children’s personal use of platforms, meanwhile, has been facilitated by the popularity of tablets and smartphones.

Amongst the young, there has been a sharp uptake in tablet and smartphone usage at the expense of PC or laptop use. Sixteen per cent of 3-4 year olds have their own tablet, with this incidence doubling for 5-7 year olds. By the age of 12, smartphone ownership begins to outstrip tablet ownership (Ofcom, 2016). In our own research at the OII, even when we included low-income families in our sample, 93% of teenagers owned a smartphone. This shift has brought forth the ‘appification’ of the web that Zittrain predicted in 2008: children and young people now predominantly experience the internet via platforms that we can think of as controlled gateways to the open web.

Platforms exist to make money for investors

In public discourse some of these platforms are called social media. This term distracts us from the reason many of these publicly floated companies exist: to make money for their investors. It is only logical for all of them to pursue the WeChat model that has become so popular in China. WeChat is a closed-circuit platform, in that it keeps all engagements with the internet, including shopping, betting, and video calls, within its corporate compound. This brings WeChat closer to a monopoly on data extraction.

Platforms have consolidated their success by buying out their competitors. Alphabet, Amazon, Apple, Facebook and Microsoft have made 436 acquisitions worth $131 billion over the last decade (Bloomberg, 2017). Alternatively, they simply mimic the features of their competitors: after Facebook acquired Instagram, for example, Instagram introduced Stories, a feature popularised by Snapchat that lets users upload photos and videos as a ‘story’ that automatically expires after 24 hours.

The more data these companies capture that their competitors cannot, the more value they can extract from it and the better their business model works. It is unsurprising, therefore, that when we asked groups of teenagers during our research to draw a visual representation of what they thought the world wide web and the internet looked like, almost all of them just drew corporate logos (they also told us they had no idea that Facebook owns WhatsApp and Instagram, or that Google owns YouTube). Platform capitalism dominates and controls their digital experiences. But what provisions do these platforms make for children?

The General Data Protection Regulation (GDPR), set to be implemented in all EU states, including the UK, in 2018, says that processing the data of children below a threshold age (16 by default, though member states may lower it to 13) shall only be lawful if and to the extent that consent is given or authorised by the child’s parent or guardian. Because most platforms are American-owned, they tend to apply a piece of US federal legislation known as COPPA; the age of consent for using Snapchat, WhatsApp, Facebook, and Twitter, for example, is therefore set at 13. Yet the BBC found last year that 78% of children aged 10 to 12 had signed up to a platform, including Facebook, Instagram, Snapchat and WhatsApp.

Platform capitalism offloads its responsibilities onto the user

Why is this a problem? Firstly, because platform capitalism offloads any responsibility onto problematically normative constructs of childhood, parenting, and parental relations. The owners of platforms assume children will always consult their parents before using their services, and that parents will read and understand their terms and conditions, which, research confirms, few users, whether children or adults, even look at.

Moreover, we found in our research that many parents don’t have the knowledge, expertise, or time to monitor what their children are doing online. Some parents, for instance, worked night shifts or had more than one job. We talked to children who regularly moved between homes and whose estranged parents didn’t communicate with each other to supervise their children online. We found that parents who are in financial difficulties, or affected by mental or physical illness, are often unable to keep on top of their children’s digital lives.

We also interviewed children who used strategies to manage their parents’ anxieties so that they would be left alone. They would, for example, allow their parents to be their friends on Facebook, but do all their personal communication on other platforms that their parents knew nothing about. Often, then, the most vulnerable children offline (children in care, for example) are the most vulnerable children online. My colleagues at the OII found that 9 out of 10 teenagers who are bullied online also face regular ‘traditional’ bullying. Helping these children requires extra investment from their families, as well as from teachers, charities and social services. The burden also falls on schools to address the fake news and extremism, such as Holocaust denial, that children can encounter on platforms.

This is typical of platform capitalism. It monetises what are called social graphs: the networks of users on its platforms, which it then makes available to advertisers. Social graphs are more than just nodes and edges representing our social lives: they are embodiments of often intimate or very sensitive data (which can often be de-anonymised by linking, matching and combining digital profiles). When graphs become dysfunctional and manifest social problems such as abuse, doxxing, stalking, and grooming, local social systems and institutions, which are usually publicly funded, have to deal with the fallout. These institutions are often either under-resourced and ill-equipped to solve such problems, or already overburdened.
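To see how easily linking and matching can undo weak anonymisation, here is a toy sketch in Python. Everything in it is hypothetical (the dataset names, fields and records are invented for illustration, not drawn from any real platform or broker): two separately ‘anonymised’ exports are joined on shared quasi-identifiers, and the result is a single, richer and more sensitive profile.

```python
# Toy illustration of de-anonymisation by linkage. All datasets, field names
# and records are hypothetical.

social_graph_export = [
    {"user": "u_1042", "birth_year": 2004, "postcode": "OX2",
     "school": "Hilltop Academy", "interests": ["gaming", "anime"]},
    {"user": "u_2381", "birth_year": 2003, "postcode": "OX4",
     "school": "Riverside School", "interests": ["football"]},
]

fitness_app_export = [
    {"member": "m_77", "birth_year": 2004, "postcode": "OX2",
     "school": "Hilltop Academy", "resting_heart_rate": 64},
]

def link_profiles(left, right, keys=("birth_year", "postcode", "school")):
    """Join records from two datasets that share the same quasi-identifiers."""
    matches = []
    for a in left:
        for b in right:
            if all(a.get(k) == b.get(k) for k in keys):
                matches.append({**a, **b})  # the combined, richer profile
    return matches

print(link_profiles(social_graph_export, fitness_app_export))
# One combined record: the 'anonymous' fitness data now carries the social
# profile's interests, and vice versa.
```

Real data brokers work at far greater scale and with probabilistic matching, but the principle is the same: every additional dataset that is linked in makes the combined profile more identifying.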

Are platforms too powerful?

The second problem is the ecosystems of dependency that emerge, within which smaller companies or other corporations try to monetise their associations with successful platforms: they seek to get in on the monopolies of data extraction that the big platforms are creating. Many of these companies are not wealthy corporations and therefore don’t have the infrastructure or expertise to develop their own robust security measures. They may cut costs by neglecting security, or subcontract services out to yet more companies, which are then added to the network of data sharers.

Again, the platforms offload any responsibility onto the user. For example, WhatsApp tells its users: “Please note that when you use third-party services, their own terms and privacy policies will govern your use of those services.” These ecosystems are networks that are only as strong as their weakest link. There are many infamous examples that illustrate this, including the so-called ‘Snappening’, in which sexually explicit pictures harvested from Snapchat (a platform that is popular with teenagers) were released onto the open web. There is also a growing industry in fake apps that enable illegal data capture and fraud by leveraging the implicit trust users place in corporate walled gardens.

What can we do about these problems? Platform capitalism is restructuring labour markets and social relations in such a way that opting out of it is becoming an option available only to a privileged few. Moreover, we found that teenagers whose parents prohibited them from using social platforms often felt socially isolated and stigmatised. In the messy reality of social life, platforms can’t continue to offload their responsibilities onto parents and schools.

We need solutions fast because, by tacitly accepting the terms and conditions of platform capitalism (particularly when we are told it is not responsible for the harms its business model can facilitate), we may now be passing an event horizon where these companies become too powerful, unaccountable, and distant from our local reality.

References

Hugh Cunningham (2009) Children and Childhood in Western Society Since 1500. Routledge.

Sonia Livingstone, Amanda Third (2017) Children and young people’s rights in the digital age: An emerging agenda. New Media & Society 19(5).

Deborah Lupton, Ben Williamson (2017) The datafied child: The dataveillance of children and implications for their rights. New Media & Society 19(5).

Nick Srnicek (2016) Platform Capitalism. Wiley.

How and why is children’s digital data being harvested?
https://ensr.oii.ox.ac.uk/how-and-why-is-childrens-digital-data-being-harvested/
Wed, 10 May 2017

Anyone of a certain age remembers logging on to a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to firmly confine this distinction to history: the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh children as they go about their everyday lives. But it’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smartphones or tablets in a variety of new contexts, including on buses and trains, in hotels and restaurants, and in schools, libraries and health centre waiting rooms.

2. Research confirms that apps on smartphones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted: the WeChat app, popular in China, is becoming its full realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles and smart TVs) pick up data produced by children. Marketing for smart toys tells us they enhance children’s play, augment their learning, incentivise healthy habits and can even reclaim family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images (a minimal sketch of this round trip follows below). This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star Wars BB-8); toys-to-life, which connect action figures to video games (e.g. Skylanders, Amiibo); puzzle and building games (e.g. Osmo, Lego Fusion); and children’s GPS-enabled wearables such as smart watches and fitness trackers. We need to look beyond the marketing to see what is making this technology ubiquitous.
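As a rough illustration of that round trip, here is a minimal Python sketch. The endpoint URL, the field names and the function itself are invented for the example; no real vendor’s API is being described. The point is simply that what a child says to a ‘smart’ toy leaves the home, is processed remotely, and comes back as a scripted response.

```python
# Hypothetical cloud-connected toy round trip - a sketch, not a real vendor API.
import json
import urllib.request

CLOUD_ENDPOINT = "https://toy-cloud.example.com/v1/utterances"  # made-up URL

def send_utterance_to_cloud(child_utterance: str, device_id: str) -> str:
    """Transmit what the toy heard and return the cloud's suggested reply."""
    payload = json.dumps({
        "device_id": device_id,        # identifies the toy, and so the household
        "utterance": child_utterance,  # the child's words leave the home here
    }).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
    return reply.get("spoken_response", "")

# e.g. send_utterance_to_cloud("what's the weather like?", device_id="toy-1234")
```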

The commercial incentives to collect children’s data

Service providers now use free Wi-Fi as an additional enticement for their customers, including families. Apps offer companies opportunities to contain children’s usage within a walled garden so that they can capture valuable marketing data, or to offer children and parents opportunities to make in-app purchases. Therefore, more and more companies, especially companies with no background in technology, such as bus operators and cereal manufacturers, use Wi-Fi and apps to engage with children.

The smart label is also a new way for companies to differentiate their products in saturated markets that overwhelm consumers with choice. However, security is an additional cost that manufacturers of smart technologies are often unwilling to pay. The microprocessors in smart toys often don’t have the processing power required for strong security measures and secure communication, such as encryption (an 8-bit microcontroller, for example, cannot support the industry-standard SSL/TLS protocols used to encrypt communications). These devices are also often designed without the ability to accommodate software or firmware updates. Some smart toys transmit data in clear text (parents, of course, are unaware of such details when purchasing these toys).
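To make the clear-text point concrete, here is a minimal Python sketch contrasting the two modes of transmission. The host name and the payload are invented for illustration; this is not any real toy’s code.

```python
# Minimal sketch of clear-text vs encrypted transmission (hypothetical endpoint).
import socket
import ssl

payload = b"child_transcript: my name is Alice and I live at ..."

# 1. Clear text: the bytes travel over the network exactly as written, so anyone
#    on the same Wi-Fi or network path can read them.
with socket.create_connection(("toy-cloud.example.com", 80)) as sock:
    sock.sendall(payload)

# 2. Encrypted: the same bytes are wrapped in TLS, so an eavesdropper sees only
#    ciphertext (and the server's certificate is verified before sending).
context = ssl.create_default_context()
with socket.create_connection(("toy-cloud.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="toy-cloud.example.com") as tls:
        tls.sendall(payload)
```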

While children are using their devices they are constantly emitting data. Because this data is so valuable to businesses, it has become a cliché to frame it as an exploitable ‘natural’ resource like oil. This means every digitisable movement, transaction and interaction we make is potentially commodifiable. Moreover, the networks of specialist companies, partners and affiliates that capture, store, process, broker and resell the new oil are becoming so complex that they are impenetrable. This includes the involvement of commercial actors in public institutions such as schools.

Lupton & Williamson (2017) use the term ‘datafied child’ to draw attention to this creeping normalisation of harvesting data about children. As its provenance becomes more opaque, the data is orphaned and vulnerable to further commodification. And when it is shared across unencrypted channels or stored using weak security (as high-profile cases show), it is easily hacked. The implications of this are only beginning to emerge. In response, children’s rights, privacy and protection, the particular ethics of the capture and management of children’s data, and its potential for commercial exploitation are all beginning to receive more attention.

Refocusing on children

Apart from a ticked box, companies have no way of knowing whether a parent or child has given their consent. Children, or their parents, will often sign away their data to quickly dispatch any impediment to accessing the Wi-Fi. When children use public Wi-Fi, they are opening channels, often unencrypted, to their devices. We need to start mapping the range of actors who are collecting data in this way and find out whether they have any provisions for protecting children’s data.

Similarly, when children use their apps, companies assume that a responsible adult has agreed to the terms and conditions. Parents are expected to be gatekeepers, boundary setters, and supervisors. However, for various reasons, there may not be an informed, (digitally) literate adult on hand. For example, parents may be too busy with work or too ill to stay on top of their children’s complex digital lives. Children are educated in year groups but they share digital networks and practices with older children and teenagers, including siblings, extended family members, and friends who may enable risky practices.

We may need to start looking at additional ways of protecting children that transfer the burden away from the family and onto the companies that are capturing and monetising the data. This includes being realistic about the efficacy of current legislation. Because children can simply enter a fake birthdate, application of the US Children’s Online Privacy Protection Act (COPPA) to restrict the collection of children’s personal data online has been fairly ineffectual (boyd et al., 2011). In Europe, the incoming General Data Protection Regulation sets a default age of 16 below which children cannot consent to having their data processed (member states may lower this to 13), potentially encouraging an even larger population of minors to lie about their age online.
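To see why a self-declared birthdate achieves so little, here is a minimal sketch of a typical sign-up age gate (hypothetical code, not any platform’s actual implementation). The check can only evaluate whatever date the user chooses to type, so entering an earlier year is enough to pass it.

```python
# Hypothetical sign-up age gate - a sketch, not any real platform's code.
from datetime import date

MINIMUM_AGE = 13  # COPPA-style threshold; the GDPR default is 16

def age_on(birthdate, today):
    """Whole years between the claimed birthdate and a given day."""
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def may_sign_up(claimed_birthdate, today=None):
    # The platform only ever sees the *claimed* birthdate: there is no
    # independent verification, so the gate is only as honest as the user.
    return age_on(claimed_birthdate, today or date.today()) >= MINIMUM_AGE

signup_day = date(2017, 5, 10)  # a fixed date for a deterministic example
print(may_sign_up(date(2007, 6, 1), today=signup_day))  # False: genuinely 9 years old
print(may_sign_up(date(2000, 6, 1), today=signup_day))  # True: same child, earlier year typed
```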

We need to ask what data capture and management would look like if they were guided by a children’s framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would make better regulation a commercial incentive. We will be exploring these and other questions that emerge over the coming months.


This work is part of the OII project “Child safety on the Internet: looking beyond ICT actors”, which maps the range of non-ICT companies engaging digitally with children and identifies areas where their actions might affect a child’s exposure to online risks such as data theft, adverse online experiences or sexual exploitation. It is funded by the Oak Foundation.

Tackling Digital Inequality: Why We Have to Think Bigger
https://ensr.oii.ox.ac.uk/tackling-digital-inequality-why-we-have-to-think-bigger/
Wed, 15 Mar 2017

Numerous academic studies have highlighted the significant differences in the ways that young people access, use and engage with the Internet, and the implications this has for their lives. While the majority of young people have some form of access to the Internet, for some their connections are sporadic, dependent on credit on their phones, an available library, or Wi-Fi open to the public. Qualitative data from a variety of countries has shown that such limited forms of access can create difficulties for these young people as an Internet connection becomes essential for socialising, accessing public services, saving money, and learning at school.

While the UK government has financed technological infrastructure and invested in schemes to address digital inequalities, the outcomes of these schemes are rarely uniformly positive or transformative for the people involved. This gap between expectation and reality demands theoretical attention, with more focus placed on the cultural, political and economic contexts of the digitally excluded, and on the various attempts to “include” them.

Focusing on a two-year digital inclusion scheme for 30 teenagers and their families initiated by a local council in England, a qualitative study by Huw C. Davies, Rebecca Eynon, and Sarah Wilkin analyses why, despite the good intentions of the scheme’s stakeholders, it fell short of its ambitions. It also explains why the neoliberal systems of governance that increasingly shape the cultures and behaviours of Internet service providers and schools, by incentivising action that is counterproductive to addressing digital inequality, cannot solve the problems they create.

We caught up with the authors to discuss the study’s findings:

Ed.: It was estimated that around 10% of 13-year-olds in the study area lacked dependable access to the Internet and had no laptop or PC at home. How does this impact educational outcomes?

Huw: It’s impossible to disaggregate technology from everything else that can affect a young person’s progress through school. However, one school in our study had transferred all its homework and assessments online while the other schools were progressing to this model. The students we worked with said doing research for homework is synonymous with using Google or Wikipedia, and it’s the norm to send homework and coursework to teachers by email, upload it to Virtual Learning Environments, or print it out at home. Therefore students who don’t have access to the Internet have to spend time and effort finding work-arounds such as using public libraries. Lack of access also excludes such students from casual learning from resources online or pursuing their own interests in their own time.

Ed.: The digital inclusion scheme was designed as a collaboration between a local council in England (who provided Internet services) and schools (who managed the scheme) in order to test the effect of providing home Internet access on educational outcomes in the area. What was your own involvement, as researchers?

Huw: Initially, we were the project’s expert consultants: we were there to offer advice, guidance and training to teachers and assess the project’s efficacy on its conclusion. However, as it progressed we took on the responsibility of providing skills training to the scheme’s students and technical support to their families. When it came to assessing the scheme, by interviewing young people and their families at their homes, we were therefore able to draw on our working knowledge of each family’s circumstances.

Ed.: What was the outcome of the digital inclusion project — i.e. was it “successful”?

Huw: As we discuss in the article, defining success in these kinds of schemes is difficult. Subconsciously, many people involved in such schemes expect technology to be transformative for the young people involved, yet in reality the changes you see are more nuanced and subtle. Some of the scheme’s young people found apprenticeships or college courses, taught themselves new skills, used social networks for the first time and spoke to friends and relatives abroad by video for free. These success stories definitely made the scheme worthwhile. However, despite the significant goodwill of the schools, local council, and the families to make the scheme a success, there were also frustrations and problems. In the article we talk about these problems and argue that the challenges the scheme encountered are not just practical issues to be resolved, but systemic issues that need to be explicitly recognised in future schemes of this kind.

Ed.: And in the article you use neoliberalism as a frame to discuss these issues…?

Huw: Yes. But we recognise in the article that this is a concept that needs to be used with care. It’s often used pejoratively and/or imprecisely. We have taken it to mean a set of guiding principles that are intended to produce a better quality of services through competition, targets, results, incentives and penalties. The logic of these principles, we argue, influences the way organisations treat individual users of their services.

For example, for Internet Service Providers (ISPs) the logic of neoliberalism is to subcontract out the constituent parts of an overall service provision to create mini internal markets that (in theory) promote efficiency through competition. Yet this logic only really works if everyone comes to the market with similar resources and abilities to make choices. If customers are well informed and wealthy enough to remind companies that they can take their business elsewhere, these companies will have a strong incentive to improve their services and reduce their costs. If customers are disempowered by a lack of choice, the logic of neoliberalism tends to marginalise or ignore their needs. These were low-income families with little or no experience of exercising consumer choice and rights. For them, therefore, these mini markets didn’t work.

In the schools we worked with, the logic of neoliberalism meant staff and students felt under pressure to meet certain targets: they all had to prioritise things that were measured and measurable. Failing to meet these targets would mean having to account for what went wrong, losing out on a reward, or facing disciplinary action. It therefore becomes much more difficult for schools to devote time and energy to schemes such as this.

Ed.: Were there any obvious lessons that might lead to a better outcome if the scheme were to be repeated: or are the (social, economic, political) problems just too intractable, and therefore too difficult and expensive to sort out?

Huw: Many of the families told us that access to the Internet was becoming ever more vital. This was not just for homework but also for access to public and health services (which are increasingly delivered online) and for finding the best deals online for consumer services. They often told us, therefore, that they would do whatever it took to keep their connection after the two-year scheme ended. This often meant paying for broadband out of social security benefits or income that was too low to be taxable: income that could otherwise have been spent on, for example, food and clothing. Given its necessity, we should have a national conversation about providing this service to low-income families for free.

Ed.: Some of the families included in the study could be considered “hard to reach”. What were your experiences of working with them?

Huw: There are many practical and ethical issues to address before these sorts of schemes can begin. These families often face multiple intersecting problems that involve many agencies (who don’t necessarily communicate with each other) intervening in their lives. For example, some of the scheme’s families were dealing with mental illness, disability, poor housing, and debt all at the same time. It is important that such schemes are set up with an awareness of this complexity. We are very grateful to the families that took part in the scheme and the insights they gave us for how such schemes should run in the future.

Ed.: Finally, how do your findings inform all the studies showing that “digital inclusion schemes are rarely uniformly positive or transformative for the people involved”? Are these studies gradually leading to improved knowledge (and better policy intervention), or simply showing the extent of the problem without necessarily offering “solutions”?

Huw: We have tried to put this scheme into a broader context to show that such policy interventions have to be much more ambitious, intelligent, and holistic. We never assumed digital inequality is an isolated problem that can be fixed with a free broadband connection; when people are unable to afford the Internet, it is an indication of other forms of disadvantage that have to be addressed simultaneously, in a sympathetic and coordinated way. Hopefully, we have contributed to the growing awareness that such attempts to ameliorate the symptoms may offer some relief but should never be considered a cure in themselves.

Read the full article: Huw C. Davies, Rebecca Eynon, Sarah Wilkin (2017) Neoliberal gremlins? How a scheme to help disadvantaged young people thrive online fell short of its ambitions. Information, Communication & Society. DOI: 10.1080/1369118X.2017.1293131

The article is an output of the project “Tackling Digital Inequality Amongst Young People: The Home Internet Access Initiative“, funded by Google.

Huw Davies was talking to blog editor David Sutcliffe.
