Child Safety – The Policy and Internet Blog https://ensr.oii.ox.ac.uk Understanding public policy online

How and why is children’s digital data being harvested? https://ensr.oii.ox.ac.uk/how-and-why-is-childrens-digital-data-being-harvested/ Wed, 10 May 2017 11:43:54 +0000

Anyone of a certain age remembers logging on to a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to consign that distinction to history: the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh children as they go about their everyday lives. It is time to refocus on our responsibilities to children before these are eclipsed by the commercial incentives driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smartphones or tablets in a variety of new contexts: on buses and trains, in hotels and restaurants, and in schools, libraries, and health centre waiting rooms.

2. Research confirms that apps on smartphones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted; the WeChat app, popular in China, is becoming its fullest realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles, and smart TVs) are picking up data produced by children. Marketing about smart toys tells us they enhance children’s play, augment children’s learning, incentivise children’s healthy habits, and can even reclaim family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images. This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star Wars BB-8); toys-to-life, which connect action figures to video games (e.g. Skylanders, Amiibo); puzzle and building games (e.g. Osmo, Lego Fusion); and children’s GPS-enabled wearables such as smart watches and fitness trackers. We need to look beyond the marketing to see what is making this technology ubiquitous.

The commercial incentives to collect children’s data

Service providers now use free Wi-Fi as an additional enticement to their customers, including families. Apps offer companies opportunities to contain children’s usage in a walled garden so that they can capture valuable marketing data, or to offer children and parents opportunities to make in-app purchases. As a result, more and more companies, especially companies with no background in technology such as bus operators and cereal manufacturers, use Wi-Fi and apps to engage with children.

The smart label is also a new way for companies to differentiate their products in saturated markets that overwhelm consumers with choice. However, security is an additional cost that manufacturers of smart technologies are often unwilling to pay. The microprocessors in smart toys often lack the processing power required for strong security measures and secure communication, such as encryption (an 8-bit microcontroller, for example, cannot support the industry-standard SSL/TLS protocols used to encrypt communications). These devices are also frequently designed without the ability to accommodate software or firmware updates. Some smart toys even transmit data in clear text (parents, of course, are unaware of such details when purchasing these toys).
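To make that cost concrete, here is a minimal Python sketch (illustrative only; the function name and parameters are our own, not any toy vendor’s API) contrasting what a clear-text connection and a TLS-protected one demand of the client device. The encrypted path requires certificate validation and a cryptographic handshake – precisely the work a low-powered toy microcontroller struggles to do:

```python
import socket
import ssl

# What secure communication requires on the client side: a TLS configuration
# with certificate validation and modern cipher suites.
ctx = ssl.create_default_context()

def open_channel(host: str, port: int, encrypted: bool) -> socket.socket:
    """Open either a clear-text or a TLS-wrapped TCP connection (sketch)."""
    raw = socket.create_connection((host, port))
    if not encrypted:
        # Clear text: every byte is readable by anyone on the network path,
        # e.g. anyone sharing the same public Wi-Fi.
        return raw
    # TLS: the handshake and certificate checks here demand memory and CPU
    # that the cheapest smart-toy microcontrollers do not have.
    return ctx.wrap_socket(raw, server_hostname=host)
```

The point of the sketch is that encryption is not a bolt-on flag but an extra software stack the device must be provisioned to run, which is why it is so often omitted on cost grounds.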

While children are using their devices they are constantly emitting data. Because this data is so valuable to businesses, it has become a cliché to frame it as an exploitable ‘natural’ resource like oil. Every digitisable movement, transaction, and interaction we make is potentially commodifiable. Moreover, the networks of specialist companies, partners, and affiliates that capture, store, process, broker, and resell the new oil are becoming so complex as to be impenetrable. This includes the involvement of commercial actors in public institutions such as schools.

Lupton & Williamson (2017) use the term ‘datafied child’ to draw attention to this creeping normalisation of harvesting data about children. As its provenance becomes more opaque, the data is orphaned and vulnerable to further commodification. And when it is shared across unencrypted channels or stored using weak security (as high-profile cases show) it is easily hacked. The implications of this are only beginning to emerge. In response, children’s rights, privacy, and protection; the particular ethics of the capture and management of children’s data; and its potential for commercial exploitation are all beginning to receive more attention.

Refocusing on children

Apart from a ticked box, companies have no way of knowing whether a parent or child has given their consent. Children, or their parents, will often sign away their data simply to dispatch any impediment to accessing the Wi-Fi. When children use public Wi-Fi they open channels to their devices that are often unencrypted. We need to start mapping the range of actors who are collecting data in this way and find out whether they have any provisions for protecting children’s data.

Similarly, when children use their apps, companies assume that a responsible adult has agreed to the terms and conditions. Parents are expected to be gatekeepers, boundary setters, and supervisors. However, for various reasons, there may not be an informed, (digitally) literate adult on hand. For example, parents may be too busy with work or too ill to stay on top of their children’s complex digital lives. Children are educated in year groups but they share digital networks and practices with older children and teenagers, including siblings, extended family members, and friends who may enable risky practices.

We may need to start looking at additional ways of protecting children that transfer the burden away from the family and towards the companies capturing and monetising the data. This includes being realistic about the efficacy of current legislation. Because children can simply enter a fake birthdate, the application of the US Children’s Online Privacy Protection Act to restrict the collection of children’s personal data online has been fairly ineffectual (boyd et al., 2011). In Europe, the incoming General Data Protection Regulation allows EU states to set a minimum age of 16 under which children cannot consent to having their data processed, potentially encouraging an even larger population of minors to lie about their age online.
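The weakness is easy to see in code. The sketch below is a deliberately naive, hypothetical age gate of the kind that birthdate self-declaration amounts to in practice; the function name and the 13-year threshold are our own illustration, not any statute’s text:

```python
from datetime import date

def coppa_age_gate(birthdate: date, today: date, min_age: int = 13) -> bool:
    """Naive self-declaration age gate: trusts whatever birthdate the user types in."""
    # Compute age in whole years, subtracting one if the birthday hasn't occurred yet.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= min_age

# A real 10-year-old is refused only if they tell the truth:
coppa_age_gate(date(2010, 1, 1), today=date(2017, 5, 10))  # False – blocked
# ...and admitted the moment they type an earlier year the server cannot verify:
coppa_age_gate(date(1990, 1, 1), today=date(2017, 5, 10))  # True – waved through
```

Nothing on the server side can distinguish the two calls, which is why the ticked box carries so little evidential weight.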

We need to ask what data capture and management would look like if they were guided by a children’s rights framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would make better regulation a commercial incentive. We will be exploring these and similar questions as they emerge over the coming months.


This work is part of the OII project “Child safety on the Internet: looking beyond ICT actors”, which maps the range of non-ICT companies engaging digitally with children and identifies areas where their actions might affect a child’s exposure to online risks such as data theft, adverse online experiences, or sexual exploitation. It is funded by the Oak Foundation.

Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research https://ensr.oii.ox.ac.uk/evidence-on-the-extent-of-harms-experienced-by-children-as-a-result-of-online-risks-implications-for-policy-and-research/ Tue, 29 Jul 2014 10:47:28 +0000
The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. Image by Brad Flickinger
Child Internet safety is a topic that continues to attract a great deal of media coverage and policy attention. Recent UK policy initiatives, such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network. Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions.

Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm and activities online, albeit often outside the social sciences. With support from the OUP Fell Fund, I worked with colleagues Vera Slavtcheva-Petkova and Monica Bulger to review the extent of evidence available across these other disciplines. Looking at journal articles published between 1997 and 2012, we aimed to identify any empirical evidence detailing Internet-related harms experienced by children and adolescents and to gain a sense of the types of harm recorded, their severity and frequency.

Our findings demonstrate that there are many good studies out there which do address questions of harm, rather than just risk. The narrowly drawn search found 148 empirical studies which either clearly delineated evidence of very specific harms, or offered some evidence of less well-defined harms. Further, these studies offer rich insights into three broad types of harm: health-related (including harms relating to the exacerbation of eating disorders, self-harming behaviour and suicide attempts); sex-related (largely focused on studies of online solicitation and child abuse); and bullying-related (including the effects on mental health and behaviour). Such a range of coverage would come as no surprise to most researchers focusing on children’s Internet use – these are generally well-documented areas, albeit with the focus more normally on risk rather than harm. Perhaps more surprising was the absence in our search of evidence of harm in relation to privacy violations or economic well-being, both of which are increasingly discussed as significant concerns or risks for minors using the Internet. This gap might have been an artefact of our search terms, of course, but given the policy relevance of both issues, more empirical study of not just risk but actual harm would seem to be merited in these areas.

Another important gap concerned the absence of studies demonstrating that severe harms befall those without prior evidence of vulnerability or risky behaviour. For example, in relation to websites promoting self-harm or eating disorders, there is little evidence that young people previously unaffected by self-harm or eating disorders are influenced by these websites. This isn’t unexpected – other researchers have shown that harm more often befalls those who display riskier behaviour – but it is important to bear in mind when devising treatment or policy strategies for reducing such harms.

It’s also worth noting how difficult it is to determine the prevalence of harms. The best-documented cases are often those where medical, police or court records provide great depth of qualitative detail about individual suffering in cases of online grooming and abuse, eating disorders or self-harm. Yet these cases provide little insight into prevalence. And whilst survey research offers more sense of scale, we found substantial disparities in the levels of harm reported on some issues, with the prevalence of cyber-bullying, for example, varying from 9% to 72% across studies with similar age groups of children. It’s also clear that we quite simply need much more research and policy attention on certain issues. The studies relating to the online grooming of children and production of abuse images are an excellent example of how a broad research base can make an important contribution to our understanding of online risks and harms. Here, journal articles offered a remarkably rich understanding, drawing on data from police reports, court records or clinical files as well as surveys and interviews with victims, perpetrators and carers. There would be real benefits to taking a similarly thorough approach to the study of users of pro-eating disorder, self-harm and pro-suicide websites.

Our review flagged up some important lessons for policy-makers. First, whilst we (justifiably) devote a wealth of resources to the small proportion of children experiencing severe harms as a result of online experiences, the number of those experiencing more minor harms, such as those caused by online bullying, is likely much higher and may thus deserve more attention than it currently receives. Second, the diversity of topics discussed and types of harm identified suggests that a one-size-fits-all solution will not work when it comes to the online protection of minors. Simply banning or filtering all potentially harmful websites, pages, or groups might be more damaging than useful if it drives users to less public means of communicating. Further, whilst some content, such as child sexual abuse images, is clearly illegal and generates great harm, other content and sites are less easy to condemn where the balance between perpetuating harmful behaviour and providing valued peer support is hard to call. It should also be remembered that the need to protect young people from online harms must always be balanced against the need to protect their rights (and opportunities) to freely express themselves and seek information online.

Finally, this study makes an important contribution to public debates about child online safety by reminding us that risk and harm are not equivalent and should not be conflated. More children and young people are exposed to online risks than are actually harmed as a result, and our policy responses should reflect this.

A more detailed account of our findings can be found in this Information, Communication and Society journal article: Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research. If you can’t access this, please e-mail me for a copy.


Victoria Nash is a Policy and Research Fellow at the Oxford Internet Institute (OII), responsible for connecting OII research with policy and practice. Her own particular research interests draw on her background as a political theorist, and concern the theoretical and practical application of fundamental liberal values in the Internet era. Recent projects have included efforts to map the legal and regulatory trends shaping freedom of expression online for UNESCO, analysis of age verification as a tool to protect and empower children online, and the role of information and Internet access in the development of moral autonomy.

Exploring variation in parental concerns about online safety issues https://ensr.oii.ox.ac.uk/exploring-variation-parental-concerns-about-online-safety-issues/ Thu, 14 Nov 2013 08:29:42 +0000

Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd / Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole.

As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices — both the good and the bad. Our goal is to introduce research that can help contextualize socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd / Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogenous.

Parents have different and often conflicting views about what’s best for their children or for children writ large. This creates a significant challenge for designing interventions that are meant to be beneficial and applicable to a diverse group of people. What’s beneficial or desirable to one may not be positively received by another. More importantly, what’s helpful to one group of parents may not actually benefit parents or youth as a whole. As a result, we think it’s important to start interrogating assumptions that underpin technology policy interventions so that policymakers have a better understanding of how their decisions affect whom they’re hoping to reach.

Ed: What did your study reveal, and in particular, where do you see the greatest differences in attitudes arising? Did it reveal anything unexpected?

boyd / Hargittai: The most significant take-away from our research is that there are significant demographic differences in parents’ concerns about young people. Some of the differences are not particularly surprising. For example, parents of children who have been exposed to pornography or violent content, or who have bullied or been bullied, show greater concern that this will happen to their child. Other factors may be more surprising. For example, we found significant racial and ethnic differences in how parents approach these topics. Black, Hispanic, and Asian parents are much more concerned about at least some online safety issues than White parents, even when controlling for socioeconomic factors and previous experiences.

While differences in cultural experiences may help explain some of these findings, our results raise serious questions as to the underlying processes and reasons for these discrepancies. Are these parents more concerned because they have a higher level of distrust for technology? Because they feel as though there are fewer societal protections for their children? Because they feel less empowered as parents? We don’t know. Still, our findings challenge policy-makers to think about the diversity of perspectives their law-making should address. And when they enact laws, they should be attentive to how those interventions are received. Just because parents of colour are more concerned does not mean that an intervention intended to empower them will do so. Like many other research projects, this study results in as many — if not more — questions than it answers.

Ed: Are parents worrying about the right things? For example, you point out that ‘stranger danger’ registers the highest level of concern from most parents, yet this is a relatively rare occurrence. Bullying is much more common, yet not such a source of concern. Do we need to do more to educate parents about risks, opportunities and coping?

boyd / Hargittai: Parental fear is a contested issue among scholars and for good reason. In many ways, it’s a philosophical issue. Should parents worry more about frequent but low-consequence issues? Or should they concern themselves more with the possibility of rare but devastating incidents? How much fear is too much fear? Fear is an understandable response to danger, but left unchecked, it can become an irrational response to perceived but unlikely risks. Fear can prevent injury, but too much fear can result in a form of protectionism that itself can be harmful. Most parents want to protect their children from harm but few think about the consequences of smothering their children in their efforts to keep them safe. All too often, in erring on the side of caution, we escalate a societal tendency to become overprotective, limiting our children’s opportunities to explore, learn, be creative and mature. Finding the right balance is very tricky.

People tend to fear things that they don’t understand. New technologies are often terrifying because they are foreign. And so parents are reasonably concerned when they see their children using tools that confound them. One of the best antidotes to fear is knowledge. Although this is outside of the scope of this paper, we strongly recommend that parents take the time to learn about the tools that their children are using, ideally by discussing them with their children. The more that parents can understand the technological choices and decisions made by their children, the more that parents can help them navigate the risks and challenges that they do face, online and off.

Ed: On the whole, it seems that parents whose children have had negative experiences online are more likely to say they are concerned, which seems highly appropriate. But we also have evidence from other studies that many parents are unaware of such experiences, and also that children who are more vulnerable offline, may be more vulnerable online too. Is there anything in your research to suggest that certain groups of parents aren’t worrying enough?

boyd / Hargittai: As researchers, we regularly use different methodologies and different analytical angles to get at various research questions. Each approach has its strengths and weaknesses, insights and blind spots. In this project, we surveyed parents, which allows us to get at their perspective, but it limits our ability to understand what they do not know or will not admit. Over the course of our careers, we’ve also surveyed and interviewed numerous youth and young adults, parents and other adults who’ve worked with youth. In particular, danah has spent a lot of time working with at-risk youth who are especially vulnerable. Unfortunately, what she’s learned in the process — and what numerous survey studies have shown — is that those who are facing some of the most negative experiences do not necessarily have positive home life experiences. Many youth face parents who are absent, addicts, or abusive; these are the youth who are most likely to be physically, psychologically, or socially harmed, online and offline.

In this study, we took parents at face value, assuming that parents are good actors with positive intentions. It is important to recognise, however, that this cannot be taken for granted. As with all studies, our findings are limited because of the methodological approach we took. We have no way of knowing whether or not these parents are paying attention, let alone whether or not their relationship to their children is unhealthy.

Although the issues of abuse and neglect are outside of the scope of this particular paper, these have significant policy implications. Empowering well-intended parents is generally a good thing, but empowering abusive parents can create unintended consequences for youth. This is an area where much more research is needed because it’s important to understand when and how empowering parents can actually put youth at risk in different ways.

Ed: What gaps remain in our understanding of parental attitudes towards online risks?

boyd / Hargittai: As noted above, our paper assumes well-intentioned parenting on behalf of caretakers. A study could explore online attitudes in the context of more information about people’s general parenting practices. Regarding our findings about attitudinal differences by race and ethnicity, much remains to be done. While existing literature alludes to some reasons as to why we might observe these variations, it would be helpful to see additional research aiming to uncover the sources of these discrepancies. It would be fruitful to gain a better understanding of what influences parental attitudes about children’s use of technology in the first place. What role do mainstream media, parents’ own experiences with technology, their personal networks, and other factors play in this process?

Another line of inquiry could explore how parental concerns influence rules aimed at children about technology uses and how such rules affect youth adoption and use of digital media. The latter is a question that Eszter is addressing in a forthcoming paper with Sabrina Connell, although that study does not include data on parental attitudes, only rules. Including details about parental concerns in future studies would allow more nuanced investigation of the above questions. Finally, much is needed to understand the impact that policy interventions in this space have on parents, youth, and communities. Even the most well-intentioned policy may inadvertently cause harm. It is important that all policy interventions are monitored and assessed as to both their efficacy and secondary effects.


Read the full paper: boyd, d., and Hargittai, E. (2013) Connected and Concerned: Exploring Variation in Parental Concerns About Online Safety Issues. Policy and Internet 5 (3).

danah boyd and Eszter Hargittai were talking to blog editor David Sutcliffe.

How effective is online blocking of illegal child sexual content? https://ensr.oii.ox.ac.uk/how-effective-is-online-blocking-of-illegal-child-sexual-content/ Fri, 28 Jun 2013 09:30:18 +0000

Anonymous Belgium
The recent announcement by ‘Anonymous Belgium’ (above) that they would ‘liberate the Belgian Web’ on 15 July 2013 in response to blocking of websites by the Belgian government was revealed to be a promotional stunt by a commercial law firm wanting to protest non-transparent blocking of online content.

Ed: European legislation introduced in 2011 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavour to obtain the removal of such websites hosted outside; leaving open the option to block access by users within their own territory. What is problematic about this blocking?

Authors: From a technical point of view, all possible blocking methods that could be used by Member States are ineffective, as they can all be circumvented very easily. Widely available technologies (like encryption or proxy servers) or tiny changes in computer configuration (for instance, the choice of DNS server), which may equally be used to improve performance, security, or privacy, enable circumvention of blocking methods. Another problem arises from the fact that this legislation only targets website content, while offenders often use other technologies such as peer-to-peer systems, newsgroups, or email.
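To illustrate how low the barrier is in the DNS case, the sketch below (ours, illustrative; the resolver address in the comment is simply a well-known public one) builds a standard DNS query using nothing but the Python standard library. DNS-based blocking only works if the query is answered by the ISP’s filtered resolver; directing the very same packet at any other resolver is a one-line, client-side choice:

```python
import struct

def build_dns_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query (RFC 1035 wire format) for an A record."""
    # Header: ID, flags (RD=1, i.e. 'please recurse'), 1 question,
    # 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; the name ends with a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    )
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# The blocking decision lives entirely in which resolver receives this packet.
# Swapping the ISP's filtered resolver for an unfiltered one is a single
# client-side line, e.g.:
#   sock.sendto(build_dns_query("example.org"), ("8.8.8.8", 53))
```

Nothing in the packet itself changes; only its destination does, which is why filtering at the resolver offers so little resistance to a motivated user.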

Ed: Many of these blocking activities stem from European efforts to combat child pornography, but you suggest that child protection may be used as a way to add other types of content to lists of blocked sites – notably those that purportedly violate copyright. Can you explain how this “mission creep” is occurring, and what the risks are?

Authors: Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children. Blocking measures are usually advocated on the basis of the argument that access to these images must be prevented, hence avoiding that users stumble upon child pornography inadvertently. Whereas this seems reasonable with regard to this particular type of content, in some countries governments increasingly use blocking mechanisms for other ‘illegal’ content, such as gambling or copyright-infringing content, often in a very non-transparent way, without clear or established procedures.

It is, in our view, especially important at a time when governments do not hesitate to carry out secret online surveillance of citizens without any transparency or accountability, that any interference with online content must be clearly prescribed by law, have a legitimate aim and, most importantly, be proportional and not go beyond what is necessary to achieve that aim. In addition, the role of private actors, such as ISPs, search engine companies or social networks, must be very carefully considered. It must be clear that decisions about which content or behaviours are illegal and/or harmful must be taken or at least be surveyed by the judicial power in a democratic society.

Ed: You suggest that removal of websites at their source (mostly in the US and Canada) is a more effective means of stopping the distribution of child pornography — but that European law enforcement has often been insufficiently committed to such action. Why is this? And how easy are cross-jurisdictional efforts to tackle this sort of content?

Authors: The blocking of websites, although of questionable effectiveness in actually making content inaccessible, is a quick way to be seen to take action against the appearance of unwanted material on the Internet. The removal of content, on the other hand, requires identifying not only those responsible for hosting the content but, more importantly, the actual perpetrators. This is of course a more intrusive and lengthy process, for which law enforcement agencies currently lack resources.

Moreover, these agencies may indeed run into obstacles of territorial jurisdiction and the difficulties of international cooperation. However, prioritising and investing in the actual removal of content, even where it is not feasible in every circumstance, will ensure that child sexual abuse images do not continue to circulate, and hence that the risk of repeated re-victimisation of abused children is reduced.


Read the full paper: Karel Demeyer, Eva Lievens and Jos Dumortier (2012) Blocking and Removing Illegal Child Sexual Content: Analysis from a Technical and Legal Perspective. Policy and Internet 4 (3-4).

Karel Demeyer, Eva Lievens and Jos Dumortier were talking to blog editor Heather Ford.

Uncovering the structure of online child exploitation networks https://ensr.oii.ox.ac.uk/uncovering-the-structure-of-online-child-exploitation-networks/ https://ensr.oii.ox.ac.uk/uncovering-the-structure-of-online-child-exploitation-networks/#comments Thu, 07 Feb 2013 10:11:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=661 The Internet has provided the social, individual, and technological circumstances needed for child pornography to flourish. Sex offenders have been able to utilize the Internet to disseminate child pornographic content, to network with other pedophiles through chatrooms and newsgroups, and to communicate sexually with children. A 2009 United Nations report estimates that there are more than four million websites containing child pornography, with 35 percent of them depicting serious sexual assault [1]. Even if this report or others exaggerate the true prevalence of such websites by a wide margin, the fact remains that they are pervasive on the Web.

Despite large investments of law enforcement resources, online child exploitation is nowhere near under control, and while there are numerous technological products to aid in finding child pornography online, they still require substantial human intervention. Nevertheless, steps can be taken to automate more of these searches, reducing the amount of content police officers have to examine and increasing the time they can spend investigating individuals.

While law enforcement agencies will aim for maximum disruption of online child exploitation networks by targeting the most connected players, there is a general lack of research on the structural nature of these networks; something we aimed to address in our study, by developing a method to extract child exploitation networks, map their structure, and analyze their content. Our custom-written Child Exploitation Network Extractor (CENE) automatically crawls the Web from a user-specified seed page, collecting information about the pages it visits by recursively following the links out of the page; the result of the crawl is a network structure containing information about the content of the websites, and the linkages between them [2].

We chose ten websites as starting points for the crawls; four were selected from a list of known child pornography websites while the other six were selected and verified through Google searches using child pornography search terms. To guide the network extraction process we defined a set of 63 keywords, which included words commonly used by the Royal Canadian Mounted Police to find illegal content; most of them code words used by pedophiles. Websites included in the analysis had to contain at least seven of the 63 unique keywords on a given web page; manual verification showed that seven keywords distinguished well between child exploitation web pages and regular web pages. Ten sports networks were analyzed as a control.
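The crawl-and-filter logic described above can be sketched in a few lines. This is an illustrative sketch only, not the CENE implementation: the function names, the injected `fetch(url)` interface (returning page text and outgoing links), and the placeholder keywords are all assumptions made for the example; only the breadth-first crawl and the seven-keyword threshold come from the description above.

```python
import re
from collections import deque

def keyword_hits(text, keywords):
    """Count how many distinct keywords from the list appear in the page text."""
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    return sum(1 for k in keywords if k in words)

def crawl(seed, fetch, keywords, threshold=7, max_pages=100):
    """Breadth-first crawl from a seed URL, retaining pages that match at
    least `threshold` distinct keywords. `fetch(url)` is assumed to return
    (page_text, outgoing_links). Links are followed recursively only from
    retained pages; the result is an adjacency dict over those pages."""
    network, queue, seen = {}, deque([seed]), {seed}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        text, links = fetch(url)
        if keyword_hits(text, keywords) >= threshold:
            network[url] = list(links)
            for link in links:
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
    return network
```

Injecting `fetch` rather than hard-wiring an HTTP client keeps the extraction logic testable offline; a real crawler would substitute a rate-limited downloader.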

The web crawler proved able to identify child exploitation websites reliably, with a clear difference found between the hardcore content hosted by child exploitation and non-child exploitation websites. Our results further suggest that a ‘network capital’ measure — which takes into account network connectivity, as well as severity of content — could aid in identifying the key players within online child exploitation networks. These websites are the main concern of law enforcement agencies, making the web crawler a time saving tool in target prioritization exercises. Interestingly, while one might assume that website owners would find ways to avoid detection by a web crawler of the type we have used, these websites — despite the fact that much of the content is illegal — turned out to be easy to find. This fits with previous research that has found that only 20-25 percent of online child pornography arrestees used sophisticated tools for hiding illegal content [3,4].
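A measure combining connectivity with content severity might be sketched as follows. The published ‘network capital’ formula is in the paper [2] and is not reproduced here; the weighting scheme, the degree normalisation, and the `alpha` parameter below are hypothetical choices made purely for illustration.

```python
def network_capital(network, severity, alpha=0.5):
    """Rank sites by an illustrative score mixing link connectivity with a
    per-site content-severity rating in [0, 1]. `network` is an adjacency
    dict; `severity` maps site -> rating. The convex weighting via `alpha`
    is a hypothetical stand-in for the paper's actual measure."""
    # Total degree: out-links from each crawled page plus in-links to it.
    degree = {u: len(vs) for u, vs in network.items()}
    for vs in network.values():
        for v in vs:
            degree[v] = degree.get(v, 0) + 1
    max_degree = max(degree.values()) or 1  # avoid division by zero
    return sorted(
        ((alpha * degree[u] / max_degree + (1 - alpha) * severity.get(u, 0.0), u)
         for u in degree),
        reverse=True,
    )
```

Ranking by such a score is what would let investigators prioritise the most connected, most severe sites first rather than triaging pages in crawl order.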

As mentioned earlier, the huge amount of content found on the Internet means that the likelihood of eradicating the problem of online child exploitation is nil. As the decentralized nature of the Internet makes combating child exploitation difficult, it becomes more important to introduce new methods to address this. Social network analysis measurements, in general, can be of great assistance to law enforcement investigating all forms of online crime—including online child exploitation. By creating a web crawler that reduces the number of hours officers need to spend examining possible child pornography websites and determining whom to target, we believe we have identified a method to maximize the current efforts of law enforcement. An automated process has the added benefit of helping to keep officers in the department longer, as they would not be subjected to as much traumatic content.

There are still areas for further research; the first step being to further refine the web crawler. Despite being a considerable improvement over a manual analysis of 300,000 web pages, it could be improved to allow efficient analysis of larger networks, bringing us closer to the true size of the full online child exploitation network and, we expect, to some of the more hidden (e.g., password/membership protected) websites. This does not negate the value of researching publicly accessible websites, given that they are likely the starting points for most individuals.

Much law enforcement effort to date has focused on investigating images, primarily because databases of hash values (used to authenticate the content) exist for images but not for videos. Our web crawler did not distinguish between image content, but utilizing known hash values would help improve the validity of our severity measurement. Although it would be naïve to suggest that online child exploitation can be completely eradicated, the sorts of social network analysis methods described in our study provide a means of understanding the structure (and therefore key vulnerabilities) of online networks; in turn, greatly improving the effectiveness of law enforcement.
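The hash-matching the authors allude to is, at its simplest, a set-membership lookup: fingerprint a file and check it against a database of hashes of known material. A minimal sketch using SHA-256 is below; note that a cryptographic hash only matches byte-identical files, so production systems also use perceptual hashing (e.g. Microsoft's PhotoDNA) to catch resized or re-encoded copies. The function names and the in-memory set standing in for a hash database are assumptions for the example.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Cryptographic fingerprint (SHA-256) of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def match_known(data: bytes, known_hashes: set) -> bool:
    """Check a file against a database of hashes of known illegal images.
    A cryptographic hash matches only byte-identical files; real systems
    layer perceptual hashes on top to survive re-encoding."""
    return file_digest(data) in known_hashes
```

Because the lookup is exact, it can automatically confirm severity for previously catalogued images without an officer viewing them — which is precisely the validity improvement for the severity measure suggested above.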

[1] Engeler, E. 2009. September 16. UN Expert: Child Porn on Internet Increases. The Associated Press.

[2] Westlake, B.G., Bouchard, M., and Frank, R. 2012. Finding the Key Players in Online Child Exploitation Networks. Policy and Internet 3 (2).

[3] Carr, J. 2004. Child Abuse, Child Pornography and the Internet. London: NCH.

[4] Wolak, J., D. Finkelhor, and K.J. Mitchell. 2005. “Child Pornography Possessors Arrested in Internet-Related Crimes: Findings from the National Juvenile Online Victimization Study (NCMEC 06–05–023).” Alexandria, VA: National Center for Missing and Exploited Children.


Read the full paper: Westlake, B.G., Bouchard, M., and Frank, R. 2012. Finding the Key Players in Online Child Exploitation Networks. Policy and Internet 3 (2).
