Internet Filtering: Why It Doesn’t Really Help Protect Teens

There is equivocal to strong evidence that household-level Internet filtering does not reduce the chance of adolescents having recent aversive online experiences. Image: Paul Walsh / Flickr CC BY-NC-SA 2.0.

Young British teens (aged 12 to 15) spend nearly 19 hours a week online, raising concerns for parents, educators, and politicians about the negative experiences they may have there. Schools and libraries have long used Internet-filtering technologies as a means of mitigating the risks adolescents face online, and major ISPs in Britain now filter new household connections by default.

However, a new article by Andrew Przybylski and Victoria Nash, “Internet Filtering Technology and Aversive Online Experiences in Adolescents”, published in the Journal of Pediatrics, finds equivocal to strong evidence that household-level Internet filtering does not reduce the chance of adolescents having recent aversive online experiences. The authors analysed data from 1030 in-home interviews conducted with early adolescents as part of Ofcom’s Children and Parents Media Use and Attitudes Report.

The Internet is now a central fixture of modern life, and the positives and negatives of Internet use need to be balanced by caregivers. Internet filters have been adopted as a tool for limiting the negatives; however, evidence of their effectiveness is dubious. They are expensive to develop and maintain, and they also carry significant informational costs: even sophisticated filters over-block, which is onerous for those seeking information about sexual health, relationships, or identity, and might have a disproportionate effect on vulnerable groups. Striking the right balance between protecting adolescents and respecting their rights to freedom of expression and information presents a formidable challenge.

In conducting their study to address this uncertainty, the authors found convincing evidence that Internet filters were not effective at shielding early adolescents from aversive experiences online. Given this finding, they propose that evidence from randomized controlled trials and registered research designs is needed to determine how far Internet-filtering technology supports or thwarts young people online. Only then will parents and policymakers be able to make an informed decision as to whether the widespread use of filters justifies their costs.

We caught up with Andy and Vicki to discuss the implications of their study:

Ed.: Just this morning when working from home I tried to look up an article’s author and was blocked, Virgin Media presumably having decided he might be harmful to me. Where does this recent enthusiasm for default-filtering come from? Is it just that it’s a quick, uncomplicated (technological) fix, which I guess is what politicians / policy-people like?

Vicki: In many ways this is just a typical response to the sorts of moral panic which have long arisen around the possible risks of new technologies. We saw the same concerns arise with television in the 1960s, for example, and in that case the UK’s policy response was to introduce a ‘watershed’, a daily time after which content aimed at adults could be shown. I suppose I see filtering as filling the same sort of policy gap, namely recognising that certain types of content can be legally available but should not be served up ‘in front of the children’.

Andy: My reading of the psychological and developmental literature suggests that filters provide a way of creating a safe walled space in schools, libraries, and homes for young people to use the internet. This of course does not mean that reading our article will be harmful!

Ed.: I suppose that children desperate to explore won’t be stopped by a filter; those who aren’t curious probably wouldn’t encounter much anyway — what is the profile of the “child” and the “harm-filtering” scenario envisaged by policy-makers? And is Internet filtering basically just aiming at the (easy) middle of the bell-curve?

Vicki: This is a really important point. Sociologists recognised many years ago that the whole concept of childhood is socially constructed, but we often forget about this when it comes to making policy. There’s a tendency for politicians, for example, either to describe children as inherently innocent and vulnerable, or to frame them as expert ‘digital natives’, yet there’s plenty of academic research which demonstrates the extent to which children’s experiences of the Internet vary by age, education, income and skill level.

This matters because it suggests a ‘one-size-fits-all’ approach may fail. In the context of this paper, we specifically wanted to check whether children with the technical know-how to get around filters experienced more negative experiences online than those who were less tech-savvy. This is often assumed to be true, but interestingly, our analysis suggests this factor makes very little difference.

Ed.: In all these discussions and policy decisions: is there a tacit assumption that these children are all growing up in a healthy, supportive (“normal”) environment — or is there a recognition that many children will be growing up in attention-poor (perhaps abusive) environments and that maybe one blanket technical “solution” won’t fit everyone? Is there also an irony that the best protected children will already be protected, and the least protected, probably won’t be?

Andy: Yes, this is an ironic and somewhat tragic dynamic. Unfortunately, because the evidence base for filtering effectiveness is at such an early stage, it’s not possible to know which young people (if any) are more or less helped by filters. We need to know how effective filters are in general before moving on to identify the young people for whom they are more or less helpful. We would also need to be able to explicitly define what would constitute an ‘attention-poor’ environment.

Vicki: From my perspective, this does always serve as a useful reminder that there’s a good reason why policy-makers turn to universalistic interventions, namely that this is likely to be the only way of making a difference for the hardest-to-reach children whose carers might never act voluntarily. But admirable motives are no replacement for efficacy, so as Andy notes, it would make more sense to establish, first, that household Internet filtering is effective, and second, that it can be effective for this vulnerable group, before imposing default-on filters on all.

Ed.: With all this talk of potential “harm” to children posed by the Internet .. is there any sense of how much (specific) harm we’re talking about? And conversely .. any sense of the potential harms of over-blocking?

Vicki: No, you are right to see that the harms of Internet use are quite hard to pin down. The examples typically cited take the form of bullying or self-harm horror stories related to Internet use. The problem is that it’s often easier to gauge how many children have been exposed to certain risky experiences (e.g. viewing pornography) than to ascertain whether or how they were harmed by this. Policy in this area often abides by what’s known as ‘the precautionary principle’.

This means that if you lack clear evidence of harm but have good reason to suspect public harm is likely, then the burden of proof falls on those who claim it is not. As a result, policies aimed at protecting children are, in many contexts, conservative, and rightly so. But it also means that it’s important to reconsider policies in the light of new evidence as it comes along. In this case we found that there is not as much evidence that Internet filters are effective at preventing exposure to negative experiences online as might be hoped.

Ed.: Stupid question: do these filters just filter “websites”, or do they filter social media posts as well? I would have thought young teens would be more likely to find or share stuff on social media (i.e. mobile) than “on a website”?

Andy: My understanding is that there are continually updated ‘lists’ of websites that contain certain kinds of content, such as pornography, piracy, gambling, or drug use (see this list on Wikipedia, for example); the categories covered vary by UK ISP.

Vicki: But it’s not quite true to say that household filtering packages don’t block social media. Some of the filtering options offered by the UK’s ‘Big 4’ ISPs enable parents and carers to block social media sites for ‘homework time’ for example. A bigger issue though, is that much of children’s Internet use now takes place outside the home. So, household-level filters can only go so far. And whilst schools and libraries usually filter content, public wifi or wifi in friends’ houses may not, and content can be easily exchanged directly between kids’ devices via Bluetooth or messaging apps.
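
To make the mechanism concrete: neither the paper nor this interview specifies how any particular ISP implements its filter, but the category blocklists Andy describes, combined with the per-household ‘homework time’ scheduling Vicki mentions, might look roughly like the minimal, hypothetical sketch below. All domains, categories, and times in it are placeholders, not real blocklist entries.

```python
# Purely illustrative sketch (not any real ISP's implementation) of
# category-based household filtering: per-category domain blocklists
# plus a household profile saying which categories to block, and when.
# All domains, categories, and times here are hypothetical placeholders.
from datetime import time
from urllib.parse import urlparse

BLOCKLISTS = {
    "pornography": {"example-adult.com"},
    "gambling": {"example-bets.com"},
    "social_media": {"example-social.com"},
}

HOUSEHOLD_PROFILE = {
    "blocked_categories": {"pornography", "gambling"},
    # e.g. block social media only during "homework time"
    "timed_blocks": {"social_media": (time(16, 0), time(19, 0))},
}

def is_blocked(url: str, now: time) -> bool:
    """Return True if the household profile blocks this URL at this time."""
    host = urlparse(url).hostname or ""
    for category, domains in BLOCKLISTS.items():
        if host not in domains:
            continue
        if category in HOUSEHOLD_PROFILE["blocked_categories"]:
            return True
        window = HOUSEHOLD_PROFILE["timed_blocks"].get(category)
        if window and window[0] <= now <= window[1]:
            return True
    return False

print(is_blocked("https://example-social.com/feed", time(17, 30)))  # True
print(is_blocked("https://example-social.com/feed", time(21, 0)))   # False
```

Over-blocking arises precisely because such lists are coarse: a sexual-health or identity-support site misclassified into the ‘pornography’ list would be refused in exactly the same way.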

Ed.: Do these blocked sites (like the webpage of that journal author I was trying to access) get notified that they have been blocked and have a chance to appeal? Would a simple solution to over-blocking simply be to allow (e.g. sexual health, gender-issue, minority, etc.) sites to request that they be whitelisted, or to apply for some “approved” certification?

Vicki: I don’t believe so. There are whitelisted sites; indeed, that was a key outcome of an early inquiry into ‘over-blocking’ by the UK Council for Child Internet Safety. But for this to be a sufficient response, all sites and apps that are subject to filtering would need to be notified, to allow for possible appeal. The Open Rights Group provide a tool that allows site owners to check the availability of their sites, but there is no official process for seeking whitelisting or appeal.

Ed.: And what about age verification as an alternative? (however that is achieved / validated), i.e. restricting content before it is indexed, rather than after?

Andy: To evaluate this we would need to conduct a randomised controlled trial where we tested how the application of age verification for different households, selected at random, would relate (or not) to young people encountering potentially aversive content online.

Vicki: But even if such a study could prove that age verification tools were effective in restricting access to underage Internet users, it’s not clear this would be a desirable scenario. It makes most sense for content that is illegal to access below a certain age, such as online gambling or pornography. But if content is age-gated without legal requirement, then it could prove a very restrictive tool, removing the possibility of parental discretion and failing to make any allowances for the sorts of differences in ability or maturity between children that I pointed out at the beginning.
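
To make the proposal concrete, the sketch below is a toy illustration (not the authors’ design) of the kind of household-level randomised trial Andy describes: it randomly assigns simulated households to filtering on or off and compares the share reporting a recent aversive experience in each arm. Every number in it is made up for illustration.

```python
# Toy sketch of the household-level randomised controlled trial described
# above (not the authors' design): assign households at random to
# "filter on" vs "filter off", then compare the proportion of adolescents
# reporting a recent aversive online experience in each arm.
# All data below are simulated, purely for illustration.
import random

random.seed(0)

N = 1000
households = []
for i in range(N):
    filter_on = random.random() < 0.5  # random assignment to arms
    # Simulated outcome with a ~20% base rate; in this toy world the
    # filter is assumed to make no difference, so both arms share it.
    aversive = random.random() < 0.20
    households.append({"id": i, "filter_on": filter_on, "aversive": aversive})

def rate(group):
    return sum(h["aversive"] for h in group) / len(group)

treated = [h for h in households if h["filter_on"]]
control = [h for h in households if not h["filter_on"]]
print(f"filter on  (n={len(treated)}): {rate(treated):.3f}")
print(f"filter off (n={len(control)}): {rate(control):.3f}")
```

A real trial would also need preregistration and a sample large enough to detect a plausibly small effect, which is part of why the authors call for registered research designs.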

Ed.: Similar to the arguments over Google making content-blocking decisions (e.g. over the “right to be forgotten”): are these filtering decisions left to the discretion of ISPs / the market / the software providers, or to some government department / NGO? Who’s ultimately in charge of who sees what?

Vicki: Obviously when it comes to content that is illegal for children or adults to access, broad decisions about the delineation of what is illegal fall to governments and are then interpreted and applied by private companies. But when it comes to material that is not illegal, but just deemed harmful or undesirable, ISPs and social media platforms are left to decide for themselves how to draw the boundaries and then how to apply their own policies. This increasing self-regulatory role for what Jonathan Zittrain has called ‘private sheriffs’ is often seen as a flexible and appropriate response, but it does bring reduced accountability and transparency.

Ed.: I guess it’s ironic, with all this attention paid to children, that we now find ourselves in an information environment where maybe we should be filtering out (fake) content for adults as well (joke..). But seriously: with all these issues around content, is your instinct that we should be using technical fixes (filtering, removing from indexes, etc.) or trying to build reflexivity, literacy, and resilience in users (i.e. coping strategies)? Or both? Both are difficult.

Andy: It is as ironic as it is tragic. When I talk to parents (both Vicki and I are parents) I hear that they have been let down by the existing advice, which often amounts to little more than ‘turn it off’. Their struggles have nuance (e.g. how do I know who is in my child’s WhatsApp groups? Is Snapchat OK if they’re just using it amongst best friends?) and whilst the broad general advice gets heard, this more detailed information and support is hard for parents to find.

Vicki: I agree. But I think it’s inevitable that we’ll always need a combination of tools to deal with the incredible array of content that develops online. No technical tool will ever be 100% reliable in blocking content we don’t want to see, and we need to know how to deal with whatever gets through. That certainly means having a greater social and political focus on education but also a willingness to consider that building resilience may mean exposure to risk, which is hard for some groups to accept.

Every element of our strategy should be underpinned by whatever evidence is available. Ultimately, we also need to stop thinking about these problems as technology problems: fake news is as much a feature of increasing political extremism and alienation as online pornography is a feature of a heavily sexualised mainstream culture. And we can be certain: neither of these broader social trends will be resolved by simple efforts to block out what we don’t wish to see.

Read the full article: Przybylski, A. and Nash, V. (2017) Internet Filtering Technology and Aversive Online Experiences in Adolescents. Journal of Pediatrics. DOI: http://dx.doi.org/10.1016/j.jpeds.2017.01.063


Andy Przybylski and Vicki Nash were talking to blog editor David Sutcliffe.