The Internet, Policy & Politics Conferences

Oxford Internet Institute, University of Oxford

Katherina Drinkuth: Developing a regulatory approach to shift the scales between user autonomy and algorithmic power on social media platforms

Katherina Drinkuth, Queensland University of Technology, Brisbane, Australia

This paper suggests a novel regulatory approach to rebalancing the political tensions surrounding the design and deployment of algorithms on social media platforms, allowing for greater user autonomy. Using an interdisciplinary approach, I examine controversies surrounding online self-representation and visibility. Drawing on the Hegelian classification of visibility as the core tenet of the human experience, authority over visibility is established as a precondition for user autonomy. The effects of visibility are themselves ambivalent and interrelated with circumstance and affordances. Online, many of these affordances are set by the platforms mediating our visibility, including through the deployment of algorithms (Beer, 2009). A deeper understanding of the controversies playing out between social media users and platform algorithms will highlight diverse and bidirectional interdependencies and interactions, and illustrate how algorithms operate in contested environments. These insights offer new potential for regulatory interventions which reinforce the user’s position in these controversies. Ultimately, these interventions will increase user influence on algorithmic advancement and strengthen user autonomy.

For the purpose of this paper, visibility is defined both as a matter of representation and straightforward presentation and as a means through which social relationships are stabilised. Moreover, visibility pertains to the processes of subjectification, the construction of subjects in the social world (Brighenti, 2010). Visibility is therefore key to identity, and authority over visibility is an indispensable part of (user) autonomy. Visibility also contains a strategic aspect in that it can be, and indeed is, manipulated by subjects themselves in order to obtain real social effects. Controversies around user-algorithm interactions on social media reflect these different aspects of visibility. From a social point of view, what matters are the ways in which, and the processes through which, subjects come to be ‘visibilised’ or ‘invisibilised’ and different visibility regimes are created. When the frictions of what can or cannot be seen are diverted to, or assumed by, a third party, visibility itself becomes mediated. Like the mass media before it, online social media is built around specific parameters of visibility and, through the internet’s medium specificity, these parameters are dependent on software (Thompson, 2005). The mediation of visibility has thus become a key component of platform politics, asking us ‘to rethink regimes of visibility that hinge on and operate through algorithmic architectures’ (Bucher, 2012).

Recent scholarship has highlighted that users’ ability to effectively affect their online representation is largely shaped by the algorithms that construct visibility, contingent on the platform on which they operate (cf. Thompson, 2005; Gillespie, 2010; Hull, 2011). Social media algorithms predetermine what and who we see, how we interact with content and persons and, ultimately, what we reveal about ourselves and to whom. Users too often remain unaware of the workings and the biases of the program in which they are participating (Morozov, 2013). However, this does not mean that algorithms are therefore automatically beyond the realm of discussion, questioning, or demands for change. Rather, algorithms are ‘perennially open to user suspicion that their criteria skew to the provider’s commercial or political benefit, or incorporate embedded, unexamined assumptions that act below the level of awareness, even that of the designer’ (Gillespie, 2012). Even without concrete knowledge of the workings of the algorithm, users will often ‘reflexively play with algorithmic power to their own advantage’ (Gillespie, 2012), and algorithms indeed ‘react and reorganize themselves around the users’ engagements’ (Beer, 2009). Algorithms are constantly ‘called into being by, enlisted as part of, and negotiated around collective efforts to know and be known’ (Gillespie, 2014). It is important to understand algorithms as operating within a contested environment, and the dialectical (Giddens, 1984) relationship between users and algorithms as bringing forth new power relationships.

This paper suggests a responsive approach to regulation (Ayres & Braithwaite, 1992) to enable social groups to influence the development and deployment of social media algorithms. In particular, the potential of collibrative measures (Dunsire, 1993) is considered: in the context of platform politics, collibration can be understood as the purposeful manipulation of the tension between users and platform owners in the deployment of algorithms. However, instead of remediating existing oppositions or prescribing precast regulatory outcomes for any platform or algorithm, collibrative measures aim to adjust the starting positions of the stakeholders involved in any regulatory process. In this case, that means levelling the playing field on which users themselves (can) negotiate algorithmic advancement. Changing the predominant power balances in the environment in which algorithms are developed and deployed could allow the resulting algorithms to be better aligned with user requirements. I argue that collibration is a promising approach to increasing user autonomy over online visibility regimes, and this paper presents a first attempt to imagine effective collibrative interventions in this specific area of platform politics.

References

Ayres, I. & Braithwaite, J., (1992) Responsive Regulation: Transcending the Deregulation Debate. Oxford University Press.

Beer, D., (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002.

Brighenti, A. M., (2010) Visibility in Social Theory and Social Research. Palgrave Macmillan.

Bucher, T., (2012) Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 0(0), 1-17.

Dunsire, A., (1993) Manipulating Social Tensions: Collibration as an Alternative Mode of Government Intervention, MPIFG Discussion Paper 93/7. Retrieved from http://www.mpifg.de/pu/mpifg_dp/dp93-7.pdf.

Giddens, A., (1984) The constitution of society: outline of the theory of structuration. Polity Press.

Gillespie, T., (2010) The politics of ‘platforms’. New Media & Society, 12(3), 347-364.

Gillespie, T., (2012) Can an algorithm be wrong? Limn, 2. Retrieved from http://limn.it/can-an-algorithm-be-wrong/.

Gillespie, T., (2014) The relevance of algorithms. In: T. Gillespie, P. Boczkowski & K. Foot (Eds.) Media Technologies. MIT Press.

Hull, M., (2011) Facebook changes mean that you are not seeing everything that you should be seeing. Retrieved from http://www.facebook.com/notes/mark-hull/please-read-facebook-changes-mean-that-you-are-not-seeing-everything-that-you-sh/10150089908123789.

Morozov, E., (2013) To save everything, click here. Allen Lane.

Thompson, J.B., (2005) The new visibility. Theory, Culture & Society, 22(6), 31-51.

 
