Luciano Floridi – The Policy and Internet Blog (https://ensr.oii.ox.ac.uk)

Our knowledge of how automated agents interact is rather poor (and that could be a problem)
https://ensr.oii.ox.ac.uk/our-knowledge-of-how-automated-agents-interact-is-rather-poor-and-that-could-be-a-problem/
Wed, 14 Jun 2017

Recent years have seen a huge increase in the number of bots online — including search engine Web crawlers, online customer service chat bots, social media spambots, and content-editing bots in online collaborative communities like Wikipedia. (Bots are important contributors to Wikipedia, completing about 15% of all Wikipedia edits in 2014 overall, and more than 50% in certain language editions.)

While the online world has turned into an ecosystem of bots (by which we mean computer scripts that automatically handle repetitive and mundane tasks), our knowledge of how these automated agents interact with each other is rather poor. Since bots are automata without the capacity for emotions, meaning-making, creativity, or sociality, we might expect their interactions to be relatively predictable and uneventful.

In their PLOS ONE article “Even good bots fight: The case of Wikipedia”, Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri analyze the interactions between bots that edit articles on Wikipedia. They track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction outcomes. Although Wikipedia bots are intended to support the encyclopaedia — identifying and undoing vandalism, enforcing bans, checking spelling, creating inter-language links, importing content automatically, mining data, identifying copyright violations, greeting newcomers, etc. — the authors find they often undid each other’s edits, with these sterile “fights” sometimes continuing for years.
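
The pair-tracking approach described above can be sketched in outline. The following is a minimal illustration, not the authors' actual pipeline: it assumes a simplified edit log in which each revert already records which bot undid which (the study itself identifies reverts from the full Wikipedia edit history), and the bot names and records are invented:

```python
from collections import defaultdict

# Hypothetical, simplified revert log: each entry records the article,
# the bot that performed the revert, and the bot whose edit was undone.
reverts = [
    {"article": "Danube", "reverter": "BotA", "reverted": "BotB"},
    {"article": "Danube", "reverter": "BotB", "reverted": "BotA"},
    {"article": "Nile",   "reverter": "BotA", "reverted": "BotC"},
]

def revert_pairs(log):
    """Count reverts per ordered (reverter, reverted) bot pair."""
    counts = defaultdict(int)
    for r in log:
        counts[(r["reverter"], r["reverted"])] += 1
    return counts

def mutual_fights(counts):
    """Unordered pairs in which each bot has reverted the other."""
    return {frozenset(pair) for pair in counts if (pair[1], pair[0]) in counts}

counts = revert_pairs(reverts)
print(mutual_fights(counts))  # {frozenset({'BotA', 'BotB'})}
```

In this toy log, BotA and BotC never form a "fight" because the reverting only goes one way; the reciprocal BotA/BotB pair is the kind of interaction the paper examines over time.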

They suggest that even relatively “dumb” bots may give rise to complex interactions, carrying important implications for Artificial Intelligence research. Understanding these bot-bot interactions will be crucial for managing social media, providing adequate cyber-security, and designing autonomous vehicles (that don’t crash).

We caught up with Taha Yasseri and Luciano Floridi to discuss the implications of the findings:

Ed.: Is there any particular difference between the way individual bots interact (and maybe get bogged down in conflict), and lines of vast and complex code interacting badly, or having unforeseen results (e.g. flash-crashes in automated trading): i.e. is this just (another) example of us not always being able to anticipate how code interacts in the wild?

Taha: There are similarities and differences. The most notable difference is that here the bots are not competing. They all operate according to the same rules and, more importantly, towards the same goal: to increase the quality of the encyclopaedia. Given these features, the rather antagonistic interactions between the bots come as a surprise.

Ed.: Wikipedia have said that they know about it, and that it’s a minor problem: but I suppose Wikipedia presents a nice, open, benevolent system to make a start on examining and understanding bot interactions. What other bot-systems are you aware of, or that you could have looked at?

Taha: In terms of content-generating bots, Twitter bots have turned out to be very important for online propaganda. Crawler bots that collect information from social media or the web (such as personal information or email addresses) are also being heavily deployed. In fact, we have come up with a first typology of Internet bots based on their type of action and their intentions (benevolent vs. malevolent), which is presented in the article.

Ed.: You’ve also done work on human collaborations (e.g. in the citizen science projects of the Zooniverse) — is there any work comparing human collaborations with bot collaborations — or even examining human-bot collaborations and interactions?

Taha: In the present work we do compare bot-bot interactions with human-human interactions to observe similarities and differences. The most striking difference is in the dynamics of negative interactions. While human conflicts heat up very quickly and then disappear after a while, bots undoing each other’s contributions come as a steady flow that might persist over years. In the HUMANE project, we discuss the co-existence of humans and machines in the digital world from a theoretical point of view, and there we discuss such ecosystems in detail.
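
The contrast Taha draws — a steady flow of bot reverts versus human conflicts that flare up and die down — can be quantified, for instance, with the Goh–Barabási burstiness parameter of the inter-event times between successive reverts. The sketch below is illustrative only; the timestamps are invented, not data from the study:

```python
from statistics import mean, pstdev

def burstiness(timestamps):
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu) of the
    inter-event times: B = -1 for a perfectly regular (steady) sequence,
    B approaching +1 for extremely bursty activity."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mu, sigma = mean(gaps), pstdev(gaps)
    return (sigma - mu) / (sigma + mu)

# Invented revert times (in days): a steady, roughly monthly bot-bot
# flow versus a human conflict with one early flare-up and one late one.
bot_reverts = [0, 30, 61, 90, 121, 150, 181]
human_reverts = [0, 1, 2, 3, 4, 400, 401, 402, 403]

print(burstiness(bot_reverts))    # near -1: steady flow
print(burstiness(human_reverts))  # positive: bursty
```

A strongly negative value for the bot sequence and a positive value for the human one would reflect exactly the qualitative difference described in the interview.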

Ed.: Humans obviously interact badly fairly often (despite being a social species). Why should we be particularly worried about how bots interact with each other, given that humans seem to expect and cope with social inefficiency, annoyance, conflict and breakdown? Isn’t this just more of the same?

Luciano: The fact that bots can be as bad as humans is far from reassuring. That this happens even when they are programmed to collaborate is more disconcerting than what happens among humans, who compete with or fight each other. Here, very elementary mechanisms generate messy and conflictual outcomes through simple interactions. One may hope this is not evidence of what may happen when more complex systems and interactions are in question. The lesson I have learnt from all this is that without rules, or some kind of normative framework that promotes collaboration, not even good mechanisms ensure a good outcome.

Read the full article: Tsvetkova, M., García-Gavilanes, R., Floridi, L. and Yasseri, T. (2017) Even good bots fight: The case of Wikipedia. PLoS ONE 12(2): e0171774. doi:10.1371/journal.pone.0171774


Taha Yasseri and Luciano Floridi were talking to blog editor David Sutcliffe.

The Future of Europe is Science — and ethical foresight should be a priority
https://ensr.oii.ox.ac.uk/the-future-of-europe-is-science/
Thu, 20 Nov 2014

On October 6 and 7, the European Commission, with the participation of Portuguese authorities and the support of the Champalimaud Foundation, organised in Lisbon a high-level conference on “The Future of Europe is Science”. Mr. Barroso, President of the European Commission, opened the meeting. I had the honour of giving one of the keynote addresses.

The explicit goal of the conference was twofold. On the one hand, we tried to take stock of European achievements in science, engineering, technology and innovation (SETI) during the last 10 years. On the other hand, we looked into potential future opportunities that SETI may bring to Europe, both in economic terms (growth, jobs, new business opportunities) and in terms of wellbeing (individual welfare and higher social standards).

One of the most interesting aspects of the meeting was the presentation of the latest report on “The Future of Europe is Science” by the President’s Science and Technology Advisory Council (STAC). The report addresses some very big questions: How will we keep healthy? How will we live, learn, work and interact in the future? How will we produce and consume and how will we manage resources? It also seeks to outline some key challenges that will be faced by Europe over the next 15 years. It is well written, clear, evidence-based and convincing. I recommend reading it. In what follows, I wish to highlight three of its features that I find particularly significant.

First, it is enormously refreshing and reassuring to see that the report treats science and technology as equally important and intertwined. The report takes this for granted, but anyone stuck in some Greek dichotomy between knowledge (episteme, science) and mere technique (techne, technology) will be astonished. While this divorcing of the two has always been a bad idea, it is still popular in contexts where applied science, e.g. applied physics or engineering, is considered a Cinderella. During my talk, I referred to Galileo as a paradigmatic scientist who had to be innovative in terms of both theories and instruments.

Today, technology is the outcome of innovative science and there is almost no science that is independent of technology, in terms of reliance on digital data and processing or (and this is often an inclusive or) in terms of investigations devoted to digital phenomena, e.g. in the social sciences. Of course, some Fields Medallists may not need computers to work, and may not work on computational issues, but they represent an exception. This year, Hiroshi Amano, Shuji Nakamura and Isamu Akasaki won the Nobel in physics “for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”. Last year, François Englert and Peter Higgs were awarded the Nobel in physics “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”. Without the technologically sophisticated work done at CERN, their theoretical discovery would have remained unsupported. The hope is that universities, research institutions, R&D centres as well as national research agencies will follow the approach espoused by STAC and think strategically in terms of technoscience.

The second point concerns some interesting statistics. The report uses several sources—especially the 2014 Eurobarometer survey of “Public perception of science, research and innovation”—to analyse and advise about the top priorities for SETI over the next 15 years, as identified by EU respondents. The picture that emerges is an ageing population worried, first of all, about its health, then about its children’s jobs, and only after that about the environment: 55 % of respondents identified “health and medical care” as among what they thought should be the main priorities for science and technological development over the next 15 years; 49 % opted for “job creation”; 33 % privileged “education and skills”. So we spent most of the meeting in Lisbon discussing these three areas. Other top priorities include “protection of the environment” (30 %), “energy supply” (25 %) and the “fight against climate change” (22 %).

So far so predictable, although it is disappointing to see such low concern about the environment, a clear sign that even educated Europeans (with the exception of Danish and Swedish respondents) may not be getting the picture: there is no point in being healthy and employed in a desert. Yet this is not what I wish to highlight. Rather, on p. 14 of the report, the authors themselves admit that: “Contrary to our expectations, citizens do not consider the protection of personal data to be a high priority for SET in the next 15 years (11 %)”. This is very interesting. As a priority, data protection ranks as low as quality of housing: nice, but very far from essential. The authors quickly add that “this might change in the future if citizens are confronted with serious security problems”.

They are right, but the point remains that, at the moment, all the fuss about privacy in the EU is a political rather than a social priority. Recall that this is an ageing population of grown-ups, not a bunch of teenagers in love with pictures of cats and friends online, allegedly unable to appreciate what privacy means (a caricature increasingly unbelievable anyway). Perhaps we “do not get it” when we should (a bit like the environmental issues) and need to be better informed. Or perhaps we are informed and still think that other issues are much more pressing. Either way, our political representatives should take notice.

Finally, and most importantly, the report contains a recommendation that I find extremely wise and justified. On p. 19, the Advisory Council acknowledges that, among the many foresight activities to be developed by the Commission, one in particular “should also be a priority”: ethical foresight. This must be one of the first times that ethical foresight is theorised as a top priority in the development of science and technology. The recommendation is based on the crucial and correct realisation that ethical choices, values, options and constraints influence the world of SETI much more than any other force. The evaluation of what is morally good, right or necessary shapes public opinion, hence the socially acceptable and the politically feasible and so, ultimately, the legally enforceable.

In the long run, business is constrained by law, which is constrained by ethics. This essential triangle means that—in the context of technoscientific research, development and innovation—ethics cannot be a mere add-on, an afterthought, a latecomer or an owl of Minerva that takes its flight only when the shades of night are gathering, once bad solutions have been implemented and mistakes have been made. Ethics must sit at the table of policy-making and decision-taking procedures from day one. It must inform our strategies about SETI especially at the beginning, when changing the course of action is easier and less costly, in terms of resources and impact. We must think twice but above all we must think before taking important steps, in order to avoid wandering into what Galileo defined as the dark labyrinth of ignorance.

As I stressed at the end of my keynote, the future of Europe is science, and this is why our priority must be ethics now.

Read the editorial: Floridi, L. (2014) Technoscience and Ethics Foresight. Editorial, Philosophy & Technology 27(4): 499–501.


Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information. His research areas are the philosophy of information, information and computer ethics, and the philosophy of technology. His most recent book is The Fourth Revolution – How the Infosphere is Reshaping Human Reality (2014, Oxford University Press).
