New ethics to understand and regulate digital technologies

Digital ethics is the study of how digital technologies modify ethics.
Luciano Floridi

Founding Director of the Digital Ethics Center

22 May 2025
Key Points
  • While some fundamental ethics never change, the field needs to be upgraded as society and technology evolve. Digital ethics is the study of how digital technologies modify ethics.
  • Classical ethics focuses on the agent of an action while modern ethics focuses on the recipient of that action. Digital ethics, instead, focuses on the relationship between the two entities.
  • Artificial intelligence represents an extraordinary form of agency. While it remains unclear whether artificial agents can behave ethically, the ethical implications of designing and implementing these systems are noteworthy.

 

Uncertain times


We all know that we live in a strange time. Technologies of all kinds, especially digital technologies, have made a massive difference in everybody’s life. This was especially true during the pandemic. When such a remarkably significant and widespread change occurs, sometimes new ethical issues arise concerning what’s right and wrong, or the best way for society to proceed. What kind of lives do we want to live? What kind of people do we want to be?

All these issues, which are studied in ethics, thereby take on a different flavour. They become interesting because they concern not only what technology does but also how we use it. This change also affects our ability to do what we have always done, but more extensively or differently.

Digital ethics

In some ways, ethics never changes because it’s about fundamental choices in our lives. However, ethics is upgraded depending on what we can modify and on how certain decisions become easier or more difficult. Digital ethics is the study of how digital technologies modify ethics.

There are classic problems, like privacy, with which we are all very well acquainted. Other issues are less obvious, but they are becoming more present in our lives. For instance, issues surrounding autonomy concern how far our choices can be determined by an algorithm or by artificial intelligence. There are also further issues, such as bias and justice.

At the same time, people can live separate lives online, where they may be offered certain choices or become friends with other people. All these issues – privacy, bias, choice, autonomy – are modified and coloured by the technologies we interact with. This is why we need this new way of approaching ethics, which we call digital ethics.

A new discipline

We’ve been talking about the ethical impact of digital technologies for some time. After all, computers have been around for more than half a century, since the Second World War. So why are we only now talking about digital ethics?

The truth is that we’ve been talking about digital ethics for the past 70 years or so, just under another name. The field used to be called computer ethics: before mobile phones and web apps, we thought these issues were all linked to computers.

Over time, however, we moved from computer ethics to information ethics. In so doing, we recognised that it isn’t the hardware that makes the difference but how it manipulates information. The label later changed to data ethics, and the field has been rebranded several times since.

Digital ethics in a modern world

The preferable label today is digital ethics. It reflects the major transformation from a world that was entirely analogue to an increasingly digital one. Indeed, we are currently changing the nature of the world, and, therefore, we are also upgrading the ethics that deal with the new features of modern society.

This upgrade is an important endeavour. Digital ethics is the ethics that we need to develop for our future well-being. By doing digital ethics properly, we will leave society a better place for future generations.

A challenging compromise


Throughout history, humankind has been faced with a classic problem. On the one hand, we desire to be safe against terrorist attacks, disease, abuse of power and so on. To ensure our safety, those who protect us, the police and armed forces, need more information about the world and its inhabitants. Think about what we do at the airport, for example. Lots of data help us to prevent bombings, an outcome we all support.

On the other hand, we also want to enjoy our private lives. We want to be left alone, and we don’t want to be spied upon and monitored. It is an unpleasant feeling to know that others are continually monitoring our choices and preferences. Overall, we want to have security but without sacrificing every semblance of privacy.

Facial recognition

Often this calls for trade-offs. Digital ethics seeks to find a balance between these competing factors. After all, privacy and safety are equally valuable on their own but are often incompatible. Digital ethics works to find the right context and the right trade-offs to manage this tension.

Facial recognition provides an apt example. Although questionable, the technology can be useful in high-security settings, such as airports or nuclear power stations. However, it is not acceptable in a classroom, where privacy is fundamentally important to students. Considering this, we can see how the need for security makes facial recognition at the airport an acceptable sacrifice for a better life, while privacy remains the more important value in most other settings.

Digital versus classical ethics

I’d like to discuss digital versus classical ethics from my own philosophical standpoint. It seems that we are about to undergo a philosophical shift from focusing on issues surrounding ourselves as individuals – questions like who should I be, who could I be, why should I do this or that – to focusing more on the other. This includes questions like what is important for you, what are your rights, what are your needs, am I listening to your voice and so on. If we consider more contemporary forms of ethics, such as environmental or medical ethics, they are more concerned with the receiver of the action: the person, the environment or the entity that suffers or receives a moral input from the agent: me.

Yet, it’s pertinent to consider how digital ethics compares to this shift. I think that there is a third movement here. This movement takes the emphasis on the agent (who should I be) and on the receiver of my actions (the environment or the patient) and places it on the relationship between the two entities. So, it’s not about me; it’s not about you – it’s about the relationship between you and me.

In other words, it’s no longer about either Juliet or Romeo, but it’s about their love. It’s not about this party or another party but about politics. It’s not about citizens, but it’s about citizenship.

A new paradigm

This shift from the agent (ancient ethics) to the patient (contemporary ethics) to the relation between the agent and the patient (digital ethics) is a significant improvement. It provides a wider variety of options and more opportunities to improve.

In a network, the nodes arise after the links are built. It is a bit like roads and roundabouts: first you have the roads, and then the roundabouts where the roads meet. You don’t build roundabouts first and then connect them with roads. I interpret digital ethics in this context of nodes linked by relationships. I see it as a network ethics, one that attempts to understand the relationships between nodes in order to improve these connections.

Reconsidering my earlier analogy, if the marriage doesn’t quite work, it is not about working on Juliet or on Romeo, but rather on the link between them. If politics doesn’t work, it’s not about this party or another party but about what it means to have a political relationship. Overall, I see digital ethics as network-oriented ethics.

Artificial intelligence


I like to describe artificial intelligence as an extraordinary form of agency. The technology embodies the capacity to solve problems, to carry out tasks successfully given a goal, and to learn from data and improve.

Of course, this extraordinary form of agency comes with ethical problems. This technology marks the first time we can divorce the ability to do something successfully – agency – from the need to be intelligent in doing so. This divorce is where the ethical questions arise.

Problems and possibilities

Sure enough, we have new forms of artificial agency. Nevertheless, it remains ambiguous whether artificial intelligence can be ethical. Artificial agency generates potential problems as well as beneficial possibilities. After all, artificial agents have no intentions, motivations, minds and so on. Therefore, they are part of an ethical discussion centred on the choices made by humans as these systems are built and allowed to operate.

This discussion is highly relevant because artificial agency will multiply the opportunities for moral action, both good and bad. It will enable people to perform more actions, but it also multiplies the responsibility on our shoulders. Overall, artificial agency can be a source of ethically good or ethically bad behaviour, but the ultimate responsibility is, and will remain, entirely human.

Discover more about

Digital ethics

Floridi, L. (2015). The Ethics of Information. Oxford University Press.

Floridi, L., & Taddeo, M. (2016). What Is Data Ethics? Philosophical Transactions of the Royal Society A, 374(2083).

Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, 32, 185-193.
