
Artificial Intelligence between Ethics, Logic and Law: a challenge that affects our national democracies

Issue: The Aesthetics of Ownership

*This article is a summary of a reflection developed for the Department of Innovation Engineering at the University of Salento, within the framework of the Erasmus Edufair project workshop held from 5 to 7 May 2025.

Abstract: This short paper reflects on the implications of the data era: artificial intelligence systems and models are only a consequence of what we have long been experiencing on the web and social media, and what is happening can seriously call into question our democratic organizational model and the guarantees on which it is based.

What is the situation today?

Currently, a widespread narrative tends either to present artificial intelligence systems as a miracle that will solve all the world's problems or, on the contrary, to imagine dystopian scenarios that will lead to the end of the world. Often the very titles of articles suggest dystopian and unrealistic visions of what we are experiencing. We have even come to hypothesize the suicide of a robot, while we would never dream of a car committing suicide when we damage it through excessive and improper use[1]. Indeed, cinema (and not only cinema) has long fascinated us with the possibility of machines more intelligent than we are, capable of replacing us and taking control of the world[2].

In reality, we are still far from the development of a true artificial consciousness, but the suggestions of certain films help us understand what could happen if certain nightmares were to materialize, even simply by recalling Asimov's Three Laws of Robotics[3]:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law[4].

It must be clear, however, that no artificial intelligence with an autonomous and sentient consciousness currently exists. There is no autonomous artificial subjectivity. We must unmask the false myths circulating today as fake news.

Robotics, too, still lags behind: current robots move awkwardly, without the fluidity of our bodies, which are themselves equipped with exceptional technology[5].

We are currently working a great deal on Large Language Models, such as ChatGPT, Copilot or DeepSeek. These are essentially statistical language models based on learning from huge datasets, without any understanding of the text. The words they generate are extrapolated from recurring verbal patterns in the analysed data, and they repeat those patterns according to statistical and probabilistic rules. Back in 2021 they were called "stochastic parrots" by two researchers then at Google, Timnit Gebru and Margaret Mitchell[6]. These Large Language Models exploit both today's computing power and incredible memory capabilities, drawing on the huge databases now available. But there is nothing new or creative in what we receive.
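The "stochastic parrot" idea can be made concrete with a deliberately tiny sketch. The toy bigram model below (an illustrative simplification, not how modern LLMs are actually built) counts which word follows which in a corpus and then samples the next word in proportion to those frequencies: it can only ever recombine patterns already present in its training data.

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the "model" can only ever recombine patterns it has seen.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count bigrams: for each word, how often each successor follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word, rng=random):
    """Sample the next word in proportion to observed frequencies."""
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# Generate a short "sentence": pure statistical repetition of the past.
rng = random.Random(0)
out = ["the"]
for _ in range(5):
    if not bigrams[out[-1]]:
        break  # dead end: this word was never followed by anything
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

Whatever comes out is fluent-looking but contains nothing the corpus did not already hold; real LLMs replace counting with neural networks over vastly larger data, yet the underlying point (statistical extrapolation of past patterns, not understanding) is the same.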

We always draw on a past that we have already produced.

For all these reasons, it makes little sense to talk about the ethics of artificial intelligence. The artificial intelligence systems and models that amaze us so much today must always be considered our products and therefore tools at our disposal. Ethics must be applied to us, not to the algorithms that manifest and endlessly repeat our errors and our discriminatory biases.

By simply reading the definition contained in the AI Act[7], we can easily understand the importance of the data taken as reference by artificial intelligence systems. If the quality of the input data is not guaranteed, the outputs will inevitably be poor and full of biases (cognitive distortions).

As we know, we live today in the midst of Surveillance Capitalism, as defined by Shoshana Zuboff[8]: our data has enormous value and has allowed the development of digital oligopolies in the hands of the big players in the sector, such as Microsoft, Google, Apple, Amazon and a few others. These companies have divided up the digital market behind our backs, basing it on the unbridled profiling of our tastes, our habits, even our most intimate interests. Today our identities are online and we are totally transparent on social media and in our digital lives, while the world is in the hands of very few individuals whose assets exceed the budgets of nation states.

Today everything can be oriented and manipulated, as clearly demonstrated by the Cambridge Analytica case in the United States and Great Britain. The American presidential vote and the UK referendum were swayed by social-media manipulation of entire geographical areas through fake news and online hate[9]. I think it is useful for everyone to watch the documentary The Social Dilemma (2020)[10], which denounced these practices and opened our eyes to what has put our fragile democracies in crisis at an international level.

Both personal and non-personal data are a valuable resource: a tool to analyse, predict, and influence an individual’s behaviour. And data, as you no doubt know, is the new oil.

On social media we now belong to well-profiled communities based on our tastes, preferences and habits, which are constantly reinforced and lead us to hate those who do not think like us. This echo chamber will only be amplified by the massive use of artificial intelligence tools.
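The reinforcement mechanism behind the echo chamber can be sketched in a few lines. The hypothetical toy recommender below (illustrative only; the item names and probabilities are invented) always suggests the historically most-clicked topic and then records its own suggestion as a new click, so exposure rapidly concentrates on a single topic:

```python
import random
from collections import Counter

# Invented starting "click history" for one user across three topics.
history = Counter({"news": 3, "sport": 3, "culture": 2})

rng = random.Random(42)
for _ in range(20):
    top = max(history, key=history.get)  # recommend the current favourite
    # The user, nudged by the recommendation, clicks it most of the time;
    # occasionally they pick a topic at random.
    click = top if rng.random() < 0.9 else rng.choice(list(history))
    history[click] += 1  # the system learns from its own suggestion

print(history.most_common())
```

Because the system trains on the behaviour it has itself induced, the favourite topic keeps pulling further ahead: a minimal model of how profiling-driven feeds narrow what we see.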

Europe's regulatory response

In June 2019, Giovanni Buttarelli[11], during an important conference organized by ANORC[12], left us his testament in these words:

“The current digital ecosystem is based on the intensive and indiscriminate exploitation of information and personal data. In little more than a decade, the structure of the markets has converged towards quasi-monopoly situations, decreeing the exponential growth of the market power of a few, but very powerful, private players. The result is the concentration of the power to control information flows in the hands of technology giants, a circumstance that facilitates the consolidation of a business model based on the profiling and financial manipulation of people. In this regard, a structural rethinking of the prevailing business model is necessary.

Furthermore, a coordinated intervention by data protection, consumer protection and competition authorities is necessary, which takes into account the synergies and challenges common to the different regulatory areas.”

Thanks to people like Mr. Buttarelli, the European Union has not stopped at the AI Act: Europe has long promoted copious legislation dedicated to digital innovation, whose ultimate goal is to curb the excessive power of Big Tech and guarantee the fundamental rights and freedoms of European citizens. In addition to these technical rules, there is a Declaration on Digital Rights and Principles, jointly approved in 2023 by the European Commission, Parliament and Council, addressed not only to us as citizens but to all governments belonging to the European Union and to its legislative bodies, in order to regulate and protect our fundamental rights and freedoms.

The first article of the AI Act clarifies the purpose of this important European legislation: "The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation."

All the most recent European regulations on digital scenarios revolve around the following fundamental principles:

- free circulation, portability and protection of (personal and non-personal) data;

- transparency (not only information on legal bases, purposes, roles and rights of vulnerable subjects involved in digital processes, but also transparency of algorithms);

- non-exclusivity of algorithmic decision and algorithmic non-discrimination;

- accountability, therefore definition of roles and responsibilities, organizational models and procedures;

- integrity and, therefore, reliability of data;

- security and, therefore, risk management;

- interoperability of IT systems, databases and digital archives;

- accessibility and sustainability.

Corruptissima re publica plurimae leges. So said Publius Cornelius Tacitus, a Roman historian, orator and senator considered one of the greatest and most influential Latin writers: the more corrupt the state, the more numerous its laws. Indeed, all these European regulations appear redundant and excessive. They bureaucratize the digital market instead of truly protecting us as citizens from the abuses of large technology companies. For example, the Italian Data Protection Authority asked DeepSeek for greater transparency[13]. The Chinese company replied that the European regulation does not apply because its headquarters are in China, and today it continues to operate in European territory. Unfortunately, the web has no geographical borders, and for this reason international treaties and agreements are necessary.

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the first-ever legally binding international treaty in this field. Opened for signature on 5 September 2024, it aims to ensure that artificial intelligence systems are fully compliant with human rights, democracy and the rule of law, while being conducive to technological progress and innovation. The convention is open globally and has already been signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States of America and the European Union.

The unsolved problems
Finally, we cannot forget how artificial intelligence can be weaponised. Artificial intelligence systems are used by governments (unfortunately even democratic ones) to authorize the killing of civilians, simply because an algorithm has decreed that a terrorist is probably hiding in a building or an ambulance[14]. What probability threshold can justify the murder of women and children? For this very reason, we cannot believe that artificial intelligence is an autonomous decision maker. Behind every algorithm that authorizes a killing, there are always humans who have chosen to kill.

Unfortunately, the European AI Act does not address this aspect. How long can this delicate issue remain unregulated at European and international level?

Furthermore, we cannot forget that behind the learning of these machines we call intelligent there are many underpaid people, often confined to the third world, who work day and night to correct the prejudices of the large language models: the so-called digital proletariat. Entire families dedicate themselves to this alone, from morning to night, in order to survive, as documented in an article I read in the Italian magazine "Internazionale"[15].

Finally, as I stated in a recent interview published in a magazine dedicated to digital health[16], the real risk is that thinking, that is, both the critical and the creative process, will be delegated to machines. The effect, in essence, is to block the birth of new ideas, styles and intuitions, locking us into a cycle of repetition. AI is based on data, and data is always and inevitably oriented towards the past. Intuition, the stroke of genius that takes a step forward and changes the paradigm, is typical of humanity. In this scenario, people risk becoming dependent on machines that they neither understand nor control, while AI predictions will continue to come true because they are based on a pool of data that keeps repeating itself, continuously confirmed by previous predictions.

The challenge today is to ensure that this does not happen through widespread literacy, specialized training, and the development of interdisciplinary skills. In a word: Culture.



[1] In a first, robot dies of suicide in South Korea. What really happened?, article published on July 5, 2024 on Firstpost, available at the link:https://www.firstpost.com/explainers/first-robot-dies-of-suicide-in-south-korea-what-really-happened-13789625.html.

[2] Let us remember, for example, Avengers: Age of Ultron (2015), in the Marvel saga dedicated to the Avengers. But we could recall many other films, such as Blade Runner (1982), The Terminator (1984) or The Matrix (1999). And we can go even further back, to HAL 9000 in Stanley Kubrick's 2001: A Space Odyssey (1968), or to the music of Pink Floyd, with Welcome to the Machine on the album Wish You Were Here (1975).

[3] Isaac Asimov (1920-1992) was a Russian-born American author of science fiction, popular science and other works. He was also a professor of biochemistry at Boston University. The Three Laws of Robotics (often abbreviated to Asimov's Three Laws or Asimov's Laws) were introduced in his 1942 story "Runaround" (included in the 1950 collection I, Robot), although similar restrictions were implicit in earlier stories.

[4] The three laws of robotics have also influenced the European legislator who expressly mentioned them in paragraph T of the European Parliament resolution of 16 February 2017 with recommendations to the Commission on civil law rules relating to robotics.

[5] In this video you can see Sarah, a robot presented in Saudi Arabia about a year ago at an international fair dedicated to robotics and artificial intelligence: https://www.youtube.com/watch?v=y0LVqoLDmSw.  
We can give a name to robots, but this does not mean that they have a personality. Or at least it is not so today, and we do not know when and if this will happen.

[6] Google might ask questions about AI ethics, but it doesn't want answers, article published in The Guardian, available at the link: https://www.theguardian.com/commentisfree/2021/mar/13/google-questions-about-artificial-intelligence-ethics-doesnt-want-answers-gebru-mitchell-parrots-language.

[7] REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
Article 3 Definitions

For the purposes of this Regulation, the following definitions apply:

(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

[8] The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power is a 2019 non-fiction book by Shoshana Zuboff.

[9] Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach, article published in the Guardian, 17 Mar 2018, available at the link https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

[10] https://thesocialdilemma.com/

[11] Giovanni Buttarelli (Frascati, 24 June 1957 – Milan, 20 August 2019) was an Italian civil servant. He was appointed European Data Protection Supervisor (EDPS) by the European Parliament and the Council of the European Union. Before joining the EDPS, Buttarelli was Secretary General of the Italian Data Protection Authority. He was also a magistrate at the Court of Cassation.

[12] ANORC (Associazione Nazionale Operatori e Responsabili della Custodia di contenuti digitali/ National Association of the Custodians and Operators of Digital Content) https://anorc.eu/

[13] Italy: Garante opens investigation into DeepSeek, article published in the DataGuidance magazine on January 28, 2025, available at the link https://www.dataguidance.com/news/italy-garante-opens-investigation-deepseek.

[14] ‘If I die, I want a loud death’: Gaza photojournalist killed by Israeli airstrike, article published in The Guardian on 18 Apr 2025, available at the link https://www.theguardian.com/world/2025/apr/18/gaza-photojournalist-killed-by-israeli-airstrike-fatima-hassouna. It is also useful to read "Israel's AI Experiments in the War in Gaza Raise Ethical Concerns", article published in The New York Times on April 26, 2025, available at the link https://gvwire.com/2025/04/26/israels-ai-experiments-in-the-war-in-gaza-raise-ethical-concerns/.

[15] https://www.internazionale.it/reportage/laura-melissari/2024/08/06/intelligenza-artificiale-lavoratori-sfruttamento.

[16] "Le AI non conoscono la parola futuro", interview with Andrea Lisi, published on Fare Sanità Magazine, dated April 11, 2025, available at the link: https://faresanitamagazine.it/le-ai-non-conoscono-la-parola-futuro/.

Andrea Lisi
