The Italian Law on Artificial Intelligence: the overall framework in general compliance with the AI ACT
Not only the "Brussels Effect"1: perhaps we could also speak of an "Italy Effect". Italy was, in fact, the first of
the 27 Member States of the European Union (EU) to adopt a national law on artificial intelligence (AI for short).
On 23 September 2025, Italian Law no. 132/2025 was published in the Official Gazette2 (OG) and entered
into force on 10 October 2025.
The framework of this legislation, both in its relationship with European Regulation 2024/1689 (the AI
ACT) and with regard to the changes it makes, requires the interpreter to be accurate and precise, so as to avoid
dismissing – erroneously, in the writer's view – the Italian legislator's effort as a useless and avoidable additional
regulatory act.
First premise: the relationship between rules.
First of all, the EU Regulation performs a function of harmonisation and general framing of rules and
principles in the field of AI3, leaving Member States ample room for manoeuvre (room for execution, designation
and specification of those rules and principles), which is necessary to adapt the regulatory framework to
institutional structures and national sensitivities, in particular in areas that affect security, justice and fundamental
rights.
Law 132/2025, therefore, aims to exercise those powers of manoeuvre directly granted by the AI
ACT, without overlapping with, or legislating on, matters already regulated by the latter.
It should also be noted that in Italy the principle of the hierarchy of sources applies, which establishes an
order of prevalence between the different legal rules: a lower-ranking source cannot contradict or repeal a
higher-ranking source.
For what concerns us here, it is sufficient to know that in Italy this principle provides for the Constitution as the
supreme source4, EU law as the superordinate rank to the internal one, laws, legislative decrees and decree-laws as the
primary rank and the acts of bodies with regulatory power (e.g. Ministries, Administrative Authorities, etc.) as the
secondary rank.
Therefore, in the event of a conflict between the EU Regulation and Italian law, the former would prevail (except,
as mentioned, for the violation of constitutional principles and rights).
Furthermore, the Italian Law already draws a clear boundary around its general scope in its first articles: "This
law does not produce new obligations with respect to those provided for by Regulation (EU) 2024/1689 for AI systems
and for artificial intelligence models for general purposes" (Article 3, paragraph 5, General principles); this
1 "Brussels Effect": the ability of the EU, through its rules, to influence the legal regimes of other states by arriving first, on the assumption that
companies have every interest in operating under uniform rules so as to avoid the additional costs of complying with divergent ones. The term was coined in 2012 by Professor
Anu Bradford in her article entitled "The Brussels Effect", published in the Northwestern University Law Review.
2 https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg .
3 The legal basis of the AI ACT is, in fact, Article 114 TFEU on the realisation of the single market (together with Article 16 TFEU on the protection of personal data).
4 Article 11 of the Italian Constitution is the rule that allows the entry of EU law into the national legal system, with the limitation that, if an EU rule or act
were to infringe constitutional principles and rights, the Constitutional Court has the power to disapply it.
commitment is reaffirmed in particular in Article 16, paragraph 1, where the Government is entrusted with the task
of issuing legislative decrees within 12 months to define a comprehensive discipline on the use of data, algorithms
and mathematical methods for the training of AI systems "without (providing for) further obligations in the areas
subject to Regulation (EU) 2024/1689 with respect to those already provided for therein".
Second premise. The aims and framework: the fundamental role of EU
principles.
Having established that Law 132/2025 operates in accordance with the AI ACT, guaranteeing its
implementation at national level by providing the necessary instruments of domestic law (not to be confused,
however, with the role it would play if the AI ACT were a Directive), the objective of the Italian Law is, as stated in
Article 1, "to lay down principles on research, experimentation, development, adoption and application of artificial
intelligence systems and models", in coordination and complementarity, as just noted, with the general framework
of the Regulation, intervening in the areas within national competence: in particular, health, labour law,
procedural law, the Public Administration, the designation of Authorities for the promotion, development and
supervision of the use of AI systems, sanctioning powers and criminal matters.
Having clarified the relationship and the levels of action between the two pieces of legislation, it is still worth
recalling, before entering into the merits of the Italian legislation, the general approach and the principles established
by the European Regulation, into which Law 132/2025 is inserted and which it recalls in full.
The AI ACT clearly sets out its aims from the very first recitals: to improve the functioning of the single
internal market, to promote the spread of trustworthy and human-centric AI and to promote innovation.5
These aims are the result, and the application to the AI field, of the direction already marked by the European Digital
Single Market Strategy (2015), the "2030 Digital Compass" (2021)6, and "Long-term competitiveness of the EU:
looking beyond 2030"7: the EU has, in fact, the objective of creating a single digital market with a view to
competitiveness, especially vis-à-vis the USA and China, and directs its efforts towards preventing the Member States
from fragmenting the market through internal regulation.
The AI ACT is also based on a risk-based approach, building a visual pyramid at the top of which sit AI systems
posing unacceptable risk (and, therefore, not allowed in the EU), in the body high-risk systems (which also include
general-purpose AI systems and GenAI) and at the base AI systems posing acceptable risk8. The Italian Law takes this
architecture for granted and recalls (Article 2) the definitions of AI systems and models from the AI ACT (Article 3,
points 1 and 63), to which it also refers for whatever it does not expressly provide.
5 Recital 1: "The purpose of this Regulation is to improve the functioning of the internal market by establishing a uniform legal framework in particular with regard to the development,
placing on the market, putting into service and use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the deployment of
human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety and fundamental rights enshrined in the Charter of Fundamental Rights of
the European Union ("the Charter"), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, as well as to
promote innovation. This Regulation ensures the free cross-border movement of AI-based goods and services, thereby preventing Member States from imposing restrictions on the
development, marketing and use of AI systems, unless expressly authorised by this Regulation ".
6 Communication COM (2021) 118: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021DC0118 .
7 Communication COM (2023) 165: https://commission.europa.eu/system/files/2023-03/Communication_Long-term-competitiveness.pdf .
8 The effect of including an AI system in one or another category is that the recipients of the regulation (provider and deployer) are subject to more
or less stringent compliance obligations. At the time of writing, however, the European Commission is preparing a draft proposal with the aim of extending
the deadline for (or even simplifying) the compliance obligations of high-risk AI systems and, in particular, those of GenAI:
https://digital-strategy.ec.europa.eu/en/news/commission-collects-feedback-simplify-rules-data-cybersecurity-and-artificial-intelligence-upcoming .
We have referred to a "risk-based approach": to what risk does it refer? The risk of violating people's fundamental
rights and freedoms.9 One of the ways in which the EU seeks to achieve the objective of protecting fundamental
rights and freedoms is precisely the enunciation of a whole series of principles that protect them: central, first of
all, is the concept of human-centric and trustworthy AI, established by the AI ACT (Recital 1) and
immediately reaffirmed by the Italian Law (Article 1, paragraph 1), corroborated and supported by the ethical guidelines
developed in 2019 by the AI HLEG Committee10, recalled in the AI ACT in Recitals 7 and 2711.
Similarly, the Italian Law recalls and reiterates them, in particular by promoting "a correct, transparent and
responsible use (of AI) in an anthropocentric dimension" (Article 1) and the development and application of AI
systems and models in "respect for the autonomy and decision-making power of man (…) ensuring human oversight
and intervention" (Article 3, paragraph 3).
Human oversight of AI systems and, consequently, the entrustment of the final decision to the human being
are the manifestation of the human-in-the-loop / human-on-the-loop / human-in-command triad described by the
HLEG12. To ensure compliance, the Italian law requires that the data used and the development processes of AI
systems and models be correct, reliable, secure, of good quality, appropriate and transparent (Article 3, paragraph 2).
The processes of research, experimentation, development, adoption and application of AI systems must always
be guided and overseen by the principles "of transparency, proportionality, security, protection of personal data,
confidentiality, accuracy, non-discrimination, gender equality and sustainability" (Article 3, paragraph 1) and by
the fundamental rights and freedoms provided for in the Constitution and in EU law.
Given that an essential prerequisite of AI is its connection with the main source for its operation, i.e. the data
supplied to it throughout its cycle of activity, the Italian law explicitly and constantly recalls, with regard to
personal data, the rules and principles laid down by EU Regulation 2016/679 (GDPR) and by Legislative Decree
196/2003, as amended by Legislative Decree 101/2018 (the Italian Privacy Code)13.
With regard to a sensitive category of data subjects, Article 4 of Law 132/2025 is specifically aimed at
the processing of the personal data of minors, distinguishing between minors under 14 years of age and minors
between 14 and 18 years of age14, and establishing consent as the legal basis – as prescribed by Article 6 GDPR
for processing to be considered lawful –: the consent of those who exercise parental responsibility for the first
category, and the consent of the minor themselves for the second. The minor between 14 and 18 may thus give
consent autonomously, provided that the information on the processing of personal data related to the use of
the AI system is clear, easily accessible and understandable.
9 However, this aspect is not new: the AI ACT is, in fact, part of a series of European regulations that follow this approach (first of all, EU Regulation 2016/679,
the GDPR), highlighting the fact that the European legal system places the person, with their fundamental rights and freedoms, at the centre of its protections.
10 AI HLEG Committee, independent committee appointed by the EU Commission: https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf .
11 "While the risk-based approach forms the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 ethical guidelines for trustworthy AI developed
by the Commission's independent AI HLEG. In these guidelines, the AI HLEG has developed seven non-binding ethical principles for AI that are intended to help ensure that AI is
trustworthy and ethically sound. The seven principles include: human intervention and oversight, technical robustness and security, privacy and data governance, transparency, diversity,
non-discrimination and fairness, social and environmental well-being and accountability. Without prejudice to the legally binding requirements of this Regulation and any other applicable
Union law, these guidelines contribute to the development of a coherent, reliable and anthropocentric AI, in line with the Charter and the values on which the Union is founded " (Recital
27, AI ACT).
12 Page 16 of the HLEG Guidelines: "Human oversight. Human oversight helps ensure that an AI system does not undermine human autonomy or causes other adverse effects.
Oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. HITL refers
to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. HOTL refers to the capability for human intervention
during the design cycle of the system and monitoring the system’s operation. HIC refers to the capability to oversee the overall activity of the AI system (including its broader economic,
societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation".
14 It should be noted that, in relation to minors, the AI ACT states in Recital 9: "For example, this Regulation should not affect national labour law and legislation
on the protection of minors, i.e. persons under the age of 18, taking into account General Comment No. 25 of the Convention on the Rights of the Child (2021) on the rights of minors
in relation to the digital environment, insofar as they do not specifically concern AI systems and pursue other legitimate objectives of public interest".
Third premise. The strategic role of artificial intelligence in Italy.
To complete the premises on the framework and role of the national legislation, also in relation to the AI ACT,
it should be reiterated that Law 132/2025, as well as its future implementing decrees, affects both the private and public
spheres. Chapter III "National strategy, national authorities and promotional actions" provides that the Presidency of
the Council of Ministers, the Ministries and the competent administrative authorities prepare a national strategy for
artificial intelligence, approved at least every two years, which "promotes collaboration between public administrations
and private entities regarding the development and adoption of artificial intelligence systems, coordinates the activity
of the public administration in this area, the dissemination of knowledge in the field of artificial intelligence and directs
the measures and incentives aimed at the entrepreneurial and industrial development of artificial intelligence" (Article
19, paragraph 2). In addition, a Committee is established to coordinate the activities of bodies, organisations and
foundations operating in the field of digital innovation and artificial intelligence, with the function of coordinating
the action of directing and promoting research, experimentation, development, adoption and application of AI
systems and models (Article 19, paragraphs 6–7).
Article 5, placed even earlier in the text, operates in this context as a driving force for the State and public authorities
to encourage small and medium-sized enterprises to develop and use AI systems, both "as a useful tool for starting new
economic activities and supporting the national productive fabric" and with the aim of "increasing the competitiveness
of the national economic system and the technological sovereignty of the Nation" (Article 5, paragraph 1, letter a)).
In order to achieve the ambitious objectives in the economic field, including the creation of an AI market
that is "innovative, fair, open and competitive and of innovative ecosystems" (art. 5, para. 1, lett. b)), Italian law sets as
an essential condition the possibility for economic operators to have access to and availability of high-quality datasets
– with the constant reference to the GDPR, as regards the processing of personal data.
"Economic operators" here covers both companies and scientific and innovation communities: collaborative research
between companies, as well as the economic and commercial exploitation of results, is also encouraged and promoted
in the scientific and non-profit sector, and even in the public sector.
The heart and the new features of the legislation: the sector provisions (health,
public administration and justice, intellectual professions, intellectual
property).
Entering, therefore, into the heart of the Law's field of application, Chapter II is dedicated precisely to the
sector provisions, giving substance to the aforementioned principle of the human in the loop.
Starting from the articles that regulate the use of AI in the healthcare sector (Articles 7-11), it is specified that such
use must be understood as supporting the choices and decisions of professionals, never in any way
replacing them.
This provision is reiterated in the following Articles 13, 14 and 15, which limit AI to the "instrumental and
support activities" of the activity of the intellectual professions (Article 13), to supporting the decision-making
("provvedimentale") activity of the PA (Article 14), and to the "organisation of services related to justice", the
"simplification of judicial work" and "ancillary administrative activities" (Article 15).
Therefore, the final decision (and supervision) must always remain in the hands of the human being – returning
to Articles 7-11, the medical practitioner – who may use AI technologies for organisational, diagnostic and
treatment purposes (Article 7, paragraph 5).
Similarly, Article 15 limits the use of AI systems in the field of justice, clarifying that the assessment of facts
and evidence, the interpretation and application of the law and the adoption of measures are activities exclusively
entrusted to the judiciary; therefore, the admission of predictive justice systems is excluded for the moment, since AI
can be used only for the organisation of justice-related services, for the simplification of judicial work and for ancillary
administrative activities.
Establishing that the final decision must always be attributable to a human being also means identifying only
the latter (provider or deployer) as the party to whom responsibility is attributed, excluding any idea of legal
subjectivity for the machine. Reference is therefore made to the traditional categories of liability applied in Italy
(for example, liability for unlawful acts under Article 2043 of the Italian Civil Code, producer liability, and
service-provider liability).
The principle of non-discrimination in access to health services is also reaffirmed (Article 7, paragraph 2)15, and
a duty to inform the patient whenever AI systems are used in their treatment pathway is introduced. The real
novelty, however, is contained in Article 8, according to which the legal basis for the processing of personal data
for research purposes and scientific experimentation in the creation of artificial intelligence systems in the health
sector is the public interest (therefore not consent), grounded in the constitutional protection of research and
health under Articles 33 and 32 of the Constitution, subject to two conditions: such processing must be
communicated to the Italian Data Protection Authority and a period of 30 days must be observed; if the Authority
does not issue a blocking order, the processing may begin (Article 8, paragraph 5).
Article 9 applies the provisions of Recitals 138 and 139 and of Articles 57 and 59 of the AI ACT to establish a
"regulatory experimentation space for AI at the national level", i.e. a sandbox, giving the Ministry of Health the
mandate to regulate it by its own decree, after consulting the Italian Data Protection Authority and the
parties directly concerned. Although the objective is laudable, there is a risk of potential regulatory overlap, in
particular with EU Regulation 2025/327 (which is, however, being applied in stages over time), which establishes
the European Health Data Space16, and with the previous Decree of the Italian Ministry of Health of 31 December
2024, which established the Health Data Ecosystem, focusing on the processing of health data in scientific research
in the medical, biomedical and epidemiological fields.
The list of healthcare-sector provisions closes with Article 10, a vertical provision in the public sphere, which refers
to the Italian legislative acts establishing the Electronic Health Record (Fascicolo Sanitario Elettronico, FSE), a digital
medical record of the individual established on a regional basis, which, following recent amendments, can be
automatically fed with the patient's personal data, i.e. without prior consent, but with the possibility of opting out
(the data subject may object to the automatic uploading of their health data). This reference was made precisely
by virtue of the keenly felt need to guarantee advanced tools and technologies in the health sector
(Article 10, paragraph 1), allowing the development of AI solutions to support the purposes for which the FSE was
established17.
Finally, a platform will be set up to support the purposes of care and local assistance and to provide support services
for clinicians and users (Article 10, paragraph 2).
15 In this regard, see the provisions on high-risk systems in Annex III of the AI ACT, point 5, "Access to and enjoyment of essential private services and
essential public services and benefits", letters a) and d).
16 In this regard, Recital 68 AI ACT: "with regard to health, the European Health Data Area will facilitate non-discriminatory access to health data and the training of AI
algorithms from such data sets in a secure, timely, transparent, reliable and privacy-protecting manner, as well as with appropriate institutional governance. Relevant competent authorities,
including sectoral ones, that provide or support access to data, may also support the provision of high-quality data for the purposes of training, validation and testing of AI systems".
17 Art. 12 bis Law 179/2012: a) diagnosis, treatment and rehabilitation; a bis) prevention; b) international prophylaxis; c) scientific study and research in the
medical, biomedical and epidemiological fields; c) health planning, verification of the quality of care and evaluation of health care; c bis) health evaluations
and assessments for the recognition of welfare and social security benefits.
6
It is now worth dwelling on the scope of the duty to inform the data subject about the use of AI (reiterated
in the provisions on work and on the intellectual professions), which represents a fulfilment of the principle/duty of
transparency also addressed to the PA, where it is prescribed that public administrations that use artificial
intelligence (for the purposes referred to in the article) do so "ensuring that data subjects are aware of its operation
and the traceability of its use" (Article 14, paragraph 1).
In Italy, in fact, the principle of transparency thus defined has a long tradition, which starts from the dictates of the
Constitution (Article 97, with the principles of impartiality and good performance of the PA) and evolves through a
long regulatory history, moving from a principle of transparency over documents, to transparency over data
(open data), and then over activity and the reasons for the PA's decisions.18
Coming to the present day, the application of the principle of transparency faces a new evolution: that posed
by the use of new technologies, including AI systems. Transparency no longer concerns only documents and
data, therefore, but also automated decision-making processes, for which the data subject must always be
guaranteed knowledge of the functioning, and traceability of the use, of the AI systems employed by the PA in
order to "increase the efficiency of its activity, to reduce the time required to define procedures and to increase the
quality and quantity of services provided to citizens and businesses" (art. 14).
This discussion of the principle of transparency thus defined, and of the duty to guarantee knowledge of the
functioning of the AI system, opens the field to another age-old problem: that of the explainability of AI decisions
and decision-making processes and the need to "open the black box", addressed in the AI ACT19.
Although somewhat redundant, where the Italian law imposes on the intellectual professional the duty to inform
clients (in "clear, simple and exhaustive language") of the use of AI systems, permitted only in the exercise of
activities instrumental to and supportive of the professional activity, it also introduces an important concept: that of
the "prevalence of the intellectual work that is the object of the work performance" (Article 13, paragraph 1).
Turning to Chapter IV, "Provisions for the protection of users and in the field of copyright", Article 25, entitled
"Protection of the copyright of works generated with the aid of artificial intelligence", amends Article 1, paragraph 1 of
the Italian Copyright Law (Law no. 633 of 22 April 1941, or LDA), prescribing that forms of expression "even
where created with the aid of artificial intelligence tools, provided that they constitute the result of the intellectual work
of the author" can be considered intellectual works.
It is also specified that, in order to obtain the protections provided for by the LDA, the work must be an intellectual
work (of a creative nature, Article 1 LDA) that is "human" (Article 25, paragraph 1, Law 132/2025) – a sign that
the advent of artificial intelligence is so disruptive that it has made the legislator's intervention necessary, inserting
this adjective by amendment of Article 1 of the LDA.
But the fact that it can no longer be assumed that a work of the intellect is the product of human creativity, so much
so that this must now be specified, has been and remains the subject of several legal disputes (worldwide as well20),
which have led the Italian Courts to rule on the point.
18 In fact, we have moved from Law 241/1990, which for the first time established citizens' right of documentary access (and Law 15/2005, which
introduced the term "transparency"), to the Digital Administration Code (Legislative Decree 82/2005), Article 1, paragraph 1, letter l-ter, on "open data"
and the concept of reusability of PA data, to the Anti-Corruption Law (190/2012), which included transparency as a fundamental tool for preventing and
combating corruption in the PA, and to Legislative Decree 97/2016 ("Madia Reform"), which introduced the main changes to the "Transparency Code"
(Legislative Decree 33/2013), sanctioning Italy's alignment with the international approach of the Freedom of Information Act.
19 Explainability and transparency are closely related concepts but they are not the same concept. We find a first definition of "transparency" already in
Recital 27: "Transparency means that AI systems are developed and used in such a way as to allow adequate traceability and explainability, making human beings aware of the fact
that they are communicating or interacting with an AI system and duly informing deployers of the capabilities and limitations of that AI system and the persons concerned of their rights";
while "Explainability" is defined in Article 86: "Any person concerned who is the subject of a decision taken by the deployer on the basis of the output of a high-risk AI system
listed in Annex III, with the exception of the systems listed in point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that he or she considers
to have a negative impact on his or her health, safety or fundamental rights, has the right to obtain from the deployer clear and meaningful explanations on the role of the AI system in
the decision-making procedure and on the main elements of the decision taken", applying only where not otherwise provided for by EU law (in this regard, it should
be recalled that Article 22 GDPR, already a forerunner of this principle of explainability, safeguards the data subject, where automated processing of personal
data is permitted, by requiring the Data Controller to guarantee adequate measures to protect the data subject's rights, freedoms and legitimate interests,
providing for 1) human intervention, 2) the right to express his or her opinion and challenge the decision, and 3) the right to object to that automated decision).
20 Among the best-known cases are Zarya of the Dawn and Dreamwriter.
In particular, it is worth noting an important reasoning of the Court of Cassation, the third and final instance of
judgment in Italy, which found itself for the first time reasoning concretely on the possibility of admitting copyright
protection to a digital work of art. The Court, with order 1107/2023, despite having deemed inadmissible – as a
new issue, not dealt with in the contested judgment – the ground of appeal that considered the creative character
absent due to the digital nature of the work as processed by software through mathematical algorithms, stated
that "a factual assessment would have been necessary to verify whether and to what extent the use of the tool (of the software) had
absorbed the creative processing of the artist who had used it" (Court of Cassation Civ. Order No. 1107 of 16/1/2023)21; it
can therefore be inferred, by a contrario reasoning, that in the Judges' view the use of digital technology to create
a work does not in itself preclude its qualification as an intellectual work, unless, following a factual assessment in
which the degree of creativity is rigorously scrutinised, it appears that the use of the technology has absorbed the
creative elaboration of the artist.
Although the principle of stare decisis does not apply in the Italian legal system, if the Courts were to translate this
reasoning into a true principle of law, supported by the amendment to the LDA, the effect would therefore be to
recognise the protection of copyright for works created through artificial intelligence; to achieve this goal, it will
be essential to assess the actual human creative contribution on a case-by-case basis, in relation to the use of the
AI system by the author.
Another important amendment to the LDA is that provided for by art. 25, para. 1, letter b), which inserts
a new article (70-septies) addressing the activity of Text and Data Mining (TDM) – "any automated technique aimed
at analysing texts and data in digital format in order to generate information, including in particular patterns, trends and
correlations" (art. 2, point 2 of the Copyright Directive) – carried out through AI models and systems on data accessible
online but covered by copyright and, therefore, limited in their use.
The problem that Article 70-septies seeks to regulate therefore involves the need, on the one hand, to protect the
intellectual property of online content and, on the other, not to stifle the potential brought by AI technologies
for the analysis, enjoyment and use of data, however protected22.
The problem is not new and had already begun to be addressed with Directive 2019/790/EU23 (the
"Copyright Directive"), transposed into Italian law by Legislative Decree 177/2021, which introduced Articles
70-ter and 70-quater into the LDA, setting out ad hoc rules in Italy for TDM activity.
Article 70-septies, taking into account the further evolution of technology and its impact on TDM activity,
now refers to the purposes of "text and data extraction through artificial intelligence models, including
generative ones", referring to the "reproduction and extraction of works and other materials contained on the network or in databases
to which one has legitimate access".
The article only permits such activities if they are carried out in accordance with Articles 70-ter (which defines
what is meant by TDM or "text and data mining": "any automated technique aimed at analysing large amounts of text, sounds,
images, data or metadata in digital format with the aim of generating information, including models, trends and correlations") and
70-quater ("reproductions and extractions from works or other materials contained in networks or databases to which there is
legitimate access for the purpose of text and data mining"), which allow, without any need for authorisation from those
21 The order reads: "RAI (the appellant) complains that the Court of Appeal has erroneously qualified as an intellectual work an image generated by software and not attributable
to a creative idea of its supposed author. The appellant claims that the work of the architect B. (defendant) is a digital image, with a floral subject, with a so-called "fractal" figure, i.e.
characterised by self-similarity, or by repetition of its forms on different scales of magnitude and was processed by a software, which processed its shape, colours and details through
mathematical algorithms; the alleged author would only have chosen an algorithm to be applied and approved a posteriori the result generated by the computer (...). The plea appears
inadmissible (...) because it is aimed at introducing for the first time in the context of legitimacy a new issue not dealt with in the trial. (…) It is certainly not sufficient for this purpose
for the counterparty to admit that it used software to generate the image, a circumstance which, as the appellant itself admits, is still compatible with the development of an intellectual
work with a rate of creativity that should only be scrutinised more rigorously (...), if, as happened in the specific case, RAI did not ask the trial judges to reject the application for that
reason. Indeed, a factual assessment would have been necessary to verify whether and to what extent the use of the tool had absorbed the creative elaboration of the artist who had used it.
The plea must therefore be declared inadmissible, without the need to address here the issues, as yet unexplored in the jurisprudence of this Court, of so-called digital art (also called
computer art) as a work or artistic practice that uses digital technology as part of the creative process or of its exhibition presentation".
According to the legal journal "Arte e Diritto 2023, 2, II, 391", Giuffrè, it could therefore be deduced that: "The unauthorised reproduction of the creative image of a
flower constitutes a violation of the copyright of the person who created the image, even if the author used software in the creative process. The qualification of the result as an intellectual
work is not in fact precluded by the use of digital technology for the realisation of the work, unless, following a careful assessment of the facts and a careful analysis of the level of creativity
employed, it emerges that the use of technology has replaced the creative elaboration of the artist";
22 Although TDM was not yet provided for, this requirement, linked to the advent of digital tools that have redefined and expanded the concepts
of access, use and reuse of digital content, had already been intercepted by Directive 2001/29/EC, a first attempt to adapt the exclusive right of
reproduction to the characteristics of the "information society", recognising for the author of the intellectual work the exclusive right to prohibit direct or indirect,
temporary or permanent reproduction, in any way or form, in whole or in part, subject to exceptions. Further changes were then introduced with the Copyright
Directive.
23 "on copyright and related rights in the digital single market and amending Directives 96/9/EC and 2001/29/EC";
who hold copyright and/or sui generis rights over databases, the extraction of data from sources and databases, on
the assumption that those who perform this operation have legitimate access to them.24
Furthermore, in order to ensure compliance with Article 70-septies, Law 132/2025 coordinates it with Article 171,
paragraph 1 of the LDA, extending the sanction provided for therein to those who: "reproduce or extract text or data
from works or other materials available on the network or in databases in violation of Articles 70-ter and 70-quater, including through
artificial intelligence systems". In this way, Article 70-septies of the LDA receives protection at the criminal level.
To complete this framework, we cannot ignore the reference to the AI ACT, which in Recital 105 gives a
framework on TDM: "Text and data mining techniques can be widely used in this context for the retrieval and analysis of such
content, which can be protected by copyright and related rights", referring to the fact that the development and training of
general purpose models require access to large amounts of data, thus posing a great challenge to the effectiveness
of the protections afforded by copyright. Therefore, it also provides that "Any use of copyrighted content requires the
authorisation of the relevant rightholder, unless exceptions and limitations apply" provided for by the Copyright Directive,
transposed into our legal system through Articles 70-ter and quater of the LDA.
In order to protect the holders of intellectual property rights, the AI ACT prescribes transparency obligations, in
particular for providers of general-purpose AI models (Recitals 106, 107, Article 53).25
Criminal area: new aggravating circumstances and new crimes.
As mentioned above, Law 132/2025 also intervenes in criminal matters. According to the EU's founding
treaties26, this area remains the responsibility of the Member States27. In this area of operation, Chapter V "Criminal
provisions" has been introduced, providing for new aggravating circumstances and new crimes related to the
improper use of artificial intelligence.
Thus, a new common aggravating circumstance was introduced in Article 61, no. 11-decies) of the Italian
Criminal Code, which provides for an increase in the penalty by up to one third regardless of the criminal offence
committed, and a special aggravating circumstance in Article 294 of the Italian Criminal Code relating to
offences against political rights: "The penalty is imprisonment from two to six years if the deception is carried out through the use of
artificial intelligence systems".
Other new special aggravating circumstances have been introduced for the crime of market rigging (aggiotaggio)
provided for by art. 2637 of the Italian Civil Code and for market manipulation under art. 185 of the
Consolidated Law on Finance (TUF): Law 132/2025 provides for an increase in the penalty if the offences were
committed with the aid of AI systems.
Finally, it is important to note the provision of the new crime of "deepfake" pursuant to art. 612 quater:
"Anyone who causes unjust damage to a person, by transferring, publishing or otherwise disseminating, without their consent, images,
videos or voices falsified or altered through the use of artificial intelligence systems and capable of misleading as to their authenticity, is
24 What differentiates the two rules are their areas of application: 70-ter refers to extraction for scientific purposes by research organisations and cultural
heritage protection institutes, thus allowing TDM without the possibility of opt-out; while 70-quater allows the extraction of text and data in general,
by anyone, even for profit, but only when its use has not been expressly reserved by the copyright holders (opt-out).
25 Recital 107: "In order to increase transparency on the data used in the pre-training and training phases of AI models for general purposes, including text and data protected by
copyright law, it is appropriate that the suppliers of such models elaborate and make available to the public a sufficiently detailed summary of the contents used for the training of the
AI model for general purposes".
Art. 53, para. 1 letter c) and d): Providers of general purpose AI models "shall implement a policy aimed at complying with Union law on copyright and related rights and,
in particular, at identifying and respecting, including through state-of-the-art technologies, a reservation of rights expressed in accordance with Article 4, paragraph 3, of Directive (EU)
2019/790; they shall draw up and make available to the public a sufficiently detailed summary of the contents used for training the AI model for general purposes, according to a model
provided by the AI Office".
26 Art. 5 TEU, para. 1: "The limits of Union competences are governed by the principle of conferral. The exercise of Union competences is based on the principles of subsidiarity and
proportionality"; para. 2: "Under the principle of conferral, the Union shall act only within the limits of the competences conferred upon it by the Member States in the Treaties to
attain the objectives set out therein. Competences not conferred upon the Union in the Treaties remain with the Member States"; para. 3: "Under the principle of subsidiarity, in areas
which do not fall within its exclusive competence, the Union shall act only if and insofar as the objectives of the proposed action cannot be sufficiently achieved by the Member States, either
at central level or at regional and local level, but can rather, by reason of the scale or effects of the proposed action, be better achieved at Union level".
27 Except as provided for in Articles 67, 82 and 83 and the whole of Chapter 4 of Title V TFEU.
punished with imprisonment from one to five years". It is therefore not in itself illegal to create a deepfake28 with AI:
what is illegal is to transfer, publish or otherwise disseminate it without the consent of the person concerned, causing unjust damage (for this reason,
in fact, the latter's complaint is necessary as a condition for prosecution).
The importance of cybersecurity.
It is also worth highlighting the fundamental role that national law assigns to cybersecurity. The AI ACT
already recognises its crucial role in ensuring that AI systems are resilient, prescribing for this purpose stringent
obligations on providers to adopt cybersecurity measures appropriate to the risks (in particular, with regard to
high-risk AI systems, Article 15).
In this context, Law 132/2025 reaffirms the role of cybersecurity, defining it in Article 3, paragraph 6, as "an essential
precondition" in order to ensure compliance with the rights and principles related to the use and exploitation of AI
systems and models (general purpose), which must be ensured throughout the life cycle of such systems and
"according to a proportional and risk-based approach".
Also relevant are Article 18, which promotes public and private agreements and initiatives, including in partnership,
aimed at enhancing AI as a resource to strengthen Italian cybersecurity, and Article 23, which authorises the
allocation of a fund of €1 billion to support the development of companies operating in the field of AI,
cybersecurity and enabling technologies (such as quantum technologies).
To complete the examination of the Italian Law on artificial intelligence, we must mention the Italian
governance structure envisaged: to ensure compliance with European and national regulations and in execution of
the AI ACT itself, the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN) are
designated, with different powers and competences, assisted by the Bank of Italy, IVASS and Consob, which
retain supervisory powers over the use of high-risk AI systems by financial institutions.
Final remarks.
The Italian Law on Artificial Intelligence, although it cannot be described as an innovative intervention tout
court, since the main narrative of AI regulation in the EU has already been written by the AI ACT, certainly has
the merit of having fixed firm points at the national level regarding the strategy on artificial intelligence and
of having adapted the sectoral disciplines most affected by the changes brought by the technological
revolution of AI; think, for example, of the important updates to the Criminal Code.
But that is not all. Over the next 12 months, the Government will still have to issue the delegated legislative decrees for a
comprehensive regulation of AI and, for the most part, it will have to do so in view of a
new move by the EU: two draft proposals for the revision of European digital regulations (the so-called Digital Omnibus)
are being examined by the institutions29, which will also concern the AI ACT.
