The Council of Europe’s Cybercrime Programme Office (C-PROC) in Bucharest marked its 12th anniversary on April 21st 2026 with a high-level meeting hosted by Romania’s National Institute of Magistracy (INM). Over 110 participants from Romanian authorities, EU bodies, and international organizations discussed cybercrime challenges, electronic evidence, disinformation, and online violence, while advancing cooperation. Since 2014, C-PROC has supported over 2,700 activities across 140+ countries, strengthening criminal justice and the rule of law in cyberspace.
The timing of the event could hardly have been more apt, as Romania, like other EU member states, is facing an upsurge of cyberattacks. Dan Cîmpean, Director of the Romanian National Cybersecurity Directorate (DNSC), warned of a significant increase in daily Distributed Denial of Service (DDoS) attacks targeting public institutions and companies, aimed at disrupting access and eroding public trust in authorities. The wave intensified amid the Middle East conflict between the US and Iran, with many attacks claimed to be linked to it, although motives often mix hybrid warfare, political interests, and financial gain. The pro-Kremlin group Noname057(16) stands among the most active, operating as part of broader state-backed strategies. Unlike past sporadic incidents tied to elections or specific events, the current cyberattacks have become nearly constant across multiple sectors. While no major breaches have been reported so far, the goal behind the attempts remains to undermine Romanian citizens’ confidence in local digital services.
For example, the Romanian Foreign Ministry’s (MAE) sites eviza.mae.ro and econsulat.ro were targeted by DDoS attacks on the afternoon of March 13th and in the early hours of March 14th 2026. The attacks caused temporary slowdowns and short periods of inaccessibility. MAE’s cybersecurity systems and specialists responded promptly, mitigating the impact; no sensitive data was accessed or breached. MAE noted that such DDoS attacks flood a site with traffic to block access rather than break into it, inconveniencing citizens in need of consular services.
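To illustrate the mechanism MAE describes, and one standard countermeasure, the following is a minimal sketch of a token-bucket rate limiter of the kind sites place in front of an application: each client address receives a small budget of requests, so a flood from one source is dropped before it exhausts server capacity while legitimate clients are still served. All names, addresses, and numbers here are illustrative, not taken from any actual deployment.

```python
import time

class TokenBucket:
    """Per-client token-bucket rate limiter (illustrative sketch).

    Each client IP may make up to `rate` requests per second, with
    bursts of at most `capacity`. Requests beyond that budget are
    rejected, which blunts a single-source flood."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.buckets = {}         # ip -> (tokens, last_seen_timestamp)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.capacity, now))
        # Refill tokens for the time elapsed since the last request.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[ip] = (tokens - 1.0, now)
            return True   # request served
        self.buckets[ip] = (tokens, now)
        return False      # request dropped

limiter = TokenBucket(rate=5.0, capacity=10.0)
# A burst of 50 simultaneous requests from one hypothetical attacking IP:
burst = [limiter.allow("203.0.113.7", now=100.0) for _ in range(50)]
print(sum(burst))                               # only the burst capacity (10) gets through
print(limiter.allow("198.51.100.1", now=100.0)) # an unrelated client is still served
```

Real mitigations of the scale described in the article operate at the network edge (scrubbing centers, CDNs) rather than in application code, but the budgeting principle is the same.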
The current cyberattacks also go beyond simpler forms of unlawful interference. The European Commission was hit by at least two major hacks in early 2026 amid a broader EU cyber crisis. In late March, attackers compromised the cloud services powering the europa.eu website; the ShinyHunters group stole 340GB of personal data, including names, emails, and other content, and leaked it on the dark web. Earlier, at the end of January, traces of a cyberattack were found in the central infrastructure managing the mobile devices of European Commission employees, potentially exposing staff names and mobile numbers, though no device compromise occurred.
On March 16th 2026, the EU Council imposed sanctions on three entities and two individuals responsible for cyberattacks against the EU and its member states, marking a firm response amid a deepening bloc-wide hacking crisis that has seen public institutions repeatedly targeted. China-based Integrity Technology Group supplied advanced tools that compromised over 65,000 devices in six member states between 2022 and 2023, while Anxun Information Technology and its two co-founders offered hacking services aimed at critical infrastructure. Separately, Iran’s Emennet Pasargad group breached a French database later sold on the dark web, hijacked Paris 2024 Olympic billboards for disinformation, and compromised a Swedish SMS service. The measures (asset freezes, funding bans, and travel restrictions) bring the EU cyber sanctions regime to 19 individuals and 7 entities.
Beyond standard cybercrimes, the world has also come to know a new form of digital criminal activity, generated or assisted by AI. AI-driven cybercrime is becoming a distinct prosecutorial threat in Romania and Europe because it changes the scale and complexity of offenses that prosecutors already know: fraud, extortion, harassment, identity abuse, child sexual exploitation, data theft, disinformation and public-order manipulation. The novelty is not that AI creates an entirely new criminal universe, but that it allows the automation of cybercrime. Europol’s 2025 Serious and Organised Crime Threat Assessment warns that AI is already being used by criminal networks to create multilingual messages, realistic impersonations, deepfakes, voice cloning and AI-generated child sexual abuse material, making attacks more scalable and harder to detect. Europol also reported one of the first major cases involving fully AI-generated child sexual abuse material, with 25 arrests worldwide in February 2025.
AI-generated phishing and fraud are probably the most immediate threats digital users face. Classic phishing was often identifiable through poor language, generic wording or technical inconsistencies, but generative AI removes many of those signals. A criminal can now produce credible emails in Romanian, English, French or German, imitate corporate tone, refer to real transaction details, and adapt the message to the victim’s professional role. In Romania, this matters especially for business email compromise, payment diversion, banking fraud and fake investment schemes. Romanian criminal law already has tools for handling these types of cases under the Romanian Criminal Code: computer fraud under Article 249, illegal access under Article 360, illegal interception under Article 361, alteration of computer data under Article 362, disruption of systems under Article 363, unauthorized transfer of data under Article 364, illegal operations with devices/programs under Article 365, and computer forgery under Article 325.
Voice and video cloning fraud can be more dangerous because it attacks the trustworthiness of an agency or individual directly. A cloned voice of a CEO, parent, lawyer, banker or public official can be used to request urgent transfers, obtain confidential data, bypass informal verification, or pressure a victim into action. Prosecutors will have to prove not only that a fraud occurred, but also how the cloned material was generated, distributed and relied upon. The evidentiary file may need audio forensics, metadata, platform logs, device seizures, cryptocurrency tracing and cooperation with providers outside Romania, which can be slow and difficult to obtain.
Deepfakes create a second category of harm: reputational, sexual and political abuse. Non-consensual intimate deepfakes, including “nudification” images, are increasingly treated in Europe as a form of cyber-violence. The EU Directive 2024/1385 on combating violence against women and domestic violence requires Member States to criminalize certain forms of cyber-violence, including non-consensual sharing of intimate images, such as deepfakes. Member States have three years to implement it.
AI-generated child sexual abuse material is perhaps the most legally and morally acute issue. Romanian law is already applicable because Article 374 of the Romanian Criminal Code covers child pornography, including material that does not depict a real person but credibly simulates a minor in explicit sexual conduct. This is important: purely synthetic AI-generated child sexual abuse material (CSAM) may still fall within the Romanian definition if it credibly simulates a minor. The prosecutorial problem, however, becomes harder. If there is no direct child victim in the image, prosecutors must still address the social harm: normalization of abuse, use of real children’s faces or identities, grooming, blackmail, and the training or circulation of models specialized in sexualized child-like outputs. The defense may argue that there is “no real victim”, and prosecutors will need to show that the law protects not only identifiable children but also children as a class, and that synthetic CSAM can fuel demand, grooming and abuse ecosystems.
AI-assisted swatting is another emerging risk. Criminals may use cloned voices to make emergency calls, imitate a victim or a witness, and report fake violence, hostage situations or threats. This can trigger armed police responses, endanger innocent people, waste public resources and create panic. In Europe and Romania, this may engage offenses such as false reporting, public order offenses, threats, harassment, misuse of emergency services, and, where digital systems are manipulated, cybercrime provisions. The AI element aggravates the practical difficulty in that emergency responders may hear a convincing voice, caller-ID spoofing may be used, and the real perpetrator may be in another jurisdiction. There is also a risk of AI-generated evidence contamination. Prosecutors will increasingly face fake screenshots, fake audio, fake closed-circuit television (CCTV)-style video, fake chat logs, fake IDs and AI-generated “documents”. These new forms of deception will affect both prosecution and defense teams.
The European Convention on Human Rights (ECHR), the Council of Europe’s cornerstone treaty, addresses the topic through several of its articles. While the Council of Europe strives for the punishment of illegal cyber activity, it also stands as a strong reminder to prosecutors and law enforcement agencies that protecting the rights of individuals is as important as prosecuting criminals.
Article 6 ECHR, covering the right to a fair trial, requires that evidence be challengeable, that expert evidence be intelligible, and that the accused have a real opportunity to contest authenticity, chain of custody and reliability. A conviction based on digital materials whose provenance cannot be properly tested would be vulnerable.
Article 8 ECHR ensures the right to respect for private and family life, home, and correspondence. Cybercrime investigations often require intrusive measures: device searches, biometric identification, facial recognition, scraping of public profiles, platform data requests, geolocation, undercover online operations and analysis of private communications. These tools may be necessary, but they also interfere with private life and personal data. Therefore, state agencies also have to comprehend that their investigations must be lawful, necessary and proportionate to their aim.
Article 10 ECHR, protecting freedom of expression and information, becomes relevant in two opposite ways. First, AI deepfakes can be used to silence journalists, politicians, lawyers, activists or victims through intimidation, reputational attacks or sexualized abuse. Prosecutors must recognize that such crimes may have a chilling effect on speech and public participation. Second, enforcement must not overreach into other lawful forms of expression, such as parody, satire, political criticism or artistic expression. Deepfake regulation must be precise enough to target deception, abuse and harm, without criminalizing legitimate forms of expression.
As an example, in Glukhin v. Russia, the European Court of Human Rights found violations of Articles 8 and 10 ECHR after Russian authorities used facial recognition to identify and arrest a peaceful protester. The Court considered facial recognition highly intrusive and emphasized the need for clear legal limits, safeguards, oversight and proportionality. It also linked surveillance to a chilling effect on expression. For prosecutors, Glukhin is not a ban on technology; it is a warning that investigative tools must be used under a clear legal basis, for legitimate aims, with strong safeguards and with proportionality tied to the seriousness of the offense. Facial recognition used to identify a terrorist suspect is not the same as facial recognition used to identify a protester or minor administrative offender.
While the EU’s Artificial Intelligence Act (AI Act) adds a regulatory layer to AI-related cybercrimes, it does not offer a complete criminal law solution. Article 50 of the AI Act creates transparency obligations for certain AI systems, including obligations to mark synthetic content and deepfakes and to inform users when they interact with AI systems. The Commission has also worked on a Code of Practice for marking and labelling AI-generated content.
For Romania, the key point is that prosecutors do not need to wait for a special law regulating AI-related cybercrimes to act. Many behaviors are already covered by existing offenses. The real gaps are operational: technical expertise, fast preservation of digital evidence, access to platform data, international cooperation, forensic standards for media, and training of judges and prosecutors on the issue at hand. A serious Romanian prosecutorial strategy should therefore include the following: specialized AI/cybercrime training within prosecution offices, rapid cooperation channels with platforms and financial institutions, standard protocols for preserving metadata and original files, access to digital forensic experts, guidance on charging AI-generated CSAM, and clear rules and safeguards for interference with the private life and communications of individuals.
AI does not merely create new cyber offenses; it upgrades old ones and makes them easier to commit. It makes fraud more convincing, sexual abuse more scalable, harassment more anonymous, evidence more contestable, and investigations more intrusive. European and Romanian prosecutors will need to adapt quickly, but they must do so within the ECHR framework. The stronger the technology used by criminals, the stronger the State response may need to be. But under Articles 6, 8 and 10 ECHR, that response must remain lawful, testable, proportionate and rights-compatible.