GOV.SI
Disinformation is false or misleading information spread intentionally by individuals, organisations or states in order to deceive or manipulate individuals and public opinion and achieve specific economic, political or social objectives.

With the growing role of social media platforms and the development of artificial intelligence, the potential for the spread of disinformation is rapidly increasing. Disinformation has the greatest impact in times of crisis, uncertainty and general dissatisfaction, such as during the COVID-19 pandemic, because it operates at an emotional level and influences people's opinions and decisions. Its purpose is to create confusion, provoke anger and fear, and damage the reputation of individuals, organisations, institutions and states.

Disinformation is used to amplify differences in public debate or even to influence political processes. It poses a threat to democracies, as it aims to undermine trust in state institutions and the media, thereby weakening confidence in democracy by hindering people's ability to make decisions based on credible data and information.

What is disinformation?

Disinformation is verifiably false or misleading information that is created, presented and disseminated with the intention of deceiving or securing economic or political gain and that may cause public harm.

The following do not constitute disinformation:

  • Misleading advertising;
  • Reporting errors;
  • Satire and parody;
  • Clearly labelled opinion news and commentary.

Foreign information manipulation and interference (FIMI) refers to attempts from abroad to influence public opinion in a state through false information. In such cases, foreign states or individuals intentionally spread disinformation in order to manipulate political processes, elections or the general social dynamics of a particular state. This involves deliberate, manipulative and coordinated actions by states or their proxies, carried out within the territory of their home state or outside it.

FIMI poses a threat to national security and is considered a hybrid threat. Hybrid threats combine activities carried out in a coordinated manner by state and non-state actors, often blending conventional and unconventional methods, while remaining below the threshold for the formal declaration of war. Their aim is not only to cause direct harm and exploit vulnerabilities, but also to destabilise society and create ambiguity that obstructs decision-making. They can create complex security challenges and therefore require a comprehensive approach to defence and response.

Examples of hybrid threats:

  • Information manipulation (disinformation campaigns aimed at creating and increasing divisions within society),
  • Cyberattacks (sabotage of critical infrastructure, for example electricity distribution operators, cyberattacks on hospital information systems),
  • Economic influence or coercion (the exploitation of Europe's dependence on Russian oil and gas, threats to cut off gas supply to Europe),
  • Covert political manoeuvring (bribery of politicians, directing refugee and migration flows),
  • Coercive diplomacy (the cancellation of free trade agreements between states or other bilateral agreements, the suspension of visa-free transit for citizens of certain countries),
  • Threats of military force (the use of paramilitary units, military exercises by one state near the border of another).

Misinformation is unintentionally incorrect information that someone shares in good faith without harmful intent, but which may nevertheless cause harm.

Malinformation refers to information that is based on facts but taken out of context and presented in a one-sided way. As a result, it is misleading and therefore potentially harmful.

Unlike disinformation, misinformation and information taken out of context are not spread with the primary purpose of intentional deception.

Examples:

  • Sensationalist headlines designed to encourage clicks
  • Satire or parody
  • A hoax

Deepfake

The term deepfake refers both to a technology that enables the creation of convincing yet entirely fabricated images or videos of events that never actually took place, and to the resulting content itself.

The English term deepfake is a compound of deep learning and fake. The technology began appearing online in 2017 and is developing rapidly; it can be seen as the 21st-century equivalent of Photoshop. Deepfake technology allows images and videos to be manipulated to such an extent that the alterations are almost impossible to detect with the naked eye.

Deepfake technology allows anyone to create images or videos by combining existing footage so that a person appears to say or do things they never actually said or did. In many cases, all that is needed is a moderately powerful personal computer.


How deepfakes can affect people

In today's society, most people obtain information about the world and form opinions based on content found online. Therefore, anyone capable of creating deepfakes can publish false information and influence large audiences in ways that support their personal plans or objectives. Disinformation based on deepfakes can cause significant harm to victims – whether individuals, institutions, organisations or states – on a small, medium or large scale.

On a small or medium scale, fake videos purportedly showing a friend or relative asking for a large sum of money in an emergency can be used to deceive unsuspecting victims into sending money. The vast majority of deepfake images and videos exploit images of women to publicly humiliate or discredit them, and images of children to produce and distribute child sexual abuse material.

On a large scale, fake videos of world leaders making invented statements could provoke unrest, violence and even war.

How to recognise deepfakes

The existence of images and videos created using deepfake technology does not mean that no image or video can be trusted. It is important to recognise that deepfake technology will most likely continue to develop and become even more widespread in the coming years. For this reason, it is necessary to remain vigilant and critical when searching for information online, particularly when it comes to images or videos that ask you to send money to a particular account, disclose personal data, show explicit footage involving people you know, or depict well-known individuals making unusual or extreme claims.

How the EU counters disinformation

The European Union is combating the spread of disinformation in order to protect its values and democracy.

Slovenia has been actively involved in the Rapid Alert System (RAS) for disinformation since its establishment in March 2019, ahead of the European elections. The RAS operates under the auspices of the European External Action Service (EEAS) and is a key component of the European Union's comprehensive approach to tackling disinformation.

The aim of the RAS is to:

  • Provide real-time alerts about FIMI and disinformation threats;
  • Enable easier and faster exchange of data and assessments on FIMI and disinformation campaigns between the EU and its Member States;
  • Strengthen cooperation and coordinated responses to FIMI and disinformation;
  • Raise awareness and empower citizens by promoting media literacy and supporting independent fact-checkers.

The European Democracy Action Plan (EDAP) was adopted in 2020. It focuses on three areas:

  • Protecting the integrity of elections and promoting democratic participation;
  • Strengthening media freedom and media pluralism;
  • Countering disinformation.

The EDAP also provides for guidelines on the obligations and responsibilities of online platforms in the fight against disinformation.

The Action Plan against Disinformation aims to strengthen the EU's capacity and cooperation in tackling disinformation.

The Digital Services Act (DSA) regulates the operation of online intermediaries and platforms such as online marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. Its main objective is to prevent illegal and harmful activities online and limit the spread of disinformation. It ensures user safety, protects fundamental rights, and creates a fair and open environment for online platforms.

The European Media Freedom Act (EMFA) introduces new rules designed to better protect editorial independence and media pluralism, ensure transparency and fairness, and improve cooperation between media authorities through the new European Board for Media Services. The Act includes the most extensive safeguards to date to enable journalists to work freely and safely. The new rules will also make it easier for both public and private media to operate across borders within the EU internal market without undue pressure, while taking into account the digital transformation of the media landscape.

The Artificial Intelligence Act establishes a comprehensive framework for regulating artificial intelligence (AI) in the EU. Its aim is to ensure the safety of products and services involving AI, compliance with existing legislation, legal certainty and predictability, stronger governance, effective enforcement and supervision, and the development of a single market for lawful, safe and trustworthy AI solutions.

The 2018 Code of Practice on Disinformation was the first example worldwide of industry actors voluntarily agreeing on self-regulatory standards to combat disinformation.

The Strengthened Code of Practice on Disinformation was signed on 16 June 2022 and currently has 44 signatories, including Google, Meta, Microsoft, TikTok, Vimeo and Adobe.