On December 17, a training session was held on 'Peculiarities of Spreading Disinformation on Social Networks: The Role of Artificial Intelligence'. The event was organised by the High School of Public Governance in cooperation with the Centre for Strategic Communications and Information Security, with the support of the UK Foreign and Commonwealth Office, Zinc Network, and the Centre for Democracy and Rule of Law. The participants were civil servants and local government officials responsible for strategic communications in their institutions.
The training covered key aspects of disinformation, including how it spreads through social media and how artificial intelligence technologies make it harder to recognise. The main topics included deepfakes, generative text, and synthetic voices – tools that are actively used to manipulate audiences and create fake news.
Olha Yurkova, disinformation expert and co-founder of StopFake, stressed that the spread of deepfakes and synthetic content is becoming a serious challenge for government institutions, which need to respond quickly to information threats. She presented real-life cases in which artificial intelligence was used to create content that misled the public.
Particular attention was paid to methods of recognising deepfakes and synthetic content. In particular, the participants discussed technical tools for identifying fake images and videos, critical thinking skills for analysing information, and principles of communication transparency that help reduce the risk of manipulation. The expert emphasised that civil servants must be prepared to respond quickly to disinformation, especially given the active use of artificial intelligence to spread it.
As a result of the training, participants gained practical knowledge of how to counter information threats effectively. Given how rapidly artificial intelligence technologies are developing, the ability to recognise fake content is critical for government communications.