
Roskomnadzor and the Main Radio Frequency Center, together with the analytical center Mindsmith and Rostelecom, have published the study “AI Tools in the Hands of Malefactors: Classification of Threats and Countermeasures”. The study identifies 12 key groups of artificial intelligence (AI) technologies that should be used to detect dangerous online content and neutralize the threats it poses.
The identified technology groups:
- Deepfake detection
- Determining the context of events in a video
- Automation of content monitoring and moderation
- Face recognition
- Extracting meaning from text
- Fact-checking support
- Symbol recognition
- Metadata extraction and analysis
- Emotion recognition
- Decision support during information attacks
- Content generation
- Content recommendation
The study was based on an analysis of scientific publications, patents, and investment projects – more than three thousand materials in total.
Despite significant differences in the technical features of the solutions in each cluster, the research confirmed or uncovered a number of universal findings:
1. Russia has the expertise to create domestic datasets and models, but lacks computing power, infrastructure, and cooperation between key stakeholders. The situation is complicated by the fact that production of high-quality, powerful computing equipment is currently underdeveloped in Russia. Even so, Russia has some extremely high-quality solutions, especially in face recognition and in countering information attacks.
2. China and the United States lead by a wide margin in a significant share of the clusters. Although the two countries differ markedly both in their research-and-development priorities and in their tactical approaches, cooperation between government and commercial organizations is well developed in both.
3. Given the automation of information warfare and the development of generative models, it will be extremely difficult to ensure the cognitive security of the population without deploying artificial intelligence.
4. The availability and deployment of domestic models, as well as the use of domestic datasets, is a matter of national security: foreign actors can export artificial intelligence that remains under their control even after it has been transferred to the client.
5. Developing procedures for testing and evaluating models is an important infrastructural task that will allow the state to stay abreast of technological development and to track promising developers and projects.
6. A significant proportion of models are “black boxes”: because of their huge number of parameters, it is extremely difficult to determine exactly how they reach their decisions. This poses a threat when foreign models are introduced, when foreign datasets are used, and when low-quality models are developed.
7. Regulators in most countries have not kept pace with technological development. This applies to a large proportion of the clusters, from generative algorithms, whose regulation has not yet reached the required level of detail, to the datasets themselves, whose compilation and use in many countries remain at the discretion of the developers.
8. Artificial intelligence has vastly outstripped humans in the volume and speed of data processing, in recognizing subtle patterns, and in matching those patterns against large amounts of information. At the same time, it is still far from a real understanding of context and cultural nuance, a weakness that attackers frequently exploit.
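The last point can be illustrated with a toy sketch (hypothetical, not from the study): a naive keyword-based moderation filter matches surface patterns well but has no grasp of context, so it flags a benign sentence containing a trigger word while passing a genuinely suspicious one phrased without any.

```python
# Toy illustration (hypothetical example, not the study's method): a keyword
# filter recognizes surface patterns but ignores context entirely.
BLOCKLIST = {"attack", "bomb"}

def naive_filter(text: str) -> bool:
    """Return True if the text is flagged as dangerous by keyword match."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# A benign chess commentary is flagged because it contains a trigger word...
print(naive_filter("The chess player launched a brilliant attack on the kingside."))
# ...while a message with no trigger words passes, whatever its intent.
print(naive_filter("Meet at the usual place and bring the package."))
```

Context-aware models reduce exactly this gap, which is why the study's clusters pair pattern detectors with technologies for extracting meaning and determining context.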
