

The authorities plan to control the spread of fakes using an IT system.

The Main Radio Frequency Center, subordinate to Roskomnadzor, is looking for a developer of an information system that will find on the Internet "facts of the dissemination of socially significant information under the guise of reliable messages." This follows from the organization's materials, which Izvestia has reviewed. The starting price of the tender is 60 million rubles. The new system should help identify negative and insulting publications and public disinformation campaigns, assess the degree of influence of fakes on the audience, and so on. Algorithms for detecting primitive fakes already exist on the market, but such solutions still require a human moderator, experts pointed out. Cleverly presented false information, for example involving the manipulation of statistical data, can still be uncovered only by a person, experts admitted.
Multitasking Vepr


The Main Radio Frequency Center (GRFC), subordinate to Roskomnadzor, is looking for a developer of a system for the early detection of "points of information tension" in online media and other network resources. This follows from GRFC materials that Izvestia has reviewed. The starting price of the tender for the creation of the system, code-named "Vepr," is 60 million rubles, according to the documents.

By "points of information tension" (TIN), the customer means "facts of the dissemination of socially significant information under the guise of reliable messages that creates a threat of harm to the life and (or) health of citizens or property, or a threat of mass violation of public order and (or) public safety" - that is, fakes.


The algorithm should work on the basis of mathematical search and optimization models and machine-learning methods, the GRFC requires of potential developers. Vepr should cover resources with a daily audience of at least 1 million people and be able to prioritize TIN by the degree of influence on their visitors, as well as assess the possibility of a fake turning into a threat to information security. The system's tasks also include "identification of negativity and insults in relation to a given subject and object composition" within certain topics, detection of the dissemination of false information on these topics, and identification of purposefully conducted information campaigns.
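The tender does not publish Vepr's actual scoring model, so the following is only an illustrative sketch of the prioritization idea described above: rank candidate "points of information tension" by audience reach, gated by the tender's 1-million-visitor threshold. The `fake_score` field, the weighting formula, and the `TensionPoint` type are all hypothetical.

```python
# Illustrative sketch only: the tender materials do not describe Vepr's
# real model. We assume each flagged item carries the daily audience of
# its hosting resource and a fake-probability score from some unspecified
# ML classifier; priority is reach-weighted score.
from dataclasses import dataclass

@dataclass
class TensionPoint:
    url: str
    daily_audience: int   # visitors per day of the hosting resource
    fake_score: float     # 0..1, output of a hypothetical classifier

def priority(tp: TensionPoint, audience_floor: int = 1_000_000) -> float:
    """Resources below the tender's 1M daily-audience threshold are
    ignored; otherwise weight the classifier score by relative reach."""
    if tp.daily_audience < audience_floor:
        return 0.0
    return tp.fake_score * (tp.daily_audience / audience_floor)

items = [
    TensionPoint("https://example.org/a", 5_000_000, 0.9),
    TensionPoint("https://example.org/b", 800_000, 0.99),  # below threshold
]
ranked = sorted(items, key=priority, reverse=True)
```

Under these assumptions, the high-reach item outranks the higher-scoring but small-audience one, which matches the tender's emphasis on degree of influence rather than raw classifier confidence.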
Vepr is not the only IT system the Roskomnadzor-subordinate enterprise has commissioned to automate the detection of illegal content on the Web. For example, as Izvestia wrote, in June of this year the GRFC announced a tender for the creation of the Oculus information system, which will use artificial intelligence to identify images and video materials that violate Russian law. According to the terms of reference, the system must analyze at least 200 thousand images per day with an error rate of no more than 20%.

Algorithms should identify, among other things, materials with signs of terrorism and extremism, calls for riots, insults against Russian state symbols and authorities.
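As a back-of-the-envelope check of the Oculus figures quoted above (the 200,000-images-per-day and 20%-error requirements are from the terms of reference; the arithmetic below is ours):

```python
# Sanity-check the Oculus terms of reference: 200,000 images per day
# implies a fairly modest sustained classification rate, while the 20%
# error ceiling still permits tens of thousands of misclassifications.
IMAGES_PER_DAY = 200_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

rate_per_second = IMAGES_PER_DAY / SECONDS_PER_DAY
print(f"{rate_per_second:.2f} images/second sustained")  # ~2.31

max_errors = int(IMAGES_PER_DAY * 0.20)
print(f"up to {max_errors} misclassified images/day")  # 40000
```

In other words, the throughput requirement is easy to meet; the practical constraint is the error budget, which at 20% still allows up to 40,000 wrong calls per day.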
There are no systems in the world capable of fully autonomously detecting fakes on social networks and the Internet, said Karen Kazaryan, an expert at the Russian Association for Electronic Communications (RAEC). Twitter and Facebook (the social network belongs to Meta, recognized as extremist in the Russian Federation) can only flag publications that potentially violate their rules for review by people, he said. Fakes are largely subjective, so automating their detection is extremely difficult, but the new system could serve as an auxiliary tool to reduce moderators' workload, the expert believes.



Monitoring with a human face

It is not difficult to organize monitoring of fakes; there are plenty of products for this, said Igor Bederov, head of the information and analytical research department at T.Hunter. The announced project budget of 60 million rubles is even somewhat excessive, he argues. But difficulties arise in deciding what information should be considered reliable: there are no established methods or techniques for this.

  • Except for the most elementary cases: it is possible to identify, for example, when some people openly insult others, in particular those associated with the state. Such content can be identified where the semantics are clear. But where they are fuzzy, where statistical or other complex information is presented that forces the user to draw his own conclusions about its reliability, artificial intelligence will not be able to help. And it is precisely such publications that pose the greatest danger, the expert noted.
  • Often even the owner of the resource that publishes information cannot assess its reliability, for example when it comes from seemingly trustworthy sources whose statements are difficult to verify, Igor Bederov pointed out.
  • From a technical point of view, creating such a system is a feasible task, said Yaroslav Shitsle, head of the IT & IP Dispute Resolution department at the law firm Rustam Kurmaev and Partners.
  • The idea of the supervisory authorities, it seems, is to automate the analysis of publications in the media and social networks, as well as user comments, for information that violates applicable law.
  • After such publications are identified, two scenarios are possible: either the agency works with the platform to force removal of the content, or cases are initiated against the content's authors, the expert believes.
  • If the project is successful, it can be assumed that owners of Internet resources will also implement such systems in the future, Yaroslav Shitsle did not rule out.
  • Such solutions may be of interest not only to the authorities but also to structures responsible for corporate communications, in particular crisis communications, added Denis Kuskov, CEO of TelecomDaily.


By Treadstone 71
