This one slipped by my awareness, from July 2024: a PDF put out as a Joint Cybersecurity Advisory, authored by a bunch of different alphabet agencies. It describes a Russian state-sponsored software system for managing fake accounts en masse on social media platforms. The overall system is called Meliorator, and one of its components, which I guess is the UI, is called Brigadir:
Brigadir serves as the primary end user interface of Meliorator and functions as the administrator panel. Brigadir serves as the graphical user interface for the Taras application and includes tabs for “souls,” false identities that would create the basis for the bots, and “thoughts,” which are the automated scenarios or actions that could be implemented on behalf of the bots, such as sharing content to social media in the future.
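Just to make the "souls"/"thoughts" split concrete, here's a toy sketch of how that kind of persona-management data model might look. To be clear, this is purely my own illustration of the concept described in the advisory, not anything from Meliorator's actual code; every class and field name here is invented.

```python
from dataclasses import dataclass, field

# Purely illustrative: a toy model of the "souls" (fake identities) and
# "thoughts" (queued automated actions) split described in the advisory.
# All names and fields are my own guesses, not Meliorator's actual schema.

@dataclass
class Soul:
    """A fabricated identity: the persistent persona behind a bot account."""
    handle: str
    bio: str
    location: str

@dataclass
class Thought:
    """An automated scenario: an action queued on behalf of a soul."""
    action: str         # e.g. "post", "share", "like"
    payload: str        # the content involved
    scheduled_for: str  # when to run (ISO date string, in this toy version)

@dataclass
class PersonaManager:
    """Ties souls to the thoughts scheduled for them, admin-panel style."""
    souls: list[Soul] = field(default_factory=list)
    thoughts: dict[str, list[Thought]] = field(default_factory=dict)

    def add_soul(self, soul: Soul) -> None:
        self.souls.append(soul)
        self.thoughts[soul.handle] = []

    def queue_thought(self, handle: str, thought: Thought) -> None:
        self.thoughts[handle].append(thought)

# One operator, many personas, each with its own action queue:
mgr = PersonaManager()
mgr.add_soul(Soul(handle="avg_patriot_42", bio="Just a regular guy", location="Ohio"))
mgr.queue_thought(
    "avg_patriot_42",
    Thought(action="share", payload="some link", scheduled_for="2024-07-10"),
)
print(len(mgr.thoughts["avg_patriot_42"]))  # 1
```

The point of the sketch is just the shape of the thing: identities and scheduled behaviors are decoupled, so one operator can puppet many personas from a single panel.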
This is not the first time I’ve heard of systems like this. I did some pretty detailed work around this in a past life, visible in archived form here. Another more detailed 2017 long-form research piece of mine was published here, based on my looking into more of the actual tactics used by the Internet Research Agency. (I used to have that article hosted on my blog here, but I was often seeing reports from my hosting system that high numbers of Russian IPs were attacking my site, until I took it down and they magically disappeared, mostly.)
That second linked article above tracked some quotes going back to a 2010 US Air Force solicitation for vendors to build a Persona Management System, which has pretty much exactly the same product description as Russia’s Meliorator at its core, as described in the PDF at top.
“Software will allow 10 personas per user, replete with background, history, supporting details, and cyber presences that are technically, culturally and geographacilly [sic] consistent. Individual applications will enable an operator to exercise a number of different online persons from the same workstation and without fear of being discovered by sophisticated adversaries. Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms. The service includes a user friendly application environment to maximize the user’s situational awareness by displaying real-time local information.”
Probably these kinds of ad hoc management systems have existed as long as people have been automating social media systems, which is presumably as long as they have existed. Now, of course, we get to throw AI into the mix and see what happens…
From a May 2024 article about OpenAI’s report of disrupting state actors using its tools for disinformation:
“All of these operations used AI to some degree, but none used it exclusively,” the report stated. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet.”
Same old same old forever and ever.
I wonder when we’re allowed to look at these things through a more neutral lens than that of fixating on misinformation & disinformation, as bad as they can be. Like what if we started calling such endeavors “hyperreality” campaigns, and tried to map them based on more complex sets of criteria? I’ve outlined something to that effect here. Narratologically, they make use of networked narratives and transmedia storytelling, and getting to see all this up close was very much part of how this art project of mine got started. I’m interested in when these kinds of distributed storytelling systems can be open-sourced and become simply another tool in a toolbox of communication and creative expression (aka “art”), instead of something strictly bad or harmful. Maybe one day in a decidedly different form…
My thinking has always been: if everyone had a botnet, then the power of them would at least be widely distributed instead of concentrated in the hands of a few. People talk about teaching kids media literacy, but I never hear anyone saying we should teach them how to build botnets. Part of me wonders if the future we’re heading towards might require them to have that kind of deep inside knowledge in order to counter other forces using those same techniques to push their own dominator hyperreality narratives. Just like they might need the skills and knowledge to be able to deter drone swarms in physical space.