The government agency known for developing wild technology like a penny-sized vacuum, enigmatic sky balloons, and a shit-ton of robots is now tasked with solving perhaps the greatest threat to democracy—misinformation.
According to the SemaFor announcement, the program aims to develop a suite of algorithms to analyze these coordinated attacks. The text specifically details three types: a semantic detection algorithm that would determine whether media has been generated or manipulated, an attribution algorithm that would assess whether media came from a particular organization or person, and a characterization algorithm that would determine whether media was created or manipulated with malicious intent.
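For illustration only, the three algorithm types could plausibly be wired together as stages in a single pipeline, with detection and attribution feeding the intent call. Everything below is a placeholder sketch; none of these function names, heuristics, or signals come from DARPA or the actual SemaFor proposal:

```python
from dataclasses import dataclass

@dataclass
class MediaAnalysis:
    is_synthetic: bool      # semantic detection: generated or manipulated?
    attributed_source: str  # attribution: which organization or person?
    malicious_intent: bool  # characterization: manipulated with bad intent?

def detect(media: bytes) -> bool:
    """Semantic detection stub. A real detector would score statistical
    or visual inconsistencies; this toy version just matches a keyword."""
    return b"deepfake" in media

def attribute(media: bytes) -> str:
    """Attribution stub. A real system would match stylistic fingerprints
    against known sources; this one always punts."""
    return "unknown"

def characterize(is_synthetic: bool, source: str) -> bool:
    """Characterization stub: infer intent from the earlier signals.
    Here, synthetic media from an unknown source is flagged as malicious."""
    return is_synthetic and source == "unknown"

def analyze(media: bytes) -> MediaAnalysis:
    """Run the three hypothetical stages in sequence."""
    synthetic = detect(media)
    source = attribute(media)
    return MediaAnalysis(synthetic, source, characterize(synthetic, source))
```

The point of the sketch is the dependency structure, not the logic: characterization consumes the outputs of detection and attribution, which is one reason the three problems are harder to solve in isolation than together.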
DARPA’s call for proposals makes it clear that the agency isn’t looking for new spins on old tricks. It wants research that explores a new approach to defending against misinformation, not, as it states, “evolutionary improvements to the existing state of practice.”
And while an automated model sounds nice in theory, in practice these kinds of algorithmic systems have so far proven flawed and biased, and in more disturbing cases, outright discriminatory. Existing applications do not inspire much faith in a near-future system that would be both effective and just.
It’s not inherently bad that the government wants to funnel resources into developing a unique system to prevent the types of coordinated attacks that have enabled the likes of election interference, dangerous conspiracy theories, and genocide. But it’s a bit strange that the agency best known for its largely inapplicable, pipe-dream technology is the one charged with figuring out an essential, albeit complex, solution to an increasingly pervasive societal problem.