Sensemaking in the Era of Authoritarian Media
Truth-tracking is a primary check on power. Let's get busy building.
In a recent article on media-made egregores, I wrote that the “key to ultimate victory will be to develop truth-tracking technologies capable of arresting media mind viruses before they do too much damage.” The following is my effort to articulate what a truth-tracking technology could/should do.
We’re living in an information Hall of Mirrors. That means everything we consume — whether articles, memes, videos, or the nightly news — should invite our suspicion. It’s hard to tell what’s true anymore. Ordinary people have not only lost faith in “the institutions,” they have lost faith in the media as a particular kind of institution.
According to Martin Gurri, writing at Discourse, this is the faith we lost:
“By distributing objective reports to the public, they broke through the fog of government secrecy and political propaganda to hold elected officials accountable. Journalists spoke truth to power and exposed corruption at the top. The Watergate scandal and downfall of Richard Nixon were taken to be mathematical proof of this proposition—in the Hollywood version of All the President’s Men, investigative reporters assumed the guise of film noir detectives searching for truth in a dangerous and deceitful world. Absent such fearless light-bearers, democracy, we were told, would die in darkness.”
It’s hard to find the light-bearers anymore, if they ever existed.
Journalism has devolved from being a truth-tracking enterprise into a mechanism of behavioral control by powerful elites. (Glenn Greenwald offers evidence of just how authoritarian this nexus has become.) Call that nexus of political elites and journalism the Authoritarian Media. The trouble is, those elites didn’t plan on so many people losing faith so quickly. That loss of faith is a good thing up to a point, but we still need a better way to make sense of the world.
Creatures of the Information Ecosystem
In other words, we need to bring a healthy skepticism to the many claims we encounter every day. After all, most of those claims are going to be uttered or amplified by one of the following ten types of media creatures:
Truth Trackers. They seek to make sense of the world as it is, even if that goes against their party or priors.
Political Authorities. They need you to think x so that they can do y and you will do z.
Special Interests. They benefit somehow from the actions of Political Authorities.
Tribal Partisans. They believe whatever their political tribe believes.
Group Thinkers. They blindly accept some narrative because everyone else seems to.
Clickbait Capitalists. They make money off of the fact that you click.
Fact Checkers. They masquerade as Truth Trackers to reinforce a preferred narrative.
Meme Warriors. They want their narratives to be simple, replicable, and dominant.
Virtue Signalers. They’re cousins to Meme Warriors but are motivated by seeming good.
Meta Mongers. They speak in abstract or lofty language to appear intelligent or ‘nuanced.’
Let’s assume, quite generously, that twenty percent of the information that originates from all of the above types tracks the truth.
“Truth tracking” is my phrase for approximating truth through limited epistemic means. We are, after all, subjective creatures, clouded by biases and mediated by screens. No matter what system we build, we cannot stand outside ourselves to take on some God’s-eye view. Indeed, even though one of the ten types is a “Truth Tracker,” that doesn’t mean Truth Trackers always track the truth; it only means they are motivated to do so. Likewise, though the remaining types have other motivations, they sometimes track the truth. Consumers of information just need better tools to figure out the difference, at least when it counts.
So the idea is to create a system that aligns incentives for truth tracking.
Elements of a Better Sensemaking System
Sensemaking in the twenty-first century needs an upgrade. That is, we need to be able to escape this Hall of Mirrors, but we also need a system for separating out what is more likely to be valuable, actionable information. Otherwise, we will unwittingly serve ends we had no intention of serving. And we will remain in darkness.
So what are the ingredients for a superior sensemaking system? In other words, what features can be developed such that any given information consumer is better able to track the truth?
Imagine a decentralized, participatory system with objectors, supporters, checkers, and bettors, all of whom are evaluating claims made by someone. Claimants are usually targets of the sensemaking system, so while their claims are the most visible, they don’t have to be active participants in the truth-tracking process. Everyone else has to stake something, enough that it stings to lose.
Consider the following system elements:
1.) Has Immutability and Censorship Resistance
Truth or falsity won’t do anyone any good if pieces of the sensemaking puzzle disappear down the memory hole. So, whatever system gets developed will have to ensure that claims, evidence, and other interactions with the system are tracked on some sort of distributed ledger or censorship-resistant system. A distributed ledger keeps authorities from reaching for their big erasers, as they now do through Big Tech.
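To make the idea concrete, here is a toy sketch in Python, not a real distributed ledger: a hash-chained, append-only log in which every entry commits to the one before it, so quietly editing or deleting a record breaks every later hash. All the names are my illustrations.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

class AppendOnlyLog:
    """Toy hash-chained log: tampering with any entry breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"prev": prev, "time": time.time(), "payload": payload}
        record["hash"] = record_hash(record)  # hash over prev + time + payload
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Replay the chain; any erased or altered entry fails the check."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev or record_hash(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AppendOnlyLog()
log.append({"type": "claim", "text": "Example claim", "claimant": "user_1"})
assert log.verify()
```

A real system would replicate this log across many independent nodes so that no single authority could rewrite it; the sketch only demonstrates the tamper-evidence.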
2.) Allows for Hypermediation
Whereas some technologies disintermediate (remove middlemen from the equation), other technologies can hypermediate, introducing more participants in some process — which, in this case, is adjudicating claims. Besides the target claimants, we can imagine four other types of users in this system:
Objectors, who launch objections to the claim using the Objection Criteria below;
Supporters, who answer the Objectors using supporting evidence;
Checkers, who stake on the work of objectors or supporters; and
Bettors, who simply bet for or against the original claim.
All of these user types must stake something to participate. (See Puts Skin in the Game.)
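As a minimal sketch of how these roles might look as data, assuming a flat minimum stake (the names, fields, and amounts are my illustrations, not a spec):

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    OBJECTOR = "objector"    # disputes a claim via the Objection Criteria
    SUPPORTER = "supporter"  # answers objectors with supporting evidence
    CHECKER = "checker"      # stakes on the work of objectors or supporters
    BETTOR = "bettor"        # bets for or against the original claim

@dataclass
class Action:
    user_id: str
    role: Role
    claim_id: str
    stake: float   # skin in the game: required for every action
    payload: dict  # objection text, evidence references, or bet direction

MIN_STAKE = 10.0  # illustrative threshold: enough that losing it stings

def submit(action: Action, log: list) -> None:
    """Reject unstaked participation; record everything else."""
    if action.stake < MIN_STAKE:
        raise ValueError("stake too small to participate")
    log.append(action)
```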
3.) Includes Provenance for Evidence
Evaluating the truth of any given claim requires evidence. Whether it’s countervailing evidence or supporting evidence, users need some assurance that evidence was properly gathered, properly tracked or recorded, and can reveal a chain of custody. Such a feature reduces the likelihood that someone has corrupted or faked the evidence. (In the future, the best chains-of-provenance will be traceable back to sources, whether research instruments, smartphone cameras, or other devices where any given function gets logged using an associated cryptographic hash. But this will have to coevolve with sensemaking systems.)
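One hedged way to picture a chain of custody: each device or handler appends a hash that commits to every prior link. The identifiers below are hypothetical.

```python
import hashlib

def link_hash(prev_hash: str, handler: str, note: str) -> str:
    """Each custody link commits to everything before it."""
    return hashlib.sha256(f"{prev_hash}|{handler}|{note}".encode()).hexdigest()

# Origin: the capturing device hashes the raw evidence itself.
raw_evidence = b"...image bytes from a smartphone camera..."
chain = [("smartphone_camera", hashlib.sha256(raw_evidence).hexdigest())]

# Each handler that touches the evidence extends the chain.
for handler, note in [("lab_instrument", "metadata extracted"),
                      ("archive_node", "stored and indexed")]:
    chain.append((handler, link_hash(chain[-1][1], handler, note)))

# A verifier replays the chain from the raw bytes and the logged notes;
# any substitution along the way produces a mismatched downstream hash.
```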
4.) Tracks User Reputation and History
Major contributors to the process, whether institutional or individual, should have transparent histories associated with their contributions and reputation. If someone has no reputation or history in the system, it’s reasonable to treat their contributions with greater suspicion. Someone who dealt in falsehoods yesterday can tell the truth today. But as humans apply Bayesian thinking, it can be helpful to know whether some participant has been a trustworthy source up to this point in time.
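The Bayesian thinking mentioned above can be made concrete with a standard Beta-distribution model of a track record: each vindicated contribution nudges the trust estimate up, each falsified one nudges it down. The uniform prior below is an assumption, not a recommendation.

```python
class Reputation:
    """Beta(alpha, beta) model of how often a participant tracks truth."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # prior count of vindicated contributions
        self.beta = beta    # prior count of falsified contributions

    def update(self, vindicated: bool) -> None:
        if vindicated:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        """Posterior mean: expected probability the next contribution is sound."""
        return self.alpha / (self.alpha + self.beta)

r = Reputation()
for outcome in [True, True, False, True]:  # 3 vindicated, 1 falsified
    r.update(outcome)
print(round(r.trust, 2))  # 0.67 under the uniform prior
```

A newcomer with no history sits at 0.5, which matches the intuition that contributions from unknown participants deserve greater suspicion.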
5.) Puts Skin in the Game
To the extent possible, every participant should have skin in the game. That means everyone should get a reward for being closer to “right” but lose something for being closer to “wrong.” There might even be a way to get a bigger reward for making a significant contribution, such as substantial evidence. One has to be careful when designing incentive systems, but we can learn something from prediction markets. Again, the idea is that the system should reward truth tracking.
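One lesson from prediction markets is the proper scoring rule, which rewards honest probability reports. A Brier-style (quadratic) payoff is a minimal sketch of the idea; the scaling is arbitrary, and a real design would need far more care:

```python
def brier_payoff(reported_p: float, outcome: bool, stake: float) -> float:
    """Quadratic scoring: biggest gain when confident and right,
    biggest loss when confident and wrong. Returns net payoff."""
    error = (reported_p - (1.0 if outcome else 0.0)) ** 2
    # error ranges over [0, 1]; map it to a payoff in [-stake, +stake]
    return stake * (1.0 - 2.0 * error)

print(brier_payoff(0.9, True, 100.0))   # +98.0: near-certain and right
print(brier_payoff(0.9, False, 100.0))  # -62.0: near-certain and wrong
```

Because quadratic scoring is a proper scoring rule, a participant maximizes expected payoff by reporting the probability she actually believes, which is exactly the alignment we want.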
6.) Privileges Significant Claims
Any public sensemaking system has to separate the wheat from the chaff. In other words, we have to have a mechanism for determining which claims are worth evaluating. Because a claim’s importance is subjective, it might be worthwhile to have a staking system in place so that the community can decide which claims should get more eyeballs and which should not. For example, “Fauci lied to Congress” might be more important than “Kim Kardashian was wearing Chanel yesterday.” Otherwise, a voting system such as the Janecek Method might suffice to determine which claims are more prominent.
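In the staking version, prioritization could be as simple as ranking claims by total attention stake. This is only a sketch; plus/minus voting along the lines of the Janecek Method would be a drop-in alternative.

```python
from collections import defaultdict

attention_stakes: dict = defaultdict(float)

def stake_on_claim(claim_id: str, amount: float) -> None:
    """Community members stake to say: this claim deserves eyeballs."""
    attention_stakes[claim_id] += amount

def front_page(n: int = 10) -> list:
    """The n claims with the most staked attention, most-funded first."""
    return sorted(attention_stakes, key=attention_stakes.get, reverse=True)[:n]

stake_on_claim("fauci-lied-to-congress", 500.0)
stake_on_claim("kardashian-wore-chanel", 5.0)
print(front_page())  # ['fauci-lied-to-congress', 'kardashian-wore-chanel']
```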
7.) Includes Objection Criteria (At Least Three Ways to Dispute a Claim)
Imagine the following claim, plucked, let’s say, from an article in The Newspaper of Record.
“There is very little evidence to support the crackpot theory that COVID-19 was engineered in a lab,” said Agency Director Andrew Grauchi.
Consider the following as criteria for disputing a given claim.
Falsification. Introduces countervailing evidence to dispute the original claim.
Insufficient Evidence. Critiques the claim by showing there is a lack of supporting evidence, or that the evidence offered is scant or half-baked.
Obfuscatory Language. Disputes the manner of a claimant’s presentation, for example, if the claimant uses tentative, weaselly, or misleading language.
As basic evaluation criteria for a given objection, maybe the above aren’t sufficient, but I suspect they are necessary.
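Encoded as data, and with hypothetical field names, the three criteria might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class ObjectionType(Enum):
    FALSIFICATION = "falsification"         # countervailing evidence
    INSUFFICIENT_EVIDENCE = "insufficient"  # supporting evidence lacking or scant
    OBFUSCATORY_LANGUAGE = "obfuscatory"    # tentative, weaselly, or misleading

@dataclass
class Objection:
    claim_id: str
    objector_id: str
    kind: ObjectionType
    evidence_refs: list  # provenance-tracked evidence IDs (see element 3)
    stake: float         # objectors put skin in the game (see element 5)
```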
8.) Displays Probabilities
A lot of claims don’t lend themselves to true/false binaries. Even for those that do, intersubjective agreement in a market can be expressed as a likelihood based on the weight of evidence for or against a claim. Therefore, the system should yield probabilities: the likelihood that some claim is accurate, based on the perspectives of the participants and the totality of the evidence they bring. Developers might even determine a range for claims that are deemed ‘indeterminate’ by the community.
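A stake-weighted aggregate is one simple way to turn bets into a displayed probability, with a band for claims the community deems indeterminate. The thresholds below are assumptions for illustration:

```python
def claim_probability(bets: list) -> float:
    """bets: list of (stake, is_for) pairs. Returns a stake-weighted P(true)."""
    total = sum(stake for stake, _ in bets)
    if total == 0:
        return 0.5  # no information yet
    return sum(stake for stake, is_for in bets if is_for) / total

def verdict(p: float, band: float = 0.15) -> str:
    """Claims too close to 50/50 display as indeterminate."""
    if abs(p - 0.5) < band:
        return "indeterminate"
    return "likely true" if p > 0.5 else "likely false"

bets = [(100.0, True), (40.0, False), (60.0, True)]
p = claim_probability(bets)
print(p, verdict(p))  # 0.8 likely true
```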
9.) Discourages GameStop-Style Betting
Remember the GameStop fiasco? In such scenarios, an army of activists gets together and “invests” in a way that runs counter to investor norms, such as evaluating a stock based on the prospect of future returns. These activists coordinate to disrupt other investors or to distort the market for ideological reasons. In this way, you could get too many bets/investments that are not about tracking truth at all, but instead are about:
what participants desperately wish to be true,
what participants think others desperately wish to be true, or
chaos.
The difficulty with this element is that it’s almost impossible to build a system that understands any given participant’s motivations. Indeed, any given participant’s motivations are subjective. How can the system oblige people to track truth and not ideology or chaos or information warfare? One tentative suggestion is to require users, upon registration, to commit to truth-seeking and transparency by signing a pledge to track truth, which can be registered on a blockchain. A user’s reputation can improve if someone else with a good reputation vouches for her, and there must be a severe cost, both to the pledge-breaker and to those who vouched, if a participant is found to have broken the pledge.
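The pledge might be sketched as a commitment hash locked behind a bond, recorded on the same tamper-evident log from element 1. Every mechanism below is hypothetical:

```python
import hashlib

PLEDGE_TEXT = "I commit to truth-seeking and transparency."

def register_pledge(user_id: str, bond: float, registry: dict) -> str:
    """Lock a bond behind a hash committing the user to the pledge."""
    commitment = hashlib.sha256(f"{user_id}|{PLEDGE_TEXT}".encode()).hexdigest()
    registry[user_id] = {"commitment": commitment, "bond": bond}
    return commitment

def slash(user_id: str, registry: dict) -> float:
    """On a proven breach, the bond is forfeited (e.g., paid out to checkers)."""
    return registry.pop(user_id)["bond"]

registry: dict = {}
register_pledge("user_1", bond=250.0, registry=registry)
print(slash("user_1", registry))  # 250.0 forfeited
```

None of this tells the system what a participant actually wants; it only makes bad faith expensive, which may be the most any mechanism can do.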
10.) Encourages Transparency and Participant Vouching
There might be conditions under which pseudonymous or anonymous participation is welcome or needed. For example, the system should leave room for whistleblowers to protect themselves from the powerful. However, encouraging openness and transparency generally provides better information and fewer opportunities for system gaming or sockpuppetry. Pseudonymous participants accrue reputations as well, which has value. And the more one’s reputation improves, the more she has to lose, whether in terms of higher payouts or voting power.
Vouching adds more to reputation, but a bad actor or activist bettor can cause those who vouched for him to lose reputation, too. Organizational consultant Michael Porcelli refers to this sort of mutual accountability subsystem as “recursive reputation.”
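Recursive reputation might work roughly like this: a penalty hits the offender in full and propagates, discounted, to each voucher. The decay factor is an assumption, and the sketch assumes the vouch graph has no cycles:

```python
from collections import defaultdict

reputation: dict = defaultdict(lambda: 100.0)  # everyone starts at a baseline
vouchers: dict = defaultdict(list)             # user -> those who vouched for them

def vouch(voucher: str, vouchee: str) -> None:
    vouchers[vouchee].append(voucher)
    reputation[vouchee] += 10.0  # vouching adds to reputation

def penalize(user: str, amount: float, decay: float = 0.5) -> None:
    """Losses propagate, discounted, up the vouch chain (assumes no cycles)."""
    reputation[user] -= amount
    for v in vouchers[user]:
        penalize(v, amount * decay, decay)

vouch("alice", "bob")
penalize("bob", 40.0)  # bob is caught breaking the pledge
print(reputation["bob"], reputation["alice"])  # 70.0 80.0
```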
The above ten are by no means exhaustive for an effective truth-tracking system. My goal here is to offer a sketch so that keener minds can improve upon the concept.
Some features didn’t make the list but probably should have. For example, to have any given bet settle or for the market to clear, mustn’t there be an end date/time for adjudication? How do discrete claims get categorized, connected, and searched? And can anyone solve the GameStop problem, as thinkers like Curtis Yarvin have warned might not be possible?
Authoritarians Win With Misinformation
One of the world’s most vital resources is good information. Free people need good information to make good decisions, especially as the world grows more complex. Indeed, as the world becomes more complex, people will need to make more decisions in more localized circumstances.
But authoritarians don’t like free people to make their own decisions, which is more or less the definition of being an authoritarian. So they’ll do what they must to preserve their authority, including spreading misinformation or outright lying. Misinformation is a lever of power, particularly where there is less separation between information and state.
That’s why now, more than ever, we need a system that tracks the truth. We may never build the perfect system. But it’s hard to imagine a state of affairs worse than the Hall of Mirrors in which we currently live. Authoritarians are counting on our ignorance, which they will use to justify a new kind of truth: the Truth a politburo expects you to swallow whole, without question, before a boot stamps on your face—forever.
Software developers of the world: Please build us a truth-tracking system. You might just upgrade humanity’s sensemaking for good.
This article originally appeared at AIER.
Hey Max,
This challenge is even tougher than you perhaps give it credit for, although there are solid ideas in your sketch here, including the concept of staking. I covered this issue in my 2018 book The Virtuous Cyborg, and proposed the broad strokes of a system for dealing with this in the light of other known tech-behaviour problems (such as 'reputation bankruptcy', abuses of reputation systems, etc.). However, one problem never goes away, which is distinguishing untestable (metaphysical) claims from claims of fact. In this regard, the utter collapse of academic philosophy has left us up certain waterways without certain implements.
Stay wonderful!
Chris.
This is a tough problem.
Sounds a little bit like a prediction market, except that the events being evaluated have probably already happened and we just need to determine the probability that they are true. To settle bets, decentralized prediction markets need oracles that can reliably report what really happened, and have developed some interesting techniques for incentivizing oracles to tell the truth.
Not exactly what you are talking about, but Rootclaim has an interesting technique for getting to the bottom of disputed events/facts. https://www.rootclaim.com/