
By most measures, today’s media-literacy boom has been a public good. Charts from Ad Fontes, ratings from AllSides and Media Bias/Fact Check, “nutrition labels” from NewsGuard, and “blindspot” dashboards from Ground News give ordinary readers quick heuristics for what’s trustworthy and how coverage breaks across left–right lines. In a chaotic information environment, that’s helpful. But these tools also flatten the very thing they’re trying to measure. Bias is not just a point on a horizontal spectrum—often it’s embedded in what gets covered, who gets quoted, and how complexity is collapsed into a single line of copy. When rating services only score overt partisanship and headline-level reliability, they risk missing the blind spots that most shape public understanding.
A recent essay in the Milwaukee Independent makes a similar point: rating platforms intended to counter spin can end up penalizing outlets that refuse false equivalence, confusing “moral clarity” with “partisan bias.” That critique should ring a bell for anyone who’s ever read a nuanced beat story reduced to a pin on a bias chart.
Case Study: The AP, a Temple, and the Meaning of “Bigger”
Consider Associated Press coverage of The Church of Jesus Christ of Latter-day Saints’ Lone Mountain Nevada Temple in Las Vegas. An AP dispatch about temple growth asserted that the Lone Mountain temple would be “larger in size than the Notre Dame Cathedral in Paris,” with a steeple nearly 200 feet tall. The phrase “larger in size” landed with neighbors—and readers—like a bomb. Larger than Notre Dame? The problem is that the temple is about one-third the size of Notre Dame and one hundred feet shorter. The error comes from a misreading of square footage, likely because a multi-story building can total more floor area than a cathedral’s single vast nave while remaining a far smaller structure. That’s framing bias, not partisan bias—and you won’t find a category for it on most ratings sites.
Case Study: What Wasn’t Said at General Conference
In 2024, AP ran a story on the conference of The Church of Jesus Christ of Latter-day Saints under the headline “Latter-day Saints leader addresses congregants without a word on racial or LGBTQ+ issues.” That piece treated omission—what didn’t happen—as the news. That isn’t a left-right bias, but it is quite obviously a bias nonetheless.
The author, Hannah Schoenbaum, has no background in religion reporting; she covers government, politics, and LGBTQ+ rights. She was also the reporter behind the inaccurate Las Vegas temple coverage. Six months later, she was still on the same beat, and her conference coverage still focused mostly on political angles. Despite these two incidents, AP still assigned Schoenbaum the same story for the most recent conference.
The bias here isn’t a partisan one; it’s a worldview one. When you assign a politics and LGBTQ+ rights reporter to religious reporting, you get only the stories that fit the reporter’s narrow lens. The headline imports the author’s opinion about what should have been discussed into a story that was in fact about something entirely different. The headline “Latter-day Saints leader addresses congregants without a word on environmental issues in Asia” is equally accurate, but conveys an entirely different story.
Case Study: Larger than Life Abuse Findings
Similarly, the AP had investigative reporter Michael Rezendes devote significant resources to sex abuse cases within the Church of Jesus Christ.
Rezendes received a Pulitzer Prize for his reporting on the sex abuse scandals inside the Catholic Church: a systemic pattern in which offending priests were known, covered up for, and moved to new dioceses to continue causing harm.
Rezendes’ selection for the assignment communicates certain ideas to readers: that the Church of Jesus Christ has a sex abuse problem, and that it is a problem of significant scale reflecting a serious institutional failure.
But what Rezendes actually found over the course of several years was that there are some Latter-day Saints who commit sexual abuse (he found three stories), including some of our leaders. They are excommunicated when they are discovered. The Church has a helpline so that local leaders know how to follow complicated disclosure laws. And the Church also tries to provide financial restitution to the victims.
It’s a tragic story, but one about the inevitable tragedy of human frailty rather than institutional cover-ups.
But by writing long features for stories that would normally be reserved for the page-seven crime beat, the AP signaled that this was news worth paying attention to, implying a nefariousness, pervasiveness, or culpability that doesn’t in fact exist in any of the reported cases.
The lasting impression left with many readers was of a sweeping institutional cover-up, even though the stories were ultimately about distinct criminal acts by individuals. That’s a classic scale problem: to what extent does a set of horrific cases justify institutional generalization? Bias checkers don’t score how disciplined news outlets are in attributing scale—but it’s central to how audiences come away thinking about an institution.
And the effects of this bias are serious. The best available evidence suggests that Latter-day Saints commit sexual abuse at rates significantly lower than those of many other faiths or the general population. Our protective factors should be a lesson to others. Instead, a recent YouGov survey found more people believe abuse is a “very big problem” in the Church of Jesus Christ than in Southern Baptist churches, despite the fact that the Southern Baptist churches have been embroiled in a systemic controversy over covering up sexual abuse that dwarfs in severity the problems in the Church of Jesus Christ.
Is that unfortunate misunderstanding a result of the editorial choices of the Associated Press? Do Americans know less about sexual abuse and where kids are safest because of the Associated Press’ coverage? It’s certainly possible, but it’s not a kind of bias you would be able to identify in the media literacy tools currently available.
The Bias You’re More Likely to Encounter: Access and Sourcing
Here’s a quieter example. I recently had a wonderful experience with Maggie Penman of The Washington Post. Penman runs “The Optimist,” a column about positive things in the world.
After the Michigan attack on an LDS chapel, Penman ran a feature about Latter-day Saints raising money for the attacker’s family—an act of grace that surprised many readers. It was a beautiful and generous story. That is why I was surprised to find, at the end of the article, a quote from a religion scholar attacking Latter-day Saints over a doctrinal disagreement. For those within the Latter-day Saint sphere, an attack from this commentator, a frequent critic, is unsurprising. What was surprising was that he was included at all.
What the Checkers Miss
Most popular rating systems do some things well: They reward corrections, penalize serial fabricators, and map partisan lean. However, several endemic newsroom behaviors, including those discussed above, fall outside their frameworks.
None of these is chiefly about “left vs. right.” They’re about habits, networks, and time.
My intention here is not to call out the media checkers. These are still emerging projects, and they have done incredible work, shining light on real issues and helping to improve media literacy. My hope is to encourage that work. As these projects continue to grow, here are a few practical metrics they might track that could add to our understanding of media bias:
- Source Diversity Index: Track whether coverage of a community consistently quotes the same one or two academics/activists, or shows range (rank-and-file members, leaders, critics, independent scholars).
- Correction Transparency & Latency: Not just “did they correct,” but how long did it take, and was the core ambiguity addressed?
- Scale Discipline Score: When a story makes institutional claims from individual cases, does it disclose sample size, scope, and limits?
- Beat Maturity Indicator: Tag when a reporter is new to a complex beat and flag when framing changes as literacy improves.
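To make the first of these suggestions concrete, here is a minimal sketch of how a Source Diversity Index might be operationalized, using normalized Shannon entropy over quote attributions. The function name and the sample data are hypothetical illustrations, not any checker’s actual methodology:

```python
from collections import Counter
from math import log

def source_diversity_index(quoted_sources):
    """Normalized Shannon entropy of quote attributions.

    Returns 0.0 when every quote comes from a single source,
    and 1.0 when quotes are spread evenly across many sources.
    """
    counts = Counter(quoted_sources)
    total = sum(counts.values())
    if len(counts) < 2:
        return 0.0  # one voice (or none): no diversity to measure
    entropy = -sum((c / total) * log(c / total) for c in counts.values())
    return entropy / log(len(counts))  # normalize to [0, 1]

# Hypothetical coverage of one community by one outlet:
narrow = ["Scholar A", "Scholar A", "Scholar A"]            # same critic, thrice
broad = ["Scholar A", "Member B", "Leader C", "Scholar D"]  # a range of voices

print(round(source_diversity_index(narrow), 2))  # → 0.0
print(round(source_diversity_index(broad), 2))   # → 1.0
```

Run over a year of an outlet’s coverage of a given community, a persistently low score would flag exactly the sourcing habit described above: the same one or two commentators standing in for an entire group.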
Whatever their flaws, bias-checking tools are still better than the invisible curation of our social feeds, which reward engagement over understanding and routinely amplify the most polarizing takes. And they’re certainly better than the reflexive dismissal of all journalism over a monolithic, misunderstood “bias.” We want the checkers to describe the kinds of bias readers actually encounter. That work—however halting—beats a world where the only algorithm that matters is the one designed to keep us scrolling.
