
Rules of Reason: Making and Evaluating Claims

by Bo Bennett, PhD


Key Takeaways

Evaluate the strength of a claim by its clarity and precision, not by its truth; a clearly stated claim is easier to test and dispute.
Acknowledge the limits of your knowledge and defer to genuine expertise when appropriate; be aware of the Dunning-Kruger effect and the genetic fallacy.
Actively surface and manage biases (motivated reasoning, confirmation bias, courtesy bias, groupthink, overconfidence, halo/pitchfork effects, stereotyping) so they don’t distort claim evaluation.
Disambiguate claims: isolate the explicit or implicit claim, define each relevant term precisely, and specify scope and degree (use numbers/percentages when possible).
Operationalize vague or loaded terms so claims become measurable and investigable; use reductio ad consequentia to test the appropriateness of operational definitions.
Favor falsifiable formulations of claims; unfalsifiable claims are weak because they offer no route to disconfirmation and often fuel pseudoscience or dogma.
Express confidence appropriately: distinguish possibility, plausibility, and probability and avoid vacuous statements like "it is possible that" when possibility hasn’t been demonstrated.
Avoid binary causal claims; prefer phrasing that recognizes multiple contributing factors rather than a single cause.
Treat analogies as claims on a continuum from strong to weak—be explicit about the similarity being claimed, call out weak analogies and false equivalences, and use thresholds for practical decisions.
Unpack and run all relevant assumptions through the same rules: implied claims often hide additional assumptions that must be evaluated independently.

Summary

When Dr. Bennett wrote Logically Fallacious and launched the companion website, he thought he had given people the tools to spot errors in reasoning. What he learned over years of online debate and from thousands of interactions was that spotting fallacies is only part of the work. Identifying errors tells you when an argument is broken, but it does not teach you how to build claims that are strong, testable, and useful. The Preface and opening material of Rules of Reason set out a different project: a practical guide to making and evaluating claims well. These are not laws of logic—immutable truths that cannot be bent—but rules of reason, flexible guidelines that, when followed, reliably produce clearer thinking and better discourse. Think of them like nutrition guidelines for reasoning: adaptable, improvable, and designed to keep you healthy intellectually even as new information arrives.

Bennett defines a claim simply as a statement that something is the case, usually presented without proof. Claims differ from opinions because they purport to state objective facts rather than value judgments. A claim can be true or false independent of who utters it, and its strength is judged not by whether it happens to be true but by how clear, precise, and testable it is. That distinction lies at the heart of this book. A strongly worded claim—“a living unicorn is in my bedroom”—can be a strong claim even if it is almost certainly false, because the claim is clear, specific, and falsifiable. Weak claims hide in vagueness, ambiguity, and rhetorical sleights of hand. They invite wasted energy: countless hours arguing over claims that were never properly stated.

Bennett organizes his rules into three broad phases that mirror the healthy habits of thinking: know thyself, disambiguate, and embrace the continuum. The first phase teaches humility and psychological self-awareness: most errors begin not with logic but with the thinker. Rule one asks you to acknowledge the limits of your knowledge. The internet has empowered many to feel like experts after a few hours of research. The Dunning-Kruger effect explains why this tendency to overestimate competence is so common—without knowing how much there is to know, you cannot gauge how much you actually know. Admitting as much is the first step: when you recognize this bias, you are more likely to defer to genuine experts and less likely to dismiss useful information just because it comes from an unlikely source. The companion error is the genetic fallacy: dismissing a claim purely because of its origin. Bennett cautions that while source credibility is a useful heuristic, it is not decisive. Reliable sources are sometimes wrong, and unreliable ones are sometimes right. If you have the time for careful evaluation, test the claim on its merits first; if you need a quick heuristic, consider the source but don’t blindly rely on it.

Rule two deepens the inventory of self-knowledge: it asks you to explore your biases. Bennett uses his own life as an example: raised Catholic, confronted in high school by an atheist friend, he realized how powerful motivated reasoning is. When what you believe feels existential—when you are convinced that disbelief implies eternal punishment—reasoning becomes less about truth and more about psychological self-defense. Motivated reasoning colors our search for evidence and our interpretation of facts. Then come cognitive biases that operate more subtly and systematically: confirmation bias makes us select news sources that confirm our views and forget those that don’t; courtesy bias makes us agree with a friend to be polite, inflating apparent consensus; groupthink encourages conformity in social networks and echo chambers; the overconfidence effect leads people to be more certain than their accuracy warrants; halo and pitchfork effects make impressions in one domain skew judgments in another; stereotyping lets membership in a group unjustly affect how we treat claims from its members. Bennett’s advice is practical: notice the emotions attached to a claim, practice separating the claim from the claimant, and, when possible, bring a mindset that is passionate about discovering the truth rather than validating an identity.

Once you have a check on your internal distortions, you move to the work of disambiguation—teasing clarity out of fuzzy language and hidden assumptions. Rule three insists you isolate the actual claim. Bennett makes an observation that is deceptively simple but devastatingly important: many online arguments are fights over claims that were never made. Implicit claims are common; a sarcastic phrase or a meme often implies assertions that are never stated. Before refuting or supporting a claim, ask “what exactly is being claimed?” Don’t build strawmen. Distinguish explicit claims—clear statements like “the sun is about 93,000,000 miles from the earth”—from implicit claims that require inference, such as “Are you really going to wear that?” which implies a negative judgment about a person’s clothing. When a claim is implicit, solicit clarification. If you cannot, make your assumptions explicit so you aren’t arguing against ghosts.

Rule four presses you to define each relevant term clearly and precisely. Words carry many meanings. “Climate change is a hoax” is a perfect example: which part of climate change? The warming of the planet? The human contribution? The seriousness of the predicted consequences? The word “hoax” itself ranges from “not real” to “a deliberate, organized deception.” By forcing precise definitions you can convert vague assertions into propositions that can be tested or debated sensibly. Two people may be arguing right past each other because they mean different things by the same words.
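
To see how quickly the senses multiply, here is a minimal sketch in Python (the sense lists are illustrative, not Bennett's) that expands the slogan into every distinct claim it could be making:

from itertools import product

# Illustrative senses only: "climate change" and "hoax" each carry several
# meanings, and each combination is a different claim.
climate_change_senses = [
    "the planet is warming",
    "humans contribute significantly to the warming",
    "the predicted consequences are serious",
]
hoax_senses = ["is not real", "is a deliberate, organized deception"]

for subject, hoax in product(climate_change_senses, hoax_senses):
    print(f"Claim: '{subject}' {hoax}")
# Six distinct claims, each calling for different evidence.

Two debaters who have each silently picked a different row of this grid can argue for hours without ever addressing the same proposition.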

Rule five demands that the words you use reflect the scope and degree of your claim accurately. Scope tells us who or what is included—“all men,” “some men,” “many men”—and degree quantifies how strong a claim is. A sweeping statement like “all men are bastards” is easy to falsify if you define “bastard” narrowly, while “some men are bastards” is nearly impossible to falsify in practice yet is the weaker claim rhetorically. Bennett recommends replacing vague quantifiers with numbers or percentages when possible: “Based on my experience, about 30 percent of men are bastards” is clumsy but clearer and more testable than “men are bastards.” This precision matters in real-world debates, as Bennett demonstrates with COVID-19 examples. When President Trump said in March 2020 that the virus was “very contagious” but we had “tremendous control” of it, that second phrase overstated the degree of control. Media critics responded with the opposite extreme—no control at all. The truth lay between the extremes. Calibrating scope and degree helps avoid misleading rhetoric and prepares an audience for evidence-based argument.
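
A minimal sketch, with proportion ranges I have invented for illustration, shows why scope changes a claim's testability:

# Illustrative mapping from vague quantifiers to the proportion ranges they
# could plausibly denote (these ranges are assumptions, not Bennett's).
QUANTIFIER_RANGES = {
    "all":  (1.00, 1.00),
    "most": (0.50, 1.00),
    "many": (0.20, 1.00),  # deliberately fuzzy: "many" resists pinning down
    "some": (0.01, 1.00),
}

def claim_consistent(quantifier: str, observed_proportion: float) -> bool:
    # True if the observed proportion falls inside the quantifier's range.
    low, high = QUANTIFIER_RANGES[quantifier]
    return low <= observed_proportion <= high

print(claim_consistent("all", 0.30))   # False: one counterexample refutes "all"
print(claim_consistent("some", 0.30))  # True: almost any sample fits "some"

Replacing the quantifier with a number ("about 30 percent") removes the ambiguity entirely: the claim is then checked against data rather than against intuitions about what "many" means.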

Rule six introduces operationalization, a scientific habit of making concepts measurable. If you want to test whether isolation causes depression, you must operationalize “isolation” and “depression”: what counts as isolation? How long? Is internet access allowed? Which assessment instrument measures depression—self-reports like the Beck Depression Inventory, clinical diagnoses, or observable behaviors? Bennett’s point is practical: you don’t need laboratory-grade definitions for everyday reasoning, but you do need a benchmark. Operationalization matters most when claims involve loaded language or legal and moral categories. He gives an instructive vignette: a man touches a woman’s lower back and the woman claims she was “sexually assaulted.” If the parties define “sexual assault” differently, their reactions range from a need for an apology to calls for imprisonment. Here Bennett introduces a technique he calls reductio ad consequentia—test a definition by considering its consequences. If your definition of sexual assault excludes cases most people would agree are assault, it’s too narrow; if it sweeps up non-problematic behavior, it’s too broad. Operationalization is not about finding one correct standard but about choosing measurable criteria, then using if-then logic to explore implications.
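
A minimal sketch of the isolation-and-depression example, with thresholds that are pure assumptions on my part, shows how operational definitions turn the claim into something checkable:

from dataclasses import dataclass

@dataclass
class Participant:
    days_without_in_person_contact: int
    hours_online_per_day: float
    bdi_score: int  # Beck Depression Inventory score (0-63)

def is_isolated(p: Participant) -> bool:
    # Operational definition (an assumption, not the book's): 14+ days without
    # in-person contact and under one hour per day of online interaction.
    return p.days_without_in_person_contact >= 14 and p.hours_online_per_day < 1

def is_depressed(p: Participant) -> bool:
    # Operational definition: BDI score of 20+ (moderate depression or worse).
    return p.bdi_score >= 20

p = Participant(days_without_in_person_contact=21,
                hours_online_per_day=0.5, bdi_score=24)
print(is_isolated(p), is_depressed(p))  # True True

Swap in a different benchmark (a BDI cutoff of 14, say) and the verdict may change, which is exactly the sensitivity the rule asks you to surface.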

Rule seven insists that claims be falsifiable when possible. Falsifiability is a principle borrowed from philosophy of science: a claim that can be shown false by evidence is preferable because it allows disconfirmation and thus real learning. “The world will end on December 21, 2012” is falsifiable; the date passed and the claim was demonstrably false. “The world will end after 1000 years of peace,” however, is functionally unfalsifiable in one human lifetime; “the world will end when God wants it to end” is virtually unfalsifiable because it relies on opaque divine intention. Unfalsifiable claims are weak because they provide no route to refutation; they are the currency of pseudoscience, mysticism, and certain political or marketing rhetoric. If someone insists on an unfalsifiable claim, Bennett recommends asking them to modify it into a falsifiable form. But be realistic: people motivated to believe will often rationalize a falsification away. Still, asking for a falsifiability criterion is a productive move even if it doesn’t convert a believer.
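
One way to picture the difference, in a minimal sketch of my own framing, is to attach to each claim a test that could in principle refute it; an unfalsifiable claim simply has none:

from datetime import date

def world_ended_2012_falsifier(obs: dict) -> bool:
    # The claim is refuted if we observe a later date with the world intact.
    return obs["today"] > date(2012, 12, 21) and obs["world_exists"]

claims = {
    "The world will end on December 21, 2012": world_ended_2012_falsifier,
    "The world will end when God wants it to end": None,  # nothing could refute it
}

observation = {"today": date.today(), "world_exists": True}
for text, falsifier in claims.items():
    if falsifier is None:
        print(f"unfalsifiable: {text!r}")
    elif falsifier(observation):
        print(f"falsified:     {text!r}")

Asking a claimant to supply the falsifier is, in effect, Bennett's suggested move of requesting a falsifiable reformulation.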

Rule eight tackles the language of certainty. Public life prizes confidence; critical thinking prizes appropriate skepticism. Bennett advises distinguishing possibility, plausibility, and probability. “It is possible that two million Americans will die of COVID-19” is often a vacuous statement if “possible” merely means “we can imagine it.” Possibility is only meaningful when it has been demonstrated or when there is an established mechanism. Plausibility is subjective: something may seem reasonable to you. Probability, by contrast, is mathematical and evidence-based. When making claims, use the right term. If your belief is an educated estimate, express a probability or at least a level of confidence that corresponds to evidence. Avoid “it could happen” as a rhetorical device meant only to sow doubt. Bennett also notes an important cultural point: leaders are rewarded for certainty, but in scientific and public-health contexts, humility about uncertainty is often the more responsible stance.
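
As a minimal sketch (the bands below are my invention, not Bennett's), confidence language can be tied to numeric estimates rather than used as free-floating rhetoric:

def confidence_language(probability: float) -> str:
    # Suggest wording whose strength matches the evidence-based estimate.
    if probability >= 0.95:
        return "almost certainly"
    if probability >= 0.70:
        return "probably"
    if probability >= 0.40:
        return "about as likely as not"
    if probability >= 0.10:
        return "unlikely, but plausible"
    return "very unlikely"

estimate = 0.03  # hypothetical evidence-based probability, not a real figure
print(f"It is {confidence_language(estimate)} (roughly {estimate:.0%}) that ...")

The discipline is in the mapping itself: "it is possible that" carries no information unless it can be cashed out as some band on this scale.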

The third phase, “embrace the continuum,” is an antidote to binary thinking. Many of our errors arise from simplifying complex phenomena into either-or categories. Rule nine tempers claims of simple causality by converting causes to contributing factors where appropriate. Few social, economic, or medical outcomes have single causes. Saying “the economy suffers because citizens prefer handouts” or “violent crime exists because humans are naturally violent” collapses complex interactions into a single villain. More precise reasoning acknowledges contributory factors: “One of the reasons the economy is struggling is that some citizens are disengaged from work” is less rhetorically satisfying but truer and less likely to mislead. Bennett’s goal is not to escape responsibility in causal thinking, but to arrive at nuanced, defensible claims that can be examined and falsified.
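
A minimal sketch, with entirely invented weights, shows what the rewrite buys you: the claim now exposes its assumed share of the effect instead of silently claiming all of it:

contributing_factors = {
    # Invented illustrative weights; together they account for the whole
    # effect, so no single factor can quietly claim 100 percent.
    "labor-force disengagement": 0.15,
    "supply-chain disruption": 0.35,
    "monetary policy": 0.30,
    "consumer confidence": 0.20,
}

factor = "labor-force disengagement"
share = contributing_factors[factor]
print(f"'{factor}' is one of several contributing factors "
      f"(assumed share: {share:.0%}), not the single cause.")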

Rule ten addresses analogies, one of the most powerful and treacherous tools in reasoning. Analogies help us understand unfamiliar situations by comparing them to known ones, but they can also mislead if the comparison is vague or if the similarities are superficial relative to the differences. Bennett gives a taxonomy of pitfalls: false equivalence claims that two things are the same when the comparison ignores crucial differences; ambiguous analogies that make comparisons without explaining how they are similar; and weak analogies that are dissimilar in too many or too important ways to be instructive. He stresses that analogies are claims in themselves and should be evaluated like any claim: specify the way things are similar, consider the degree of similarity, and test the analogy’s consequences. He gives memorable examples: comparing a president who orders a military strike that kills civilians to a mass murderer may highlight a shared characteristic—responsibility for deaths—but is likely to be a false equivalence regarding legal culpability. The “stop and frisk saves lives, so ban cars because that would save even more lives” example functions as a reductio ad absurdum: it shows that “saving lives” alone cannot be the sole criterion for policy. Bennett emphasizes thresholds: in practice we must often make binary policy decisions, and we accept or reject analogies relative to a threshold determined by their plausibility and stakes.
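
Bennett's threshold idea can be sketched directly; the features, weights, and threshold below are all my own illustrative assumptions:

def analogy_strength(shared: dict, weights: dict) -> float:
    # Weighted fraction of the relevant features the two cases actually share.
    total = sum(weights.values())
    score = sum(weights[f] for f, is_shared in shared.items() if is_shared)
    return score / total

# A president ordering a strike that kills civilians vs. a mass murderer:
shared_features = {
    "responsible for civilian deaths": True,
    "intent to kill civilians": False,
    "acting outside the law": False,
}
weights = {  # how much each feature matters to the conclusion "equally culpable"
    "responsible for civilian deaths": 1.0,
    "intent to kill civilians": 3.0,
    "acting outside the law": 2.0,
}
print(f"strength = {analogy_strength(shared_features, weights):.2f}")
# 0.17: far below, say, a 0.7 acceptance threshold

The point is not the arithmetic but the discipline: the analogy's proponent must name the shared features and say how much each one matters to the conclusion.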

Rule eleven is a final exhortation to be recursive: filter all relevant assumptions through the rules themselves. Claims are often nested: “Zeus’ lightning bolt fuels the sun” may pass superficial scrutiny if you acknowledge your ignorance and define terms, but it contains an embedded assumption—Zeus exists—that itself needs examination. If a claim depends on other claims, unpack them and run each through the eleven rules. Some assumptions are self-evident and needn’t be belabored—no reasonable person suspects the sun doesn’t exist—but others are foundational and must be interrogated.
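
The recursion has a natural sketch in code (the strength numbers are invented): a claim's overall strength is capped by its weakest assumption.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    own_strength: float  # 0..1, from applying rules 1-10 to this claim alone
    assumptions: list = field(default_factory=list)  # nested Claim objects

def overall_strength(claim: Claim) -> float:
    # A claim is only as strong as its weakest supporting assumption.
    return min([claim.own_strength] +
               [overall_strength(a) for a in claim.assumptions])

zeus_exists = Claim("Zeus exists", own_strength=0.01)
bolt = Claim("Zeus' lightning bolt fuels the sun", own_strength=0.9,
             assumptions=[zeus_exists])
print(overall_strength(bolt))  # 0.01: the hidden assumption dominates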

To make these rules concrete, Bennett works through a few extended examples. A bumper-sticker claim—“Guns don’t kill people; people kill people”—is revealed as ambiguous and in need of clarification. Is the claim legal, empirical, or moral? By isolating the claim and asking follow-up questions, you can recast it into a stronger, more precise claim such as “Stricter gun laws violate the Second Amendment.” Then you can operationalize “violate” and “Second Amendment,” examine falsifiability via courts, and unpack assumptions about what kinds of arms fall under constitutional protection. The process converts a throwaway slogan into a debatable legal claim.

Bennett also examines an analogy circulating during the early days of the COVID-19 pandemic: flattening the epidemiological curve compared to a parachute slowing a fall. A good analogy in this context must clarify whether flattening the curve means healthcare capacity is safe or only temporarily under control, whether “lifting restrictions” means all at once or selectively, and whether taking off a parachute midair is more similar to lifting restrictions while infections persist than taking off a parachute after you have landed. The better analogical claim recognizes contributing factors and scope: it is plausible and instructive to compare lifting restrictions to removing a parachute midair if restrictions are the primary safeguard against an otherwise overwhelming wave of infections. But the strength of the analogy depends on precise definitions and operationalization: what counts as “flattened” and what thresholds for reopening are acceptable? The analogy is more useful when specific similarities and differences are spelled out.

Faith-based claims receive careful, respectful scrutiny. “Prayers work” is too broad to be useful. Bennett shows how to narrow the claim—asking, for instance, whether petitionary prayer to a Catholic saint produces statistically significant improvements compared to control groups—and how operationalization and falsifiability turn such religious assertions into testable hypotheses. If the proponent insists that a saint healed someone “through God’s power,” the claim becomes unfalsifiable and hence a dead-end for scientific evaluation. The practical compromise is often to separate the question into two parts: does the act of petitionary prayer correlate with improved outcomes? And if so, what mechanisms (psychological placebo effects, social support, or supernatural intervention) might explain it? Only the first question admits of empirical testing; the second quickly collapses into metaphysical claims that, if specified as divine causation, are often unfalsifiable.
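
The first, empirical half of the question reduces to an ordinary comparison of outcomes, sketched here with invented numbers:

# Hypothetical data: (recovered, total) for prayed-for vs. control patients.
recovery = {"prayed-for": (120, 200), "control": (112, 200)}
for group, (recovered, total) in recovery.items():
    print(f"{group}: {recovered / total:.0%} recovered")
# Whether the four-point gap is statistically significant is a standard
# hypothesis test; the metaphysical half of the question never gets this far.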

Throughout the Preface and opening chapters, Bennett returns to two guiding principles. First, strengthen claims by making them clearer, narrower, and more testable; the quality of a claim is measured by its clarity and precision, not by its truth. Second, cultivate intellectual humility and manage your biases. These moves are mutually reinforcing: humility makes you more likely to define terms, ask for operational measures, and prefer falsifiability; clarity makes it harder to hide behind rhetoric or motivated reasoning.

Bennett also makes several smaller but potent points. When probability is unknowable—as with claims about miracles or extremely rare historical events—Bayesian methods can help, but they begin with prior assumptions that can themselves be biased. The principle of parsimony, or Occam’s Razor, advises selecting the claim with fewer assumptions when probabilities are unknowable. Competing claims that are not mutually exclusive should be understood as such: saying that “something caused the universe” is not logically inconsistent with saying “a god caused the universe,” but the more parsimonious claim is usually the more plausible absent additional evidence. Loaded language and rhetoric must be operationalized if they are to be discussed meaningfully. And analogies and causal claims will almost always rest on implicit assumptions that must be made explicit and evaluated in turn.
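
A worked sketch of the point about priors (all numbers invented): the same evidence yields wildly different conclusions depending on the prior you start from, which is exactly where bias enters.

def posterior(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    # Bayes' theorem: P(claim | evidence).
    numerator = p_e_if_true * prior
    return numerator / (numerator + p_e_if_false * (1 - prior))

# Invented likelihoods: P(report | miracle) = 0.8, P(report | no miracle) = 0.3.
for prior in (0.5, 0.01, 0.000001):
    print(f"prior={prior:<9} posterior={posterior(prior, 0.8, 0.3):.6f}")
# prior=0.5       posterior=0.727273
# prior=0.01      posterior=0.026230
# prior=1e-06     posterior=0.000003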

In closing, Bennett offers readers a checklist—the eleven rules of reason—intended as a practical toolkit: acknowledge your limits, explore your biases, isolate the claim, define terms precisely, use appropriate scope and degree, operationalize terms when possible, make claims falsifiable when possible, express meaningful levels of confidence, convert causes to contributing factors, craft analogies carefully, and subject hidden assumptions to the same scrutiny. The Preface makes clear that you do not have to memorize these rules or apply them in strict order; repeated practice will make them second nature. But used deliberately, they will turn ambiguous shouting matches into productive evaluation and make your own claims stronger and more defensible.

The voice of the book is candid and conversational, mixing personal anecdote with philosophical insight and practical examples. Bennett does not promise a miracle cure for all reasoning problems. He promises, credibly, that by taking these rules seriously you will get better at making claims that can be tested and at discerning which claims warrant your attention. The aim is civic as much as individual: better claims, better debate, and healthier democratic discourse. If you leave the Preface with one takeaway, it is this: clarity is the precondition of disputability. A claim that is clear and precise is easier to test, easier to challenge, and therefore more useful than a rhetorically powerful, vague assertion. Reasonable people disagree, but they can disagree productively—and agreeing on the meaning of words, the scope of claims, and how to measure them is the first step toward that productive disagreement. Follow these rules, and you will not only avoid many common errors of thought; you will also be in a position to learn when you are wrong, to change your mind, and to contribute to conversations that actually lead to better outcomes.

Chapter Summaries

Rule #1: Acknowledge the Limits of Your Knowledge Regarding the Claim

This chapter emphasizes humility about what you know and why that humility matters for claim evaluation. It explains how a small amount of information can create overconfidence—invoking the Dunning-Kruger effect—and urges readers to lower their subjective estimate of competence when faced with complex topics. The author stresses that recognizing knowledge gaps helps avoid shallow conclusions and opens the door to learning from domain experts. Practical examples include the internet-savvy amateur who assumes parity with specialists after a few hours of Googling. The chapter warns against the genetic fallacy—dismissing correct information because of an untrusted source—and suggests that correct facts can come from unlikely places. Actionable steps: explicitly admit when you lack expertise, identify and consult authorities, and remain receptive to well-supported information even from surprising sources. The chapter closes with a short rule summary: understand that you probably know less than you think and that even frequently wrong sources can occasionally be right. The recommended habit is modesty combined with targeted learning—ask questions, check credentials, and prioritize epistemic honesty in conversations.

Rule #2: Explore Your Biases Related to the Claim

This chapter catalogs key cognitive and emotional biases that distort our evaluation of claims and offers practical ways to identify and counteract them. It opens with motivated reasoning demonstrated through personal anecdote—how emotional investment in beliefs (e.g., religious faith) can hijack reasoning—and frames passion as something to redirect toward truth rather than confirmation. The author lists common biases: confirmation bias, courtesy bias, groupthink, overconfidence effect, halo/pitchfork effects, and stereotyping. For each bias the chapter provides examples (e.g., selecting agreeable news sources, nodding along in friendly groups, dismissing an argument because of the speaker) and countermeasures. Countermeasures include actively seeking disconfirming evidence, separating claims from their sources, thinking independently in group settings, lowering confidence estimates to account for overconfidence, and consciously separating positive or negative impressions of a person from assessment of their claims. Actionable guidance recommends routine mental checks: ask "what would change my mind?", invite critical perspectives, and treat source credibility as a heuristic for quick assessments but not as a substitute for claim-level evaluation. The core takeaway: recognize when emotions or social pressures are driving your judgments and cultivate a temperament that prizes corrective feedback.

Rule #3: Isolate the Actual Claim

This chapter focuses on extracting the precise claim someone is making, distinguishing explicit statements from implicit implications. It explains how strawman arguments and assumptions arise when we fail to isolate the claim, and emphasizes clarifying ambiguity before debating evidence. Examples show explicit claims ("The sun is about 93,000,000 miles from the earth") versus implicit or conversational claims (e.g., insults, nostalgic generalizations, rhetorical questions) where listeners must infer intent. Actionable techniques include asking direct clarifying questions, paraphrasing the claim back to the speaker, and refusing to argue against an inferred version that the speaker didn’t assert. The chapter warns against filling in gaps unconsciously and recommends making the implicit explicit whenever possible ("Do you mean X or Y?"). The practical benefit is efficiency: isolating the claim prevents wasted argument over points nobody actually made and prevents escalation caused by mischaracterization. The short rule summary stresses that isolating the claim often uncovers hidden assumptions or entirely different claims that must be addressed separately.

Rule #4: Clearly and Precisely Define Each Relevant Term

This chapter teaches how multiple meanings of words create weak or misleading claims and why precise definitions matter. Using the example "Climate change is a hoax!" the author demonstrates how parsing "climate change" and "hoax" yields very different claims—from denying temperature rise to alleging organized deception. The chapter shows how scope and implied subclaims hide within broad phrases and how clarifying terms converts rhetorical claims into testable ones. Readers are instructed to list relevant terms, ask for definitions, and consider alternative senses of loaded words. The author recommends disambiguation as a precondition to evidence evaluation: you cannot choose the right evidence until you know what is being claimed. Practical exercises: rewrite sweeping claims into narrow, testable versions; ask whether the speaker means "hoax" as "not occurring" or "deliberate deception." The chapter frames precise definition as a communicative and critical tool—better definitions improve dialogue, reduce misunderstanding, and create stronger claims that either can be tested or legitimately defended.

Rule #5: Use Terms That Reflect the Scope and Degree of the Claim Accurately

This chapter handles scope and degree—how broad or strong a claim is—and shows why many disputes stem from imprecise quantifiers. The author contrasts statements like "All men are bastards," "Some men are bastards," and the vague "Men are bastards," illustrating how falsifiability and evidential burden change with scope. The recommendation: use categorical words deliberately (all, most, some) and, when possible, prefer numerical estimates or percentages to reduce vagueness. Real-world examples include political leaders' hyperbolic language about control during the COVID-19 pandemic. The chapter critiques extremes such as "tremendous control" versus "no control," and recommends aiming for middle-ground language (e.g., "we have some degree of control and are struggling to get it under control"). Practical steps: when you make a claim, specify the population, timeframe, and degree; when evaluating, push for those specifics. The core insight is that truth often sits between extremes; precise scope helps prevent miscommunication and allows more productive evidence gathering. The rule summary encourages adding numbers or at least clear qualifiers to statements whenever feasible.

Rule #6: Operationalize Terms When Possible

This chapter explains operationalization: turning vague concepts into measurable criteria so claims can be investigated. Using examples from psychology (isolation and depression) and public health (COVID-19 deaths), the author shows how operational definitions determine what counts for or against a claim. Operationalization doesn't require universal agreement—multiple reasonable measures can be used and compared—but it does require explicit benchmarks. The chapter introduces the reductio ad consequentia technique: test operational definitions by exploring the consequences of making them too broad or too narrow. The sexual-assault example ("caress" versus "moved his hand in a circular motion") demonstrates how operationalization can change the moral and legal outcomes. Practical advice: pick clear criteria, state how outcomes will be measured, and consider alternative metrics to reveal sensitivity of claims to definitions. The immediate benefit is clarity in debate and research: operationalized claims make evidence meaningful, reduce rhetorical slipperiness, and reveal when disputes are definitional rather than empirical.

Rule #7: Make the Claim Falsifiable When Possible

This chapter argues for framing claims so they can be proven false in principle. It distinguishes falsifiable claims ("The world will end on December 21, 2012") from functionally unfalsifiable claims (events beyond practical testing) and from purely unfalsifiable claims (divine timing). Falsifiability is not a guarantee of truth but is a hallmark of strong, investigable claims. The author provides tactics for converting unfalsifiable claims into falsifiable ones by asking for specifics or demanding testable implications (e.g., relating scriptural claims to empirically verifiable predictions). A cautionary note explains that even if you falsify a claim for neutral observers, motivated reasoning may prevent the original believer from accepting refutation. Action items: when confronted with an unfalsifiable claim, request a version that yields observable consequences; if none exists, treat the claim as weak for empirical purposes and address it on philosophical or theological grounds instead.

Rule #8: Express an Accurate and Meaningful Level of Confidence

This chapter explores how to communicate uncertainty responsibly by distinguishing possibility, plausibility, and probability. It critiques vacuous language such as "it is possible that" when no demonstration of possibility exists and highlights misuses of absolute confidence in uncertain domains (e.g., predicting an election). The chapter recommends matching the linguistic strength of a claim to the speaker's actual degree of belief and the evidence available. Illustrative examples include mistaken public predictions during the 2016 election and hyperbolic COVID-19 projections framed only as hypothetical. The author underscores that plausibility is subjective belief, whereas probability is an objective estimate grounded in data. Practical guidance: quantify confidence where possible (percentages, confidence intervals), avoid meaningless hedges, and label speculative statements as such. Ultimately, the chapter positions accurate confidence expression as essential to honest communication and effective decision making—overstating certainty misleads, while understating it can paralyze action.

Rule #9: Convert Causes to Contributing Factors When Appropriate

This chapter combats binary, single-cause thinking by encouraging readers to frame causal claims as contributions within a network of causes. Examples show how complex social outcomes (economic performance, crime rates, pandemic fatalities) are rarely the result of one simple cause. The author suggests shifting language from "the reason" to phrasing like "one of the reasons" or "a contributor" to reflect causation's multi-factorial reality. Practical rewrites are provided: blanket causal claims are replaced with calibrated ones that attribute partial influence and leave room for other factors. The chapter warns that scientific causation requires stronger standards, but for everyday reasoning, recognizing contributing factors reduces error and ideological oversimplification. Readers are urged to ask: what other factors could be at work? How would we measure relative contributions? This habit reduces polarized, blame-focused narratives and fosters more nuanced policy and everyday judgments.

Rule #10: Make Strong Analogies and Call Out Weak Ones

This chapter treats analogies as claims that must themselves be evaluated for strength. It explains that analogies are inherently imperfect but can be persuasive or misleading depending on specificity and relevance. Criteria for strong analogies include explicit identification of the shared feature, acknowledgement of dissimilarities, and avoidance of implied full equivalence that ignores critical differences. The chapter defines the weak-analogy fallacy and false equivalence, and offers examples (e.g., "Believing in God is like believing in Santa Claus" and policy analogies like stop-and-frisk vs banning cars). It prescribes making analogies precise: state in which way the two items are similar and limit the claim to that feature. The parachute/COVID analogy is dissected to show that strength depends on which attributes are compared and whether the comparison is to an ongoing or completed process. Actionable guidance: when you hear an analogy, ask "in what way are these alike?" and "does the similarity matter for the conclusion?" Use thresholds to translate analogy-based reasoning into binary decisions (accept/reject) with an explicit confidence level rather than reflexive dismissal or acceptance.

Rule #11: Filter All Relevant Assumptions Through These Same Rules

The final rule reminds readers that claims often rest on hidden assumptions which themselves are claims requiring scrutiny. Using a playful example ("Zeus' lightning bolt fuels the sun"), the chapter shows that even if the surface claim seems coherent under earlier rules, underlying assumptions (existence of Zeus, function of his lightning bolt) must be isolated and evaluated independently. The author instructs readers to unpack compound claims into constituent assumptions and run each through the rules of knowledge limits, bias checks, disambiguation, operationalization, falsifiability, confidence calibration, causal nuance, and analogy strength. The point: failure to examine assumptions yields false confidence and proliferates weak claims. Practical takeaway: adopt an assumption-audit habit—list implicit premises, test them, and revise the primary claim accordingly. This recursive application of the rules is the author's recommended path to consistently stronger claims and more reliable evaluations.

Notable Quotes

Our goal in this book is to evaluate the strength of claims, including the ones that we make.

The strength of a claim should not be confused with the strength of an argument

We don’t know what we don’t know, or to put another way, without knowing how much there is to know about a particular topic, we have no way to know how much about that topic we do know.

Make the claim falsifiable when possible

Who Should Read This

This book is ideal for anyone who frequently encounters or makes claims and wants practical tools to improve clarity and rigor: students, journalists, lawyers, policymakers, educators, debaters, managers, and engaged citizens. It is especially useful for those who must communicate persuasively without slipping into vagueness or ideological shorthand. Readers who value clear thinking will gain a compact, actionable checklist—the eleven rules—that they can apply immediately to everyday conversations, public discourse, and policy debates. Compared with more academic treatments (e.g., textbooks on logic or Bayesian epistemology) or broader cognitive-debiasing tomes (like Thinking, Fast and Slow), this work is concise and pragmatic: it focuses tightly on making and evaluating claims, with numerous real-world examples and hands-on suggestions (operationalize, falsify, calibrate confidence, unpack assumptions). If you want a practical companion to improve how claims are framed and scrutinized—without wading into heavy formal theory—this book delivers a clear, repeatable method.