Follow the data, not the conclusion.
Most middle-schoolers learn “the scientific method” as an eight-step flowchart, run it once, write the conclusion the teacher already wrote on the board, and call it science. That is not science. Science is a loop, not a line — and the most important move in the loop is the one most students never get to make.
Walk into almost any U.S. middle-school science class and ask the students to recite the scientific method, and you will get a tidy eight-step list: question, research, hypothesis, materials, procedure, experiment, analysis, conclusion. It will be on a poster on the wall. It will be on the rubric for the science fair. It will appear, almost word for word, on the unit test.
The list is not exactly wrong. It’s just not how science actually works. Real science is not a linear flowchart that you start at the top and finish at the bottom. It is a cycle — one that practicing scientists run, and re-run, and re-run, sometimes for a career, with no fixed endpoint and a hypothesis that is allowed, even encouraged, to lose.
The diagram below is a model of that cycle — the Cycle of Scientific Enterprise — that I came across years ago in an undergraduate science-methodology textbook and have used ever since to walk students through how scientific knowledge actually accumulates.1 The four-bubble cycle on the inside is the part most students get. The amber loop on the outside is the part most students miss.
How scientific knowledge actually accumulates: a four-stage loop (teal) plus a falsification path (amber) that sends a failed experiment back to question its own assumptions.
The four moves of the cycle
Working clockwise from the top:
- Hypothesis. An informed prediction, based on the current theory. Not a hunch, not a wish, not what the teacher said the answer was. A specific, testable statement: “If theory X is right, then in this setup I should observe Y.”
- Experiment. Set the prediction up so it can actually fail. A real experiment is one where the hypothesis has a chance to lose — controlled variables, a clear outcome measure, methods that someone else could repeat.
- Analysis. Compare what you actually observed to what the theory said you would observe. Not what you hoped you would observe. Not what would get the prettier grade. What the data actually shows.
- Theory. If the data confirms the prediction, the existing theory survives a test, and the result becomes a new scientific fact — another data point in the growing pile that the theory has to keep accounting for. The theory is the “best explanation at present,” and the word present is doing real work in that phrase.
That’s the inner loop, and it’s the one most kids can recite. The hard part — the part that makes this actually science instead of just confirmation — is the amber loop on the outside.
The “No” branch is the entire point
Look at the Analysis bubble again. There are two arrows out of it: Yes and No. The Yes arrow is comfortable. The No arrow is where science lives.
When the data doesn’t agree with the prediction, an honest scientist doesn’t quietly massage the data, fudge the measurement, or rewrite the conclusion to match the theory. They take the No arrow into the Review step and ask three uncomfortable questions, in order: Did I run the experiment well? Was my hypothesis well-formed? And — if both of those check out — is the theory itself wrong?
That last question is the one that makes science a self-correcting enterprise rather than a religion. Karl Popper’s 1934 insight, still the cleanest one-line definition of science we have, is that a claim is scientific only if it could in principle be shown to be false.2 A theory that survives an honest, well-run experiment that could have killed it is a stronger theory than it was yesterday. A theory that can’t be killed by any conceivable observation isn’t a theory at all.
A real scientist designs experiments that could prove them wrong — and then runs them anyway.
Don’t fool yourself
The single most-quoted line in modern science teaching comes from Richard Feynman’s 1974 Caltech commencement address, “Cargo Cult Science.” It’s worth typing out in full because no paraphrase improves it:
“The first principle is that you must not fool yourself — and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”3
The whole essay is about a particular failure mode — doing the surface motions of science without the underlying habit of actually trying to disprove yourself — and Feynman’s name for it, “cargo-cult science,” has stuck for fifty years for a reason. It is exactly what happens in the kind of classroom science fair where the conclusion is decided in advance and the experiment is reverse-engineered to produce it.
That is not what we are training. We are training the opposite.
Why the honest scientist is apolitical
When I tell a student that real scientists are apolitical, I don’t mean scientists don’t vote, or don’t care about the world, or don’t have opinions. They do. I mean something narrower and more important: the method itself is indifferent to your opinions.
The cycle in the diagram doesn’t care which side of an argument you started on. It doesn’t care what your parents believe, what your professor believes, what your peer group believes, or what is currently fashionable to believe. It only asks one question, over and over: does the data agree with the prediction? A scientist who lets the answer to that question depend on a political affiliation has stopped doing science and started doing advocacy. The two activities are both legitimate. They are not the same activity. Confusing them is how public trust in science gets eroded, and it is one of the quietest, most preventable failures of high-school science education.4
The student who leaves Bright Minds knowing the difference is better prepared for college, better prepared for a career, and frankly better prepared for a healthy adult civic life than one who doesn’t.
What this looks like at our bench
We don’t teach the philosophy of science as philosophy. We teach it through habits at the bench, every Saturday:
- Draw what you see, not what you remembered. On microscope slides, students sketch the actual field of view, including the parts that don’t match the textbook. The textbook image is the theory. The slide is the data. When they disagree, the slide wins on the page, and we ask why.
- “I don’t know yet” is a complete answer. Maybe my favorite phrase to hear in the lab. It means the student has stopped pattern-matching to a memorized conclusion and is actually looking. The next move is another observation, or a check experiment — not a confident wrong guess.
- Mistakes stay in the notebook. Single-line cross-out, initialled, dated, original entry still legible. That isn’t bookkeeping; it’s the falsification loop made visible. The notebook is supposed to show where the prediction was wrong, because that’s where the actual learning happened.
- The capstone defends a finding, not a position. Every student in the cohort finishes by presenting what they observed, what they think it means, and where they could be wrong. That last clause is non-optional. A capstone with no possible error bars isn’t a capstone — it’s a speech.
The habit we’re actually trying to build
A student who internalizes the cycle — not as a poster on the wall, but as the way they actually approach a question — has acquired the single most transferable thinking skill STEM has to offer. It travels into chemistry, into nursing assessments, into engineering debugging, into reading a research paper, into reading a news article. It is the difference between a person who can be told what to think and a person who can figure out what is, in fact, true.
That’s the kid we’re trying to graduate. They take on a question without already knowing the answer. They design an experiment that could embarrass them. They follow the data when it contradicts what they expected. And when somebody tells them confidently that the science is settled on a question they actually know something about, they have the habit, and the quiet confidence, to ask the only question that matters: which experiment showed that, and could it have come out the other way?
That’s a real scientist. The eight-step poster doesn’t get them there. The loop does.
Sources & further reading
- The “Cycle of Scientific Enterprise” diagram in this essay is adapted from a model widely used in undergraduate science-methodology textbooks as a more honest replacement for the linear “eight-step scientific method” taught in K–12 classrooms. [Leslie may add the specific textbook citation she uses with her BSU students.]
- Popper, K. R. (1959 [1934]). The Logic of Scientific Discovery. London: Hutchinson. The original argument that falsifiability — the ability of a claim to be shown wrong by some conceivable observation — is the line between science and non-science. See also the encyclopedia summary at Stanford Encyclopedia of Philosophy — “Karl Popper”.
- Feynman, R. P. (1974). “Cargo Cult Science.” Caltech commencement address, reprinted in Surely You’re Joking, Mr. Feynman! (W. W. Norton, 1985). The canonical short statement of intellectual honesty in scientific work, including the line about not fooling yourself.
- On the corrosive effect of treating disputed empirical questions as identity markers rather than as testable claims, see Naomi Oreskes, Why Trust Science? (Princeton University Press, 2019), and the broader public-understanding-of-science literature on the difference between the institutional consensus around a finding and the methodological grounds for that consensus.
Posted Apr 30, 2026.