Philosophical paradoxes

Philosophical paradoxes are very often the result of what are called heuristics by psychologists: simple, quick, easy ways of answering questions or solving problems.
Timothy Williamson

Professor of Philosophy

15 May 2026
Key Points
  • We use heuristics that are built into the way we think without being aware of using them, and they can lead us into trouble, often into contradictions.
  • The Sorites paradox, or paradox of the heap, is a good example of a philosophical paradox: if you imagine a heap and then imagine taking one grain away, is what's left still a heap?
  • Philosophers are often willing to make a theory very complicated in order to accommodate all the data. That is not good scientific practice.

Heuristics

I want to say something about where philosophical paradoxes come from. And in my view, they're very often the result of what are called heuristics by psychologists. So heuristics are simple, quick, easy ways of answering questions or solving problems.


An example of a heuristic is that we use colour boundaries to judge the edges of physical objects in our environment. That's something we do automatically. We're not aware of doing it, but it's built into our visual systems, and it's not completely reliable. What reveals that is, for example, the use of camouflage in warfare: by painting shapes on objects in colours that don't correspond to the objects' underlying shapes, you can induce illusions about the layout of those objects.

I think something similar is going on with philosophical paradoxes: we are using heuristics that are built into the way that we think, and we're not aware of using them. And they can lead us into trouble, often into contradictions.

The paradox of the heap

An example of what I'm talking about is what I call the persistence heuristic, which is effectively a heuristic that tells us to ignore small changes. And that's something that we use all the time. For example, if you have a database of information, you can't be updating it all the time and checking every minute to see whether it's still correct. That's just not practical. So you have to take for granted that things haven't changed within the last 20 seconds, or whatever it happens to be.
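The database example can be made concrete with a small sketch (my illustration, not the author's; the class, the names, and the 20-second figure are assumed for the example). A cached value is simply trusted for a short time-to-live rather than re-checked on every use:

```python
import time

# A software version of the persistence heuristic: within the time-to-live,
# take for granted that the stored value hasn't changed. (Illustrative only;
# the 20-second default echoes the figure in the text.)
class CachedValue:
    def __init__(self, fetch, ttl_seconds=20.0):
        self._fetch = fetch              # the expensive "check reality" call
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = float("-inf")  # force a fetch on first use

    def get(self):
        # Persistence heuristic: assume nothing changed within the TTL.
        if time.monotonic() - self._fetched_at > self._ttl:
            self._value = self._fetch()
            self._fetched_at = time.monotonic()
        return self._value
```

Like the heuristic it models, the cache is usually right and occasionally wrong: if reality changes inside the TTL window, the stale value is served anyway.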

The paradox that it leads to in philosophy is what is sometimes called the Sorites paradox, or the paradox of the heap. If you imagine a heap and then imagine taking one grain away from the heap, you can ask the question: is what's left still a heap? The obvious answer is yes. And that gives you the principle that subtracting one grain from a heap of sand, or whatever it happens to be, still leaves a heap.


But of course, that principle, if you apply it often enough, leads to disastrous results. If you subtract grains of sand from a heap, sooner or later — because there are only finitely many of them in the heap — you will get down to three grains and still be saying it's a heap.

We get into this paradox because we're just applying the same heuristic over and over again. It gives the right results, as it were, 99% of the time. But it's not 100% correct.
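The structure of that reasoning can be sketched in a few lines of Python (my illustration, not the author's; the starting count and stopping point are arbitrary choices):

```python
# The Sorites reasoning as a loop: start with a clear heap and repeatedly
# apply the persistence heuristic "a heap minus one grain is still a heap".
def apply_persistence_heuristic(start_grains: int, stop_at: int) -> bool:
    """Return the heap-judgment after removing grains one at a time."""
    judged_heap = True  # start_grains grains: clearly a heap
    for _ in range(start_grains - stop_at):
        # One grain removed is a small change, so the judgment persists.
        judged_heap = judged_heap and True
    return judged_heap

# Applied step by step from 10,000 grains down to 3, the heuristic
# still classifies 3 grains as a heap: the absurd conclusion.
print(apply_persistence_heuristic(10_000, 3))  # True
```

The loop makes the mechanism visible: no single step is unreasonable, but iterating a heuristic that is only almost always right compounds into a conclusion that is plainly wrong.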

Better philosophy

The problem of how to handle the fact that our data is likely to include some errors is one that scientists face all the time. And part of the way they handle it is by not trying to fit their theories too closely to the data. Because if you fit your theories very closely to the data, and the data contains errors, then the theory is going to be wrong.


Often you can get a very close fit by complicating the theory a lot, by, as scientists say, using lots of parameters. But that kind of very close fitting tends to go badly, because you're fitting to data that are not all correct. And so what turns out to be a better approach in science is to go for much simpler theories, even though they don't fit the data as well as the more complicated ones. In the long run, that means that you're not building the errors from the data into your theory.
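As a toy illustration of that point (my own example with made-up numbers, not anything from the text): suppose the true law is y = 2x and one of five observations contains an error. A two-parameter line largely absorbs the error, while a polynomial with enough parameters to pass through every point builds the error into the theory:

```python
# Deterministic toy data: y = 2x, except the observation at x = 2 is wrong
# (it should be 4.0 but was recorded as 10.0).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.0, 10.0, 6.0, 8.0]

def fit_line(xs, ys):
    """Least-squares line: a 2-parameter 'simple theory'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def fit_interpolant(xs, ys):
    """Lagrange polynomial through every point: a 5-parameter 'complex theory'."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

simple, complex_fit = fit_line(xs, ys), fit_interpolant(xs, ys)
# At the unseen point x = 2.5 the true value is 5.0.
print(abs(simple(2.5) - 5.0))       # ~1.2: a modest error
print(abs(complex_fit(2.5) - 5.0))  # ~4.2: the interpolant amplified the bad datum
```

The interpolant fits the sample perfectly, errors and all; the line fits it imperfectly but generalises better, which is the trade-off the passage describes.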

That's something that philosophers are not used to doing. Philosophers are often willing to make a theory very complicated in order to accommodate all the data, and that is not good scientific practice. I think that philosophers can do better in handling this kind of problem, the problem that some of the data we are using may be wrong. If we go for simpler theories and don't allow ourselves to complicate and complicate in order to fit the data, that is better scientific practice. And it's quite relevant to philosophy as well.

Editor’s note: This article has been faithfully transcribed from the original interview filmed with the author, and carefully edited and proofread. Edit date: 2026

Discover more about philosophical paradoxes

Williamson, T. (2025). Heuristics. Ch. 1 of Overfitting and Heuristics in Philosophy. Oxford University Press.

Williamson, T. (1994). Vagueness. Routledge (Taylor & Francis Group).

Williamson, T. (2020). Suppose and Tell: The Semantics and Heuristics of Conditionals. Oxford University Press.

Williamson, T. (2024). Overfitting and Heuristics in Philosophy. Oxford University Press.
