A different way of thinking about thinking

Fascinating interview on Edge.org with Tom Griffiths of Berkeley. For me, the most interesting passage is this:

One of the mysteries of human intelligence is that we’re able to do so much with so little. We’re able to act in ways that are so intelligent despite the fact that we have limited computational resources—basically just the stuff that we can carry around inside our heads. But we’re good at coming up with strategies for solving problems that make the best use of those limited computational resources. You can formulate that as another kind of computational problem in itself.

If you have certain computational resources and certain costs for using them, can you come up with the best algorithm for solving a problem with those resources, trading off the errors you might make in solving it against the cost of using the resources you have, or the limitations imposed on those resources? That approach gives us a different way of thinking about what constitutes rational behavior.

The classic standard of rational behavior, which is used in economics and which motivated a lot of the human decision-making literature, focused on the idea of rationality in terms of finding the right answer without any thought as to the computational costs that might be involved.

This gives us a more nuanced and more realistic notion of rationality, a notion that is relevant to any organism or machine that faces physical constraints on the resources that are available to it. It says that you are being rational when you’re using the best algorithm to solve the problem, taking into account both your computational limitations and the kinds of errors that you might end up making.

This approach, which my colleague Stuart Russell calls “bounded optimality,” gives us a new way of understanding human cognition. We take examples of things that have been held up as evidence of irrationality, examples of things where people are solving a problem but not doing it in the best way, and we can try and make sense of those. More importantly, it sets up a way of asking questions about how people get to be so smart. How is it that we find those effective strategies? That’s a problem that we call “rational metareasoning.” How should a rational agent who has limitations on their computational resources find the best strategies for using those resources?

Worth reading (or watching or listening to) in full.
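
To make the error-versus-computation trade-off in that passage concrete, here is a toy sketch of my own (nothing like it appears in the interview, and all the numbers are made up): suppose you estimate a quantity by averaging k noisy samples, so expected squared error falls as σ²/k while the cost of thinking grows as c·k. The "boundedly optimal" amount of thinking is whatever k minimises the sum of the two.

```python
# Toy illustration of bounded optimality: how much computation should an agent spend?
# Hypothetical setting: estimate a quantity by averaging k noisy samples.
# Expected squared error of the average falls as sigma^2 / k,
# while the cost of computation rises as cost_per_sample * k.

import math


def total_loss(k: int, sigma: float, cost_per_sample: float) -> float:
    """Expected squared error of a k-sample average plus the cost of drawing k samples."""
    return sigma**2 / k + cost_per_sample * k


def best_sample_size(sigma: float, cost_per_sample: float) -> int:
    """Closed-form optimum: d/dk (sigma^2/k + c*k) = 0  =>  k* = sigma / sqrt(c)."""
    return max(1, round(sigma / math.sqrt(cost_per_sample)))


if __name__ == "__main__":
    sigma, cost_per_sample = 2.0, 0.01   # made-up numbers, purely for illustration
    k_star = best_sample_size(sigma, cost_per_sample)
    print(f"optimal number of samples: {k_star}")
    print(f"loss at the optimum:        {total_loss(k_star, sigma, cost_per_sample):.3f}")
    print(f"loss with 5x more thinking: {total_loss(5 * k_star, sigma, cost_per_sample):.3f}")
```

The point of the toy is that beyond k*, extra computation still shrinks the error but no longer pays for itself, which is the sense in which stopping early can be the rational thing for a resource-limited agent to do.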

I can see the point of trying to understand why humans are so good at some things. The capacity to make rapid causal inferences was probably hardwired into our DNA by evolution: it's 'System 1' in the categorisation proposed in Daniel Kahneman's book, Thinking, Fast and Slow, i.e. a capacity for fast, instinctive and emotional thinking, the kind that was crucial for survival in primeval times. But the other, equally important, question is why humans seem to be so bad at Kahneman's 'System 2' thinking, i.e. slower, more deliberative and more logical reasoning. Maybe it's because our evolutionary inheritance was laid down in a simpler era, and we're just not adapted to handle the complexity with which (as a result of our technological ingenuity) we are now confronted.

This has interesting contemporary resonances: climate change denial, for example; fake news; populism; and the tensions between populism and technocracy.