J.S. Cruz

Human solution finding versus checking

I was reading Hillel Wayne’s always excellent mailing list, and there was a passage in this post that caught my attention:

This is more dangerous: where the AI gives you something that looks like what you want but it’s subtly wrong, so you need to proofread everything you get. To which I’d respond that checking a solution is still faster than finding the solution, so it’s still a net benefit. To which you’d point out that I myself have argued we are really bad at proofreading, and there have been high profile cases where “proofread” AI articles had major mistakes.

(Emphasis mine.)

Is this true for humans? I don’t think it’s obvious at all, though I don’t believe the contrary either.

What is solution checking? Finding out whether some artifact (program, system, etc.) satisfies a given problem statement. Presumably this artifact went through multiple states during development (solving problems is always iterative). We can only evaluate whether the state we think is final actually satisfies the problem statement if we have sufficient knowledge of it (duh), and what better way of gaining that knowledge than knowing the state that came before it, and the one before that, and so on?
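The intuition behind “checking is faster than finding” has a formal backbone: for problems in NP, verifying a candidate solution is cheap by definition, even when finding one is (as far as we know) expensive. A minimal sketch in Python, using a toy SAT instance as the stand-in artifact (the encoding and names here are mine, for illustration only):

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of signed
# integers: 3 means "x3 is true", -3 means "x3 is false".
Clause = list[int]

def check(formula: list[Clause], assignment: dict[int, bool]) -> bool:
    """Verify a candidate assignment: linear in the size of the formula."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def find(formula: list[Clause], n_vars: int) -> dict[int, bool] | None:
    """Find a satisfying assignment by brute force: exponential in n_vars."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(formula, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(find(formula, 3))  # e.g. {1: False, 2: False, 3: True}
```

The question this post is asking is whether humans enjoy the same asymmetry: check runs in time linear in the formula, but a human proofreader is not a clause evaluator.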

But isn’t this just going through the iterative process in the first place, i.e., writing the program/designing the system/solving the problem?

We have this experience every day: we understand things better when we make them ourselves. This is particularly true of programming.

I have no proper basis for this other than observation, but I’d say that, for non-trivial problems, writing code is easier than reading code. And if, for a particular problem, writing turns out to be the harder part, I’d say it is because integrating with the existing code (or understanding how the new code fits within the whole program) is hard, which I’d still lay at the feet of “reading” code.

Model dominance

This is an example of something that, once you’re tuned to it, you start seeing everywhere: models/aphorisms which are correct in one domain, with one set of actors, can be wrong in the same domain, with a different set of actors.

An example where Hillel does not fall into this trap is his post about P vs NP. We know that, in the abstract, NP-hard problems take longer to solve than problems in P, but if we let this model dominate our thinking about NP problems, we miss out on solving a large number of real-world problems, because many (most?) real-world instances of NP problems are perfectly tractable.
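To make “perfectly tractable” concrete: modern CDCL SAT solvers routinely dispatch industrial instances with millions of clauses. A sketch using the python-sat package (my choice of library, as an example; any off-the-shelf solver would do), with the same clause encoding as above:

```python
# pip install python-sat
from pysat.solvers import Glucose3

# Signed-integer literals, as before: 3 means x3, -3 means not x3.
formula = [[1, -2], [2, 3], [-1, -3], [1, 2, 3]]

with Glucose3() as solver:
    for clause in formula:
        solver.add_clause(clause)
    if solver.solve():
        print(solver.get_model())  # a satisfying assignment, e.g. [1, 2, -3]
    else:
        print("unsatisfiable")
```

The worst case is exponential; the cases we actually meet usually are not, and a model that only remembers the worst case quietly forbids using tools like this.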

Tags: #modelling