Replicated Typo and @jasonbaldridge, inter alia, have been discussing a recent forum where Chomsky, Minsky, and Labov (among others) met to discuss the past and future of artificial intelligence. Replicated Typo points out the most interesting contribution from Chomsky:
Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.
More evidence, as I suggested before, that Chomsky and his acolytes really don't mean the same thing that empiricists do when they talk about "what we mean when we say we know something". Having a good predictive model (even if it's statistical, or perhaps especially if it's statistical) is a form of knowing something, and it's a form that's younger than Noam himself.
Empiricists adopt a "walks like a duck" model for demonstrating their understanding. I trace it to Shannon and Turing (pace Babbage and Lovelace). The Turing Test itself is a form of "walks like a duck" empiricism ("if it looks and acts like an intelligence…"), and statistical models that behave the same as a natural phenomenon demonstrate an understanding of that phenomenon. Rejecting an explanation because it is messy is a form of willful blindness: the real world sometimes is messy, and an explanation must actually match observations; simplicity is not the only criterion.
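To make the "walks like a duck" point concrete, here is a minimal sketch (not anyone's actual bee-dance model; the observation strings are invented) of the kind of purely statistical model Chomsky derides: it counts which symbol tends to follow which, and can then predict behavior without any account of *why* the behavior occurs.

```python
from collections import Counter, defaultdict

# Hypothetical observations of some behavior, encoded as symbol sequences.
# These strings are made up purely for illustration.
observations = ["abcabd", "abcabc", "abdabc"]

# A purely statistical summary: counts of bigram transitions.
transitions = defaultdict(Counter)
for seq in observations:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def predict_next(symbol):
    """Return the most frequently observed successor of `symbol`."""
    counts = transitions[symbol]
    return counts.most_common(1)[0][0] if counts else None

# The model "knows" the behavior only in the predictive sense:
# it can say what usually follows 'a' with no theory of the mechanism.
print(predict_next("a"))
```

On the empiricist view sketched above, if such a model's predictions reliably match new observations, that predictive success is itself a form of understanding, however messy or theory-free the model looks.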
Mirrored from Trochaisms.