Societies need rules that make no sense for individuals. For example, it makes no difference whether a single car drives on the left or on the right. But it makes all the difference when there are many cars!
There was a failure to recognize the deep problems in AI; for instance, those captured in Blocks World. The people building physical robots learned nothing from it.
When David Marr at MIT moved into computer vision, he generated a lot of excitement, but he hit up against the problem of knowledge representation: he had no good representations for knowledge in his vision systems.
I think Lenat is headed in the right direction, but someone needs to include a knowledge base about learning.
We wanted to solve robot problems and needed some vision, action, reasoning, planning, and so forth. We even used some structural learning, such as was being explored by Patrick Winston.
Kubrick's vision seemed to be that humans are doomed, whereas Clarke's was that humans are moving on to a better stage of evolution.
If you just have a single problem to solve, then fine, go ahead and use a neural network. But if you want to do science and understand how to choose architectures, or how to go to a new problem, you have to understand what different architectures can and cannot do.
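As a concrete illustration of this point (not from the interview itself): the classic result in Minsky and Papert's Perceptrons is that a single-layer perceptron cannot compute XOR, while adding one hidden layer makes it trivial. A minimal sketch in Python:

```python
# Sketch of the perceptron/XOR limitation: one example of knowing
# what an architecture can and cannot do. Illustrative only.
import itertools

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
XOR = [0, 1, 1, 0]

def linear_unit(x, w, b):
    """Single threshold unit: fires iff w.x + b > 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

# Exhaustively search small integer weights for a single unit computing XOR.
# None is found -- and none exists, since XOR is not linearly separable.
found = any(
    all(linear_unit(x, (w1, w2), b) == y for x, y in zip(X, XOR))
    for w1, w2, b in itertools.product(range(-3, 4), repeat=3)
)
print("single-layer unit can compute XOR:", found)  # False

# One hidden layer suffices: XOR(a, b) = OR(a, b) AND NOT AND(a, b).
def two_layer(x):
    h1 = linear_unit(x, (1, 1), 0)    # OR gate  (fires if a + b > 0)
    h2 = linear_unit(x, (1, 1), -1)   # AND gate (fires if a + b > 1)
    return linear_unit((h1, h2), (1, -1), 0)  # h1 AND NOT h2

print("two-layer outputs:", [two_layer(x) for x in X])  # [0, 1, 1, 0]
```

The same threshold units, wired in a different architecture, go from provably unable to represent the function to computing it exactly, which is the kind of can/cannot analysis being argued for here.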