The fundamental problem with symbolic AI is that logic is bounded. Any corpus of logical expressions generated programmatically by following a set of manipulation rules has a shape: because the corpus evolves so rigidly, it is bound by its initial position and by the rules of manipulation. This becomes a problem when the corpus has to choose the direction in which it wants to evolve further, since the corpus is itself a product of those same rigid rules. Any heuristic rule we devise is usually not self-referential, and even when it is, it has a limited scope of self-inspection, because the heuristic must be produced by the same mechanism that generated the corpus of logical expressions in the first place: the rules of manipulation. And unfortunately, the rules of manipulation are atomic, akin to the fundamental laws of logic (e.g. modus ponens). I think it is often hard for people to imagine the s...
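To make the boundedness concrete, here is a minimal sketch (my own illustration, not anything from a specific system) of the kind of process described above: a "corpus" of propositions grown from a set of axioms by repeatedly applying one atomic manipulation rule, modus ponens, until nothing new can be derived. The names `axioms`, `implications`, and `derive_closure` are hypothetical.

```python
def derive_closure(axioms, implications):
    """Grow the corpus by modus ponens until a fixed point is reached.

    axioms: set of atomic propositions taken as true, e.g. {"P"}
    implications: set of (premise, conclusion) pairs, e.g. {("P", "Q")}
    """
    corpus = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            # Modus ponens: from `premise` and `premise -> conclusion`,
            # conclude `conclusion`.
            if premise in corpus and conclusion not in corpus:
                corpus.add(conclusion)
                changed = True
    return corpus

print(derive_closure({"P"}, {("P", "Q"), ("Q", "R"), ("S", "T")}))
# -> {'P', 'Q', 'R'}  ("T" is unreachable because "S" was never an axiom)
```

The closure is entirely determined by the starting axioms and the fixed rules; nothing outside the reachable set can ever appear, which is the sense of "bounded" I mean here.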