Superintelligence
Eric Fingerman
By a "superintelligence" we mean an intellect that is much smarter than the
best human brains in practically every field, including scientific creativity,
general wisdom and social skills. This definition leaves open how the
superintelligence is implemented: it could be a digital computer, an
ensemble of networked computers, cultured cortical tissue or what have
you. It also leaves open whether the superintelligence is conscious and has
subjective experiences.
Entities such as companies or the scientific community are not
superintelligences according to this definition. Although they can perform a
number of tasks of which no individual human is capable, they are not
intellects, and there are many fields in which they perform much worse than
a human brain; for example, you can't have a real-time conversation with
"the scientific community".
Superintelligence requires software as well as hardware. There are several
approaches to the software problem, varying in the amount of top-down
direction they require. At one extreme we have systems like CYC, which is a
very large, encyclopedia-like knowledge base paired with an inference
engine. It has been spoon-fed facts, rules of thumb and heuristics for over
a decade by
a team of human knowledge enterers. While systems like CYC might be
good for certain practical tasks, this hardly seems like an approach that will
convince AI-skeptics that superintelligence might well happen in the
foreseeable future. We have to look at paradigms that require less human
input, ones that make more use of bottom-up methods.
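
Before moving on, it may help to make the knowledge-base-plus-inference-engine paradigm concrete. The following toy sketch, in Python, performs forward-chaining inference over hand-entered facts and rules. It is only an illustration of the general approach; the facts, rules and function names are invented for the example and bear no relation to CYC's actual representation or machinery.

    # Toy forward-chaining inference engine in the spirit of a hand-built
    # knowledge base. Purely illustrative; CYC's actual representation and
    # inference machinery are far richer than this.

    facts = {("mammal", "dolphin"), ("lives_in", "dolphin", "water")}

    def rules(facts):
        """Yield new facts derivable in one step from the current facts."""
        for fact in list(facts):
            # Rule 1: every mammal is an animal.
            if fact[0] == "mammal":
                yield ("animal", fact[1])
            # Rule 2: an animal that lives in water is aquatic.
            if fact[0] == "animal":
                x = fact[1]
                if ("lives_in", x, "water") in facts:
                    yield ("aquatic", x)

    def forward_chain(facts):
        """Apply all rules repeatedly until no new facts are produced."""
        while True:
            new = set(rules(facts)) - facts
            if not new:
                return facts
            facts |= new

    print(forward_chain(set(facts)))
    # Derives ("animal", "dolphin") and then ("aquatic", "dolphin").

Every fact and rule here had to be entered by hand, which is exactly the bottleneck the paragraph above points to.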
Given sufficient hardware and the right sort of programming, we could
make the machines learn in the same way a child does, i.e. by interacting
with human adults and other objects in the environment. The learning
mechanisms used by the brain are currently not completely understood.
Artificial neural networks in real-world applications today are usually
trained through some variant of the backpropagation algorithm (which is
known to be biologically unrealistic). The backpropagation algorithm
works fine for smallish networks (of up to a few thousand neurons) but it
doesn't scale well. The time it takes to train a network tends to increase
dramatically with the number of neurons it contains. Another limitation of
backpropagation is that it is a form of supervised learning, requiring that
a signed error term for each output neuron be specified during learning. It's
not clear how such detailed performance feedback on the level of
individual neurons could be provided in real-world situations except for
certain well-defined specialized tasks.
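
To see concretely what such per-neuron feedback involves, here is a minimal backpropagation sketch in Python (using NumPy) for a tiny two-layer network trained on XOR. The architecture, learning rate and task are arbitrary choices made for illustration, not anything prescribed above.

    import numpy as np

    # Minimal two-layer network trained by backpropagation on XOR.
    # The point to notice is the supervised setup: a target is given for
    # every output neuron, from which a signed error term is computed
    # at each training step.

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)  # target per output neuron

    W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
    b1 = np.zeros(4)
    W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
    b2 = np.zeros(1)
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)        # hidden activations
        Y = sigmoid(H @ W2 + b2)        # output activations

        # Signed error term for each output neuron: (target - output),
        # scaled by the sigmoid derivative. This is the detailed
        # per-neuron feedback the text says is hard to supply outside
        # well-defined tasks.
        delta_out = (T - Y) * Y * (1 - Y)

        # Propagate the error backwards to the hidden layer.
        delta_hid = (delta_out @ W2.T) * H * (1 - H)

        # Weight updates (convergence depends on the random initialization).
        W2 += lr * H.T @ delta_out
        b2 += lr * delta_out.sum(axis=0)
        W1 += lr * X.T @ delta_hid
        b1 += lr * delta_hid.sum(axis=0)

    print(np.round(Y.ravel(), 2))       # approaches [0, 1, 1, 0]

Notice that the training loop cannot take a single step without the target array T; nothing comparable is available to an agent learning from unstructured interaction with its environment.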
A biologically more realistic learning mode is the Hebbian algorithm.
Hebbian learning is unsupervised, and it might also have better scaling
properties than backpropagation. However, it has yet to be explained how
Hebbian learning by itself could produce all the forms of learning and
adaptation of which the human brain is capable (such as the storage of
structured representations in long-term memory; Bostrom 1996).
Presumably, Hebb's rule would at least need to be supplemented with
reward-induced learning (Morillo 1992) and maybe with other learning
modes that are yet to be discovered. It does seem plausible, though, to
assume that only a very limited set of different learning rules (maybe as few
as two or three) are operating in the human brain. And we are not very far
from knowing what these rules are.
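
For contrast with the backpropagation sketch above, the basic Hebbian update can be written in a few lines, together with a crude reward-modulated variant of the kind just alluded to. Both rules and all parameter values here are illustrative assumptions, not a model of what the brain actually computes.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out = 8, 4
    W = rng.normal(0.0, 0.1, (n_in, n_out))
    eta = 0.01

    def hebbian_step(W, x):
        """Plain Hebb rule: strengthen weights between co-active units.
        Unsupervised -- no per-neuron error signal is needed."""
        y = np.tanh(x @ W)             # postsynaptic activity
        W += eta * np.outer(x, y)      # delta_w = eta * pre * post
        W *= 0.999                     # mild decay; plain Hebb grows without bound
        return W

    def reward_modulated_step(W, x, reward):
        """Three-factor variant: the Hebbian update is gated by a scalar
        reward signal, a crude stand-in for reward-induced learning."""
        y = np.tanh(x @ W)
        W += eta * reward * np.outer(x, y)
        return W

    x = rng.normal(0.0, 1.0, n_in)
    W = hebbian_step(W, x)
    W = reward_modulated_step(W, x, reward=1.0)

The plain rule needs only locally available quantities (the activities of the two connected units), which is what makes it biologically more plausible than backpropagation; the reward-modulated variant adds the single global signal suggested by Morillo (1992).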
Creating superintelligence through imitating the functioning of the human
brain requires two more things in addition to appropriate learning rules
(and sufficiently powerful hardware): it requires having an adequate initial
architecture and providing a rich flux of sensory input.
The latter prerequisite is easily provided even with present technology.
Using video cameras, microphones and tactile sensors, it is
...