Text 602, 209 lines
Written 2004-11-01 06:16:00 by Michael Ragland (1:278/230)
Subject: AI Evolution and Human Brain
=================================
Relative Advantages of Computer Programs, Minds-in-General, and the
Human Brain
© 2003 by Anand and Michael Anissimov
Knowledge of the comparative strengths and weaknesses of computer
programs, minds-in-general (including Artificial Intellects), and the
human brain is of paramount importance to rigorous analysis of the
benefits and risks of AI and human intelligence enhancement. Since
humans evolved for millions of years to model, outwit, and predict the
psychology of other humans, we lack the cognitive support for accurate
analysis of complex computer programs and future AIs, entities we never
encountered in our ancestral environment. Accordingly, we have a
tendency to ascribe characteristics to minds-in-general that are
actually unique to humans. This class of cognitive error is known as
"anthropomorphism". Anthropomorphism results in a widespread tendency to
pigeonhole future AIs as an awkward chimera between mechanical,
automatic computer programs such as Microsoft Word, and scheming,
repressed humans. Unlike computer programs or human beings, AIs will be
an entirely new class of mind; in no way obligated to behave in
accordance with our default intuitions, except insofar as they are
programmed (or reprogram themselves) that way. Considering that humans
possess many design features that derive directly from our adaptive
history and biological constraints, and that Artificial Intellects would
not be created by the same underlying process, nor would they be
composed of the same materials, we have little reason to anticipate that
AIs in general would gravitate towards specific humanlike
characteristics.
For example, some thinkers have assumed that if human beings "set a
positive example" for AIs, then that will increase the probability that
such AIs will behave in psychologically balanced, pro-social, or
altruistic ways. But this is not necessarily the case. The cognitive
machinery underlying the human tendency to absorb norms and morals from
our peers and society is exquisitely complex and was shaped by
evolution for highly specific functions. We have no reason to believe
such complex machinery would arise spontaneously in AIs without being
explicitly programmed, any more than we should expect a tornado passing
through a junkyard to construct a 747 airplane. If we did choose to
construct AIs with the tendency to absorb norms from peers, we would be
free to structure the absorption algorithm in ways entirely different
from the way it works in Homo sapiens sapiens: our particular method of
moral absorption is one among many, and does not represent any
theoretical ideal of moral, psychological, or inferential optimality.
"Setting a
good example" may be a good approach to fostering morality in some AI
designs, but in others it could be totally useless.
Just as we should not expect AIs to have humanlike goals or morals by
default, we should not expect AIs to have humanlike levels of
intelligence, areas of specialty, or cognitive abilities. We must
consider the probable underlying hardware and inherent advantages or
disadvantages AIs might have relative to humans or computer programs. In
an effort to encourage more detailed analysis of these issues, the
following are three lists of relative advantages among computer
programs, minds-in-general, and the human brain.
Advantages of Computer Programs over the Human Brain
The following advantages of computer programs over the human brain do
not necessarily apply to minds-in-general.
More design freedom, including ease of modification and duplication,
and the capability to debug, reboot, back up, and try numerous designs.
The ability to perform complex tasks without making human-type mistakes,
such as mistakes caused by lack of focus, energy, attention or memory.
The ability to perform extended tasks at far greater serial speeds than
conscious human thought or individual neurons, which perform
approximately 200 calculations per second. Present computing chips
(~2 GHz) therefore have a roughly ten-million-to-one serial speed
advantage over our neurons (a rough calculation follows this list).
The in-principle capacity to function 24 hours a day, seven days a
week, 365 days a year.
The human brain, by contrast, cannot be duplicated or "rebooted," and
has already been heavily "optimized" by evolutionary design, making it
difficult to improve further.
The human brain does not physically integrate well, externally or
internally, with contemporary hardware and software.
The absence of "boredom" when performing repetitive tasks.
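As a rough check on the serial speed figures above, here is a minimal
Python sketch that recomputes the ratio from the article's own numbers;
the 200 calculations-per-second neuron rate and the ~2 GHz clock are
the article's stated assumptions, not independent measurements:

    # Toy arithmetic only: compares the article's quoted figures.
    NEURON_CALCS_PER_SECOND = 200       # article's estimate for one neuron
    CHIP_CLOCK_HZ = 2_000_000_000       # ~2 GHz chip, as quoted above

    ratio = CHIP_CLOCK_HZ / NEURON_CALCS_PER_SECOND
    print(f"Serial speed advantage: {ratio:,.0f} to 1")  # 10,000,000 to 1

The ratio works out to ten million to one, matching the figure quoted
in the list.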
Advantages of the Human Brain over Minds-in-General
Present AIs lack human general intelligence and multiple years of
real-world experience.
The computational capacity of the human brain is estimated at 2 * 10^16
calculations per second (20 million billion), which is twenty times
greater than the 10^15 calculations per second (1 million billion) that
the supercomputer Blue Gene was predicted to reach by 2005 (the
arithmetic is sketched after this list).
However, the human brain may not hold this computational advantage for
much longer. Ray Kurzweil, for example, predicts that the computational
capacity of the human brain will be matched by supercomputers, or
clustered systems, by 2010, and by personal computers by 2020.
The human brain has already achieved a high level of complexity and
"optimization" through design by evolution, and thus has proven
functionality.
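The capacity comparison above is a simple ratio; the following minimal
Python sketch merely rechecks the "twenty times greater" claim from the
article's quoted estimates (both figures are the article's, not
independent measurements):

    # Toy arithmetic only: rechecks the "twenty times greater" claim.
    BRAIN_CALCS_PER_SECOND = 2e16       # article's estimate of brain capacity
    BLUE_GENE_CALCS_PER_SECOND = 1e15   # Blue Gene's predicted 2005 capacity

    factor = BRAIN_CALCS_PER_SECOND / BLUE_GENE_CALCS_PER_SECOND
    print(f"Brain / Blue Gene ratio: {factor:.0f}x")  # prints 20x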
Advantages of Minds-in-General over the Human Brain
The following are not advantages of specific AI approaches, but rather
advantages of minds-in-general over the human brain.
An increased ability to acquire, retrieve, store and use information on
the Internet, which contains most human knowledge.
Lack of human failings that result from complex functional adaptations,
such as observer-biased beliefs or rationalization.
Lack of neurobiological features that limit human control over
functionality.
Lack of complexity that we have acquired from evolutionary design, e.g.,
unnecessary autonomic processes and sexual reproduction.
The ability to improve on the designs of evolution, which is
continually constrained by its blindness, by the requirement to
maintain preexisting designs, and by its weakness at handling
simultaneous dependencies.
The ability to add more computational power to a particular feature or
problem. This may result in moderate or substantial improvements to
preexisting intelligence. (AI does not have an upper limit on
computational capacity; we do.) Note that, in accordance with Moore's
Law, available computing power is predicted to keep growing
exponentially, roughly doubling in speed and halving in cost every
12-24 months.
The ability to analyze and modify every design level and feature.
The ability to combine autonomic and deliberative processes.
The ability to communicate and share information (abilities, concepts,
memories, thoughts) at a greater rate and on a deeper level than we
can.
The ability to control what is and what is not learned or remembered.
The ability to create new modalities that we lack, such as a modality
for code, which may improve the AI's programming ability far beyond our
own by making the AI inherently "native" to programming (a modality for
code might, for example, allow the AI to perceive its hardware's
machine code, i.e., the low-level language in which the AI itself is
expressed, among other abilities).
The ability to learn new information very rapidly.
The ability to consciously create, analyze, modify, and improve
abilities, concepts, or memories.
The ability to operate on computer hardware that has powerful advantages
over human neurons, such as the ability to perform billions of
sequential steps per second.
The capacity to self-observe and self-understand at a fine-grained
level that is impossible for us. AIs may have an improved capacity for
introspection and self-modification, such as the ability to introspect
on and manipulate their own code, which occupies a functional level
comparable to that of human neurons, something we can neither think
about directly nor manipulate.
The most important and powerful capacity of minds-in-general over the
human brain is the ability to recursively self-encapsulate and
self-improve. As a mind becomes smarter, it can use its intelligence to
improve its own design, thereby improving its intelligence, which may
allow further improvements to its design, thus allowing further
improvements to its intelligence (a toy numerical sketch of this
feedback loop follows this list). It is unknown when open-ended
self-improvement may begin. A conservative assumption is that it
requires human-similar general intelligence, but it may begin before
then, and it is important to plan nonconservatively.
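The self-improvement feedback loop described in the last item can be
made concrete with a deliberately crude toy model. The Python sketch
below simply assumes that each redesign multiplies capability by a
factor that itself grows with current capability; the function name,
parameters, and growth rule are hypothetical illustrations invented
here, not anything specified in the article or the cited paper:

    # Toy model only: every parameter below is hypothetical and purely
    # illustrative; nothing here comes from the article or the cited paper.
    def recursive_self_improvement(capability=1.0, design_gain=0.10,
                                   generations=10):
        # Each generation, the mind applies its current capability to
        # redesign itself; the redesign multiplies capability by
        # (1 + design_gain * capability), a crude stand-in for
        # "a smarter mind makes better improvements".
        history = [capability]
        for _ in range(generations):
            capability *= 1.0 + design_gain * capability
            history.append(capability)
        return history

    for step, level in enumerate(recursive_self_improvement()):
        print(f"generation {step:2d}: capability {level:.2f}")

Because the per-generation gain scales with the current capability,
growth compounds faster than a fixed exponential; that compounding,
rather than the particular numbers, is the point of the item above.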
The advantages of minds-in-general, and of self-improving AIs in
particular, are elaborated at greater length in the third section of
the paper "Levels of Organization in General Intelligence", released by
the Singularity Institute for Artificial Intelligence in 2002.
General References
Kurzweil, Ray. 2001. The Law of Accelerating Returns. KurzweilAI.net,
March 7, 2001.
Singularity Institute. 2001. Creating Friendly AI.
Singularity Institute. 2001. What is Seed AI?
Voss, Peter. 2001. Why Machines Will Become Hyperintelligent before
Humans Do.
Yudkowsky, Eliezer. 2002. Levels of Organization in General
Intelligence. To appear in Advances in Artificial General Intelligence,
Goertzel and Pennachin, eds.
"Tiny green men might have been a better experiment."
Stephen Hawking
(paraphrasing from a "Universe in a Nutshell".