Text 8752, 210 lines
Written 2006-09-21 18:26:00 by Robert E Starr JR (9249.babylon5)
Subject: Re: Atheists: America's m
=================================
* * * This message was from Josh Hill to rec.arts.sf.tv.babylon5.m * * *
* * * and has been forwarded to you by Lord Time * * *
-----------------------------------------------
@MSGID: <kg86h219l17lrk5embtq1akomskjkbjqtl@4ax.com>
On Wed, 20 Sep 2006 19:48:25 -0500, Carl <cengman7@hotmail.com> wrote:
>
>
>Josh Hill wrote:
>> On Fri, 21 Jul 2006 17:40:33 -0500, "Carl" <cengman7@hotmail.com>
>> wrote:
>>
>>> "Josh Hill" <usereplyto@gmail.com> wrote in message
>>> news:nc8vb29meqilgk385nndnvui13ip7h8hbr@4ax.com...
>>>> On Wed, 19 Jul 2006 19:18:32 -0500, "Carl" <cengman7@hotmail.com>
>>>> wrote:
>>>>
>>>>> "Josh Hill" <usereplyto@gmail.com> wrote in message
>>>>> news:pdssb21ipo7pbhk8pk33rbtumq7o3ardd1@4ax.com...
>>>>>> On Tue, 18 Jul 2006 17:01:03 -0700, "Vorlonagent"
>>>>>> <nojtspam@otfresno.com> wrote:
>>>>>>
>>>>>>
>>>>>>>>>> BTW, I'm not so sure that the band and the computer are so
>>>>>>>>>> different,
>>>>>>>>>> except insofar as bands are more sophisticated than today's
>>>>>>>>>> computers,
>>>>>>>>>> which have the cognitive sophistication of insects (no snide remarks
>>>>>>>>>> about rap, please). And I don't see any reason why the computer
>>>>>>>>>> couldn't be given the same emotional drives as we have.
>>>>>>>>> At the moment, that's a statement far more composed of faith than
>>>>>>>>> fact.
>>>>>>>> I very much disagree. It's composed of thought and observation. A
>>>>>>>> toaster has emotions. Ask yourself what emotions are, and you'll reach
>>>>>>>> the same conclusion I did.
>>>>>>> Don't I get a say in this?
>>>>>>>
>>>>>>> Toasters may have emotions but it'd take a technological shaman to find
>>>>>>> them.
>>>>>> Not if you look at emotions analytically and ask what they are, rather
>>>>>> than approaching them from a subjective level -- I /feel/ this way,
>>>>>> I'm flesh and blood, the toaster isn't, so I have something magical
>>>>>> that the toaster doesn't.
>>>>>>
>>>>>> One way of looking at it is the Turing approach. Two teletypes, one
>>>>>> with a man behind it, one a computer. If the computer reacts the same
>>>>>> way as the man when you ask it whether it loves its daughter or
>>>>>> when you insult its mother, you can't distinguish them.
>>>>>
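
[The setup itself is almost trivially small to code -- everything hard
lives in the responders. A throwaway Python sketch, with both responders
as made-up stand-ins, not real programs:]

import random

def human(question):
    # Stand-in for the person at one teletype (invented for illustration).
    return "Of course I love my daughter."

def machine(question):
    # Stand-in for the program at the other teletype.
    return "Of course I love my daughter."

def judge(question):
    # Present both answers in random order; if they read the same,
    # the judge can only guess which teletype hides the machine.
    parties = [human, machine]
    random.shuffle(parties)
    return [respond(question) for respond in parties]

print(judge("Do you love your daughter?"))
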
>>>>> I remember the old Eliza program (I even had the code); it, and any
>>>>> number of others that could be written in short order, could fool
>>>>> someone for a short time into thinking there was someone on the
>>>>> other side. That's simulated intelligence, not real intelligence,
>>>>> and there is a huge difference. Extending that to emotions is an
>>>>> even bigger stretch.
>>>>>
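[For concreteness, Eliza-class mimicry is little more than a pattern
table. A minimal Python sketch of the idea, with rules invented here
for illustration:]

import re

# A few ELIZA-style rules: match a keyword, echo a canned reflection.
# Simulated conversation, not understanding.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
]

def eliza(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Please go on."          # the all-purpose dodge

print(eliza("I am unhappy"))        # -> How long have you been unhappy?

[A dozen such rules can hold up a conversation for a minute or two,
which is the whole point: the effect far outruns the mechanism.]
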
>>>>> Computers can't attribute meaning to anything. The number 32 might
>>>>> be a constant relating to the freezing point of water in Fahrenheit,
>>>>> the highest index in an array of values, the red component in the
>>>>> color of a single pixel displayed on a monitor, or the space between
>>>>> the words "Artificial Intelligence." The computer doesn't "know" or
>>>>> "care." Instructions, data... no meaning, just context.
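
[That much is easy to demonstrate -- a few lines of Python, the same
value, four readings, and nothing in the machine to prefer one. A toy
invented here for illustration:]

v = 32
print(v)               # the freezing point of water in Fahrenheit?
print(repr(chr(v)))    # ASCII 32: the space between "Artificial" and "Intelligence"
data = list(range(v + 1))
print(data[v])         # the highest index in a 33-element array
print((v, 128, 64))    # the red component of a single pixel?
# Same bits each time; the "meaning" is supplied by whoever reads them.
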
>>>> That's not quite true -- a computer can be programmed to understand
>>>> context. More to the point, we now know how to make a neural net,
>>>> which allows a computer to develop its own contextual understanding in
>>>> the same way the brain does. So what we're dealing with now is
>>>> practical quantitative limits and R&D. Computers just aren't fast
>>>> enough yet to equal the processing power of the human brain. That will
>>>> change with time.
>>> No, a computer can be programmed to recognize context. That's a *very*
>>> different thing than understanding context.
>>>
>>> The computer doesn't "know" that the result that comes out of its
>>> programming has any meaning.
>>
>>> Some of the examples (facial recognition, etc), while not trivial, are
>>> largely taking an existing sample and comparing critical points (distance
>>> between the eyes, length of nose, etc) and allowing for variations because
>>> of differing angles of the comparison image. That's not intelligence.
>>>
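[That style of matching really is just geometry. A sketch of the idea,
with all landmarks and numbers made up for illustration:]

import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# (eye spacing, nose length, mouth width) in arbitrary units -- made up.
gallery = {"alice": (62.0, 48.5, 51.0), "bob": (66.5, 52.0, 49.0)}
probe = (63.1, 47.9, 50.6)          # measurements from a new image

best = min(gallery, key=lambda name: distance(gallery[name], probe))
print(best)   # -> alice: a match by geometry, with no notion of who Alice is
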
>>> The example in which the program came up with a third sentence from
>>> the previous two might be more interesting, but I'd have to know more
>>> before drawing any real conclusions. I could imagine such a program
>>> making some wildly inconsistent sentences too... but if the program
>>> actually understood the context, it would either "know" the subsequent
>>> sentence was incorrect, or the logic by which it actually was correct
>>> could be discerned. If the app were functioning at this level, it
>>> would be well known now rather than being years away.
>>>
>>> I infer from the fact that they're still working on it that they're
>>> modifying it to do a better job at simulated intelligence rather than
>>> actual intelligence in an artificial construct.
>>
>> "Level," I think, is the key here. The neural network is the stuff of
>> thought: we just have more of it than today's computers. So the
>> difference is, essentially, quantitative rather than qualitative, just
>> as it was in the case of our own evolution (last I heard, the
>> intelligence of a microprocessor was about equal to that of an
>> insect).
>
>You take for granted that the difference is quantitative. Why? Just
>because electrical signals are conducted through a physical system
>doesn't mean that the two systems are equivalent.
>
>I've seen no proof suggesting that the difference isn't qualitative as
>well as quantitative.
There is no proof either way, and won't be until we see the first real
AIs.
>If I programmed a large office building's lights to turn only certain
>lights on at a certain time and the result looked like a big smiley
>face from the outside, that would not suggest that the building was
>intelligent. If I tripled the number of light switches, it would not
>mean that the building was three times smarter just because the number
>of electrical switches came closer to what people have, nor is there
>anything to suggest that if I continued to increase the number of light
>switches that eventually the building would become intelligent.
>
>If a system were *really* intelligent, it would be able not only to
>spit out a sentence that was profound, but to know that it was profound.
>It would be able to discern when a sentence it spit out was nonsense.
>
>A truly intelligent program would be able to tell when an analogy is
>valid.
People don't always spit out sentences that are profound, and people
can't always recognize analogies. So really, all you've established is
that today's computers aren't as smart as people -- something that
isn't in dispute.
But the signs of progress in computing are obvious with no fundamental
theoretical barrier in sight. And we now have a pretty good idea of
how the brain processes data, and so know (if we didn't already) that
it's something that can be done in silicon, which can mimic a neural
network and, in some current AI applications, does.
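
To make "mimic a neural network" concrete: here's a toy net in plain
Python -- two inputs, three hidden units, one output -- that learns XOR
by gradient descent. A sketch only, invented for this post and nowhere
near the scale of a brain:

import math, random
random.seed(1)
H = 3                                   # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.7
for _ in range(30000):                  # may need more from an unlucky start
    (x1, x2), t = random.choice(data)
    h = [sig(w[0] * x1 + w[1] * x2 + b) for w, b in zip(W1, b1)]
    y = sig(sum(wj * hj for wj, hj in zip(W2, h)) + b2)
    dy = (y - t) * y * (1 - y)          # output delta
    for j in range(H):                  # backpropagate the error
        dh = dy * W2[j] * h[j] * (1 - h[j])
        W2[j] -= lr * dy * h[j]
        W1[j][0] -= lr * dh * x1
        W1[j][1] -= lr * dh * x2
        b1[j] -= lr * dh
    b2 -= lr * dy

for (x1, x2), t in data:
    h = [sig(w[0] * x1 + w[1] * x2 + b) for w, b in zip(W1, b1)]
    y = sig(sum(wj * hj for wj, hj in zip(W2, h)) + b2)
    print((x1, x2), "->", round(y, 2), "(target", t, ")")

Nothing in there was told what XOR is; the weights settle into it from
examples, which is the qualitative trick. The rest is quantity.
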
By way of contrast, there's no evidence whatsoever for something
magical or metaphysical in human thought. The brain works like
anything else.
>> I say "essentially" because we're clearly pretty far from reproducing
>> a complete array of specialized brain structures, including those that
>> allow sophisticated self-awareness (even a simple computer can and
>> generally does have a model that reflects some aspects of its current
>> state, so it does have self-awareness to some extent).
>
>
>If I set up a thousand set mouse traps with ping pong balls in a small
>room and threw one ping pong ball in the middle to set off the
>equivalent of a chain reaction...it could be thought to simulate
>a nuclear reaction. If I then moved the simulation to a computer
>and made a much better simulation... it's a better simulation... it's
>not qualitatively closer to being a nuclear reaction.
This computer detects a bad analogy: you're using "thought" where the
proper analogy is "brain" --

  simulated reaction : actual reaction :: computer : brain

or

  mathematics of simulated reaction : mathematics of actual reaction
    :: computer thought : human thought

but not

  simulated reaction : actual reaction :: computer : thought
>> As to simulated intelligence, I'm not sure that there's as broad a
>> difference between it and the real thing as one might at first
>> suspect. See Turing . . .
>
>Some politicians are made to look intelligent by their handlers.
>Looking intelligent is not the same as being intelligent; the
>differences are crucial.
>
>Any fool can hear something profound; an intelligent person knows what
>the words mean and why it's profound.
Yes, but I believe that that overstates the difference between
intelligence and non-intelligence. Intelligence is a matter of degree.
A 100-neuron flatworm possesses intelligence: it can learn from
experience, it can make decisions, it is even self-aware, that is, it
has a simple model of some aspects of its own state. Similarly, an
intelligence "emulation" routine that recognizes a few key phrases and
responds with some degree of appropriateness to them expresses some
small degree of intelligence. On the flip side, there are times when a
human being's intelligence is not adequate to solve a problem, and
there are times when human beings emit erroneous packaged responses in
response to stimuli, just like an intelligence emulation program.
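
To put "a simple model of some aspects of its own state" in code: a toy
agent that tracks its own energy level and learns from experience which
action restores it. Everything here is invented for illustration -- a
caricature of a flatworm, not a claim about one:

import random
random.seed(0)

state = {"energy": 5}                  # the agent's model of itself
value = {"rest": 0.0, "wander": 0.0}   # learned worth of each action

def act(action):
    # Environment: resting restores energy, wandering burns it.
    return +1 if action == "rest" else -1

for step in range(50):
    if state["energy"] < 3:            # a decision driven by the self-model
        action = max(value, key=value.get)
    else:
        action = random.choice(list(value))
    reward = act(action)
    state["energy"] += reward
    value[action] += 0.1 * (reward - value[action])   # learn from experience

print(value)   # "rest" ends up valued above "wander"

Crude as it is, it learns, it decides, and it consults a model of its
own condition -- intelligence as a matter of degree, not of kind.
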
--
Josh
[Truly] I say to you, [...] angel [...] power will be able to see that [...]
these to whom [...] holy generations [...]. After Jesus said this, he departed.
- The Gospel of Judas
--- SBBSecho 2.11-Win32
* Origin: Time Warp of the Future BBS - Home of League 10 (1:14/400)