Text 8708, 161 lines
Written 2006-09-20 19:53:00 by Robert E Starr JR (9205.babylon5)
Subject: Re: Atheists: America's m
=================================
* * * This message was from Carl to rec.arts.sf.tv.babylon5.m * * *
* * * and has been forwarded to you by Lord Time * * *
-----------------------------------------------
@MSGID: <gcudnRYTiOrGfIzYnZ2dnUVZ_vqdnZ2d@comcast.com>
@REPLY: <2j93h216vgj16hono9njciqlv8lqmmo79i@4ax.com>
Josh Hill wrote:
> On Fri, 21 Jul 2006 17:40:33 -0500, "Carl" <cengman7@hotmail.com>
> wrote:
>
>> "Josh Hill" <usereplyto@gmail.com> wrote in message
>> news:nc8vb29meqilgk385nndnvui13ip7h8hbr@4ax.com...
>>> On Wed, 19 Jul 2006 19:18:32 -0500, "Carl" <cengman7@hotmail.com>
>>> wrote:
>>>
>>>> "Josh Hill" <usereplyto@gmail.com> wrote in message
>>>> news:pdssb21ipo7pbhk8pk33rbtumq7o3ardd1@4ax.com...
>>>>> On Tue, 18 Jul 2006 17:01:03 -0700, "Vorlonagent"
>>>>> <nojtspam@otfresno.com> wrote:
>>>>>
>>>>>
>>>>>>>>> BTW, I'm not so sure that the band and the computer are so
>>>>>>>>> different,
>>>>>>>>> except insofar as bands are more sophisticated than today's
>>>>>>>>> computers,
>>>>>>>>> which have the cognitive sophistication of insects (no snide remarks
>>>>>>>>> about rap, please). And I don't see any reason why the computer
>>>>>>>>> couldn't be given the same emotional drives as we have.
>>>>>>>> At the moment, that's a statement far more composed of faith than
>>>>>>>> fact.
>>>>>>> I very much disagree. It's composed of thought and observation. A
>>>>>>> toaster has emotions. Ask yourself what emotions are, and you'll reach
>>>>>>> the same conclusion I did.
>>>>>> Don't I get a say in this?
>>>>>>
>>>>>> Toasters may have emotions but it'd take a technological shaman to find
>>>>>> them.
>>>>> Not if you look at emotions analytically and ask what they are, rather
>>>>> than approaching them from a subjective level -- I /feel/ this way,
>>>>> I'm flesh and blood, the toaster isn't, so I have something magical
>>>>> that the toaster doesn't.
>>>>>
>>>>> One way of looking at it is the Turing approach. Two teletypes, one
>>>>> with a man behind it, one a computer. If the computer reacts the same
>>>>> way as the man when you ask it whether it loves its daughter or insult
>>>>> its mother, you can't distinguish them.
>>>>
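[For concreteness, a toy version of that two-teletype setup might look
like the Python sketch below. It is purely illustrative; both respond_*
functions are invented stand-ins for a real person and a real chatbot.]

import random

# A toy harness in the spirit of Turing's imitation game: a judge puts
# questions to two hidden responders and must guess which is the machine.
def respond_human(question):
    return input("(human, please answer) " + question + "\n> ")

def respond_machine(question):
    return "Of course I love my daughter."   # canned chatbot reply

def imitation_game(questions):
    responders = [respond_human, respond_machine]
    random.shuffle(responders)               # hide which teletype is which
    hidden = dict(zip("AB", responders))
    for q in questions:
        for label, respond in hidden.items():
            print(label + ": " + respond(q))
    return input("Which teletype is the machine, A or B? ")

imitation_game(["Do you love your daughter?"])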
>>>> I remember the old Eliza program (I even had the code), as well as a
>>>> number of others that could be written in short order and could fool
>>>> someone for a short time into thinking there was someone on the other
>>>> side. That's simulated intelligence, not real intelligence, and there
>>>> is a huge difference. Extending that to emotions doesn't change
>>>> anything.
>>>>
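[The heart of an Eliza-style program is nothing but pattern matching. A
minimal Python sketch of the idea -- my own reconstruction from memory,
not the code I had -- might look like this:]

import re

# A minimal Eliza-style responder: a few regex rules with canned
# templates. There is no understanding anywhere in it -- only
# matching and substitution.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.I), "You seem quite sure."),
]

def eliza(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Please go on."    # default when no rule matches

print(eliza("I feel lonely"))            # Why do you feel lonely?
print(eliza("Tell me about my mother"))  # Tell me more about your mother.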
>>>> Computers can't attribute meaning to anything. The number 32 might be
>>>> a constant relating to the freezing point of water in Fahrenheit, the
>>>> highest index in an array of values, the red component in the color of
>>>> a single pixel displayed on a monitor, or the space between the words
>>>> "Artificial Intelligence." The computer doesn't "know" or "care."
>>>> Instructions, data... no meaning, just context.
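[That point can be made literal in a few lines of Python -- an
illustrative sketch; the variable names are arbitrary:]

FREEZING_F = 32               # a physical constant, to us
values = list(range(33))
last_index = len(values) - 1  # 32 as the highest index in an array
pixel = (32, 0, 0)            # 32 as the red component of one pixel
separator = chr(32)           # 32 as the ASCII space character

phrase = "Artificial" + separator + "Intelligence"
print(FREEZING_F == last_index == pixel[0] == ord(separator))  # True
print(phrase)                 # Artificial Intelligence

[The stored bits are identical in every case; the "meaning" lives in the
surrounding program and in us, not in the machine.]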
>>> That's not quite true -- a computer can be programmed to understand
>>> context. More to the point, we now know how to make a neural net,
>>> which allows a computer to develop its own contextual understanding in
>>> the same way the brain does. So what we're dealing with now is
>>> practical quantitative limits and R&D. Computers just aren't fast
>>> enough yet to equal the processing power of the human brain. That will
>>> change with time.
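[For concreteness, a net of the kind Josh describes is not given rules;
it adjusts its weights from examples. Here is a tiny NumPy sketch of my
own, learning XOR, the classic toy case -- nobody's production code:]

import numpy as np

rng = np.random.default_rng(0)
# A constant 1 is appended to each input so the weights include a bias.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(3, 4))    # input -> hidden weights
W2 = rng.normal(size=(4, 1))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1)         # hidden activations: learned features
    out = sigmoid(h @ W2)       # network's prediction
    d_out = (out - y) * out * (1 - out)   # backpropagated error signal
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out     # nudge the weights downhill
    W1 -= 0.5 * X.T @ d_h

print(np.round(out, 2))         # typically close to [[0],[1],[1],[0]]

[The hidden layer ends up encoding features nobody programmed in
explicitly; whether that already counts as "understanding" is exactly
what is in dispute here.]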
>> No, a computer can be programmed to recognize context. That's a *very*
>> different thing than understanding context.
>>
>> The computer doesn't "know" that the result that comes out of its
>> programming has any meaning.
>
>> Some of the examples (facial recognition, etc.), while not trivial,
>> largely amount to taking an existing sample, comparing critical points
>> (distance between the eyes, length of nose, etc.), and allowing for
>> variations due to the differing angle of the comparison image. That's
>> not intelligence.
>>
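[A skeletal version of the comparison I mean might look like this in
Python; the landmark names and the threshold are hypothetical, chosen
purely for illustration:]

import math

def features(landmarks):
    """landmarks: dict mapping names to (x, y) points."""
    d = lambda a, b: math.dist(landmarks[a], landmarks[b])
    eye_span = d("left_eye", "right_eye")
    # Normalize by eye span so scale and angle variations partly cancel.
    return (d("nose_tip", "chin") / eye_span,
            d("left_eye", "nose_tip") / eye_span,
            d("right_eye", "nose_tip") / eye_span)

def same_face(a, b, tolerance=0.15):
    # Two faces "match" when their feature vectors are close enough.
    return math.dist(features(a), features(b)) < tolerance

[No understanding involved: measurement plus a tolerance check.]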
>> The example in which the program came up with a third sentence from the
>> previous two might be more interesting, but I'd have to know more
>> before drawing any real conclusions. I could imagine such a program
>> making some wildly inconsistent sentences too... but if the program
>> actually understood the context, it would either "know" the subsequent
>> sentence was incorrect, or the logic by which it actually was correct
>> could be discerned. If the app were functioning at this level, it would
>> be well known now rather than being years away.
>>
>> I infer from the fact that they're still working on it that they're
>> modifying it to do a better job at simulated intelligence rather than
>> at actual intelligence in an artificial construct.
>
> "Level," I think, is the key here. The neural network is the stuff of
> thought: we just have more of it than today's computers. So the
> difference is, essentially, quantitative rather than qualitative, just
> as it was in the case of our own evolution (last I heard, the
> intelligence of a microprocessor was about equal to that of an
> insect).
You take for granted that the difference is quantitative. Why? Just
because electrical signals are conducted through a physical system
doesn't mean that the two systems are equivalent.
I've seen no proof suggesting that the difference isn't qualitative as
well as quantitative.
If I programmed a large office building's lights so that only certain
lights turned on at a certain time, and the result looked like a big
smiley face from the outside, that would not suggest that the building
was intelligent. If I tripled the number of light switches, it would not
mean that the building was three times smarter just because the number
of electrical switches came closer to the number of neurons people have,
nor is there anything to suggest that if I kept increasing the number of
light switches, the building would eventually become intelligent.
If a system were *really* intelligent, it would not only be able to
spit out a sentence that was profound, it would know it was profound.
It would be able to discern when the sentence it spat out was nonsense.
A truly intelligent program would be able to tell when an analogy is
valid.
>
> I say "essentially" because we're clearly pretty far from reproducing
> a complete array of specialized brain structures, including those that
> allow sophisticated self-awareness (even a simple computer can and
> generally does have a model that reflects some aspects of its current
> state, so it does have self-awareness to some extent).
If I set up a thousand loaded mousetraps with ping-pong balls in a small
room and threw one ping-pong ball into the middle to set off the
equivalent of a chain reaction... it could be thought to simulate a
nuclear reaction. If I then moved the simulation to a computer and made
a much better simulation... it's a better simulation... it's not
qualitatively closer to being a nuclear reaction.
>
> As to simulated intelligence, I'm not sure that there's as broad a
> difference between it and the real thing as one might at first
> suspect. See Turing . . .
Some politicians are made to look intelligent by their handlers.
Looking intelligent is not the same as being intelligent; the
difference is crucial.
Any fool can hear something profound; an intelligent person knows what
the words mean and why it's profound.
Carl
Nihil est--In vita priore ego imperator romanus fui
--- SBBSecho 2.11-Win32
* Origin: Time Warp of the Future BBS - Home of League 10 (1:14/400)