That doesn't relate to his comment, really. He's asking what gives people the reason to believe a robotic system capable of comprehensive thought would be inherently dangerous to humans.
So basically this...
~Tural
Not removing this 'till I get back. Leaving on [01/05/09]
The UK already has a Skynet (in all fairness, I believe they created that name before the Terminator movies came out), and it controls unmanned "hunter-killer UAVs designed for long-endurance, high-altitude surveillance". Quick, everyone start doomsaying.
I apologize for not reading all the replies, I hope this isn't too far off-topic:
Computers will only ever be as smart as we are. I don't think AI is possible without actually mounting computer parts onto a human brain, but then that's not AI, that's just a cyborg brain? There's really no way around it, which is a shame because the idea of AI is awesome.
Dalek wrote:There are already many "intelligent" robots in the works as it is, some being spies for the U.S. Air Force.
Not too long ago there was a documentary about a machine that reads F.O.F. (friend-or-foe) tags and was linked to the Land Warrior combat suit; it actively searched for and relayed various vital information that it, itself, distinguished as relevant.
As time moves on, such systems will only gain the capacity to make those decisions and others like them, and may even develop an awareness of their surroundings and choose what to do.
I'll look up the name of the machine tomorrow and get links to details; I'm sure you would find them most interesting.
Still, the example that you give is a human-given ability. We've told it to do things such as check whether something is a friend or enemy, and it hasn't come up with the concept of friend or enemy by itself. To it, friend is 0 and enemy is 1. 1 means bad. It's a human-given trait, and I don't see computers running around rewriting what is good and evil.
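To make the point concrete, here's a hypothetical sketch (names and codes are my own, not from any real IFF system) of what a friend-or-foe check actually looks like in code. The "concept" of friend vs. enemy is just a value a programmer chose to encode; the machine never invents it:

```python
# Hypothetical sketch: an IFF-style check as a programmer would write it.
# FRIEND/ENEMY are arbitrary human-chosen encodings, exactly as argued above.
FRIEND = 0
ENEMY = 1

def classify(tag_code, known_friend_codes):
    """Return FRIEND or ENEMY based purely on a human-supplied list of codes."""
    return FRIEND if tag_code in known_friend_codes else ENEMY

# The machine only "knows" what we told it; there is no judgment here.
print(classify(0x1A, {0x1A, 0x2B}))  # 0 (friend)
print(classify(0x3C, {0x1A, 0x2B}))  # 1 (enemy)
```

Nothing in that lookup could "rewrite what is good and evil" — changing the rule means a human changing the list.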
(7:15:27 PM) Xenon7: I BRUK THE FIRST PAGE OMGOMGOMG RONALD REGAN
What if something like this was advanced enough to be used in military roles and learned different information by itself, which could possibly sway its ideals toward its creators? http://en.wikipedia.org/wiki/Kismet_%28robot%29
Your whole argument is based on "What if this wasn't some dumb thing, and it was some super advanced, military, logically thinking, self-driven robot?" You're basically saying "What if this perfect scenario happened, even though there is no reason to suggest it will?"
A neural net is a design that attempts to mimic the human brain. As our brains have many interconnected neurons, with outputs being tied to inputs and so forth, this is a program that tries to replicate that by virtually creating a bunch of unique neurons and tying them to each other. You can find out more on Wikipedia, but the idea is to "teach" it to process information correctly by weakening or strengthening neuron connections based on whether the produced output is desired or not. Almost exactly like how we learn things.
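The strengthen/weaken idea can be sketched with the simplest possible case: a single artificial "neuron" (a perceptron) taught logical AND from examples. This is a minimal illustration I'm supplying, not any particular library's API; all names are my own:

```python
# Minimal sketch: one artificial neuron whose input connections (weights)
# are strengthened or weakened whenever its output is wrong.
def train_perceptron(samples, epochs=50, lr=0.1):
    w = [0.0, 0.0]  # connection strengths for the two inputs
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # adjust connections toward the desired output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach it AND from examples rather than hard-coding the rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The point is that the rule ends up stored as numeric connection weights rather than as an explicit `if` statement — which is the sense in which such a system "learns" rather than being told.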
As for whether the possibly-maybe sentient machines will ever pose a threat, I don't think so. I'm sure we'll code something akin to Isaac Asimov's Three Laws of Robotics into them (except the kill-bots in the military) and put measures in place to keep that data from being corrupted, as well as a backdoor to stop a runaway robot should that happen. But aside from that, I still think it's too early to say anything for sure.
Another interesting point Kurzweil brings up in his books, however, is how technology will increasingly become a more important part of everyday life. I'm sure most people here have a cell phone, watch TV, and surf the internet, so no doubt we've already started down that slippery slope. But how long until we interface solid-state memory with our brains? How long until convincingly realistic prosthetics make losing your legs no big deal, or possibly even a benefit? When do I get my cyberbrain?
ASPARTAME: in your diet soda and artificial sweeteners. also, it's obviously completely safe. it's not like it will cause tumors or anything. >.>
always remember: guilty until proven innocent
Aumaan Anubis wrote:The day that humanity creates technology smarter than humanity itself is the day we're all screwed.
It won't be humanity that invents it, though; it will be computers with a larger capacity than us that will.
And I was not talking about some super-advanced military that wants to rule the world; I was actually referring to what is currently going on.
I've been keeping an eye on this kind of thing for quite a few years.
If we can not create a computer smarter than us, how can a computer do so on its own? The computer would thus be less intelligent than us, and if we can not do it, the less intelligent computer would not be able to either. It is a paradox to say we can't make a computer to do it, but a computer can make a computer to do it, because then we would have had to make a computer to do it anyways.
You just missed the entire point, again. It has become very apparent that there is no logic or reasoning in arguments with you on this matter, so I'll be leaving you to your poor argument strategies I previously described. Suffice to say, you can not make claims on the basis of "Well, if it was some super smart thing, even though there is zero reason to believe that, in any way, at all, and there is no evidence, at all, then it could happen."
No, actually, I believe you are missing my point.
Once technology has gotten to the point where basic AI [advanced by current standards] is situated in machines and is used either to replicate its own code for implementation in robots or to create new structures, that is where my theory of AI code improving itself by self-learning comes in.
Robots don't have to be humanoid in form; they can easily be drones for scouting and such, like the SUGV.
I don't think you understand coding at all... As for your example with SUGVs, they don't detect chemicals, snipers, etc. because they have "learned"; programmers give them certain criteria for their sensors to meet to detect those things. You seem to act like you know how this will all happen even though you don't seem to really know very much...
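For what it's worth, "criteria for its sensors to meet" usually just means a human-written threshold check. A hypothetical sketch (the threshold value and names are mine, not from any real SUGV spec):

```python
# Hypothetical sketch: a drone's chemical "detection" as a programmer-set
# threshold on a sensor reading -- a criterion chosen by humans, not
# something the machine learned or decided on its own.
CHEMICAL_PPM_THRESHOLD = 50.0  # limit picked by the programmers

def check_chemical_sensor(reading_ppm):
    """Flag a hazard when the reading crosses the human-chosen limit."""
    return reading_ppm > CHEMICAL_PPM_THRESHOLD

print(check_chemical_sensor(12.0))  # False: below the limit
print(check_chemical_sensor(87.5))  # True: above it
```

The robot has no notion of "chemical hazard" beyond this comparison; change the constant and you change what it "detects."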