the singularity [discussion]
if you've read any of Kurzweil's books, you'll know what i'm talking about right away, but otherwise, here's a brief summary:
for mankind, one of the most important things is tools, whether they're tangible or not. things like fire, spears, and language are what gave ancient man the edge to survive over the beasts. as time went on, newer, more complicated tools were developed, like different languages, basic math, irrigation, and agriculture. fast forward to the present day: mechanical calculators led to electronic counterparts, which in the long run made the computers of today an inevitability.
as computers continue to increase in speed and capacity according to moore's law, a supercomputer with intelligence greater than a human's becomes an inevitability (unless the human race dies out or something), which also means that this kind of technology will be unavoidable in everyday life (i personally think it already is).
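the usual back-of-envelope math behind this claim is just exponential extrapolation. here's a minimal sketch; the brain estimate, the starting figure, and the doubling period are rough illustrative assumptions, not numbers from Kurzweil's actual tables:

```python
import math

# rough, illustrative numbers only: a common Kurzweil-style estimate puts the
# brain around 10^16 operations/second; assume ~10^15 FLOPS for a top
# supercomputer today and a 2-year moore's-law-style doubling period.
BRAIN_OPS_PER_SEC = 1e16
SUPERCOMPUTER_FLOPS = 1e15
DOUBLING_PERIOD_YEARS = 2.0

# solve SUPERCOMPUTER_FLOPS * 2^(t / DOUBLING_PERIOD_YEARS) = BRAIN_OPS_PER_SEC
years = DOUBLING_PERIOD_YEARS * math.log2(BRAIN_OPS_PER_SEC / SUPERCOMPUTER_FLOPS)
print(f"~{years:.0f} years until raw compute matches the brain estimate")
```

of course, the contentious step is equating raw operations per second with intelligence, which is what the rest of this thread argues about.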
so what do you think of a machine being more intelligent than you? do you think they will ever become sentient? do you like or dislike this prediction of the future?
I was reminded of this when i read tural's post in sneakyn8's locked thread. he mentioned the aztec calendars ending in 2012 not indicating the end of the world, but the end of an era. could 2012 be the year a machine passes a turing test? becomes self-conscious? becomes the overlord of the inferior meatbags?
ASPARTAME: in your diet soda and artificial sweeteners. also, it's obviously completely safe. it's not like it will cause tumors or anything. >.>
always remember: guilty until proven innocent
Cryticfarm
I also heard that futurists are predicting that in about 20 years, they will develop nanobots that can be injected into your body to make you extremely smart. It is also believed that in 600 years, there will be a computer capable of simulating the universe. I do think this can happen; I'm not sure what would happen, or if I would care or not, but bleh.
I wish people would just stop and think for a second. So what if the Aztec calendar ends at 2012? Does anyone think that the people died out before they had a chance to continue it?
Also, I think it's possible for machines to surpass us in intelligence. They would have to be able to think about problems logically and have an unlimited learning capacity, which as of right now isn't possible, so I wouldn't worry about it.

D4rkFire wrote: "Does anyone think that the people died out before they had a chance to continue it?"

Heh. Don't make claims when you are uncertain about the information in the first place. That is not the way it works.
As for the actual comment:
[cc]z@nd! wrote: "I was reminded of this when i read tural's post in sneakyn8's locked thread. he mentioned the aztec calendars ending in 2012 not indicating the end of the world, but the end of an era. could 2012 be the year a machine passes a turing test? becomes self-conscious? becomes the overlord of the inferior meatbags?"

No, not really. The end of the world is much, much more plausible. The calendars were based on astrology. They can predict many events that way, and it signals the time of some astrological event that happens very rarely, can't recall what right now. But there is no indication, in any way, that somehow the Aztecs could predict the future of man-made items and events. That's just nonsense. If anything did happen, it would be pure coincidence.
Tural wrote: "Heh. Don't make claims when you are uncertain about the information in the first place. That is not the way it works."

I wasn't making any claims, I was merely stating that it could be possible that they just stopped making it. I do not know for sure, but some people tend to believe the conspiracy theories that are wrapped around that subject.
Now that I look back at what I wrote, I should have said, "Does anyone think that the people just stopped making the calendar?"

D4rkFire wrote: "I wasn't making any claims, I was merely stating that it could be possible that they just stopped making it. I do not know for sure, but some people tend to believe the conspiracy theories that are wrapped around that subject. Now that I look back at what I wrote, I should have said, 'Does anyone think that the people just stopped making the calendar?'"

People always look for big answers to big questions. Pick any big event in history. Nobody is ever satisfied thinking something can be so simple. It is human nature.

That being said, they did not just stop making it, as I said. There is a significance to the Mayan (stop confusing me by saying Aztecs, people =p) end-date. For what it's worth, December 21, 2012 is the winter solstice. The long count of their calendar at that time is 13.0.0.0.0, which is where they chose to stop. Here's a nice summary of why they chose to end there:

"For early Mesoamerican skywatchers, the slow approach of the winter solstice sun to the Sacred Tree was seen as a critical process, the culmination of which was surely worthy of being called 13.0.0.0.0, the end of a World Age. The channel would then be open through the winter solstice doorway, up the Sacred Tree, the Xibalba be, to the center of the churning heavens, the Heart of Sky."

It was simply based on an astronomical event and their beliefs about it. That is why it ends there. It was purposeful, and it was significant. They didn't just stop for no reason, but they did not, in any way, predict the end of the world. You are incorrect in any assumption that the date has no significance.
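If you want to verify the 13.0.0.0.0 date yourself, the arithmetic is simple. A quick sketch, assuming the standard unit values and the widely used GMT correlation placing the long count epoch at Julian Day Number 584,283:

```python
from datetime import date

# the long count epoch 0.0.0.0.0 falls on Julian Day Number 584,283 under the
# GMT correlation; 1,721,425 converts a (noon) JDN to Python's date ordinal.
GMT_CORRELATION_JDN = 584_283
JDN_TO_ORDINAL = 1_721_425

def long_count_to_gregorian(baktun, katun=0, tun=0, uinal=0, kin=0):
    """Convert a Maya long count (e.g. 13.0.0.0.0) to a Gregorian date."""
    # 20 kin = 1 uinal, 18 uinal = 1 tun, 20 tun = 1 katun, 20 katun = 1 baktun
    days = (((baktun * 20 + katun) * 20 + tun) * 18 + uinal) * 20 + kin
    return date.fromordinal(GMT_CORRELATION_JDN + days - JDN_TO_ORDINAL)

print(long_count_to_gregorian(13))  # 2012-12-21, the winter solstice
```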
shadowkhas
Tural wrote: "They can predict many events that way, and it signals the time of some astrological event that happens very rarely, can't recall what right now."

I think it was when the Earth crosses the galactic horizontal plane, but I've seen theories that say that happens anywhere from every 50 years to every 50 million years, so... :/
On the topic of computer intelligence, define intelligence. Is it raw computing capability and knowledge? If so, I don't doubt it at all. A computer can store vast amounts of data and recall it more quickly than a human in many cases. How many times have you forgotten something that you swear you know, and then gone to Wikipedia for the answer?
If you're considering intelligence under the idea of sentience and self-awareness, I highly doubt that computers will ever achieve that. Computers only follow what we instruct them to do, so I don't see it as a possibility.
(7:15:27 PM) Xenon7: I BRUK THE FIRST PAGE OMGOMGOMG RONALD REGAN
Re: the singularity [discussion]
Far-fetched. Kurzweil watches Terminator too much.
I also don't get the idea of a self-conscious computer being naturally insanely dangerous.
Re: the singularity [discussion]
Danke wrote: "Far-fetched. Kurzweil watches Terminator too much. I also don't get the idea of a self-conscious computer being naturally insanely dangerous."

here's a good example of the AI I referred to.
Re: the singularity [discussion]
Dalek wrote: "here's a good example of the AI I referred to."

That doesn't relate to his comment, really. He's asking what gives people reason to believe a robotic system capable of comprehensive thought would be inherently dangerous to humans.
shadowkhas wrote: "I think it was when the Earth crosses the galactic horizontal plane, but I've seen theories that say that happens anywhere from every 50 years to every 50 million years, so... :/"

From what I can remember from a show on the History Channel, it was when the Earth, the Sun, and the center of the Milky Way pass the cross made by the horizontal and vertical galactic planes. It is also supposed to happen sometime between 1992 and 2012.

Anyway, I think that computers will not be able to think by themselves. As other people have posted, computers do what we tell them, and only what we tell them. Sometimes they have choices, but those choices were decided by us, not by them. So unless they can somehow think for themselves, I do not think they pose any threat to us.
Dalek wrote: "A child learns the way of life from their surroundings and parents. A robot which can think would more than likely be put into a military role, such as a soldier. As such, all that machine would know is violence and killing, and it would be more vulnerable to hacking or misuse. There are already many 'intelligent' robots in the works as it is, some being spies for the U.S. Air Force. Not too long ago there was a documentary about a machine that reads F.O.F. tags and was linked to the Land Warrior combat suit, which actively searched for and relayed various vital information that it, itself, distinguished as relevant. As time moves on, such systems will only grow in their capacity to make these kinds of decisions, and may even develop a sentience for their surroundings and choose what to do. I'll look up the name of the machine tomorrow and get links to details; I'm sure you would find them most interesting."

Your claim is based entirely on an assumption which there is little evidence to support at all. Your argument only works if the dramatic situation happens. That being said, that would still not make them inherently dangerous; that would make them changed to be dangerous. When they were made, they would not be dangerous to humans, which is what we were asking. There is no reason to believe that if we had intelligent robots, they would decide to become violent towards humans of their own free will. You have no ability to claim they would be violent, as there is no evidence to suggest that.
yes, computers can only do what we tell them, but there are ways to tell them to solve problems and behave in certain ways. personally, i think the only reason a machine wouldn't become sentient is that the programmers on the project would find it unethical to give it that capability.
the best example i can come up with off the top of my head is this: an experiment done quite a long time ago using self-replicating programs (it sounds a lot like Tom Ray's Tierra, if you want to look it up). they ran inside a larger program which allocated all the necessary memory from the OS to be their environment, and also ended processes that became too old, creating a separate world where the programs could thrive. this 'master' program also had the ability to change a random bit in-transfer when a program replicated itself, creating the element of random mutation.
as i recall (possibly incorrectly on minor details), it started with a single 80 byte (or bit?) program that re-wrote a copy of itself in memory. as this program and all the programs it created ran, random mutations would occur on random transfers. most of the time, this would break the program, but before long, two different "species" emerged, one 81 bytes, the other 79. these worked fine by themselves, and before long, a fourth variety of the program came to be, but it was significantly shorter (in the neighborhood of 26 bytes) than the others. as its code was examined, it was discovered to act like a parasite, using the code of other programs in memory to replicate instead of doing it itself.
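here's a minimal toy sketch of that mutation-during-copy mechanism, just to make the idea concrete. everything here (the soup size, mutation rate, and the inert all-zero 'genome') is made up for illustration; it's nowhere near the real experiment:

```python
import random

random.seed(42)
MUTATION_RATE = 0.1   # chance of flipping one bit per copy (made-up number)
SOUP_CAPACITY = 50    # how many programs the 'world' holds before reaping

def replicate(genome: bytes) -> bytes:
    """copy a genome, sometimes flipping a single random bit in transit."""
    copy = bytearray(genome)
    if random.random() < MUTATION_RATE:
        i = random.randrange(len(copy) * 8)
        copy[i // 8] ^= 1 << (i % 8)  # the 'master' program's random bit flip
    return bytes(copy)

ancestor = bytes(80)   # stand-in for the original 80-byte self-copier
soup = [ancestor]
for _ in range(1000):
    soup.append(replicate(random.choice(soup)))
    if len(soup) > SOUP_CAPACITY:
        soup.pop(0)    # the reaper: oldest programs die first

print(f"{len(set(soup))} distinct 'species' in the final soup")
```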
i don't remember all the details, but the point i was trying to get across is that simply by chance, a block of machine code could be changed by a computer and still work correctly. now take note of the several programs available to obfuscate code by getting rid of blank space and changing identifier names, and it becomes painfully obvious that a program can be written that changes its own code while still accomplishing its goal.
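to see how little machinery that last point actually needs, here's a tiny demonstration: mechanically rewriting a program's text (renaming an identifier) without changing what it computes. the function name and the rewriting scheme are invented for the demo:

```python
import random
import re

# a throwaway function, stored as source text so we can rewrite it.
source = "def step(x):\n    return x * 2 + 1\n"

# mechanically rename the identifier 'step' to something random.
new_name = "fn_" + "".join(random.choices("abcdef", k=6))
mutated = re.sub(r"\bstep\b", new_name, source)

ns_old, ns_new = {}, {}
exec(source, ns_old)
exec(mutated, ns_new)

# different code, same behavior
assert ns_old["step"](10) == ns_new[new_name](10)
print("rewritten program still works:", ns_new[new_name](10))
```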
as i see it, this can be the beginning of AI that learns and adapts itself to its environment, or is able to be repurposed from one task to another without the intervention of a human. and then there's neural nets and all sorts of other things...
ASPARTAME: in your diet soda and artificial sweeteners. also, it's obviously completely safe. it's not like it will cause tumors or anything. >.>
always remember: guilty until proven innocent