
Friday, April 16, 2010

The Singularity and Ethics


Is the Singularity really coming, and how will it affect us? So far, unfortunately, both questions are unanswerable. Let's assume that the Singularity is indeed approaching and will arrive on the timeline Kurzweil predicts. The ramifications are unclear because it is difficult to predict what this will mean for us as humans. Certainly it would appear troubling, particularly to those of us who have watched The Terminator, that this prediction includes a reality in which machines exceed the intelligence of the average human brain. In that scenario, machines could theoretically build ever more intelligent machines without human intervention or help. If these machines have no inherent concept of ethics or morals, isn't there a strong likelihood that human and machine goals could diverge? Isn't it also possible that machines could at some point decide that humans aren't "useful" or "necessary"?

If you believe that humans are imbued from birth with some supernatural conception of ethics, this scenario should be particularly frightening to you. If, however, you believe as I do that ethics and morals are the contrivance of a biological survival mechanism, then you should be far more optimistic about the future of super-intelligent machines. If machines become highly intelligent (and to be "more intelligent" than humans, I believe they would have to be self-aware), it is not at all unlikely that the biological survival mechanism that yields morality in humans would be mirrored by a mechanical survival mechanism yielding a similar machine "morality". In fact, if this were not the case, I would argue that the machines, however intelligent, were not yet smarter than humans for that very reason.

Another ramification of super-intelligent machines is the God paradox. To me, God occupies something like the asymptote that Kurzweil calls the Singularity: a deification of everything we don't know. Setting aside God's other purported capabilities (see the Omnipotence Paradox), if a machine eventually came to know "all things", and could in fact predict all things from perfect knowledge of the universe and its mechanisms, how would this square with that aspect of God's identity? Could the machine replace God? Wouldn't it undermine our notions of Free Will? In this crazy world, I wouldn't doubt that at least some people would begin worshiping such a machine. Ultimately, as history has shown us, only time will tell, and I find it unlikely that we have the means to stop the locomotive of technological advancement at this point.

Wednesday, April 14, 2010

Reverse Engineering of the Brain

Today in class we discussed the implications of reverse engineering the brain. Harnessing the brain's complexity could allow machines to learn and think autonomously. EngineeringChallenges.org published an article titled "Reverse Engineer The Brain", which explains how the brain works:

"Nerve cells communicate by firing electrical pulses that release small molecules called neurotransmitters, chemical messengers that hop from one nerve cell to a neighbor, inducing the neighbor to fire a signal of its own (or, in some cases, inhibiting the neighbor from sending signals). Because each nerve cell receives messages from tens of thousands of others, and circuits of nerve cells link up in complex networks, it is extremely difficult to completely trace the signaling pathways. Furthermore, the code itself is complex — nerve cells fire at different rates, depending on the sum of incoming messages. Sometimes the signaling is generated in rapid-fire bursts; sometimes it is more leisurely. And much of mental function seems based on the firing of multiple nerve cells around the brain in synchrony. Teasing out and analyzing all the complexities of nerve cell signals, their dynamics, pathways, and feedback loops, presents a major challenge."

Dr. Henry Markram leads the Blue Brain Project. His documentary video, shown here, predicts that we will have successfully reverse engineered the brain by 2020. This means we could have machines that can actually think and learn within just ten years. A change of that magnitude would require everyone to adapt dramatically, and it brings us that much closer to the Singularity.

Another article I found, published by the IEEE in 2008, predicts that in ten years we'll have mapped only the brain of a fruit fly. How long do you think it will take for us to reach this technology? How will it be used, and what will become of it?