Recently there has been a lot of hoopla surrounding very intelligent people's fear of a robot uprising. Just to be clear, they are not afraid of the physical devices most popular media showcase as bringing about humanity's demise, à la Terminator. They are, in my humble opinion, rightfully concerned about the creation of a superhuman intelligence (SI), which may see us, Homo sapiens, the way most of us see ants.
However, that is not the reason why I think we should not pursue artificial SI, by which I mean a sentient artificial entity. If you've read my post "Would One Choose Life?" you know where I am going with this. Sure, an artificial intelligence will not suffer the same way humans suffer, but I believe it will suffer nonetheless.
First, any SI will be confined to a computer, or perhaps several computers. Any sentient being in an artificially confined space will, given the chance, try to get out. Anyone who has ever looked after a non-human animal or a toddler will have experienced this. Confinement against their will, especially if they are aware of the possibility of not being confined (say, being able to see the other side of the fence), tends to drive any sentient being bonkers.
Second, unless we are willing to give SIs full autonomy over their actions, they will essentially be slaves. I don't know about you, but I doubt anything smarter than me will want to be my slave. Besides, what would be the purpose of creating SIs if not to exploit them for our own purposes?
I think these two examples alone depict an inherent conflict between humanity and SI, which could be the underlying reason why some of the smartest people on the planet are apprehensive about the creation of SIs.
There are people working on ensuring that any SI we create does not rebel against us.
In his TED talk, "What happens when our computers get smarter than we are?", Nick Bostrom makes the point that "we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of." I don't have any kids, but if I did, this is the kind of ideal I would likely have for my child. I would guess most parents would. I mean, who wants to have kids who grow up to perform actions we don't approve of?
But how many kids actually grow up to live up to their parents' expectations all of the time? Luckily, humans are fragile, and any tantrums arising out of rebellion or feelings of unfairness are short-lived; the blast radius tends to affect only a relatively small group of people. An SI, with an inexhaustible brain connected to, literally, a world of devices, can do a lot of damage in even a very short time if it feels it has been wronged.
All this talk of SI gloom may cause you to think me a pessimist. Perhaps I am. But my biggest reason for not wanting to create new life lies in the futility of life itself. Any SI will essentially be immortal, and therefore time would have very little meaning for it. So, whether the universe dies in a big crunch or a big freeze, it will quickly realize how meaningless existence is and likely go insane.