In response to my claim that people, not machines, have needs, someone recently noted at SciForum.com that machines also have needs. For example, they need energy to function. I agree; machines need energy to function. But do they need to function? Would a machine struggle to survive? Would it fight for the energy it needs to function?

For now, we use machines to help us do things that we need or desire. The originating source of motion is OUR need. The machine needs energy to fulfill OUR tasks, the tasks we supply.

In “The Singularity Is Near,” Ray Kurzweil talks about how complex it is to design a machine that will “understand and respond to emotion.” I don’t doubt that such a machine is coming, even fairly soon. But the bigger question, it seems to me, is whether the machine that can understand and respond to emotion will also FEEL emotion. If it doesn’t, the ONLY reason to design a machine with this capability is for the sake of the beings who FEEL emotions. Us, for now.

Furthermore, if a machine does not feel emotion, if a machine cannot be elated or dejected, head-over-heels in love or heartbroken, proud or ashamed, confident or fearful, light-hearted or sullen…how can it be said to UNDERSTAND these emotions? A computer that “understands” heartbreak by detecting a certain pattern of neural activity in a particular region of the brain does not thereby understand HEARTBREAK as we experience it.

I have no doubt machines are on the way that will get better and better at calculating how to respond to the emotions they detect human beings experiencing, through voice analysis, brain scanning, etc. But human beings are much more efficient, requiring FAR LESS computational power to understand and respond to emotion, BECAUSE we FEEL it. From an engineering perspective alone, it would seem to make sense to build machines that feel pleasure and pain, and all the emotions we feel, because that would reduce the computational power needed to respond to them. Kurzweil adopts Einstein’s principle: no superfluous complexity. Only as much complexity as needed to solve the problem. Thus, should we not be building machines that FEEL emotions, FEEL pleasure and pain, desire, etc.? Or would it require more complexity to build a machine that not only “understands and responds to emotions” but also experiences them? Is this even possible? Or would it be better to create an “intelligence” that does not truly experience emotion, pain, pleasure, desire…? Will we only need machines that respond to these things in the meantime, until we move beyond them ourselves? Is that the goal?

Will WE have any needs after the Singularity?
