The Ethics of Artificial Intelligence

March 19, 2010

The field of artificial intelligence is developing rapidly. It is entirely possible that we will soon create independently rational computer programs, programs that can think for themselves. From an ethical perspective, how should we deal with such a situation?

It could be argued that such a being would not be deserving of ethical treatment, as its actions would be completely deterministic. However, ethical decisions are not necessarily based on free will — besides, we don’t have free will either.

The treatment owed to artificial beings depends on which school of moral philosophy one follows. Some would argue that moral consideration is owed only to rational beings. A deontologist, for example, would argue that any rational being has a certain moral worth. In that sense, an artificial being should be morally no different from a person.

However, that is not the perspective I will be taking. I take a more consequentialist view: moral rights are not inherent, but are justified only insofar as granting them leads to the preferred consequences. How we should treat an artificial being depends entirely on the circumstances of the situation.

If this rational being is capable of desires, then we have a moral obligation not to inhibit it from fulfilling those desires (provided they are not harmful). This seems simple enough; it is no different from how we are obligated to treat any other person. Problems arise, though, when we ask how we could ever know that an artificial being is rational in the first place.

One way to assess whether a computer program is rational is to administer the Turing test: if a person can hold a conversation with the program and cannot reliably tell it apart from an actual intelligent person, then the program can be considered intelligent. But is this test sufficient?
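
To make the setup concrete, here is a rough sketch in Python of what such a test harness might look like. This is only an illustration under assumed interfaces: the judge, human, and machine objects, along with their ask, reply, and identify_machine methods, are hypothetical stand-ins, not any standard API.

    import random

    def turing_test(judge, human, machine, rounds=5):
        # Blind setup: randomly assign the machine to slot "A" or "B"
        # so the judge cannot rely on position.
        slots = {"A": human, "B": machine}
        if random.random() < 0.5:
            slots = {"A": machine, "B": human}

        transcript = []
        for _ in range(rounds):
            question = judge.ask(transcript)         # hypothetical method
            for label, respondent in slots.items():
                answer = respondent.reply(question)  # hypothetical method
                transcript.append((label, question, answer))

        guess = judge.identify_machine(transcript)   # judge names "A" or "B"
        machine_label = "A" if slots["A"] is machine else "B"
        # The machine "passes" this run if the judge guesses wrong.
        return guess != machine_label

Notice that nothing in this procedure ever inspects how the answers are produced; the test sees only conversational behavior, which is exactly why one might doubt that it is sufficient.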

This raises another interesting question: how do we know that other people are rational beings? We assume that they are, but how we actually know this seems rather mysterious. Do I really know that you are rational because of how I've seen you behave, or do I merely assume that you are rational because I am rational and you are like me? Is that assumption sufficient?

A being could be considered rational if it can examine a situation and deduce a conclusion, or if it can learn from previous experience and use that new knowledge to better decide what actions to take in similar situations. A computer, though, might cheat this test. Instead of using whatever it is that we humans use to decide what to do, it might simply use brute force: simulate thousands of possible outcomes and then determine which one is best. Is this really cheating, or does it still count?
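
For the sake of illustration, here is a rough sketch in Python of that brute-force style of deliberation. The simulate and score functions are hypothetical stand-ins for a world model and a preference ordering; the only point is the shape of the loop: try everything, keep the best.

    def choose_action(state, actions, simulate, score):
        # Brute-force deliberation: try every available action in a
        # simulated world and keep whichever outcome scores best.
        best_action, best_score = None, float("-inf")
        for action in actions:
            outcome = simulate(state, action)  # imagined consequence of acting
            value = score(outcome)             # how desirable is that result?
            if value > best_score:
                best_action, best_score = action, value
        return best_action

Whether a loop like this counts as genuine rationality, or as a shortcut around it, is exactly the question.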

Interestingly enough, I do not believe that this sort of brute force is truly cheating. Perhaps trying every possible combination would not amount to true rationality. But even we humans simulate different possible outcomes before we act. Have you ever been about to make an important decision, and first stopped to consider what might happen if you decided one way or the other? We weigh outcomes like this all the time. It could hardly be considered cheating when computers do it. Unless, of course, it is also cheating when we do it.

I’ll leave you with that thought. If you have any responses, I encourage you to leave a comment.
