The field of artificial intelligence is developing rapidly. It is entirely possible that we will soon create independently rational computer programs: programs that can think for themselves. From an ethical perspective, how should we deal with such a situation?
It could be argued that such a being would not be deserving of ethical treatment, as its actions would be completely deterministic. However, ethical decisions are not necessarily based on free will — besides, we don’t have free will either.
The treatment warranted to artificial beings depends on which school of morality one follows. Some would argue that moral treatment is warranted only to rational beings. A deontologist, for example, would argue that any rational being has a certain moral worth. In that sense, an artificial being should be morally no different from a person.
That, however, is not the perspective I will be taking. I believe that moral rights are not inherent; they are grounded in whichever rights lead to the preferred consequences. How we should treat an artificial being depends entirely on the circumstances of the situation.
If this rational being is capable of desires, then we have a moral obligation not to inhibit it from fulfilling them (provided that those desires are not harmful). This seems simple enough; it is no different from how we are obligated to treat any other person. Problems may arise, though, when we consider how we could possibly know that an artificial being is rational.
One way to gauge how rational a computer program is would be to administer the Turing test: if a person can hold a conversation with the computer and cannot tell the difference between it and an actual intelligent person, then it can be considered intelligent. But is this test sufficient?
This raises another interesting question: how do we know that other people are rational beings? We assume that they are, but how we actually know this seems rather mysterious. Do I really know that you are rational because of how I’ve seen you behave, or do I merely assume that you are rational because I am rational and you are like me? And could that assumption possibly be sufficient?
A being could be considered rational if it can take a situation and deduce a conclusion. A being could be considered rational if it can learn from previous experiences and use its new knowledge to better deduce what actions to take in certain situations. Computers, though, might cheat at this. Instead of using whatever it is that we humans use to decide what to do, a computer might simply use brute force: simulate thousands of possible outcomes and then determine which one is best. Is this really cheating, or does it still count?
Interestingly enough, I do not believe that this sort of brute force is truly cheating. Perhaps trying every possible combination would not be true rationality. But even we humans simulate different possible outcomes before we try things. Have you ever been about to make an important decision, and first stopped to consider what might happen if you decide one way or the other? We make this kind of decision all the time. It could hardly be considered cheating when computers do it. Unless, of course, it is cheating when we do it.
I’ll leave you with that thought. If you have any responses, I encourage you to leave a comment.
This is the story of one man, on a quest to write an interpreter. This is a man who did not have the stomach to read The Dragon Book at more than twenty pages per week, but didn’t want to wait until he finished that epic tome. No, this man was far too eager.
This man knew little of interpreters or compilers. He knew that he used them often, but didn’t have a clue as to their inner workings. Armed only with his wits and his trusty K&R, he ventured alone into Compilerland.
Our man, whose name we shall say was Jackson, knew exactly where he wanted to begin. He started by constructing a program capable of performing numerical calculations. But he did not want these numbers to be limited by the machine’s fixed-size integers. For the first week, he tried building a system capable of performing arbitrary-precision arithmetic using those primitive devices that you know as arrays. Jackson was writing this program in C, so he struggled with the inflexibility of C’s fixed-size arrays. Soon he overcame this struggle, but to his dismay, his arbitrary-precision arithmetic system was far too slow. It was time to call for backup.
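An array-based bignum along the lines Jackson tried might look something like this sketch (the names and layout here are invented for illustration; his actual code isn’t shown in the post). Digits are stored least-significant first, and addition is the schoolbook carry-propagation algorithm:

```c
#include <stdio.h>

/* A toy fixed-capacity bignum: base-10 digits, least-significant first.
   (Illustrative sketch only; a real one would grow dynamically.) */
#define MAXDIGITS 1024

typedef struct {
    int len;                          /* number of digits in use */
    unsigned char digit[MAXDIGITS];   /* digit[0] is the ones place */
} BigNum;

void big_from_uint(BigNum *n, unsigned v) {
    n->len = 0;
    do { n->digit[n->len++] = v % 10; v /= 10; } while (v);
}

/* Schoolbook addition with carry propagation. */
void big_add(BigNum *out, const BigNum *a, const BigNum *b) {
    int carry = 0;
    out->len = 0;
    for (int i = 0; i < a->len || i < b->len || carry; i++) {
        int d = carry;
        if (i < a->len) d += a->digit[i];
        if (i < b->len) d += b->digit[i];
        out->digit[out->len++] = d % 10;
        carry = d / 10;
    }
}

void big_print(const BigNum *n) {
    for (int i = n->len - 1; i >= 0; i--)
        putchar('0' + n->digit[i]);
    putchar('\n');
}
```

One digit per byte, base 10, is the simplest version to write, and also part of why this approach tends to be slow: production bignum libraries pack full machine words per limb.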
Jackson turned to GMP, the world’s greatest bignum library. With this new ally at his side, he was able to push on into Compilerland.
During the second week, he began constructing the platform for the objects in his interpreter. He was able to simulate dynamic typing by using some structs, unions, and a lot of functions. Before long, Jackson’s program could hold both integers and floats, perform various operations on them, and convert between them.
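The structs-and-unions trick for simulating dynamic typing is commonly called a tagged union. A minimal sketch of the idea (hypothetical names; Jackson’s actual layout isn’t shown in the post):

```c
/* One struct that can hold either an integer or a float,
   with a tag recording which one it currently holds. */
typedef enum { TYPE_INT, TYPE_FLOAT } ValueType;

typedef struct {
    ValueType type;
    union {
        long   i;
        double f;
    } as;
} Value;

Value make_int(long i)     { Value v; v.type = TYPE_INT;   v.as.i = i; return v; }
Value make_float(double f) { Value v; v.type = TYPE_FLOAT; v.as.f = f; return v; }

/* Addition that dispatches on the tags: int + int stays an int;
   anything involving a float converts both operands to float. */
Value value_add(Value a, Value b) {
    if (a.type == TYPE_INT && b.type == TYPE_INT)
        return make_int(a.as.i + b.as.i);
    double x = (a.type == TYPE_INT) ? (double)a.as.i : a.as.f;
    double y = (b.type == TYPE_INT) ? (double)b.as.i : b.as.f;
    return make_float(x + y);
}
```

Every operation branches on the tag, which is where “a lot of functions” comes from: each new type multiplies the cases to handle.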
During the third week, Jackson began externalizing his functions. This was his most difficult step yet. Having functions is one thing. But being able to read in a file and determine which functions you’re supposed to call is an entirely different matter.
Soon, though, our hero triumphed. All the bugs were worked out. He could type in arithmetic, like “3 + 5”, or even more advanced expressions, like “10 choose 2” or “4000 factorial”, and it would all evaluate properly — to 8, 45, and a number that I do not have room for here. Perhaps most significantly, he had implemented parentheses by evaluating them recursively. “(3 + 5) * 2” would become “8 * 2”, which would in turn return 16. It was beautiful.
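The recursive treatment of parentheses can be sketched with a miniature recursive-descent evaluator (an illustration of the idea, not Jackson’s actual code; it handles only nonnegative integers, `+`, and `*`):

```c
#include <ctype.h>

/* Tiny recursive-descent evaluator: an open paren recurses into a
   fresh expression, so "(3 + 5) * 2" evaluates the inner 8 first. */
static const char *p;               /* cursor into the expression string */

static long parse_expr(void);

static void skip_spaces(void) { while (isspace((unsigned char)*p)) p++; }

static long parse_factor(void) {    /* a number, or a parenthesized expression */
    skip_spaces();
    if (*p == '(') {
        p++;                        /* consume '(' */
        long v = parse_expr();      /* recurse: this is the whole trick */
        p++;                        /* consume ')' */
        return v;
    }
    long v = 0;
    while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
    return v;
}

static long parse_term(void) {      /* '*' binds tighter than '+' */
    long v = parse_factor();
    skip_spaces();
    while (*p == '*') { p++; v *= parse_factor(); skip_spaces(); }
    return v;
}

static long parse_expr(void) {
    long v = parse_term();
    skip_spaces();
    while (*p == '+') { p++; v += parse_term(); skip_spaces(); }
    return v;
}

long eval(const char *expr) { p = expr; return parse_expr(); }
```

There is no error handling here, and a global cursor would not survive contact with real users, but the shape — factor, term, expression, with parentheses restarting the recursion — is the classic one.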
Our hero’s exciting adventures will continue next week, when he implements strings and arrays!
You can tell a lot about a programming language from its native data structures. The main division in primary data structures seems to fall between imperative and functional programming languages.
Imperative languages use arrays, with elements held in consecutive memory addresses: lookup is constant-time, but inserting in the middle is slow, because every later element must be shifted over. Functional languages more often use linked lists, where inserting at the front is constant-time but lookup requires walking the list. So why the difference?
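The tradeoff fits in a few lines of C (the names here are invented for this post). Inserting at the front of an array means shuffling every element over; inserting at the front of a list is a single pointer splice:

```c
#include <string.h>
#include <stdlib.h>

/* O(n): shift the whole array right by one, then drop the new value in.
   (Assumes the caller allocated room for at least one more element.) */
void array_insert_front(int *a, int *len, int v) {
    memmove(a + 1, a, (size_t)*len * sizeof a[0]);
    a[0] = v;
    (*len)++;
}

typedef struct Cell { int value; struct Cell *next; } Cell;

/* O(1): allocate one cell and point it at the old head. */
Cell *list_insert_front(Cell *list, int v) {
    Cell *c = malloc(sizeof *c);
    c->value = v;
    c->next = list;
    return c;
}
```

Reading element *i* reverses the picture: `a[i]` is one address computation, while the list version must follow *i* `next` pointers.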
Imperative languages are systematic. They follow a series of instructions in order. Arrays are optimal for imperative languages: different elements can easily be accessed and changed whenever necessary. This can make a lot of things really easy, and a lot of other things really aggravating.
Functional languages, on the other hand, are less about following a series of instructions than about flowing in and out of different functions. Lists are optimal for functional programming for a variety of reasons. Perhaps the most obvious is that they are very easy to use with recursive functions: indeed, their very definition is recursive. Implementing the Quicksort algorithm, and others like it, is much easier with lists than with arrays. The most intuitive implementation of Quicksort repeatedly splits a data set around a pivot. With lists this is easy, but with arrays there is no simple way to do it. (Arrays can be quicksorted in place, but the index juggling is considerably more intricate.)
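Here is roughly what the list-style Quicksort looks like, written in C over a hand-rolled singly linked list (an illustrative sketch, not tuned code; a functional language would express the same shape in a few lines):

```c
#include <stdlib.h>

typedef struct Node { int value; struct Node *next; } Node;

Node *cons(int v, Node *rest) {
    Node *n = malloc(sizeof *n);
    n->value = v;
    n->next = rest;
    return n;
}

/* Glue two lists end to end (used to reassemble the recursion). */
Node *append(Node *a, Node *b) {
    if (!a) return b;
    a->next = append(a->next, b);
    return a;
}

/* List-style Quicksort: take the head as the pivot, split the rest
   into less-than and greater-or-equal, sort each half, append. */
Node *quicksort(Node *list) {
    if (!list || !list->next) return list;
    int pivot = list->value;
    Node *less = NULL, *geq = NULL;
    for (Node *p = list->next; p; ) {
        Node *next = p->next;
        if (p->value < pivot) { p->next = less; less = p; }
        else                  { p->next = geq;  geq  = p; }
        p = next;
    }
    list->next = quicksort(geq);            /* pivot heads the right half */
    return append(quicksort(less), list);
}
```

Splitting around the pivot is just two pointer splices per element, no shifting or swapping; the in-place array version has to manage partition indices instead.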
Lists are much better at adding elements, removing elements, and splitting into parts; but sometimes it is still useful to be able to modify elements in the middle of an array. In a purely functional language, though, there is little reason to do this, since values are immutable anyway. So each data structure is optimal for its own style of programming.