Artificial Intelligence: An Analysis

Essay Instructions:

There are three discussion prompts. Most students are expected to choose discussion prompt 1, so that the TAs can focus almost exclusively on it in discussion section. Considerable guidance is provided here on how to address prompt 1. Prompts 2 and 3 are provided for students with writing experience who are attracted to other topics. Prompts 2 and 3 are not intended to be harder, but the course instructors may not provide the same support: you are more on your own.

Papers should be approximately 5 pages long, double spaced, Times New Roman 12 pt font. This is a soft target. Less than 4 pages is allowed, but you should be concerned that you are missing opportunities to score credit. More than 6 pages will be penalized.

A first draft of the paper is due in paper form in your discussion section the week starting 3rd May. Make sure you leave time to print it out ahead of section. Papers that are not ready in paper form at the beginning of section will be marked late, and a penalty will be assessed on the first writing assignment. The penalty may also be assessed, at the TA's discretion, if the paper is not a good-faith draft, e.g. if it is excessively short or excessively under-written.

This draft of the paper will be swapped with other students for peer review, according to the TA's instructions. It is a waste of your peer's time to draw your attention to errors that you could correct yourself, so try to prepare a draft that is free from spelling errors and other simple problems. Peer review is most helpful for drawing attention to genuine weaknesses prior to grading. It is less helpful if the paper is just under-written because you didn't put in the time. If English is not your first language, peer review is especially helpful. It is of particular importance that you have a mature draft ready for peer review, so that your peers can help you with English language problems.
A final version of the paper should be submitted on Canvas on May 13th at 11:59pm.

First Prompt

Some people think that an AGI should be treated as an ethical patient and an ethical agent. But AGI might not have the same interests as humans, and may even lack an interest in self-preservation. Say why this is, and what problems it would cause for attempts to treat AGI as ethical agents and patients. Finish by discussing the possibility of defining "intelligence" in such a way that systems that lack human interests don't count. What problems might that cause?

Breakdown for First Prompt

You don't have to follow the outline below when responding to prompt 1, but it is provided as optional guidance.

1. Introduce the question of whether an AGI should be treated both as an ethical agent and an ethical patient. Explain what this means, clearly stating what you mean by the italicized terms and what the difference between them is. (1-2 paragraphs)

2. Explain, very roughly, why someone might say such a thing. Don't go into too much detail here. The idea is just to motivate the question for someone who hasn't considered it before, to provide "probable cause" that this is a question worth investigating, and therefore to justify conducting the investigation that will take up the rest of your paper. Because you are not trying to provide a complete answer at this stage, you should use language that signals an appropriate level of uncertainty. (1 paragraph)

3. Introduce the possibility that an AGI could be created that had no interest in self-preservation. Say how or why that could occur. (2-3 paragraphs)

4. Say how that might raise problems with respect to (a) treating AGI as ethical patients and (b) treating AGI as ethical agents. Give an example of each kind of problem. How might the ability to create AGI with strange non-human interests, including no interest in self-preservation, compromise the attempt to treat them as ethical patients?
And how might it complicate the attempt to treat them as ethical agents? (2-3 paragraphs)

5. A potential way to escape these problems is to define "intelligence" in such a way that, if a system lacks human interests like self-preservation, we simply won't count it as intelligent. Explain why this would allow us to treat AGI as ethical patients and agents without worrying about divergent interests, by narrowing the range of things that count as AGI. (1-2 paragraphs)

6. Should we define the term this way? Why/why not? Finish the paper by discussing the problems it may cause and justifying your preferred approach. (2+ paragraphs)

7. Summarize.

Hints & Preparatory Exercises

Many of our intuitions about whether machines can be punished or given rights depend on whether they are conscious, but the prompt doesn't provide much room to discuss that. The prompt raises only one kind of problem for giving AI rights and obligations, namely the problem of divergent interests, and only one solution strategy, namely solution by definition. Since it doesn't provide an opportunity for discussing consciousness, be careful about bringing it up: it might take you off topic.

For part 2, you might roughly argue that since we grant rights and obligations to intelligent beings (i.e. adult humans), people might think we should extend rights and obligations to intelligent machines. There's no need to get into detailed questions about sapience vs. sentience here, or to worry too much about being precise. This is just a rough introduction to the question, and your language should suggest that you are just sketching things roughly.

Here are some exercises to prepare for part 3. Much of the material you generate from these exercises will not go into the paper, but it will make you better prepared to write good content. Think about (or discuss with a friend) the example raised in class of the repair bot. Then think about cases where we build machines to do dangerous tasks.
What interests would be built into such machines? Would they have an interest in self-preservation? A thing cannot accomplish its goals if it does not exist. Does that mean there is a rational interest in self-preservation that all beings must have, if they have interests at all?

If you have time, the following exercises are also useful for part 3, though they do not bear as directly on the question. Discuss with a friend whether we humans get to choose our own interests, or whether we have no control over them. Consider interests like: your interest in living and being healthy, your interest in having friends, your interest in the welfare of your family and children, your romantic and sexual interests, your taste in food and music. Discuss with a friend whether you can switch these on and off at will. Is the difficulty people experience when dieting or giving up smoking due to the fact that they can't choose their desires? What about your desire to apply for a certain job, take a certain class, or visit a certain place? Are those more under your control? Or not? What's the difference?

Consider the difference between interests that are a means to an end and those that are ends in themselves. Try to explain the distinction clearly, to a friend, using examples. How does this distinction relate to the distinction between the interests you can and can't control? If your interests are not under your control and you didn't choose them, does that mean it is irrational to have them? Or just non-rational? What's the difference? Is it rational, irrational, or non-rational to pursue your interests, even if you didn't choose them?

Here are some exercises to prepare for part 4. For the part about ethical patients, talk with friends about whether it is ethical to allow someone to do your housework for free because that's what they want, where they only want it because they were genetically engineered to enjoy doing housework for free.
If it raises moral problems to genetically engineer people with exploitable interests, think about whether the same problems arise when it is an AI that has been created to do our housework, rather than a genetic modification of a human. Does it matter that it is made of metal rather than flesh and bone? Is there any difference? Would an AGI have to be made of metal? Could one be made from flesh and bone, but designed rather than evolved, e.g. with different kinds of cells, or no cells at all? Would these things affect the way you think of it ethically? If so, why?

For the part about ethical agents, think about how to punish an AGI that has no interest in self-preservation and has interests different from a human's. Discuss with friends how that might make it difficult to hold it genuinely accountable for its actions.

Part 6 has the most room for exploration. Here are some exercises to get you thinking in the right way. Again, much of the material you generate from these exercises will not go into the paper, but it will make you better prepared to write good content. Write down different ways in which we might try to define "intelligence". Do IQ tests provide a good test of intelligence? Do IQ tests measure how good someone is at making a joke or writing a poem? If not, are they missing something? How would a definition of intelligence based on IQ tests differ from one based on the Turing test? Which is better? Why? Does the term "intelligence" have to be defined via a test? Most terms aren't defined using tests. Why not? Are tests a good way to define words? Why/why not?

Discuss with friends whether we are allowed to define terms however we like. Though these might not have a direct bearing on the paper, it is interesting to ask questions like the following:
• Can I define "unicorns" as "things that exist that look like horses with horns"? Does that mean unicorns exist by definition?
• If I owe you ten dollars, can I redefine "ten" to mean "five"? Why not?
• Can't I choose to use words any way I please?
• If I say that all persons should have freedom of speech, the content of what I say depends on what is meant by "persons". Should that limit our freedom to define terms in any way we please?
• E.g. an oppressive regime might grant free speech to all "persons" but only count landowners or members of a particular group or political party as "persons".
• E.g. in the other direction, the definition might be opened up to let some surprising things count. The Supreme Court, for example, ruled that corporations count as persons for free-speech purposes.

With the previous exercise in mind, discuss with a friend how the right to freedom of speech, and any other right, depends on how terms like "person" and "speech" get defined. Discuss with a friend how the same goes for obligations. If I say that all persons should pay their debts, can you wriggle free of that obligation by refusing to identify as a person? If you are allowed to do that, does the rule no longer apply to you? How do "rights" vs. "obligations" map onto "ethical agent" vs. "ethical patient"? With the previous exercises in mind, discuss with a friend how, if we make personhood depend on intelligence, it matters how "intelligence" is defined.

That is a lot to discuss and think about. Remember that the point isn't to include everything you discuss in the eventual paper, but to get yourself thinking in the right way, and to play around with the issues surrounding the prompt. If you choose another prompt, think about ways to do similar exercises.

Second Prompt

Discuss whether AI is science or engineering. Your answer should focus on the paper by Newell and Simon. N&S regard AI as an attempt to investigate an empirical hypothesis about how intelligence is created. In your discussion, you should clearly explain what N&S's hypothesis is and contrast it with a rival hypothesis. Your explanation should say what they mean by describing their inquiry as "empirical".
Your discussion might also focus on any or all of the following: How might evidence for N&S's hypothesis be empirically acquired? Could the Turing test provide us with a way to acquire evidence for or against the hypothesis? If so, how? N&S describe several "laws of qualitative structure". These are examples in which science explains the behavior of large objects by reducing them to their parts, specifically to parts of the same sort (e.g. atoms, cells, tectonic plates). What "parts" do N&S think intelligence breaks down into? How might thinking this way allow them to provide scientific explanations of intelligence? If you can explain something by reducing it to its parts, does that automatically count as scientific explanation? Why/why not?

Third Prompt

The central insight of formal logic is that a reasoning pattern is valid in virtue of its form. E.g. all arguments of the form:

All P's are Q.
x is a P.
Therefore x is a Q.

...are valid, no matter what we put in for P and Q. Explain the connection between this observation and the Physical Symbol System Hypothesis by explaining how this allows software engineers to get machines to "reason" without first making them understand what they are reasoning about. In taking this approach, are software engineers reducing intelligence to symbol manipulation, or eliminating the need for intelligence in reasoning? Make the difference clear before choosing one answer and arguing for it. A good justification should consider and discharge opposing responses.
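The point of the Third Prompt can be made concrete with a short sketch. The toy program below (a hypothetical illustration, not part of the course materials) applies the syllogism's form by pattern-matching token shapes alone: it derives new facts from old ones without attaching any meaning to the symbols it manipulates.

```python
def infer(facts):
    """Derive new facts by matching the form:
    ('all', P, Q) + ('is', x, P)  ->  ('is', x, Q).
    The program matches tuple shapes only; it has no idea
    what any of the symbols mean."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in list(derived):
            for fact in list(derived):
                if rule[0] == "all" and fact[0] == "is" and fact[2] == rule[1]:
                    new = ("is", fact[1], rule[2])
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

facts = {("all", "human", "mortal"), ("is", "socrates", "human")}
result = infer(facts)
# ("is", "socrates", "mortal") now follows, even though the program
# attaches no meaning to "socrates", "human", or "mortal".
```

Swapping in any other symbols for "human" and "mortal" yields the corresponding conclusion, which is exactly the sense in which the validity of the pattern is a matter of form rather than content.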

Essay Sample Content Preview:

Artificial Intelligence
Student’s Name
University
Course
Professor
Date
Artificial Intelligence
Artificial intelligence, commonly abbreviated AI, is a vast branch of computer science concerned with building smart machines that can perform tasks that normally require human intelligence. Over the years, the rise of machine learning, neural networks, and deep learning has created a paradigm shift in almost every sector of the technology industry. One of the world's greatest mathematicians, Alan Turing, posed a question that for many years has remained unanswered: Can machines think?
In his paper "Computing Machinery and Intelligence," Turing (1950) anticipates the ultimate goal and vision of AI: to provide a heuristic approach to problem-solving and to learn from experience. His work formed the foundation for vast research in AI, and many modern-day works and discoveries are reinventions of his ideas. The rise of AI has generated numerous questions and extensive debate, and no single definition of the field has been widely accepted. One common definition is somewhat circular, as it states that AI is the branch that deals with building intelligent computers, which are machines that are artificially intelligent.
A significant grey area exists in that this broad definition does not say what intelligence actually is. This has led to a generalization of the definition: AI is the science and engineering of making intelligent machines. That, in turn, raises further questions and further debate in the field, the main one being: Is AI a science or an engineering discipline? Does AI simulate human intelligence and thinking by studying psychology or neurobiology? Is human biology relevant to the study of artificial intelligence? Can the intelligent behavior of machines be described using simple principles, for example logic? These questions have led to various opposing hypotheses and extensive research in the field of artificial intelligence.
The Physical Symbol System Hypothesis
In their paper "Computer Science as Empirical Inquiry: Symbols and Search," Newell and Simon (1976) approach artificial intelligence from an empirical perspective. They use the term "empirical" to suggest that AI is, in fact, an experimental science, and that every machine or program designed is an experiment. From a single experiment, scientists can learn more about the unknown on the basis of what they already know. Newell and Simon (1976) thus advance the view that AI is an empirical science, and they support this view by developing two notions: the notion of a physical symbol system and the notion of heuristic search.
Laws of Qualitative Structure
Newell and Simon (1976) then propose the Physical Symbol System Hypothesis, which states that symbols are the basic units of artificial intelligence: they act as the foundation for intelligent action. This hypothesis draws largely on the laws of qualitative structure, which postulate that a scientific domain is composed of several basic units into which the domain can be broken down. An example of such a law in practice is the cell doctrine in biology (Newell & Simon, 1976). Accordin...