EECS 690


  April 9
       A top-down approach:
• This approach is meant to generate a rule
  set from one or more specific ethical
  theories.
• Wallach and Allen start off pessimistic
  about the viability of this approach, but
  point out that adherence to rules is an
  aspect of morality that must still be
  captured.
     The Big Picture Theories (in Western thought)
• Utilitarianism
• Deontology
• The authors’ question about these theories will
  be what the computability requirements are for
  each. This approach may shed a unique light on
  the practice of morality itself.
             Consequentialism:
•    Utilitarianism (a subset of consequentialism)
     might initially appeal to us because of
     Bentham’s focus on calculability.
•    James Gips, in 1995, supplied this list of
     computational requirements for a
     consequentialist robot:
    1. A way of describing the situation in the world
    2. A way of generating possible actions
    3. A means of predicting the situation that would result
       if an action were taken given the current situation
    4. A method of evaluating a situation in terms of its
       goodness or desirability
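
A minimal sketch, assuming a toy world model, of how Gips’ four requirements might be wired together into a single decision loop. Every representation here (a dict of happiness scores, hand-labelled action effects, total happiness as the utility) is an invented placeholder rather than Gips’ own proposal, and each of the four functions hides one of the difficulties listed on the next slide.

  # A toy consequentialist chooser following Gips' four steps. The "world" is
  # just a dict of happiness scores: a stand-in, not a claim about how a real
  # situation would be represented.

  def describe_situation(world):
      # 1. A way of describing the situation in the world.
      return dict(world)

  def generate_actions(situation):
      # 2. A way of generating possible actions (hypothetical, hand-labelled effects).
      return {
          "do_nothing": {},
          "help_ann":   {"ann": +2},
          "help_bob":   {"bob": +4, "ann": -1},
      }

  def predict(situation, effects):
      # 3. Predict the situation that would result if the action were taken.
      result = dict(situation)
      for person, delta in effects.items():
          result[person] = result.get(person, 0) + delta
      return result

  def evaluate(situation):
      # 4. Evaluate the situation: here, total happiness, Bentham-style.
      return sum(situation.values())

  def choose(world):
      situation = describe_situation(world)
      actions = generate_actions(situation)
      return max(actions, key=lambda name: evaluate(predict(situation, actions[name])))

  print(choose({"ann": 0, "bob": 0}))   # -> 'help_bob' under a total-happiness metric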
             Some difficulties:
• How can one assign numbers to something as subjective
  as happiness?
• Do we aim for total or average happiness? (A small
  numeric illustration follows this list.)
• What are the morally relevant features of any given
  situation? (People, animals, ecosystems?)
• How far/wide should the calculation of effect go?
• How much time is a moral agent allowed to devote to the
  decision-making process?
• Note that these are not only problems generated while
  thinking about the computability of moral theories; they
  are also problems that arise when people apply these
  moral theories, and they have not been widely settled,
  not for lack of discussion.
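
As a purely illustrative piece of arithmetic for the total-versus-average question above: the two metrics can rank the very same outcomes differently, so a machine (or a person) must commit to one before the calculation even starts. The numbers below are invented.

  # Two hypothetical outcomes, each a list of individual happiness scores.
  outcome_a = [5, 5, 5]             # fewer people, each quite happy
  outcome_b = [3, 3, 3, 3, 3, 3]    # more people, each less happy

  total   = lambda scores: sum(scores)
  average = lambda scores: sum(scores) / len(scores)

  print(total(outcome_a), total(outcome_b))      # 15 vs 18 -> B wins on total happiness
  print(average(outcome_a), average(outcome_b))  # 5.0 vs 3.0 -> A wins on average happiness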
                    A note:
• The authors do a good job of avoiding the
  question “how do humans do this?” when
  discussing ethical algorithms and behaviors. It
  may well be that general human behavior is not
  a good model to emulate for ethical systems.
• This raises the question of what standard to hold
  ethical systems to. Do we tolerate the same
  range of moral failure among these systems?
  These are questions that might fit here, but for
  the sake of organization are addressed later in
  the book.
    Asimov’s Laws of Robotics
1. A robot may not injure a human being or,
    through inaction, allow a human being to come
    to harm.
2. A robot must obey the orders given it by human
    beings except where such orders would
    conflict with the First Law.
3. A robot must protect its own existence as long
    as such protection does not conflict with the
    First or Second Laws.
(Later, a Zeroth Law was added: A robot may not
    harm humanity, or, by inaction, allow humanity
    to come to harm.)
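
A minimal sketch of the strict priority ordering the three laws impose, treating each candidate action as a toy record of boolean flags. Deciding those flags (for instance, whether standing by would, through inaction, allow a human being to come to harm) is the genuinely hard part, and is where the conflicts discussed on the next slide arise.

  # Asimov's laws as a lexically ordered filter over candidate actions.
  def permissible(actions):
      # First Law: drop anything that harms a human (or, by inaction, allows harm).
      ok = [a for a in actions if not a["harms_human"]]
      # Second Law: prefer obeying orders, but only among First-Law-safe actions.
      if any(not a["disobeys_order"] for a in ok):
          ok = [a for a in ok if not a["disobeys_order"]]
      # Third Law: prefer self-preservation, subordinate to the first two laws.
      if any(not a["endangers_self"] for a in ok):
          ok = [a for a in ok if not a["endangers_self"]]
      return ok

  candidates = [
      {"name": "shield the human", "harms_human": False, "disobeys_order": True,  "endangers_self": True},
      {"name": "follow the order", "harms_human": True,  "disobeys_order": False, "endangers_self": False},
      {"name": "stand by",         "harms_human": False, "disobeys_order": False, "endangers_self": False},
  ]
  print([a["name"] for a in permissible(candidates)])   # -> ['stand by']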
              Laws of Robotics
• Asimov was really serious about this, and was (I think
  foolishly) optimistic about the usefulness of the robotic
  laws as stated. (Asimov’s short essay on the robotic laws
  is forthcoming in the Further Resources section.)
• These laws don’t fit with consequentialist theories very
  well because of their reliance on special duties.
• The zeroth law is hopelessly vague for an action-guiding
  principle, and the first law alone can generate conflicts.
• There may be a real, pressing difficulty with negative
  responsibility (being blamed for harms one merely fails
  to prevent).
     Specific versus Abstract
• Specific rules are very easy to apply, but
  have limited usefulness in novel situations.
  Still, perhaps part of what ethical systems
  require is a few specific rules for specific
  circumstances, though these alone would
  not be sufficient.
• Abstract rules are more generally useful,
  as they allow adaptation, but are
  correspondingly difficult to apply.
    The Categorical Imperative
• Act only as you could will that your maxim
  become universal law.
  – A computer would need to appreciate:
     • a goal
     • a maxim (a behavior-guiding means to the goal)
      • an understanding of the implications for achieving the
        goal if the maxim were made universal
• Lying, for example, could not be a universal law,
  because its goal would be thwarted by its being
  universalized. (This raises a problem, according
  to critics of Kant.)
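
A minimal sketch, under heavy simplification, of the universalization test just described: each maxim carries a goal and a hand-written judgment about whether that goal still succeeds once the maxim is made universal. Producing that judgment automatically (the "understanding of the implications" step above) is the real computational burden.

  # A toy universalizability check in the spirit of the slide's three requirements.
  # The goal and the survives-universalization flag are hand-written assumptions,
  # not something the program infers.

  def permissible(maxim):
      # A maxim fails the test if universalizing it would defeat its own goal.
      return maxim["goal_survives_universalization"]

  lying = {
      "goal": "be believed",
      # If everyone lied whenever convenient, assertions would no longer be
      # trusted, so the liar's goal of being believed is thwarted.
      "goal_survives_universalization": False,
  }
  promise_keeping = {
      "goal": "secure cooperation",
      "goal_survives_universalization": True,
  }

  print(permissible(lying))            # -> False
  print(permissible(promise_keeping))  # -> True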
Language vagueness and morality
• Our language is full of words that are vague but
  that have clear applications and misapplications.
  (e.g. ‘baldness’ is a vague concept, but Captain
  Picard IS bald, and the members of the band ZZ
  Top are not)
• Perhaps by focusing on the clear applications of
  moral rules, we might achieve something useful
  for the less clear cases.
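
A minimal sketch of the "rule on the clear cases, defer on the rest" idea, using baldness as a stand-in for a vague moral predicate. The thresholds are invented; the interesting design question is what the system should do in the deferred middle band.

  # A vague predicate handled by ruling only on the clear cases.
  def bald_verdict(hair_coverage):
      # hair_coverage in [0, 1]: 0.0 is Captain Picard, 1.0 is ZZ Top.
      if hair_coverage < 0.2:
          return "clearly bald"
      if hair_coverage > 0.8:
          return "clearly not bald"
      return "unclear: defer"

  for sample in (0.05, 0.5, 0.95):
      print(sample, bald_verdict(sample))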

				