Notes on Lockhart and Zimmerman

1. Lockhart Book
   1.1 Overview
   1.2 Comments on chapters
2. Zimmerman Book
   2.1 Basic Issues (Ch. 1)
   2.2 Rights (Ch. 2)
   2.3 Actualism, Possibilism, and Courses of Action (Ch. 3)
      2.3.1 My views on basic deontic logic
      2.3.2 Z’s views about courses of action
   2.4 Responsibility (Ch. 4)

1. Lockhart Book

1.1 Overview

Throughout, the primary focus is on the requirements for morally rational choice, that is, on
instrumental practical rationality relative to moral value, as opposed to moral permissibility. The
same issues, however, also arise for prudential or comprehensive rational choice.

If the agent believes with certainty all and only the true propositions, then moral permissibility
and morally rational choice coincide. The difference arises when the agent is partially ignorant—
either about empirical matters or (importantly) about normative (e.g., moral) matters.

Moral case with ignorance only about empirical matters (utilitarianism is known with certainty to
be true): Suppose that you have a choice between a and b, and you are 100% sure that a
maximizes total happiness and is thus permissible, and you are 90% sure that b also maximizes
total happiness (i.e., a tie). Which options are morally rational choices? Presumably only a, since
you are certain that it is permissible and you are not certain that b is.

Moral case with ignorance only about moral matters: You have a choice between (k) killing one
person and saving 2 others for a total of 2 units of happiness and (a) allowing the 2 to die for a
total of 1 unit of happiness. You are 90% sure that utilitarianism is the correct theory and 10%
sure that the following deontic theory is true: (1) killing is always wrong and infinitely morally
bad, and (2) any action is permissible if it does not kill anyone. Which options are the morally
rational choices? Utilitarianism judges only k permissible, and the deontic theory judges only a
permissible. Let us consider a few criteria for rational choice here (following L).

Probability criterion (= PR2, p. 26): A choice is morally rational if and only if it maximizes the
expected moral permissibility value of the choice (1 for permissible, 0 for impermissible) [= it has
maximum probability of being permissible].

This says that only k is a morally rational choice.
Problem: This ignores the fact that not all impermissible choices are equally bad. If the deontic
theory is correct, the badness of killing may be much greater than the badness, according to
utilitarianism, of failing to save two lives by killing one person.

Expected badness criterion: A choice is morally rational if and only if it minimizes the expected
impermissible moral badness of the choice (where the impermissible moral badness is the shortfall
in moral value from the least good permissible choice).

This says that only (a) is a morally rational choice (since k has a 10% chance of being infinitely
bad).
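
For concreteness, a quick sketch (mine, in Python) of how the two criteria come apart on this
example. The numbers are read off the example above: the badness of a under utilitarianism is its
1-unit shortfall, and the badness of k under the deontic theory is treated as infinite.

    p_util, p_deon = 0.9, 0.1          # credences in utilitarianism and the deontic theory
    INF = float("inf")

    # Probability criterion: maximize the probability of being permissible.
    prob_permissible = {"k": p_util, "a": p_deon}               # k: 0.9, a: 0.1 -> pick k

    # Expected badness criterion: minimize expected impermissible moral badness.
    badness = {"k": {"util": 0, "deon": INF},                   # k permissible on util, infinitely bad on deon
               "a": {"util": 1, "deon": 0}}                     # a falls 1 unit short on util, permissible on deon
    expected_badness = {act: p_util * b["util"] + p_deon * b["deon"]
                        for act, b in badness.items()}          # k: inf, a: 0.9 -> pick a

    print(prob_permissible)
    print(expected_badness)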

Problem: This presupposes that there is a valid basis for making inter-theoretical comparisons of
moral badness (cf. interpersonal comparison of wellbeing). There is, however, no objectively
correct basis for making such comparisons.

Partial solution: Perhaps this is irreducibly relative. Each person subjectively (but upon careful
reflection and full information) sets the basis for such comparisons.

1.2 Comments on chapters

General: Lockhart’s discussion is often quite technical. When you first read a chapter, focus on
the big picture and skip the details. Then reread the chapter, attending to some of the details for the
more important issues. Sometimes, it will make sense not to worry about the details of passages
for which you see no big picture relevance. (Usually there will be big picture relevance, but if
it’s not apparent, the text is better skipped, at least initially.)

Ch. 2-3: Note that Lockhart here assumes that his PR2 is correct, but he will later rightly
reject this principle.

Typo (thanks to Leo) in Table 2.4, p. 44: First column should have 1 (not 0) in the second row.
The last column should be .70 for x, .78 for y, and .52 for z.

Some background for pp. 56-60: L is here assuming act consequentialism, and hence that at least
one action is permissible. Moreover, he is implicitly (and confusingly) assuming that the
chance that the fetus’s interests are tied with the mother’s interests is 0. Hence, exactly one
action will be permissible. Finally, he assumes that the mother has some positive interest in
having the abortion. Given these assumptions, pr(Per(A)) = 1-p1p2, and pr(Per(~A)) = p1p2.

Some typos on p. 59 (thanks to Leo):
    At top, pr(not-A) should say “the probability that having the abortion is not right” (since
      A = abortion is right). Given the above assumptions (Per(A) iff ~Per(~A)), however, his
      phrasing comes to the same thing.
    p1p2 ≥ .5 should be: p1p2 ≤ .5 [three occurrences]
    p1 ≥ .5 should be: p1 ≤ .5
    p2 ≥ .5 should be: p2 ≤ .5
    (The inequalities flip presumably because pr(Per(A)) = 1-p1p2, so pr(Per(A)) ≥ .5 just in case
      p1p2 ≤ .5.)

Ch. 4: L introduces degrees of moral rightness. I’m not convinced that this makes sense. I think
that permissibility is merely binary. Instead (and in the same spirit as L), I would introduce
degrees of impermissible moral badness. This is a measure of how bad an impermissible choice
is. It might be equal, for example, to the shortfall in goodness between the least good permissible
option and the option in question. (Of course, if this is all L means by degrees of permissibility,
then there is no problem.)

Both my proposal and L’s face the objection that inter-theoretical comparisons of moral badness
(or wrongness) are not possible. L proposes solving this problem by appealing to PEMT (p. 84).
Expressed in my terms, it holds that for any given choice situation, the maximum impermissible
moral badness is the same for all competing theories, and likewise for minimum impermissible
moral badness (except where a theory judges all feasible actions permissible). I accept the latter,
since it’s normally 0 (although not if there is a prohibition dilemma). I believe, however, that the
former is implausible. First, I don’t see what the basis is for comparability of badness between
theories. Second, the specific way that he makes the comparison, which is sensitive to what is
feasible, seems implausible. As Leo has pointed out, it means that the scaling between two options
can change when a third option is added (with a higher maximum or lower minimum than the
other two). Instead, the best approach, I think, is to accept that the
badness comparisons are settled by the agent’s practical reason (which need not be the same as
that of another agent). This, of course, is sad, since we all would like an objective basis for the
comparisons.

PR5 deals with cases where the degrees of moral badness are not known with sufficient precision
to apply PR4. It’s an important issue, but we can probably skip the details.

Ch. 5: L (p. 99) rightly notes that, even if moral permissibility is satisficing, moral rationality
might still be maximizing. He goes on to argue, mistakenly in my view, that moral permissibility
cannot be satisficing. He claims that if (1) moral reasons always outweigh non-moral reasons,
and (2) there is more moral reason to perform a1 rather than a2, then a2 is not morally
permissible. I believe, however, that we can distinguish between obligation-generating (or
deontic) reasons and merely desirability-generating (or axiological) reasons. Two actions can
both satisfy all obligation-generating reasons, even if one satisfies more desirability-generating
reasons.

Chs. 6 and 7: L applies his approach to physician confidentiality and to abortion law. We will
probably skip these chapters, but they might be good test cases to invoke in your papers.

Ch. 8: L here addresses a very different issue. He defends the theory of morally rational choice
according to which a choice is morally rational if and only if it is part of a (maximal) course of
action (into the future) that maximizes the expected degree of moral rightness. This is a kind of
indirect theory (loosely analogous to rule utilitarianism). (See p. 151.) Zimmerman, we shall see,
also defends a theory of this sort. I reject it in favor of a direct theory according to which the
action on its own must maximize the expected degree of moral rightness (with the probabilities
assigned to the agent’s future actions). This, I think, more adequately reflects the agent’s
uncertainty about her own future actions.

I would also argue against the view that morally rational choice requires that I do my part of
whatever collective action (or course of action) maximizes the expected degree of moral
rightness (see p. 158).

2. Zimmerman Book

2.1 Basic Issues (Ch. 1)

Three moral questions:
(A) Which of my feasible choices are morally permissible?
(B) Which of my feasible choices are (given my ignorance) rationally permissible to choose
    relative to the goal of minimizing the moral badness of impermissible choice in my current
    choice situation?
(C) Which of my feasible choices are (given my ignorance) morally praiseworthy, which are
    morally neutral, and which are morally blameworthy?

A general issue relating to each of these questions: Is the answer determined by
(1) objective view: the facts;
(2) subjective view: the agent’s beliefs; or
(3) epistemic view (called the prospective conception by Z): the propositions epistemically
supported by the agent’s evidence?

For example, assume for simplicity that the agent knows that the only relevant consideration is
total happiness maximization. Is the answer to the above questions determined by:
(1a) what in fact maximizes expectable total happiness based on objective chances (which is
objectively prospective),
(1b) what maximizes the total happiness relative to how things turn out (actual outcome),
(2a) what the agent believes is morally permissible,
(2b) what the agent believes has the highest expected (or perhaps objectively or epistemically
expectable) total happiness (whether or not her empirical beliefs about outcomes support this
belief) [Z calls this the subjective view],
(2c) what, relative to the agent’s empirical beliefs about outcomes, has the highest expected total
happiness (whether or not she believes this), [Z doesn’t consider this view, and it seems immune
to the following objections by Z to the previous account: infallibility, failure to have a belief
about what is permissible, and violation of ought-implies-can.]
(3a) what, relative to the evidence available to the agent (the evidence there is for the agent) has
the highest expectable total happiness (Z’s favored view in the book for permissibility)
(3b) what, relative to the evidence availed by the agent (the evidence the agent has) has the
highest expectable total happiness (Z’s new favored view for permissibility, according to e-mail)

Here and in general, I (like Z) assume, for simplicity, that the value of a risky prospect is its
expected (expectable) [i.e., probability-weighted] value, but this is not essential. The crucial
point is that it is based in some appropriate way on the probabilities and values of outcomes. It may
thus allow risk aversion.
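
A toy illustration (mine; the numbers are arbitrary) of the point: the value of a prospect need not
be its plain expected value, so long as it is based in some appropriate way on the probabilities and
values of the outcomes. Here a crude risk-averse valuation discounts the expected value by the
spread of the outcomes.

    prospect = [(0.5, 50), (0.5, -100)]                      # (probability, value) pairs

    expected_value = sum(p * v for p, v in prospect)         # -25.0
    spread = max(v for _, v in prospect) - min(v for _, v in prospect)
    risk_averse_value = expected_value - 0.1 * spread        # -40.0: penalized for riskiness

    print(expected_value, risk_averse_value)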

Note that Z assumes that all moral views require the maximization of some value, suitably
understood. This, however, is not central for the above issue (and is arguably false).

It is often assumed (by Z too?) that for an objective conception the relevant facts are how things
eventually turn out (1b above), but this is a crazy view. It makes permissibility depend on how
things happen to turn out after choice, as opposed to the facts at the time of choice. It says, for
example, that it was objectively (e.g. prudentially) wrong to go swimming when the pleasure is
great and there is a minute chance that a jellyfish will sting you (causing you a bit more pain
than your pleasure), and you are in fact later stung.

An objective conception is often deemed relevant for moral permissibility (Question A), but
subjective conceptions are also deemed relevant by some. Effectively everyone agrees that an
objective conception is not relevant to morally rational choice (Questions B) and moral
praiseworthiness (Question C).

Frank Jackson’s important example (Z’s version) that led many people, including Z, to abandon
the objective view for the epistemic or subjective views:
Four options (with subjective EV)
N: Give no drugs to John, for which the evidence supports the proposition that he will
permanently have a minor skin problem (EV = 0). [Suppose that the permanent problem is
objectively sure to result.]
A: Give Drug A to John, for which there is a 50% epistemic probability of a cure (50) and a 50%
epistemic probability of death (-100). [EV = -25] [Suppose that in fact death is objectively sure
to result].
B: Give Drug B to John, for which the evidence supports the epistemic certainty of a partial cure
(EV = 40). [Suppose that a cure is objectively sure to result.]
C: Give Drug C to John, for which the evidence supports there being a 50% epistemic
probability of a cure (50) and a 50% epistemic probability of death (-100). [EV = -25] [Suppose
a cure is objectively sure to result].
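
A small sketch (mine) tabulating the example. The epistemic expected values use the stated
epistemic probabilities; the “objective” values just record the stipulated objectively certain
outcome of each option.

    options = {
        # name: (epistemic distribution over outcome values, objectively certain outcome value)
        "N": ({0: 1.0}, 0),                  # minor skin problem for sure
        "A": ({50: 0.5, -100: 0.5}, -100),   # in fact death is certain
        "B": ({40: 1.0}, 40),                # partial cure is certain
        "C": ({50: 0.5, -100: 0.5}, 50),     # in fact a cure is certain
    }

    epistemic_ev = {name: sum(v * p for v, p in dist.items())
                    for name, (dist, _) in options.items()}     # N: 0, A: -25, B: 40, C: -25
    objective_value = {name: obj for name, (_, obj) in options.items()}

    print(max(epistemic_ev, key=epistemic_ev.get))        # B  (epistemic/prospective verdict)
    print(max(objective_value, key=objective_value.get))  # C  (objective actual-outcome verdict)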

With respect to Question B (morally rational choice), most people would say that B is the only
morally rational choice. I do. Jackson and Z presumably agree.

With respect to Question C (moral blameworthiness) most people would again say that B is not
blameworthy. I do. Jackson and Z presumably agree.

With respect to Question A (morally permissible choice), the objective conception (both
versions) say that C is the only morally permissible choice, but Jackson, Z, and others hold that
B is the only permissible option.

Important: Note that if we change the example so that A and C each have an objective chance of
50% for curing and 50% of death, then the objective chance conception agrees that B is the only
morally permissible option. So, it matters whether the probabilities are objective chances or not.

Given that there is agreement that an objective conception is not relevant for moral rationality,
nor for praiseworthiness and blameworthiness, a crucial question is this: What is at stake, for
actions that are morally rational and not blameworthy, in the issue of whether they are morally
permissible or not?

2.2 Rights (Ch. 2)

Correlativity thesis (p. 144): A has a (claim-) right that B X just in case B has an unconditional
prima facie duty to A to X. (Z assumes that all rights are prima facie, but they could be
conclusive.)

Kinds of duties: (1) impersonal duties (owed to no one), (2) interpersonal duties (duties owed to
someone).

Kinds of rights: claim-rights (correlative to someone else’s duty to you), liberty-rights (absence
of duty to someone else not to do), powers to alter normative status, immunities against
normative status being altered without one’s consent.

Z isn’t explicit, but let us assume the choice-protecting conception of interpersonal duties (duties
owed to someone): B has a prima facie duty to A to X just in case the absence of A’s consent to
B’s not X-ing is sufficient to establish that it is prima facie impermissible for B not to X. (An
alternative account is the interest-protecting conception.)

Given Z’s epistemic view of obligations, the above entails that A has a prima facie right that B
not kill her just in case B’s killing A without A’s consent is, relative to B’s epistemic evidence,
not B’s best option. This entails that A has no right, against B, not to be killed by B when,
relative to B’s mistaken evidence, killing is B’s best option. This seems quite bizarre to me.

Z uses the expression “forfeits a right” to cover only cases where a right is lost in virtue of
having infringed someone’s rights. He allows (p. 115) that rights can be non-consensually lost
without forfeiture (as when the wind blows one’s body in a way that will kill an innocent party).

2.3 Actualism, Possibilism, and Courses of Action (Ch. 3)

2.3.1 My views on basic deontic logic

Basic choices (basic actions): I assume that the most minimal objects of deontic (e.g. prudential
or moral) assessment are choices (willings, volitions, decisions, or attempts; e.g., willing that p).
(Here I agree with Z, pp. 133-38, 184-86.) These are actions that can be done intentionally (as
opposed to coincidentally) and over which we have (partial) direct control (i.e., control without
the control being via control of something else). They are the most specific minimal actions
under the direct control of the agent, where minimal actions are actions that once initiated cannot
be stopped by the agent. Z seems to agree, at least roughly (pp. 12-14, 133, 150).

I often treat choices as basic actions, but strictly speaking it is better to use “action” for basic
bodily movements under (partial) direct intentional control via choices.

Ought-implies-can (as do may and may-not): Although not uncontroversial, this principle is
widely accepted, and I accept it. There is, however, a question about the sense of “can” that is
relevant. Clearly, it means that the agent (not someone else) can do it in the situation (as opposed
to some other situation). The question is whether it requires that the agent be able to do it
intentionally (or deliberately) as opposed to by accident. Suppose that I play darts well enough to
always hit the board but badly enough to rarely hit the bull’s-eye. Can I have an obligation to hit
a bull’s-eye (e.g., where this will save millions of lives)? No: none of my feasible choices ensures
that I will hit the bull’s-eye. I can, however, have an obligation to hit the board, since some of
my feasible choices do this.

What about a case where promise-breaking is absolutely wrong and the agent promised A to be in
New York at t and promised B to be in Paris at t? Does the agent have an obligation to keep both
her promises? Strictly speaking no, since this is not possible. It is, however, wrong for her to
break both promises (since that entails only that it is feasible not to break both, not that it is
feasible to keep both).

Z claims (pp. 146-51) that ought (and may) implies can refrain. I deny this, since it makes perfect
sense to say that where only one action is feasible, it is obligatory (permissible and no feasible
alternative is). The assessment is pointless, of course, but it keeps the logic simpler not to invoke
a “can refrain” requirement.

Action tokens are specific events (which I assume are coarsely individuated) of a certain kind
(basic choices). Action types are types of action tokens (e.g., a scaring of Jones).

Deontic assessment for action tokens of a given feasible set (where permissibility and
       impermissibility presuppose feasibility):
A token is optional just in case (1) it is permissible and (2) some feasible alternative is permissible.
A token is obligatory just in case (1) it is permissible and (2) no feasible alternative is permissible.
A token is impermissible just in case it is not permissible.

Deontic assessment for action types, relative to a specific feasible set:
Action type T is optional just in case (1) some token of type T is permissible and (2) some token
       of type ~T is permissible.
Action type T is obligatory just in case (1) some token of type T is permissible and (2) no
       token of type ~T is permissible.
Action type T is impermissible just in case no token of type T is permissible.
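
A small sketch (mine, in Python) of these definitions, treating an action type simply as the set of
feasible tokens that instantiate it (everything else in the feasible set counts as ~T), and taking the
permissibility of the tokens as given.

    def token_status(token, feasible, permissible):
        # Deontic status of a token relative to a feasible set.
        if token not in permissible:
            return "impermissible"
        others_ok = any(t in permissible for t in feasible if t != token)
        return "optional" if others_ok else "obligatory"

    def type_status(T, feasible, permissible):
        # Deontic status of an action type T (a subset of the feasible tokens).
        some_T_ok = any(t in permissible for t in T)
        some_notT_ok = any(t in permissible for t in feasible if t not in T)
        if not some_T_ok:
            return "impermissible"
        return "optional" if some_notT_ok else "obligatory"

    feasible = {"a1", "a2", "a3"}
    permissible = {"a1", "a2"}
    print(token_status("a1", feasible, permissible))          # optional
    print(type_status({"a1", "a2"}, feasible, permissible))   # obligatory (no permissible ~T token)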

Maximizing Axiological Criteria for Permissibility, assuming completeness (for any two tokens
of a given feasible set, one of them is at least as valuable as the other; i.e., either a1 ≥ a2 or a2 ≥
a1):
A token is permissible just in case it is at least as valuable as any feasible alternative.
A token is optional just in case it is tied for being the most valuable feasible alternative.
A token is obligatory just in case it is more valuable than any feasible alternative.
A token is impermissible just in case it is less valuable than some feasible alternative.

Problem if there is incompleteness in ranking: Suppose that a1 and a2 are each more valuable
than all other feasible alternatives but they are incomparable (neither is at least as good as the
other). The above definitions say that they are each neither permissible nor impermissible.

Maximizing Axiological Criteria for Permissibility, with no assumption of completeness
       (preferred more general definition, I think):
A token is permissible just in case no feasible alternative is more valuable.
A token is optional just in case no feasible alternative is more valuable, and this is also true of
       some feasible alternative.
A token is obligatory just in case it is more valuable than any feasible alternative (= it is the only
       feasible alternative for which no feasible alternative is more valuable).
A token is impermissible just in case it is less valuable than some feasible alternative (no
       change).
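
A sketch (mine) of the preferred, no-completeness criteria. The value ranking is supplied as a set
of “at least as valuable as” pairs, so incomparability is just the absence of both (x, y) and (y, x);
for “obligatory” I use the parenthetical formulation (the only option for which no feasible
alternative is more valuable).

    def more_valuable(x, y, geq):
        # x is more valuable than y iff x >= y and not y >= x.
        return (x, y) in geq and (y, x) not in geq

    def status(token, feasible, geq):
        if any(more_valuable(alt, token, geq) for alt in feasible if alt != token):
            return "impermissible"
        maximal = [t for t in feasible
                   if not any(more_valuable(alt, t, geq) for alt in feasible if alt != t)]
        return "obligatory" if maximal == [token] else "optional"

    # a1 and a2 are incomparable, and each is more valuable than a3:
    feasible = {"a1", "a2", "a3"}
    geq = {("a1", "a3"), ("a2", "a3"), ("a1", "a1"), ("a2", "a2"), ("a3", "a3")}
    print(status("a1", feasible, geq))   # optional -- permissible despite the incomparability
    print(status("a3", feasible, geq))   # impermissible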

2.3.2 Z’s views about courses of action

The basic problem: Suppose utilitarianism is true, I am the only agent, and my dog is the only
other sentient being (whose happiness is protected by utilitarianism). Suppose that there are only
two times for me to make a choice, and that my happiness is the same no matter what (only my
dog is affected). At each time, I can either give him the medicine or not. If I give him the medicine
at both times, he has a happy life. If I give him medicine at neither time, he has a neutral life. If I
give him medicine at only one time, he has a miserable life. Suppose that I am undependable and
there is a good chance (and I know this) that I will fail to give him the medicine at time 2, even if
I give it at time 1. What should I do at time 1?

Minimalism [= Actualism] (my preferred view): Each basic action is to be assessed on its own in
light of the probabilities of the agent’s future actions and the actions of others. [Thus: I should
not give the medicine at time 1, since the expected value of my action (given the good chance of
my not giving the medicine at time 2) is negative.]
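
A minimal numerical sketch (mine) of the actualist calculation. The values and the probability of
failure are illustrative assumptions, not from the text, as is the assumption that skipping at time 1
means skipping at time 2 as well.

    happy, neutral, miserable = 10, 0, -10   # assumed values of the dog's three possible lives
    p_fail_t2 = 0.7                          # assumed chance I fail to give the medicine at time 2

    ev_give_t1 = (1 - p_fail_t2) * happy + p_fail_t2 * miserable   # -4.0
    ev_skip_t1 = neutral                                           #  0

    print(ev_give_t1, ev_skip_t1)   # actualism: not giving at time 1 has the higher expected value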

Individual Maximalism [related to Z’s Holism; = possibilism?]: Each basic action is to be
assessed on the basis of its conformance with the best (maximal) courses of action for the agent.
Probabilities of the agent’s future actions are not relevant, although the probabilities of the
actions of others are. [Thus: I should give the medicine at time 1, since the best course of action
has me giving medicine each time.]

Collective Maximalism (not discussed by Z): Each basic action is to be assessed on the basis of
its conformance with the best joint courses of action for the agents (one course for each).
Probabilities of the agent’s future actions are not relevant; nor are those of others. [The above
example does not address this issue.]

Z claims (p. 121) that Actualism violates several seemingly plausible conditions (see below), but he
does not here explain why. In Z (1996, p. 191) he says that the problem comes from the fact that the
Actualist (minimalist) is committed, in the above example, to holding that (1) it is not obligatory to
give the medicine at time 1 (indeed it is wrong) but (2) it is obligatory to give the medicine at time 1
and then to give the medicine at time 2. A plausible version of minimalism, however, will deny the
second claim (or so I claim). It will say that M1&M2 is obligatory (relative to the time 1
situation) just in case in the time 1 choice situation M1 is obligatory and in the time 2 choice
situation M2 is obligatory. M1, however, is not obligatory in the time 1 choice situation. Hence,
M1&M2 is not obligatory relative to that situation. (Or am I confused?)

I (a minimalist) accept each of the conditions that Z addresses. The conditions follow from the
logic of action-types given above, and that is compatible with an actualist (minimalist) account
of the permissibility of action tokens. In what follows, I address only the case where the types
are assessed relative to a fixed choice situation (not two choice situations as in the above
example). I claim (but this needs to be verified) that a similar point applies when the action types
are applied to different choice situations (which is the crucial issue here).

Here, I assume that a token is permissible in a given choice situation only if it is feasible in that
choice situation. Infeasible tokens are neither permissible nor impermissible in that choice
situation.

(3.1) If O(A&B), then O(A): If some feasible token of A&B is permissible and no feasible token
not of that type is, then some feasible token of A is permissible and no feasible token not of that
type is. Valid.

(3.2) If O(A&B), then ~O(~A): If some feasible token of A&B is permissible and no feasible
token not of that type is, then no feasible token of ~A is permissible, and hence no such token is
obligatory. Valid.

(3.3) If O(A) and O(B), then O(A&B): If (1) some feasible token of A is permissible and no
feasible token not of that type is, and (2) some feasible token of B is permissible and no feasible
token not of that type is, then some feasible token of A&B is permissible and no feasible token
not of that type is. Valid.

(3.4) If O(A) and O(B), then Feasible(A&B): If (1) some feasible token of A is permissible and
no feasible token not of that type is, and (2) some feasible token of B is permissible and no
feasible token not of that type is, then some token of A&B is feasible. Valid.

(3.5) If O(A) and ~Feasible(A&~B), then O(B): If (1) some feasible token of A is permissible
and no feasible token not of that type is, and (2) there is no feasible token of A&~B, then the
permissible A token must also be of type B (otherwise it would be a feasible A&~B token), so some
feasible token of B is permissible; and any feasible token of ~B would be of type ~A and hence
impermissible by (1). Valid.
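
For what it’s worth, a brute-force sanity check (my own sketch, not from the text) that (3.1)-(3.5)
come out valid under the type-level definitions above, when everything is assessed relative to a
single fixed feasible set of tokens.

    from itertools import product

    def O(tokens, of_type):
        # Obligatory: some feasible token of the type is permissible, and no
        # feasible token outside the type is permissible.
        inside = [t for t in tokens if of_type(t)]
        outside = [t for t in tokens if not of_type(t)]
        return any(t[2] for t in inside) and not any(t[2] for t in outside)

    def feasible(tokens, of_type):
        return any(of_type(t) for t in tokens)

    is_A, is_B = (lambda t: t[0]), (lambda t: t[1])
    is_AB = lambda t: t[0] and t[1]
    is_notA = lambda t: not t[0]
    is_AnotB = lambda t: t[0] and not t[1]

    labels = list(product([True, False], repeat=3))    # (is_A, is_B, permissible)
    for n in range(1, 4):                              # feasible sets of 1 to 3 tokens
        for tokens in product(labels, repeat=n):
            assert not O(tokens, is_AB) or O(tokens, is_A)                                      # (3.1)
            assert not O(tokens, is_AB) or not O(tokens, is_notA)                               # (3.2)
            assert not (O(tokens, is_A) and O(tokens, is_B)) or O(tokens, is_AB)                # (3.3)
            assert not (O(tokens, is_A) and O(tokens, is_B)) or feasible(tokens, is_AB)         # (3.4)
            assert not (O(tokens, is_A) and not feasible(tokens, is_AnotB)) or O(tokens, is_B)  # (3.5)
    print("(3.1)-(3.5) hold in all models checked")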

Z makes a distinction between going wrong and a wrong occurring (pp. 155-68). The very rough
idea is this: Suppose that I promise to return your book at noon, and the only way that I can do
this is by taking it with me at 11:30. At 11:30, I fail to take the book. At 11:30, I go wrong (act
wrongly), but the wrong does not occur until noon. I do not go wrong at noon (or just before),
because it is no longer possible for me to return the book at noon. Here’s a simplified version of
the distinction (corrections welcomed!):
    (1) An agent goes wrong at t with respect to her A-ing at T = the agent acts wrongly (does
         wrong, infringes an obligation, acts impermissibly) at t with respect to her A-ing at T
        (e.g., fails at 11:30 to take the book so that it can be returned by noon as promised; or
        wrongly promises her mother at 11:30 that she will not return the book by noon as
        promised).
    (2) A wrong occurs (an obligation is not fulfilled) at T, with respect to some agent’s A-ing at T =
        (a) at some earlier time, the agent had an obligation to A at T, (b) the obligation did not
        cease to exist prior to T for reasons other than the agent’s acting wrongly, and (c) the
        agent fails to A at T (e.g., a wrong occurs at noon when the book is not returned at noon
        as promised; a wrong does not occur at noon if at 11:45 the promisee releases you).

The key notion for us is when an agent acts wrongly (goes wrong), and hence we need not, I
think, worry very much about when a wrong occurs in this sense (although I’m sure that it is
relevant for some purposes).

Z holds that for immediate obligations (t=T) the two go together, but for remote obligations there
can be one without the other.

On Z’s account, we can fail to fulfill an obligation without infringing it (p. xiii). This is because
we infringe an obligation at a given time when and only when it is possible for us then to fulfill it
but we do not. We fail to fulfill obligation at a given time whether or not it is possible for us then
to fulfill it. (At noon I fail to fulfill my obligation to return the book to by noon, but I do not then
infringe that obligation, since it is not then possible for me to fulfill it. I infringe that obligation
at 11:30, when I leave home without the book.)

On Z’s account, we can infringe an obligation even though we fulfill it (p. xiii). This is because
the obligation may require us, for example, not to make conflicting commitments. I may infringe
my obligation to return the book to you by noon, when I promise my mother to spend the entire
day with her. I nonetheless fulfill the obligation if I break my promise to my mother and return
the book to you on time.

2.4 Responsibility (Ch. 4)

Here responsibility = “moral” responsibility = backward-looking responsibility = agent-
responsibility = attributive responsibility = what is attributable to the agent’s exercise of
autonomous agency (and for which ignorance and coercion excuse one from responsibility) =
that for which reactive attitudes (e.g., praise, neutral, blame) towards, and perhaps reward and
punishment of, the agent are in principle appropriate.

Z focuses on culpability. I think that this can be understood in at least two ways, where agent-
responsibility for X is understood (as Z does not!) as entailing causing X:
    (1) as agent-responsible for acting wrongly (my normal usage),
    (2) as deserving the blame that is appropriate for someone who is agent-responsible for
        acting wrongly (whether or not the agent acted wrongly) (Z’s implicit view)

As Z points out, one can be culpable in the second sense without acting wrongly. This is not
possible in the first sense (on my understanding of agent-responsibility).

Z defends a subjective (vs. epistemic) condition on responsibility: One must believe that one is
acting wrongly (and not merely that the evidence supports that view).

Z assumes that responsibility for something requires either the belief that one was producing it or
culpability for the lack of such a belief. One alternative to the culpability requirement is that one
should (morally? epistemically?) have had the belief—whether or not one is culpable
(blameworthy). Another alternative to the culpability requirement—my view— is that one is
agent-responsible (no reference to wrongdoing) for the lack of the belief. It may be enough that I
could have easily developed the belief by investigating a bit, even if I was not morally or
epistemically required to do so.
