Department of Innovation, Industry, Science and Research (DIISR) – Workshop


"The Ethical Challenges of New Technologies –
Risk and Responsibility in Nanotechnology,
Biotechnology and Synthetic Biology"


AAPAE 17th Annual Conference, 2010, Sydney


Chair: Professor Stephen Cohen, School of History and
Philosophy, UNSW
This event was held on 17 June 2010. The format was a general introduction,
followed by background information on aspects of enabling technologies, leading
up to a thought experiment in which participants imagined what would have
happened if a range of technologies had not been invented or adopted.

The background information included

      broad ethical issues
      the aims of public engagement and the difficulties involved in engaging
       with some sections of the public
      the legal and jurisprudential aspects of enabling technology regulation
      a media practitioner’s view on public engagement on new technologies
Following the thought experiment, there was a substantial question period,
followed by a summing-up.

Session 1
Introduction: Professor Stephen Cohen
Professor Cohen’s introduction stressed the importance of open engagement –
that the workshop would be ineffective if the discussion were to be hijacked by
special interest groups, special pleading and ideological intransigence. The aim
was to hear from people who had ideas, criticisms, questions and new
approaches.

He made it clear to the participants that the ethical dilemmas involved with new
technologies are multi-faceted, but not necessarily unique. The areas of concern
are the development of new technologies and the government and industry
policies and actions that respond to them. One of the main aims of the workshop
was to tease out the question of whether these technologies present new ethical
issues, or whether they are simply old wine in new bottles – a new, albeit very
important, environment in which standard ethical issues emerge.




Dr. Craig Cormick,
Manager of Public Awareness and Community Engagement, National
Enabling Technologies Strategy (NETS), DIISR
The Ethical Challenges of New Technologies – Risk and Responsibility in
Nanotechnology, Biotechnology and Synthetic Biology
Dr Cormick’s presentation focussed on DIISR’s community engagement activities
on enabling technologies. These include talking with NGOs, industry groups and
researchers. One ongoing difficulty is the intrusion of sensationalism, sometimes
media-driven, sometimes driven by activist organisations. Work with the general
public shows they are often at something of a loss to understand and/or clarify
what the issues are all about. DIISR is attempting to redress this by conducting
as many dialogues as possible, looking for perspectives, ideas, and challenges.
The aim of raising the subject with ethicists is to obtain guidance, so that in
talking to the public, the conversation can include the key ethical questions
involved.

He provided a brief introduction to enabling technologies. There have been four
recent revolutions: materials (nanomaterials), biotechnology, information
technology, cognitive sciences. There are points of convergence – cognitive IT,
nano-IT, nano-biomaterials. Synthetic biology is the latest.

Some of the ethical issues involved are designer animals and animal welfare,
illegal human cloning, genetic selection of children, next generation bio-weapons,
regulatory efficiency, and extremist community activism. We need to find ways to
work with these issues.

New technologies have seen a divide between a promise which would appear to
be unambiguously morally justified, and the reality. An example is genetically
modified (GM) foods. They were supposed to feed the world, providing cheaper
food which lasts longer and uses less pesticide.

The reality was different – GM is found more in highly processed foods. The GM
debate has been dogged by poor communication, the spread of misinformation,
and poor public engagement by scientists and regulators, who were more
interested in explaining the processes than the outcomes.

There is also a divide between the promise and the reality of nanotechnology, but
the debate has been different from the GMO question.

The landscape of NGOs involved with nanotechnology has expanded rapidly.
Consumers may not fully understand the technology, but they are generally
favourable towards it, and media coverage is also more favourable. However, the
amount of misinformation is growing and anti-nano groups are attempting to set
the agenda. Yet again, scientists and regulators are not giving strong, unified
messages.

Early engagement efforts for nanotechnology featured:


      more emphasis on the benefits than the risks,
      engaging with NGOs rather than consumers,
      scientists willing to engage in communications, seeking to explain the
       outcomes more than the processes.
Nanotechnologies (in general) are at a very different phase of community
understanding and acceptance compared with biotechnologies. At present there
is high support for nanotechnologies. This may fall later, before settling to a point
where the public is willing to accept them. This was the case with
biotechnologies: support dipped in the last decade, but is now on the rise.

Support for fields of enabling technology application is spread over a wide
spectrum from "Against" to "For". In food and agriculture, the bulk of the
population tends to be in the middle, leaning to "for". There is great public
support for stem cell research and nanotechnology, with strong interest in climate
change mitigation.

Paradigm changes have a huge impact on attitude change. For example, in
2005-2007 there was a huge jump in acceptance of biotechnologies. This was
largely due to climate change awareness and to an attitude characterised by
"global citizenry".

But with the GFC in 2008 people moved to "nesting". This is likely to reduce
acceptance.

Public debate has different drivers and approaches from scientific debate. Public
debate is emotionally driven – "what don't we know?" and “what aren’t they telling
us?” Scientific debates are fact-based – "what do we know?"

The public perceives risk very differently from scientists or regulators. For
example, the public perceives flying as very high risk although it is actually very
low risk, while driving a car, which actually carries a higher risk, is perceived as
low risk.

Surveys of technologies are based on awareness, benefit, risk and acceptability.
There is a risk divide.

          o Regulator perception of risk = probability x consequence
          o Public perception of risk = hazard x outrage.
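To make the contrast concrete, here is a minimal illustrative sketch in Python (not
from the workshop material); the figures and function names are invented purely
for illustration.

# Minimal sketch contrasting the two risk framings described above.
# All numbers are hypothetical and purely illustrative.

def regulator_risk(probability, consequence):
    # Regulator framing: risk = probability x consequence
    return probability * consequence

def public_risk(hazard, outrage):
    # Public framing: perceived risk = hazard x outrage
    return hazard * outrage

# A rare but dramatic hazard (e.g. flying) scores low on the regulator's
# measure, yet can score high on the public's measure because outrage
# dominates perception; the reverse holds for familiar risks such as driving.
print(regulator_risk(probability=0.00001, consequence=1000))  # 0.01 (low)
print(public_risk(hazard=0.01, outrage=80))                   # 0.8  (high)

The point is not the arithmetic but that the same event can rank very differently
under the two framings.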
There is also a precautionary principle divide, ranging from "Don't do it until it is
proven safe" to "Don't stop until it is proven dangerous".

People will move their attitudes based on the values they inherently hold.

The Australian public, like all publics, is segmented in its attitudes to science.
According to a June 2007 research report from Victoria’s DIIRD:




      Segment 1: Interested in science but not active in searching for science
       information (23 per cent)
      Segment 2: Interested in science, active in searching for science
       information, and able to find information that they can easily understand
       (27 per cent)
      Segment 3: Interested in science, active in searching for science
       information but unable to find it or have difficulty understanding it (16 per
       cent)
      Segment 4: Neutral towards science and not actively searching for
       science information (8 per cent)
      Segment 5: "The indifferent" – Limited understanding of science and not
       concerned about its control. Highest proportion of parents with children
       under 16. Small proportion of people educated to degree level or higher.
       (20 per cent)
      Segment 6: Neutral or uninterested towards science but active in
       searching for science information. (8 per cent)
Segments 1, 2 and 3 are engaged in the public debate on science and
technology but a third of the population is not. How do we engage the
uninterested?

Among the main challenges are:

          o How to engage with the general public instead of activist groups?
          o How best to engage with activists/interest and affected groups?
          o How to engage with the unengaged?
Clearly, there need to be different engagement strategies/activities based on
levels of interest.

          o Activist public
          o Affected public
          o Interested public
          o Uninterested public.
      Appropriate levels of engagement with increasing levels of public impact:
          o Inform (e.g. fact sheets, web sites, open hours)
          o Consult
          o Involve
          o Collaborate
          o Empower (e.g. citizen juries, ballots, delegated decisions)



Most governments are comfortable with "Consultation" - public comment, focus
groups, surveys, public meetings.

      Some ethical challenges to consider in this workshop:
          o Is regulation adequate?
          o Are laws keeping up with the technology?
          o Should the community be more involved in the decision making that
            affects them?
          o Are we dealing with new ethical issues, or the same ethical issues?
          o Is there equality of access to new technologies?
          o Just because we can, should we?
          o Who decides on the ends and the means?
          o Are we enhancing or advancing life?
          o Who bears the risks and who reaps the benefits?
          o If these technologies significantly improve human life and the state
            of the planet does it change the debate in any way?
          o Privacy and surveillance
          o Nanotechnology
          o Are we designing evolution?
          o The technology divide – the haves and the have-nots.
Dr Cormick invited comments on a draft version of a fact-sheet on ethics and
asked participants to register for a follow-up researchers’ workshop on enabling
technologies. He stressed that the point of public engagement is not simply
influencing the media. There are many intermediary stakeholders who are talking
to other people (teachers, community leaders etc.).




Professor Belinda Bennett
Professor of Health and Medical Law, University of Sydney
"Laying down the Law – ethical challenges with new technologies"
Professor Bennett began by mapping out what we mean by new technologies,
and why they present legal and ethical challenges, then moved on to considering
the role of law and some of the legal issues in relation to these new challenges.

She argued that many, if not all, of these new technologies present essentially
the same ethical and regulatory challenges. We need to be able to respond to
the specific issues presented by each technological advance, but also to step
back and focus on the similarities between them so that we can develop common
approaches.

It is not helpful to reinvent the wheel every time a new technology comes along.
But it is helpful to consider what values should be regarded as important when
crafting new laws for new technologies, because there is unlikely to be
consensus in society about the way we should proceed.

Professor Bennett gave a run-through of recent controversial technologies and
the legal/ethical questions they pose.

Assisted Reproductive Technologies (ART)
The birth of the first IVF baby in 1978 raised new questions and twists on older
issues such as:

      The meaning of parenthood
          o Distinctions between genetic and social parenthood, and in the
            case of surrogacy, gestational parenthood.
          o Who has parental responsibilities for a child conceived using
            donated sperm/eggs?
          o Legislation enacted in Australia clarifying the issue of parental
            status in these circumstances.
      The definition of infertility
          o Relevant to our rules about eligibility for treatment
          o Does infertility presuppose being in a heterosexual relationship?
          o Will single women or those in same sex couples also qualify?
      How people view "the family"
          o The definition is being recast by a new appreciation of the rights of
            the child conceived through donated gametes.



          o This is reflected in the laws through legislative requirements for
            donor registers and statutory rights of entitlement to information
            about biological parentage.
Cloning
Dolly the sheep was born in 1996. She was the first mammal to be cloned from
an adult cell, and media coverage focused on concerns
over the potential for cloning in humans. Governments around the world
assessed the adequacy of their regulatory frameworks, with several international
declarations condemning human cloning.

      Council of Europe, Additional Protocol to the Convention for the Protection
       of Human Rights and Dignity of the Human Being with Regard to the
       Application of Biology and Medicine, on the Prohibition of Cloning Human
       Beings (1998)
      UNESCO – Universal Declaration on the Human Genome and Human
       Rights (1997)
Stem Cells
Human Embryonic Stem Cells (ESCs) were isolated in 1998. ESCs are
undifferentiated: they can develop into brain, skin, or other organs. This
pluripotency of the ESC may offer hope for the development of regenerative
medicine such as treatment of spinal cord injuries or Parkinson's Disease.

However, ESC research also requires the destruction of the human embryo in
order to obtain the stem cells. This is morally abhorrent to those who believe that
life begins at conception.

Adult Stem Cells are stem cells present in adult tissues. As yet, these cells have not
shown the same pluripotency as ESCs, and may not have the same potential for
development of new treatments.

Induced pluripotent stem cells (iPS) are adult stem cells that have been
genetically reprogrammed so that they act like ESCs. Their development shows
that we may be able to un-differentiate cells. If they provide a viable alternative to
ESCs, iPSs may allow us to achieve therapeutic outcomes without using human
embryos.

It is important not to offer unrealistic hope for new treatments. Even if we were to
develop new treatments, their provision in a clinical context is still a very long
way down the track.

Australia responded to the challenges of cloning and stem cell research through
the enactment of a legislative framework:

      Prohibition of Human Cloning for Reproduction Act 2002 (Commonwealth)
       – Human cloning is prohibited


      Research Involving Human Embryos Act 2002 (Commonwealth) –
       Research involving embryos must be licensed.
      Corresponding legislation at state and territory level
Genetics
There has been a dramatic increase in knowledge of genetic diseases as a result
of the mapping of the human genome and developments in genetic science.

This raises concerns about genetic discrimination, even against people who do
not yet, or may never, have symptoms of an inherited illness. The degree to which
genetic information is predictive of human health is often poorly understood.

      Might people experience discrimination in their ability to get a job or
       receive insurance if their genetic makeup puts them at increased risk of
       some condition?
      Concern about genetic knowledge and eugenics among people with
       disabilities.
How can the law best protect people from genetic discrimination?

      The Australian Law Reform Commission produced a landmark report
       "Essentially Yours: The Protection of Human Genetic Information in
       Australia" (2003)
      Recommendations on spectrum of issues (genetics and employment,
       insurance, healthcare and law enforcement).
Genetic modification (GM)
GM "allows us to change things in a way that to the layperson may seem
extraordinary, unnatural, or in some ways, just plain strange." There may be
perfectly sound scientific reasoning behind these changes, but they can cause
unease in broader sections of society. Among the concerns are:

      Concern that we are tinkering with life itself
      Some of the things we are doing are not "natural" - we are "playing God".
       Examples include:
            o Monkey with green fluorescent protein inserted. Created to allow
              scientists to study the progress of Huntington's Disease in the
              brain
           o Puppy with red fluorescent protein introduced to show it is a
             successful clone of a genetically modified dog.
Genetic modification and food
GM has the potential to deliver real advances in health and industry. However,
GM in food production is a contested topic. The main concerns are for human
health and for the environment should GM crops escape from defined areas.


      Australia enacted the Gene Technology Act at Commonwealth level and
       established the Office of the Gene Technology Regulator as part of the
       national regulatory framework for GM organisms.
Nanotechnologies
This is an umbrella term to cover a range of technologies in different industries.

      Challenge: to determine the significance of particle size in law and ethics
      Concerns:
          o Potential for nanoparticles to cause risks to human health
          o Adequacy of our regulatory framework
Developments in synthetic biology
      Potential for new therapeutics
      Concerns - some may use for harmful purposes
Human enhancement
The possibility of enhancing one's abilities would be enormously appealing, as is
clear from the popularity of cosmetic surgery. Human enhancement raises a
number of definitional questions:

      What is enhancement?
          o Set definitional threshold too low, and it will quickly become
            apparent that most of our lives are spent in an enhanced state. e.g.
            reading glasses.
          o Maybe we don't regard these as enhancements because they don't
            make us better than those around us, but simply bring us up to a
            similar level.
          o Enhancements are thought of as those that make us better than our
            peers.
          o Even if we were to pursue universal enhancement, the extent to
            which it would be universally available would be questionable.
      Is our understanding of enhancement historically specific?
          o "The concept of normalcy lacks both precision and moral content;
            'treatment' and 'enhancement' are morally indistinguishable" (Chan
            and Harris "In Support of Human Enhancement" (2007) 1(1)
            Studies in Ethics, Law and Technology.
While some of these questions might seem at present to be in the realm of
science fiction, this literary form speaks to our concerns about the pace of
scientific change and where it might lead us.




Professor Bennett then raised the question of why we should regulate. Aren't we
regulated enough? She cited Mr Justice Kirby: ‘in these fields, not to do anything
is, effectively, to make a decision. It is to accept that science and technology may
take our societies where they will.’ Or in Roger Brownsword’s words: ‘even non-
regulation must give some normative signal, whether of prohibition or
permission’.

In areas we regard as morally or ethically unacceptable, we can express our
disapproval in clear, unequivocal terms by prohibiting certain practices and
penalising those who disobey the rules.

But there is another role for the law – as ‘foundational and enabling’. Law creates
frameworks for appropriate interaction within boundaries set by society. It
provides the foundations for stability and guidance of interactions and institutions.
And it supports new endeavours, through:

      Marking some activities as allowable within specified parameters
      Creating rules for engagement within our shared social spaces
      Specifying the regulatory mechanisms important for balancing interests
      Creating opportunities for society to share in the benefits of new
       technologies while providing safeguards.
      At its best, law provides all of us with clarity and guidance for our actions.
In general, scientists tend to believe there are too many restrictions already on
scientific endeavours, while the public thinks there are not enough. The role of
law is to reflect the balancing of those interests.

Professor Bennett then explored the common features of new technologies. All
tend to involve:

      Complex science
          o It is difficult to understand practical aspects without specialist
            training.
          o Nonetheless, the public can understand when it affects them if the
            information is presented in an accessible way.
          o May present challenges for regulators who need to develop more
            than a lay understanding of the issues.
      Rapid change
          o Law and regulations can quickly become out of date.
          o In the 70s ‘law was marching with medicine, but in the rear and
            limping a little’. This remains the case.
          o ‘Regulatory connections’ – the law matching the science – remain
            critically important.


      Often, there is more than one technology, or a convergence of
       technologies
          o If regulators are focussed on a single technology, it can quickly
            become the case that their field of jurisdiction does not extend to
            cover the field before them.
      Uncertain risks
          o Often the understanding of risks only develops as the technology
            develops.
          o It is impossible to see all the potential risks.
      Diverse community opinions
      Ethically complex definitional questions
A key question is whether our regulatory approach should be generalist or
exceptionalist. Should we adapt existing frameworks to cover new technologies,
or choose new forms of regulation and create new regulatory bodies?

We need to ask if a technology raises different issues that can’t be covered by
existing regulatory approaches.

Professor Bennett gave genetic privacy as an example. An exceptionalist
approach would be to introduce a genetic privacy act. The Australian Law
Reform Commission recommended that we instead amend the existing Privacy
Act to ensure genetic privacy is also within the scope of its protections.

It is tempting to say each area is different and an exceptionalist approach is best.
But creating new laws may not lead to regulatory clarity, and may make
compliance more complex.

Moreover, a technology-by-technology approach to regulation does not equip us
to handle converging technologies. There may be times we need to look at the
big picture of converging technologies, and others when we need to deal with
technology-specific issues. At this point, we should pause and consider whether
the generalist or exceptionalist approach is more suitable.

Neither technology nor community values remain static. How do we ensure
regulation stays current?

It is important to review, and sometimes replace, legislation. Periodic review
gives us formal opportunities to review science, its regulation, and community
values to ensure they continue to fit together.

A crucial challenge is to develop harmonised approaches at a federal level.
National consistency is important, because inconsistent regulation can create
public confusion and impose significant compliance costs on businesses and
researchers.


The public are key stakeholders and it is vital that they are involved and
informed. But we must keep in mind that community values are not necessarily
uniform.

Both law and ethics are fraught with definitional questions. For example, what do
‘family’, ‘illness’, ‘human dignity’, or ‘human rights’ mean? When we try to
legislate on such things, we enshrine an ethical position. Any attempt to create
laws on these things will inevitably generate responses from interest groups, and
demands that the values underpinning the disparate variety of responses are
reflected in the regulations.

The law should reflect community values, but whose values? The proper role of
law is to provide a balanced approach in the face of widely diverging views.

Regulating technology encompasses dealing with risks and uncertainties. One
approach we can use to exercise caution is the precautionary principle, although
even this is subject to contested definitions. A legal perspective tends to favour
an incremental approach to law reform, with a balanced approach to viewpoints
from the community. Laws should be reviewed regularly to ensure they keep pace
with changes in our understanding of risk.

It is also beneficial to learn from other jurisdictions, and see what has worked
well and what hasn't. The globalisation of science and commerce means that we
all live and work in global spaces today. If laws are too restrictive we may lose
good scientists to other countries with more elastic regulations. If compliance
costs are too high, businesses have no compunction about relocating.
These are important considerations for the development of Australia's knowledge
economy.

We also need to consider the question of regulatory reach. An example is health
tourism, where consumers travel overseas to receive treatments that are
unavailable or too expensive at home.

The reach of our laws may be limited. We would be well-advised to be active
participants in international dialogues and legislative bodies. This can also have
a harmonising effect on laws that develop within individual countries.

One vital message is that we should be mindful of the similarities between
technologies, not just their differences.

Professor Bennett then took questions.

Q1. Around the issues of generalist and exceptionalist regulation – is it possible
to develop a new kind of regulation focussed on addressing a problem that the
technology raised, rather than focusing on the new technologies?

A1: I'd see that as coming under the umbrella of a general approach to
regulation where we have laws that aim, for example to address the issues of


privacy or occupational health and safety, the way we regulate chemicals, all
sorts of things. We have existing frameworks for those things, and with the
general approach I was thinking the sort of thing that was not technologically
specific would be fitted in. One of the things we've got to do is see whether those
existing frameworks can be adapted, and there may be times when they can or
when they can't. And if they can't...

Q1.contd. I agree with you where the existing frameworks work. My point is
where they don't work, it doesn't mean that what is put in place needs to be
technologically specific.

A1. contd. I agree with you. There may be something that doesn't fit within our
existing frameworks, but that we would nonetheless like to craft in a general
cross-technology way. And I guess what I'm really arguing is we need to ask
ourselves those sort of questions at the outset, before we start creating laws.

Q2. Mentioning the need for stability and knowing the ground rules, but also the
issue of making incremental changes in law reform, what would be the
appropriate timeframe for the reviews to be conducted, and how significant would
changes be?

A2. It's the key question here. We need to be able to provide people with stability.
If we change our laws every three weeks, nobody knows where they stand on
anything. Need to have sufficient certainty that people can manage their day-to-
day lives and businesses. But by the same token, we can create endless
certainty by never changing the law, but all that would happen is that the law
would become incredibly out of date. So we need to often adapt the law, update
it, amend it, review it, to see if it is still relevant and applicable to the areas we
have. This might be something on the scale of three to five years depending on
how quickly the field is moving. You might reach a point where things look like
they might have plateaued out for a bit, and perhaps you might be happier with a
longer time frame. Something in the order of five years or so would be an
opportunity to reflect upon whether the law was still serving the task that you had
developed it for initially, or whether it had fallen behind.

Q3: What about significant changes to government policy due to a change of
government?

A3: There are changes to government policy due to a change of government. But if
we put those issues to one side... there are political realities that do influence
these processes... how significant a change might be is something you can't say.
If you are going to have a review of the legislation to see whether it's kept pace
with the science, you can't say in advance whether the changes will be minor or
major. It depends on how well that piece of legislation is still working and doing
the task you need.




Q4: Where does responsibility lie in relation to converging technologies that fall
between existing regulators?

A4: That is the challenge that converging technologies pose for us. If we make
our regulatory frameworks too technology-specific, we have great expertise on
one issue over here, great expertise on another issue over there, but the ground
in between may not be covered. We need to look at broader regulation that is not
technology or industry specific but does kind of draw together – that is able to
look across the field and see where it's developing.

Where the responsibility lies will depend on where we decide it should be if we
decide to regulate the field, but we need to ensure that the way we approach that
issue is one where the regulators are able to address all the issues that
come up when the issues we thought of as separate start merging together and
start presenting new issues for us. It's a challenge as to how we may do that. We
may develop a regulatory agency that has some sort of oversight across
technologies that's focused very much on the convergence issues.

Q5. Who should the regulator be? Should the regulator be a scientist? How do
we get the differing views reflected within the regulatory process?

A5: I think we've done pretty well with those issues in Australia in terms of
having multi-disciplinary bodies to regulate different areas, or to undertake
various activities. It is often a case of bringing an inter-disciplinary group together
so that scientific expertise, consumer perspectives and various other
perspectives on the issue are all represented, giving an informed approach to it. I think
that's certainly a desirable way forward.

We need to make sure when we are working on the issues that we are informed
by the views in society as much as the views in science. So bringing together
inter-disciplinary groups, or if it's not structured in that way, at least ensuring that
there are mechanisms within our legislation, or at least that we regulate for
ensuring community consultation around key decisions, or calling for
submissions, and those sorts of things. It's very important that we keep that
dialogue going between all of the key people that are interested in these issues.

Q6: How many regulatory bodies are there?

Craig Cormick: Lots! Within biotechnology there are five or six regulatory
agencies; nanotechnology has probably seven or eight key regulatory agencies.
I'll take you through and explain why – for example, we have a Department of
Health looking after health, a Department of Environment looking after
environmental safety; Workplace looking after worker's safety; the OGTR looking
after gene technology; Food Standards Australia looking after food safety;
Therapeutic Goods Administration looking after health and medicines; NICNAS
(National Industrial Chemicals Notification and Assessment Scheme) looking




after industrial chemicals; APVMA (Australian Pesticides and Veterinary
Medicines Authority) etc.

They're all looking after their own sphere which evolved out of discrete areas.
The problem is a lot of these technologies cut across them.

The issue has been – and this is in every developed country in the world – we
were very happy with our silo approach to regulation, and suddenly we've got
technologies that go sideways. So we're working with the OECD towards either
cross-regulatory bodies or better ways to get together.

We formed the Health Safety and Environment working body, which
encompasses all the regulatory bodies working on nanotechnology. They come
together once a month and look for the gaps.

We also commissioned Monash University to do a study into regulatory gaps. It
found that the regulations we have are sufficient for the technologies we have at
the moment. It also identified six key trigger areas where we would have to do
more work rapidly.




Thought experiment: ‘What if we hadn’t invented the
wheel?’
The next session was a thought experiment, chaired by Professor Susan Dodds,
Dean of Arts/Professor of Philosophy, Australian Centre of Excellence for
Electromaterials Science (ACES), University of Tasmania.

Called ‘What if we hadn't invented the wheel? – Precaution, risk and uncertainty
in technological development’, its aim was to encourage the audience to think
about the ‘what ifs’. Had the precautionary principle been applied to technologies
that have already been developed, what would that tell us about our discussion
of emerging technologies now?

Often, responses to applying the precautionary principle to a technology reflect
people's own perspectives.

The classic example is, of course, the wheel and technological progress. If we
hadn't had the wheel, then some people wouldn't have been crushed, the
automobile wouldn't have been invented and so on. And on.

The wheel is often used as the beginning of a series of technological
developments that are characteristic of a certain kind of ‘advanced’ culture.
Failing to invent the wheel would have been a failure of cultural progress. This is
a disturbing suggestion because from our perspective the wheel was not
developed by a very advanced culture.

Professor Dodds posited an alternative account: What if we look at the
boomerang instead?

It was a technology appropriate for its environment (a nomadic population, and
few agricultural opportunities).

She argued that developments in technology will always be appropriate to the
environment we are in. Rather than seeing the technology as the one crucial link
in the chain of progress, we need to make a critical assessment of whether
something else would have come up given the challenges people were facing,
available resources and social structure at that point.

The precautionary principle is often used as an argument for slowing a headlong
rush down a particular technological stream.

The definition from environmentalist literature is: ‘Where a technology or activity
threatens harm to health or the environment, lack of complete scientific evidence
to establish the threat should not be a sufficient reason for allowing the activity to
go ahead without mechanisms to protect against the risk.’

If there are serious risks of potentially irreversible harm to health or environment
and there is uncertainty about the risk – clearly certainty is a very high standard


to be pushing for – is it possible to develop mechanisms to protect against this
risk?

The precautionary principle is meant to offer more than just a utilitarian calculus
of cost and benefit. Rather, it is saying ‘in the face of uncertainty, it is better to be
cautious than optimistic about probabilities of risk and benefit.’

In practice, we should proceed with a new technology only where we either know
enough about the risks involved to say that proceeding won't be harmful, or
where we can adequately avoid or mitigate the risks. But uncertainty can sneak
in here too – how failsafe does that protection need to be?

Uncertainty can also play in both directions. Sometimes by protecting against the
risk occurring we create something that is more burdensome. Sometimes when
something looks risky at one time, we can develop something that reduces that
risk.

But we also need to consider the opportunity cost of not going ahead with the
new technology, process or practice. What were the potential benefits? And what
are the risks of not proceeding?

There are serious epistemological questions involved. How much evidence is
required to say there is a risk, or there is no risk? There can be no certainty in a
social context involving multiple variables.

What alternatives do we need to consider?

If we adopt the strong version of certainty, any potentially risky activity would be
prevented. The weak version (good enough evidence) is not greatly different
from a cost benefit analysis.

The precautionary principle is not adequate to do the epistemological heavy
lifting required.

We are essentially left with two alternatives:

‘If in doubt, don't’ <-------------------> ‘If in doubt, it's okay to try it to find out’

The ‘If in doubt, don't’ approach:

       The development and marketing of products is permissible only where
        there's evidence of safety (This, by the way, is not possible to provide.
        Popper spent his career trying to explain falsifiability. There is no magic
        number of safe uses that will ever prove safety. It is flawed reasoning to
        induce that something is safe simply because it has thus far been used
        safely.)
       We need to weigh up how serious the risk of harm is.



The ‘if in doubt, it's okay to try it to find out’ approach:

       This is a market/tort approach. A product, process or technology should
        only be restricted after a wait and see period to discover if a real risk
        exists. Only if there is clear evidence of harm should we limit, restrict or
        regulate.
       We need to consider how significant the cost of not testing first is.
Examples include drugs, where there are very strong tests of efficacy and of the
likelihood of harm, and checks to make sure a drug doesn't increase the
possibility of risk.

At the other end of the spectrum are some products where the materials or
purpose are very similar to other products being used. A new spoon, say,
requires little testing.

Automobiles present a case where the argument from familiarity does not apply.
There is a higher onus on the assessment of risk, despite our being utterly
familiar with automobiles. This is because the risk involved can have catastrophic
consequences.

The precautionary principle pushes us more towards the ‘if in doubt, don't’ side.
For example, it might be wrong to assume that a material at one scale will act the
same way at the nano-scale.

The Precautionary Principle should be seen as a pragmatic heuristic
       Where a new technology or practice is introduced and there is some
        evidence of potentially serious risk, prudence recommends controlled
        introduction, regulation etc.
       If little is known about a new technology or practice, and it is potentially
        very difficult to control possible harmful effects, precautions are merited
        while finding out whether those risks are real.

Uncertainty, precaution and social epistemology – the
importance of dialogue/public engagement
Risk assessment approaches to new technologies span a wide spectrum.

"If in doubt, don't" <-------------------> "If in doubt, it's okay to try it to find out"

We can see the precautionary principle as saying, "where does this new
technology belong on that spectrum? And why do you think it shouldn't be further
over to the left?"

       Don't look at this in terms of expertise in assessing knowledge claims
        about risk of harm:



          o The less that is known about a potentially significant new
            technology the less likely it is to fit into existing frameworks of social
            expectations
      The more likely that the new tech will change social practices and
       challenge social values, the more reason to engage in both expert and
       public deliberation and evaluation.
      Public trust in and acceptance of the new technology will be shaped by the
       nature of the debate about it (transparency, accessibility, reason-giving,
       answerability to public concerns, accountability)
The precautionary principle can be seen as a call to account, issued by those
who are sceptical to those who are promising benefit without risk.

But on the other hand, as Sandy Starr argues in Science, Risk and the Price of
Precaution (2003): "Imagine a world without vaccines, penicillin, antibiotics...
Imagine transport without aeroplanes, railways, cars or bicycles; power without
gas, electricity... agriculture without... hybrid crops or the plough... 'pretty much
everything' would have been prevented or limited under the precautionary
principle".

Some of the questions we need to ask are:

      Is it accurate to say that using the precautionary principle, we wouldn't
       have the same technologies and advances?
      Is it an ethical problem that we might not have had some of these
       advances?
      Why is it, or why is it not, an ethical problem?




Debate: "Counterfactual histories": What would have
happened if the precautionary principle had/hadn't been
applied to three areas: Energy, Food/Agriculture,
Medicine/Health?
As an example: Energy – Coal – the impact of burning coal, the impact on people
in the mines, the impact on industrialisation.

Participants formed groups of three or four to pick one technology and assess it
counterfactually via the precautionary principle.

The questions to address were:

      What would have been gained or lost had the precautionary principle been
       applied prior to rolling out the technology?
      As far as you can assess, would this have been better or worse (for us,
       the environment, future generations)?
      Report back with comments about the value of the precautionary principle.

Discussions from groups – Energy – Nuclear power, Coal
Energy Group #1: "Nuclear power"
      Decided to go back to the context the technology was developed in (1950s
       "Atoms for Peace program").
      Promise: Unlimited cheap energy.
      Risk: Meltdown (local environment problems); waste; horizontal
       proliferation; decommissioning problem once reactors have finished their
       lives.
      Would the risks have been calculable at the time? If not, this would have
       made it difficult to apply the precautionary principle.
      Suggest nuclear power would not have been developed if the precautionary
       principle had been applied.
      If no development of nuclear power:
          o No nuclear weapons
          o No effects of waste yet (will be in the future)
      In 2010, nuclear power making a comeback – evolution of technologies.
Energy Group #2: "Coal"
      Again, context dependent. Risks of coal at the time of development would
       have been different to today, e.g. no climate change, but risk to workers.


      Benefits: the alternative to coal was wood, a less efficient fuel, so it is likely
       coal would have been introduced.
      2010: would be difficult to ban, because of change in industry due to coal,
       and problems with alternative (nuclear). How do we shape the regulatory
       structures when looking at alternatives now?
      Difficulty with moving over to the "if in doubt" side of spectrum – can do a
       risk benefit analysis as far as possible, but what does it mean to move
       over to the "if in doubt" side once we have come up with the conclusions.
Energy Group #3: "Nuclear power"
      Alternatives at the point where nuclear stations were being developed:
       could have gone with the thorium reactor, but that wouldn't have had a
       military use. Thorium is much more efficient and has a shorter half-life, but
       you can't use it to make bombs. In the mind-set of the time, the bigger risk
       was military aggression (Cold War).
      Very few technologies that don't have a military application!
      Relationship between technology development and power.
Energy comment:

Carbon cost in building nuclear power stations – any payoffs will come 25 years
down the track, not in the immediate future (which is where we need it).




Discussions from groups – Food and Agriculture – GMOs,
George Chaffey and irrigation in Mildura
Food and Agriculture Group #1: "GMOs"
      e.g. Rice revolution in India (1970s) – one viewpoint - fed millions that
       would have otherwise starved
      2 opposing views from the same group due to difference in disciplines:
          o "Humanities" - there are not just technological solutions for
            problems, but socio-political solutions too.
          o Industrial "silver bullet" – someone will stand to make money from
            it. Whereas with socio-political decisions, they are more difficult to
            implement, softer, and can be more agenda-ed.
      "What do we know?" vs. "what don't we know?"
          o In Nano Meets Macro, Fern Wickson asked ‘if industry stands to
            make money from the technology, are we researching the risks as
            much as the benefits?’ She said, ‘we don't know anywhere near as
            much about the risks as we do about the benefits’. So this research
            is not an apolitical area.
          o In the same book, Sue Dodds comments ‘There is a lag between
            technology development and research into risks, in part because
            we don't know what to look for. For example, in carbon
            nanotechnology, different risks are relevant to the different
            structures of carbon. But we don't know what structure may be
            used in a particular application until we do the research. We don't
            know what we're measuring until we've decided where we are
            going to go with it.’
          o Issues with golden rice, penicillin etc. – which risks do we see as
            more significant?
          o Audience discussion about influence of political movements –
            inaudible. There are risks associated with certain kinds of social
            and political decisions e.g. war.
Food and Agriculture Group #2: George Chaffey and irrigation in Mildura
      The origins of agriculture are the origins of cities.
      The context is that Chaffey had just arrived in Mildura, where a move to
       irrigation was being considered. Irrigation brings community, because
       growing irrigated crops is more labour intensive.
      Outcomes for city-side were unpredictable. Was hoped that local
       community would grow (bringing wealth).
      Alternatives at the time: Continue with pastoralism vs. Growth of
       community.



   Don't think it's possible to do the counter-factual in that case.
   Don't think that community would have foreseen the outcomes.




Discussions from groups – Medicine/Health – IVF, HRT,
Chimeric research
Medicine/Health Group #1: In-vitro fertilisation
      Discovery is always going to be in favour of looking for the benefit.
      Don't look at risks until we have determined that there is an existing
       benefit to something.
      If we applied a strict precautionary principle to IVF, it might not have been
       developed, BUT we would never know if certain risks/benefits existed
       unless we had gone down the track and had long-term outcomes. Until
       then people will focus on benefits.
      Possible losses if IVF was not developed:
          o Options for infertile or same sex couples to have a genetically-
            related child.
          o Increased options for women in terms of timeframe for having a
            child.
          o Human embryonic stem cell research.
          o Testing of embryos at risk for genetic conditions.
          o Other technologies developed.
          o Thinking about risks to IVF children might have pushed forward
            research in those areas.
      Possible positives if IVF was not developed:
          o Money put towards other alternatives.
          o No ethical conflicts raised by IVF and its technologies.
          o An increase in adoption.
          o Reduction in the stigma associated with childlessness.
          o Dispute over whether IVF brings up some health issues down the
            track.
      IVF driven by consumers as number of children available for adoption
       decreased (due to contraception use). Regulatory structure took into
       account risk – slowed down the enthusiasms that might have taken over.
Medicine/Health Group #2: Hormone Replacement Therapy
      Large number of people put onto it. Small proportion had very high risk of
       adverse effects. Adverse effects were very serious (cancer).
      The precautionary principle is strongly applied to drugs, but drugs need to
       be developed.
      Feminist theory – the medicalisation of women.



     What was the impetus for HRT? Western popularity; Eastern countries
      don't have such an emphasis on the treatment of menopause.
      The precautionary principle should be considered in the context of what
       drives the development of drugs. HRT is not a life-saving drug.
         o What is an acceptable level of risk in this case? (e.g. small
           population at risk).
         o What is an acceptable level of harm in this case? (e.g. death,
           cancer)
     Couldn't come to a conclusion as to how we should be looking at the
      precautionary principle in the medicalisation of the human condition.
Comment from the audience: What if we considered technologies that
hadn't gone ahead e.g. chimeric research
     What if we used reproductive research to look at mixed species? Many
      countries have legislation to prevent it.
     Has Australia allowed us to look at chimeric research at a very early stage
      of the embryo?
     What can you find out with the capacity to do that research?
     Chimeric research may still go forward in some circumstances, but it is not
      yet clear what the risks and concerns are.
         o For some, serious risk is the loss of human dignity and rights.
          o For others, an "ick" factor response rather than a moral reason – is
            this a good enough reason not to proceed with research? It may not
            be a good enough reason in a society that values reason. It is an
            aesthetic value.




Summing up: The Precautionary Principle is a move to
encourage a certain kind of discussion, rather than the end of an
argument.
The fact that everyone came back with different ways of summing up the risk
draws attention to the idea that the precautionary principle is more of a move or
an argument to encourage a certain kind of discussion, rather than the end of an
argument.

People brought out issues like what the contextual features were; what the
knowledge claims needed to be; the social and political issues; and the scientific
and economic aspects that would be relevant to assessing different alternatives.

The precautionary principle invites us to participate in social dialogue and debate
about emerging technologies. It doesn't necessarily draw us to a factual claim as
the end of an argument.




Session 3
Dr. Alan Saunders, ABC Radio National
Ethical Challenges in a New Era of Advanced
Technologies
Transcript
Recently I discovered that I had a whole hour to fill. And not only an hour to fill,
but an hour devoted to “Are the ethics of new technologies different, or are
they the age-old questions with added technical jargon? Do we need an ‘ethics of
new technologies’? Or a new way to express ethical questions to a wider public,
and if we do, what are they? What should we be talking about to the people who
should be living with these technologies?”

Good lord!

That’s a lot to ask of somebody who has never really seriously done moral
philosophy – I was an epistemologist. Then I was told you were all bursting with
points you wished to discuss so could I limit my presentation to half an hour? I
said, “You can have even less than that if you like!” So it will be rather less than
that and basically these are just a few observations that I’ve flung together. This
morning I began my address by reading from a novel and I’m going to do it again
(from a different novel).

“Life and Death appeared to me ideal bounds which I should first break through
and pour a torrent of light into our dark world. A new species would bless me as
its creator and source. Many happy and excellent natures would owe their being
to me. No father could claim the gratitude of his child so completely as I should
deserve theirs.”

The speaker is Victor Frankenstein in the novel that is named after him. I mention
this because if we talk about new biological technologies these days, particularly
reproductive technologies, and I’m not going to talk about any other sort of
technology this afternoon – so no nanotechnology, no electronic technologies –
in the field of biological technology the name Frankenstein is frequently invoked,
particularly by the popular press. This has a long ancestry. Isaac Asimov, the
science fiction writer, coined the term the Frankenstein Complex to describe the
theme of his collection of robot stories in the 1930s. There was a collection of
essays on genetic engineering published in 1995 called “The Frankenstein
Syndrome”. So I think it’s worthwhile, if we’re addressing the ethical issues of
biotechnology, just actually to let our minds roam a bit and see whether there are
ethical questions which are posed by that novel and what they have to tell us.

The first thing I think is interesting here is that Frankenstein speaks of Life and
Death as ideal bounds which he wants to break through and pour a torrent of


light into our dark world. This is interesting because the novel isn’t just about
science and scientific ambition – it’s also about space. It is about bounds – about
the boundaries of what makes us human. This is symbolized in the novel by the
fact that the novel covers a vast geographic space. It begins, you might recall,
following a man called Walton who’s a polar explorer near the North Pole. The
story is told by Frankenstein to Walton, while Walton is proceeding north in his
slightly thwarted voyage of discovery.

So bounds here are important, and light is important. And it’s interesting that,
having talked about bringing light into a dark world, Frankenstein then goes on to
talk about the new species which he is going to create, which suggests that he
sees his new species as existing not for its own sake, and he’s not doing it just
because he can – he sees it instrumentally as a source of knowledge – just as
we might regard chimera research, which some of you mentioned.

But the other thing the novel shows us is what happens when you pursue your
curiosity in a context that is not properly nurturing and is not properly morally
informed. And I suppose one of the questions that we need to ask ourselves is
whether that morally-informed context needs to be a different sort of context to
deal with different new technologies, or whether, as I suspect, we can use our
existing ethical ideas, doctrines and methods, and apply them in these new
situations.

Now, to return to chimeras, there’s a fair degree of controversy about this. Let me
remind you that a chimera is an animal that is composed to a greater or lesser
extent of cells from other animals. In the case of stem cells, you can create
human-to-animal chimeras – you transfer human cells to animals to study how
these human stem cells behave in a live body. So in biological research,
chimeras are produced by physically mixing cells from two different organisms,
from two independent zygotes - for example, one from a donkey and one from a
horse. Some chimeras can eventually result in the development of an adult
animal composed of cells from both donors which may be of different species.

In 1984, a chimeric “geep” was produced combining embryos from a goat and a
sheep. You can see why they called it a geep and not a shoat! The geep has
actually been very important in answering fundamental questions about
development and the techniques used to create it might one day help save
endangered species. Slightly worrying echoes there of another work of fiction,
Jurassic Park! For example if one tried to let a goat embryo gestate in a sheep,
the sheep’s immune system would reject the developing goat embryo. However,
if you used a geep that shares markers of immunity with both sheep and goat,
the goat embryo might survive. So it might be possible to extend this practice for
the purposes of preventing the extinction of some endangered species.

The idea here is it’s not enough simply to study stem cells in vitro, in a dish,
because as one philosopher put it to me, that’s a lot like testing the potential of
your car just by revving it up in the garage. The stem cells they have in labs don’t
tell you much about how they behave in a real bodily system unless you put them
into animals. In August 2003, researchers at Shanghai Second Medical University
in China reported that they had successfully fused human skin cells and dead
rabbit eggs to create the first human chimeric embryos. They were allowed to
develop for several days in a laboratory setting, then destroyed to harvest the
resulting stem cells. Because of the high therapeutic potential of human
embryonic stem cells, and the United States moratorium (mentioned earlier) on
using discarded embryos from IVF, clinics, as well as others concerned about
the use of human embryos for research, are interested in this.
And increasingly we are getting realizable projects using part-human part-animal
chimeras as living factories.

Now this raises of course a number of ethical issues, and it’s interesting again to
think of the context of moral philosophy in which these issues are now discussed.
When I was an undergraduate, moral philosophy (and this is why I probably
didn’t carry on with it) was a seriously tedious topic. We were asking what one
was saying when one said that something was good. Was one attributing a
property to it? Or, according to the emotivist theory, saying “murder is bad” was
equivalent to just saying, “Murder – boo!” Which is why the emotivist theory
became popularly known as the Boo-Hurrah Theory. This is very, very boring I
think and since then, moral philosophers have been much more willing to
address solid issues. Of course the issues that they are most frequently called
upon to address are not issues like “murder is wrong” which we are more or less
agreed on, but the borderline issues.

These are the sort of areas where those borderline issues arise. They’ve been
around for some time. As soon as the artificial heart/lung machines were widely
used in intensive care units, there was concern about whether the patients were
actually dead and the machines were merely keeping them going. Are they breathing
cadavers? Or are they really alive, and do we owe them everything we owe to
living people? These are not medical questions – this was when the clinicians
ended up phoning the local philosophy department, in a rare instance of
scientists seeking the assistance of philosophers to see whether they could
reason things out. Though the actual term bioethics has been around for a long
time – I was surprised to find that it was coined in 1927.

So what do we do about chimeras, or, more precisely, how should we think
about them? Well, I don't think that our existing moral tools are inadequate for
dealing with these questions. Suppose we went as far with chimeras as
Frankenstein went – so that we actually created para-humans, as they're called – if you
like, monsters. How would we regard them? If you read the book (Frankenstein)
you find that the creature suffers because he is despised by many people and
rejected by his creator, but he does find one or two people who are nice to him,
and we are encouraged to think that what he is in need of is human sympathy.
And following David Hume I do think that sympathy is very important as a moral
force and I don't think we have difficulty with it, even when as a result of
biological intervention, something has gone disastrously wrong. Think of those
who were born with serious deformities as a result of thalidomide – we extend to
them the full range of our sympathies that we extend to other human beings. We
don't say, 'Oh my god, they're monsters'.

Now I said that Frankenstein regarded the creation of new life as instrumental – it
was going to help to shed light on the world. The question is how far do we go
and it could well be said that he, motivated by hubris, went too far. Now, consider
the question of how much foreign material we want to take into a human body – you
could say there's a sort of slippery slope argument at play here. Yes, we do
incorporate animal matter into our bodies when we eat meat for example. Or we
transplant pig valves into our hearts just as we might transplant human foetal
tissue into mice. Does this mean we shouldn't worry about this sort of thing, or
does it mean that we have created a situation in which the boundaries of what is
human have become vague, and that this is an assault on human dignity?

This is something that Susan mentioned – are we talking about the 'ick' factor or
are we talking about a genuine affront to human dignity? I do not see here a
genuine affront to human dignity – I see a situation which we can deal with using
the moral equipment that we have inherited from two millennia of our ancestry –
more than two millennia if you want to include Chinese philosophy in that as
well. I think that these are borderline issues – these are issues where hard
philosophical thought is necessary.

To the question 'Do we need new ways to express ethical questions to the wider
public?' - I have never addressed any conference of any set of specialists
anywhere who have not liked it when I said, 'we need greater public awareness
of this'. They all said, 'we need greater public awareness'. And yes, we do need
to find new ways of expressing the ethical questions, but we don't need new
ethical methods. We need to address the ethical questions seriously – we need
to give full weight to the gravity of the questions – and we do need to stop using
the name Frankenstein. Thank you.




Summing Up of Workshop: Professor Stephen Cohen
A number of new ideas emerged as a result of this workshop.
Comment on Craig Cormick's talk – why engage the disengaged?
This talk involved the question, 'how do we engage with the public?' Some slides
showed that a hefty portion of the public is not engaged, and a hefty portion
doesn't want to be engaged. So the question arises - Why do we want to engage
with those people? Why do you think it's significant? Why do you think we have
to engage with those people? This was brought up again by Belinda who
mentioned that it's vital that the public be involved in making decisions.

Comment on Belinda Bennett's talk #1 – What is a balanced approach to
framing laws?
How does the law keep up? Do we need to have new laws (exceptionalist) or
redo the existing laws (generalist)? She mentioned that we need to frame laws
that reflect a balanced approach. So what is a "balanced approach to framing
laws"?

Comment #2 – Community values and a liberal society
“The law is there to frame community values” – I don't think we should accept
that as gospel. We have some laws which actually go against community values,
and those in a liberal society think that it's important that the laws are not just
there to reflect contemporary values – that they sometimes curtail those values.

Comment on Belinda's talk #3 - Globalisation
“If we don't keep up with the rest of the world, we will lose scientists and
business.” You can think about this in two ways, either:

   1. We don't want to lose these people and businesses, so we should allow
      this to be done here, or
   2. Because the particular technology is okay elsewhere – that's some
      evidence that we should think it's okay. But we don't want to go too far with
      that – we may think something is utterly objectionable, and the fact that it's
      done somewhere else is not a good reason for us to do it.




Comment on Susan Dodds's talk – the precautionary principle raises the fact
that there are questions to be asked.




We investigate the benefits much more thoroughly than we investigate the risks.
How do we put the risks and the benefits together to say, "should we go down
this path or not?"

Idea – Use stakeholder theory. We say who the stakeholders are and why they
should be consulted. There is an analogy between the "good use" of the idea of
the precautionary principle and the good use of stakeholder theory, i.e. waving
the flag of the precautionary principle as something important signals that there is a
very important question to be asked – have you thought about the risks? Have
you thought about the benefits? It also then asks, 'what have you thought?' Then
it allows both the person raising the question and the one answering it to
frame the question, and then make some decisions.

Precautionary principle waves the flag – it raises the fact that there are
questions to be asked. So go ahead and raise those questions – the ones by which,
as advocates of one moral side or the other, you think we should be persuaded to be
on your side.

Alan's talk showed how hard philosophical thought can be applied to problems.
But we shouldn't think that this will necessarily produce an answer
and/or a direction. What it might simply do is articulate a problem and then admit
that there are some decisions to be made now. There are many cases where the
issues you should be thinking about are laid out, and then you can make a decision
that takes those issues into account.

Opened the floor to other speakers

Responses from speakers
Craig Cormick: The precautionary principle plagues governments and policy-
makers due to its jelly-like properties – it comes out different ways depending on
how you hold it, and different people hold it differently. It's not the end of the
game - it's a flag. It's raising issues, challenges and questions you have to
address.

Belinda Bennett:
What is a "balanced approach to framing laws"?
Taking a middle path through contested areas. e.g. stem cell research –
scientists who want to do it and can see the benefits, and on the other hand
people who believe that working with the embryo is morally abhorrent.

We have to work out how we within Australia will regulate as our community
values change – as people become more familiar with what the technology
means to them and decide whether or not they are prepared to support it.

At an early stage, people's views are still formative, and it's important to respond
to that when crafting laws – take a very moderate approach. Even if that sounds
wishy-washy, it's probably safest for us to be respectful of everyone's views,
and then distil those views into the best approach to regulation. This is about compromise.
I don't think there's anything wrong with that in a democracy and a tolerant
society. Taking a moderate and balanced approach is part of doing that.

Globalisation issue –

      It’s a fact of the world we live in. Don't think that means that "anything
       goes", but it does mean that we should think about the impact of our laws on
       the broader Australian community.
      We can see that in approaches other countries have taken in areas like
       stem cell research where some countries are seen as having a more
        liberal regulatory environment, whereas others are more restrictive. This
        creates difficulties for scientists who are trying to do research.
      If we come to the view that something is wrong and not beneficial to the
       community, we shouldn't shy away from expressing that view.
Susan Dodds:
Why engage the public in these debates?

      In part about unpacking issues.
      I have spent the last few years on a project "Big picture bioethics – policy
       making in liberal democracy". Looking at issues like stem cell research
        where you have to make a policy decision one way or another: given that you
        are going to publicly fund research, you know that there's significant
       ethical disagreement that won't just disappear by everyone getting into a
       room and talking.
      How do you make a defensible decision? One aspect - having review
       periods. But a lot of it has to do with having the justification for coming
       down one way or another on the contentious issue.
A reason why you want to engage the public in that discussion is that we
don't actually know what Australians value. There has been research about
surveying values, but we don't get that in the context of people saying, 'I care
about this because... and relative to...'.

Looking at the idea of deliberative democracy –

      We don't just ask people to vote; we ask people to get together and say,
       "we need to have a policy because we'll all be affected one way or
       another. We need a policy that can track things that we think are
       significant".
      Just because you don't know someone's viewpoint on an issue doesn't
       mean it isn't there.




This brings a kind of social epistemology into the discussion – the
idea that we find out what's at stake by discussing what's going on. If it's not just
a technical question, then we need to find out more about what people think
about the issues. We currently have a relatively disengaged populace, but we
need to make the effort to get them engaged.

So there are issues about the defensibility and legitimacy of a policy, which won't
reflect the values of everyone; also, you get better support for a policy if people
feel they can be part of the decision-making – not just in the voting but
in the shaping. Public engagement can sound like a "utopian" idea, but in reality it is
much more iterative and slow-paced.

Agree with Stephen's view on stakeholder theory – is this a good way of finding out
what is valued by a group? People are constantly positioning themselves, and
stakeholder positions reflect in part the diversity of those views, and may be a
useful proxy for saying, 'here are some of the issues'.

Philosophers may not be able to tell us all the issues, but there is a
process we can go through that is respectful of the diversity of ethical
issues. People need to have the space in which the issues can be discussed.

Alan Saunders:
Why involve people in these discussions?
      "If it’s not too patronizing, I want to involve people in these discussions
       because it will be good for them".
      It will give them more control over the world in which they live.
      Less "whom" and more "who". Less "objects" and more "subjects".
      Morality and epistemology need not be that far apart.
      Important that we not just turn to the philosophers who are able to give us
       answers. Example of very ill premature babies who cannot survive without
       serious medical intervention, and if they do survive will have enormous
       physical and mental defects. A "utilitarian" philosophy is often embraced
       by people working in these areas, but it may not always be the best
       approach.
Philosophy as a way of articulating the issues but not necessarily
providing the answers:
      Socrates' dialogue with Euthyphro (recorded by Plato) – "is something good
       because it is pleasing to the gods, or is it pleasing to the gods because it
       is good?"
      Socrates made it clear how complex the issues were and that is why they
       killed him!




Comments and questions from the Audience:
Even if we've chosen not to engage in the technology – we may still be
vulnerable to the risks
Audience member: Part 1. Globalization – recognizing that there is globalization
doesn't mean that we can't decide how to live on our own in Australia.

Part 2. On the other hand, even if the things happening elsewhere
aren't happening here, given that it is a small world, we are still vulnerable. And if we
don't proceed to investigate risks – even if we've chosen not to engage in the
technology – we are vulnerable to those risks just like everybody else is.

Craig Cormick: Good point - if you choose not to share the benefits, then you
can't necessarily avoid the risks.

      Case study – CSIRO accidentally discovered a super-plague in mice –
       decided to publish it so that everyone would have access to the data, so
       they would know what to do if such a plague arose in human form. That
       also takes away the ability for anyone to use it as a human super-plague.
      Grey goo scenario in nanotechnology (could a man-made self-replicating
       technology keep replicating and take over the world?). Same point raised
        with synthetic biology – could an oil-eating bacterium be created? Yes, but
       how do we control it?
We are in a nasty spot, and only innovation can get us out of it.
Audience member: We are in a nasty spot, and only innovation can get us out of
it.

Susan Dodds: – As a result, I tend to think of the precautionary principle as a
static, rather than a dynamic principle. It stops us now, but it doesn't necessarily
allow us to get to the future that we want.

      We might need to take the situation as an evolutionary position – the position
       that we are at now, and which we will try to adjust as we go forward.
      Globalization. 99 per cent of the science we use is produced overseas
       (Cutler). We produce about 1 per cent of the world's science, we have
       about 1 per cent of the world's GDP, and science is widely traded. So you
       can decide not to do something in Australia and have the second round in
       Australia.
Does thinking ethically mean that you have to think globally?
Audience member: Does thinking ethically mean that you have to think globally?
And so, talk about how we do things in Australia as opposed to the rest of the
world might just fall off the table.




Stephen Cohen:
    I don't think that there's anything anybody has said that would be opposed
      to that view. Rather, there are matters that we have to decide, and
      suppose we decide in Australia that a certain technology/process is
      unethical – we're not just saying it's unethical to do it here – we're saying
      that it's unethical anywhere. The fact is that it might be being done
      elsewhere, but this doesn't mean that we need to change our minds. It
     might give us cause to reflect.

Susan Dodds:
   Comes back to the idea of ownership over decisions. e.g. climate
      change – our solutions can't just be ones where "they" (i.e. others) should
      do it; it should be something that we can do.
      Solutions may not be technical. Seems to me that real solutions need to
       be political and social. We need to deal differently with our ideas of where
       the world is going.
      Our philosophers are looking more globally, partly because the
       relationship between applied ethics issues and distribution of good and
       bad things (e.g. resources, pollution) has become more obvious.
      It's hard to say "ethics ends with my community". For example, paid
       surrogacy was illegal in Australia, but surrogates came in by plane. Today,
       we have the issue of children of paid surrogates wanting to trace their
       origins. We can't put the borders up saying that the decisions we make are
       going to completely shape the way we live – the world is much bigger than
       that.
Craig Cormick: Issues that are important to people – Global climate change,
State of Origin and World Cup.

      Study – A thousand people were told "you're not a local citizen, you're a global
       citizen". This resulted in fracturing and polarization – "yes, I'm a global citizen"
       vs. "No, I'm a local citizen and I want to become more involved locally".
      We see local communities becoming stronger and stronger at the same
       time as we're told about global influence. There is a sense of local
       community, then national, then a sense of the globe.
      SMH – Closed door meeting on climate change – people can see the
       impact of the problem if they look globally, but they can only see the
       solution if they look locally.

Belinda Bennett:
     Important to consider the impact on the Australian community, because we
      can only make laws in Australia. However, this doesn't mean that we
      don't consider the impact beyond our borders.




      Need to engage with international bodies who are involved in setting
       ethical standards. e.g. scientific bodies, international human rights
       statements, international ethical statements.
      Have to be aware of the limits of our influence, but that doesn't mean we
       should put our heads in the sand.
Engage the public in discussion about IT and telecommunication
Audience member comment – We could be engaging the public more on decisions
to do with information technology and communications, as Australia is in a
position to put itself at the forefront of research into these technologies, and at
the forefront of engaging the public. e.g. how will telecommunication affect your
day-to-day life in terms of telecommuting, providing quality rather than quantity.

Is there a distinction between putting a moratorium on commercialization
and putting a moratorium on research?
Audience member – Thought there could be more of a distinction between
putting a moratorium on commercialization and putting a moratorium on
research. Not in favour of putting a moratorium on research, because the
potential risks can be studied well in a lab. But now we have a proliferation of
the technologies, due to the lack of a moratorium on commercialization, and we can
suddenly find a serious ill-effect.

At what point do you stop investigating the risks?
At what point does the government decide that's enough – now you go ahead
and commercialise the process?

Craig Cormick: Good question because many people have the idea that "the
government" is impenetrable. Government is actually made up of hugely diverse
agencies, sometimes with conflicting interests.

      Comes down to earlier point that sooner or later you have to make a
       decision. Even not making a decision is a decision. And not making a
       decision can mean allowing something to creep onto the marketplace and
       then having to pull it back or pull it forward.
      Processes are complex particularly when technology is moving faster than
        the regulation/research allows (and particularly when technology comes
        from overseas).
           o We can largely control how technologies are developed in Australia and
             how they're introduced.
          o Less ability to limit the types of knowledge coming from overseas.
            There is a clear distinction between what's happening in the lab and
             what's happening commercially.
          o For example, when GM foods first came out, there was a
             moratorium on growing GM foods in Australia, but they were still
             being sold in supermarkets, because there wasn't a ban on them
              being provided to the public.
      All the government can do is take all the best data you've got at the time,
       and make the best decision you can with it. Sometimes in retrospect these
        aren't the best decisions, or, at the time, they can look like political expediency.
       The point to be made about public engagement is that if you want to broaden
        the range of inputs, and find out how they marry with community values, then
        you need deliberative processes.
          o Unless you talk to people you'll never hear if they are concerned
            about this issue, or have more pressing issues.
          o Community engagement matters because democracy matters –
             and if we live in a democracy it behoves you to seek input from
            everybody, even if they are reluctant to come to the table and talk
            to you – you still need to find out what they think.
We have a moral responsibility to keep forever monitoring for risks.
Audience Member: From a pharmaceutical background – it comes to mind that
we have a moral responsibility to keep forever monitoring for risks. Yes, go for it
(especially with regard to medicines), but I think the key thing is to keep
monitoring. The law has to be flexible enough to be able to slide back when
necessary, or to have the appropriate clauses in there.

James Hardie as an example of why we need public engagement
Audience Member: To answer why we want public engagement with the
uninterested and unbothered – take the example of James Hardie. A lot of
people affected would have been uninterested, but they needed to be consulted.

Craig Cormick: Repeat of the key question – what are the ethical discussions we
should be engaging the public on?

There is a difference between engaging the public to find out their
concerns, and engaging them to gain support for the technology
(advocacy).
Audience member: There is a difference between engaging the public to find out
their concerns, and engaging them to gain support for the technology (advocacy).
Part of the decision making procedure involves trying to inform someone and
then getting their opinion on whether to proceed, so that would be a different kind
of consultation.

The level of information available to consumers
Audience member: As a member of the public, I would still want to know what is
in the products we are using and consuming.




Belinda Bennett – A discussion could be centred on the level of information
that consumers have available to them.

      What level of info do people want to have?
      Product labelling, and availability of information. I don't feel that labelling
       alone is enough – there has to be information that goes with it so that
       people can understand.
      Audience member: Related to that, there are problems with explaining
       complicated science to people without a science background. Explaining
       things to people is all very well, but it's all wrapped up with issues of
       credibility – how do you do that?

Closing:
Stephen Cohen

I'd like to make the point that if Toyota had approached us to have a session about
the Prius, or if RailCorp had contacted us to talk about the Ethics program that
they run, we would have said no. As it was, being approached by a government
agency caused us some consternation! The AAPAE is not an advocacy group –
we have had some sessions in the past in which we were disappointed to learn that
invited speakers seemed to be pushing the company line.

It is quite unique for the AAPAE to have said to an organisation, "yes, we are
very happy for you to come to this", and it seems that, with the way things have
proceeded, we are very happy that they have come. And with that, I'd like to
close the session and thank you all for coming and participating.



