If philosophy
is the attempt “to understand how things in the broadest possible sense of the
term hang together in the broadest possible sense of the term”, as Sellars
(1962) put it, philosophy should not ignore technology. It is largely by
technology that contemporary society hangs together. It is hugely important not
only as an economic force but also as a cultural force. Indeed during the last
two centuries, when it gradually emerged as a discipline, philosophy of
technology has mostly been concerned with the impact of technology on society
and culture, rather than with technology itself. Mitcham (1994) calls this type
of philosophy of technology ‘humanities philosophy of technology’ because it is
continuous with social science and the humanities. Only recently has a branch of the philosophy of technology developed that is concerned with technology itself and that aims to understand both the practice of designing and creating artifacts (in a wide sense, including artificial processes and systems) and the nature of the things so created. This latter branch of the philosophy of
technology seeks continuity with the philosophy of science and with several
other fields in the analytic tradition in modern philosophy, such as the
philosophy of action and decision-making, rather than with social science and
the humanities.
The entry
starts with a brief historical overview, then continues with a presentation of
the themes that modern analytic philosophy of technology focuses on. This is
followed by a discussion of the societal and ethical aspects of technology, in
which some of the concerns of humanities philosophy of technology are
addressed. This twofold presentation takes into consideration both the development of technology as the outcome of a process originating within and guided by the practice of engineering, according to standards over which society exercises only limited control, and the consequences for society of the implementation of the technology so created, which result from processes over which, likewise, only limited control can be exercised.
- 1. Historical Developments
- 2. Analytic Philosophy of Technology
- 2.1. Introduction: Philosophy of technology and
philosophy of science
- 2.2. The relationship between technology and
science
- 2.3. The centrality of design to technology
- 2.4. Methodological issues: design as decision
making
- 2.5. Metaphysical issues: The status and
characteristics of artifacts
- 2.6. Other topics
- 3. Ethical and Social Aspects of Technology
- 3.1. The development of the ethics of technology
- 3.2. Approaches in the ethics of technology
- 3.3. Some recurrent themes in the ethics of
technology
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Historical Developments
1.1. The Greeks
Philosophical
reflection on technology is about as old as philosophy itself. Our oldest
testimony is from ancient Greece. There are four prominent themes. One early
theme is the thesis that technology learns from or imitates nature (Plato, Laws X 899a ff.). According to Democritus,
for example, house-building and weaving were first invented by imitating
swallows and spiders building their nests and nets, respectively (fr D154;
perhaps the oldest extant source for the exemplary role of nature is Heraclitus
fr D112). Aristotle referred to this tradition by repeating Democritus’
examples, but he did not maintain that technology can only imitate nature:
“generally art in some cases completes what nature cannot bring to a finish,
and in others imitates nature” (Physics II.8, 199a15; see also Physics II.2, and see Schummer 2001 for
discussion).
A second
theme is the thesis that there is a fundamental ontological distinction between
natural things and artifacts. According to Aristotle, Physics II.1, the former have their principles
of generation and motion inside, whereas the latter, insofar as they are
artifacts, are generated only by outward causes, namely human aims and forms in
the human soul. Natural products (animals and their parts, plants, and the four
elements) move, grow, change, and reproduce themselves by inner final causes;
they are driven by purposes of nature. Artifacts, on the other hand, cannot
reproduce themselves. Without human care and intervention, they vanish after
some time by losing their artificial forms and decomposing into (natural)
materials. For instance, if a wooden bed is buried, it decomposes to earth or
changes back into its botanical nature by putting forth a shoot. The thesis
that there is a fundamental difference between man-made products and natural
substances has had a long-lasting influence. In the Middle Ages, Avicenna
criticized alchemy on the ground that it can never produce ‘genuine’
substances. Even today, some still maintain that there is a difference between,
for example, natural and synthetic vitamin C. The modern discussion of this
theme is taken up in Section 2.5.
Aristotle’s
doctrine of the four causes—material, formal, efficient and final—can be
regarded as a third early contribution to the philosophy of technology. Aristotle
explained this doctrine by referring to technical artifacts such as houses and
statues (Physics II.3).
These causes are still very much present in modern discussions related to the
metaphysics of artifacts. Discussions of the notion of function, for example,
focus on its inherent teleological or ‘final’ character and the difficulties
this presents to its use in biology. And the notorious case of the ship of
Theseus—see this encyclopedia’s entries on material constitution, identity
over time, relative
identity, and sortals—was
introduced in modern philosophy by Hobbes as showing a conflict between unity
of matter and unity of form as principles of individuation. This conflict is
seen by many as characteristic of artifacts. David Wiggins (1980: 89) takes it
even to be the defining characteristic of artifacts.
A fourth
point that deserves mentioning is the extensive employment of technological
images by Plato and Aristotle. In his Timaeus,
Plato described the world as the work of an Artisan, the Demiurge. His account
of the details of creation is full of images drawn from carpentry, weaving,
ceramics, metallurgy, and agricultural technology. Aristotle used comparisons
drawn from the arts and crafts to illustrate how final causes are at work in natural
processes. Despite their negative appreciation of the life led by artisans, whom
they considered too much occupied by the concerns of their profession and the
need to earn a living to qualify as free individuals, both Plato and Aristotle
found technological imagery indispensable for expressing their belief in the
rational design of the universe (Lloyd 1973: 61).
1.2. Later developments; Humanities philosophy of
technology
Although
there was much technological progress in the Roman empire and during the Middle
Ages, philosophical reflection on technology did not grow at a corresponding
rate. Comprehensive works such as Vitruvius’ De
architectura (first century
BC) and Agricola’s De re
metallica (1556) paid much
attention to practical aspects of technology but little to philosophy.
In the realm
of scholastic philosophy, there was an emergent appreciation for the mechanical
arts. They were generally considered to be born of—and limited to—the mimicry
of nature. This view was challenged when alchemy was introduced in the Latin
West around the mid-twelfth century. Some alchemical writers such as Roger
Bacon were willing to argue that human art, even if learned by imitating
natural processes, could successfully reproduce natural products or even
surpass them. The result was a philosophy of technology in which human art was
raised to a level of appreciation not found in other writings until the
Renaissance. However, the last three decades of the thirteenth century
witnessed an increasingly hostile attitude by religious authorities toward
alchemy that culminated eventually in the denunciation Contra alchymistas, written by
the inquisitor Nicholas Eymeric in 1396 (Newman 1989, 2004).
The
Renaissance led to a greater appreciation of human beings and their creative efforts,
including technology. As a result, philosophical reflection on technology and
its impact on society increased. Francis Bacon is generally regarded as the
first modern author to put forward such reflection. His view, expressed in his
fantasy New Atlantis (1627), was overwhelmingly positive.
This positive attitude lasted well into the nineteenth century, incorporating
the first half-century of the industrial revolution. Karl Marx did not condemn
the steam engine or the spinning mill for the vices of the bourgeois mode of
production; he believed that ongoing technological innovations were necessary
steps toward the more blissful stages of socialism and communism of the future
(see Bimber (1990) for a recent discussion of different views on the role of
technology in Marx’s theory of historical development).
A turning point in the appreciation of technology as a socio-cultural phenomenon is marked by Samuel Butler’s Erewhon (1872), written under the influence of the Industrial Revolution and of Darwin’s On the origin of species. Butler’s book gives an account of a fictional country where all machines are banned and where the possession of a machine, or the attempt to build one, is a capital crime. The people of this country have become convinced by an argument that ongoing technical improvements are likely to lead to a ‘race’ of machines that will replace mankind as the dominant species on earth.
During the
last quarter of the nineteenth century and most of the twentieth century a
critical attitude predominated in philosophical reflection on technology. The
representatives of this attitude were, overwhelmingly, schooled in the
humanities or the social sciences and had virtually no first-hand knowledge of
engineering practice. Whereas Bacon wrote extensively on the method of science
and conducted physical experiments himself, Butler, trained for the clergy rather than in science, lacked such first-hand knowledge. The author of the first text in which the term ‘philosophy of technology’ occurred, Ernst Kapp’s Grundlinien einer Philosophie der Technik (1877), was a philologist and
historian. Most of the authors who wrote critically about technology and its
socio-cultural role during the twentieth century were philosophers of a general
outlook (Martin Heidegger, Hans Jonas, Arnold Gehlen, Günther Anders, Andrew Feenberg)
or had a background in one of the other humanities or in social science, like
literary criticism and social research (Lewis Mumford), law (Jacques Ellul),
political science (Langdon Winner) or literary studies (Albert Borgmann). The
form of philosophy of technology constituted by the writings of these and
others has been called by Carl Mitcham (1994) ‘humanities philosophy of
technology’, because it takes its point of departure in the social sciences and
the humanities rather than in the practice of technology.
Humanist
philosophers of technology tend to take the phenomenon of technology itself
almost for granted; they treat it as a ‘black box’, a unitary, monolithic,
inescapable phenomenon. Their interest is not so much to analyze and understand
this phenomenon itself but to grasp its relations to morality (Jonas, Gehlen),
politics (Winner), the structure of society (Mumford), human culture (Ellul),
the human condition (Hannah Arendt) and metaphysics (Heidegger). In this, these
philosophers are almost all openly critical of technology: all things
considered, they tend to have a negative judgment of the way technology has
affected human society and culture, or at least they single out for
consideration the negative effects of technology on human society and culture.
This does not necessarily mean that technology itself is pointed out as the
direct cause of these negative developments. In the case of Heidegger, in
particular, the paramount position of technology in modern society is a symptom
of something more fundamental, namely a wrongheaded attitude towards Being
which has been in the making for almost 25 centuries. It is therefore
questionable whether Heidegger should be considered as a philosopher of
technology, although within the traditional view he is considered to be among
the most important ones. Much the same could be said about Arendt, in
particular her discussion of technology in The
human condition (1958),
although her position in the canon of humanities philosophy of technology is
not as prominent.
In its development, humanities philosophy of technology continues to be influenced not so much by developments in philosophy (e.g. philosophy of science, philosophy of action, philosophy of mind) as by developments in the social sciences and
humanities. Of particular significance has been the emergence of ‘Science and
Technology Studies’ (STS) in the 1980s, which studies from a broad
social-scientific perspective how social, political, and cultural values affect
scientific research and technological innovation, and how these in turn affect
society, politics, and culture. We discuss authors from humanities philosophy
of technology in Section 3 on ‘Ethical and Social Aspects of Technology’, but
do not present separately and in detail the wide variety of views existing in
this field. For a detailed treatment, Mitcham’s book Thinking through technology (1994) provides an excellent overview. A collection of more recent contributions is offered by Berg Olsen, Selinger and Riis (2008); a comprehensive anthology of texts from this tradition is presented by Scharff and Dusek (2003).
In the next
section we will discuss in more detail a form of the philosophy of technology
that can be regarded as an alternative to the humanities philosophy of
technology. It emerged in the 1960s and gained momentum in the past fifteen to
twenty years. This form of the philosophy of technology, which may be called
‘analytic’, is not primarily concerned with the relations between technology
and society but with technology itself. It expressly does not look upon
technology as a ‘black box’ but as a phenomenon that deserves study. It regards
technology as a practice, basically the practice of engineering. It analyzes
this practice, its goals, its concepts and its methods, and it relates its
findings to various themes from philosophy. After having presented the major
issues of philosophical relevance in technology and engineering that emerge in
this way, we discuss the problems and challenges that technology poses for the
society in which it is practiced in the third and final section.
2. Analytic Philosophy of Technology
2.1. Introduction: Philosophy of technology and philosophy
of science
It may come
as a surprise to those fresh to the topic that the fields of philosophy of
science and philosophy of technology show such great differences, given that
few practices in our society are as closely related as science and technology.
Experimental science is nowadays crucially dependent on technology for the
realization of its research setups and for the creation of circumstances in
which a phenomenon will become observable.
Theoretical research within technology has often come to be indistinguishable from
theoretical research in science, making engineering science largely continuous
with ‘ordinary’ or ‘pure’ science. This is a relatively recent development,
which started around the middle of the nineteenth century, and is responsible
for great differences between modern technology and traditional, craft-like
techniques. The educational training that aspiring scientists and engineers
receive starts off being largely identical and only gradually diverges into a
science or an engineering curriculum. Ever since the scientific revolution of,
primarily, the seventeenth century, characterized by its two major innovations,
the experimental method and the mathematical articulation of scientific
theories, philosophical reflection on science has concentrated on the method by
which scientific knowledge is generated, on the reasons for thinking scientific
theories to be true, and on the nature of evidence and the reasons for
accepting one theory and rejecting another. Hardly ever have philosophers of
science posed questions that did not have the community of scientists, their
concerns, their aims, their intuitions, their arguments and choices, as a major
target. In contrast it is only recently that the philosophy of technology has
discovered the community of engineers.
To say that
it is understandable that philosophy of technology, but not philosophy of
science, has targeted first of all the impact of technology—and with it
science—on society and culture, because science affects society only through
technology, will not do. Right from the start of the scientific revolution,
science affected human culture and thought fundamentally and directly, not with
a detour through technology, and the same is true for later developments such
as relativity, atomic physics and quantum mechanics, the theory of evolution,
genetics, biochemistry, and the increasingly dominating scientific world view
overall. Philosophers of science overwhelmingly give the impression that they
leave questions addressing the normative, social and cultural aspects of
science gladly to other philosophical disciplines, or to historical studies.
There are exceptions, however, and things may be changing; Philip Kitcher, to
name but one prominent philosopher of science, has since 2000 written books on
the relation of science to politics, ethics and religion.
There is a major difference between the historical development of modern technology and that of modern science which may at least partly explain this situation: science emerged in the seventeenth century from philosophy itself. The answers that Galileo, Huygens, Newton, and others gave, by which
they initiated the alliance of empiricism and mathematical description that is
so characteristic of modern science, were answers to questions that had
belonged to the core business of philosophy since antiquity. Science,
therefore, kept the attention of philosophers. Philosophy of science is a
transformation of epistemology in the light of the emergence of science. The
foundational issues—the reality of atoms, the status of causality and
probability, questions of space and time, the nature of the quantum world—that
were so intensely discussed at the end of the nineteenth and the beginning of
the twentieth century are an illustration of this close relationship between
scientists and philosophers. No such intimacy has ever existed between those
same philosophers and technologists; their worlds still barely touch. To be
sure, a case can be made that, compared to the continuity existing between
natural philosophy and science, a similar continuity exists between central
questions in philosophy having to do with human action and practical
rationality and the way technology approaches and systematizes the solution of
practical problems. To investigate this connection may indeed be considered a
major theme for philosophy of technology, and more is said on it in Sections
2.3 and 2.4. This continuity appears only in hindsight, however, and dimly, as the historical development is at most a slow convergence of various strands of philosophical thinking on action and rationality, not a development into variety from a single origin. Significantly, it is only the academic outsider
Ellul who has, in his idiosyncratic way, recognized in technology the emergent
single dominant way of answering all questions concerning human action,
comparable to science as the single dominant way of answering all questions
concerning human knowledge (Ellul 1964). But Ellul was not so much interested
in investigating this relationship as in emphasizing and denouncing the social
and cultural consequences as he saw them. It is all the more important to point
out that humanities philosophy of technology cannot be differentiated from
analytic philosophy of technology by claiming that only the former is
interested in the social environment of technology. There are studies which are
rooted in analytic philosophy of science but address specifically the relation
of technology to society and culture, and equally the relevance of social
relations to the practice of technology, without taking an evaluative stand
with respect to technology; an example is (Preston
2012).
In focusing
on the practice of technology as sustained by engineers, similar to the way
philosophy of science focuses on the practice of science as sustained by
scientists, analytic philosophy of technology could be thought to amount to the
philosophy of engineering. Indeed many of the issues related to design,
discussed below in Sections 2.3 and 2.4, could be singled out as forming the
subject matter of the philosophy of engineering. The metaphysical issues
discussed in Section 2.5 could not, however, and analytic philosophy of
technology is therefore significantly broader than philosophy of engineering.
This is reflected in the very title of Philosophy
of technology and engineering sciences (Meijers
2009), an extensive up-to-date overview, which contains contributions to all of
the topics treated here. An undergraduate-level textbook which may serve as an
introduction to the field is (Vermaas et al. 2011).
2.2. The relationship between technology and science
The close
relationship between the practices of science and technology may easily keep
the important differences between the two from view. The predominant position
of science in the philosophical field of vision made it difficult for
philosophers to recognize that technology merits special attention for
involving issues that do not emerge in science. The view resulting from this lack of recognition is often presented, perhaps somewhat dramatically, as coming down to the claim that technology is ‘merely’ applied science.
A questioning
of the relation between science and technology was the central issue in one of
the earliest discussions among analytic philosophers of technology. In 1966, in
a special issue of the journal Technology
and Culture, Henryk Skolimowski argued that technology is something quite
different from science (Skolimowski 1966). As he phrased it, science concerns
itself with what is, whereas technology concerns itself with what is to be. A
few years later, in his well-known book The sciences of the artificial (1969), Herbert Simon emphasized this
important distinction in almost the same words, stating that the scientist is
concerned with how things are but the engineer with how things ought to be.
Although it is difficult to imagine that earlier philosophers were blind to
this difference in orientation, their inclination, in particular in the
tradition of logical empiricism, to view knowledge as a system of statements
may have led to a conviction that in technology no knowledge claims play a role
that cannot also be found in science. The study of technology, therefore, was not expected to pose new challenges or hold surprises regarding the interests of analytic philosophy.
In contrast,
Mario Bunge (1966) defended the view that technology is applied science, but in a subtle way
that does justice to the differences between science and technology. Bunge
acknowledges that technology is about action, but an action heavily underpinned
by theory—that is what distinguishes technology from the arts and crafts and
puts it on a par with science. According to Bunge, theories in technology come
in two types: substantive theories, which provide knowledge about the object of
action, and operative theories, which are concerned with action itself. The
substantive theories of technology are indeed largely applications of
scientific theories. The operative theories, in contrast, are not preceded by
scientific theories but are born in applied research itself. Still, as Bunge
claims, operative theories show a dependency on science in that in such
theories the method of science is employed. This includes
such features as modeling and idealization, the use of theoretical concepts and
abstractions, and the modification of theories by the absorption of empirical
data through prediction and retrodiction.
In response
to this discussion, Ian Jarvie (1966) proposed as important questions for a
philosophy of technology an inquiry into the epistemological status of technological
statements and the way technological statements are to be demarcated from
scientific statements. This suggests a thorough investigation of the various
forms of knowledge occurring in either practice, in particular, since
scientific knowledge has already been so extensively studied, of the forms of
knowledge that are characteristic of technology and are lacking, or of much
less prominence, in science. A distinction between ‘knowing that’—traditional
propositional knowledge—and ‘knowing how’—non-articulated and even
impossible-to-articulate knowledge—had been introduced by Gilbert Ryle (1949)
in a different context. The notion of ‘knowing how’ was taken up by Michael
Polanyi under the name of tacit knowledge and made a central characteristic of
technology (Polanyi 1958); the current state of the philosophical discussion is
presented in this encyclopedia’s entry on knowledge
how. However, emphasizing too much the role of unarticulated knowledge,
of ‘rules of thumb’ as they are often called, easily underplays the importance
of rational methods in technology. An emphasis on tacit knowledge may also be
ill-fit for distinguishing the practices of science and technology because the
role of tacit knowledge in science may well be more important than current
philosophy of science acknowledges, for example in inferring causal
relationships on the basis of empirical evidence. This was also an important
theme in the writings of Thomas Kuhn on scientific theory change (Kuhn 1962).
2.3. The centrality of design to technology
To claim,
with Skolimowski and Simon, that technology is about what is to be or what
ought to be rather than what is may serve to distinguish it from science but
will hardly make it understandable why so much philosophical reflection on
technology has taken the form of socio-cultural critique. Technology is an
ongoing attempt to bring the world closer to the way one wishes it to be.
Whereas science aims to understand the world as it is, technology aims to
change the world. These are abstractions, of course. For one, whose wishes
concerning what the world should be like are realized in technology? Unlike
scientists, who are often personally motivated in their attempts at describing and
understanding the world, engineers are seen, not least by engineers
themselves, as undertaking their attempts to change the world as a service to
the public. The ideas on what is to be or what ought to be are seen as
originating outside of technology itself; engineers then take it upon
themselves to realize these ideas. This view is a major source of the widespread picture of technology as being instrumental,
as delivering instruments ordered from ‘elsewhere’, as means to ends specified
outside of engineering, a picture that has served further to support the claim
that technology is neutral with respect to values, discussed in Section 3.3.1.
This view involves a considerable distortion of reality, however. Many
engineers are intrinsically motivated to change the world; in delivering ideas
for improvement they are, so to speak, their own best customers. The same is
true for most industrial companies, particularly in a market economy, where the
prospect of great profits is another powerful motivator. As a result, much
technological development is ‘technology-driven’.
To understand
where technology ‘comes from’, what drives the innovation process, is of
importance not only to those who are curious to understand the phenomenon of
technology itself but also to those who are concerned about its role in
society. Technology is a practice focused on the creation of artifacts and, of
increasing importance, artifact-based services. The design process, the structured
process leading toward that goal, forms the core of the practice of technology.
In the engineering literature, the design process is commonly represented as
consisting of a series of translational steps; see for this e.g. Suh (2001). At
the start are the customer’s needs or wishes. In the first step these are
translated into a list of functional
requirements, which then define the design task an engineer, or a team of
engineers, has to accomplish. The functional requirements specify as precisely
as possible what the device to be designed must be able to do. This step is
required because customers usually focus on just one or two features and are
unable to articulate the requirements that are necessary to support the
functionality they desire. In the second step, the functional requirements are
translated into design specifications, which specify the exact physical parameters of crucial components by which the functional requirements are to be met. The design parameters are combined and amended such that a blueprint of the device results. The blueprint contains all the details that must be known for the final step, the manufacture of the device, to take place. It is tempting to consider the blueprint, rather than a finished copy of the device, as the end result of the design process. However, actual copies of a device are crucial for the purpose of prototyping and testing. Prototyping and testing presuppose that the sequence of steps making up the design process can, and often will, contain iterations, leading to revisions of the design parameters and/or the functional requirements. Even though, certainly for mass-produced items, the manufacture
of a product for delivery to its customers or to the market comes after the
closure of the design phase, the manufacturing process is often reflected in
the functional requirements of a device, for example in putting restrictions on
the number of different components of which the device consists. Ease of
maintenance is often a functional requirement as well. An important modern
development is that the complete life cycle of an artifact is now considered to
be the designing engineer’s concern, up till the final stages of the recycling
and disposal of its components and materials, and the functional requirements
of any device should reflect this. From this point of view, neither a blueprint
nor a prototype can be considered the end product of engineering design.
The biggest
idealization that this scheme of the design process contains is arguably
located at the start. Only in a minority of cases does a design task originate
in a customer need or wish for a particular artifact. First of all, as already
suggested, many design tasks are defined by engineers themselves, for instance,
by noticing something to be improved in existing products. But more often than
not design starts with a problem pointed out by some societal agent, which
engineers are then invited to solve. Many such problems, however, are
ill-defined or wicked problems, meaning that it is not at
all clear what the problem is exactly and what a solution to the problem would
consist in. The ‘problem’ is a situation that people—not necessarily the people
‘in’ the situation—find unsatisfactory, but typically without being able to
specify a situation that they find more satisfactory in other terms than as one
in which the problem has been solved. In particular it is not obvious that a
solution to the problem would consist in some artifact, or some artifactual
system or process, being made available or installed. Engineering departments
all over the world advertise that engineering is problem solving, and engineers
easily seem confident that they are best qualified to solve a problem when they
are asked to, whatever the nature of the problem. What is more, politics has
tended to support engineers in this attitude. This has led to the phenomenon of
a technological fix, the ‘solution’ of a problem by technical means, that is, the delivery of an artifact or artifactual process, where it is questionable, to say the least, whether this solves the problem or whether it was the best way of handling it. A candidate example of a technological fix for the problem of global
warming would be the currently much debated option of injecting sulfate
aerosols into the stratosphere to offset the warming effect of greenhouse gases
such as carbon dioxide and methane. See for a discussion of technological
fixing e.g. Volti (2009: 26–32). Given this situation, and its hazards, the notion of a problem and a taxonomy of problems deserve more philosophical attention than they have hitherto received.
These wicked
problems are often broadly social problems, which would best be met by some
form of social interference. In defense of the engineering view, it could
perhaps be said that the repertoire of ‘proven’ ways of social interference is
meager. The temptation of technical fixes could be overcome—at least that is
how an engineer would see it—by the inclusion of the social sciences in the
systematic development and application of knowledge to the solution of human
problems. This, however, is a controversial view. Social engineering is to many a specter to be kept at as large a distance as possible rather than an ideal to be pursued. Karl Popper referred to acceptable forms of implementing social change as ‘piecemeal social engineering’ and contrasted it with the revolutionary but completely unfounded schemes advocated by, e.g., Marxism. In this encyclopedia’s entry on Karl Popper, however, his choice of words is called ‘rather unfortunate’. This topic also deserves more attention than it seems to be currently receiving.
An important
input for the design process is scientific knowledge: knowledge about the
behavior of components and the materials they are composed of in specific
circumstances. This is the point where science is applied. However, much of
this knowledge is not directly available from the sciences, since it often
concerns extremely detailed behavior in very specific circumstances. This
scientific knowledge is therefore often generated within technology, by the
engineering sciences. But apart from this very specific scientific knowledge,
engineering design involves various other sorts of knowledge. In his book What engineers know and how they
know it (Vincenti 1990), the
aeronautical engineer Walter Vincenti gave a six-fold categorization of
engineering design knowledge (leaving aside production and operation as the
other two basic constituents of engineering practice). Vincenti distinguishes
- Fundamental
design concepts, including primarily the operational principle and the
normal configuration of a particular device;
- Criteria
and specifications;
- Theoretical
tools;
- Quantitative
data;
- Practical
considerations;
- Design
instrumentalities.
The fourth
category concerns the quantitative knowledge just referred to, and the third
the theoretical tools used to acquire it. These two categories can be assumed
to match Bunge’s notion of substantive technological theories. The status of
the remaining four categories is much less clear, however, partly because they are less familiar, or not familiar at all, from the well-explored context of science. Vincenti claims that these categories represent prescriptive forms of knowledge rather than descriptive ones. Here, the activity of design introduces
an element of normativity, which is absent from scientific knowledge. Take such
a basic notion as ‘operational principle’, which refers to the way in which the
function of a device is realized, or, in short, how it works. This is still a
purely descriptive notion. Subsequently, however, it plays a role in arguments
that seek to prescribe a course of action to someone who has a goal that could
be realized by the operation of such a device. At this stage, the issue changes
from a descriptive to a prescriptive or normative one.
Although the
notion of an operational principle—a term that seems to originate with Polanyi
(1958)—is central to engineering design, no single clear-cut definition of it
seems to exist. Disentangling descriptive from prescriptive aspects in an analysis of technical action and its constituents is therefore a task that has hardly begun. This task requires a clear view on the
extent and scope of technology. If one follows Joseph Pitt in his book Thinking about technology (2000) and defines technology broadly
as ‘humanity at work’, then to distinguish between technological action and
action in general becomes difficult, and the study of technological action must
absorb all descriptive and normative theories of action, including the theory
of practical rationality, and much of theoretical economics in its wake. There
have indeed been attempts at such an encompassing account of human action, for
example Tadeusz Kotarbinski’s Praxiology (1965), but a perspective of such
generality makes it difficult to arrive at results of sufficient depth. It would be a challenge for philosophy to specify the differences among action forms and the reasoning grounding them in three prominent practices: technology, organization and management, and economics.
A more
restricted attempt at such an approach is Ilkka Niiniluoto’s (1993). According
to Niiniluoto, the theoretical framework of technology, as the practice that is concerned with what the world should be like rather than with what it is, the framework that forms the counterpoint to the descriptive framework of science, is design science. The content of design science, the counterpoint to the theories and explanations that form the content of descriptive science, would then be formed by technical norms, statements of the form ‘If one wants to achieve X, one should do Y’. The notion of a technical norm derives from Georg Henrik von Wright’s Norm and action (1963). Technical norms need to be distinguished from anankastic statements expressing natural necessity, of the form ‘If X is to be achieved, Y needs to be done’; the latter have a truth value but the former do not. Von Wright himself, however, wrote that he did
not understand the mutual relations between these statements. Ideas on what
design science is and can and should be are evidently related to the broad
problem area of practical rationality—see this encyclopedia’s entries on practical
reason and instrumental rationality—and also to means-ends
reasoning, discussed in the next section.
2.4. Methodological issues: design as decision making
Design is an
activity that is subject to rational scrutiny but in which creativity is
considered to play an important role as well. Since design is a form of action,
a structured series of decisions to proceed in one way rather than another, the
form of rationality that is relevant to it is practical rationality, the
rationality incorporating the criteria on how to act, given particular
circumstances. This suggests a clear division of labor between the part to be
played by rational scrutiny and the part to be played by creativity. Theories
of rational action generally conceive their problem situation as one involving
a choice among various courses of action open to the agent. Rationality then
concerns the question how to decide among given options, whereas creativity
concerns the generation of these options. This distinction is similar to the
distinction between the context of justification and the context of discovery
in science. The suggestion that is associated with this distinction, however,
that rational scrutiny only applies in the context of justification, is
difficult to uphold for technological design. If the initial creative phase of
option generation is conducted sloppily, the result of the design task can
hardly be satisfactory. Unlike the case of science, where the practical
consequences of entertaining a particular theory are not taken into
consideration, the context of discovery in technology is governed by severe
constraints of time and money, and an analysis of the problem how best to proceed
certainly seems in order. There has been little philosophical work done in this
direction; an overview of the issues is given by Kroes, Franssen and
Bucciarelli (2009).
The ideas of
Herbert Simon on bounded rationality (see, e.g., Simon 1982) are relevant here,
since decisions on when to stop generating options and when to stop gathering
information about these options and the consequences of adopting them are
crucial in decision making if informational overload and calculative
intractability are to be avoided. However, it has proved difficult to further
develop Simon’s ideas on bounded rationality. Another notion that is relevant
here is means-ends reasoning. In order to be of any help here, theories of
means-ends reasoning should then concern not just the evaluation of given means
with respect to their ability to achieve given ends, but also the generation or
construction of means for given ends. Such theories, however, are not yet
available; for a proposal on how to develop means-ends reasoning in the context
of technical artifacts, see Hughes, Kroes and Zwart (2007). In the practice of
technology, alternative proposals for the realization of particular functions
are usually taken from ‘catalogs’ of existing and proven realizations. These
catalogs are extended by ongoing research in technology rather than under the
urge of particular design tasks.
When
engineering design is conceived as a process of decision making, governed by
considerations of practical rationality, the next step is to specify these
considerations. Almost all theories of practical rationality conceive of it as
a reasoning process where a match between beliefs and desires or goals is
sought. The desires or goals are represented by their value or utility for the
decision maker, and the decision maker’s problem is to choose an action that
realizes a situation that has maximal value or utility among all the situations
that could be realized. If there is uncertainty concerning the situations that
will be realized by a particular action, then the problem is conceived as
aiming for maximal expected value or utility. Now the instrumental perspective
on technology implies that the value that is at issue in the design process
viewed as a process of rational decision making is not the value of the
artifacts that are created. Those values are the domain of the users of the technology so created. They are
supposed to be represented in the functional requirements defining the design
task. Instead the value to be maximized is the extent to which a particular
design meets the functional requirements defining the design task. It is in
this sense that engineers share an overall perspective on engineering design as
an exercise in optimization.
But although optimization is a value-oriented notion, it is not itself
perceived as a value driving engineering design.
The
functional requirements that define most design problems do not prescribe
explicitly what should be optimized; usually they set levels to be attained
minimally. It is then up to the engineer to choose how far to go beyond meeting
the requirements in this minimal sense. Efficiency,
in energy consumption and use of materials first of all, is then often a prime
value. Under the pressure of society, other values have come to be
incorporated, in particular safety and, more recently, sustainability.
Sometimes it is claimed that what engineers aim to maximize is just one factor,
namely market success. Market success, however, can only be assessed after the
fact. The engineer’s maximization effort will instead be directed at what are
considered the predictors of market success. Meeting the functional
requirements and being relatively efficient and safe are plausible candidates
as such predictors, but additional methods, informed by market research, may
introduce additional factors or may lead to a hierarchy among the factors.
Choosing the
design option that maximally meets all the functional requirements (which may
but need not originate with the prospective user) and all other considerations
and criteria that are taken to be relevant, then becomes the practical
decision-making problem to be solved in a particular engineering-design task.
This creates several methodological problems. Most important of these is that
the engineer is facing a multi-criteria decision problem. The various
requirements come with their own operationalizations in terms of design
parameters and measurement procedures for assessing their performance. This
results in a number of rank orders or quantitative scales which represent the
various options out of which a choice is to be made. The task is to come up
with a final score in which all these results are ‘adequately’ represented,
such that the option that scores best can be considered the optimal solution to
the design problem. Engineers describe this situation as one where trade-offs have to be made: in judging the merit
of one option relative to other options, a relatively bad performance on one
criterion can be balanced by a relatively good performance on another
criterion. An important problem is whether a rational method for doing this can
be formulated. It has been argued by Franssen (2005) that this problem is
structurally similar to the well-known problem of social choice, for which
Kenneth Arrow proved his notorious impossibility theorem in 1950, implying that
no general rational solution method exists for this problem. This poses serious
problems for the claim of engineers that their designs are optimal solutions,
since Arrow’s theorem implies that in a multi-criteria problem the notion of
‘optimal’ cannot be rigorously defined.
This result seems to exempt a crucial aspect of engineering activity from philosophical scrutiny, and it could be used to defend the opinion that engineering is at least partly an art, not a science. Instead of surrendering to the result, however, which has a significance extending far beyond engineering and even beyond decision making in general, we should perhaps conclude that
there is still a lot of work to be done on what might be termed, provisionally,
‘approximative’ forms of reasoning. One form of reasoning to be included here
is Herbert Simon’s bounded rationality, plus the related notion of
‘satisficing’. Since their introduction in the 1950s (Simon 1957) these two
terms have found wide usage, but we are still lacking a general theory of
bounded rationality. It may be in the nature of forms of approximative
reasoning such as bounded rationality that a general theory cannot be had, but
even a systematic treatment from which such an insight could emerge seems to be
lacking.
Another problem
for the decision-making view of engineering design is that in modern technology
almost all design is done by teams. Such teams are composed of experts from
many different disciplines. Each discipline has its own theories, its own
models of interdependencies, its own assessment criteria, and so forth, and the
professionals belonging to these disciplines must be considered as inhabitants
of different object worlds,
as Louis Bucciarelli (1994) phrases it. The different team members are,
therefore, likely to disagree on the relative rankings and evaluations of the
various design options under discussion. Agreement on one option as the overall best one is here even less likely to be arrived at by an algorithmic method exemplifying
engineering rationality. Instead, models of social interaction, such as
bargaining and strategic thinking, are relevant here. An example of such an
approach to an (abstract) design problem is presented by Franssen and
Bucciarelli (2004).
To look in
this way at technological design as a decision-making process is to view it
normatively from the point of view of practical or instrumental rationality. At
the same time it is descriptive in that it is a description of how engineering
methodology generally presents the issue of how to solve design problems. From
that somewhat higher perspective there is room for all kinds of normative
questions that are not addressed here, such as whether the functional
requirements defining a design problem can be seen as an adequate
representation of the values of the prospective users of an artifact or a
technology, or by which methods values such as safety and sustainability can
best be elicited and represented in the design process. These issues will be
taken up in Section 3.
2.5. Metaphysical issues: The status and characteristics of
artifacts
Understanding
the process of designing artifacts is the theme in philosophy of technology
that most directly touches on the interests of engineering practice. This is
hardly true for another issue of central concern to analytic philosophy of
technology, which is the status and the character of artifacts. This is perhaps
not unlike the situation in the philosophy of science, where working scientists
seem also to be much less interested in investigating the status and character of
models and theories than philosophers are.
Artifacts are
man-made objects: they have an author (see Hilpinen (1992) and Hilpinen’s
article on artifacts in
this encyclopedia). The artifacts that are of relevance to technology are, in
particular, made to serve a purpose. This excludes, within the set of all
man-made objects, on the one hand byproducts and waste products and on the
other hand works of art. Byproducts and waste products result from an intentional act to make something, but they are not precisely what was intended, even though the maker may be well aware that they will be created. Works of art result from an intention directed
at their creation (although in exceptional cases of conceptual art, this
directedness may involve many intermediate steps) but it is contested whether
artists include in their intentions concerning their work an intention that the
work serves some purpose. A further discussion of this aspect belongs to the
philosophy of art. An interesting general account has been presented by Dipert
(1993).
Technical
artifacts, then, are made to serve some purpose, generally to be used for
something or to act as a component in a larger artifact, which in its turn is
either something to be used or again a component. Whether end product or
component, an artifact is ‘for something’, and what it is for is called the
artifact’s function.
Several researchers have emphasized that an adequate description of artifacts
must refer both to their status as tangible physical objects and to the
intentions of the people engaged with them. Kroes and Meijers (2006) have
dubbed this view ‘the dual nature of technical artifacts’; its most mature
formulation can be found in Kroes (2012). They suggest that the two aspects are
‘tied up’, so to speak, in the notion of artifact function. This gives rise to
several problems. One, which will be passed over quickly because little
philosophical work seems to have been done concerning it, is that structure and
function mutually constrain each other, but the constraining is only partial.
It is unclear whether a general account of this relation is possible and what
problems need to be solved to arrive there. There may be interesting
connections with the issue of multiple realizability in the philosophy of mind
and with accounts of reduction in science, but these have not yet been widely
explored; an exception is (Mahner and Bunge 2001).
It is equally
problematic whether a unified account of the notion of function as such is
possible, but this issue has received considerably more philosophical
attention. The notion of function is of paramount importance for characterizing
artifacts, but the notion is used much more widely. The notion of an artifact’s
function seems to refer necessarily to human intentions. Function is also a key
concept in biology, however, where no intentionality plays a role, and it is a
key concept in cognitive science and the philosophy of mind, where it is
crucial in grounding intentionality in non-intentional, structural and physical
properties. Up till now there is no accepted general account of function that
covers both the intentionality-based notion of artifact function and the
non-intentional notion of biological function—not to speak of other areas where
the concept plays a role, such as the social sciences. The most comprehensive theory, which has the ambition to account for the biological, the cognitive, and the intentional notions alike, is Ruth Millikan’s (Millikan 1984); for
criticisms and replies, see Preston (1998, 2003), Millikan (1999), Vermaas and
Houkes (2003) and Houkes and Vermaas (2010). The collection of essays edited by
Ariew, Cummins and Perlman (2002) presents an introduction to the general topic of defining the notion of function, although the emphasis is,
as is generally the case in the literature on function, on biological
functions.
Against the
view that the notion of function refers necessarily to intentionality, at least
in the case of artifacts, it could be argued that even there, when discussing
the functions of the components of a larger device and their interrelations,
the intentional ‘side’ of these functions is of secondary importance only.
This, however, would be to ignore the possibility of the malfunctioning of such components. This notion seems
to be definable only in terms of a mismatch between actual behavior and
intended behavior. The notion of malfunction also sharpens an ambiguity in the
general reference to intentions when characterizing technical artifacts. These artifacts
usually engage many people, and the intentions of these people may not all pull
in the same direction. A major distinction can be drawn between the intentions
of the actual user of an artifact for a particular purpose and the intentions
of the artifact’s designer. Since an artifact may be used for a purpose
different from the one for which its designer intended it to be used, and since
people may also use natural objects for some purpose or other, one is invited
to allow that artifacts can have multiple functions, or to enforce a hierarchy
among all relevant intentions in determining the function of an artifact, or to
introduce a classification of functions in terms of the sorts of determining
intentions. In the latter case, which is a sort of middle way between the two
other options, one commonly distinguishes between the proper function of an artifact as the one intended by
its designer and the accidental
function of the artifact as
the one given to it by some user on private considerations. Accidental use can
become so common, however, that the original function drops out of memory.
Closely
related to the issue of the extent to which use and design determine the function of
an artifact is the problem of characterizing artifact kinds. It may seem that
we use functions to classify artifacts: an object is a knife because it has the
function of cutting, or more precisely, of enabling us to cut. It is hardly
recognized, however, that the link between function and kind-membership is not
that straightforward. The basic kinds in technology are, for example, ‘knife’,
‘airplane’ and ‘piston’. The members of these kinds have been designed in order
to be used to cut something with, to transport something through the air and to
generate mechanical movement through thermodynamic expansion. However, one
cannot create a particular kind of artifact just by designing something with
the intention that it be used for some particular purpose: a member of the kind
so created must actually be useful for that purpose. Despite innumerable design
attempts and claims, the perpetual motion machine is not a kind of artifact. A
kind like ‘knife’ is defined, therefore, not only by the intention of the
designer of each of its members that it be useful for cutting but also by an
operational principle known to these designers, and on which they based their
design. This is, in a different setting, also defended by Thomasson, who in her
characterization of what she in general calls an artifactual kind says that such a kind is defined by
the designer’s intention to make something of that kind, by a substantive idea
that the designer has of how this can be achieved, and by his or her largely
successful achievement of it (Thomasson 2003, 2007). Qua sorts of kinds in
which artifacts can be grouped, a distinction must therefore be made between a
kind like ‘knife’ and a corresponding but different kind ‘cutter’. A ‘knife’
indicates a particular way a ‘cutter’ can be made. One can also cut, however,
with a thread or line, a welding torch, a water jet, and undoubtedly by other
sorts of means that have not yet been thought of. A ‘cutter’ is an example of
what could be looked upon as a truly functional kind. As such, it is subject to
the conflict between use and design: one could mean by ‘cutter’ anything that can be used for cutting or anything that has been designed to be used for
cutting, by the application of whatever operational principle, presently known
or unknown.
This
distinction between artifact kinds and functional kinds is relevant for the
status of such kinds in comparison to other notions of kinds. Philosophy of
science has emphasized that the concept of natural kind, such as exemplified by
‘water’ or ‘atom’, lies at the basis of science. On the other hand it is
generally taken for granted that there are no regularities that all knives or
airplanes or pistons answer to. This, however, is loosely based on
considerations of multiple realizability that apply only to functional kinds,
not to artifact kinds. Artifact kinds share an operational principle that gives
them some commonality in physical features, and this commonality becomes
stronger once a particular artifact kind is subdivided into narrower kinds.
Since these kinds are specified in terms of physical and geometrical parameters, they are much closer to the natural kinds of science, in that they support law-like regularities; see Soavi (2009) for a defense of this position.
A recent collection of essays discussing the metaphysics of artifacts and
artifact kinds is (Franssen, Kroes, Reydon and Vermaas 2014).
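The distinction can be pictured, loosely, in programming terms. The following minimal Python sketch is an analogy of our own devising, not a claim made in this literature: a functional kind such as ‘cutter’ behaves like an abstract interface that can be realized by physically dissimilar implementations, whereas an artifact kind such as ‘knife’ corresponds to one concrete implementation embodying a particular operational principle.

from abc import ABC, abstractmethod

class Cutter(ABC):
    # Functional kind: defined only by what its members are for.
    @abstractmethod
    def cut(self, material): ...

class Knife(Cutter):
    # Artifact kind: a specific operational principle (a sharpened blade).
    def cut(self, material):
        return f"slicing {material} with a sharpened edge"

class WaterJet(Cutter):
    # A physically dissimilar realization of the same functional kind.
    def cut(self, material):
        return f"eroding {material} with a high-pressure stream"

# Multiple realizability: nothing physical need be common to all cutters,
# whereas all knives share the blade principle and hence some law-like
# physical regularities.
for tool in (Knife(), WaterJet()):
    print(tool.cut("steel plate"))

On this analogy, the position defended by Soavi is that artifact kinds, like the concrete classes, support law-like regularities, while truly functional kinds, like the interface, do not.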
2.6. Other topics
There is at
least one additional technology-related topic that ought to be mentioned
because it has created a good deal of analytic philosophical literature, namely
Artificial Intelligence and related areas. A full discussion of this vast field
is beyond the scope of this entry, however. Information is to be found in this
encyclopedia’s entries on Turing machines, the Church-Turing thesis, computability and complexity, the Turing test, the Chinese room argument, the computational theory of mind, functionalism, multiple realizability, and the philosophy of computer science.
3. Ethical and Social Aspects of Technology
3.1. The development of the ethics of technology
It was not
until the twentieth century that the development of the ethics of technology as
a systematic and more or less independent subdiscipline of philosophy started.
This late development may seem surprising given the large impact that
technology has had on society, especially since the industrial revolution.
A plausible
reason for this late development of ethics of technology is the instrumental
perspective on technology that was mentioned in Section 2.2. This perspective
implies, basically, a positive ethical assessment of technology: technology
increases the possibilities and capabilities of humans, which seems in general
desirable. Of course, since antiquity, it has been recognized that the new
capabilities may be put to bad use or lead to human hubris. Often, however, these
undesirable consequences are attributed to the users of technology, rather than
the technology itself, or to its developers. This vision is known as the instrumental vision of technology, and it results in the so-called neutrality thesis.
The neutrality thesis holds that technology is a neutral instrument that can be
put to good or bad use by its users. During the twentieth century, this
neutrality thesis met with severe critique, most prominently by Heidegger and
Ellul, who have been mentioned in this context in Section 2, but also by
philosophers from the Frankfurt School (Adorno, Horkheimer, Marcuse, Habermas).
The scope and
the agenda for ethics of technology to a large extent depend on how technology
is conceptualized. The second half of the twentieth century has witnessed a
richer variety of conceptualizations of technology that move beyond the
conceptualization of technology as a neutral tool, as a world view or as a
historical necessity. This includes conceptualizations of technology as a
political phenomenon (Winner, Feenberg, Sclove), as a social activity (Latour,
Callon, Bijker and others in the area of science and technology studies), as a
cultural phenomenon (Ihde, Borgmann), as a professional activity (engineering
ethics, e.g., Davis), and as a cognitive activity (Bunge, Vincenti). Despite
this diversity, the development in the second half of the twentieth century is
characterized by two general trends. One is a move away from technological
determinism and the assumption that technology is a given self-contained
phenomenon which develops autonomously to an emphasis on technological
development being the result of choices (although not necessarily the intended
result). The other is a move away from ethical reflection on technology as such
to ethical reflection of specific technologies and to specific phases in the
development of technology. Both trends together have resulted in an enormous
increase in the number and scope of ethical questions that are asked about
technology. The developments also imply that the ethics of technology must be adequately empirically informed, not only about the exact consequences of specific
technologies but also about the actions of engineers and the process of
technological development. This has also opened the way to the involvement of
other disciplines in ethical reflections on technology, such as Science and
Technology Studies (STS) and Technology Assessment (TA).
3.2. Approaches in the ethics of technology
Not only is
the ethics of technology characterized by a diversity of approaches, it might
even be doubted whether something like a subdiscipline of ethics of technology,
in the sense of a community of scholars working on a common set of problems,
exists. The scholars studying ethical issues in technology have diverse
backgrounds (e.g., philosophy, STS, TA, law, political science) and they do not
always consider themselves (primarily) ethicists of technology. To give the
reader an overview of the field, three basic approaches or strands that might
be distinguished in the ethics of technology will be discussed.
3.2.1. Cultural and political approaches
Both cultural
and political approaches build on the traditional philosophy and ethics of
technology of the first half of the twentieth century. Whereas cultural
approaches conceive of technology as a cultural phenomenon that influences our
perception of the world, political approaches conceive of technology as a
political phenomenon, i.e. as a phenomenon that is ruled by and embodies
institutional power relations between people.
Cultural
approaches are often phenomenological in nature or at least position themselves
in relation to phenomenology as post-phenomenology. Examples of philosophers in
this tradition are Don Ihde, Albert Borgmann, Peter-Paul Verbeek and Evan
Selinger (e.g., Borgmann 1984; Ihde 1990; Verbeek 2005, 2011). The approaches
are usually influenced by developments in STS, especially the idea that
technologies contain a script that influences not only people’s perception of
the world but also human behavior, and the idea of the absence of a fundamental
distinction between humans and non-humans, including technological artifacts
(Akrich 1992; Latour 1992; Latour 1993; Ihde and Selinger 2003). The
combination of both ideas has led some to claim that technology has (moral)
agency, a claim that is discussed below in Section 3.3.1.
Political
approaches to technology mostly go back to Marx, who assumed that the material
structure of production in society, in which technology is obviously a major
factor, determined the economic and social structure of that society.
Similarly, Langdon Winner has argued that technologies can embody specific
forms of power and authority (Winner 1980). According to him, some technologies
are inherently normative in the sense that they require or are strongly
compatible with certain social and political relations. Railroads, for example,
seem to require a certain authoritative management structure. In other cases,
technologies may be political due to the particular way they have been
designed. Some political approaches to technology are inspired by (American)
pragmatism and, to a lesser extent, discourse ethics. A number of philosophers,
for example, have pleaded for a democratization of technological development
and the inclusion of ordinary people in the shaping of technology (Winner 1983;
Sclove 1995; Feenberg 1999).
Although
political approaches have obviously ethical ramifications, many philosophers
who have adopted such approaches do not engage in explicit ethical reflection
on technology. An interesting recent exception, and an attempt to consolidate a
number of recent developments and to articulate them into a more general
account of what an ethics of technology should look like, is the collection of
essays Pragmatist ethics for a
technological culture (Keulartz
et al. 2002). In this book, the authors plead for a revival of the pragmatist
tradition in moral philosophy because it is better suited to deal with a number of
moral issues in technology. Instead of focusing on how to reach and justify
normative judgments about technology, a pragmatist ethics focuses on how to
recognize and trace moral problems in the first place. Moreover, the process of
dealing with these problems is considered more important than the outcome.
3.2.2. Engineering ethics
Engineering
ethics is a relatively new field of education and research. It started off in
the 1980s in the United States, initially as an educational effort. Engineering
ethics is concerned with ‘the actions and decisions made by persons,
individually or collectively, who belong to the profession of engineering’
(Baum 1980: 1). According to this approach, engineering is a profession, in the
same way as medicine is a profession.
Although
there is no agreement on how exactly a profession should be defined, the
following characteristics are often mentioned:
- A
profession relies on specialized knowledge and skills that require a long
period of study;
- The
occupational group has a monopoly on the carrying out of the occupation;
- The
assessment of whether the professional work is carried out in a competent
way is done by, and it is accepted that this can only be done by,
professional peers;
- A
profession provides society with products, services or values that are
useful or worthwhile for society, and is characterized by an ideal of
serving society;
- The
daily practice of professional work is regulated by ethical standards,
which are derived from or relate to the society-serving ideal of the
profession.
Typical
ethical issues that are discussed in engineering ethics are professional
obligations of engineers as exemplified in, for example, codes of ethics of
engineers, the role of engineers versus managers, competence, honesty,
whistle-blowing, concern for safety and conflicts of interest (Davis 1998,
2005; Martin and Schinzinger 2005; Harris, Pritchard, and Rabins 2008).
Recently, a
number of authors have pleaded for broadening the traditional scope of
engineering ethics (e.g., Herkert 2001). This call for a broader approach
derives from two concerns. One concern is that the traditional micro-ethical
approach in engineering ethics tends to take the contexts in which engineers
have to work as given, while major ethical issues pertain to how this context
is ‘organized’. Another concern is that the traditional micro-ethical focus
tends to neglect issues relating to the impact of technology on society or
issues relating to decisions about technology. Broadening the scope of
engineering ethics would then, among other things, imply more attention to such
issues as sustainability and social justice.
3.2.3. Ethics of specific technologies
The last
decades have witnessed an increase in ethical inquiries into specific
technologies. One of the most visible new fields is probably computer ethics
(e.g., Floridi 2010; Johnson 2009; Weckert 2007; Van den Hoven and Weckert
2008), but biotechnology has spurred dedicated ethical investigations as well
(e.g., Sherlock and Morrey 2002; Thompson 2007). More traditional fields like
architecture and urban planning have also attracted specific ethical attention
(Fox 2000). More recently, nanotechnology and so-called converging technologies
have led to the establishment of what is called nanoethics (Allhoff et al.
2007). Apart from this, there has been a debate on the ethics of nuclear
deterrence (Finnis et al. 1988).
Obviously the
establishment of such new fields of ethical reflection is a response to social
and technological developments. Still, the question can be asked whether the
social demand is best met by establishing new fields of applied ethics. This
issue is in fact regularly discussed as new fields emerge. Several authors have
for example argued that there is no need for nanoethics because nanotechnology
does not raise any really new ethical issues (e.g., McGinn 2010). The alleged
absence of newness here is supported by the claim that the ethical issues
raised by nanotechnology are a variation on, and sometimes an intensification
of, existing ethical issues, but hardly really new, and by the claim that these
issues can be dealt with using the existing theories and concepts from moral
philosophy. For an earlier, similar discussion concerning the supposed new
character of ethical issues in computer engineering, see (Tavani 2002).
The new
fields of ethical reflection are often characterized as applied ethics, that
is, as applications of theories, normative standards, concepts and methods
developed in moral philosophy. For each of these elements, however, application
is usually not straightforward but requires a further specification or
revision. This is the case because general moral standards, concepts and
methods are often not specific enough to be applicable in any direct sense to
specific moral problems. ‘Application’ therefore often leads to new insights
which might well result in the reformulation or at least refinement of existing
normative standards, concepts and methods. In some cases, ethical issues in a
specific field might require new standards, concepts or methods. Beauchamp and
Childress for example have proposed a number of general ethical principles for
biomedical ethics (Beauchamp and Childress 2001). These principles are more
specific than general normative standards, but still so general and abstract
that they apply to different issues in biomedical ethics. In computer ethics,
existing moral concepts relating to, for example, privacy and ownership have been
redefined and adapted to deal with issues which are typical for the computer
age (Johnson 2003). New fields of ethical application might also require new methods
for, for example, discerning ethical issues that take into account relevant
empirical facts about these fields, like the fact that technological research
and development usually takes place in networks of people rather than by
individuals (Zwart et al. 2006).
The above
suggests that different fields of ethical reflection on specific technologies
might well raise their own philosophical and ethical issues. Even if this is
true, it is not clear whether this justifies the development of separate
subfields or even subdisciplines. It might well be argued that a lot can be
learned from interaction and discussion between these fields and a fruitful
interaction with the two other strands discussed above (cultural and political
approaches and engineering ethics). Currently, such interaction in many cases
seems absent, although there are of course exceptions.
3.3. Some recurrent themes in the ethics of technology
We now turn
to the description of some themes in the ethics of technology. We focus on a
number of general themes that provide an illustration of general issues in the
ethics of technology and the way these are treated.
3.3.1. Neutrality versus moral agency
One important
general theme in the ethics of technology is the question whether technology is
value-laden. Some authors have maintained that technology is value-neutral, in
the sense that technology is just a neutral means to an end, and accordingly
can be put to good or bad use (e.g., Pitt 2000). This view might have some
plausibility insofar as technology is considered to be just a bare physical
structure. Most philosophers of technology, however, agree that technological
development is a goal-oriented process and that technological artifacts by
definition have certain functions, so that they can be used for certain goals
but not, or only with much more difficulty or less effectively, for other goals. This
conceptual connection between technological artifacts, functions and goals
makes it hard to maintain that technology is value-neutral. Even if this point
is granted, the value-ladenness of technology can be construed in a host of
different ways. Some authors have maintained that technology can have moral
agency. This claim suggests that technologies can autonomously and freely ‘act’
in a moral sense and can be held morally responsible for their actions.
The debate
whether technologies can have moral agency started off in computer ethics
(Bechtel 1985; Snapper 1985; Dennett 1997; Floridi and Sanders 2004) but has
since broadened. Typically, the authors who claim that technologies (can) have moral agency redefine the notion of agency or its connection to human will and freedom (e.g., Latour 1993; Floridi and Sanders 2004; Verbeek 2011). A
disadvantage of this strategy is that it tends to blur the morally relevant
distinctions between people and technological artifacts. More generally, the
claim that technologies have moral agency sometimes seems to have become
shorthand for claiming that technology is morally relevant. This, however,
overlooks the fact that technologies can be value-laden in other ways than by having
moral agency (see e.g. Johnson 2006; Radder 2009; Illies and Meijers 2009;
Peterson and Spahn 2011). One might, for example, claim that technology enables
(or even invites) and constrains (or even inhibits) certain human actions and
the attainment of certain human goals and therefore is to some extent
value-laden, without claiming moral agency for technological artifacts.
3.3.2. Responsibility
Responsibility
has always been a central theme in the ethics of technology. The traditional
philosophy and ethics of technology, however, tended to discuss responsibility
in rather general terms and were rather pessimistic about the possibility for engineers to assume responsibility for the technologies they developed. Ellul,
for example, has characterized engineers as the high priests of technology, who
cherish technology but cannot steer it. Hans Jonas (1984) has argued that
technology requires an ethics in which responsibility is the central imperative
because for the first time in history we are able to destroy the earth and
humanity.
In
engineering ethics, the responsibility of engineers is often discussed in
relation to codes of ethics that articulate specific responsibilities of
engineers. Such codes of ethics stress three types of responsibilities of
engineers: (1) conducting the profession with integrity and honesty and in a
competent way, (2) responsibilities towards employers and clients and (3)
responsibility towards the public and society. With respect to the latter, most
US codes of ethics maintain that engineers ‘should hold paramount the safety,
health and welfare of the public’.
As has been
pointed out by several authors (Nissenbaum 1996; Johnson and Powers 2005;
Swierstra and Jelsma 2006), it may be hard to pinpoint individual
responsibility in engineering. The reason is that the conditions for the proper
attribution of individual responsibility that have been discussed in the
philosophical literature (like freedom to act, knowledge, and causality) are often
not met by individual engineers. For example, engineers may feel compelled to
act in a certain way due to hierarchical or market constraints, and negative
consequences may be very hard or impossible to predict beforehand. The
causality condition is often difficult to meet as well, due to the long chain from the research and development of a technology to its use and the many people
involved in this chain. Davis (2012) nevertheless maintains that despite such
difficulties individual engineers can and do take responsibility.
One issue
that is at stake in this debate is the notion of responsibility. Davis (2012),
and also for example Ladd (1991), argue for a notion of responsibility that
focuses less on blame and stresses the forward-looking or virtuous character of
assuming responsibility. But many others focus on backward-looking notions of
responsibility that stress accountability, blameworthiness or liability.
Zandvoort (2000), for example has pleaded for a notion of responsibility in
engineering that is more like the legal notion of strict liability, in which
the knowledge condition for responsibility is seriously weakened. Doorn (2012)
compares three perspectives on responsibility ascription in engineering—a
merit-based, a rights-based and a consequentialist perspective—and argues that
the consequentialist perspective, which applies a forward-looking notion of
responsibility, is most powerful in influencing engineering practice.
The
difficulty of attributing individual responsibility may lead to the Problem of
Many Hands (PMH). The term was coined by Dennis Thompson (1980) in an
article about the responsibility of public officials. The term is used to
describe problems with the ascription of individual responsibility in
collective settings. Doorn (2010) has proposed a procedural approach, based on
Rawls’ reflective equilibrium model, to deal with the PMH; other ways of
dealing with the PMH include the design of institutions that help to avoid it
or an emphasis on virtuous behavior in organizations (Van de Poel and Nihlén
Fahlquist 2012).
3.3.3. Design
In the last
decades, increasing attention has been paid not only to ethical issues that arise during the use of a technology but also to those that arise during its design. An important
consideration behind this development is the thought that during the design
phase technologies, and their social consequences, are still malleable whereas
during the use phase technologies are more or less given and negative social
consequences may be harder to avoid or positive effects harder to achieve.
In computer
ethics, an approach known as Value-Sensitive Design (VSD) has been developed to
explicitly address the ethical nature of design. VSD aims at integrating values
of ethical importance in engineering design in a systematic way (Friedman and
Kahn 2003). The approach combines conceptual, empirical and technical
investigations. There is also a range of other approaches aimed at including
values in design. ‘Design for X’ approaches in engineering aim at including
instrumental values (like maintainability, reliability and costs) but they also
include design for sustainability, inclusive design, and affective design (Holt
and Barnes 2010). Inclusive design aims at making designs accessible to the
whole population including, for example, handicapped people and the elderly
(Erlandson 2008). Affective design aims at designs that evoke positive emotions
in their users and so contribute to human well-being.
If one tries
to integrate values into design one may run into the problem of a conflict of
values. The safest car is, due to its weight, not likely to be the most
sustainable one. Here safety and sustainability conflict in the design of cars.
Traditional methods in which engineers deal with such conflicts and make
trade-offs between different design requirements include cost-benefit
analysis and multiple criteria analysis. Such methods are, however, beset with
methodological problems like those discussed in Section 2.4 (Franssen 2005;
Hansson 2007). Van de Poel (2009a) discusses various alternatives for dealing
with value conflicts in design including the setting of thresholds
(satisficing), reasoning about values, innovation and diversity.
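The contrast between such aggregating methods and the threshold-setting Van de Poel discusses can be made concrete with a small sketch. The following Python fragment uses invented scores, weights and thresholds purely for illustration; it implements the generic methods named above, not any particular author's proposal.

# Hypothetical car designs scored on two conflicting values (invented numbers).
designs = {
    "heavy_car": {"safety": 0.90, "sustainability": 0.40},
    "light_car": {"safety": 0.60, "sustainability": 0.80},
    "mid_car": {"safety": 0.75, "sustainability": 0.60},
}

# Multiple criteria analysis: rank designs by a weighted sum of their scores.
weights = {"safety": 0.6, "sustainability": 0.4}

def weighted_score(scores):
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(designs, key=lambda d: weighted_score(designs[d]), reverse=True)
print(ranking)  # ['heavy_car', 'mid_car', 'light_car']

# Satisficing: accept every design that meets a minimum threshold on each
# value separately, without aggregating the values onto a single scale.
thresholds = {"safety": 0.70, "sustainability": 0.50}
acceptable = [d for d, s in designs.items()
              if all(s[c] >= thresholds[c] for c in thresholds)]
print(acceptable)  # ['mid_car']

The weighted sum forces all values onto one scale, which is where methodological problems of the kind discussed in Section 2.4 arise; thresholds avoid this, at the price of possibly ruling out every design or failing to single one out.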
3.3.4. Technological risks
The risks of
technology are one of the traditional ethical concerns in the ethics of
technology. Risks raise not only ethical issues but also other philosophical issues, such as epistemological and decision-theoretic ones (Roeser et al.
2012).
Risk is
usually defined as the product of the probability of an undesirable event and
the effect of that event, although there are also other definitions around
(Hansson 2004b). In general it seems desirable to keep technological risks as
small as possible. The larger the risk, the larger either the likelihood or the impact of an undesirable event. Risk reduction therefore is an important
goal in technological development and engineering codes of ethics often
attribute a responsibility to engineers in reducing risks and designing safe
products. Still, risk reduction is not always feasible or desirable. It is
sometimes not feasible, because there are no absolutely safe products and
technologies. But even if risk reduction is feasible it may not be acceptable
from a moral point of view. Reducing risk often comes at a cost. Safer products
may be more difficult to use, more expensive or less sustainable. So sooner or
later, one is confronted with the question: what is safe enough? What makes a
risk (un)acceptable?
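The standard definition just mentioned can be written out as a simple product. For an undesirable event e with probability P(e) and a measure of its harm C(e),

    R(e) = P(e) × C(e).

On this definition a failure with an (invented) probability of 0.0001 per year and 100 fatalities has the same risk, 0.01 expected fatalities per year, as a failure with probability 0.01 and a single fatality. That such differently structured hazards come out as equally risky helps to explain both why other definitions are in circulation (Hansson 2004b) and why the question of acceptability cannot be read off from the magnitude of R alone.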
The process
of dealing with risks is often divided into three stages: risk assessment, risk
evaluation and risk management. Of these, the second is most obviously
ethically relevant. However, risk assessment already involves value judgments,
for example about which risks should be assessed in the first place
(Shrader-Frechette 1991). An important, and morally relevant, issue is also the
degree of evidence that is needed to establish a risk. In establishing a risk
on the basis of a body of empirical data one might make two kinds of mistakes.
One can establish a risk when there is actually none (type I error) or one can
mistakenly conclude that there is no risk while there actually is a risk (type
II error). Science traditionally aims at avoiding type I errors. Several
authors have argued that in the specific context of risk assessment it is often
more important to avoid type II errors (Cranor 1990; Shrader-Frechette 1991).
The reason for this is that risk assessment does not just aim at establishing scientific truth but has a practical aim, i.e., to provide the knowledge on the basis of which decisions can be made about whether it is desirable to reduce or
avoid certain technological risks in order to protect users or the public.
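The trade-off between the two error types can be made concrete with some invented numbers. Suppose a toxicity study tests the null hypothesis that a substance is harmless. The conventional significance level α = 0.05 caps the probability of a type I error: a harmless substance will be declared a risk in at most 5% of such studies. If the study is small, however, its power to detect a real hazard may be as low as, say, 0.4, in which case the probability of a type II error is β = 0.6. Demanding still stronger evidence before a risk counts as established pushes α down but, for a fixed study, pushes β further up; it is this asymmetry that Cranor (1990) and Shrader-Frechette (1991) argue is misplaced where the practical aim is protection rather than scientific truth.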
Risk
evaluation is carried out in a number of ways (see, e.g., Shrader-Frechette
1985). One possible approach is to judge the acceptability of risks by
comparing them to other risks or to certain standards. One could, for example,
compare technological risks with naturally occurring risks. This approach,
however, runs the danger of committing a naturalistic fallacy: naturally
occurring risks may (sometimes) be unavoidable but that does not necessarily
make them morally acceptable. More generally, it is often dubious to judge the
acceptability of the risk of technology A by comparing it to the risk of
technology B if A and B are not alternatives in a decision (for this and other
fallacies in reasoning about risks, see Hansson 2004a).
A second
approach to risk evaluation is risk-cost-benefit analysis, which is based on weighing the risks against the benefits of an activity. Different decision criteria can be applied if a (risk-)cost-benefit analysis is carried out (Kneese,
Ben-David, and Schulze 1983). According to Hansson (2003: 306), usually the
following criterion is applied: “… a risk is acceptable if and only if the
total benefits that the exposure gives rise to outweigh the total risks,
measured as the probability-weighted disutility of outcomes”.
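Hansson's criterion can be written out in expected-utility form; the notation below is a gloss, not Hansson's own. Let the exposure have possible undesirable outcomes o_1, …, o_n with probabilities p_1, …, p_n and disutilities d(o_1), …, d(o_n), and let B be the total benefit the exposure gives rise to. The criterion then says that the risk is acceptable if and only if

    B > p_1 · d(o_1) + … + p_n · d(o_n),

that is, if and only if the total benefits outweigh the probability-weighted disutility of the outcomes.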
A third
approach is to base risk acceptance on the consent of people who suffer the
risks after they have been informed about these risks (informed consent). A
problem of this approach is that technological risks usually affect a large
number of people at once. Informed consent may therefore lead to a ‘society of
stalemates’ (Hansson 2003: 300).
Several
authors have proposed alternatives to the traditional approaches of risk
evaluation on the basis of philosophical and ethical arguments.
Shrader-Frechette (1991) has proposed a number of reforms in risk assessment
and evaluation procedures on the basis of a philosophical critique of current
practices. Roeser (2012) argues for a role of emotions in judging the
acceptability of risks. Hansson has proposed the following alternative
principle for risk evaluation: ‘Exposure of a person to a risk is acceptable if
and only if this exposure is part of an equitable social system of risk-taking
that works to her advantage’ (Hansson 2003: 305). Hansson’s proposal introduces
a number of moral considerations in risk evaluation that are traditionally not
addressed or only marginally addressed: whether individuals profit from a risky activity, and whether the distribution of risks and benefits is fair.
Some authors
have criticized the focus on risks in the ethics of technology. One strand of
criticism argues that we often lack the knowledge to reliably assess the risks
of a new technology before it has come into use. We often do not know the
probability that something might go wrong, and sometimes we do not even know, or at least not fully, what might go wrong and what the possible negative consequences may be. To deal with this, some authors have proposed to conceive
of the introduction of new technology in society as a social experiment and
have urged us to think about the conditions under which such experiments are
morally acceptable (Martin and Schinzinger 2005, Van de Poel 2009b). Another
strand of criticism states that the focus on risks has narrowed the range of impacts of technology that are considered (Swierstra and Te Molder 2012).
Only impacts related to safety and health, which can be calculated as risks,
are considered, whereas ‘soft’ impacts, for example of a social or
psychological nature, are neglected, thereby impoverishing the moral evaluation
of new technologies.
Bibliography
·
Agricola, G. (1556) De
re metallica. Translated by H. C. Hoover and L. H. Hoover. London: The
Mining Magazine, 1912.
·
Akrich, M. (1992) The description of technical objects. In Shaping technology/building
society: studies in sociotechnical change, edited by W. Bijker and J. Law.
Cambridge, MA: MIT Press, pp. 205–224.
·
Allhoff, F., P. Lin, J. Moor, and J. Weckert, eds. (2007) Nanoethics: the ethical and social
implications of nanotechnology. Hoboken, NJ: Wiley-Interscience.
·
Arendt, H. (1958) The
human condition. Chicago: University of Chicago Press.
·
Ariew, A., R. Cummins, and M. Perlman, eds. (2002) Functions: new essays in the
philosophy of psychology and biology. New York/Oxford: Oxford University
Press.
·
Bacon, F. (1627) New
Atlantis: A worke vnfinished. In Bacon, Sylva
sylvarum: or a naturall historie, in ten centuries. London: William Lee.
·
Baum, R. J. (1980) Ethics
and engineering curricula. Hastings-on-Hudson: The Hastings Center.
·
Beauchamp, T. L. (2003) The nature of applied ethics. In A companion to applied ethics,
edited by R. G. Frey and C. H. Wellman. Oxford/Malden, MA: Blackwell, pp. 1–16.
·
Beauchamp, T. L., and J. F. Childress (2001) Principles of biomedical ethics.
5th edition. Oxford/New York: Oxford University Press.
·
Bechtel, W. (1985) Attributing responsibility to computer
systems. Metaphilosophy 16: 296–306.
·
Berg Olsen, J.-K., E. Selinger and S. Riis, eds. (2009) New waves in philosophy of
technology. Basingstoke/New York: Palgrave Macmillan.
·
Bimber, B. (1990) Karl Marx and the three faces of technological
determinism. Social Studies of
Science 20: 333–351.
·
Borgmann, A. (1984) Technology
and the character of contemporary life: a philosophical inquiry.
Chicago/London: University of Chicago Press.
·
Bucciarelli, L. L. (1994) Designing
engineers. Cambridge, Mass.: MIT Press.
·
Bunge, M. (1966) Technology as applied science. Technology and Culture 7: 329–347.
·
Butler, S. (1872) Erewhon.
London: Trubner and Co.
·
Cranor, C. F. (1990) Some moral issues in risk assessment. Ethics 101: 123–143.
·
Davis, M. (1998) Thinking
like an engineer: studies in the ethics of a profession. New York/Oxford:
Oxford University Press.
·
––– (2005) Engineering
ethics. Aldershot/Burlington, VT: Ashgate.
·
––– (2012) “Ain’t no one here but us social forces”:
Constructing the professional responsibility of engineers. Science and Engineering Ethics 18: 13–34.
·
Dennett, D. C. (1997) When HAL kills, who’s to blame? Computer
ethics. In Hal’s legacy:
2001’s computer as dream and reality, edited by D. G. Stork. Cambridge, MA:
MIT Press, pp. 351–365.
·
Dipert, R. R. (1993) Artifacts,
art works, and agency. Philadelphia: Temple University Press.
·
Doorn, N. (2010) A Rawlsian approach to distribute
responsibilities in networks. Science
and Engineering Ethics 16:
221–249.
·
––– (2012) Responsibility ascriptions in technology development
and engineering: three perspectives. Science
and Engineering Ethics 18:
69–90.
·
Ellul, J. (1964) The
technological society. Translated by J. Wilkinson. New York: Alfred A.
Knopf.
·
Erlandson, R. F. (2008) Universal
and accessible design for products, services, and processes. Boca Raton:
CRC Press.
·
Feenberg, A. (1999) Questioning
technology. London/New York: Routledge.
·
Finnis, J., J. Boyle, and G. Grisez (1988) Nuclear deterrence, morality and
realism. Oxford: Oxford University Press.
·
Floridi, L. (2010) The
Cambridge handbook of information and computer ethics. Cambridge: Cambridge
University Press.
·
Floridi, L., and J. W. Sanders (2004) On the morality of
artificial agents, Minds and
Machines 14: 349–379.
·
Fox, W. (2000) Ethics
and the built environment, Professional ethics. London/New York: Routledge.
·
Franssen, M. (2005) Arrow’s theorem, multi-criteria decision
problems and multi-attribute preferences in engineering design. Research in Engineering Design 16: 42–56.
·
Franssen, M. and L. L. Bucciarelli (2004) On rationality in
engineering design. Journal of
Mechanical Design, 126: 945–949.
·
Franssen, M., P. Kroes, T. A. C. Reydon and P. E. Vermaas, eds.
(2014) Artefact kinds:
ontology and the human-made world. Heidelberg/New York/Dordrecht/London:
Springer.
·
Friedman, B., and P. H. Kahn, Jr. (2003) Human values, ethics
and design. In Handbook of
human-computer interaction, edited by J. Jacko and A. Sears. Mahwah, NJ:
Lawrence Erlbaum, pp. 1177–1201.
·
Habermas, J. (1970) Technology and science as ideology. In Toward a rational society.
Boston, MA: Beacon Press, pp. 81–122.
·
Hansson, S. O. (2003) Ethical criteria of risk acceptance. Erkenntnis 59: 291–309.
·
––– (2004a) Fallacies of risk. Journal
of Risk Research 7: 353–360.
·
––– (2004b) Philosophical perspectives on risk. Technè 8: 10–35.
·
––– (2007) Philosophical problems in cost-benefit analysis. Economics and Philosophy 23: 163–183.
·
Harris, C. E., M. S. Pritchard, and M. J. Rabins (2008) Engineering ethics: concepts and
cases. 4th edition. Belmont, CA: Wadsworth.
·
Heidegger, M. (1977) The turning. In The question concerning technology
and other essays. New York: Harper and Row.
·
Herkert, J. R. (2001) Future directions in engineering ethics
research: microethics, macroethics and the role of professional societies. Science and Engineering Ethics 7: 403–414.
·
Hilpinen, R. (1992) Artifacts and works of art. Theoria 58: 58–82.
·
Holt, R., and C. Barnes (2010) Towards an integrated approach to
‘Design for X’: an agenda for decision-based DFX research. Research in Engineering Design 21: 123–136.
·
Houkes, W., and P. E. Vermaas (2010) Technical functions: on the use and
design of artefacts. Dordrecht/Heidelberg/London /New York: Springer.
·
Hughes, J. L., P. A. Kroes and S. D. Zwart (2007) A semantics
for means-end relations. Synthese 158:
207–231.
·
Ihde, D. (1990) Technology
and the lifeworld: from garden to earth. Bloomington: Indiana University
Press.
·
Ihde, D., and E. Selinger (2003) Chasing technoscience: matrix for
materiality. Bloomington: Indiana University Press.
·
Illies, C. and A. Meijers (2009) Artefacts without agency. The Monist 92: 420–440.
·
Jarvie, I. C. (1966) The social character of technological
problems: comments on Skolimowski’s paper. Technology
and Culture 7: 384–390.
·
Johnson, D. G. (2003) Computer ethics. In A companion to applied ethics,
edited by R. G. Frey and C. H. Wellman. Oxford/Malden, MA: Blackwell, pp.
608–619.
·
––– (2006) Computer systems: moral entities but not moral
agents. Ethics and Information
Technology 8: 195–205.
·
––– (2009) Computer
ethics. 4th edition. Upper Saddle River, NJ: Prentice Hall.
·
Johnson, D. G., and T. M. Powers (2005) Computer systems and
responsibility: a normative look at technological complexity. Ethics and Information Technology 7: 99–107.
·
Jonas, H. (1984) The
imperative of responsibility: in search of an ethics for the technological age.
Chicago/London: University of Chicago Press.
·
Kapp, E. (1877) Grundlinien
einer Philosophie der Technik: Zur Entstehungsgeschichte der Cultur aus neuen
Gesichtspunkten. Braunschweig: Westermann.
·
Keulartz, J., M. Korthals, M. Schermer, and T. Swierstra, eds.
(2002) Pragmatist ethics for a
technological culture. Dordrecht: Kluwer Academic.
·
Kneese, A. V., S. Ben-David, and W. D. Schulze (1983) The
ethical foundations of benefit-cost analysis. In Energy and the future, edited
by D. MacLean and P. G. Brown. Totowa, NJ: Rowman and Littlefield, pp. 59–74.
·
Kotarbinski, T. (1965) Praxiology:
an introduction to the sciences of efficient action. Oxford: Pergamon
Press.
·
Kroes, P. (2012) Technical
artefacts: creations of mind and matter. Dordrecht/Heidelberg/New
York/London: Springer.
·
Kroes, P., and A. Meijers, eds. (2006) The dual nature of
technical artifacts. Special issue of Studies
in History and Philosophy of Science 37:
1–158.
·
Kroes, P. A., M. Franssen and L. L. Bucciarelli (2009).
Rationality in engineering design. In Meijers (2009), pp. 565–600.
·
Kuhn, T. S. (1962) The
structure of scientific revolutions. Chicago: University of Chicago Press.
·
Ladd, J. (1991) Bhopal: an essay on moral responsibility and
civic virtue. Journal of
Social Philosophy 22: 73–91.
·
Latour, B. (1992) Where are the missing masses? In Shaping technology/building
society: studies in sociotechnical change, edited by W. Bijker and J. Law.
Cambridge, MA: MIT Press, pp. 225–258.
·
––– (1993) We
have never been modern. New York: Harvester Wheatsheaf.
·
Lloyd, G. E. R. (1973) Analogy in early Greek thought. In The dictionary of the history of
ideas, edited by P. P. Wiener, New York: Charles Scribner’s Sons, vol. 1
pp. 60–64.
·
Lloyd, P. A., and J. A. Busby (2003) “Things that went well—no
serious injuries or deaths”: Ethical reasoning in a normal engineering design
process. Science and
Engineering Ethics 9:
503–516.
·
Mahner, M., and M. Bunge (2001) Function and functionalism: a
synthetic perspective. Philosophy
of Science 68: 73–94.
·
Martin, M. W., and R. Schinzinger (2005) Ethics in engineering. 4th
edition. Boston, MA: McGraw-Hill.
·
McGinn, R. E. (2010) What’s different, ethically, about
nanotechnology? Foundational questions and answers. Nanoethics 4: 115–128.
·
Meijers, A., ed. (2009) Philosophy
of technology and engineering sciences (Handbook
of the philosophy of science, volume 9). Amsterdam: North-Holland.
·
Millikan, R. G. (1999) Wings, spoons, pills, and quills: a
pluralist theory of function. The
Journal of Philosophy 96:
191–206.
·
Mitcham, C. (1994) Thinking
through technology: the path between engineering and philosophy. Chicago:
University of Chicago Press.
·
Newman, W. R. (1989) Technology and alchemical debate in the
late Middle Ages. Isis 80, 423–445.
·
––– (2004) Promethean
ambitions: alchemy and the quest to perfect nature. Chicago: University of
Chicago Press.
·
Niiniluoto, I. (1993) The aim and structure of applied research. Erkenntnis 38: 1–21.
·
Nissenbaum, H. (1996) Accountability in a computerized society. Science and Engineering Ethics 2:
25–42.
·
Peterson, M., and A. Spahn (2011) Can technological artefacts be
moral agents? Science and
Engineering Ethics 17:
411–424.
·
Pitt, J. C. (2000) Thinking
about technology: foundations of the philosophy of technology. New York:
Seven Bridges Press.
·
Polanyi, M. (1958) Personal
knowledge: towards a post-critical philosophy. London: Routledge and Kegan
Paul.
·
Preston, B. (1998) Why is a wing like a spoon? A pluralist
theory of function. The
Journal of Philosophy 95:
215–254.
·
––– (2003) Of marigold beer: a reply to Vermaas and Houkes. British Journal for the Philosophy
of Science 54: 601–612.
·
––– (2012) A
philosophy of material culture: action, function, and mind. New York/Milton
Park: Routledge.
·
Radder, H. (2009) Why technologies are inherently normative. In
Meijers (2009), pp. 887–921.
·
Roeser, S. (2012) Moral emotions as guide to acceptable risk. In
Roeser, Hillerbrand, Peterson and Sandin (2012), pp. 819–832.
·
Roeser, S., R. Hillerbrand, M. Peterson and P. Sandin, eds.
(2012) Handbook of risk
theory: epistemology, decision theory, ethics, and social implications of risk.
Dordrecht/Heidelberg/London/New York: Springer.
·
Ryle, G. (1949) The concept of mind. London: Hutchinson.
·
Scharff, R. C., and V. Dusek, eds. (2003) Philosophy of technology: the
technological condition. Malden, MA/Oxford: Blackwell.
·
Schummer, J. (2001) Aristotle on technology and nature. Philosophia Naturalis 38: 105–120.
·
Sclove, R. E. (1995) Democracy
and technology. New York: The Guilford Press.
·
Sellars, W. (1962) Philosophy and the scientific image of man.
In Frontiers of science and
philosophy, edited by R. Colodny, Pittsburgh: University of Pittsburgh
Press, pp. 35–78.
·
Sherlock, R., and J. D. Morrey, eds. (2002) Ethical issues in biotechnology.
Lanham: Rowman and Littlefield.
·
Shrader-Frechette, K. S. (1985) Risk analysis and scientific
method: methodological and ethical problems with evaluating societal hazards.
Dordrecht: Reidel.
·
––– (1991) Risk
and rationality: philosophical foundations for populist reform. Berkeley
etc.: University of California Press.
·
Simon, H. A. (1957) Models
of man, social and rational: mathematical essays on rational human behavior in
a social setting. New York: John Wiley.
·
––– (1969) The
sciences of the artificial. Cambridge, Mass./London: MIT Press.
·
––– (1982) Models
of bounded rationality. Cambridge, Mass./London: MIT Press.
·
Skolimowski, H. (1966) The structure of thinking in technology. Technology and Culture 7: 371–383.
·
Snapper, J. W. (1985) Responsibility for computer-based errors. Metaphilosophy 16: 289–295.
·
Soavi, M. (2009) Realism and artifact kinds. In Functions in biological and artificial worlds: comparative philosophical perspectives, edited by U. Krohs and P. Kroes. Cambridge, MA: MIT Press, pp. 185–202.
·
Suh, N. P. (2001) Axiomatic
design: advances and applications. Oxford/New York: Oxford University
Press.
·
Swierstra, T., and J. Jelsma (2006) Responsibility without
moralism in techno-scientific design practice. Science, Technology and Human
Values 31: 309–332.
·
Swierstra, T., and H. te Molder (2012) Risk and soft impacts. In
Roeser, Hillerbrand, Peterson and Sandin (2012), pp. 1049–1066.
·
Tavani, H. T. (2002) The uniqueness debate in computer ethics:
what exactly is at issue, and why does it matter? Ethics and Information Technology 4: 37–54.
·
Thomasson, A. (2003) Realism and human kinds. Philosophy and Phenomenological
Research 67: 580–609.
·
––– (2007) Artifacts and human concepts. In Creations of the mind: essays on
artifacts and their representation, edited by E. Margolis and S. Laurence,
Oxford: Oxford University Press, pp. 52–73.
·
Thompson, D. F. (1980) Moral responsibility and public
officials: the problem of many hands. American Political Science Review 74: 905–916.
·
Thompson, P. B. (2007) Food
biotechnology in ethical perspective. 2nd ed. Dordrecht: Springer.
·
Van den Hoven, M. J., and J. Weckert, eds. (2008) Information technology and moral
philosophy. Cambridge/New York: Cambridge University Press.
·
Van de Poel, I. (2009a) Values in engineering design. In Meijers
(2009), pp. 973–1006.
·
––– (2009b) The introduction of nanotechnology as a societal
experiment. In Technoscience
in progress: managing the uncertainty of nanotechnology, edited by S.
Arnaldi, A. Lorenzet and F. Russo, Amsterdam: IOS Press, pp. 129–142.
·
Van de Poel, I., and J. Nihlén Fahlquist (2012). Risk and
responsibility. In Roeser, Hillerbrand, Peterson and Sandin (2012), pp.
877–907.
·
Van der Pot, J. H. J. (1994) Steward
or sorcerer’s apprentice? The evaluation of technical progress: a systematic
overview of theories and opinions. 2 vols. Delft: Eburon. (2nd ed. 2004
under the title Encyclopedia
of technological progress: a systematic overview of theories and opinions.)
·
Verbeek, P. P. (2005) What
things do: philosophical reflections on technology, agency, and design.
Translated by R. P. Crease. University Park, PA: Penn State University Press.
·
––– (2011) Moralizing
technology: understanding and designing the morality of things.
Chicago/London: The University of Chicago Press.
·
Vermaas, P. E., and Houkes, W. (2003) Ascribing functions to
technical artifacts: a challenge to etiological accounts of functions. British Journal for the Philosophy
of Science 54: 261–289.
·
Vermaas, P. E., P. Kroes, I. van de Poel, M. Franssen and W.
Houkes (2011). A philosophy of
technology: from technical artefacts to sociotechnical systems. S.l.:
Morgan & Claypool.
·
Vincenti, W. G. (1990) What
engineers know and how they know it: analytical studies from aeronautical
history. Baltimore, MD/London: Johns Hopkins University Press.
·
Vitruvius (first century BC) The
ten books on architecture. Translated by M. H. Morgan. Cambridge, MA:
Harvard University Press, 1914.
·
Volti, R. (2009) Society
and technological change. 6th edition. New York: Worth Publications.
·
Von Wright, G. H. (1963) Norm
and action. London: Routledge and Kegan Paul.
·
Weckert, J. (2007) Computer
ethics. Aldershot/Burlington, VT: Ashgate.
·
Wiggins, D. (1980) Sameness
and substance. Oxford: Blackwell.
·
Winner, L. (1980) Do artifacts have politics? Daedalus 109: 121–136.
·
––– (1983) Technè and politeia: the technical constitution of
society. In Philosophy and
technology (Boston studies in the philosophy of science, vol. 80), edited by
P. T. Durbin and F. Rapp. Dordrecht/Boston/Lancaster: D. Reidel, pp. 97–111.
·
Zandvoort, H. (2000) Codes of conduct, the law, and
technological design and development. In The
empirical turn in the philosophy of technology, edited by P. Kroes and A.
Meijers. Amsterdam etc.: JAI/Elsevier, pp. 193–205.
·
Zwart, S. D., I. van de Poel, H. van Mil and M. Brumsen (2006) A
network approach for distinguishing ethical issues in research and development. Science and Engineering Ethics 12: 663–684.