Manuel Velasquez, Claire Andre, Thomas Shanks, S.J., and Michael J. Meyer
Cultures differ widely in their moral practices. As anthropologist Ruth Benedict illustrates in Patterns of Culture, diversity is evident even on those matters of morality where we would expect to agree.
Other anthropologists point to a range of practices considered morally acceptable in some societies but condemned in others, including infanticide, genocide, polygamy, racism, sexism, and torture. Such differences may lead us to question whether there are any universal moral principles or whether morality is merely a matter of "cultural taste."

Differences in moral practices across cultures raise an important issue in ethics -- the concept of "ethical relativism." Ethical relativism is the theory that morality is relative to the norms of one's culture. That is, whether an action is right or wrong depends on the moral norms of the society in which it is practiced. The same action may be morally right in one society and morally wrong in another. For the ethical relativist, there are no universal moral standards -- standards that can be applied to all peoples at all times. The only moral standards against which a society's practices can be judged are its own. If ethical relativism is correct, there can be no common framework for resolving moral disputes or for reaching agreement on ethical matters among members of different societies.

Most ethicists reject the theory of ethical relativism. Some claim that while the moral practices of societies may differ, the fundamental moral principles underlying these practices do not. For example, in some societies, killing one's parents after they reached a certain age was common practice, stemming from the belief that people were better off in the afterlife if they entered it while still physically active and vigorous. While such a practice would be condemned in our society, we would agree with these societies on the underlying moral principle -- the duty to care for parents. Societies, then, may differ in their application of fundamental moral principles but agree on the principles.

It is also argued that some moral beliefs may be culturally relative whereas others are not. Certain practices, such as customs regarding dress and decency, may depend on local custom, whereas other practices, such as slavery, torture, or political repression, may be governed by universal moral standards and judged wrong despite the many other differences that exist among cultures. Simply because some practices are relative does not mean that all practices are relative.

Other philosophers criticize ethical relativism because of its implications for individual moral beliefs. These philosophers assert that if the rightness or wrongness of an action depends on a society's norms, then it follows that one must obey the norms of one's society, and that to diverge from those norms is to act immorally. This means that if I am a member of a society that believes that racist or sexist practices are morally permissible, then I must accept those practices as morally right. But such a view promotes social conformity and leaves no room for moral reform or improvement in a society. Furthermore, members of the same society may hold different views on practices. In the United States, for example, a variety of moral opinions exists on matters ranging from animal experimentation to abortion. What constitutes right action when social consensus is lacking?

Perhaps the strongest argument against ethical relativism comes from those who assert that universal moral standards can exist even if some moral practices and beliefs vary among cultures.
In other words, we can acknowledge cultural differences in moral practices and beliefs and still hold that some of these practices and beliefs are morally wrong. The practice of slavery in pre-Civil War U.S. society and the practice of apartheid in South Africa are wrong despite the beliefs of those societies. The treatment of the Jews in Nazi society is morally reprehensible regardless of the moral beliefs of Nazi society. For these philosophers, ethics is an inquiry into right and wrong through a critical examination of the reasons underlying practices and beliefs. As a theory for justifying moral practices and beliefs, ethical relativism fails to recognize that some societies have better reasons for holding their views than others.

But even if the theory of ethical relativism is rejected, it must be acknowledged that the concept raises important issues. Ethical relativism reminds us that different societies have different moral beliefs and that our beliefs are deeply influenced by culture. It also encourages us to explore the reasons underlying beliefs that differ from our own, while challenging us to examine our reasons for the beliefs and values we hold.

Shannon Vallor, Irina Raicu, Brian Green
Conceptual frameworks drawn from theories that have shaped the study of ethics over centuries can help us recognize and describe ethical issues when we encounter them. In this way, a basic grasp of ethical theory can work like a field guide for practical ethical concerns. Many ethical theories can frame our thinking; here we focus on several that are widely used by both academic and professional ethicists, including tech ethicists in particular. However, these are ethical theories developed largely in the context of Western philosophical thought; our selection must not be taken as a license to ignore the rich range of ethical frameworks available in other cultural contexts. A brief discussion of such frameworks is included later in this guide, under "Global Ethical Perspectives."

Each section below includes a brief overview of one ethical theory, examples that reflect its relevance to technologists, and a list of helpful questions that technologists/design teams can ask to provoke ethical reflection through that lens.

The Rights Perspective

Overview

Rules-based systems of ethics range from the most general, such as the ‘golden rule,’ to theories such as that of W.D. Ross (1877-1971), which offers a list of seven pro tanto moral duties (duties that can be overridden only by other, morally weightier duties): fidelity; reparation; gratitude; justice; beneficence; non-injury; and self-improvement. Even more widely used by ethicists is the categorical imperative, a single deontological rule presented by Immanuel Kant (1724-1804) in three formulations. The two most commonly used in practical ethics are the formula of the universal law of nature and the formula of humanity. The first tells us that we should act only upon principles/maxims that we would be willing to impose upon every rational agent, as if the rule behind our practice were to become a universal law of nature that everyone, everywhere, had to follow. The second, the formula of humanity, states that one should always treat other persons as ends in themselves (dignified beings to be respected), never merely as means to an end (i.e., never as mere tools/objects to be manipulated for my purposes). So, benefiting from one’s action toward another person is permissible only if that person’s own autonomy and dignity are not violated in the process, and if the person being treated as a means would consent to such treatment as part of their own autonomously chosen ends. The ethical issues and concerns frequently highlighted by looking through this ethical lens include, but are not limited to, those illustrated in the examples below.
Examples of Rights-Related Ethical Issues in Tech Practice

In what way does a virtual banking assistant that is deliberately designed to deceive users (for example, by actively representing itself as a human) violate a moral rule or principle such as the Kantian imperative never to treat a person as a mere means to an end? Would people be justified in feeling wronged by the bank upon discovering the deceit, even if they had not been financially harmed by the software? Does a participant in a commercial financial transaction have a moral right not to be lied to, even if a legal loophole means there is no legal right violated here?

Looking through the rights lens means anticipating contexts in which violations of autonomy, dignity, or trust might show up, regardless of whether there was malign intent or whether material harms were done to people’s interests. Many violations of such duties in the tech sector can be avoided by seeking meaningful and ongoing consent from those who are likely to be impacted (not necessarily just the end user) and offering thorough transparency about the design, terms, and intentions of the technology. However, it is important to remember that concerns about rights often need to be balanced with other kinds of concerns. For example, autonomy is not an unconditional good (technologists should not empower users to do anything they want). When user autonomy poses unacceptable moral risks, this value needs to be balanced with appropriately limited moral paternalism (which is also unethical in excess). An excellent example of this is the increasingly standard design requirement for strong passwords.

Rights-Related Questions for Technologists that Illuminate the Ethical Landscape:
The Justice/Fairness Perspective

Overview

Aristotle (384-322 B.C.) argued that equals should be treated equally; however, that leaves open the question of which criteria should be used in determining whether people are “equal.” Aristotle, for example, unjustly excluded women and non-Greeks from those entitled to equal treatment. Philosophers have since argued that justice demands that we consider, as part of our analysis, elements such as need, contribution, and the broad impacts of social structure on individuals and groups.

The justice lens also encompasses retributive justice. “An eye for an eye” was a call for such justice; current laws that aim to punish criminal wrongdoing also reflect this notion of justice. A third type, compensatory justice, is reflected in efforts to compensate injured people (whether or not the injury they suffered was inflicted intentionally), or to return lost or stolen property to its rightful owner.

Justice and fairness also demand impartiality and the avoidance of conflicts of interest. Philosopher John Rawls (1921-2002), for example, argued that a fair and just society should be organized under a “veil of ignorance” about characteristics over which people have no control and which should not determine their roles and opportunities in society—characteristics such as age, gender, race, etc. Unable to tip the scales on behalf of their own characteristics, he argued, people would then create a more egalitarian and fair society.

Like the rights perspective, the justice lens is related to notions of human dignity. It focuses more, however, on the relationships among people—on the conditions and processes required in order to implement societal respect for human dignity. The ethical issues and concerns frequently highlighted by looking through this ethical lens include, but are not limited to, those illustrated in the examples below.
Examples of Justice/Fairness-Related Ethical Issues in Tech Practice

How does design impact the accessibility of various products for people with disabilities, or for people who can only afford older, cheaper technology tools? How should facial recognition tools (or other algorithmic tools that are typically trained on unrepresentative datasets and are therefore far less accurate for some people and groups than for others) be used, if at all, and in which contexts, if any?

Looking through this lens enables us to see that technologies often distribute benefits and harms unevenly, and frequently exacerbate or perpetuate preexisting unfair societal conditions. It also stresses the need for technologists to consult stakeholders who may be very differently situated from themselves, in order to truly understand (rather than assume) the potential benefits of a product, as well as to be made aware of harms that they might have otherwise missed.

Justice/Fairness-Related Questions for Technologists that Illuminate the Ethical Landscape:
The Utilitarian Perspective

Overview

Utilitarianism is attractive to many engineers because in theory it implies the ability to quantify the ethical analysis and select for the optimal outcome (generating the greatest overall happiness with the least suffering). In practice, however, this is often an intractable or ‘wicked’ calculation, since the effects of a technology tend to spread out indefinitely in time (should we never have invented the gasoline engine, or plastic, given the now devastating consequences of these technologies for the planetary environment and its inhabitants?) and across populations (will the invention of social media platforms turn out to be a net positive or negative for humanity, once we take into account all future generations and all the users around the globe yet to experience their consequences?).

The requirements to consider equally the welfare of all affected stakeholders, including those quite distant from us, and to consider both long-term and unintended effects (where foreseeable), make utilitarian ethics a morally demanding standard. In this way, utilitarian ethics does not equate to or even closely resemble common forms of cost-benefit analysis in business, where only physical and/or economic benefits are considered, and often only in the short term, or for a narrow range of stakeholders. Moral consequences go far beyond economic good and harm. They include not only physical but also psychological, emotional, cognitive, moral, institutional, environmental, and political well-being, injury, or degradation. The ethical issues and concerns frequently highlighted by looking through this ethical lens include, but are not limited to, those illustrated in the example below.
Example of Utilitarian Ethical Issues in Tech Practice

Technologists have often falsely assumed that people’s technological choices are reliably correlated with their increasing happiness and welfare. This is not a reasonable assumption, because people make themselves unhappy with their choices all the time, and we are subject to any number of mental compulsions that drive us to choose actions that promise happiness but will not deliver it, or will deliver only a very short-term, shallow pleasure while depriving us of a more lasting, substantive kind. Leading device manufacturers now increasingly admit this by building tools to fight tech addiction.

Related Questions for Technologists that Illuminate the Ethical Landscape:
The Common Good Perspective

Overview

Another approach to ethical thinking is to focus on the common good, which highlights shared social institutions, communities, and relationships (instead of utilitarianism’s concentration on the aggregate welfare/happiness of individuals). The distinction is subtle but important. Utilitarians consider likely injuries or benefits to discrete individuals, then sum those up to measure aggregate social impact. The common good lens, in contrast, focuses on the impact of a practice on the health and welfare of communities or groups of people, up to and including all of humanity, as functional wholes. Welfare as measured here goes beyond personal happiness to include things like political and public health, security, liberty, sustainability, education, or other values deemed critical to flourishing community life. Thus a technology that might seem to satisfy a utilitarian (by making most individuals personally happy, say through neurochemical intervention) might fail the common good test if the result was a loss of community life and health (for example, if those people spent their lives detached from others—like addicts drifting in a technologically-induced state of personal euphoria).

Common good ethicists will also look at the impact of a practice on morally significant institutions that are critical to the life of communities—for example, on government, the justice system, or education, or on supporting ecosystems and ecologies. Common good frameworks help us avoid notorious tragedies of the commons, where rationally seeking to maximize good consequences for every individual leads to damage or destruction of the system that those individuals depend upon to thrive. Common good frameworks also share commonalities with cultural perspectives in which promoting social harmony and stable functioning may be seen as more ethically important than maximizing the autonomy and welfare of isolated individuals.

For practical purposes, it can be helpful to view the utilitarian and common good approaches as complementary lenses that provoke us to consider both individual and communal welfare, even when these are in tension. Using the utilitarian and common good lenses while doing tech ethics is like watching birds while using special glasses that help us zoom out from an individual bird to survey a dynamic network (the various members of a moving flock), and then try to project the overall direction of the flock’s travel as best we can (is this project overall going to make people’s lives better?), while still noticing any particular members that are in special peril (are some people going to suffer greatly for a trivial gain for the rest?). The ethical issues and concerns frequently highlighted by looking through this ethical lens include, but are not limited to, those illustrated in the example below.
Example of Common Good Ethical Issues in Tech Practice

Technology use, data storage, and the energy-intensive training of AI models impact the environment in ways that implicate the common good. In addition, with everything from weapons to pacemakers now being connected to the internet, cybersecurity has become one of the conditions required for the common good.

Related Questions for Technologists that Illuminate the Ethical Landscape:
The Virtue Ethics Perspective

Overview

Aristotle (384-322 B.C.) stated that ethics cannot be approached like mathematics; there is no algorithm for ethics, and moral life is not a well-defined, closed problem for which one could design a single, optimal solution. It is an endless task of skillfully navigating a messy, open-ended, constantly shifting social landscape in which we must find ways to maintain and support human flourishing with others, and in which novel circumstances and contexts are always emerging that call upon us to adapt our existing ethical heuristics, or invent new, bespoke ones on the spot.

Virtue ethics does, however, offer some guidance to structure the ethical landscape. It asks us to identify those specific virtues—stable traits of character or dispositions—that morally excellent persons in our context of action consistently display, and then to identify and promote the habits of action that produce and strengthen those virtues (and/or suppress or weaken the opposite vices). So, for example, if honesty is a virtue in designers and engineers (and a tendency to falsify data or exaggerate results is a vice), then we should think about what habits of design practice tend to promote honesty, and encourage those. As Aristotle is often paraphrased, “we are what we repeatedly do.” We are not born honest or dishonest, but we become one or the other only by forming virtuous or vicious habits with respect to the truth.

Virtue ethics is also highly context-sensitive: each moral response is unique, and even our firmest moral habits must be adaptable to individual situations. For example, a soldier who has a highly developed virtue of courage will in one context run headlong into a field of open fire while others hang back in fear; but there are other contexts in which a soldier who did that would not be courageous, but rash and foolish, endangering the whole unit. The virtuous soldier reliably sees the difference, and acts accordingly—that is, wisely, finding the appropriate ‘mean’ between foolish extremes (in this case, the vices of cowardice and rashness), where those are always relative to the context (an act that is rash in one context may be courageous in another).

While other ethical lenses focus our moral attention outward, onto our future technological choices and/or their consequences, virtue ethics reminds us to also reflect inward—on who we are, who we want to become as morally expert technologists, and how we can get there. It also asks us to describe the model of moral excellence in our field that we are striving to emulate, or even surpass. What are the habits, skills, values, and character traits of an exemplary engineer, an exemplary designer, or an exemplary coder?

Virtue ethics also incorporates a unique element of moral intelligence, called practical wisdom, that unites several faculties: moral perception (awareness of salient moral facts and events), moral emotion (feeling the appropriate moral responses to the situation), and moral imagination (envisioning and skillfully inventing appropriate, well-calibrated moral responses to new situations and contexts). The ethical issues and concerns frequently highlighted by looking through this ethical lens include, but are not limited to, those illustrated in the examples below.
Examples of Virtue-Related Ethical Issues in Tech Practice

There is probably no better conceptual lens than virtue ethics for illuminating the problematic effects of the attention economy and digital media. It helps to explain why we have seen so many pernicious moral effects of this situation even though the individual acts of social media companies appeared morally benign; no individual person was wronged by having access to news articles on various social media platforms, and even the individual consequences didn’t seem so destructive initially. What has happened, however, is that our habits have been gradually altered by new media practices not designed to sustain the same civic function.

Not all technological changes must degrade our virtues, of course. Consider the ethical prospects of virtual-reality (VR) technology, which are still quite open. As VR environments become commonplace and easy to access, might people develop stronger virtues of empathy, civility, care, and moral perspective by experiencing others’ circumstances in a more immersive, realistic way? Or will they instead become even more numb and detached, walking through others’ lives like players in a video game? Most important is this question: what VR design choices would make the first, ethically desirable outcome more likely than the second, ethically undesirable one?

Related Questions for Technologists that Illuminate the Ethical Landscape:
The Care Ethics Perspective

Overview

Proponents of care ethics have argued that a focus on high-level principles and abstraction might ignore the role of both embodiment and emotion in determining the right thing to do in particular circumstances. They have also argued that the care ethics approach focuses on empathy and compassion, rather than on the (unrealizable) goal of complete impartiality. “Moralities built on the image of the independent, autonomous, rational individual largely overlook the reality of human dependence and the morality for which it calls,” argues the philosopher Virginia Held. “The ethics of care attends to this central concern of human life and delineates the moral values involved…” She adds, however, that “we need an ethics of care, not just care itself. The various aspects and expressions of care and caring relations need to be subjected to moral scrutiny and evaluated, not just observed and described.”

Care ethics is often associated with feminist ethics, though some have pushed back against care ethics as a “feminization” of universal concerns. In turn, some care ethicists might point out that empathy and compassion are not gender-specific, and that multiple cultural traditions point to care ethics, making the case, for example, that one should meet the needs of family members ahead of those who are unrelated, or consider the needs of members of one’s town ahead of the needs of strangers farther away. While care ethics is sometimes narrowly delineated in this way, emphasizing relational proximity and intimacy, some philosophers have argued for broader, more expansive versions. Caring for the world might be part of the care for those close to us; for example, caring for the environment can help to protect the health of those with whom we have direct caring relationships, as well as others.

An article in The Stanford Encyclopedia of Philosophy notes that “the ethic of care bears so many important similarities to virtue ethics that some authors have argued that a feminist ethic of care just is a form or a subset of virtue ethics”; however, this observation ignores care ethics’ particular emphasis on relationships as one key aspect that determines whether a particular action is virtuous. The ethical issues and concerns frequently highlighted by looking through this ethical lens include, but are not limited to, those illustrated in the example below.
Example of Care Ethics Issues in Tech Practice

Embodiment is also a key concern of care ethics—and one that will be deeply impacted by new developments in virtual reality. Will avatars in the “metaverse” (should that technological ecosystem come to be as widespread as some predict) change our understanding of our own bodies’ limitations—and others’? Will haptic controls ever be able to match the experience of the caress of a mother’s hand on a child’s forehead?

The increasing use of robots for care work also takes on new dimensions when evaluated through the lens of care ethics. On one hand, robots might take over at least parts of the caring relationships among humans that care ethics values; on the other, they might allow humans to focus more on the relationships themselves and less on the physical strain that caring can entail.

Related Questions for Technologists that Illuminate the Ethical Landscape:
Global Ethical Perspectives

There is no way to offer an ‘overview’ of the full range of ethical perspectives and frameworks that the human family has developed over the millennia since our species became capable of explicit ethical reflection. What matters is that technologists remain vigilant and humble enough to remember that whatever ethical frameworks may be most familiar or ‘natural’ to them and their colleagues, these amount to only a tiny fraction of the ways of seeing the ethical landscape that their potential users and impacted communities may adopt. This does not mean that practical ethics is impossible; on the contrary, it is a fundamental design responsibility that we cannot make go away. But it is helpful to remember that the moral perspectives in the conference room/lab/board meeting are never exhaustive, and that they are likely to be biased in favor of the moral perspectives most familiar to whoever happens to occupy the dominant cultural majority in the room, company, or society. Yet the technologies we build don’t stay in our room, our company, our community, or our nation. New technologies seep outward into the world and spread their effects to peoples and groups who, all too often, don’t get a fair say in the moral values that those technologies are designed to reinforce or undermine in their communities.

And yet, we cannot design in a value-neutral way—that is impossible, and the illusion that we can do so is arguably more dangerous than knowingly imposing values on others without their consent, because it does the same thing—just without the accountability. Technologists will design with ethical values in mind; the only question is whether they will do so in ways that are careful, reflective, explicit, humble, transparent, and responsive to stakeholder feedback, or in ways that are arrogant, opaque, and irresponsible. While moral and intellectual humility requires us to admit that our ethical perspective is always incomplete and subject to cognitive and cultural blind spots, the processes of ethical feedback and iteration can be calibrated to invite a more diverse/pluralistic range of ethical feedback as our designs spread to new regions, cultures, and communities.

Examples of Global Ethical Issues in Tech Practice

Consider, for example, a national system of algorithmic surveillance and social scoring such as China’s social credit system. Many Western ethicists view this system as profoundly dystopic and morally dangerous. In China, however, some embrace it, within a cultural framework that values social harmony as the highest moral good. How should technologists respond to invitations to assist China in this project, or to assist other nations who might want to follow China’s lead? What ethical values should guide them? Should they simply accede to the local value system (especially in an authoritarian society in which people might be reluctant to express their real values)? Or should technologists be guided by their own personal values, the values of the nation in which they reside, or the ethical principles set out by their company?

This example illustrates the depth of the ethical challenge presented by global conflicts of ethical vision. But notice that there is no way to evade the challenge. A decision must be made, and it will not be ethically neutral no matter how it gets made. A decision to ‘follow the profits’ and ‘put ethics aside’ is not a morally neutral decision; it is one that assigns profit as the highest or sole value. That in itself is a morally laden choice for which one is responsible, especially if it leads to harm.
It may be helpful, where possible, to seek ethical dialogue across cultural boundaries and to begin to find common ground with technologists in other cultural spaces. Such dialogues will not always produce ethical consensus, but they can help give shape to the conversation we must begin to have about the future of global human flourishing in a technological age, one in which technology increasingly links our fortunes together.

Questions for Technologists that Illuminate the Global Ethical Landscape:
References and Further Reading

Aristotle [350 B.C.E.] (2014). Nicomachean Ethics: Revised Edition. Trans. Roger Crisp. Cambridge: Cambridge University Press.

Asaro, P. M. (2019). "AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care." IEEE Technology and Society Magazine, vol. 38, no. 2, pp. 40-53. doi: 10.1109/MTS.2019.2915154

Benjamin, Ruha (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press.

Benner, P. (1997). "A Dialogue Between Virtue Ethics and Care Ethics." Theoretical Medicine and Bioethics 18, 47-61. https://doi.org/10.1023/A:1005797100864

Broussard, Meredith (2018). Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press.

D’Ignazio, Catherine, and Klein, Lauren F. (2020). Data Feminism. Cambridge, MA: MIT Press.

Ess, Charles (2014). Digital Media Ethics: Second Edition. Cambridge: Polity Press.

Frischmann, Brett, and Selinger, Evan (2018). Re-Engineering Humanity. Cambridge: Cambridge University Press.

Gray, J., & Witt, A. (2021). "A feminist data ethics of care for machine learning: The what, why, who and how." First Monday, 26(12). https://doi.org/10.5210/fm.v26i12.11833

Held, Virginia (2006). The Ethics of Care: Personal, Political, and Global. Oxford: Oxford University Press.

Jasanoff, Sheila (2016). The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton.

Kagan, Shelly (1989). The Limits of Morality. Oxford: Clarendon Press.

Kant, Immanuel [1785] (1997). Groundwork of the Metaphysics of Morals. Trans. Mary Gregor. Cambridge: Cambridge University Press.

Lin, Patrick, Abney, Keith, and Bekey, George, eds. (2012). Robot Ethics. Cambridge, MA: MIT Press.

Lin, Patrick, Abney, Keith, and Jenkins, Ryan, eds. (2017). Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. New York: Oxford University Press.

Mill, John Stuart [1863] (2001). Utilitarianism. Indianapolis: Hackett.

Noble, Safiya (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Noddings, Nel (2013). Caring: A Relational Approach to Ethics and Moral Education, Second Edition, Updated. Berkeley and Los Angeles: University of California Press.

Robison, Wade L. (2017). Ethics Within Engineering: An Introduction. London: Bloomsbury.

Ross, William David (1930). The Right and the Good. London: Oxford University Press.

Sandler, Ronald (2014). Ethics and Emerging Technologies. New York: Palgrave Macmillan.

Shariat, Jonathan, and Saucier, Cynthia Savard (2017). Tragic Design: The True Impact of Bad Design and How to Fix It. Sebastopol: O’Reilly Media.

Tavani, Herman (2016). Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing, Fifth Edition. Hoboken: Wiley.

Vallor, Shannon (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press.

Van de Poel, Ibo, and Royakkers, Lambèr (2011). Ethics, Technology, and Engineering: An Introduction. Hoboken: Wiley-Blackwell.

van Wynsberghe, A. (2013). "Designing Robots for Care: Care Centered Value-Sensitive Design." Science and Engineering Ethics 19, 407-433. https://doi.org/10.1007/s11948-011-9343-6

Online Resources

The Ethics of Innovation (2014 post by Chris Fabian and Robert Fabricant in Stanford Social Innovation Review; includes 9 principles of ethical innovation): https://ssir.org/articles/entry/the_ethics_of_innovation

Ethics for Designers (toolkit from a Delft University researcher): https://www.ethicsfordesigners.com/

The Ultimate Guide to Engineering Ethics (Ohio University): https://onlinemasters.ohio.edu/ultimate-guide-to-engineering-ethics/

Code of Ethics, National Society of Professional Engineers

Markkula Center for Applied Ethics, Technology Ethics Teaching Modules (introductions to software engineering ethics, data ethics, cybersecurity ethics, privacy)

Markkula Center for Applied Ethics, Resources for Ethical Decision-Making