Reviews and Praise

“Matthew Brown advances the literature on science and values in a manner that will serve multiple audiences. For the scientific community, he provides an inspiring new ‘ideal of moral imagination.’ For philosophers, he draws on the work of John Dewey to develop a rich pragmatist account of values and value judgments. This is an accessible and creative book.” – Kevin Elliott

“It is rewarding to see a book so well grounded in pragmatism and feminism that provides opportunities for robust outcomes to shape scientific practice… Matthew Brown’s book Science and Moral Imagination: A New Ideal for Values in Science provides an important contribution to work in socially responsible and socially responsive philosophy of science.” – Nancy Arden McHugh, Studies in HPS

“Finally, a book that grapples in detail with the really hard, central questions concerning values and science—the nature, sources, kinds, and cognitive status of nonepistemic values, how they stack up against epistemic values, how conflicts among these nonepistemic values are to be resolved, and so on. Science and Moral Imagination will be a winner among students and professionals alike, from the sciences as well as science studies.” – Janet Kourany

“Citing the pervasiveness of choice and contingency throughout the research process, Brown cuts through misunderstandings to offer a welcome new account of values, one that suggests a new type of responsibility that is aimed at avoiding failures of moral imagination in scientific inquiry. Science and Moral Imagination provides a refreshingly pragmatic approach to the urgent question of how to manage values in science.” – Erik Fisher

“Readers will especially appreciate Brown’s transparency in laying out his philosophical perspectives and the clarity of his writing, which makes his otherwise academic text accessible to a wide audience.” – Frank Grabowski, Choice Reviews

“I appreciate how this author’s work has helped me to notice how much I value the representation of these inclusive, pluralist elements in a normative picture of science.” – Joyce C. Havstad, Studies in HPS

“Science and Moral Imagination: A New Ideal for Values in Science is an exciting addition to the literature on values in science and an achievement on which Brown is to be congratulated.” – Sarah Wieten, Studies in HPS

About the Book

Contingency and choice are ubiquitous throughout the research process. Scientists, engineers, and biomedical researchers face choices of what to investigate and how to investigate it, what methods to use, what hypothesis to test, how to model phenomena, what data to collect, when to stop data collection, and what conclusions to draw based on the evidence. Peer reviewers for funding bodies decide to fund this grant application and reject that one. Committees decide to hire or tenure this scientist but not that one. Likewise, institutions have evolved in one direction but could have evolved in another; individual researchers have certain levels of talent and skill that could have been otherwise; sometimes researchers are in the right place at the right time, but other times they are not. Many of these contingencies are out of the control of individual choices, but others are matters of explicit decisions, and many things that are decided by habit, luck, or institutional practice could be made explicit and decided differently.

On what basis are scientists to decide what to do in the face of these contingencies and choices? Some would say that they must be decided objectively, by the evidence, by logic and statistics, by scientific standards (sometimes called “epistemic values”) such as simplicity or “Occam’s razor.” But right away, we can see that this answer is inadequate for many scientific questions, such as which of the infinitely many possible questions we should study, or which methods are ethical and humane to use on animals or human subjects. To make these decisions, we must also consider our values: what we care about, our goals, ethics, duty, responsibility, what is right and good.

This book argues that few, if any, of the decisions scientists face can, in principle, be decided by logic and evidence alone. Nor are epistemic standards sufficient. What’s more, even if those decisions could be settled that way, it does not follow that they should. Values are relevant throughout the research process, and scientists have an ethical responsibility to weigh values and make value judgments in the course of the research process, even when dealing with data and drawing conclusions. Each contingency in science could, in principle, become an explicit choice. Any such choice could have foreseeable consequences for what we value; to find out for any particular case, we have to think about values, exercise moral imagination to determine the consequences of each option, and exercise value judgment as part of the choice. We cannot always foresee the consequences; the choices may sometimes be irrelevant to any values, but we cannot determine that ahead of time without looking at the details of the case. Thus, scientists have a responsibility to make value judgments about scientific contingencies, and thus science is value-laden through and through.

I call this general argument “the contingency argument,” which I develop in detail in chapter 3. This argument is meant to undermine the ideal of science as value-free (or the value-free ideal for short), according to which values (except for scientific standards) have no role to play in scientific inquiry proper. That is, on the ideal, scientists should not consider values in science, except to ensure that their work is impartial toward and neutral for our values (Lacey 1999). The value-free ideal is motivated by the thought that it will minimize the bias, subjectivism, and potential for wishful thinking that values would bring into science. Science, after all, is supposed to be objective. And yet, as the contingency argument shows, scientists have an ethical obligation to bring in values. While this may appear to create a conflict among the scientists’ responsibilities, I argue that the apparent conflict rests on a mistake, an implicit view about values—that they are necessarily biasing, subjective, arbitrary, or, as I will put it, that they have no cognitive status. To deny that values have cognitive status is to deny them meaning, warrant, credibility, and truth. To insist, as I do, that values can have cognitive status means that they need not be biasing or subjective, that they need not lead to wishful thinking, that they are meaningful and can be warranted and credible. Indeed, we cannot make sense of human practices, human passions, heartfelt disagreement over values, or the genuine difficulty of moral quandaries without attributing some cognitive status to our values.

If values have their own cognitive status, then they need not necessarily lead us to subjectivism and wishful thinking. On the other hand, we still need to know how to manage values in science. Attributions of “cognitive status” are no panacea against wishful thinking. Nevertheless, there is no general reason to think that value-laden science is deficient or problematic.

What we need is a better theory of values: one that avoids the simplistic idea that values necessarily lead to unacceptable bias, one that allows us to acknowledge the cognitive status of values, and one that can help us distinguish the legitimate roles for values in science from those that lead to rigid and wishful thinking. This theory of values should be “science friendly,” neither presupposing some mysterious, supernatural realm of values nor removing values from the realm of evidence altogether. Science allows no unmoved movers. I propose a pragmatic pluralist theory of values, according to which values are inherently connected with action, come from many sources in human life, practice, and experience, and come in many different types according to the many different roles they play in our activities. On this view, there is a crucial distinction between unreflective or habitual values and reflective value judgment, where the latter is understood as a type of empirical inquiry into questions of what to do. The cognitive status of values tracks both their success in guiding human activities and the quality of the inquiry that warrants their evaluation. This theory of values may not be the only one for the job, nor does it necessarily settle the deeper questions of metaethics and ethical theory, but it has many benefits as a practical theory of values.

On this account, scientific inquiry and value judgment share common aims and a common structure, laid out in chapter 2, in the case of scientific inquiry, and chapter 6, in the case of value judgment. Both are conceived as problem-solving inquiry occasioned by problematic situations of practice. Both involve determining the facts of the case, proposing hypotheses for resolving the problem, and experimental testing. Both are contextualized by the problematic situation they respond to. Both are judged by whether they resolve the problematic situation in practice, rather than by merely intellectual criteria.

Central to the pragmatic pluralist theory of values is the concept of moral imagination. Value judgment requires considering stakeholders and the various implications and consequences of various courses of action connected with values. As such, it requires exercising imagination via empathy, dramatic rehearsal, and creative problem-solving. The exercise of moral imagination is not mere fantasy but a part of all evidence-based inquiry. The emphasis on imagination is an important feature of this theory of values, one compatible with any ultimate ethical theory.

Based on this account of values, I define a new ideal for values in science, a replacement for the value-free ideal, which has been undermined by the contingency argument. I call this ideal “the ideal of moral imagination,” defined as follows: Scientists should recognize the contingencies in their work as unforced choices, discover morally salient aspects of the situation they are deciding, empathetically recognize and understand the relevant stakeholders, imaginatively construct and explore possible options, and exercise fair and warranted value judgment in order to guide those decisions. This is an open-ended ideal to strive for, difficult in principle to satisfy, just as the value-free ideal was. It is not a minimal criterion for all inquiry to satisfy, but it is a genuine ideal.

To say that contingencies are choices is to say that there is more than one open option that reasonable inquirers could settle on. To say that the choice is “unforced” is to say that no factor decisively settles the matter or shows one of the options to be best all things considered, at least from the perspective of the scientific inquirer at the moment the choice is made. Not all contingencies are, in the moment, recognized as unforced choices by the inquirers. They may not imagine that there are other options, letting force of habit or convention, or the appearance of only one option, decide for them. But ideally, they would recognize those contingencies for what they are and exercise their moral imagination in order to make a responsible choice.

The ideal of moral imagination, in turn, allows us to recognize a second kind of irresponsibility in scientific research. Already thoroughly discussed are cases of misconduct, when scientists violate clear minimal constraints on responsible research (for example, fabricating data, plagiarism, or experimenting on human subjects without consent). The ideal of moral imagination allows us to recognize a distinctive form of irresponsibility in failures of moral imagination, where scientists fall short of the ideal by, for example, failing to consider a reasonable range of options (including the superior option), or by not considering the impact on relevant stakeholders. This second form of evaluation is what the book defines and advocates. It is generally a matter of degree, whereas misconduct is usually an all-or-nothing question.

While the ideal of moral imagination allows us to identify a distinctive failure of responsibility, its emphasis is on the positive, on what values and value judgment can contribute to scientific inquiry. The ideal of moral imagination gives scientists something to strive for and tools for responsibly making the choices that pervade the research process. It can guide decisions about research agenda, methodology, and framing hypotheses; it provides guidance on the questions that arise in the conduct of inquiry, of gathering data, of testing and refining hypotheses; it can improve the way that scientific results are presented and applied.

Get the Book / Additional Resources

  • Available for purchase from the University of Pittsburgh Press, Amazon, or wherever fine books are sold.
  • Available Open Access (CC-BY-NC-ND 4.0) via PDF, ePub, Kindle, Nook, or JSTOR. Open Access publication was partially funded by The University of Texas at Dallas Office of Research through the HEARTS program.
  • Download the Moral Imagination Framework worksheet from the back of the book. Its use is described in detail in the Conclusion of the book. It is based on the framework described in Chapter 6.