Why science is not objective | Stephen John » IAI TV (2023)

We view science as an objective representation of the world, free from political and other biases. But things are not that simple. Evidence alone doesn't tell you when you have enough evidence to support a claim, so scientists sometimes have to make judgment calls that depend on ethical and political values. This threatens to render our understanding of scientific objectivity worthless. But all is not lost, argues Stephen John.

"Social distancing and face masks must remain FOREVER, says Susan Michie, member of the communist SAGE committee."daily mailThe implication was clear: Because she's a communist with ID, Susan Michie isn't to be trusted. When her politics were brought up by an interviewer, Michie's response wassimply: "You don't ask other scientists about politics, so I like to talk about science, which is my job, and I limit myself to that." For Michie, politics is one thing, science is another.

One way to sum up this mini-controversy is in terms of objectivity: the Mail claims Michie can't be objective, while she insists she can be. We often think that scientists should be objective, and we associate scientific objectivity with "value-freedom". On this view, a scientific claim is objective if the justification for that claim does not depend on any ethical or political value, and a scientist is objective when she does not allow her values to influence her reasoning or arguments. These notions of good science as value-free connect to a broader picture of science as telling us how the world really is, stripped of comforting illusions. That picture can also explain why science should play an important role in politics. Effective policies must be based on our best understanding of the world. Good, objective scientists tell us how the world is. Precisely because they achieve this perspective by setting values aside, they can act as neutral arbiters in political debate. So there is a neat package of ideas combining politics, science, objectivity and value-freedom, which explains why we care whether Michie can leave her communism at the door.

There's just one problem: this picture is false.

There is no way to do science without presupposing or implying an ethical, political, or economic point of view.

How science is value-laden

The example above neatly captures a common picture of the relationship between science, politics and policy. Scientific advisers play a central role in policy formulation; during the Covid-19 pandemic, UK leaders repeatedly stressed that they were "following the science". Given their power, it seems important that scientists do not allow their advice to be influenced by their own political, ethical or economic values. It would be deeply undemocratic for Michie to smuggle communism into policy through the back door of scientific advice, as the Mail insinuates. And that seems a valid concern even if communism were a good idea. On the other hand, scientists are, of course, human beings with needs, desires, passions and interests. Can they stop their values from influencing their claims?

Many philosophers of science now say that this is impossible: scientific reasoning cannot be value-free. Notice how strong this claim is. There are many cases where scientists' ethical or political values have, in fact, influenced their science. A famous example is the nineteenth-century "discovery" of "drapetomania", a supposed psychological condition that caused slaves to flee the plantations. In hindsight, the "science" of drapetomania was, consciously or unconsciously, driven by the values and interests of the affluent white establishment. The philosophers' concern, however, is not just that much "science" is in fact influenced by values. Nor is it merely that value-laden science is hard to avoid, or hard to detect, or that decisions about what to research may reflect economic values. Rather, it is that the very core of scientific justification, the justification of claims such as "smoking causes lung cancer" or "Covid-19 is airborne", must itself be value-laden. Why think that, and what does it mean for the relationship between science and politics?

How much evidence that a claim is true should we demand before we can say that the claim is justified?

When is enough evidence enough?

Perhaps the simplest and most powerful argument for an inescapable role for values in scientific justification appeals to "inductive risk". All scientific knowledge is inductive; strictly speaking, it goes beyond our evidence. For example, although we have a large body of evidence that smoking causes lung cancer, we cannot be 100% certain of this conclusion. Maybe, just maybe, something else explains the patterns we see. This simple fact poses a problem: how much evidence that a claim is true should we demand before we say that the claim is justified? Science itself cannot answer the question of how certain is "certain enough". There is no fact out in the world of the form "only accept claims if you are at least 95% sure".

The more evidence we demand, the lower our chance of a false positive (accepting a claim that is actually false). That is a good thing, because acting on false claims is pointless at best and damaging at worst. However, demanding too much evidence comes at a price: an increased risk of false negatives (failing to accept claims that are actually true). This, too, can be a problem, because failing to accept true claims can also be costly. A central claim of much recent philosophy of science, building on arguments by Heather Douglas, is that the only sound way to set a required degree of certainty is to think about the ethical and practical costs of these different errors. The higher the costs associated with false positives, the more certainty we should demand; the higher the costs associated with false negatives, the less certainty we should demand. When we do science, we need to think about the extra-scientific implications of saying that a claim is justified.
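To make the trade-off concrete, here is a minimal decision-theoretic sketch of the kind of reasoning the inductive-risk argument describes (my illustration, with made-up costs, not an example from the article): the evidential bar for accepting a claim falls straight out of how we price the two kinds of error.

```python
# Minimal sketch of the inductive-risk trade-off. All numbers are hypothetical.
# Expected cost of accepting a claim that is true with probability p:
#   (1 - p) * cost_false_positive
# Expected cost of rejecting it:
#   p * cost_false_negative
# Accepting wins once p exceeds the threshold computed below.

def acceptance_threshold(cost_false_positive: float,
                         cost_false_negative: float) -> float:
    """Probability above which accepting the claim minimises expected cost."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# A false alarm is cheap, a miss is catastrophic: the evidential bar drops.
print(acceptance_threshold(cost_false_positive=1, cost_false_negative=9))  # 0.1

# A false alarm is catastrophic, a miss is cheap: the bar rises.
print(acceptance_threshold(cost_false_positive=9, cost_false_negative=1))  # 0.9
```

The point of the sketch is that nothing in the evidence itself fixes the threshold; the costs, which are ethical and practical judgments, do.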

Consider an example. In March 2020 the UK government had to decide whether to accept the claim "Covid-19 risks overwhelming the NHS". It had some evidence for this claim, but that evidence was uncertain, complex and contested. How much evidence should it have demanded before accepting the claim? The less evidence it demanded, the greater the likelihood of a false alarm, leading to massive and unnecessary damage to the economy. The more evidence it demanded, the greater the chance of a false negative, leading to unnecessary deaths. Many people now feel that the government got that balance wrong: it waited too long. I don't know whether that's right. What is clear is that such complaints express not only a scientific judgment but also an ethical or political one, about the proper balance between economic and public-health goals, the nature of governments' responsibilities, and so on.

According to the argument from inductive risk, then, scientific justification must be value-laden, because decisions about how certain is "certain enough" must rest on broader ethical claims. Note that the argument does not say that these choices must always be conscious or explicit. Mostly, scientists resolve their inductive-risk problems by appeal to convention; for example, statistical techniques that minimise false positives are often built into standard statistical software. The force of the argument is that these conventions may themselves presuppose or imply contestable ethical values. Someone born and raised eating meat may reach for the chicken drumstick without thinking, but that doesn't make eating meat an ethically neutral activity.
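A concrete instance of such a convention (my illustration, not the article's): the 5% significance level that ships as the default of everyday statistical practice. Nothing in the data dictates 0.05 rather than 0.01 or 0.10; the cutoff silently encodes a judgment about how bad false alarms are relative to misses.

```python
# Illustration: the conventional 5% significance level is an inductive-risk
# choice baked into routine statistical testing. The data below are made up.
from scipy import stats

treatment = [2.1, 2.4, 1.9, 2.8, 2.5, 2.2]
control = [1.8, 2.0, 1.7, 2.1, 1.9, 1.6]

t_stat, p_value = stats.ttest_ind(treatment, control)

# alpha caps the false-positive rate; choosing 0.05 rather than 0.01 or 0.10
# is a convention, not something the data themselves settle.
alpha = 0.05
print(f"p = {p_value:.3f}; reject the null at alpha={alpha}: {p_value < alpha}")
```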

We face a problem: the authority of scientific experts seems to rest on the idea that they are objective; objectivity seems to demand value-freedom; but the problem of inductive risk seems to imply that all scientific justification is value-laden. Something has to go, but what?

A problem for science or science communication?

One possibility is to find a way for scientists to avoid taking inductive risks themselves; for example, instead of flatly asserting "Covid-19 is airborne", they could say "we are very confident that Covid-19 is airborne", thereby avoiding the risk of flat-out error. This would leave politicians to decide how certain is "certain enough", but scientists could at least keep their hands clean. In fact, there are cases where scientists do something like this. The reports of the IPCC, for example, are full of qualifications, such as that it is "virtually certain" that climate change is the result of human activity.

We face a problem: the authority of scientific experts seems to rest on the idea that they are objective; objectivity seems to demand value-freedom; but the problem of inductive risk seems to imply that all scientific justification is value-laden.

On this response, concerns about inductive risk aren't really about the science itself, but about how the science is communicated. The problem is just that scientists sometimes talk sloppily, asserting that claims are true when they should say they are "X% sure" they are true.

Unfortunately, this strategy does not work, because inductive-risk problems arise throughout the scientific process, not just at the reporting stage. Consider the models used to predict the likely effects of different lockdown policies. These models provided probability estimates rather than certainties. Yet even making these probabilistic claims involved many contestable assumptions about various parameters, such as the public's willingness to comply with lockdown measures. In turn, overestimating or underestimating those parameters can have significant practical consequences. The modellers faced an analogue of the inductive-risk problem.
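To see how an assumed parameter propagates into headline numbers, consider a deliberately crude toy projection (my own sketch, not one of the actual pandemic models): a small shift in assumed compliance swings the projected caseload by orders of magnitude.

```python
# Toy projection (not a real epidemiological model): cases grow by a factor
# of r_effective each generation, where assumed compliance with lockdown
# measures scales transmission down. All numbers are hypothetical.

def projected_cases(r0: float, compliance: float, generations: int,
                    seed_cases: int = 100) -> float:
    """Cases after `generations` generations of spread."""
    r_effective = r0 * (1 - compliance)
    return seed_cases * r_effective ** generations

for compliance in (0.5, 0.6, 0.7):
    cases = projected_cases(r0=3.0, compliance=compliance, generations=10)
    print(f"assumed compliance {compliance:.0%}: ~{cases:,.0f} cases")
```

Under- or over-estimating one behavioural parameter moves the output from "epidemic fizzles out" to "health system overwhelmed", which is exactly the analogue of the inductive-risk problem the modellers faced.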

You might dig in your heels and insist that scientists could somehow have avoided making value judgments, for example by running multiple models and reporting all of their results. Indeed, scientists often do run models with different parameter values and offer ranges of estimates. But even this approach involves value judgments about which approaches are worth investigating, what is most important to know, and so on. No science studies all the possibilities and reports all the uncertainties. It is not clear that this is even possible; and even if it were, it looks like a huge waste of time and effort. If expert advisers exercise no judgment at all, the policy-maker might as well just do a Google search.

Values as a guide to objectivity

We are left, then, with two options: deny the authority of science, or change our view of objectivity. The first option is exciting but risks throwing the baby out with the bathwater: even if climate science or epidemiology involves some value judgments, it still seems better for climate scientists or epidemiologists to play a role in policy-making than to leave everything to untrained pundits.

So the second option is to abandon the notion that objectivity is about the absence of values and say that it is about the presence of the right values.

So the second option is to abandon the notion that objectivity is about the absence of values and say that it is about the presence of the right values. This proposal may seem worrying. How could we know what the right values are? Is there even such a thing as the "right" values? Haven't we rescued objectivity in science only at the price of entering the even more contested terrain of objectivity in ethics?

These are tough questions. Fortunately, in the context of policy advice we can largely sidestep them. In a democracy, the "right" values are democratic values: the values that shape our political system and are shared by most people. As long as the values that shape scientific practice are in line with these democratic values, science can be objective.

Susan Michie was wrong, then, to draw a sharp line between politics and science, because good science must be political. And the Daily Mail was wrong to say that her advice is automatically undermined just because it may be guided by values different from yours.

Of course, ensuring the democratic legitimacy of the values that shape science is neither simple nor straightforward. There is a fine line between respecting democratic values and pandering to the interests of the ruling party: it is the values of the people that scientific advice should reflect, not policy-makers' short-term interest in electoral success. This is tricky, because it can be hard to know what people really want, and scientists may not be best placed to find out anyway. Building objective science may require greater public participation in, and oversight of, scientific practice. It may require systems of challenge or debate. All such systems can go wrong. These problems, however, are not peculiar to thinking about scientific advice; rather, they are problems anyone must face in explaining how government can be democratically legitimate. Nobody ever thought objectivity would be easy.


Still, you might be nervous: haven't we replaced the idea that science is a reliable guide to the world with the idea that our values are a good guide? Perhaps. But if we want to follow the science, and science must rest on value judgments, we need to make sure that science ultimately follows us.
