The transition in the US Federal executive branch in January has been accompanied by a large change in the stated support for science. This appears to be true for investigations of the causes of the pandemic, the nature of the virus and its mutations, and wider support for scientific studies about diverse outcomes of the disease.
The contrast deserves careful focus on what is meant by “science” and what is meant by “science-based” policies. I have commented on this in the past but a more sophisticated perspective is merited now that loud voices are demanding science-based decisions.
Of course, the scientific method brings with it a set of attributes of knowledge production that have immense attraction. Science forces very specific definitions of what question is being addressed in a study. Opinions are valued not as knowledge to be acted upon but as hypotheses to be tested. Bertrand Russell noted that the deeper the knowledge of the scientist, the more likely they are to emphasize the limitations of their knowledge. There is a humility required in reporting research findings, which tends to devalue opinion without evidence.
Scientific pursuits are self-correcting. The dialectic of research, critique, new research, new critique, etc., ensures that errors in conclusions are corrected over time. In my opinion, the scientific method corrects erroneous results faster than well-ensconced ideologies adapt to new circumstances. The peer review process is a heavy hand prompting corrections.
But there are weaknesses in science that should be acknowledged. Science progresses most visibly by narrowing attention, focusing activities on very precise questions. These questions are often very small components of broader issues facing the world. Most of the pressure on scientists is to add to a vein of discovery, seeking deeper insights into some phenomenon. An individual scientist can thus be very well informed about a very small domain of knowledge.
A practical example of this lies in the nomenclature of “efficacy” and “effectiveness” in virology. Very highly controlled clinical trials allow a scientist to answer the question of whether a trial vaccine has more positive outcomes than no treatment. In the clinical trial context, this is labeled the “efficacy” of the vaccine, as found in a particular clinical trial. What population was studied, what was the base prevalence of the disease, how were the vaccinations administered, how long a follow-up was conducted, and a host of other features of the research design are pertinent when assessing “efficacy.” “Effectiveness” asks a similar question about the utility of the vaccine in real life, without the designed controls (e.g., variation in how the dose was delivered, comorbidities not experienced in the trials). It is most often the case that effectiveness is lower than efficacy.
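The trial notion of efficacy is conventionally quantified as the relative risk reduction: the proportional drop in the attack rate among the vaccinated arm compared with the unvaccinated arm. A minimal sketch, using hypothetical attack rates rather than figures from any real trial:

```python
def vaccine_efficacy(attack_rate_unvaccinated: float,
                     attack_rate_vaccinated: float) -> float:
    """Relative risk reduction: (ARU - ARV) / ARU.

    ARU = attack rate in the unvaccinated (control) arm,
    ARV = attack rate in the vaccinated arm.
    """
    aru, arv = attack_rate_unvaccinated, attack_rate_vaccinated
    return (aru - arv) / aru

# Hypothetical example: 2% of the control arm infected vs. 0.1%
# of the vaccinated arm during the trial's follow-up window.
ve = vaccine_efficacy(0.02, 0.001)
print(f"efficacy = {ve:.0%}")  # prints "efficacy = 95%"
```

The same arithmetic applied to observational, real-world data yields an effectiveness estimate, which is typically lower because none of the trial's controls (dosing schedule, follow-up, population selection) are in force.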
What’s the connection to narrowness of scientific enterprises? When scientists produce discoveries, they tend to be in controlled settings or dealing with other data limitations. Applying the discoveries to the real world, especially in domains involving human behavior, however, often requires knowledge outside their areas of expertise. For example, a virologist in a laboratory setting is not necessarily an expert in applying clinical findings to large, diverse communities outside the clinic.
The best applications of scientific findings to the larger world need experts from a variety of fields.
As we begin to see more scientists in public media answering journalists’ questions, we all need to ask whether the scientists are speaking within their domain of expertise. Scientists, because of their deep knowledge in their area of expertise, are not automatically experts outside those areas. Applications require multiple disciplines. Answers to real-world questions may not be well known by each scientist in the chain of evidence producing an application. The best scientists will admit they do not know the answer to the journalist’s question, and underscore the fact that scientific application is a team sport.
“This appears to be true for investigations of the causes of the pandemic, the nature of the virus and its mutations, and wider support for scientific studies about diverse outcomes of the disease.”
That didn’t last very long… Today this all seems like political science more than actual science. For example, I can’t think of any government counting people with natural immunity to covid as part of the herd immunity group. It’s like they solely believe (and want people to believe) the experimental covid vaccines are the only way out of this pandemic – and those vaccines don’t even provide sterilizing immunity!
Key to developing science-based policies is the use of comprehensive policy analysis:
1. Descriptive Analysis (something with which scientists are very familiar)
2. Predictive Analysis (something with which many scientists are familiar, but for which input from a broader group of people from various fields might be needed in order to consider unintended consequences and the like)
3. Normative Analysis (something which many scientists purposely ignore because they want to be, or pretend to be, neutral, and which shouldn’t be ignored when it comes to developing policies, because the whole point of policy making is to improve a situation in accordance with the values of the various policy actors and targets)
4. Prescriptive Analysis (something which basic scientists might be hesitant about doing, especially if they have been reluctant to engage in the normative analysis, but which needs to be done because so many policy makers, decision makers and politicians often say, “just tell me what to do!”)