The transition in the US Federal executive branch in January has been accompanied by a large change in the stated support for science. This appears to be true for investigations of the causes of the pandemic, the nature of the virus and its mutations, and wider support for scientific studies about diverse outcomes of the disease.
The contrast deserves careful focus on what is meant by “science” and what is meant by “science-based” policies. I have commented on this in the past but a more sophisticated perspective is merited now that loud voices are demanding science-based decisions.
Of course, the scientific method brings with it a set of attributes of knowledge production that have immense attraction. Science forces very specific definitions of the question being addressed in a study. Opinions are valued, not as knowledge to be acted upon but as hypotheses to be tested. Bertrand Russell noted that the deeper the knowledge of the scientist, the more likely they are to emphasize the limitations of their knowledge. There is a humility required in reporting research findings, one that tends to devalue opinion without evidence.
Scientific pursuits are self-correcting. The dialectic of research, critique, new research, new critique, etc., ensures that errors in conclusions are corrected over time. In my opinion, the scientific method corrects erroneous results faster than well-ensconced ideologies adapt to new circumstances. The peer review process is a heavy hand prompting those corrections.
But there are weaknesses in science that should be acknowledged. Science progresses most visibly by narrowing attention, focusing activities on very precise questions. These questions are often very small components of broader issues facing the world. Most of the pressure on scientists is to add to a vein of discovery, seeking deeper insights into some phenomenon. An individual scientist can thus be very well informed about a very small domain of knowledge.
A practical example of this lies in the nomenclature of “efficacy” and “effectiveness” in virology. Very highly controlled clinical trials allow a scientist to answer the question of whether a trial vaccine has more positive outcomes than no treatment. In the clinical trial context, this is labeled the “efficacy” of the vaccine, as found in a particular clinical trial. What population was studied, what was the base prevalence of the disease, how were the vaccinations administered, how long a follow-up was conducted, and a host of other features of the research design are pertinent when assessing “efficacy.” “Effectiveness” asks a similar question about the utility of the vaccine in real life, without the designed controls (e.g., variation in how the dose was delivered, comorbidities not experienced in the trials). It is most often the case that effectiveness is lower than efficacy.
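The trial-based measure described above is conventionally computed as a relative risk reduction: one minus the ratio of the attack rate in the vaccinated arm to the attack rate in the control arm. A minimal sketch of that arithmetic follows; the case counts and arm sizes are purely hypothetical, not drawn from any actual trial.

```python
def efficacy(cases_vaccinated, n_vaccinated, cases_control, n_control):
    """Vaccine efficacy as relative risk reduction:
    1 - (attack rate among vaccinated / attack rate among controls)."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_control = cases_control / n_control
    return 1 - attack_rate_vaccinated / attack_rate_control

# Hypothetical trial: 10 cases among 10,000 vaccinated participants,
# 100 cases among 10,000 in the control arm.
print(f"{efficacy(10, 10_000, 100, 10_000):.0%}")  # → 90%
```

Note how every design feature listed above (population studied, base prevalence, follow-up length) enters through those four numbers, which is exactly why an efficacy figure is tied to the particular trial that produced it.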
What’s the connection to narrowness of scientific enterprises? When scientists produce discoveries, those discoveries tend to come from controlled settings or from data with other limitations. Applying the discoveries to the real world, however, especially in domains involving human behavior, often requires knowledge outside their areas of expertise. For example, a virologist in a laboratory setting is not necessarily an expert in applying clinical findings to large, diverse communities outside the clinic.
The best applications of scientific findings to the larger world need experts from a variety of fields.
As we begin to see more scientists in public media answering journalists’ questions, we all need to ask whether those scientists are speaking within their domain of expertise. Scientists, because of their deep knowledge in one area, are not automatically experts outside it. Applications require multiple disciplines, and the answers to real-world questions may not be known to each scientist in the chain of evidence producing an application. The best scientists will admit they do not know the answer to the journalist’s question, and underscore the fact that scientific application is a team sport.