I find myself reading across disciplines these days, paying special attention to differences in the logical structure and scholarly criteria that define excellent work in each field.
From time to time in this reading, I find odd correspondences between two disciplines, sometimes fields that tend to have little interaction. I hit one last week.
The word “hermeneutics,” denoting an iterative process of discerning meaning in a text, is as old as biblical scholarship. By now the word names a whole subfield of philosophy, but it also has more specific meanings. In that more specific sense, it involves a sequence of interpretations of a text: an initial “guess,” as it were, followed by application of that interpretation to a more detailed review of parts of the text, other texts by the same author, or external phenomena related to the text. In short, is the initial interpretation supported by repeated, more specific examinations? If not, the initial interpretation is altered and the process begins again, until at some point there appears to be no important change in the interpretation so far achieved. (There are some obvious caveats here. One is that we must assume the text itself exhibits some coherence.)
A key attraction of making such a conceptual structure explicit is that it can guide the behavior of new students as they learn to examine new material.
Bayesian statistics is founded on an important theorem, mathematical in its logic, that permits the integration of what is known prior to an analysis of new data into the analysis of those new data themselves. Instead of basing our conclusions about a phenomenon (e.g., what proportion of a patient population benefits from a specific drug) on a single set of data alone, we ask how the new findings alter our prior conclusions, which were based on other sources of data about the same phenomenon. (There are some obvious caveats here, too. One is that we must assume there is no difference between the conditions that generated the prior data and the conditions of the current data collection.)
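A minimal sketch of this prior-plus-data logic, using the drug-response example from the paragraph above and a conjugate Beta-Binomial model (all specific numbers here are hypothetical, chosen only for illustration):

```python
# Bayesian update for the proportion of patients who benefit from a drug.
# Prior knowledge is encoded as a Beta(a, b) distribution; new trial data
# (counts of successes and failures) update it to a Beta posterior.
# All numbers are hypothetical, for illustration only.

def update_beta(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: posterior = prior counts + observed counts."""
    return prior_a + successes, prior_b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: the expected response proportion."""
    return a / (a + b)

# Prior: suppose earlier studies suggested roughly 30% of patients benefit,
# encoded as Beta(3, 7) -- equivalent in weight to 10 prior observations.
prior = (3, 7)

# New trial: 18 of 40 patients benefit.
posterior = update_beta(*prior, successes=18, failures=22)

print(f"Prior mean:     {beta_mean(*prior):.3f}")      # 0.300
print(f"Posterior mean: {beta_mean(*posterior):.3f}")  # 0.420
```

The posterior mean sits between the prior belief (30%) and the raw trial rate (45%), weighted by how much evidence each side carries, which is exactly the "new findings alter prior conclusions" question posed above.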
First, it’s fascinating to me to learn of two relatively independent fields inventing methods that resemble one another. Second, one wonders about the counterfactual: what would have happened if the two fields had been collaborating earlier? For example, the application of hermeneutics seems, in some sense, quite adaptive to new information. Indeed, some treat the structure of hermeneutics as a circle of interpretation and reinterpretation that never ends. New observations can be entered into older, completed interpretations, yielding a new state of interpretation.
Bayesian statistical approaches were designed as a two-step process: an integration of the new observations “on top of” everything else we knew that yielded our beliefs prior to the new data. Of course, repeated application of Bayesian estimation to successive data collections creates an ongoing updating process that closely resembles the hermeneutic circle. But the original focus was the two-step process.
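The ongoing updating process can be sketched concretely: each new batch of data is folded into the posterior from the previous round, which then serves as the next prior. This is a hypothetical Beta-Binomial illustration, not any particular field's procedure; the study counts are invented:

```python
# Repeated Bayesian updating: yesterday's posterior becomes today's prior.
# Numbers are hypothetical, for illustration only.

def update_beta(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: posterior = prior counts + observed counts."""
    return prior_a + successes, prior_b + failures

# Start from a uniform prior over the response proportion, then update
# as three successive (successes, failures) batches arrive.
belief = (1, 1)
batches = [(6, 4), (11, 9), (30, 20)]

for i, (s, f) in enumerate(batches, 1):
    belief = update_beta(*belief, s, f)
    a, b = belief
    print(f"After study {i}: Beta({a}, {b}), mean = {a / (a + b):.3f}")

# Folding all the data in at once gives the same final posterior,
# Beta(48, 34): the cycle of revision is consistent however it is paced.
```

Each pass through the loop is one turn of the circle: the current interpretation (posterior) meets new observations and is revised into the next one.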
It looks like some developments in machine learning, which can produce constantly updated predictions of future states as new data arrive, use formal Bayesian methods. This more fully resembles the continuous revision of the hermeneutic circle.
So maybe the two conceptual structures are coming to resemble each other more and more. I’d love to see a dialogue between these fields, to learn whether new thoughts would arise if each understood the other more fully.