I find myself reading across disciplines these days, paying special attention to differences in the logical structures and scholarly criteria for determining excellent work in each field.
From time to time in this learning, I come across odd correspondences between two disciplines, sometimes two fields that tend to have little interaction. I hit one last week.
The word “hermeneutics,” naming an iterative process of discerning meaning in a text, is as old as biblical scholarship. At this point the word applies to a whole subfield of philosophy, but it also has more specific meanings. Its use in that sense involves a sequence of interpretations of a text – a “guess,” as it were – followed by application of that interpretation to a more detailed review of parts of the text, other texts by the same author, or external phenomena related to the text. In short, is the initial interpretation supported by repeated, more specific examinations? If not, the initial interpretation is altered and the process begins again, until at some point there appears to be no important change in the then-achieved interpretation. (There are some obvious caveats here. One is that we must assume the text itself exhibits some coherence.)
A key attraction of making such a conceptual structure for interpreting the meaning of words explicit is that it can help guide new students’ behavior as they learn to examine new material.
Bayesian statistics is founded on an important theorem, mathematical in its logic, that permits the integration of what is known prior to an analysis of new data into the analysis of those new data themselves. Instead of basing our conclusions about a phenomenon (e.g., what portion of a patient population benefits from a specific drug) only on a single set of data, we ask how the new findings alter the prior conclusions, based on other sources of data on the same phenomenon. (There are some obvious caveats here, too. One is that we must assume that there is no difference between the conditions generating the prior data and the conditions of the current data collection.)
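For readers who want to see the machinery, the theorem in question is Bayes’ theorem. Written in terms of the drug example (the symbols below are my own shorthand, not tied to any particular study):

```latex
% Bayes' theorem, cast in terms of the drug example:
%   \theta = the unknown proportion of patients who benefit from the drug
%   y      = the newly collected data
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)}
% The posterior p(\theta | y) combines the prior p(\theta), summarizing what
% earlier sources of data suggested, with the likelihood p(y | \theta) of the
% new data; p(y) is simply a normalizing constant.
```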
So what?
First, it’s fascinating to me to learn of two relatively independent fields inventing methods that resemble one another. Second, one wonders about the counterfactual – what would have happened if the two fields had been collaborating earlier? For example, the application of hermeneutics, in some sense, seems quite adaptive to new information. Indeed, some treat the structure of hermeneutics as a circle of interpretation and reinterpretation that never ends. New observations can be entered into older, completed interpretations, yielding a new state of interpretation.
Bayesian statistical approaches are designed as a two-step process, an integration of the new observation “on top of” everything else we know that yielded our beliefs prior to the new data. Of course, repeated application of Bayesian estimation to repeated new data collections creates an ongoing updating process that closely resembles the hermeneutic circle. But its original focus was the two-step process.
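As a toy illustration of that resemblance, here is a sketch of repeated updating for the drug example, using a standard conjugate Beta-Binomial model in Python; the sketch and its numbers are mine, invented purely for illustration:

```python
# Toy sketch: repeated Bayesian updating of the proportion of patients who
# benefit from a drug, using a conjugate Beta-Binomial model.
# The posterior after each round of data becomes the prior for the next round,
# mirroring the interpret/reinterpret loop of the hermeneutic circle.
# All numbers are invented for illustration.

# Weakly informative starting prior: Beta(1, 1), i.e., uniform on [0, 1].
alpha, beta = 1.0, 1.0

# Each tuple is one new data collection: (patients who benefited, patients who did not).
studies = [(12, 8), (30, 20), (55, 45)]

for benefited, not_benefited in studies:
    # Conjugate update: successes add to alpha, failures add to beta.
    alpha += benefited
    beta += not_benefited
    posterior_mean = alpha / (alpha + beta)
    print(f"After {benefited + not_benefited} more patients, "
          f"estimated benefit rate = {posterior_mean:.3f}")
```

Each pass through the loop treats the previous posterior as the new prior, which is the two-step integration applied again and again.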
It looks as though some developments in machine learning, which can produce constantly updated predictions of a future state as new data arrive, are using formal Bayesian methods. This resembles more fully the continuous revision of the hermeneutic circle.
So maybe the two conceptual structures are coming to resemble each other more and more. I’d love to see a dialogue between these fields, to learn whether any new thoughts would arise if they understood each other more fully.
Note that both of the conceptual structures Provost Groves cites depend on unproven (or unprovable) assumptions. Hermeneutics, as he points out, relies on the supposition that the underlying text is coherent. Bayesian analysis must assume that there is no difference in the conditions which generate both the prior and current data.
That is another–and very interesting–commonality of those two conceptual structures. To make any progress at all, both have to take something “on faith.” Per the cross-disciplinary dialogue Provost Groves urges, I’d like to know how many disciplines share that feature.
In one of my Georgetown MALS classes, Professor John Reuscher (as sharp a guy as I’ve ever met) introduced us to the Incompleteness Theorems of the mathematician Kurt Gödel. They showed that systems rich enough to express basic arithmetic, if consistent, cannot prove their own consistency from within. They had, in other words, to take something “on faith” in order to be able to work at all. Might Gödel’s conclusion, Reuscher asked us to consider, apply to knowledge more generally?
That is something I would like to see a cross-disciplinary dialogue address. How many disciplines share the Gödel problem? And what are the implications?
As the psycholinguist Virginia Satir once pointed out, all words are symbols. So one must look at context. In some ways we must have faith, or at least mutual agreement, in all our discussions. Why does one plus one equal two? Why is black black? Just a thought.
Or. I may have totally missed the point!
Well. Having had one lecture on statistics in med school and none as an undergrad, I had trouble understanding this post! However, it reminded me somewhat of Ted Leonsis’s book “The Business of Happiness.” In that book he describes how he got interested in computers. As an English major he was writing his thesis on Hemingway. Fr. Durkin encouraged him to use the GU computer and collaborate with the linguistics department to help assess when a particular novel was written. It worked. Then began a future career in computers. Thanks to a Jesuit who encouraged him to reach out to other disciplines and think outside the box! So even though I don’t understand the theory or statistics, I think Ted’s story might be a good Hoya example. Just a thought!
Hey, enough with the enticing clickbait titles. #catnip