One of the unusual attributes of universities as institutions is the granting of tenure to faculty. Few remaining work organizations give their members such a guarantee for their entire careers.
Surveys show that new faculty find the review process opaque, with vague decision criteria and a certain threatening air of mystery attached to it.
Now that I’m completing my second year, I’ve seen a large set of promotion dossiers describing individual cases. I’ve talked to many faculty about the process. In my experience, the greatest source of misunderstanding arises from the criteria for scholarly product, not those concerning teaching performance or service. Hence, I thought it would be good to comment on the process of assessing scholarship as part of a promotion review. (I may do a future post on service and teaching criteria.)
The tenure review process is sequential. In an attempt to achieve consistency and equity, it is also multifaceted: the department/unit faculty, the school dean, the dean of the Graduate School, the university committee, the provost, and the president all read the dossier material.
The department/unit level usually has the responsibility of giving annual feedback to pre-tenure faculty and of conducting a mid-term review in the third year of the usual six-year pre-tenure term. When these are conducted correctly, the candidate receives objective guidance regarding where he or she stands. In addition, the faculty member can, on his or her own initiative, examine the curricula vitae of recently tenured faculty in the same field at peer institutions as well as those tenured at Georgetown in the recent past. (It seems clear that standards for tenure have risen over time, so the more recent cases are more informative.)
Tenure is granted by the university, not by a department or school. Hence, the later stages of review involve a committee of full professors from different areas of the university. This committee is charged with taking a university-wide perspective, to ensure, as much as is humanly feasible, the application of the same standards across fields that manifest their scholarly product in very different ways. (Some fields use book-length products, some use peer-reviewed journal articles, and still others use non-text products like videos, constructed objects, and performances.) It is not uncommon for the university committee's recommendation to differ from that of the department/unit.
The committee has copies of the candidate's scholarly product, letters from reviewers at peer institutions evaluating that product, the evaluative commentary of the unit's tenured faculty, and a statement by the candidate himself/herself.
The best dossiers contain the candidate's self-evaluation and a description of how the various research thrusts cohere. They contain a department/unit review of both the strengths and weaknesses of the case (no case is without some flaws). The best have outside letters written by internationally renowned scholars in the field who have no prior connection with the candidate, who have carefully reviewed and discussed the candidate's work, who compare the candidate to others in the field, and who provide a clear recommendation to the university. In fields in which citations to work are useful evaluative tools, the dossier provides citation statistics. In fields in which published reviews of work are more common, those reviews can be provided or described. The dossier also provides clear information about what is in the candidate's scholarly "pipeline."
Weak dossiers often contain cursory letters from outside reviewers who lack strong credentials themselves, academic unit reviews that fail to discuss both strengths and weaknesses, and little evidence that the candidate's scholarly product was carefully reviewed.
In sum, there is a need to evaluate the quality, quantity, and trajectory of the scholarly product. Ideally, the evaluators are attempting to judge the impact of the scholarship on the given field. Is there evidence that the candidate will eventually achieve international prominence in the field? The evaluators are seeking evidence of a cohesive body of original accomplishment that stands as the candidate's clear area of excellence. Some fields develop proxy indicators of quality (e.g., the impact factor of the journal, the prestige of a given press, the citation count of the work, peer-reviewed externally funded proposals). However, careful reading by experts in the field is most often needed; that is the value of having multiple independent readers, both inside and outside the university.
The trajectory of the candidate’s scholarly work is also pertinent to the decision because a grant of tenure most often has a several-decade impact on the university. Strong dossiers provide working papers, draft chapters of the next book, or grant proposals — any material that gives the reader assurance that the productivity of the candidate is likely to continue beyond the review year.
The evaluative process is inherently subjective; it is also deeply confidential. To achieve wise decisions, the reviews are done by strong scholars both inside and outside the candidate's subfield, and by scholars both inside and outside Georgetown. They provide their frank judgments based on pledges of confidentiality.
No process involving human judgment is perfect, but Georgetown takes this responsibility seriously and with a strong spirit of institutional commitment. Over the years, I’ve come to believe that the more the process can be illuminated both for the reviewers and those being reviewed, the better the process can become. We need to find a way to share more information about the process without violating its key principles.