Many professions have thresholds of review that document stages of accomplishment. For those aspiring to tenure-line faculty positions there are three: hiring into a tenure-line assistant professor position, promotion to tenured associate professor, and promotion to full professor. The first is increasingly competitive, with hundreds of PhDs competing for one position. The latter two are based on judgments about an individual's contributions to a specialty while at the institution. Each of these hurdles is continually being raised as fields evolve and knowledge expands.
A university takes seriously its decisions to grant passage across these thresholds. The move from assistant professor without tenure to associate professor with tenure is a particularly important step because of the security it provides the candidate and the commitment the university makes in return. Tenure is granted by the university, not by a department or school. Thousands of person-hours are spent reading the scholarly product, discussing the case, and writing evaluative reports. The process is hierarchical in one sense: the candidate's unit colleagues read his/her work, the dean of the school reviews their evaluation and makes a separate judgment, a university-wide committee reviews the candidate, and finally the provost and then the president make judgments.
The evaluations have three foci: scholarly product, teaching quality, and service to the profession and the university. (These are the same criteria used in annual merit reviews, and are often weighted to give more emphasis to scholarship and teaching than to service.)
In contrast to some professions, not much of the evaluation can be quantified. We can count the number of peer-reviewed articles and books published, the number of citations of the work by other scholars (a great contribution of Google and other internet-based tools to academia), the mean scores on student evaluation forms, the number of invited talks given, the number of editorial boards and other professional committees served on, and the number of theses and dissertations supervised.
But the underlying question is always “What is the impact of the candidate’s work, and what is his/her trajectory?” With the focus on impact, it is clear why a completed manuscript is valued less than one accepted for publication, which in turn is valued less than one in print that is being reviewed and cited. Further, large impacts come from choosing big, important issues to work on and pushing the frontiers of the field on those issues.
The impact of teaching is equally difficult to judge. Student evaluations are used, but large, required classes tend to generate lower scores than small, elective courses. Student responses about whether the instructor was prepared for class and open to meeting afterward are helpful. In addition, departments in which senior faculty visit the candidate’s classes and report on the interaction add value to the teaching assessment. But the lasting impacts of teaching on students are not commonly measured; we need to get better at this.
The impact assessment raises the counterfactual question of how diminished we would have been without the contributions. The trajectory assessment raises the question of how the future will extend the past. Both involve subjective judgments. Hence, the routine is to seek judgment from many knowledgeable people, ideally as independent of one another as possible, but applying consistent standards.
The departmental colleagues of the candidate read the work and make judgments. They often share sophisticated knowledge of the relevant areas of work. However, biases can creep in at this level because the personalities of candidates or within-field intellectual disputes sometimes inappropriately color the judgments.
To temper this, multiple reviewers outside of Georgetown are also asked to read the work of the candidate. Ideally, these are not friends, collaborators, former students, or mentors of the candidate. These letter-writers are identified to the Georgetown reviewers but kept confidential outside the group. (A breach of confidentiality ruins this process and must be taken very seriously.) Desirable attributes of the reviewers are that they have unambiguous credentials of academic excellence, are broadly read in the field, and are current with the latest developments in the field. We are less interested in the opinions of the less successful.
The most useful external letters are from those who carefully read the work, critically comment on its strengths and weaknesses, assess its impact on the field, and compare the candidate’s work to that of others with comparable experience. Superficial letters, whether negative or positive, tend to be downweighted.
The department/program writes a critical report on the scholarly product, teaching performance, and service contributions. Unit colleagues of appropriate rank vote. Beyond reporting the outcome of the vote, the real value of the report lies in its critical content. A report that offers only superficial approval of the performance is quickly downweighted by those who read it later in the process.
As the evaluation proceeds from the department/program to the school to the university, the reviews are designed to smooth out any unevenness in standards across units. Faculty and senior administrators whose job it is to review candidates from many units are asked to weigh in. All involved need to weigh the relative contributions of scholarship, teaching, and service. It is fair to say that not all successful candidates are equally accomplished in all three areas.
Ideally, feedback to the candidate on the results of the review provides more than just its outcome. Within the constraints of reviewer confidentiality, candidates deserve some indication of the strengths and weaknesses identified in the review process.
In sum, the process is designed to surface a diversity of judgments. However, by its nature it doesn’t provide a crisp answer to a fundamental question candidates ask: “What exactly do I have to accomplish to be successful?” For them, it is useful to compare themselves to others who have recently crossed the threshold at comparable universities in the same field. It is useful to get multiple opinions from senior colleagues inside and outside the department/program. It is useful to nurture a relationship with a knowledgeable, wise, constructive critic.
But the process doesn’t lend itself to checklists that, once completed, assure success. Hence, however challenging, the thousands of hours of independent review seem necessary. When all the faculty and administrators involved take the process seriously, it is key to building a stronger university.