Some months ago, a post described a review of Academic Analytics data for Georgetown faculty.
After collegially reviewing data and methods with Academic Analytics, we concluded that most of the discrepancies between information collected from faculty CVs and Academic Analytics’ data were the product of the type of work covered and collected by Academic Analytics, timing issues relating to faculty hiring, and date ranges associated with certain Academic Analytics data. It is clear to me that Academic Analytics does a good job of collecting and making available the data that it purports to collect, and that the data are largely accurate. For that reason, Academic Analytics can be a useful tool for universities, and it has the potential to become even more useful as it expands the nature and types of data it collects.

Importantly, the results reported in my prior blog represented Georgetown’s attempt to completely replicate scholarly performance data as they appear on faculty members’ CVs. Academic Analytics, of course, neither claims nor seeks to address every data element that appears in faculty CVs; instead, it aims to build a comparative scholarly performance matrix across all research universities. That difference in aim and purpose accounts for many of the discrepancies reported in my prior blog. We continue to believe that Academic Analytics’ data are valuable and important.
Very interesting.
A very good article. Well written!
I have to admit that I am a bit puzzled by this second post about Academic Analytics, and I continue to have strong reservations about the adoption of Academic Analytics to evaluate the productivity of individual faculty members or departments.
Unless Academic Analytics is able to capture the entirety of faculty members’ published research on a global scale and across disciplines, I think its data collection cannot be considered accurate and reliable. What we read in last October’s entry, namely that “the practice of evaluating the impact of work is highly variable across fields,” has to be kept in mind, all the more so in the humanities. Many reputable journals in the humanities are not on its radar because, for instance, they have limited distribution and a negligible presence on the web and in databases that were originally conceived to collect information about research in the hard sciences.
“Are you publishing in the right journals?” we are asked on the Academic Analytics web page. Yet who decides which journals are the right ones? The right journals for whom? For what? How can innovation and a plurality of approaches (in any discipline) find adequate venues if we are invited (mandated?) to publish mainly or only in journals that have been selected by rigid and at times non-transparent criteria, but that are not necessarily interested in our topics or methodologies?
Likewise, research funds from federal agencies, which seem to be another crucial variable for Academic Analytics, are very rarely available in the humanities, while other grants that humanities scholars may be awarded go unnoticed.
Furthermore, the fact that the faculty member or department under scrutiny cannot access its own records is deeply disturbing, to say the least.
On March 22, 2016, the American Association of University Professors issued a rather outspoken statement urging caution toward Academic Analytics, spelling out several reasons which, in my humble opinion, remain valid despite the alleged expansion of the nature and types of data it purports to collect.
The text can be found at the following link:
https://www.aaup.org/news/statement-urges-caution-toward-academic-analytics#.WX_jwdPyvVo
I hope it can foster some more dialogue about the meaning and implications of “benchmarking academic excellence.”