Documenting the Scholarly Product of Academics

With the rise of the Internet and digital records of publications, comparisons of the quality of universities increasingly rely on statistics drawn from this documentation (e.g., the Times Higher Education university rankings). Many academic fields compare the output of scholars using counts of citations to their work (through h-indexes and other statistics). Journals are routinely compared on their impact partly through such citation evidence. Some academic fields have sorted their journals into tiers of “quality” based on these numbers. Platforms like Google Scholar and ResearchGate are building repositories documenting the work of scholars.
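
For readers unfamiliar with the metric, the h-index is the largest number h such that a scholar has at least h papers cited at least h times each. A minimal illustrative sketch in Python, using made-up citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers with these citation counts yield an h-index of 3.
print(h_index([10, 8, 5, 3, 0]))  # -> 3
```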

At the same time, we have learned that the practice of evaluating the impact of work is highly variable across fields. Some fields have a scholarly culture in which citations of related work are mandatory for the credibility of the research product. Other fields have no culture supporting citations and view them as irrelevant to the quality of the scholarship in question.

Some fields use the journal article as the basic unit of research output; others use books and monographs as the basic unit. Some fields are built around individual scholarship in which a book-length product might take 6 or more years to produce. Others are built around teams of collaborating researchers in which articles might be produced at the rate of 5-10 per year from each team.

Finally, few would claim that counts of articles, books, and citations fully capture the contribution of a scholar’s work to human knowledge. There are too many examples of a single product of a scholar proving transformative, sometimes decades after its inception.

While a one-size-fits-all evaluation tool is not appropriate, having tools to compare how universities vary on some standard statistics might be useful. Such was the motivation for Academic Analytics, a faculty-inspired database that counts articles, books, citations, grants, and professional society awards. The data are available at the count level for many universities across the country, with the ability to compare like departments/units across universities.

Some faculty, given the comments above, see little value in comparing counts of articles, citations, and other outputs. Other faculty could find such comparisons a useful auxiliary piece of evaluative information, if the counts were accurate.

Over the summer, the Provost’s Office examined the correspondence between what faculty listed on their curricula vitae (CVs) and their Academic Analytics records. The categories of scholarly activity analyzed were academic papers, external grants, scholarly books, and academic awards. The project was conducted in two phases.

In the first phase, we examined the CVs of all current faculty members from the McCourt School who were represented in Academic Analytics (AA), a total of 21 faculty. Two research assistants independently compared the data from AA with each faculty member’s CV and then cross-checked each other’s work for quality assurance. Each scholarly item included in our analysis was marked as (1) in AA and on the CV, (2) in AA but not on the CV, or (3) not in AA but on the CV; a simple sketch of this classification follows the list below. During this phase, we settled on a few analysis parameters:

  1. We limited the analysis to the years for which AA has data (e.g., AA includes only academic papers published within the last five years).
  2. We included items we knew would not be in AA in order to gauge the under-representation of our faculty’s scholarly activities (e.g., AA does not collect information on foundation grants, but we included any such grants found on a CV in our dataset).
  3. For category 3 items (on the CV but not in AA), we examined the granular data to determine whether any obvious pattern explained the missing records.
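
Purely as an illustration (the actual comparison was done by hand by the research assistants), here is a minimal sketch of the three-way classification and the resulting coverage rate, assuming each AA record and CV entry has been reduced to a comparable key such as a normalized title:

```python
# Illustrative sketch only; assumes AA records and CV entries have been
# reduced to comparable keys (e.g., normalized titles). The actual study
# compared records by hand.

def classify(aa_items: set, cv_items: set) -> dict:
    """Split items into the three categories used in the analysis."""
    return {
        "in_aa_and_on_cv": aa_items & cv_items,   # category 1
        "in_aa_not_on_cv": aa_items - cv_items,   # category 2
        "on_cv_not_in_aa": cv_items - aa_items,   # category 3
    }

def coverage(aa_items: set, cv_items: set) -> float:
    """Share of CV-listed items (within AA's covered years) that AA captured."""
    return len(aa_items & cv_items) / len(cv_items) if cv_items else 1.0
```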

For the McCourt School faculty, Academic Analytics captured scholarly books fairly accurately, missing 5%. Coverage was worse for academic papers: 35% of articles published and listed on CVs were not captured. Surprisingly, several articles in economics, a standard disciplinary area, were missed because they appeared in venues not covered by the database. About 30% of the grants obtained by McCourt faculty were missed, mostly because they were private foundation grants, which the database does not cover.

In the second phase of our project, we collected AA data for 350 randomly chosen faculty members in the College. We were able to find current CVs for 348 of these faculty and conducted the same analysis described above. In contrast to the McCourt finding, AA was less accurate in reflecting scholarly books published by faculty in the College, missing 19%. While the database does not cover books from foreign presses as fully as those from US presses, not all of the missing books were published by foreign presses.

Academic papers, including conference proceedings, were poorly represented in AA, with only 48% captured. More than a dozen departments had at least one instance of a missed paper. The two departments that fared least well were Computer Science and Psychology. Some of the missing products in Computer Science are conference proceedings. The case for Psychology is less clear, but it might be related to publication in sub-disciplinary venues.

As with the McCourt faculty, grants awarded by non-federal foundations or institutions were not captured by AA, which therefore significantly underrepresented faculty grant activity.

Finally, we discovered some articles in the Academic Analytics database that were assigned to a Georgetown faculty member but do not appear on that faculty member’s CV. We drilled into the case to understand the cause. It appears to be a person mismatch: the database attached to a Georgetown faculty member’s record a set of articles published by someone else with a very similar name.

In short, the quality of AA’s coverage of the scholarly products of the faculty studied is far from perfect. Even with perfect coverage, the data would have differential value across fields that vary in book versus article production and in their cultural support for citing others’ work. With inadequate coverage, it seems best for us to seek other ways of comparing Georgetown to other universities. For that reason, we will be dropping our subscription to Academic Analytics.

After this post, further analysis of the Academic Analytics data was performed in collaboration with the company; a brief summary of the results appears here.

10 thoughts on “Documenting the Scholarly Product of Academics”

  3. The Chronicle article on this issue takes the way GU did this as an example — which is very much where we want to be — exemplary.

  4. I’m sure you know Robert Pirsig’s Zen and The Art of Motorcycle Maintenance. And I’m sure you know what Pirsig would say about this. All the data gathering in the world will not get at the only value that matters: quality. The more we turn assessment of intellectual endeavor over to quantity, the bean-counters, the closer we are to rejecting the most significant humanist values in the academic community.

  5. Excellent work – not sure how an analytical program would view it.

    Can a computer algorithm or machine judge a work of art, a book, an innovative idea, or a research paper? We need peer review to judge and evaluate scholarship, and a machine is not our peer. Indirectly, we would be evaluated by the programmer who wrote the program that the computer uses. Scary.

  6. Very complicated but important topic. In reviewing journal articles the papers with like twenty authors get pretty complicated. Important to study to try to quantify but need to remember that not all that counts can be counted and not all that can be counted counts. Tough work.
