
What Do We Mean by “Skills?”

Posted on

There is much discussion these days, especially in liberal arts institutions, about what kind of knowledge should be the focus of undergraduate and graduate education.

In many of these discussions, the word “skills” arises, and its use is causing some misunderstandings. Some of us devoted to liberal education hold that the traditional liberal arts are knowledge domains resulting from “disinterested inquiry,” in contrast to those devoted to a given profession or vocation. Indeed, “skills” is often used to describe the knowledge required for activities within a job. From this follows a common insistence that an undergraduate degree from a liberal arts institution should explicitly not be occupationally targeted, that job skills are not the obligation of such an undergraduate curriculum.

If one looks for a formal definition of a skill, one finds: “An ability and capacity acquired through deliberate, systematic, and sustained effort to smoothly and adaptively carry out complex activities or job functions involving ideas (cognitive skills), things (technical skills), and/or people (interpersonal skills).” This indeed sounds rather task-oriented.

But further definitional commentary dissects skills into hard skills, labor skills, life skills, people skills, social skills, and soft skills. These expand the notion of skills far beyond narrow task-oriented knowledge suitable for a given job. Some of these subcategories of skills fit the kind of capacities that those in liberal education have espoused for some time: creativity, resiliency, critical thinking, decision-making under uncertainty, leadership, dealing with conflict, empathy, taking the “other’s” perspective, inter-cultural competence, reflection, and discernment.

Surveys of employers and business executives repeatedly find that they highly value attributes like strong work habits, self-discipline, and computer skills. This is to be expected. But they also highly value critical thinking, communication, problem-solving abilities, and cultural and global awareness. These latter attributes are often identified as desired outcomes of a liberal education. Yet we inside universities rarely label them as “skills,” while some outside academia routinely think of them as skills.

So, in some sense, the word “skills” is being used in multiple ways and causing communication problems. I find that many colleagues who take pride in their “disinterested inquiry,” searching for insight and truth, also value that such inquiry builds the ability to think critically, and to communicate in words and speech. They value that such inquiry is global in its reach and requires cultural understanding. They value that a derivative attribute of the experience is empathy towards those quite different from oneself. But often they would not label these as skills.

So, as we all actively contemplate the future of universities and how they can contribute to humankind, I wonder whether we should be more careful to avoid too glibly using the term “skills,” either to denigrate or support some new educational activity. Having a discussion at a finer conceptual level (e.g., empathy, global awareness) might be a wiser course. It’s not that liberal education is antithetical to many personal attributes valued by others as “skills.” It’s that many in academia don’t think of them as skills.

Method and Substance

Posted on

The traditional organization of universities honors specific substantive foci – e.g., biology, psychology, mathematics. The units bring with them a set of questions about different phenomena of interest. What is life and how do living organisms function? How does the human mind function and how does it affect behavior? What is the underlying logic of shape, quantity, and arrangement?

The disciplines and fields also, however, bring with them a set of methods of inquiry or approaches to scholarship. In many fields these are well-defined practices, which are prescribed by the discipline and sanctioned as legitimate ways to provide evidence for conclusions or arguments forwarded in the research.

Inside many of the social sciences, one finds a thriving mix of methods. It is common in such fields to have formal courses in “research methods,” to introduce the fledgling student to alternative approaches to knowledge acquisition. In some fields, the student would be exposed to collecting data from existing administrative or archival sources, to participant observation or ethnographic techniques, to forms of unstructured interviewing of persons, to randomized experiments with human subjects, to quantitative survey research, and to statistical analysis of existing quantitative data. Fields that use multiple methods sometimes sort themselves into internal tribes, each of which touts the superiority of one method to discover truth and disparages the others.

As a provost, one is struck by the use of similar methods across disciplines studying very different phenomena. For example, behavioral economists use experimental laboratory methods with most of the features of psychologists’ methods in their laboratories. Organizational analysts sometimes use intensive observation and case study techniques that are common to anthropologists. Scholars in cultural studies in foreign language departments and English departments use techniques common to those in sociology departments. Some psychologists use fMRI measurement in ways not dissimilar to those of neurologists. Some faculty in linguistics use research approaches similar to those in computer science. Statistical analysis of quantitative data is common not only in statistics but in political science, economics, sociology, psychology, public policy, business, etc.

This commonality of methods across fields is interesting at the university level, for three reasons. First, it produces a set of courses that cover similar content across different programs. For example, there are statistics courses spread throughout scores of departments. They differ in the mix of theory and application. They also tend to utilize data examples from the fields in which they are taught (e.g., in environmental studies, data on fish; in economics, data on businesses). Similarly, the design of experiments or surveys can be taught in a variety of programs. Textual analysis is spread throughout many departments.

Second, it produces a set of faculty who share interests in advancing research methods but find themselves in different units. Good things can happen when such faculty get together. For example, at Georgetown there is a group of quantitative scientists from throughout the university, GQUADS, that convenes regularly to discuss new methods in statistics, computational science, and related issues. When methods-oriented faculty team-teach across fields, wonderful things can happen for students and faculty.

The third reason is more of a thought experiment: what would happen if a university organized itself around units that shared research methods rather than units that shared a set of substantive foci? If we organized that way, would we cumulate more insight into, for example, how economics and psychology might combine to explain human behavior? Would we end up in even more conflict between theory and application than with our current disciplines? Would methodological developments themselves advance at a greater pace? Would we develop better measurement and observational tools faster?

The Minimum Size of Academic Certification

Posted on

In the last 6 years, we’ve experienced a set of phases involving online learning. In August of 2012, there was widespread speculation that “brick and mortar” residential universities were headed for replacement by the free internet-based online learning platforms. The higher education community has learned much since the hype phase of online learning.

The level of self-discipline required in multi-course online degrees appears to be much better suited to master’s level education than to bachelor’s level. Online Master’s students are often working full time, and they seek a degree in their off-hours. The motivation for the pursuit is often the hope of advancement in one’s career or the retooling to another field. They tend to be older and more mature than many bachelor’s students.

At the same time, online learning of a different sort, short, narrowly focused educational modules (e.g., Khan Academy), has much larger appeal. Further, there is some evidence that employers value certification from shorter learning experiences. For example, in computing fields such certification is used by employers as a qualifier for technical positions (e.g., Cisco Certified Network Associate, Microsoft Certified Systems Engineer).

Further, MIT in the last few years has offered a “MicroMasters,” a series of five to six courses using the edX online platform. If students successfully pass a proctored final examination for each course, they are awarded a MicroMasters credential. The course sequences are designed to offer a short but integrated graduate-level treatment of a larger area (e.g., statistics and data science, supply chain management). Successful completion of the MicroMasters gives preferred entry into a related MIT master’s degree.

Educationalists cite this feature of a MicroMasters followed by a master’s as an example of “stackable” course sequences. It evokes a future of higher education where coursework might extend over a long period of time, with sequences taken in spurts to yield “nano” certifications that are eventually combined into a larger degree program. Obviously, one issue with such a future is whether learning in such a staccato way has educational value equal to that of the same courses taken in a more compressed time frame.

As evidenced by these developments, we are seeing a rethinking of what minimum level of learning is required to be valued by students and their employers. We should admit that this is not completely new. Master’s degrees in many fields were once two-year experiences but are now increasingly one-year curricula. Many undergraduates are completing their bachelor’s degrees in less than four years.

We should expect continuous reexamination of the appropriate volume of course work that merits an academic certification. I suspect that there will be demand for shorter and shorter learning experiences to qualify for traditional degrees (i.e., bachelor’s, master’s). Whether shorter course experiences will be valued, I suspect, will depend on their educational design.

The sustainable new short curricula are likely to be designs that a) provide a truly integrated experience for the certification, each course building upon the prior, b) have sufficient depth that significant knowledge advances result, and c) provide the student with flexibilities for future educational choices. Drifting into packaging existing subsets of courses into new credentials without seeking those three attributes might not serve well the students we wish to educate.

The Meaning of Grades

Posted on

I taught a first-year student seminar this term and finished assigning final grades. In examining the varying performance of the students on the weekly writing tasks and the final semester project, my thoughts turned to the recent faculty intellectual life report. Among a large set of good recommendations, the report once again raised the issue of grade compression. Three years ago, I wrote about this, but the issues remain.

When I first arrived at Georgetown in 2012, a set of faculty made sure I was aware of their concerns. One dean noted that large portions of each class were achieving the “Latin honors” of cum laude, magna cum laude, and summa cum laude status at graduation, diluting the honorific meaning of those appellations. He didn’t know what to say when a parent expressed pride in a child’s GPA of 3.0, when the dean himself knew that GPA placed the student in the lower percentiles of the class.

By that time, the McDonough School of Business had already decided to enforce some spread of grades across its classes. The result was that the mean grade for business majors was lower than the mean grade of their peers in other Georgetown schools, and business graduates were disproportionately failing to achieve Latin honors.

So, we agreed to fix the cum laude and above designations, to have them based on percentages of the class, not fixed GPA’s. For example, cum laude is awarded to the top 25% of the graduating class within each school, a GPA that last year ranged from 3.66 to 3.81; summa cum laude, to the top 5% of the class, a GPA from 3.88 to 3.95, quite close to the maximum of 4.00. The GPA targets are updated each year to reflect changes in the percentiles.
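As a minimal sketch of the percentile-based approach described above, the annual cutoff computation might look like the following. The fractions (top 25% for cum laude, top 5% for summa cum laude) come from the post; the function name and sample GPAs are invented for illustration.

```python
# Hypothetical sketch: deriving Latin-honors GPA cutoffs from class-rank
# percentiles rather than fixed GPA thresholds. The sample GPAs below are
# invented; a real class would have thousands of records.

def honors_cutoff(gpas, top_fraction):
    """Return the lowest GPA that still falls within the top fraction."""
    ranked = sorted(gpas, reverse=True)          # best GPA first
    k = max(1, int(len(ranked) * top_fraction))  # size of the honored group
    return ranked[k - 1]                         # last GPA inside the group

class_gpas = [4.0, 3.9, 3.8, 3.7, 3.6, 3.5, 3.4, 3.2, 3.0, 2.8]
cum_laude_cutoff = honors_cutoff(class_gpas, 0.25)  # top 25% of the class
summa_cutoff = honors_cutoff(class_gpas, 0.05)      # top 5% of the class
```

Because the cutoffs are recomputed from each graduating class’s GPA distribution, they drift from year to year, which matches the ranges the post reports.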

The other changes we made were ones that increased transparency. First, on the internal transcripts that students see each term, we post both their individual grade and the mean grade in the course. This is to convey that a B (3.0) in a course with an A- average (3.67) might have a different meaning than a B in a course with a C+ average (2.33). Second, we asked the registrar to give each department chair/unit head the distribution of grades in every class in their unit, in hopes that more transparency would generate faculty discussions about grading standards.
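The pairing of a grade with its course mean can be sketched as follows; the letter-to-point values for B, A-, and C+ come from the post, the rest of the mapping and the function name are my own illustrative assumptions.

```python
# Hypothetical sketch of the transcript transparency change: show a
# student's grade side by side with the course mean, since a B means
# something different in a course averaging A- than in one averaging C+.
GRADE_POINTS = {"A": 4.0, "A-": 3.67, "B+": 3.33, "B": 3.0,
                "B-": 2.67, "C+": 2.33, "C": 2.0, "D": 1.0, "F": 0.0}

def grade_in_context(student_grade, course_grades):
    """Return (student's grade points, course mean) for side-by-side display."""
    points = GRADE_POINTS[student_grade]
    mean = sum(GRADE_POINTS[g] for g in course_grades) / len(course_grades)
    return points, round(mean, 2)

# The same B reads differently depending on whether the course mean sits
# near A- or near C+.
pair = grade_in_context("B", ["A", "A-", "B+", "B"])
```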

Over the subsequent years, it has become clear that faculty do not agree on the meaning of grades. Some hold strongly to the meaning prescribed in the student bulletin: a D is a minimum passing grade; a C is adequate performance; a B is good performance; and an A is excellent performance. Further, they interpret these evaluations as relative to the students in the current class. Such an interpretation implies that, unless there is rare perfect homogeneity in performance, there should be some variation in grades.

Other faculty assert that they specify a set of learning goals for their class, with a corresponding set of assessment tools. If all the students pass that threshold, they should all receive an A, in their opinion. (There does not seem to be much discussion of raising the learning goals in an attempt to stretch the students.)

There are many other related sentiments. A common one is that small seminar classes often demonstrate superior performance among students, and hence giving them all A’s makes sense; another, that students flock to courses known to give high grades; another, that we place our students at a disadvantage in graduate school admissions if we give lower mean grades than our peer institutions; another, that lower mean grades produce more harmful competition among students within classes; finally, faculty report that parents are increasingly vocal in supporting high grades for their children.

There are, however, equity problems in continuing grade compression. Departments that give lower mean grades produce majors who disproportionately fail to achieve Latin honors. Some majors receive lower grades in their own department than they do in other departments; other majors receive much higher grades inside their own department than outside it. When grading practices systematically vary across fields, the GPA yields little information about a student’s performance without knowing the courses taken.

As GPA’s continue their advance to their maximum of 4.0, they contain little discriminating information across students. Indeed, judgments about graduates will increasingly depend on other attributes.

“Scientific Facts”

Posted on

Currently, there are many signs of attitudinal gulfs among those with different levels of education. This is a post concerning disagreements about the value of science, as an enterprise that contributes to the common good. There appear to be three features of science that contribute to the balkanization of support.

First, much of the priority-setting for science funding, the evaluation of newly proposed work, and the assessment of the value of research products depends on the judgments of those in the same field. This notion of “peer review” is a feature of almost all research endeavors but is most prominent in science. Critiques of this process often label peer review as “cronyism”: friends and associates merely supporting one another. “You support my research, and I’ll support your research.” The obvious fear is that funded research fails to advance the common good in the most efficient way. Instead of a meritocracy, the evaluation process becomes an elite friends’ network of self-aggrandizement. The likelihood of support is determined not by the importance of the proposed research but by the connections of those proposing it to those reviewing it.

Protections against such cronyism include recusal rules for reviewers, which exclude collaborators, mentors/mentees, and colleagues from the same university. They include transparency of all grants awarded by funding agencies, with descriptions of the work proposed. Increasingly, they include linkage between grants awarded and the research products of the grant.

Second, the ever-changing knowledge set produced by scientific progress confuses the uninitiated. Science constantly creates new hypotheses. Some are supported, and the findings add to currently accepted knowledge. However, few scientists expect that all parts of the currently accepted knowledge will be invariant in the future. Science progresses. What appeared to be true in one era is refined and changed with discoveries in a later era. For those who seek invariant truth, such change can be misunderstood as poor performance, that nothing is believable out of science because its “truth” is dynamic.

Third is the fact that science, like all academic fields, is in constant deliberation. There are always controversies. Different theories explaining target phenomena appear to be attractive to different subgroups. Different approaches to questions are supported by subgroups. Debates are common. Most scientists would say debates are necessary. Opposing viewpoints, orally presented in conferences, printed in journals and books, are the necessary fuel to progress. The debates help identify future research directions and clarify puzzles, all to the benefit of seeking a better approximation to the truth.

From the outside, not knowing these last two features of research, it’s easy to attack research by noting that what the fields are claiming are “findings” are constantly changing. You can’t believe anything they say because in a few months they’ll say something different. You just can’t count on them, so ignore them. Further, when popular media describe the internal debate over theories, methods, and findings, they often do so through the lens of a two-party debate. “They don’t agree within the group. The findings are ‘controversial.’” “There is a lack of consensus even among themselves. There are many who don’t accept the conclusions.”

These three features of science lead one to hypothesize that part of the large educational difference in support of science is due to a failure of science and researchers to describe their work and its culture. How can scientists communicate that current findings will be subject to the same refinements and changes as prior findings? How can scientists help the media understand that controversies are the engine for advancement in science? Widening the support for science requires that we tackle these questions with the same vigor we use in our research.

Costs and Quality of Higher Education

Posted on

A common attribute of service sector organizations is that their costs of operation have increased at higher rates than those of other sectors. Sectors of the economy that have used electronic or mechanical processes to assist human labor have shown larger productivity gains (output per labor hour). For example, manufacturing firms, using automation, have increased their production per employee in significant ways. Higher productivity often leads to cost benefits for the resulting products.

In contrast, for example, psychotherapy delivered by clinical therapists shows lower productivity gains. It’s difficult to imagine how a single therapist could greatly increase the number of patients served without a loss of quality. Hence, cost inflation is larger for such activities than for those not solely dependent on human labor.

Using traditional categories, higher education falls into the service sector of an economy – it provides educational services to students. In that sense, universities share many of the attributes of other service sector providers. Having a faculty member teach ten times as many students in a single class will increase productivity, if only measured by number of students taught, but the service provided is generally believed to be of lower quality.

Part of the process of explaining the cost inflation of higher education is, therefore, inevitably intertwined with arguments about what constitutes quality aspects of education. This question has led to discussions of what are the desirable outcomes of education.

As we crawled slowly out of the Great Recession, great attention was paid to the income impacts of education. We now have highly replicated results that the value of a bachelor’s degree is over $1 million in increased lifetime earnings, relative to a high school diploma. Further, even if you factor in the foregone earnings during the four years of a bachelor’s curriculum and the cost of tuition, the economic gains of higher education remain clear. Higher education pays off in income gains.
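A rough back-of-envelope version of that claim can be written down explicitly. Every dollar figure below is an illustrative round-number assumption of mine, not a number from the studies the post refers to.

```python
# Illustrative arithmetic only: checking that a lifetime earnings premium
# above $1 million can survive subtracting tuition and four years of
# foregone wages. All figures are assumed, not sourced.
premium_per_year = 25_000       # assumed annual earnings gap vs. high school
working_years = 45              # assumed career length after graduation
tuition_total = 200_000         # assumed four-year tuition bill
foregone_wages = 4 * 35_000     # assumed wages missed while studying

gross_premium = premium_per_year * working_years
net_premium = gross_premium - tuition_total - foregone_wages
```

Even after these subtractions the net premium stays well into six figures, which is the shape of the argument the post makes; a more careful analysis would also discount future earnings to present value.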

If income were the sole outcome of higher education that was relevant, we could easily compare quality adjusted productivity across different curricula. But we believe, especially at Georgetown, that empathy, civic engagement, commitment to social justice, creative thinking, leadership, resilience, self-teaching ability, etc., are also outcomes to be valued. Not all of these are correlated with income of first job.

However, the measurement of these attributes is not easy. Behavioral, observable indicators are available only for a few. Most are internalized attributes that are usually (but imperfectly) measured only by self-report.

For those of us who work in highly selective universities, there is another concern. Entering our universities are highly accomplished young people with superior cognitive abilities, many of whom sought out rigorous educational curricula and excelled. That they then achieve great success after graduation raises a question: what evidence do we have that what the students experience in our university is a cause of their later success? How do we know our institutions are significantly increasing the chances of their success? Would they have achieved the same wonderful outcomes without us?

To explain costs of higher education, we must understand and provide evidence for the value of education. To explain the value of education, we need more serious attention to measuring the outcomes of education that we value.

Identifying Mechanisms Producing Liberal Arts Educational Outcomes

Posted on

I’ve participated in multiple discussions over the last few days, all of which are relevant to an important issue facing the country. The question of interest is the effect of a liberal arts education on valued life outcomes.

It’s first important to note that the term “liberal arts education” is not uniformly understood. From one perspective, the term is a property of an institution: some are liberal arts institutions; others are not. From another perspective, the term is a function of the individual experiences of students. Certainly, the ambiguity of the phrase “liberal arts education” is problematic for public arguments promoting its design. We have all been in conversations that equate the term “liberal arts” only with the humanities, missing its support of multi-disciplinary educational experiences. I have even heard the analog phrase, “liberal education,” misinterpreted as describing a political orientation of the education rather than its broad, multi-disciplinary curriculum. As academics describe the role of a liberal arts education, we need to acknowledge the common misinterpretation of the phrase by many outside the university.

Further complicating the discussion is that we’re not clear about what components of a liberal arts education are key to its outcomes. The stereotypical image is that of a small college, with a residential undergraduate population, small classes, a curriculum that forces exposures to multiple disciplines, serious attention to teaching among the faculty, and a rich set of extracurricular activities.

The stereotype directs attention to the institution, not to the experience of the student. A student-experience perspective, in contrast, implies that different students at the same institution may experience different dosages of the features of a liberal arts education. It also implies that the same major across different institutions may involve different educational experiences (e.g., a philosophy major at MIT vs. at Swarthmore, or an engineering major at Swarthmore vs. at MIT).

Separately, I was reminded of the Gallup/Purdue survey of predictors of engagement in one’s career and well-being, using self-reported undergraduate experiences. It finds six strong predictors of these life outcomes:
1. I had a professor who made me excited about learning;
2. My professors cared about me as a person;
3. I had a mentor who encouraged me to pursue my goals;
4. I worked on a project that took a semester or more to complete;
5. I had an internship or job that allowed me to apply what I was learning;
6. I was extremely active in extracurricular activities and organizations.

Some of these attributes, especially those involving connections between students and faculty, require a faculty interested in teaching and interaction with undergraduates. These are hoped-for attributes of a liberal arts education. The attribute of extracurricular activities is more common in residential institutions than commuter institutions. So, some of these indicators might be natural features of many liberal education experiences.

Of course, the Gallup work is not singularly focused on identifying the effects of an undergraduate liberal education versus those of other educational designs. As we attempt to understand more about what features of a traditional liberal education produce its valuable outcomes, it seems attractive to identify, from the student perspective, what experiences are key to those outcomes. This leaves open the possibility that the “dosage” of those experiences will vary over students in the same institution. Further, it will help identify which features of liberal education deserve more investment by liberal arts institutions and which might be adopted by other institutions, to the benefit of students.

What is New; What is True

Posted on

While there are a variety of cultures across the disciplines, departments, schools, and fields of a university, there are also commonalities. The commonalities are most vivid in the scholarship or research activities of the diverse fields. It is true that there are very diverse methods and styles of scholarship. One field may pride itself on the work of scholars working by themselves; others, on the work of multi-person teams. One field may create book-length products of its scholarship; others produce smaller bites of work disseminated through journal articles.

The commonality in scholarship and research exists in the privileging of novelty and creativity. Fields are constantly innovating, continuously attempting to expand their bases. New ideas, new approaches, new interpretations are valued. It is through such work that fields advance. They build upon their foundations. They enlarge their influence. Doing the same thing as prior research is devalued as repetitive or uninteresting. (I’ve written earlier about a weakness of this culture, but here I want to praise it.)

PhD students are mentored to choose an unexplored area or select an unsolved problem. New assistant professors are encouraged to forge a clear new identity, to build a distinctive theme in their scholarship to succeed. Innovation is the name of the game.

While each discipline values innovation, how do they determine which innovations are of lasting value? What is both new and true? All fields rely on some sort of peer review. That is, others in the same field judge whether an innovative product is a valued new contribution.

Some fields have rather strong paradigms, consisting of principles and time-tested findings. In them, a novel result that solves a knotty puzzle within the paradigm, but is consistent with the body of principles, can be rather quickly accepted. A piece of work whose novelty violates some of the well-accepted principles, on the other hand, is often greeted with intense skepticism. In that sense, the peer review criteria rest on the large base of prior research results, which are the foundation of the paradigm. The new work is evaluated using the old as a lens.

In fields with much weaker paradigms, or fields that are collections of diverse approaches and foci, peer review values new interpretations and new approaches. Such fields value critiques of past work but demand evidence. Radical new approaches require larger evidentiary bases to be accepted. Glowing reviews of books, awards for books, and later publications that build upon an approach taken in a book are signals of acceptance of an innovation. The author is sought out by others for commentary in their field of expertise.

The more radical the innovation, the longer the process of acceptance might take. Such fields use the dialectic of argument as a tool for innovation. When counter-arguments to an innovation cease or are judged ineffective, the new creation is on its way to incorporation into accepted knowledge.

One of the greatest values of the thirst for innovation within academic disciplines is that erroneous findings or conclusions of little general value are effectively dispelled. The continuous effort to extend knowledge has the great value of purging that which does not stand the test of peer review.


Posted on

As I approached the archway of the ICC building yesterday, I noticed, along with other announcements of events, an 8.5 x 10 inch piece of paper taped to the brick wall.

The authoring group or persons were unidentified. Instead of announcing an upcoming lecture or a performing arts event, the sheet contained a numbered list of items. Its format was odd enough that it caught my attention.

The title conveyed that this was a list of features of Georgetown for which the anonymous authors were thankful. It was a list of services on campus, of people who provide care to students and faculty, and of programs that are part of the Georgetown community. Although the list was numbered, I didn’t really perceive a priority implication of the order. Rather, it seemed more like the result of quiet reflection about one’s life on campus.

I must admit the uniqueness of the announcement and the mystery of the who, how, and why of the list captured my attention. I stopped my usual rush to the office in an attempt to understand its origin and purpose. I’ve since given up, but I greatly admire the idea of the list and its effectiveness in literally stopping me in my tracks.

I do know that I owe the author(s) an appreciation. In attempting to unravel the mystery, I’ve realized that I too have a list of those attributes of Georgetown I appreciate.

First to come to mind are the members of several faculty and student groups that give the provost office input on new initiatives and ways to improve the university. All of these are volunteers, each with their own duties and stresses in their current roles. They freely give us advice despite limits on their time. Their very willingness to help us in this way is testament to their devotion to the institution. Instead of merely seeking their own success, they want to produce a better community.

Second to come to mind was a recent event. Walking across campus yesterday, I saw the grounds crew leaning over the flower beds, digging up the dying fall flowers and planting the bulbs that will become the spring flowers. They were each bent at the waist, tilling the soil and setting the bulbs into the ground. I could imagine my own backache after hours of such work. It also reminded me of how proud I am to see the bright flowers at the front gates on a sunny spring morning. I should have thanked them as those thoughts quickly ran through my mind.

Finally, as the students emptied out of the campus over the course of this Thanksgiving week, the campus became quiet. Today, the Wednesday before Thanksgiving, there were few faculty around and almost no students, especially in the afternoon. Those remaining through the Wednesday hours were disproportionately administrative staff, committed to finishing their work regardless of the class schedule. On many days they stay later than others; they’re often here when faculty and senior administrators are away. I appreciate their commitment to the institution and to the community of which they are such an important part.

Those are a few of the parts of Georgetown that make me grateful. Thanks to them (and thanks to the author(s) of the list for making me stop and pay attention!).

Peer Review and Quality Assessment in Research

Posted on

Those subject to peer review of their proposals for research funding or of their scholarly products view it as a significant hurdle to succeeding in their careers. To those outside academia, peer review can be misinterpreted as cronyism that illegitimately rewards friends and allies. That interpretation is far from my experience.

The peer review process is founded on the belief that those at the cutting edge of their fields are best suited to judging the value of proposed new work. The rate of success for many federal government research grant proposals is less than 15%. Having served on scores of such review panels, I have vivid memories of the care and critical review that is exercised in the evaluative deliberations. With the success rate so low, the critical review is fierce in most panels.

For all fields, the same peer review process is used to judge the value of completed scholarly products. In fields that produce books, the prestige of a publisher is related to the rigor of its review process. Editors jealously compete to attract the best work of the best scholars. Editorial boards advise the publisher on the value of a given series. The author of a mediocre manuscript submits to a long sequence of presses before receiving an affirmative decision to publish, if one ever comes.

For those fields whose scholarship is disseminated through journals, peer review rigor is often reflected in the success rate of submissions, which for some journals hovers in single digit percentages. The decision of a journal is the result of critical reviews by peers in the area the article addresses. The competition is fierce.

So, in sum, most of the attributes of peer review act to reward the very best in scholarship. But there are weaknesses.

I have vivid memories of a scientist friend of mine, now one of the most highly cited in his field, in his early years of work. He was attacking the dominant paradigm in his field and having repeated difficulty getting his work published. The rejections brought criticism, calls for more evidence, and resistance to his approach. He was forced to publish his work in less impactful, second-tier journals. All was not lost, as the value of his work was eventually recognized, albeit much more slowly than, in retrospect, it deserved.

Peer review is effective in evaluating the marginal contribution of work fully within the accepted framework of a subfield. It performs less well for work that disrupts the status quo. Its conservatism in evaluating field-changing results is a weakness from one perspective but a strength from another, since it demands a higher level of evidence for challenges to decades of accepted findings.

As the number of scholarly outlets grows with web-based journals and other electronic publications, the opportunities to get one’s work disseminated have increased. One hopes that path-breaking work now has a better chance of seeing the light of day. In any case, the honest criticism inherent in the peer review process remains a strength of the academic enterprise.
