
Technology and Society


Discoveries in basic science decades ago have made available sophisticated technologies to billions of people around the world. As many people have noted, many of us carry around devices more powerful than all the electronic gear in the Saturn moon rocket capsule.

The technological changes applying those scientific discoveries were catalyzed by a culture of entrepreneurship. In this country, the most obvious examples lie in Silicon Valley. Failing fast, quick prototyping, agile focus on the market, seeking disruption – all of these catchphrases underlay an environment that nurtured unrivaled innovation.

Recent events, however, have revealed a weakness in the environment that created much of the technology that surrounds us daily. It was the hope of many innovators that they were building a world that democratized access to knowledge, reduced the cost of entry of economic enterprises, and linked people together in enriching ways. In essence, this was the key to flattening the earth, in Friedman’s terms. The dominance of the nation state as arbiter of culture and laws would diminish and be replaced by a ubiquitously available set of tools that permit the whole world to form mutually valuable social, political, and economic ties.

A cursory inspection of recent media reports about new technology, however, highlights a different view – cyberbullying; use of the Internet to support criminal activity; the very existence of something called “the dark web;” microphone recording of personal activities by Alexa; control over internet-based information flows in autocratic regimes; active disinformation attacks via bots controlled by another country; cyberwarfare unleashed. It isn’t clear how many of these outcomes were anticipated at the founding moments of these technologies.

Thus, many begin to question what these technologies are doing to the fabric of society. Do we know and interact with our neighbors less frequently? Does the perceived distance between us and others on the web encourage us to be less friendly, engage in harsher language, or disclose the selfish side of our natures? Are the generations coming into adulthood, having experienced the internet and mobile communication from birth, less interpersonally engaged, less capable of – or less in need of – deep, long-lasting relationships? Do predictive analytics privilege ties among homogeneous groups at the cost of ties to heterogeneous groups? Are we constantly connected but ironically less close to one another?

All of these questions have reminded humanity once again that actions can rarely, if ever, be value-free. One can evade explicit articulation of the underlying values of a platform design, but they will inevitably manifest themselves. Human designers make decisions. The values underlying these decisions can be made explicit or not. The values will either be made obvious at the design stage, or they will become obvious at the execution stage.

It seems clear that Georgetown has a special opportunity here. The intellectual resources of Georgetown and the mission of educating women and men for others give the institution a special obligation to act at this moment in history. We can help provide a conceptual framework that refocuses attention on the use of technology for the common good. We can develop ethical guidelines that could assist developers at the design stage in making explicit the underlying values of platform design. We can help articulate user agreements that clearly inform users of the costs and benefits of agreement. We can push the frontier of tools and algorithms that help protect privacy and make transparent the risks of privacy breaches. We can lead in the use of high-dimensional data to promote social justice. We can help answer developers' questions about how to design and use technology in the pursuit of a more just world.

Derivative Inquiry: Dangers Facing Fields that Use Data without Producing Data


An old joke describes a severely inebriated man who is found looking around, on his knees under a streetlight. A passer-by asks him what he is doing. The man says that he lost his keys and is looking for them. The passer-by asks if he’s sure he lost them there. The man says no, he thinks he lost them in the park nearby, but the light was so much better where he was looking.

The story fits some fields of academic inquiry in the sciences and social sciences. For example, much of the empirical work in macroeconomics relies on aggregate data produced by private enterprises and government statistical agencies. The field is somewhat removed from the production of those data. Hence, some of the literature focuses on the gap between theoretical concepts and the data available to reflect them. There were, however, few alternatives. Those data were the streetlight for the field.

In contrast, other sciences collect original data. They conceptualize what observations must be taken to test alternative ideas, how to mount the measurement, how to construct instruments to implement the measurements, and then how to process the data to address the research questions they pose. By collecting the observations directly, they learn the fallibilities of the measurements. By measuring features of the phenomena, they become more sophisticated about the mechanisms producing the phenomena.

Recent events have highlighted this distinction between science based on original measurement and science that starts with data produced by others. Implicit biases discovered in machine-learning based algorithms are hitting the popular press. The algorithms under scrutiny were based on data sources that were available at the times the algorithms were built. That was their streetlight. Unanticipated was poor performance for phenomena that were not part of the original data set. For example, the misidentification of persons of color in facial recognition seems to be related to the rarity of images of persons of color in the training data set for the algorithms.

Algorithms that guide loan risk or health risk assessment can fail if the data sets do not contain measures of all the attributes that affect risk. The best data come from deep understanding of the mechanisms that affect the likelihood of loan default or health conditions requiring medical interventions.

Some of the data sets used as the basis of the machine learning were very, very large in numbers of different units observed. But the total size of the data set is of little relevance if the characteristics of the data set do not match the real-world phenomena the algorithm will face.
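The point about size versus representativeness can be put in a toy sketch (a hypothetical illustration, not from the post): a rule fit to data in which one group is rare will fit the majority and fail the minority, no matter how many records the training set holds.

```python
# Toy sketch (hypothetical, not from the post): an algorithm fit to data in
# which one group is rare can fail badly on that group, no matter how large
# the training set is.
import random

random.seed(0)

def make_data(n, rare_fraction):
    """Simulate records of (group, feature, outcome); the feature-outcome
    relationship differs between the two groups."""
    data = []
    for _ in range(n):
        group = "rare" if random.random() < rare_fraction else "common"
        x = random.gauss(0.0, 1.0)
        # The outcome rule is reversed for the rare group.
        y = int(x > 0) if group == "common" else int(x <= 0)
        data.append((group, x, y))
    return data

def fit_global_rule(data):
    """Learn a single threshold rule, ignoring group structure entirely."""
    hits = sum(1 for _, x, y in data if int(x > 0) == y)
    if hits >= len(data) - hits:      # majority of records fit (x > 0 -> 1)
        return lambda x: int(x > 0)
    return lambda x: int(x <= 0)

def group_accuracy(model, data, group):
    subset = [(x, y) for g, x, y in data if g == group]
    return sum(model(x) == y for x, y in subset) / len(subset)

train = make_data(100_000, rare_fraction=0.02)  # huge, but unrepresentative
model = fit_global_rule(train)

test = make_data(10_000, rare_fraction=0.5)     # deployment population
print("common-group accuracy:", group_accuracy(model, test, "common"))
print("rare-group accuracy:  ", group_accuracy(model, test, "rare"))
```

The learned rule scores perfectly on the common group and near zero on the rare one; adding more training records with the same mix would not change the outcome, since the data, not the algorithm, is the limiting factor.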

Data scientists are increasingly realizing that building sophisticated algorithms on weak data is problematic. Faced with the choice between unsophisticated algorithms derived from rich data describing the mechanisms affecting some outcome of interest versus sophisticated algorithms based on weak data, they’re arguing that better data sets are a faster route to impact.

To build more useful data sets, we need data scientists’ attention to the measurement step producing the data, as well as the analytic step. Depending only on convenient, bright streetlights may fail us in locating the keys.

Leadership in a Networked World


Some decades ago, Rensis Likert studied a large number of work groups in an attempt to form a typology of leadership. Much of the work focused on communication and decision practices in different organizations. He identified four different “systems,” which exhibited the key variations across organizations. They form a progression. System 1 is the most hierarchical, with those in higher positions holding full responsibility for decisions, without input from subordinates. Management is focused entirely on work completion, sometimes through threatening behavior. System 2 lightens up on the threats and replaces them with rewards of various sorts. Communication from below is limited to information pleasing to the boss. System 3 leaders have greater trust toward subordinates, generating more information flow up, down, and sideways. System 4 is more fully participative; there is trust up and down the line. Subordinates feel free to deliver bad news. Successful teams abound. Members of more than one team end up having importance to organizational success.

Likert’s work was conducted in individual work organizations and suffers from having been done before the internet and globalization of work. Increasingly, organizations and individuals are linked together into networks. Some are short-lived and unidimensional. Some are long-lasting and heavily integrative. Some modern work organizations have members that interact virtually via electronic media and rarely are in the same room together. At the organizational level, complementary task assignments to allied organizations produce alliances for the production of products or services. Organizational cultures blend across networks.

This contrast makes one wonder what effective leadership will be in a fully networked world. In some sense, this is a side issue to what makes effective networks. In another sense, networks by their nature abhor centralized authority and hierarchies. The notion of the boss of a network seems oxymoronic.

What makes a node in a network of collaborators or work organizations judge that its network membership is fruitful? What are the attributes of persons who strengthen the network?

This all has relevance to the modern university. As research issues increasingly are informed by knowledge domains spanning the entire university, networks appear to be increasingly attractive as an organizing principle in contrast to hierarchies. What kinds of roles offer leadership possibilities in a network of research groups?

System 4 in Likert’s perspective highlighted the role of “linking pins,” people who were members of multiple work teams. Such persons led some work teams and were members of others. They ended up translating organizational strategy to day-to-day work. Their team-spanning knowledge facilitated integration of ideas, processes, and workflow.

One would hypothesize disproportionate importance for those who are members of multiple nodes in the network. They learn multiple cultures. They identify and communicate the potential synergies across nodes of the network.

Effective leadership of a network, in my experience, requires a selflessness that masks that any leading is even being done. Leadership in a network seems more like facilitating latent consensuses. Hence, one key attribute is consistent outreach to different nodes, and the ability to translate knowledge and nomenclature of one node to another. (This is one feature of joint citizenship in multiple nodes.)

Another attribute goes beyond knowledge of different nodes’ perspectives to support of them. Effective leaders in networks must be devoted to the value of the network, beyond the value of individual nodes.

I suspect leaders in networks must be more flexible on the degree of commitment of each node to a network. Networks are often voluntary clusters of synergistic units. The benefits of ongoing network membership might vary across the nodes (some are deeply benefitting; others not so much). Hence, an effective leader needs to calibrate aspirations across nodes. This requires empathy and the ability to understand various perspectives.

I suspect much of this means that leaders of networks, in contrast to leaders of single organizations, may not even be labeled as “leaders.” Indeed, they might wisely avoid labeling their work as leadership. Rather, they will wield their influence because of the shared beliefs of network nodes that they have each node’s interests at heart.

Some Joys Require Time to Develop


In reading about the private sector job market, it’s notable how much has changed. In the 20th century, it was common for a young person to enter a work organization after completing their education, start in a relatively low-level job, receive formal training from the company over time, advance through the ranks, and eventually end their career at that same organization, at its highest level. In contrast to that culture, many organizations today assume that employees will spend relatively short amounts of time before leaving for another organization. Indeed, with the rise of the “gig” economy, many workers are not really employees of any organization. They are contractors exchanging their labor for money, with no legal obligations to work on any particular schedule and, in return, little ongoing commitment of the organization toward them.

In contrast, many of the joys of being an academic in a university come from the longevity of tenure as a faculty member.

Faculty members can gain immense satisfaction from interaction with students. There are few experiences more pleasurable than seeing an individual student finally understanding some threshold concept in a course. Once a key concept is understood, the student is then able to achieve a more integrated knowledge of a field. Before that, all was blurry to them. The “aha” moment in a student’s eyes is priceless. Accumulating these moments takes time.

So too are the pleasures of following the news of successful former students, providing a quiet sense of being a little part of their success. Indeed, it is the student who had to work hard, and struggled in classes, whose success is most rewarding for faculty. These pleasures do not arise quickly; they require time.

Similarly, faculty tell us over and over again that their colleagues in the same field are a source of great satisfaction to them. The ties that grow among colleagues are unlike most others. Colleagues read drafts of papers and chapters written by each other. They proffer critical ideas for improvement; they provide key stimulus to new projects; they buck us up when we hit dry patches. Nurturing such bonds among colleagues takes time.

Finally, academics have the freedom to pursue long-run research activities. For example, the history of scientific discoveries is clear in showing that the perseverance of basic science yields over time many of the discoveries that yield new technological breakthroughs (sometimes decades later). Much curiosity-driven research requires building up an infrastructure (in some humanities, images of key documents; in the lab sciences, equipment, specimens, research teams). This requires time.

There is another benefit, about which I was reminded today. Today, Vice Provost Aggarwal and I shared lunch with members of the Georgetown University Association of Retired Faculty and Staff. The atmosphere was joyous, with colleagues renewing friendships and updating one another on their activities. Some mentioned to me research activities they continue post-retirement. Others described their participation in the GU Learning Community Courses, where retired faculty teach noncredit courses on a wide variety of topics, open to the community. In short, they were engaged. They were animated.

What struck me was another attribute that comes with longevity – a commitment to an institution, a sense of shared purpose, and an interest in the future of the organization. The questions the retirees posed to me were almost all future-oriented. They cared about the future of Georgetown, to which they gave so much of their lives. While it was fun for me to communicate all the exciting initiatives Georgetown is launching, I benefited most from the reminder from them that an academic career offers another gift – a long-lasting sense of belonging in a university community.

2019 Provost’s Distinguished Associate Professors


Each year, deans, departments, and similar units nominate deserving colleagues as Provost’s Distinguished Associate Professors. A faculty committee of university professors and endowed chair holders, chaired by Reena Aggarwal, Vice Provost for Faculty, reviewed the applicants. This year, the competition was particularly intense, with many more nominations than in previous years.

Georgetown uses the designation to honor Associate Professors who are performing at extraordinarily high levels. These designations are term-limited, with a maximum duration of five years or until promotion to full professor. As indicated below, their work exemplifies what makes Georgetown strong – faculty thoroughly engaged in pushing the envelope of knowledge in their field, and transmitting their passion for such work to their students and the general public.

Vishal Agrawal is an Associate Professor of Operations and Information Management in the McDonough School of Business. He received his Ph.D. from Georgia Institute of Technology. Dr. Agrawal’s research interests include sustainable operations, new product development and supply chain management. His research focuses on managerial challenges at the interface of business and the environment. He is also interested in the effect of consumer behavior on operations and new product development strategies. Dr. Agrawal’s research has appeared in leading journals such as Management Science and Manufacturing & Service Operations Management, and has received several awards including the Management Science Best Paper in Operations Management Award (2015), Paul Kleindorfer Award in Sustainability (2016), and the INFORMS ENRE Young Researcher Award (runner up 2014). He serves as an associate editor for Manufacturing and Service Operations Management and as a senior editor for Production and Operations Management.

Laia Balcells is an Associate Professor in the Government department of Georgetown College. She is a political scientist specializing in the study of political violence as well as nationalism and ethnic conflict. She earned her PhD from Yale University in 2010 and came to Georgetown from Duke. Dr. Balcells’ book, Rivalry and Revenge: The Politics of Violence during Civil War (Cambridge University Press, 2017), deals with the determinants of violence against civilians in civil war, and explores micro-level variation in the Spanish Civil War and Côte d’Ivoire. Her more recent work examines preferences for secessionism and their relationship with redistribution and identity-related factors. She has also recently explored post-war low-intensity violence (in Northern Ireland), wartime displacement (in Colombia and Spain), and cross-national variation in civil war warfare and its implications on conflict duration, termination and severity.

Shweta Bansal is an Associate Professor of Biology in Georgetown College. She completed a RAPIDD postdoctoral fellowship at the Center for Infectious Disease Dynamics at Penn State University and the Fogarty International Center at NIH. She completed her Ph.D. in 2008 in network modeling and infectious disease ecology at the University of Texas at Austin. Dr. Bansal is an interdisciplinary mathematical biologist, and her research is focused on the development of data-driven mathematical models for the prevention and containment of human and animal infectious diseases using tools from network science, statistical physics, computer science, and statistics. Dr. Bansal’s Lab focuses on the interactions that facilitate infectious disease transmission between hosts. It seeks to understand how social behavior and population structure shape infectious disease transmission, and how knowledge of such processes can improve disease surveillance and control.

Leticia Bode is an Associate Professor in the Communication, Culture, and Technology master’s program of the Graduate School of Arts and Sciences. She received her PhD in Political Science from the University of Wisconsin, Madison. Her work lies at the intersection of communication, technology, and political behavior, emphasizing the role communication and information technologies may play in the acquisition and use of political information. Her work examines the effects of incidental exposure to political information on social media, effects of exposure to political comedy, use of social media by political elites, selective exposure and political engagement in new media, and the changing nature of political socialization given the modern media environment. Her work has appeared in the Journal of Communication, Journal of Computer-Mediated Communication, New Media and Society, Mass Communication and Society, Journal of Information Technology and Politics, and Information, Communication, and Society, among others. Dr. Bode also sits on the editorial boards of Journal of Information Technology and Politics, and Social Media + Society.

Jeremy Fineman is an Associate Professor in the Department of Computer Science of Georgetown College. He joined the Georgetown community in 2011, before which he was a Computing Innovation Fellow at Carnegie Mellon. Dr. Fineman studies algorithm design and analysis, focusing on parallel algorithms, scheduling, and memory-efficient or large-data algorithms. He is also interested in classic (sequential) algorithms and data structures. He has published prolifically and has emerged as a leader among his peers. His work is well-recognized by his peers, and one of Dr. Fineman’s papers received a Best Paper award from the 2014 European Symposium on Algorithms.

Emily Mendenhall is an Associate Professor of Global Health in the Edmund A. Walsh School of Foreign Service at Georgetown University. She is a medical anthropologist who writes about how social trauma, poverty, and social exclusion become embodied in chronic mental and physical illness. Dr. Mendenhall received her Ph.D. from the Department of Anthropology at Northwestern University and her MPH from the Hubert Department of Global Health at Emory University. Prof. Mendenhall’s most recent project is a book forthcoming with Cornell University Press (2019), Rethinking Diabetes: Entanglements of Poverty, Trauma, and HIV, which considers how “global” and “local” factors transform how diabetes is perceived, experienced, and embodied from place to place. She is also the author of Syndemic Suffering: Social Distress, Depression, and Diabetes among Mexican Immigrant Women (Routledge, 2012). In 2017, Dr. Mendenhall was awarded the George Foster Award for Practicing Medical Anthropology by the Society for Medical Anthropology.

Please join me in congratulating these wonderful Georgetown colleagues for their accomplishments and their ongoing contributions to our university.

What Do We Mean by “Skills?”


There is much discussion these days, especially in liberal arts institutions, about what kind of knowledge should be the focus of undergraduate and graduate education.

In many of these discussions, the word “skills” arises. Its use is causing some misunderstandings. Some of us devoted to liberal education hold to notions that the traditional liberal arts are knowledge domains resulting from “disinterested inquiry,” in contrast to those devoted to a given profession or vocation. Indeed, “skills” often is used to describe the knowledge required for activities within a job. From this follows a common insistence that an undergraduate degree from a liberal arts institution should explicitly not be occupationally targeted – that job skills are not the obligation of such an undergraduate curriculum.

If one looks for a formal definition of a skill, one finds: “An ability and capacity acquired through deliberate, systematic, and sustained effort to smoothly and adaptively carry out complex activities or job functions involving ideas (cognitive skills), things (technical skills), and/or people (interpersonal skills).” This indeed sounds rather task-oriented.

But further definitional commentary dissects skills into hard skills, labor skills, life skills, people skills, social skills, and soft skills. These expand the notion of skills far beyond narrow task-oriented knowledge suitable for a given job. Some of these subcategories of skills fit the kind of capacities that those in liberal education have espoused for some time: creativity, resiliency, critical thinking, decision-making under uncertainty, leadership, dealing with conflict, empathy, taking the “other’s” perspective, inter-cultural competence, reflection, and discernment.

Surveys of employers and business executives repeatedly find that they highly value attributes like strong work habits, self-discipline, and computer skills. This is to be expected. But they also highly value attributes like critical thinking, communication, problem-solving abilities, and cultural and global awareness. These latter attributes are often identified as desired outcomes of a liberal education. But we inside universities rarely label them as “skills,” while many outside academia routinely do.

So, in some sense, the word “skills” is being used in multiple ways and causing communication problems. I find that many colleagues who take pride in their “disinterested inquiry,” searching for insight and truth, also value that such inquiry builds the ability to think critically, and to communicate in words and speech. They value that such inquiry is global in its reach and requires cultural understanding. They value that a derivative attribute of the experience is empathy towards those quite different from oneself. But often they would not label these as skills.

So, as we all actively contemplate the future of universities and how they can contribute to humankind, I wonder whether we should be more careful to avoid too glibly using the term “skills,” either to denigrate or support some new educational activity. Having a discussion at a finer conceptual level (e.g., empathy, global awareness) might be a wiser course. It’s not that liberal education is antithetical to many personal attributes valued by others as “skills.” It’s that many in academia don’t think of them as skills.

Method and Substance


The traditional organization of universities honors specific substantive foci – e.g., biology, psychology, mathematics. The units bring with them a set of questions about different phenomena of interest. What is life and how do living organisms function? How does the human mind function and how does it affect behavior? What is the underlying logic of shape, quantity, and arrangement?

The disciplines and fields also, however, bring with them a set of methods of inquiry or approaches to scholarship. In many fields these are well-defined practices, which are prescribed by the discipline and sanctioned as legitimate ways to provide evidence for conclusions or arguments forwarded in the research.

Inside many of the social sciences, one finds a thriving mix of methods. It is common in such fields to have formal courses in “research methods,” to introduce the fledgling student to the alternative approaches to knowledge acquisition. In some fields, the student would be exposed to collecting data from existing administrative or archival sources, to participant observation or ethnographic techniques, to forms of unstructured interviewing of persons, to randomized experiments with human subjects, to quantitative survey research, to statistical analysis of existing quantitative data. Fields that use multiple methods sometimes sort themselves into internal tribes, each of which touts the superiority of one method to discover truth and disparages the others.

As a provost, one is struck by the use of similar methods across disciplines studying very different phenomena. For example, behavioral economists use experimental laboratory methods with most of the features of psychologists’ methods in their laboratories. Organizational analysts sometimes use intensive observation and case study techniques that are common to anthropologists. Scholars in cultural studies in foreign language departments and English departments use techniques common to those in sociology departments. Some psychologists use fMRI measurement in ways not dissimilar to those of neurologists. Some faculty in linguistics use research approaches similar to those in computer science. Statistical analysis of quantitative data is common not only in statistics but in political science, economics, sociology, psychology, public policy, business, etc.

This commonality of methods across fields is interesting at the university level, for three reasons. First, it produces a set of courses that cover similar content across different programs. For example, there are statistics courses spread throughout scores of departments. They differ in the mix of theory and application. They also tend to utilize data examples from the fields in which they are taught (e.g., in environmental studies, data on fish; in economics, data on businesses). Similarly, the design of experiments or surveys can be taught in a variety of programs. Textual analysis is spread throughout many departments.

Second, it produces a set of faculty who share interests in advancing research methods but find themselves in different units. Good things can happen when such faculty get together. For example, at Georgetown there is a group of quantitative scientists from throughout the university, GQUADS, that convenes regularly to discuss new methods in statistics, computational science, and related issues. When methods-oriented faculty team-teach across fields, wonderful things can happen for students and faculty.

The third reason is more of a thought experiment – what would happen if a university organized itself around units that shared research methods rather than units that shared a set of substantive foci? If we organized that way, would we cumulate more insight into, for example, how economics and psychology might combine to explain human behavior? Would we end up with even more conflict between theory and application than with our current disciplines? Would methodological developments themselves advance at a greater pace; would we develop better measurement and observational tools faster?

The Minimum Size of Academic Certification


In the last six years, we’ve experienced a set of phases involving online learning. In August of 2012, there was widespread speculation that “brick and mortar” residential universities were headed for replacement by free internet-based online learning platforms. The higher education community has learned much since the hype phase of online learning.

Given the level of self-discipline required, multi-course online degrees appear to be much better suited to master’s-level education than to bachelor’s-level education. Online master’s students are often working full time, and they seek a degree in their off-hours. The motivation for the pursuit is often the hope of advancement in one’s career or retooling for another field. They tend to be older and more mature than many bachelor’s students.

At the same time, online learning of a different sort – short, narrowly focused learning experiences (e.g., Khan Academy) – has much larger appeal. Further, there is some evidence that employers value certification from shorter learning experiences. For example, in computing fields such certification is used by employers as a qualifier for technical positions (e.g., Cisco certified network associate, Microsoft certified systems engineer).

Further, MIT in the last few years has offered a “micromaster’s,” a series of 5-6 courses using the edX online platform. If the student successfully passes a proctored final examination for each course, they are awarded a micromaster’s credential. The course sequences are designed to offer a short, but integrated graduate-level treatment of a larger area (e.g., statistics and data science, supply chain management). A successful completion of the micromaster’s gives preferred entry into a related MIT Master’s degree.

Educationalists cite this sequence of a micromaster’s followed by a master’s as an example of “stackable” course sequences. It evokes a future of higher education in which coursework might extend over a long period of time, with sequences taken in spurts to yield “nano” certifications that are eventually combined into a larger degree program. Obviously, one of the issues with such a future is whether learning in such a staccato way has educational value equal to that of the same courses taken in a more compressed time frame.

As evidenced by these developments, we are seeing a rethinking of the minimum amount of learning required to be valued by students and their employers. We should admit that this is not completely new. Master’s degrees in many fields were once two-year experiences but are now increasingly one-year curricula. Many undergraduates are completing their bachelor’s degrees in less than four years.

We should expect continuous reexamination of the volume of course work that merits an academic certification. I suspect that there will be demand for shorter and shorter learning experiences to qualify for traditional degrees (i.e., bachelor’s, master’s). Whether shorter course experiences will be valued, I suspect, will depend on their educational design.

The sustainable new short curricula are likely to be designs that a) provide a truly integrated experience for the certification, each course building upon the prior; b) have sufficient depth that significant knowledge advances result; and c) provide the student with flexibility for future educational choices. Drifting into packaging existing subsets of courses into new credentials without seeking those three attributes might not serve well the students we wish to educate.

The Meaning of Grades

Posted on

I taught a first-year student seminar this term and finished assigning final grades. In examining the varying performance of the students on the weekly writing tasks and the final semester project, my thoughts turned to the recent faculty intellectual life report. Among a large set of good recommendations, the report once again raised the issue of grade compression. Three years ago, I wrote about this, but the issues remain.

When I first arrived at Georgetown in 2012, a set of faculty made sure I was aware of their concerns. One dean noted that large portions of each class were achieving the “Latin honors” of cum laude, magna cum laude, and summa cum laude at graduation, diluting the honorific meaning of those appellations. He really didn’t know what to say when a parent was so proud of a child achieving a GPA of 3.0, when he himself knew that GPA put the student in the lower percentiles of the class.

By that time, the McDonough School of Business had already decided to enforce some spread of grades across its classes. The result was that the mean grade for business majors was lower than the mean grade of their peers in other Georgetown schools, and their graduates were disproportionately failing to achieve Latin honors.

So, we agreed to fix the cum laude and above designations by basing them on percentages of the class, not on fixed GPAs. For example, cum laude is awarded to the top 25% of the graduating class within each school, a GPA cutoff that last year ranged from 3.66 to 3.81; summa cum laude, to the top 5% of the class, a GPA from 3.88 to 3.95, quite close to the maximum of 4.00. The GPA targets are updated each year to reflect changes in the percentiles.
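As a toy illustration of the percentile-based approach, the honors cutoffs can be derived each year by ranking the graduating class’s GPAs and reading off the thresholds. The GPA values below are invented for illustration, not actual Georgetown data:

```python
# Sketch: derive Latin-honors GPA cutoffs from class percentiles.
# All GPA values here are hypothetical.

def honors_cutoffs(gpas, cum_pct=0.25, summa_pct=0.05):
    """Return the GPA thresholds for the top cum_pct (cum laude)
    and top summa_pct (summa cum laude) of a graduating class."""
    ranked = sorted(gpas, reverse=True)           # highest GPA first
    cum_cut = ranked[int(len(ranked) * cum_pct) - 1]    # lowest GPA in top 25%
    summa_cut = ranked[int(len(ranked) * summa_pct) - 1]  # lowest GPA in top 5%
    return cum_cut, summa_cut

# A hypothetical graduating class of 20 students:
class_gpas = [3.95, 3.90, 3.85, 3.80, 3.70, 3.60, 3.50, 3.40,
              3.30, 3.20, 3.10, 3.00, 2.90, 2.80, 2.70, 2.60,
              2.50, 2.40, 2.30, 2.20]

cum_laude, summa = honors_cutoffs(class_gpas)
```

Because the thresholds are recomputed from each year’s distribution, they track grade inflation automatically: as the class’s GPAs rise, so do the cutoffs.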

The other changes we made increased transparency. First, on the internal transcripts that students see each term, we post both the individual grade and the mean grade in the course. This conveys that a B (3.0) in a course with an A- average (3.67) might have a different meaning than a B in a course with a C+ average (2.33). Second, we asked the registrar to give each department chair/unit head the distribution of grades in every class in their unit, in hopes that more transparency would generate faculty discussions about grading standards.

Over the subsequent years, it has become clear that faculty do not agree on the meaning of grades. Some hold strong to the meaning prescribed in the student bulletin: a D is a minimum passing grade; a C is adequate performance; a B is good performance; and an A is excellent performance. Further, they interpret these evaluations as relative to students in the current class. Such an interpretation implies that, unless there is rare perfect homogeneity in performance, there should be some variation in grades.

Other faculty assert that they specify a set of learning goals for their class, with a corresponding set of assessment tools. If all the students pass that threshold, they should all receive an A, in their opinion. (There does not seem to be much discussion of raising the learning goals in an attempt to stretch the students.)

There are many other related sentiments – a common one that small seminar classes often demonstrate superior performance among students, and hence, giving them all A’s makes sense; another, that students flock to courses known to give high grades; another, that we place our students at a disadvantage for graduate schools’ admission if we give lower mean grades than our peer institutions; another, that lower mean grades produce more harmful competition among students within classes; finally, faculty report that parents are increasingly vocal in supporting high grades for their child.

There are, however, equity problems in continuing grade compression. Departments that give lower mean grades produce majors who disproportionately fail to achieve Latin honors. Some majors achieve lower grades in their own department than they do in other departments; other majors receive much higher grades inside their own department than outside it. When grading practices systematically vary across fields, the GPA yields little information about a student’s performance without knowledge of the courses taken.

As GPAs continue their advance toward the maximum of 4.0, they contain little discriminating information across students. Indeed, judgments about graduates will increasingly depend on other attributes.

“Scientific Facts”

Posted on

Currently, there are many signs of attitudinal gulfs among those with different levels of education. This is a post concerning disagreements about the value of science, as an enterprise that contributes to the common good. There appear to be three features of science that contribute to the balkanization of support.

First, much of the priority-setting for science funding, the evaluation of newly proposed work, and the assessment of the value of products depends on the judgments of those in the same field. This notion of “peer review” is a feature of almost all research endeavors but is most prominent in science. A critique of this process often labels peer review as “cronyism”: friends and associates are merely supporting one another. “You support my research, and I’ll support your research.” The obvious fear is that funded research fails to advance the common good in the most efficient way. Instead of a meritocracy, the evaluation process is an elite friends’ network of self-aggrandizement. The likelihood of support is determined not by the importance of the proposed research but by the connections of those proposing it to those reviewing it.

Protections against such cronyism include recusal rules for reviewers, which exclude collaborators, mentors/mentees, and colleagues from the same university. They include transparency of all grants awarded by funding agencies, with descriptions of the work proposed. Increasingly, they include linkage between grants awarded and the research products of the grant.

Second, the ever-changing knowledge set produced by scientific progress confuses the uninitiated. Science constantly creates new hypotheses. Some are supported, and the findings add to currently accepted knowledge. However, few scientists expect that all parts of currently accepted knowledge will be invariant in the future. Science progresses. What appeared to be true in one era is refined and changed with discoveries in a later era. For those who seek invariant truth, such change can be misunderstood as poor performance: nothing out of science is believable because its “truth” is dynamic.

Third is the fact that science, like all academic fields, is in constant deliberation. There are always controversies. Different theories explaining target phenomena appear to be attractive to different subgroups. Different approaches to questions are supported by subgroups. Debates are common. Most scientists would say debates are necessary. Opposing viewpoints, orally presented in conferences, printed in journals and books, are the necessary fuel to progress. The debates help identify future research directions and clarify puzzles, all to the benefit of seeking a better approximation to the truth.

From the outside, not knowing these last two features of research, it’s easy to attack research by noting that what the fields are claiming are “findings” are constantly changing. You can’t believe anything they say because in a few months they’ll say something different. You just can’t count on them, so ignore them. Further, when popular media describe the internal debate over theories, methods, and findings, they often do so through the lens of a two-party debate. “They don’t agree within the group. The findings are ‘controversial.’” “There is a lack of consensus even among themselves. There are many who don’t accept the conclusions.”

These three features of science lead one to hypothesize that part of the large educational differences in support of science is due to a failure of science and researchers to describe their work and its culture. How can scientists communicate that current findings will be subject to similar refinements and changes as prior findings? How can scientists help the media understand that controversies are the engine of advancement in science? Widening the support for science requires that we tackle these questions with the same vigor we use in our research.
