
Provost Distinguished Associate Professor


Academics experience heavily structured career steps. The vast majority of tenure-line faculty receive two promotions during their careers: from assistant professor to associate professor, and later from associate professor to full professor. In addition, a small number of faculty are awarded named professorships or endowed chairs, usually after they have achieved full professor status. An even smaller number are named university professors.

In an attempt to enhance the research lives of faculty, we have put in place a variety of new policies (e.g., banking of courses, use of partial sabbaticals, increased numbers of internal fellowships, and matching awards for prestigious fellowships). But we think we can do better at improving Georgetown as an environment in which scholars can do their best work and are recognized for their accomplishments.

Indeed, we came to the conclusion that one segment of our tenure-line faculty was not being appropriately recognized. We have among us a set of associate professors whose performance as scholars and teachers exceeds the normal thresholds of excellence that we demand of all professors.

For them, there is evidence that their activities have impacted the learning and formation of students. This might include extraordinary levels of mentoring, joint research publication with students, teaching awards, and other evidence of excellence in teaching and mentoring. It generally includes unusually impactful scholarship, as evidenced by awards given by professional associations for books, articles, or other products. It includes repeated and thematically cumulative articles in major journals. It includes receipt of successively larger research grants in those fields with external funding possibilities. It includes awards of national and international competitive fellowships, given only to small numbers of scholars globally.

To recognize such Associate Professors, we will establish a set of Provost Distinguished Associate Professor honorific titles. The Provost Distinguished title will be granted to Associate Professors of unusual merit. These titles will be term-limited with a duration of five years, maximum. (When a Provost Distinguished Associate Professor is promoted to full professor, the term also ends.)

A committee of exceptional full professors will guide the selection of the Provost Distinguished title. Nominations could be submitted to the provost’s office by unit heads (e.g., department chairs). The number of Provost Distinguished Associate Professors may vary over the years. We expect that those so honored would also benefit from the extraordinary merit protocol in practice on the main campus. (The Provost’s office will distribute formal nomination procedures in the coming days.)

We are excited about this new development. We are very hopeful that these new titles will signal and recognize the accomplishments of our highest performing associate professors.

Data Production In Organic Data


As we move from the world of designed data (through surveys, censuses, and administrative forms) to one of self-generated “big data,” or more organic data, we are excited about completely new ways to describe our world, the behaviors of humans, and the activities of organizations. This will be a more complicated data world than the one that forms the basis of most current social and economic statistical indicators. Most importantly, the analysts who produce statistical descriptions in this new world are unlikely to control much of the processes that generate the data.

In the old world of designed data through surveys, a common model of the production of data involves four steps: 1) comprehension and interpretation of the question being posed, 2) retrieval from memory of information relative to the question’s perceived intent, 3) formation of a judgment regarding retrieved memory and the question’s intent, and 4) delivery of a response. The framework begins with the stimulus of a question being posed to a respondent — this is the proximate cause of the provision of the datum (i.e., the answer to the question). This framework became valuable with designed data because it focused attention on weaknesses in the wording of questions (affecting the comprehension stage), biases in memory retrieval, and biases in judgment (e.g., reticence to reveal socially undesirable attributes).
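One implication of the four-step model is that errors at each stage compound. A minimal sketch, with invented per-stage accuracy rates (not estimates from this post), of how stagewise error accumulates:

```python
# Hedged sketch: if each of the four response stages independently preserves
# the "true" answer with some probability, the chance an accurate datum
# survives all four stages is the product of the stage accuracies.
# The stage names follow the model above; the numbers are illustrative.
def accurate_response_prob(comprehension: float, retrieval: float,
                           judgment: float, response: float) -> float:
    """Probability a datum survives comprehension, retrieval, judgment,
    and response undistorted, assuming independent stage failures
    (a strong simplification)."""
    p = 1.0
    for stage_accuracy in (comprehension, retrieval, judgment, response):
        p *= stage_accuracy
    return p

# Even modest per-stage error compounds: four stages at 95% accuracy
# each yield only about 81% overall.
print(round(accurate_response_prob(0.95, 0.95, 0.95, 0.95), 3))
```

The independence assumption is, of course, the sketch's weakest point; in practice a comprehension failure changes what is retrieved and judged.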

Our new world of data contains data created without a uniform stimulus, as in a survey question. Let’s imagine we’re using Google search terms as a data set. Let’s say we’re interested in estimating the number of persons searching for work through examination of such search terms. Not unlike the logic of Google Flu, we plan to code the text strings of searches as more or less relevant to employment search behavior and create an index of employment search based on the coding. There are many interesting problems here.
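The coding plan described above can be sketched in a few lines. This is an invented illustration, not real Google data; the keyword list and queries are hypothetical:

```python
# Code each search string as employment-search-relevant or not, then form
# a simple index as the share of searches coded relevant.
# The keyword set is invented for illustration.
JOB_KEYWORDS = {"job", "jobs", "hiring", "resume", "vacancy", "openings"}

def is_job_search(query: str) -> bool:
    """Crude keyword coding of one search string."""
    return any(word in JOB_KEYWORDS for word in query.lower().split())

def employment_search_index(queries: list[str]) -> float:
    """Share of queries coded as employment-search behavior."""
    if not queries:
        return 0.0
    return sum(is_job_search(q) for q in queries) / len(queries)

sample = ["indeed jobs near me", "weather tomorrow", "resume template", "pizza"]
print(employment_search_index(sample))  # 2 of 4 coded relevant -> 0.5
```

Even this toy version exposes the interesting problems: the keyword list is a design choice, and the denominator counts searches, not searchers.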

Let’s examine the possible behaviors of persons who are indeed actually seeking employment. What behaviors on their part will generate a Google search? We might speculate that some would be seeking a well-known (to them) electronic job listing service and using Google instead of a direct URL to the job-listing site. Others would avoid Google and go directly to such a site.

Still others may be seeking information for alternative employment opportunities, using the search tool to locate such sites. For example, they might be looking at the job listings of specific companies on the companies’ websites.

Still others may be doing their search via personal networks they maintain, using word of mouth as a source of information on relevant job openings. No Google search use would be required.

As these examples show, Google search data systematically miss some employment search behaviors.

Let’s say I switch the data source to Twitter, attempting to use it to track the same phenomenon — what’s the current volume of job search activity in the country?

Let’s again focus on the behavior of Twitter subscribers who are searching for work. As subscribers, the data they emit are tweets, retweets, and choices of others to follow, among other actions. If they are engaged in job searches, what would prompt them to issue a tweet about their search? Would they be more likely to tweet information that positively reflects on their job search (e.g., “I have a job interview”) than other information (e.g., “I’m losing hope at ever finding a job”)? Such filtering of tweets, motivated by attempts to manage self-presentation, might be viewed as akin to the social desirability bias in answering survey questions concerning embarrassing attributes.

Now, let’s change the focus to those who are not looking for work. Some may enter Google search terms about employment for an unemployed family member or friend; some have an ongoing interest in what jobs are open in their field. Their search terms might be precisely the same words as those seeking work.

These are simple examples that force attention to the process that generates the data. In some circumstances the behaviors we are attempting to monitor may not be recorded in the big-data source, or might be recorded only under special circumstances. In survey data, much attention was paid to the formation of the data stimulus (the question), in an attempt to understand the meaning of the resulting data. In this new world we need similar attention to identifying the stimulus to the data production. Here, the stimulus will tend to be unobservable from the data themselves, but building conceptual frameworks that force careful consideration of alternative stimuli is key. Only then can we interpret both the value and the foibles of the organic data.

Ethics in PhD Education


For some time, there have been discussions about the role of the PhD in the system of higher education. A recent Science opinion piece highlights the issues among STEM fields.

We are experiencing a period of rapid innovation in the undergraduate space in the US, driven partially by tuition pressures on baccalaureate education, stimulated by the potential of online platforms to improve access and efficiency, and propelled by the excitement surrounding experiential learning, project-based class organization, and competency-based education.

There is much less discussion of innovation in PhD education. However, there are issues to consider. The concerns tend to span many of the disciplines and fields.

One of the key issues concerns the very mission of a PhD program. Is the PhD program solely designed to build the professoriate for the next generation? Are we training future tenure-line faculty essentially to replicate the careers of their mentors? National data show that the proportion of newly minted PhD’s entering tenure-line positions is highly variable across fields. Some fields, for example computer science, contribute PhD’s to private industry at very high rates, and in their careers those graduates often conduct advanced research. In other fields, those who enter nonacademic jobs use few of the research skills or content knowledge they gained in their PhD programs.

A common complaint among PhD students is that faculty mentors prefer PhD students oriented to academic careers. The faculty measure their own success by the number of placements in top departments. However, there is little coordination between the production of PhD’s and the demand for assistant professors in universities. What obligations do universities have, for such programs, to limit the size of their PhD student body so that graduates have high probabilities of landing tenure-line positions?

A common observation is that PhD programs often effectively provide research skills and deep content knowledge, but little else of value to one’s career. Just as was true for the current faculty, many programs offer no formal education in pedagogical methods or in the practice of supporting one’s research through grant proposals and other funding mechanisms. It’s easy to generalize that observation to skills needed in nonacademic, research-oriented careers.

Some faculty view their graduate students as key supports in executing their own research agendas. The value of PhD programs to those faculty is that they multiply their own research productivity. Some universities view PhD students as cheap labor for teaching undergraduates. Such logic seems to place little value on the obligation of the faculty to form the character and intellect of their PhD students or on the role of a mentor to shape a younger person who wishes to pursue a career of the mind. What obligations do universities have to communicate clearly the quid pro quo’s of being a graduate student?

Some of the issues above rise to the level of ethical treatment of PhD students, in my opinion, when we don’t fully disclose to students the likely outcomes of their PhD training. In this regard, efforts to become more transparent to potential applicants seem wise. How many of the graduates of the program have tenure-track positions; how many are in research positions in other domains; how many are in jobs that make little use of their PhD training? With such information made available to applicants, the risks of less-than-optimal outcomes of a PhD program can be part of the application decision.

Protecting University Citizens While Seeking Important Information


One of the important results of the modern world’s thirst for data-based decision-making is repeated requests for personal information to assemble those data. After every purchase of a car, many visits to websites, or a purchase of fast food, it is common that we as customers are given a set of questions to answer about the experience.

There are wonderful effects of these surveys. Organizations supplying goods and services to the public can be alerted to potential improvements they can launch. Customers are given a voice to express their reactions. Organizational decisions can be guided by real reactions of clients instead of hunches and “gut-feelings.”

There are also some harmful effects of this world. We are barraged by requests for measurement. Merely making the choice to respond or not becomes a burden at common volumes of requests. In addition, everyone appears to think they know how to design a survey and create a structured measurement scheme. (Indeed, since we all have asked questions before, what’s really complicated about constructing a questionnaire?) The result is that we are sometimes subjected to horrible survey requests, containing questions that are ambiguous, sometimes even meaningless. Finally, our limited discretionary time is taken up by these requests.

For these reasons, the participation rates in surveys are declining throughout the world. As the rates decline, the threat of bias in statistics from survey data increases. When the nonrespondents to a survey have different characteristics on the survey variables than do the respondents, then descriptive statistics from the survey respondents do not match what would have been obtained from the full population. This problem cannot be solved by having more people in the sample; the problem needs to be solved with higher response rates.
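The claim that more sample cannot fix nonresponse bias follows from a simple identity. A sketch with invented numbers (no amount of extra sample size enters the formula at all):

```python
# The full-population mean decomposes as
#   Y = r * Y_respondents + (1 - r) * Y_nonrespondents,
# so the bias of the respondent-only mean is
#   Y_respondents - Y = (1 - r) * (Y_respondents - Y_nonrespondents).
# Sample size does not appear; the bias persists at any n.
def respondent_mean_bias(response_rate: float, resp_mean: float,
                         nonresp_mean: float) -> float:
    """Bias of the respondent mean relative to the full-population mean."""
    return (1.0 - response_rate) * (resp_mean - nonresp_mean)

# Invented values: a 60% response rate, respondents averaging 0.50 on some
# attribute and nonrespondents 0.30, leave the respondent-based estimate
# off by 0.08 whether the sample has a thousand cases or a million.
print(round(respondent_mean_bias(0.60, 0.50, 0.30), 3))
```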

Coming to Georgetown and observing our own survey culture was an eye-opener to me. It appears that everyone is free to mount a survey of any population in the University if they so choose. (At a past hackathon, undergraduate students mocked the volume of requests and pieces of information they receive daily from the university.)

Further, many surveys attempt a census of the entire community of interest rather than a scientific sample of the population. This implies that we’re not taking advantage of the economy and statistical inferential properties of carefully designed samples.
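The economy argument can be made concrete with the standard margin-of-error formula; the numbers below are illustrative, not figures from any Georgetown survey:

```python
# A modest probability sample already pins down a proportion tightly,
# so a census of the whole community buys little extra precision at far
# greater respondent burden.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of size n (finite-population correction ignored)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 400 from a large campus population estimates a 50% proportion
# to within about +/- 5 percentage points.
print(round(margin_of_error(0.5, 400), 3))
```

Quadrupling the sample to 1,600 only halves that margin, which is why well-followed-up samples, not censuses, are usually the efficient design.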

Finally, there seems to be no consistency in how much follow-up is devoted to requests for survey participation. As a result, many surveys have very low response rates, sometimes representing only a small minority of the survey sample.

These are common concerns in large organizations. For that reason, many organizations have created processes to evaluate proposed surveys of their members. (Federal government agencies, for example, operate under “respondent burden budgets” that limit the number of hours of the US public’s time they can use for survey participation.)

Some universities have created an oversight group of technical experts in surveys, representatives of the survey populations, and administrators to evaluate survey proposals. The group has authority to grant or reject access to the faculty, student, or staff lists used for sample surveys. It attempts to coordinate and combine surveys whenever possible. It assures that analyses from the surveys are available for wider university work, and it archives the data for later uses consistent with the goals of each survey.

I believe we might better serve our institutional needs for information with such a group. In the coming days, we’ll launch discussions with key stakeholders to mount such an effort at Georgetown.

Quality in Organic Data


As social scientists increasingly encounter new data resources, especially those in the so-called “big data” realm, they’re finding a new challenge to identifying the proper quality framework to use.

For some years, much of empirical social science was guided by a framework of inference from a sample-based data set to a large well-defined population. The data and statistics derived from the data were evaluated through the lens of a “total survey error” framework, often presented in the chart below:

[Chart: the total survey error framework]

Some of this framework focused on quality properties that caused biases (consistent, systematic error) between the population from which the sample was drawn and the sample generating the data. For example, if the survey data came from a web-based survey, the researcher has to attempt to measure the impact of missing persons with no web-access (this produced “coverage error” on the chart above). Would those without web access have given different answers to the survey questions than those with web-access?
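The coverage-error arithmetic implied here is simple enough to write down; the shares and means below are invented for illustration:

```python
# Bias of a web-only estimate: the full-population mean is a weighted
# average of the covered (web-access) and non-covered groups, so the
# web-only mean is off by the non-covered share times the group difference.
def coverage_bias(covered_share: float, covered_mean: float,
                  noncovered_mean: float) -> float:
    """Bias of an estimate computed only from the covered population,
    relative to the full-population mean."""
    full_mean = (covered_share * covered_mean
                 + (1 - covered_share) * noncovered_mean)
    return covered_mean - full_mean

# Invented values: if 85% of the population has web access, and the
# attribute averages 0.40 among them but 0.60 among those without access,
# a web-only survey is biased downward by 0.03.
print(round(coverage_bias(0.85, 0.40, 0.60), 3))
```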

We’re now inundated with statistics from Twitter and Facebook and other social media platforms, but few studies using such data ask the question whether the nonsubscribers to those platforms would be different on the statistics published.

Worrying about the biases in statistics due to missing observations, however, does not require a large renovation in the quality framework above used in surveys.

A more important difference between so-called organic data (e.g., from social media) and designed data (e.g., from surveys) is that the researcher controls the observations in designed data but does not in organic data. Much of the survey error framework acknowledges possible mismatches between the desired target of measurement (e.g., the status of being employed) and survey questions that are asked in the questionnaire (e.g., “Last week, did you do any work for pay?”)

With the new data resources available to researchers in the big data world, a different kind of measurement issue arises. The researcher is merely “harvesting” the “exhaust” of people as they live their lives. What’s in the exhaust is not controlled by the researcher. For example, what would motivate a tweet that says “I lost my job today”? What type of Twitter subscriber who did indeed lose their paying job would choose to tweet this? What type of Twitter subscriber who lost their job would choose not to send such a tweet? If a subscriber is unemployed, what is the probability that he or she would tweet evidence of that status repeatedly during their unemployment spell? Would a subscriber ever send such a tweet despite being employed for pay? Do people holding multiple jobs behave differently than those who hold only one job?

To construct a useful quality framework for such organic data, the researcher needs to tackle the question of why a person would choose to provide information on the platform. Understanding the motivation is key to knowing the signal to noise ratio in the data for a given phenomenon. The probability of creating such evidence must be known for both those who have the attribute and those who do not have the attribute.

This kind of quality feature of big data cannot be measured within the big-data set itself. Such biases aren’t corrected by having larger data sets; the errors stem from inherent mismatches between the target of measurement and the processes producing the data.

Further, we don’t have language for this type of data quality feature. Candidates might be “the propensity to report an attribute,” “the likelihood of signaling,” or “match between the attribute and the signaling.” None of these are pithy.

Great care will be required in the move from data that were designed for a specific analytic purpose to data harvested from digital traces naturally occurring. We need serious discipline about big-data quality.

Data, Trust, Verification


Each day, we seem to be inundated with two types of media stories simultaneously — 1) how “big data” will usher in a world of heightened convenience and efficiency for all and 2) how relentless tracking of our personal information threatens our autonomy as human beings.

In prior decades, much of our collective understanding of how people felt about issues, what activities they pursue, and what knowledge they possess about key issues facing their lives, came from questioning them directly. The questions were components of sample surveys, through which a scientific sample of the full population was systematically measured and their answers statistically aggregated to describe the full population. The selected respondents to these surveys were given pledges that their answers would remain confidential to the survey organization, and only statistical aggregations would be constructed by combining their answers with many others.

One property of this prior world was that respondents were aware of which of their attributes would be known through the survey (i.e., only the questions answered by them). A second property is that their participation was voluntary, and the proposed uses of the data could be a factor in whether they chose to respond. A third property was that most institutions collecting survey data earned the trust of respondents that the pledges of confidentiality would be honored.

Over the decades, this protocol worked fairly well. There were very few violations of the confidentiality pledges. There was effective dissemination of information to the public to describe key features of their world — how well the government was perceived to be fulfilling their needs; how well-off the public was on basic attributes of income, educational attainment, and health status; how safe from crime different populations found themselves; and how well businesses were performing. That is, by sample persons giving up their privacy to provide data held confidential and used only for statistical purposes, the full society was informed about how well it was doing. Indeed, the data were designed to achieve this common good outcome.

Enter the Internet and unobtrusive data collection on persons, users, and members of services.

This new world produces data as auxiliary to other processes (traffic management, search algorithms, mobile phone location identification, social media communication, and credit card use). We, as individuals, use these services and, in return for their personal benefits, provide personal information to the service (this is generally authorized in the fine print of use agreements that most of us don’t read before quickly hitting the “Agree” button).

These data are attractive to social scientists because they are fine-grained temporally (some almost real-time), they are plentiful (trillions of observations versus thousands of survey respondents), and they track some behaviors that seem important to understanding how society is functioning. Will they become the equivalent of the ubiquitous survey data of the 20th century?

What’s new about this world is that the data weren’t designed to answer any particular economic or social question. Further, they are lean in number of attributes measured on each observation (i.e., we don’t know a lot about whoever initiated the data burst). Finally, they are not held by institutions whose mission is to extract information for common good purposes. Instead, they largely come from businesses that use the data to provide their services.

Most social scientists feel that this new data world has promise to unlock new insights about human thought and behavior. But it’s a different world — there is no defined infrastructure to coordinate the access to diverse data sources. It seems clear (to me, at least) that the winning society in the future will create a way to address privacy concerns of data access, private sector data holder concerns, and needs of researchers to combine diverse data to create more insights. This will require a new set of structures to assure privacy rights of individuals and verification that the data usage does indeed serve common good purposes. If the new world does not combine these new data sources for common good purposes, we will all take a step backward.

Intent of the Communicator, Comprehension by the Listener


The University of California (UC) has issued guidance attempting to ameliorate the bad feelings generated among students of color and other groups by statements or actions from those outside these groups. These kinds of behaviors were often mentioned in the Georgetown students’ Twitter campaign (see #BBGU, #BAGU, #BLGU, and others). The guidance, which includes a checklist of the types of statements that risk giving offense, has generated much controversy.

Some of the speech acts that are highlighted seem directly to involve assumptions by the speaker: “You are a credit to your race” or “You’re a girl, you don’t have to be good at math” or “You people…” The person to whom this is directed, the listener, has to process what is intended by the communication. On the surface, it’s easy to see why the speaker may be viewed as making judgments based on generalizations that do not take into account the unique attributes of the listener. Alternatively, they are indirectly revealing assumptions about entire groups of people, some falsehood about homogeneity within a group. Of course, what the speaker intended is rarely articulated. It’s also easy to imagine what the listener thinks about the statements. “I don’t want to be defined by a single observable attribute.” “They view my group entirely differently than I view my group.”

Other statements target the subgroup less directly: “Why are you so quiet? We want to know what you think. Be more verbal.” could be said to many students. The intent of the speaker remains unclear, but the UC document notes that when the listener is of a culture in which verbal interaction is governed by norms different from American ones, the communication may not achieve the intended outcome. The document notes the possibility that what is heard is a devaluing of the culture of origin of the listener.

The UC effort is consistent with growing evidence that we ourselves are sometimes not privy to our own assumptions. We act and speak often with implicit assumptions. Our effortful cognitions yield different outcomes than our automatic actions.

The controversy that ensued over the UC guidance forces attention to a question of growing concern — when does speech become harmful? What uncomfortable statements should be removed from day-to-day speech? When is the perceived intent of a speaker itself based on unfounded assumptions? When should the listener think more deeply about the intent of the speaker? How can we learn to speak effectively to those who are members of groups outside of our own? How can we see the world as they see it? How can we conquer our own fear of giving offense, in order to create real communication across group barriers?

With the events of this past year in the US, it seems clear that we collectively have work to do. The understanding across groups that we lack needs an honest dialogue, reflection, and more dialogue. This will require ongoing work. It seems to be a wonderful opportunity for US universities to lead this effort, and Georgetown’s excellence in inter-group dialogue gives it strengths that should serve it well in these tasks.

Team Learning for the Fun of it


Over the past few weeks, I’ve found myself in various national meetings discussing the future of higher education and new pedagogical vehicles that seem to fit that future. It’s led me to reflect on personal experiences that form my deepest and warmest memories of organized learning.

The first was a course on Joyce in the English department in my sophomore year. The instructor was a famous senior professor. He entered the classroom with his young dog, whom he instructed to sit below the instructor’s desk. (I immediately thought that was cool and am still amazed at how disciplined the dog was throughout the semester.) I’ll never forget his first words in the class: “I’ve never taught James Joyce before, but I’ve wanted to do so over the years. I’m not sure I know what I’m doing; you and I will learn together about his writings.” While this was an impressive unveiling of his vulnerabilities, it was just the beginning of his setting up an environment giving us permission to reveal our own insights into the writing. The readings were voluminous. In addition to Joyce’s major works, we read multiple books of literary criticism of the writings. The classes were spent sharing our interpretations of the writing, with an openness for alternative meanings. By never openly teaching us, the professor allowed us to learn the layers of meaning of the words, sentences, paragraphs, and actions in the texts. We felt like we were all working together as a team; we were both students and teachers. The lessons of reading and re-reading, letting the text soak in my consciousness, were never forgotten. I kept my notes and course materials for the class for over 20 years.

Another of the most heartwarming experiences of my life was, in retrospect, a near-perfect union of the university goals of formation of students and original scholarship.

The story begins with the observation that multiple courses in my university were tackling the same topic, but from very different perspectives. The courses were offered in different disciplines. The particular issue was the impact on statistics of failure to measure all members of a sample within a survey context. One course began with the existence of a data set, already assembled, but subject to the missing data problem. Another dealt with the cognitive and behavioral underpinnings explaining why sample persons would provide data (or not provide data) to a sample survey. I taught one; a colleague taught the other. There was no overlap in the readings of the courses.

I teamed with my colleague to build a new course from scratch. The faculty taught the course for free. The course wasn’t required for any program. The other instructor and I began this course by presenting the perspective of our disciplines toward the material. Then, the class and we together attempted to look for differences and similarities in the approaches. We moved to constructing a synthesis of the approaches. Toward the end of the class, we decided that we were forming an approach that really needed more work. The semester ended at the point that we formulated a research design to test out new ideas that arose during the term. In a way the course produced new knowledge, yet untested with rigorous research, but we knew what analyses on what existing data needed to be mounted.

As the semester ended, we were surprised — the students asked whether we could continue meeting, with or without a formal course. We decided to plow ahead; everybody became part of a research project. We divided up the work; we became a project team. By the end of a few months, we had the ingredients of what became a journal article and the vision of several others produced later. It was great fun; the ratio of learning per time unit was unusually high.

To me, there are similarities in these two experiences. Both of them stimulated active learning because there was a sense that new discoveries were possible. Both of them were fresh, new, and filled with untested content. Both of them generated a sense of a team of peers attempting to understand in new ways. Both of them permitted all involved, both students and faculty, to become learners and teachers. Both of them involved a lot of work but even more fun.

I marvel at how these experiences resemble some of the project-based and research-based learning protocols being created as part of Designing the Future(s) of Georgetown. If Georgetown could increase the frequency of these kinds of courses, I’m convinced our graduates would have more experiences that they remember years later. I also believe our faculty would have more fun teaching.

The Evolution of Professions


There’s rapid change occurring in traditional professions. The common theme appears to be an evolution from self-directed work to activities in support of the mission of a larger organization.

Physicians, once the epitome of the self-employed, are increasingly employees of large organizations, members of big teams of co-workers. Lawyers, another profession where “hanging out a shingle” was the metaphor for launching a career, now increasingly find themselves within large organizations whose mission has legal guidance only as an auxiliary function. Many architects work within units of institutions that require design skills, but only because they need to house large groups of workers performing activities that have nothing to do with architecture. These are examples of traditional professions.

The morphing also seems to apply to graduates of more traditional academic disciplines. A recent report of the National Science Foundation notes that many STEM-trained graduates find themselves working in fields without a major STEM focus. Another report, on physics PhDs in nonacademic settings, probes their levels of salary and job satisfaction. The Modern Language Association’s 2014 report argues for more interdisciplinary content in PhD programs in language and literature. Unlike the decline of self-employment among physicians, lawyers, and architects, this attention appears to be associated with the decline in academic job markets, related to demographic changes in the university-age U.S. population and declining government support for higher education. Increasingly, PhDs are working in large nonacademic organizations.

In short, something’s afoot on several fronts for post-baccalaureate graduates. What are the implications for the educational institutions that launch these new professionals into the world?

It seems fair to say that the traditional role of many graduate professional and PhD programs was to give the graduate a very deep and broad understanding of a well-defined body of knowledge. Any curricular feature that strayed outside that body of knowledge was generally meant to prepare students for the single dominant occupation of the profession. For example, law students trained in moot court settings and in clinics with individual clients (often disadvantaged persons). Medical students gained clinical experience through rotations in various specialties, in direct health care service or in private practice. Architects served long apprenticeships within an architectural practice. PhD students were given teaching experience. In short, the programs offered practical experience in activities central to the dominant occupation of the profession.

In contrast, few PhD and professional graduate programs provided the knowledge needed to be an effective leader in a large organization. Yet increasingly, the graduates of these advanced programs find themselves working in exactly such organizations. There they are not self-directed scholars. They are team members working to achieve the mission of the organization, a mission sometimes only tangentially related to their advanced education. They feel the dissonance between identification with their profession and allegiance to the organization’s success. They work with others completely unschooled in their field. They find themselves leading, supervising, and motivating others. They confront budget constraints, make tradeoff decisions among alternative goals (some completely outside their professional domain), forecast production, analyze performance, and deal with personnel problems.

It seems clear that, for the most part, universities have been slow to recognize the mismatch between how they educate these advanced professionals and academics, on the one hand, and what those graduates will need to know to succeed, on the other. Some programs, both at Georgetown and at other universities, attempt to give PhD and professional students a more interdisciplinary education to help them attain leadership positions within larger organizations. These have great merit in serving the changing job markets of advanced degree holders.

However, one of the more difficult challenges for advanced programs is to educate for the new career realities of a field rather than replicating the education its current leaders received. We need to ask ourselves whether we are training students for the careers we experienced or for the careers they will experience. For the benefit of our future graduates, we need to attend to these issues.

Students as Producers Versus Consumers

Posted on

Hunter Rawlings, the president of the Association of American Universities, the consortium of the large research universities in the country, recently wrote a thoughtful essay challenging various beliefs about the key purposes of universities.

As more and more commentators note, assessing the value of a university based only on graduates’ lifetime earnings (or, even more reductively, the income of the first job post-graduation) misses many components of the experience.

However, Rawlings makes another point: thinking of a university degree as something to purchase, as a consumer product, is also a dangerous misunderstanding. Students maximize the value of their higher education by maximizing the effectiveness of their own studying. The more time students give of their own, the higher the value of the purchased experience.

As we are discovering in the Designing the Future(s) activities, much student learning takes place outside the classroom, in on-the-job experiences connected to their educational programs and in research-based work. The vast majority of these are situations in which students actively teach themselves. That is, the students invest their own time to realize the benefits of their education.

In some sense, students are not consumers purchasing a good, they are producers of their own education guided by faculty mentors. They are not buying teaching; they are subscribing to a set of experiences that allows them to discover their talents and interests.

Rather than thinking of acquiring higher education as the purchase of shares of stock, where the chief purpose is a passive return on investment, maybe it’s better to think of higher education as a fitness center, where the chief purpose of membership is better health. The membership fee gives one access to exercise facilities and machines, but the health benefits arise only if the member makes the most of them. The “return on investment” is fully in the hands of the member.

Good health through exercise can be achieved in many different ways. Some people thrive in group activities, using others to heighten motivation. Some emphasize cardiovascular health through aerobic exercise. Some lift weights. Choosing the right fitness center, like choosing the right university, is a matter of matching aspirations, self-knowledge, and the assets of the organization. But, just as with a fitness center membership, students get their “return on investment” only when they actively take advantage of all the resources of the organization. Only when students take charge of their education, direct it, and invest their own time deeply do they gain the full benefits of the higher education institution.

When students come to realize that they are the producers of their education, not its consumers, the productivity of higher education is maximized.
