
Imprinting by Mentors


I often have the privilege of talking with faculty about their research lives. In most of these conversations, some talk about their graduate student experiences. I’m struck by the important role that mentors play in the early careers of many PhDs.

Many faculty, looking back at the formative pre-PhD time, remember individual moments when the mentor taught them what level of rigor was required in their field. These memories are sometimes painful – returned drafts filled with markups, pointed critiques of flawed logic, directives to write more succinctly, or attacks on the superficiality of the work. There are also memories of deep pleasure, when the young student first earned the praise of the mentor, a moment when the mentor said that the student had accomplished something worthwhile.

Another common memory was how the mentor shaped the PhD student’s sense of what makes a good problem to pursue in their own research. A common piece of guidance was, “If you were completely successful in the project, what would we know that we don’t know now?” Of special import was whether the project held the promise of generating several follow-up projects: “What are the different outcomes of this project? Can you imagine a follow-up project that would result from each of those outcomes?” This kind of guidance was viewed as key in helping the student conceptualize an integrated set of projects over time. For some faculty, these discussions defined several years of work.

Some faculty also point out a weakness of their mentor on this score: “My mentor gave me the problem for my dissertation.” In retrospect, such mentees had to teach themselves how to identify a good research question. Some remember feeling somewhat lost in the early years of their independent scholarship. Similar problems sometimes arise with mentors who co-publish with their mentees over many years. At worst, such arrangements can stifle the young scholar’s developing ability to discern good research questions to pursue.

A few faculty remember their pre-doc days under the influence of two mentors, each with a possibly different perspective on the student’s research project. They recall navigating between the two, sometimes attempting to blend the two viewpoints. They remember the occasionally awkward meetings of the three of them. They remember getting contradictory advice from the two. Some think that their own careers have blended the two perspectives together in some way.

Some of the relationships between a PhD mentor and mentee last for years. Some faculty, looking back, can remember a moment when their mentor began to treat them more as a colleague than a student. Mentees often develop expertise in new methods or techniques in the field and then teach those methods to their mentor. So, sometimes the teaching and guidance becomes mutual. Gradually, the pair come to complement one another.

What is notable in these discussions is how much these mentoring experiences shape the entire lifetimes of scholars. The imprinting of mentors on the lives of their mentees is deep and long-lasting.

The Three-Story Structure of Theory and Applications


I’m told that one country’s government weather service is a three-story building. On the top floor are theoretical climatologists and meteorologists. They are striving to elaborate the theory that underlies human understanding of the earth’s weather systems and how they change over time. Their work is highly technical and at times has no stated purpose other than satisfying the curiosity of the scientist attempting to find answers to puzzling questions. As they do their work, they are searching for fundamentally new insights. The failure rate of the top floor is high, but the payoff of new developments can be transformative.

The second story of the building is populated by modelers of real weather components across the globe. They are attempting to use newly formulated theories to design new procedures for forecasting weather and modeling climate. They test their models on real data assembled over time, indirectly evaluating new theoretical developments. As they attempt to improve on current techniques, they are pushing the frontier of practice. Often, they examine historical data as the evidentiary base for their judgments. The success rate of the second floor generally exceeds that of the third floor, but its successes tend to be incremental improvements. When the second-floor researchers are successful, they solve some weakness in the existing body of practice by testing new practices against the old.

The ground floor is populated by weather forecasters, applied meteorologists, who provide the public, the government, and companies with the daily predictions of temperature, precipitation, wind, humidity, and cloud cover throughout the country. When they perform at their peak, they are implementing newly proven innovations from the second floor. To the general population, they are the face of the weather service. Hence, the staff on the first floor receive the vast majority of the appreciation for correct predictions and almost all of the criticism for incorrect ones.

Thus, the three stories move from theory to application. One building. Vastly different enterprises. I assume the bottom floor was made application-oriented because its group has the greatest outreach to diverse publics. They respond to “walk-in” trade.

Of course, the value of the three floors depends on the use of the stairways linking them together. Is there movement across floors? Is the movement uni-directional? Are the theoreticians made aware of failures on the first floor? Are the first-floor forecasters aware of the assumptions in the newest model promoted by the second floor? Are there any people who work on multiple floors?

Finally, the variation in visibility of the three floors can affect a superficial evaluation of the product of the building. It’s easy for someone who knows nothing about climate and weather to assume that the first floor houses the most important group in the building. After all, that floor delivers the products and services of the entire building. If there are funding pressures, following that logic, the first floor must be privileged over the higher floors.

Such logic ignores the fact that the third floor is the origin of what the first floor will accomplish in future years. Emptying out the second and third floors is not unlike a struggling farm deciding to keep only the harvesting equipment and to stop buying seed. It might be cost-efficient for one season, but then the farm is out of business. Similarly, with each new introduction of devices and platforms from technologists, it’s tempting to view them as autonomously produced inventions. We can easily make the mistake of judging that only such visible innovation work is of importance. Yet many of those innovations depend on basic science developments made years earlier.

Much of basic research and scholarship is academia’s third floor. While it’s easy to criticize a publication that is read by only hundreds of people, publications exactly like that can turn out to be the seed of new ideas and new ways of thinking that transform the lives of large populations decades later. There is value in the constant pursuit of pushing the edges of human knowledge. Future populations will live at that edge.

We don’t do well with a one-story building.

Data, Information, Knowledge, Wisdom


As we all try to make sense of the world day by day, we are increasingly reminded that we can learn something about almost every topic faster than ever before. But a few clicks on a keyboard generate more than we can fully consume.

It might be useful, however, to make some distinctions among “data,” “information,” “knowledge,” and something perhaps labeled “wisdom.” Data consist of documentation of individual attributes of people, households, businesses, organizations, events, images, texts, and transactions. Increasingly these come from streams of digital bits from internet-connected devices. Sometimes the data are quantitative; sometimes, words; sometimes, images or sounds; sometimes, even smells. The popular phrase “data exhaust” evokes the right image because it implies a material residue of something that was real and often complex.

To people outside the processes, these data themselves often have no real meaning. Their first purpose is simple — to permit interactions to proceed in a manner that fulfills the designs of those in charge of the process. Each of us derives benefits from many of these processes; in the internet world, we pay for them generally by providing data on our behaviors that have value to those running such processes.

But the haphazard collection of our Alexa speech, tweets, credit card transactions, search terms, mobile phone GPS transmissions, facial images, and other exhaust can also, given a specific purpose, produce information. By “information” I mean an assembly of data with a purpose. Hence, different types of information can be assembled from the same collection of data. Our exhaust, when assembled into information, can answer many specific questions.

So, the distinction between data and information in this sense is the added value of connections among data. In some sense, subsetting and combining data invents information in the hands of one seeking an answer to a question.
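As a minimal, purely hypothetical sketch of that distinction, the snippet below treats a handful of invented transaction records as “data” and shows how subsetting and combining them with a question in mind yields “information.” The people, stores, and amounts are illustrative only.

```python
from collections import Counter

# "Data": raw transaction traces that, in isolation, carry little meaning
# to anyone outside the process that generated them. (Hypothetical records.)
transactions = [
    {"person": "A", "store": "grocery",  "amount": 40.00},
    {"person": "A", "store": "pharmacy", "amount": 12.50},
    {"person": "B", "store": "grocery",  "amount": 17.50},
    {"person": "B", "store": "grocery",  "amount": 32.50},
]

# "Information": the same data subset and combined with a purpose,
# here answering the question "how much does each person spend on groceries?"
grocery_spend = Counter()
for t in transactions:
    if t["store"] == "grocery":
        grocery_spend[t["person"]] += t["amount"]

print(dict(grocery_spend))  # {'A': 40.0, 'B': 50.0}
```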

Most academic fields are vast collections of information. There are millions of answers to individual questions that are themselves collections of data. The work of academic fields is both to discover new information and to reassemble pieces of information into novel, coherent wholes. The development of new theory in individual fields seems often to depend on discovering some new information (sometimes merely a new assembly of old data) that reinterprets whole bodies of information. With such assemblies, the term “knowledge” becomes attractive. Knowledge might be viewed as an interrelated network of different pieces of information.

(An interesting assembly of information that is of growing interest in the academy is the story or narrative. Stories often gain their memorability by evoking emotions through the careful selection of pieces of information. Any story manipulates related pieces of knowledge to emphasize purposeful themes.)

So, how does the notion of wisdom come into this cumulative collection of data? A cognitive psychologist friend thinks of wisdom as the result of networks of networks of knowledge. When we label a remark as a wise observation, we often express surprise at a new assembly of knowledge. In my friend’s view, this is evidence of an unusual connection between different knowledge nodes built from experience. A testament to that is the “I never thought of it that way” comment that we make to ourselves.

Many cultures associate wisdom with age. The wise elder may attain that status by networks of knowledge that only years of experience can provide. Seemingly unrelated knowledge domains are found to be related because of millions of life episodes experienced over time.

All of us are overwhelmed with data each day. The individual pieces, scattered about in our minds, are far distant from a state of wisdom. Using our life experiences to constantly probe new mixes of seemingly unrelated knowledge is our route to building the networks of networks of knowledge that might be called wisdom. This takes intellectual courage, but it is often fun.

Philosophy in the Financial Sector


The ecosystem of knowledge development in the modern university is a permanent fascination of mine. A key feature of this ecosystem is the mix of discovery versus application that occurs.

Some scholars within the academy are totally curiosity-driven. They find themselves interested in an issue and begin to use their discipline’s perspective to explore it. They are driven to seek an answer to the motivating question. Rarely are they interested in any day-to-day applications of their work.

Other research efforts seek a solution to a practical problem. The researchers are made aware of an unsolved problem and seek to apply knowledge to discover and test a solution to the problem. This is the movement from the lab to the clinic in a biomedical context. It is the move from a classroom experiment to a whole school curriculum reform. It is the move from the engineering lab to a real-world build-out.

There are some academics who move back and forth between curiosity-driven scholarship and real-world applications. In my estimation, there are few who do both well.

Some of the current discussion about the humanities, the social sciences, and the natural sciences, on one hand, and the job market, on the other, gets conflated with issues of theory and application. The humanities often get labeled as purely curiosity-driven exercises. From that observation, some commentators then extrapolate that the humanities have no value in today’s job market.

In this regard, I read with interest a recent interview with the investor Bill Miller, a man who achieved unusual success designing and running investment funds. Miller is an ABD (all but dissertation) in philosophy at Hopkins. The interview focused on how he reconciled his great success in financial markets with his humanistic studies.

He reported that “Philosophy steered me away from confirmation bias.  In any philosophical assessment, you’re trying to figure out the validity of an argument, working to pinpoint ways to make it stronger.  You’re not necessarily trying to win an argument, and you’re not taking an adversarial position.  You’re really trying to look at things from a wide variety of perspectives.”

A mind that approaches an unsettled issue with a pure disinterest in the final resolution is truly liberated to find new solutions. Miller believes that he acquired those cognitive skills in his philosophy courses. Those skills, in his belief, were key to choosing companies for his investments. One gets the impression that it was his ability to remove emotion from his assessments that was key to his taking different perspectives on the same investment possibility. The ability to attack the same set of facts from different perspectives permitted him to detect the potential value of a new investment opportunity in ways that a traditional approach would not.

Further, his skill in the deep analysis of information, dissecting it and reassembling its components in different ways, led to his success. Miller claims that the habits of mind his philosophical study gave him provided superior tools in his investment management life.

He predicts that these skills will be some of the scarcest resources in the future. Young people, aspiring to leadership in the new world, should strive to acquire them.

What are the Civic Obligations to Provide Personal Data for Common Good Statistical Information?


Some time ago, I mused about a notion of “data ethics” in a set of posts focused mostly on the obligations that data collectors and data analysts have to the persons whose data they handle. Most of the principles that I discussed had to do with protecting the privacy of individuals whose data were being collected and assuring that no individual harm could come directly from their provision of personal data.

This post takes a different perspective on personal data. It ruminates on the question of what obligations residents of a society have to provide data to produce statistical information for common-good purposes.

Most countries of the world have some system of institutions that collect data from individuals and aggregate them into statistical summaries in order to inform the populace about its own characteristics. These summaries generally describe the welfare of the national population and various subgroups. For example, they describe the income distribution of households, educational achievements across subgroups, labor force participation, incarceration rates, agricultural production levels, cost-of-living changes, and a host of other attributes.

These basic indicators collectively provide the nation with a sense of how it’s doing. They inform us how the benefits of the society are shared across different subgroups. By comparing the same indicators over time, the country can judge whether things are getting better or getting worse.

Indeed, such indicators are a foundational component of a democracy. They help the people judge whether the performance of elected officials merits the continuation of their service or whether the country needs new leadership.

In these days of data breaches, we are reminded nearly daily of the misuse of personal data to harm individuals, of profiteering from personal data without the full consent of those who supplied them, and of numerous other events that heighten our concerns about personal privacy. One is tempted to react to these events by avoiding sharing any personal data with anyone, as a way to maximize one’s own privacy protections.

Obviously, however, none of the statistical indicators that a democracy needs to guide the decisions of its residents would be available if individuals chose not to supply their personal data for such statistical purposes.

So, in parallel with our discussions about protecting our own privacy, I’d like to see us all engage in a discussion about what civic obligations we have to contribute to common-good statistical indicators. When asked by a government statistical agency to participate in a survey that produces such indicators, why do people frame the request as an unwarranted intrusion into their private lives? In what ways can such requests be viewed as an opportunity for public service to the common good?

Scholarship and Storytelling


Recently, Georgetown had the honor of hosting Ken Burns, the documentary filmmaker. He met with students, participated in discussions about educational activities among the incarcerated, and showed portions of his films.

Mr. Burns made one suspect that we commonly overemphasize the distinctions between academic scholarship and the kinds of documentary films he makes.

Regarding depth of archival research, Mr. Burns reviewed the making of several films that each took over ten years of work, not unlike the time required to produce a university press book. He cited the cataloging and digitization of hundreds of thousands of images (photographs, documents). He noted the aggregation of materials from multiple sites over many years. That feature of his work seemed very similar to what many academics do in the beginning stages of a research project.

Increasingly, modern academic scholarship involves interdisciplinary teams. His description of the close ties of his group and his respect for their diverse skills reminded me of interdisciplinary research teams describing their cross-functional strengths.

Regarding the rigor of review and rewriting, he cited the numerous iterations in the design and execution of the video product. He described the time and effort spent in the editing phase of a project, continuously squeezing the content to extract the most meaningful components of the overall message. It reminded me of precisely the same apparently endless process of drafting any scholarly work, whether it be an article or a book.

Much of academic scholarship is devoted to assuring a balanced presentation of alternative arguments. Much scholarship clearly describes limitations on conclusions or presents conflicting perspectives. Mr. Burns repeatedly noted how he felt obliged to show both the good and bad of every character, even those for whom he had great sympathy. His goal was to make it difficult for a viewer to know how he felt about a character or an event.

Academic scholarship produces the raw material for teaching. Mr. Burns’ films produce the attention that all lecturers seek in the classroom but few of us ever achieve. The awe that viewers feel watching his films imprints permanent memories of their content.

Mr. Burns has another point of view quite consistent with that of many academics – true understanding takes time. The fleeting, mediated interactions of the modern world limit the likelihood of true insight into complex, elaborated, and interwoven pieces of knowledge. Indeed, at one point he uttered a common refrain of the truly wise scholar: when you really understand a phenomenon, you have finally identified what the key questions are; when you have deep knowledge of a field, you focus more and more on what you don’t know. All of this takes time.

Finally, Mr. Burns knows deeply what many scholars are now just discovering. A very common discussion among the science and social science community these days is how the use of storytelling can motivate understanding of technical research findings. It is the stories that generate emotions, and the emotions that generate attention and memory. When stories generate the attention, more complicated facts can be communicated and remembered more effectively. On this, his mastery is unrivaled.

So, stepping back from the day, I find myself a little more puzzled about the perceived differences between academic research products and those products that share so many of the attributes of academic work but are designed for a broader audience. It seems to me that the two are at least intellectual cousins, if not intellectual siblings. The goals are the same; the methods have more overlap than we sometimes may think.

The 2020 Census is Coming!


As we approach the year 2020, all of the preparatory activities for the US decennial census are ramping up. This week, in partnership with the Poynter Institute and with the support of the Ford Foundation and the Annie E. Casey Foundation, Georgetown conducted a 1.5-day Census 2020 workshop for journalists.

Because a census comes only once in a decade, few remember the what, how, and why of the event. Our workshop was aimed at giving the journalists a deep dive into the various operations of the census and what stories are likely to be important as the months transpire.

A census requires the design and careful implementation of thousands of details. It is the only event in our society that includes everyone. It requires a large staff of devoted workers. It is a national ceremony that occurs once a decade. That is the topic of this post.

In the 2010 Census, over 600,000 people were employed to conduct the various stages of the census. This is the single largest civilian deployment of human resources; indeed, it exceeds the size of military deployments.

Employment as a Census worker is public service. The founding fathers placed the Census in Article 1 of the US Constitution. They invented this scientific method of assuring that the lower house of the legislature would be allocated to represent the population distribution of the fledgling but growing nation. They were serious about having the census count every resident, levying a $20 fine for nonparticipation (over $500 in current value). The census retains a legal requirement to participate, but large efforts are mounted to raise voluntary participation.

The census seeks civic participation on an unrivaled scale. Communities throughout the country form voluntary groups, called Complete Count Committees. Over $500 million will be spent on media – as small as weekly in-language newspapers and local in-language radio stations for new immigrant groups and as large as national network television.

Everyone is counted at their usual place of residence as of April 1, 2020. Most households will receive a snail-mail envelope inviting them to complete a short questionnaire online; some will be mailed a paper questionnaire; a small number will be visited by an enumerator. Even mobile phones can be used to complete the Census form. If the first mode of request is not successful, the large nonresponse followup stage begins, with hundreds of thousands of enumerators, lasting until mid-summer 2020.

The Census Bureau is now taking applications for a large variety of jobs, from checking on the completeness of the list of addresses to office activities preparing for the nonresponse followup stage. The pay rate for the District of Columbia is $25 per hour for a census taker.

These are great jobs for university students because they offer flexible hours; they assure that one will meet people in the community in vastly different kinds of environments; and they provide the satisfaction that one is contributing to the common good.

Over the coming weeks and months, we need to get the word out to Georgetown students about this opportunity.

Communities Determine What is Fact


It’s difficult to read any media these days without encountering a discussion of whether some pieces of news are fact-based or whether they are distorted reports of reality.

It prompts an academic to think about how our individual fields determine what is meritorious and what is not. Much of this is based on peer review. The “peer” in this phrase is generally meant to be one schooled and practicing in the same field of inquiry as the given scholar.

In fields with external funding, the peer review takes place in multiple stages. When a proposal for external funding is submitted to the funding agency, it is often reviewed by a group of researchers in the same field. They rate the proposals relative to other proposals for work in the field. These are judgments of knowledgeable others about the likely benefits of knowledge expansion from a proposed project.

The link between “determining the facts” and peer review is that the reviewers are judging whether the proposed research will indeed add knowledge to the existing domain. Will the results of the work produce valid knowledge expansion?

Later, when funded research has been completed, another stage of peer review begins with the dissemination of findings. These are often fully anonymous reviews. A group of peer scholars is asked to read the written product of the research and judge whether it is persuasive as evidence of a novel finding, an innovation, or an extension of current knowledge. If the work doesn’t earn the support of peer reviewers, it is not published (at least in that journal).

Similar, but perhaps more varied, peer review procedures apply to book-length manuscripts at university presses. It is common that this review occurs after a few early chapters are drafted and the full book is outlined. The role of the internal editor at the university press is often an important influence on acceptance.

The link to “determining the facts” is that the community of peers determines whether the work merits being added to the cumulative product of a field.

My journalist friends note that mainline newspapers often have codes of practice and internal review procedures to vet drafts of stories. These form the internal community of vetters who determine their acceptance of facts. My friends are quick to note that the proliferating new media have not fully adopted such procedures. Indeed, the rate at which new articles, blogs, tweets, and texts are produced makes such reviews unlikely.

In the absence of this, social networks that exchange news pieces among themselves might vet the veracity of reports. Unfortunately, such networks in general do not possess the knowledge requisite to review the veracity of such pieces. They are generally based on friend groups; they tend to be rather homogeneous, thus offering little variation in viewpoints.

I am fully aware of the weaknesses of peer reviews that are part of academic life, but, with all their faults, I must admit I appreciate them more these days.

Technology and Society


Discoveries in basic science decades ago have made available sophisticated technologies to billions of people around the world. As many people have noted, many of us carry around devices more powerful than all the electronic gear in the Saturn moon rocket capsule.

The technological changes applying those scientific discoveries were catalyzed by a culture of entrepreneurship. In this country the most obvious examples of this lie in Silicon Valley. Failing fast, quick prototyping, agile focus on the market, seeking disruption – all of these catch phrases underlay an environment that nurtured unrivaled innovation.

Recent events, however, have revealed a weakness in the environment that created much of the technology that surrounds us daily. It was the hope of many innovators that they were building a world that democratized access to knowledge, reduced the cost of entry of economic enterprises, and linked people together in enriching ways. In essence, this was the key to flattening the earth, in Friedman’s terms. The dominance of the nation state as arbiter of culture and laws would diminish and be replaced by a ubiquitously available set of tools that permit the whole world to form mutually valuable social, political, and economic ties.

A cursory inspection of recent media reports about new technology, however, highlights a different view – cyberbullying; use of the Internet to support criminal activity; the very existence of something called “the dark web;” microphone recording of personal activities by Alexa; control over internet-based information flows in autocratic regimes; active disinformation attacks via bots controlled by another country; cyberwarfare unleashed. It isn’t clear how many of these outcomes were anticipated at the founding moments of these technologies.

Thus, many begin to question what these technologies are doing to the fabric of society. Do we know and interact with our neighbors less frequently? Does the perceived distance between us and others on the web encourage us to be less friendly, engage in harsher language, or disclose the selfish side of our natures? Are the generations coming into adulthood, having experienced the internet and mobile communication from birth, less interpersonally engaged, less capable of or less in need of deep, long-lasting relationships? Do predictive analytics privilege ties among homogeneous groups at the cost of ties to heterogeneous groups? Are we constantly connected but ironically less close to one another?

All of these questions have reminded humanity once again that actions can rarely, if ever, be value-free. One can evade explicit articulation of the underlying values of a platform design, but they will inevitably manifest themselves. Human designers make decisions. The values underlying these decisions can be made explicit or not. The values will either be made obvious at the design stage, or they will become obvious at the execution stage.

It seems clear that Georgetown has a special opportunity here. The intellectual resources of Georgetown and the mission of educating women and men for others give the institution a special obligation to act at this moment in history. We can help provide a conceptual framework that refocuses attention on the use of technology for the common good. We can develop ethical guidelines that could assist developers at the design stage in making explicit the underlying values of platform design. We can help articulate user agreements that clearly inform users of the costs and benefits of agreement. We can push the frontier of tools and algorithms that help protect privacy and make transparent the risks of privacy breaches. We can lead in the use of high-dimensional data to promote social justice. We can help answer a developer’s questions about how to design and use technology in seeking a more just world.

Derivative Inquiry: Dangers Facing Fields that Use Data without Producing Data


An old joke describes a severely inebriated man who is found looking around, on his knees under a streetlight. A passer-by asks him what he is doing. The man says that he lost his keys and is looking for them. The passer-by asks if he’s sure he lost them there. The man says no, he thinks he lost them in the park nearby, but the light was so much better where he was looking.

The story fits some fields of academic inquiry in the sciences and social sciences. For example, much of the empirical work in macroeconomics relies on aggregate data produced by private enterprises and government statistical agencies. The field is somewhat removed from the production of those data. Hence, some of the literature focuses on the gap between theoretical concepts and the data available to reflect them. There are, however, few alternatives. Those data are the streetlight for the field.

In contrast, other sciences collect original data. They conceptualize what observations must be taken to test alternative ideas, how to mount the measurement, how to construct instruments to implement the measurements, and then how to process the data to address the research questions they pose. By collecting the observations directly, they learn the fallibilities of the measurements. By measuring features of the phenomena, they become more sophisticated about the mechanisms producing the phenomena.

Recent events have highlighted this distinction between science based on original measurement and science that starts with data produced by others. Implicit biases discovered in machine-learning-based algorithms are hitting the popular press. The algorithms under scrutiny were based on data sources that were available at the time the algorithms were built. That was their streetlight. What was not anticipated was poor performance on phenomena that were not part of the original data set. For example, the misidentification of persons of color in facial recognition seems to be related to the rarity of images of persons of color in the training data sets for the algorithms.

Algorithms that guide loan risk or health risk assessment can fail if the data sets do not contain measures of all the attributes that affect risk. The best data come from deep understanding of the mechanisms that affect the likelihood of loan default or health conditions requiring medical interventions.

Some of the data sets used as the basis of the machine learning were very, very large in the number of different units observed. But the total size of a data set is of little relevance if its characteristics do not match the real-world phenomena the algorithm will face.
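As a purely illustrative sketch of that point (the groups, features, and numbers below are simulated and hypothetical, not drawn from any real system), a model fit to a very large but skewed training set can look accurate on the dominant group while failing badly on a group that is rare in the data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def simulate(n, flipped):
    """Simulate one subgroup; flipped=True reverses the feature-outcome rule."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    if flipped:
        y = 1 - y  # the rare subgroup follows a different relationship
    return X, y

# A very large training set in which the second subgroup is almost absent.
X_major, y_major = simulate(9900, flipped=False)
X_minor, y_minor = simulate(100, flipped=True)
X_train = np.vstack([X_major, X_minor])
y_train = np.concatenate([y_major, y_minor])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate separately on fresh samples from each subgroup.
X_test_major, y_test_major = simulate(1000, flipped=False)
X_test_minor, y_test_minor = simulate(1000, flipped=True)
print("majority-group accuracy:", accuracy_score(y_test_major, model.predict(X_test_major)))
print("minority-group accuracy:", accuracy_score(y_test_minor, model.predict(X_test_minor)))
# Expect high accuracy on the majority group and near-zero accuracy on the
# minority group: sheer size did not make the training data adequate.
```

The same logic applies to loan or health risk models built on data sets that omit key attributes or underrepresent key subpopulations.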

Data scientists are increasingly realizing that building sophisticated algorithms on weak data is problematic. Faced with the choice between unsophisticated algorithms derived from rich data describing the mechanisms affecting some outcome of interest and sophisticated algorithms based on weak data, they argue that better data sets are a faster route to impact.

To build more useful data sets, we need data scientists’ attention to the measurement step producing the data, as well as the analytic step. Depending only on convenient, bright streetlights may fail us in locating the keys.
