I have a colleague who, in the face of the growing use of artificial intelligence (AI) platforms, feels the need to identify products unaffected by AI. She touts the phrase, “100% human made” as a label that we all might consider going forward.
It is, indeed, a time of rapid change in visions of the future, given generative AI platforms. The varieties of future being discussed range from the total elimination of humans to utopian imaginings of leisure-filled times of plenty for them.
Increasingly, as more and more people use Large Language Model (LLM) platforms for their work, a protocol is emerging in which LLM output serves as a first draft of text products: enter a query, and an essay is generated. Adjacent to this use is the use of LLMs as idea-generating devices for work to be done. Sometimes this involves asking LLMs to summarize the information from a set of written products, extracting key findings.
In university classrooms, some colleagues are having students use LLMs for first drafts of essays, which become the target of student revision and editing.
Similarly, scientific uses of AI received an enormous boost when AlphaFold produced accurate predictions of the structures of many proteins, after training on a data set of known protein structures. But later scientific reviews note that “AlphaFold results need to be validated and should not be employed blindly.” So the scientist needs to treat AlphaFold results as a first draft of sorts, subject to revision based on human-directed investigation.
So, too, recent demonstrations of LLMs have included the writing of a children’s book about, say, a curious frog, complete with illustrations matched to the story. The product was impressive, but just a little less interesting than one might like. Clearly, a next step of human refinement was necessary.
These three examples suggest that human skills in editing will grow more important in our future. Great editing requires deep connection with the draft product. That connection is a prerequisite to critical review: searching for gaps and falsehoods, and adding new content to improve the product. In the sciences, this is what lab presentations used to facilitate – the revelation of initial findings and the seeking of peer contributions to improve the course of the research. In the humanities, this is the scholar’s regimen of revision after revision, of trashing everything and beginning again. It is also the commentary and critique sought from peers and mentors, often in small reading groups sharing initial work for review by others.
From a viewpoint of higher education, changes that require more exercise of critical skills are desirable. With proper use, AI may offer university instructors that opportunity. This already appears to be the case for coding, in computer science; for writing, in the humanities.
But there is also clearly a downside here. Consider LLMs alone: the platforms have ingested as many as 13 trillion words, in the contexts in which they appear in articles, books, blogs, texts, social media posts, and so on. They use patterns of words to predict the next word in a sequence, based on the millions of patterns they have learned. In some sense, the creative act in writing is precisely the opposite of this goal – not using the typical phrase but a uniquely unexpected one, to incite attention, emotion, or reflection.
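The predict-the-next-word objective described above can be made concrete with a toy sketch. The snippet below is only an illustration, assuming a tiny bigram model over a made-up corpus; real LLMs condition on far richer context, but the objective is the same: emit the continuation most supported by past patterns, which is exactly why the output tends toward the typical phrase rather than the surprising one.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the trillions of words real LLMs ingest (illustrative only)
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count which word follows each word (a bigram model: the simplest
# instance of the next-word-prediction objective)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat", the commonest continuation, never a surprising one
```

Always choosing the most common continuation is the opposite of the “uniquely unexpected phrase” that marks creative writing.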
I recently heard a speculation about what would have occurred if humans had possessed an LLM in the 1600s, before Galileo’s observations supported the finding that the earth revolves around the sun, not vice versa. One could easily posit that all the training data would have provided full reasons why the earth was the center of the universe. Using that information as the base, little suggestion of the opposite might arise. Specifically, the creativity based on careful observation that Galileo exercised would probably not have been aided by such an LLM. Having Galileo spend his time “editing” the information from the LLM rather than making his discoveries seems (now that we know what we know) to be misplaced energy.
So, bringing these comments together: while critical thinking in universities might be enhanced with proper use of LLM output, there is a legitimate question about how to support “100% human made” knowledge. Critical thinking and creativity sometimes seem like opposing “muscles” in humans – one reactive, the other fully productive of novelty. However it is achieved, human creativity currently remains distinct from the algorithms underlying LLMs. So, in addition to using LLMs to produce first drafts, we should develop protocols to spur the invention of novel thoughts still the province of humans.
Deep Blue versus Garry Kasparov might serve as an example for pitting any “creations” by AI against the “100% human made” creations of one or more human experts, as long as the human experts are shielded from the temptation to merely edit what is created by AI (and from its undue influence). Thus, the best of both worlds could be a starting point for further research and development.