Over the past few weeks, every newspaper, news program, blog, and podcast seemed to contain commentary on generative artificial intelligence (AI) platforms based on large language models (LLMs). Some of the commentary paints a dystopian picture of AI eventually destroying humanity. Other commentaries promise an unprecedented productivity leap in scientific discovery.
My favorite example of the dystopian view is an AI system that is rewarded for producing paper clips and leads to a world where the AI consumes every raw material (including the iron deposits in humans) to fill the globe with paper clips. A practical example of the optimistic view is the success in predicting how proteins fold into three-dimensional shapes, a problem that plagued biologists for decades; as a result, only 29 of the 4,800 human proteins now lack structural data.
Most of the newsworthy AI systems are trained on diverse and massive data sources obtained from the internet and digital resources from around the world. There appears to be one area of agreement among their developers: they themselves cannot explain exactly how a given output of the system was attained, given models that may be based on billions of parameters.
In an event that didn’t generate long-lasting attention, in October 2022 the White House Office of Science and Technology Policy issued a report titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” It generated a little press for a month or so, but it could profit from more attention.
The report articulates five principles to guide the design and use of artificial intelligence systems, when they are used in ways that can affect humans:
First, one should not be subjected to “unsafe or ineffective systems.” The report argues that some of the protections arise at the design and testing stage: wide consultation with the communities that will deploy the systems, use of third-party technologists who attempt to “break” the system and use it for harmful purposes, checks on whether inappropriate data are used in the training phase of the system, and a commitment to transparency about any harms after release.
Second, the algorithms should not disproportionately benefit some people in ways that violate basic notions of equal protection. “Algorithmic discrimination” can arise from poor training-data coverage of populations or from ignorance of the causal mechanisms of the given issue. There is the now-famous example of training data drawn from a firm’s past employees being used to guide future hiring decisions, merely replicating the firm’s racial and gender biases (sketched in code below). Another example illustrates a misunderstanding of the target phenomenon of the algorithm: using arrest data in an attempt to reduce crime. A common weakness of current generative AI models is that they lack representation of persons, behaviors, and images that are not internet-accessible.
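To make the hiring example concrete, here is a minimal, purely illustrative sketch in Python. The data are synthetic and the bias term is invented for illustration; the point is only that any model fit to a biased hiring history will learn the disparity and reproduce it when screening future applicants.

```python
# A minimal, hypothetical sketch of how a model trained on biased hiring
# history replicates that bias. All data here is synthetic and invented.
import random

random.seed(0)

# Synthetic history: a firm that hired men at a much higher rate,
# independent of a genuinely job-relevant "skill" score.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        gender = random.choice(["M", "F"])
        skill = random.random()                # job-relevant signal
        bias = 0.30 if gender == "M" else 0.0  # historical bias term
        hired = random.random() < min(1.0, 0.2 + 0.5 * skill + bias)
        rows.append((gender, skill, hired))
    return rows

history = make_history()

# "Train" the simplest possible model: the empirical hire rate per gender.
def hire_rate(rows, gender):
    group = [hired for g, _, hired in rows if g == gender]
    return sum(group) / len(group)

print(f"historical hire rate, M: {hire_rate(history, 'M'):.2f}")
print(f"historical hire rate, F: {hire_rate(history, 'F'):.2f}")
# Any model fit to this history -- even a sophisticated one -- will learn
# that gender (or any proxy for it) predicts hiring, and will carry the
# disparity forward into future decisions.
```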
Third, a person’s privacy should be protected, and they should have “agency over how data about [them] is used.” One of the interesting current issues with large language models is that the training data were acquired in ways quite far removed from the humans who generated them, whether text or other types of data. The training data were harvested by scraping the internet, buying data sets, or otherwise acquiring whatever digital data the developers could (a sketch of such scraping follows).
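For readers unfamiliar with how such harvesting works, here is a minimal, hypothetical sketch using the common Python libraries requests and beautifulsoup4; the seed URL is a placeholder. Notice that nothing in this loop involves the people who actually wrote the text.

```python
# A minimal, hypothetical sketch of web-scale text harvesting.
# Requires: pip install requests beautifulsoup4
# The seed list is a placeholder; real crawls cover billions of pages.
import requests
from bs4 import BeautifulSoup

seed_urls = ["https://example.org/"]  # placeholder seed list

corpus = []
for url in seed_urls:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue  # skip unreachable pages
    # Strip markup and keep the visible text. The authors of that text
    # are never consulted at any point in this pipeline.
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    corpus.append(text)

print(f"collected {len(corpus)} documents, {sum(map(len, corpus))} characters")
```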
Fourth, one should be alerted when an interaction one is having is being driven by an AI system, including some information about how it can affect one’s experience and outcomes in the interaction. Most people could perceive the use of earlier, primitive natural language processing platforms in telephone interactions; this is less and less true as the tools evolve.
Fifth, one should be able to avoid the services of an AI system if one wishes; an opt-out option should be offered. This is especially important when the interaction carries personal risk, as in criminal justice, education, and employment. Of course, such agency requires that the person be aware of the use of the AI system in the first place (see the fourth principle).
These principles are the beginning of a framework. Although not fully framed in the language of ethics, they resonate strongly with ongoing discussions of appropriate ethical behavior by designers and users of AI systems.
AI systems hold the promise of improved well-being for the entire globe. They also pose threats of enormous harm, especially to populations unsophisticated about software systems.
With the features of generative AI changing monthly, the search for some enduring principles of behavior is an important one. It’s not clear that this report has achieved such permanent value, but it certainly deserves more attention.