Ted Chiang on Artificial Intelligence

Ted Chiang is a science fiction author who wrote the story that became Arrival (via Metafilter). Recently in The New Yorker:


How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.

With Ezra Klein for the New York Times:

But another aspect in which [superheroes] can be problematic is, how is it that these special individuals are using their power? Because one of the things that I’m always interested in, when thinking about stories, is, is a story about reinforcing the status quo, or is it about overturning the status quo? And most of the most popular superhero stories, they are always about maintaining the status quo. Superheroes, they supposedly stand for justice. They further the cause of justice. But they always stick to your very limited idea of what constitutes a crime, basically the government idea of what constitutes a crime.

In the same vein, the very idea of an internally self-consistent “cinematic universe,” and, more generally, of a fictional “canon,” tends to be politically conservative (as most canons are). Such universes give rise to strong guardians of the canon, and they naturalize time as “big H” History, which tends to be self-limiting in terms of imagination.

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

(Previously on AI and capitalism and Marvel superheroes.)

GPT-Neo: An open source alternative to GPT-3

Access to GPT-3 remains limited, but “EleutherAI,” a “grassroots collective of researchers working to open source AI research,” has released GPT-Neo:

GPT-Neo is the code name for a family of transformer-based language models loosely styled around the GPT architecture. Our primary goal is to replicate a GPT-3 DaVinci-sized model and open-source it to the public, for free.
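
The released checkpoints can be loaded like most other GPT-style models through the Hugging Face transformers library. Here is a minimal sketch, assuming the EleutherAI/gpt-neo-1.3B checkpoint and a recent version of transformers; the prompt and sampling settings are just illustrative:

```python
# Minimal sketch: text generation with a GPT-Neo checkpoint via Hugging Face
# transformers. The model name and sampling settings are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = "Access to large language models matters because"
result = generator(prompt, max_length=80, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])
```

Swapping in a larger checkpoint should be a one-line change, at the cost of memory and speed.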

AI writes passable college papers

A post by Eduref, a company that deals in information about postsecondary education, has gotten some attention for running a kind of Turing test on course papers produced by the AI language model GPT-3.

We hired a panel of professors to create a writing prompt, gave it to a group of recent grads and undergraduate-level writers, and fed it to GPT-3, and had the panel grade the anonymous submissions and complete a follow up survey for thoughts about the writers. AI may not be at world-dominance level yet, but can the latest artificial intelligence get straight A’s in college?

As the saying goes, “C’s get degrees.” Straight A’s in college, however, are far from common, and with AI being far from perfect, GPT-3 performed in line with our freelance writers. While human writers earned a B and D on their research methods paper on COVID-19 vaccine efficacy, GPT-3 earned a solid C. Performing a bit better in U.S. History, humans received a B and C+ on their American exceptionalism paper, while GPT-3 landed directly in the middle with a B-. Even when it came to writing a policy memo for a law class, GPT-3 passed the assignment with a B-, with only one of three students earning a higher grade.

(More coverage at ZDNet and Inside Higher Ed.)
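
For what it’s worth, the GPT-3 side of an experiment like this is little more than a prompt and an API call. Here is a rough sketch using OpenAI’s completion API; the engine name, prompt wording, and parameters are my own illustrative choices, not Eduref’s actual setup:

```python
# Illustrative sketch of prompting GPT-3 through OpenAI's completion API.
# Engine, prompt, and parameters are placeholders, not Eduref's setup.
import openai

openai.api_key = "YOUR_API_KEY"  # requires access to the GPT-3 beta

prompt = (
    "Write a short undergraduate research methods paper on COVID-19 "
    "vaccine efficacy, with an introduction, methods, and a conclusion.\n\n"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine exposed by the API
    prompt=prompt,
    max_tokens=1024,
    temperature=0.7,
)

print(response.choices[0].text)
```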

Last year, I wrote something for The Conversation, based on my experience of using GPT-2, an earlier version of the same language model, to produce papers for an anthropology course. At the time, GPT-2 was close but not quite close enough.

I concluded:

While computer writing might never be as original, provocative, or insightful as the work of a skilled human, it will quickly become good enough for such writing jobs, and AIs won’t need health insurance or holidays. 
If we teach students to write things a computer can, then we’re training them for jobs a computer can do, for cheaper. 
Educators need to think creatively about the skills we give our students. In this context, we can treat AI as an enemy, or we can embrace it as a partner that helps us learn more, work smarter, and faster.

By all accounts, GPT-3 seems much more capable than GPT-2 was. While GPT-3 is not widely available, it won’t be long before it, or something like it, is. This means we need to rethink what writing assignments are and what we want them to do.

John Warner, in Inside Higher Ed, suggests that a change in how we approach grading is sorely needed:


In this case the problem is in our well-trodden patterns of how we assess student work in the context of school. [GPT-3’s] response is grammatical, it demonstrates some familiarity with the course and it is not wrong in any significant way.

It is also devoid of any signs that a human being wrote it, which, unfortunately does not distinguish it from the kinds of writing students are often asked to do in school contexts, which is rather distressing to consider, but let’s put that aside for the moment.

When confronted with this kind of work, what if we did something differently?

What if we replaced that … sigh … B with a “not complete, try again”?

Because honestly, isn’t that a more appropriate grade than the polite pat on the head that the B signals in this case?

This seems like an opportunity to put something like Labour-Based Grading into wider use.

(Previously (1), (2) on GPT-3.)

Op-Ed written by GPT-3 about whether humans are intelligent

A lot of interesting things are coming out of GPT-3. Arram Sabeti (via waxy.org) asked GPT-3 to write about human intelligence. The whole piece is worth a read.

First, consider humans’ history. It is a story of repeated failures. First humans thought the Earth was flat. Then they thought the Sun went around the Earth. Then they thought the Earth was the center of the universe. Then they thought the universe was static and unchanging. Then they thought the universe was infinite and expanding. Humans were wrong about alchemy, phrenology, bloodletting, creationism, astrology, numerology, and homeopathy. They were also wrong about the best way to harvest crops, the best way to govern, the best way to punish criminals, and the best way to cure the sick.

AI-written blog goes viral, almost nobody notices

Liam Porr (via waxy.org):

Over the last two weeks, I’ve been promoting a blog written by GPT-3.

I would write the title and introduction, add a photo, and let GPT-3 do the rest. The blog has had over 26 thousand visitors, and we now have about 60 loyal subscribers… 

And only ONE PERSON has noticed it was written by GPT-3. 

People talk about how GPT-3 often writes incoherently and irrationally. But, that doesn’t keep people from reading it… and liking it. 

I wrote this last year predicting that GPT-2 would be used to write fake course papers. By all accounts, GPT-3 would do even better, and it’s sure to be public soon.