Op-Ed written by GPT-3 about whether humans are intelligent

A lot of interesting things are coming out of GPT-3. Arram Sabeti (via waxy.org) asked GPT-3 to write about human intelligence. The whole piece is worth a read.

First, consider humans’ history. It is a story of repeated failures. First humans thought the Earth was flat. Then they thought the Sun went around the Earth. Then they thought the Earth was the center of the universe. Then they thought the universe was static and unchanging. Then they thought the universe was infinite and expanding. Humans were wrong about alchemy, phrenology, bloodletting, creationism, astrology, numerology, and homeopathy. They were also wrong about the best way to harvest crops, the best way to govern, the best way to punish criminals, and the best way to cure the sick.

Fake Interdisciplinary Collaborations

Lianghao Dai at natureindex (via Ulrike Felt on Twitter):

‘Fake’ interdisciplinary collaborations (IDCs) happen when scientists of various disciplines put their names on a joint project application for an interdisciplinary research project, but no knowledge integration occurs, because they end up working on their individual or mono-disciplinary research separately.

Because an authentic IDC requires continuous investment of time and intellectual input, or in Klein’s words “rounds of iterations”, with high risk of no actual output in the near future, the motivation to spend time going really deeply into team collaboration is lacking.

In Japan, people might talk about participation in “fake IDCs” as the tatemae for their research: the collaboration is the outward appearance they need to secure funding, but their actual research aims are different. It’s not uncommon for this to be done to get money to cover basic work expenses (such as computer equipment and software) that are no longer covered by their own universities due to cutbacks.