How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.
But another aspect in which [superheroes] can be problematic is, how is it that these special individuals are using their power? Because one of the things that I’m always interested in, when thinking about stories, is: is a story about reinforcing the status quo, or is it about overturning the status quo? And the most popular superhero stories are almost always about maintaining the status quo. Superheroes supposedly stand for justice; they further the cause of justice. But they always stick to a very limited idea of what constitutes a crime, basically the government’s idea of what constitutes a crime.
In the same vein, the very idea of an internally self-consistent “cinematic universe,” and, more generally, of a fictional “canon,” tends to be politically conservative (as most canons are). Such universes give rise to strong guardians of the canon. They also naturalize time as “big-H History,” which tends to be self-limiting in terms of imagination.
I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxieties about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.