Slack (not that one) is necessary for effectiveness and productivity

A short post on Slack, a book by Tom DeMarco:

Only when we are 0 percent busy can we step back and look at the bigger picture of what we’re doing. Slack allows us to think ahead. To consider whether we’re on the right trajectory. To contemplate unseen problems. To mull over information. To decide if we’re making the right trade-offs. To do things that aren’t scalable or that might not have a chance to prove profitable for a while. To walk away from bad deals.

Trying to eliminate slack causes work to expand. There’s never any free time because we always fill it.

Amos Tversky said the secret to doing good research is to always be a little underemployed; you waste years by not being able to waste hours. Those wasted hours are necessary to figure out if you’re headed in the right direction.

Infrastructuring an open access journal is easy, until it’s not

The views below are mine alone, written from my perspective as an individual user. They don’t reflect the viewpoints of 4S, ESTS, or the other editors.


All respect to the Public Knowledge Project and Open Journal Systems. The ecosystem they’ve built over years is tremendous and respectable. For good reasons, “open access journal” has kind of become synonymous with “OJS.”

But all is not well.

I’ve posted before about choosing a platform for Engaging Science, Technology and Society. At the time, we were considering options other than OJS, but in the end we decided to stick with it and upgrade to a new version. It seemed like the safe choice.

Months have passed, and we are on the verge of showing everyone the new version. It’s going to be a big improvement over what we had before. The design is cleaner and will adapt to different screen sizes. The fonts, colors, and images are better, with choices driven by our desire to be more accessible.

All credit for the website goes to Amanda Windle, our shining managing editor. The new website will be good. Where it falls short, it is not because of her (or of any other member of the editorial collective, except maybe me).

Never have I encountered a piece of software that seemed to work against the user as much as OJS does. Things that feel like they should be easy to implement (and can indeed be very easy on other platforms) can be really difficult on OJS. Moreover, the documentation on some crucial things is sparse to non-existent. We could not find out what a version update would do to our site. We’ve done a lot of checking over the past few days, but we found issues that we should have been aware of going in.

Some would have been quite embarrassing if they had not been caught before we made the site public.

In a meeting of the Transnational STS Publishing Working Group that I attended yesterday, Leandro Rodríguez Medina, editor-in-chief of Tapuya: Latin American Science, Technology and Society, a great open access journal, spoke about his experiences.

One of the interesting features of Tapuya is that, while open access, it is attached to a major commercial publisher: Taylor & Francis. Leandro was refreshingly frank about Tapuya’s relationship with T&F. One of the things that stood out to me was how easy it was for him to do things on T&F’s system that would cost us a lot of time and energy to do with ESTS on OJS. It sounded luxurious. Katie Vann, Amanda’s predecessor as managing editor of ESTS and current managing editor of the Sage-published Science, Technology, and Human Values, once said something similar. Compared to Sage, it’s a huge pain to work in OJS, especially if you try to do things the OJS way.

This complicates the common characterization of BIG PUBLISHERS as evil leeches on the volunteer labor of academics. Yes, they are. But if I imagine the investments that must have gone into making a smooth(er) editorial and publication experience, then it’s hard to seriously argue that they add no value at all. (In addition to web stuff, I think similar things can be said about copyediting and typesetting.) If they can make life easier for a thousand journal editors in the process, then they probably deserve a little something for their trouble.

It also illustrates for me how far the open access ecosystem has to go. BIG PUBLISHERS are not just good at building infrastructures and extracting financial resources from universities; they are also huge concentrations of expertise in publishing. (Along these lines, deals like the recent one between the University of California and Elsevier are very interesting, because they kind of shift Elsevier toward becoming an open access publishing services provider for universities.)

This is one area in which I think the open access journal ecosystem falls a bit short, probably especially in the humanities and social sciences. It’s not easy to draw on the expertise that has undoubtedly been built up at other journals. It’s out there, and I know there are efforts to make things better in this area. I look forward to seeing and supporting them. But at the moment, it is OJS and PKP that probably represent the densest and most accessible concentration of this expertise, at least for us. That their documentation and platform fall so short is starting to feel to me like an abdication of responsibility. (And about their paid support…)

I hope OJS gets better. To be fair, for some things it’s perfectly fine. As a turnkey solution to open access publication, it is deservedly successful. You can get an acceptable journal website and production workflow in place easily. But it will present you with more than its fair share of issues in day-to-day use. And making a site genuinely good is hard and costs money. By good, I mean not just nice to look at and use, but made with attention to as many different kinds of users as possible.

Last thing: be kind to your editors and web developers.

Ted Chiang on Artificial Intelligence

Ted Chiang is a science fiction author who wrote the story that became Arrival (via Metafilter). Recently in The New Yorker:


How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.

With Ezra Klein for the New York Times:

But another aspect in which [superheroes] can be problematic is, how is it that these special individuals are using their power? Because one of the things that I’m always interested in, when thinking about stories, is, is a story about reinforcing the status quo, or is it about overturning the status quo? And most of the most popular superhero stories, they are always about maintaining the status quo. Superheroes, they supposedly stand for justice. They further the cause of justice. But they always stick to a very limited idea of what constitutes a crime, basically the government idea of what constitutes a crime.

In the same vein, the very idea of an internally self-consistent “cinematic universe,” and, more generally, of a fictional “canon,” tends to be politically conservative (as most canons are). They give rise to strong guardians of the canon. Such universes also naturalize time as “big H History,” which tends to be self-limiting in terms of imagination.

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

(Previously on AI and capitalism and Marvel superheroes.)

GPT-Neo: An open source alternative to GPT-3

Access to GPT-3 remains limited, but “EleutherAI,” a “grassroots collective of researchers working to open source AI research,” has released GPT-Neo:

GPT⁠-⁠Neo is the code name for a family of transformer-based language models loosely styled around the GPT architecture. Our primary goal is to replicate a GPT⁠-⁠3 DaVinci-sized model and open-source it to the public, for free.