15 Comments
JD

I'm surprised by the dismissiveness of the comments.

If any of these things were happening to you or a very near loved one, you would treat them like an emergency, or at least rank them as among The Most Desperately Important Things.

John

"Life is suffering" - Buddha

"What can we do about it?" - EAs

Beetle

it's posts like these that raise my credence in pro-sentient-life-extinction

Some Guy

This might seem an ugly thing, but if we live in the best possible world and if there are other intelligences out there that have a moral ethic similar to ours (let’s assume ethics converges with agentic world modelers) can you think of a moral reason they don’t intervene?

The Lurking Ophelia

How does it compare to The Precipice by Toby Ord? :D

Martin Greenwald, M.D.

Once you have kids of your own you’ll understand what real emergencies are.

skaladom

Uhhh... welcome to life, I guess? I know it sounds trite, but I really don't get this modern obsession with trying to separate life from death. I'd much rather take my cue from Montaigne, who wrote some 500 years ago:

"If I were a writer of books, I would compile a register, with a comment, of the various deaths of men: he who should teach men to die would at the same time teach them to live."

One day it will come for me too; having already reached middle age, it's only by a paltry factor of 2 that my life can be longer or shorter - not a huge factor in the wider scheme of things. But for whatever time it lasts: have we lived well?

Ben Kosan

This is a silly downplaying of these issues, especially that of child mortality. Sure, the idea of dying in old age after living a full life sounds nice and poetic. But that's the opposite of what is happening in these emergencies, which involve lives of suffering and premature death.

skaladom

Most of these things are the way things have been forever, and in some cases, they've actually massively improved. Child mortality basically fell off a cliff in the 20th century, from ~50% down to 4.3%, and AFAIK it's still falling. In rich countries it has reached 0.4%, which is ~1% of what it used to be. Even if technology keeps improving I don't think it can reach 0; biology just doesn't do that.

My wider point is that we don't have any kind of objective set point for how much pain in the world is acceptable and a natural counterpart of life, along with all the good stuff. So people set their points all over the place. Some declare that suffering is so terrible that nearly all lives are net negative, and so a barren world would be literally better than this. Others find sentiency itself such an amazingly good thing that any suffering involved is more than worth it. As long as you tether your judgment to abstract ideas, you can make it look as good or as bad as you want, because they're infinitely flexible.

I don't have a precise answer myself. I doubt there even is one. But I know that, given the choice between a rich ecosystem (e.g. Earth before humans came with their technology and dominated the land) and a barren landscape like Mars or the Moon, I'll choose the rich ecosystem every day of the week. And that includes all sorts of animals dying of exposure, sickness, predation, etc. Wild animals FTW, I'm personally glad every time some land is left undeveloped for them to roam. And a friendly hello to all the worms working underground to keep the soil fertile, to the daffodils and dandelions sprouting in early spring, and to the dogs and cats pissing on them.

Of all the things BB lists, the only one I agree is a global disgrace is factory farming. It really has no redeeming feature — the animals have it much worse than in nature; the suffering wasn't there to begin with, but was willfully added by humans; and to top it all, we don't even need it, we'd presumably be able to feed ourselves without doing that.

For the rest, sure, three cheers for anyone working on improving outcomes, reducing suffering and prolonging life. It's obvious, I don't think it needed saying. There's a major industry devoted to that, with no lack of qualified people and funding. But my point remains that life and death remain two faces of the same coin, and that we gain nothing, and lose a lot, when we delude ourselves that we can ignore death or push it away indefinitely.

FLWAB

None of those (except maybe AI) are emergencies. Emergencies are "emergent": as in unexpected, new, a serious thing that wasn't happening that is now happening, etc.

Now that may seem like a pedantic point (and it is) but I think it's a relevant one, particularly when it comes to wild animal suffering. Animals have been dying since animals have existed. Dying in pain is kind of an essential part of being an animal. To be an animal is to live, to feel, and to die. To say that animals dying is an emergency is to say that animals existing is an emergency. More particularly, it's saying that pain existing is an emergency.

Do I think it's sad that animals suffer and die? Yes. But it's part of being alive, and to be alive is good. Death and suffering are part of the whole living deal. I watched my grandfather die painfully over several weeks. I watched my grandfather-in-law die even more slowly and painfully over several years. When I die, I expect it will be painful as well. Is that an emergency?

I don't see any way to get rid of suffering and death without getting rid of life as well. As in, if wild animals suffering and dying is not an acceptable state of affairs, the only real solution is to euthanize all the animals until there are none left.

Ibrahim Dagher

Do you think death is intrinsically bad?

Abe
Mar 20 (edited)

Superintelligence shouldn't be on this list of ongoing emergencies. The LLM architecture has inherent limitations that none of the companies have any idea how to transcend; a survey of high-level AI researchers indicated that something like 80% of the experts believe that the current LLM architecture/neural nets will never lead to AGI. Altman and Co. have bet the farm on their ability to somehow make a miracle happen by pouring inconceivable amounts of capital (including the human kind) into their word-predictors, but there's no indication that they'll succeed. Microsoft is scaling back its construction of data centers, CoreWeave is a time bomb, and all of the AI companies (except Nvidia) are losing money. This isn't a revolutionary technology; it's a fairy tale of infinite growth that has bewitched a stagnating and desperate tech sector.

Even some of the LessWrong folks have lost faith in LLMs. https://www.lesswrong.com/posts/oKAFFvaouKKEhbBPm/a-bear-case-my-predictions-regarding-ai-progress

This isn't to say that a true AGI wouldn't be cause for concern, but the current machinations of OpenAI and the like are a bunch of hooligans with a smoke machine pretending they've lit a fire.

Spherb

A large chunk of your reasoning relies on the assumption that we probably won't make any breakthroughs in DNN architecture within the next few decades, even though we're pouring well over ten times as much effort into AI research right now as we were five years ago. It's possible that it'll turn out to be extremely difficult to improve on transformers for some reason or another, sure, but it's also possible that we'll see major progress within the next five years. I don't think anyone can justify confidence in either direction right now.

If NASA announced that a 30-mile-wide asteroid had a 10% chance of hitting Earth within the next decade, would that be worth declaring an emergency over even if their asteroid detection mechanism was new and possibly unreliable? Even if it was too early to go into full-blown panic mode, wouldn’t it at least justify dumping a significant amount of research effort into asteroid detection and deflection until we got a more conclusive answer?

Rishtar Preet

They don't have many years left to do it. This is an impossibly expensive operation, and once it becomes clear that they're betting on unknown unknowns, the plug gets pulled. After that, AI research scales back enormously, and all the wasted money and lies from the first go-around will stay the hand of anyone who might put the plug back in. I think the whole LLM fiasco actually slows the path to AGI in the long term; it's all a big sideshow.

Vikram V.

Ok, but doesn’t this framing prove that the “Security K” attack on effective altruism is somewhat true? It frames everything as an emergency greater than oneself.

Taking them "seriously" just means that anything you actually care about is irrelevant to the utilitarian calculus. And if it's an "emergency," then you can suspend all standards of decency and societal convention in favor of whatever cause you decide, based on a wide array of assumptions, has infinite utility at stake.
