I object to this the same way I object to objections to the repugnant conclusion: it's hard to have intuitions about numbers that large. Is it really possible to imagine someone "gaining more welfare per second than all humans ever have so far"?
It's hard to picture it concretely, but I think that we can have intuitions. For example, I'm very confident that that would be good. I don't think large-number objections mean we can just throw out our intuitions about the repugnant conclusion -- they just undermine them to some degree.
But this is very plausible. No one would say that the marginal displacement each time is worth it. I wouldn’t kill 10^10^1000 humans to create a utility monster.
This leaves aside the bit about fetus rights.
But it becomes especially plausible with fetuses that replacement is fine, particularly when iterated.