To be honest, I’m not exactly sure how to do the math relevant to this point. We could make the point about any existential risk, e.g. there’s an X chance that all our nukes will spontaneously malfunction and detonate; if that happens even once we’re all dead, and every time it doesn’t happen there’s still an X chance that it will happen going forward.
My intuition is that this is misleading. For one thing, the argument is too strong, tending to show that anything with any chance of occurring eventually will occur. For another, each ‘chance’ may already be time-bound: some statements of the form “there’s an X chance that Y will happen” already take all the chances into account; they really mean “there’s an X chance that Y will ever happen”.
Third, even if this is a case that’s best considered as a series of discrete ‘chances’, the outcome of each ‘chance’ changes the game, so it’s not really an iteration on the same thing. For example, if we successfully create superintelligent and cooperative AIs, that should dramatically decrease the risk posed by the possibility of superintelligent and uncooperative AIs.
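To put some rough arithmetic behind the first two points (this is just my own back-of-the-envelope sketch, and it assumes the per-period risks are independent): if there is a fixed probability p of catastrophe in each period, the chance of getting through n periods unscathed is (1 - p)^n, which shrinks toward zero as n grows. That is the sense in which the argument is too strong; run it long enough and any constant recurring risk becomes a near-certainty. But if the per-period risk falls over time, say the risk in period k is p/2^k, then the total probability of the catastrophe ever happening is at most p/2 + p/4 + p/8 + ... = p, bounded below certainty no matter how many periods pass. So whether repeated ‘chances’ add up to doom depends entirely on whether the per-chance risk stays constant or shrinks, which is exactly what a statement like “there’s an X chance that Y will ever happen” is trying to build in.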
So, you make an interesting point, and it’s one on which I acknowledge my ignorance and would like to hear more, but for the reasons above I’m not yet convinced that it undermines my position.
Which experiences? I don’t think there are particularly many historical examples of more intelligent species wiping out less intelligent species. Granted, humans have driven a ton of species to extinction, but humans have been around for a relatively short time, and there have been many non-human-caused extinction events (even mass extinction events). And outside of humans, intelligence doesn’t seem to have been that dominant evolutionarily. Indeed, even in cases where humans have driven species to extinction, human intelligence was generally only an incidental factor, in that it allowed us to out-compete them. It’s also not clear that intelligence is always selected for, or that Homo sapiens drove out other human species primarily by outsmarting them individually, rather than e.g. by being more aggressive or more social.
Moreover, I don’t know how well biological examples map onto abiological ones. Evolved species like humans have particular incentives that may make wiping out rival human species a good strategy, whereas an AI, because it does not reproduce or even die (in the conventional sense), does not face the same incentives or pressures. The way we think, the things we worry about, are not necessarily objective in the ways we often consider them to be. Our emotions, for example, are evolved traits, and may have no place in a superintelligent AI. That could significantly affect the risks posed by an AI. The discussions I see tend to anthropomorphize AI as having human-like traits and acting on them. To the extent our concern is based on an appeal to contingent human-like mental habits, it seems misplaced.