Can’t hear what isn’t said, James. But I’m happy to make my response to your argument clearer if it will help you point out what’s wrong with it.
This argument that you’ve made cuts both ways:
To the extent that’s true, both optimistic and pessimistic predictions about the threat posed by future AI are “seriously dubious”.
But you go on to imply such a prediction:
There, you are implicitly “predict[ing] the potential threat of something much greater than yourself before experiencing it”, i.e. predicting that the future relationship between humans and AIs will resemble the past and present relationship between monkeys and humans. By your own standard, that prediction is “seriously dubious”. You urge us to “[l]ook into history”, but looking into history does not somehow escape your own argument that “predict[ing] the potential threat of something much greater than yourself before experiencing it, is seriously dubious.”
Next, you offer further, yet more oblique, exhortations to “look into history”, suggesting that my argument is equivalent to encouraging someone to jump off something (presumably something dangerously tall) on the grounds that maybe they won’t die even though everyone else has. My response to this strawman was to point out that not everyone who has been optimistic about things much greater than themselves has died: dogs, I note, might have taken your pessimistic view of the prospects of working with humans, and had they done so they would have been wrong, as dogs have thrived as a species by cooperating with humans.
So we are left with two competing anecdotes, two imperfect analogies for the situation we’re actually discussing, pointing to opposite conclusions. We aren’t monkeys, we aren’t dogs, and AI isn’t humans. Anecdotes are a useful way into a problem, but at some point their shortcomings do more to mislead than to elucidate the question. We are past that point.