This argument (and I take the first half of Meno_'s post to be making the same point) isn’t wrong, but it cuts both ways. If we can’t know the future then we can’t know the future, and postulating that AI will or will not be a threat is pointless. I think the radical agnostic position is too strong – we can and do make predictions about the future with some degree of success – but a healthy uncertainty about any prediction is appropriate.
But as I say, it cuts both ways: the argument that AI will be a threat is exactly as diminished by the agnostic appeal as is the argument that AI will not be a threat.
So, while I acknowledge the validity of the point, it isn’t a strike against my particular position, but rather against the whole conversation. I’m glad to concede that our predictions are necessarily limited. But I don’t agree that they are impossible, and where, and to the extent that, we can make some prediction, the prediction should be that AI is not that dangerous, given what we know about intelligence.
Meno_, you mention cyborgs, and there I am not so optimistic. Inequality is already becoming more and more self-reinforcing, as wealth buys health and education, which in turn create more wealth. Humans who can afford to upgrade themselves will do so in ways that allow them to afford yet more upgrades. Runaway inequality is a real threat. Hopefully upgraded humans will recognize that and seek a fairer distribution of resources, but I am not optimistic about that either.