I like where I was going with this, but not how I went about it. I’ve had some years to reformulate it.
What I was apparently after was “informal fallacies.”
I have no interest in building software that detects formal fallacies, such as: all cows are animals, and all dogs are animals, therefore dogs are cows.
I’m not interested in unraveling those in popular dialogue.
What I’m interested in doing is creating a conversational AI that can take a user along a path in a conversation about a given topic, and quickly determine
whether the user is engaging in an informal fallacy, i.e. one where “the contents of an argument’s stated premises fail to adequately support its proposed conclusion.”
While some informal fallacies can be expressed as syllogisms, I’m not interested in coding for that. I’d rather present multiple-choice answers for
where the user lies on the spectrum, in a way that lays bare the exact fallacy being employed to take on the assertion. It also wouldn’t be a dead end,
but rather a Socratic journey that self-cleanses until you reach assertions with no informal fallacies attached; or perhaps the best we can do is minimize them.
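To make the mechanics concrete, here’s a minimal sketch in Python of what one pass of that Socratic loop might look like. Every name in it (Probe, Option, socratic_walk) is hypothetical, and the real work of generating probes and classifying answers would sit behind the two callables, probably in a language model or trained classifier.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Option:
    text: str               # a position the user can choose
    fallacy: Optional[str]  # the informal fallacy that choice would expose, or None
    restated: str           # the assertion, restated once that choice is made

@dataclass
class Probe:
    question: str           # the multiple-choice question put to the user
    options: list[Option]

def socratic_walk(
    assertion: str,
    next_probe: Callable[[str], Probe],  # builds the next probe for a given assertion
    ask: Callable[[Probe], Option],      # presents the probe and returns the chosen option
    max_depth: int = 10,
) -> list[tuple[str, Optional[str]]]:
    """Walk an assertion through successive probes, recording any informal
    fallacy surfaced at each step, until a step comes back clean or we hit
    max_depth, in which case the best we could do was minimize them."""
    path: list[tuple[str, Optional[str]]] = []
    for _ in range(max_depth):
        choice = ask(next_probe(assertion))
        path.append((assertion, choice.fallacy))
        if choice.fallacy is None:
            break                        # a clean assertion; stop descending
        assertion = choice.restated      # carry the cleansed assertion forward
    return path
```

The point of separating next_probe from ask is that the fallacy-spotting brains can live in a model, while the conversational surface stays a plain multiple-choice exchange.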
Example: An Israeli friend is visiting you in New York. You advance an observation about how Israel should handle a given policy. Your friend deems your observation incorrect simply because you don’t live in Israel, and therefore have no “right” to comment.
I think this might be called Argument from Authority, i.e. legitimizing or illegitimizing a claim based on the claimer’s authority, or lack thereof, instead of examining the content of the claim itself; it also shades into ad hominem, since the claim is dismissed by attacking the claimer’s standing rather than the claim.
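Encoded as data for the sketch above, that exchange might look something like this; the wording of the options is entirely mine and only illustrative.

```python
# Hypothetical encoding of the visiting-friend exchange as a single
# multiple-choice probe (plain data, mirroring the earlier sketch).
friend_probe = {
    "question": "Why is the observation about Israeli policy incorrect?",
    "options": [
        {
            "text": "Because the speaker doesn't live in Israel and has no right to comment.",
            "fallacy": "judging the claimer's standing instead of the claim's content",
        },
        {
            "text": "Because a specific premise of the observation is factually mistaken.",
            "fallacy": None,  # this answer engages the content itself
        },
    ],
}
```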
I think argumentum ad populum, tu quoque, ad hominem, the excluded middle, argumentum ad absurdum, red herrings, confirmation bias, equivocation, hasty generalization, argumentum ad Hitlerum, post hoc ergo propter hoc, moving the goalposts, and an unkindness of others, maybe over a hundred in all, flock our rhetoric, each fallacious vulture with its own name and subtlety, and are used and abused daily in places as common as home and school, but also in important places like media and government.
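Any software that names these would need them catalogued somewhere. Here is the barest start of such a catalogue, with one-line glosses that are mine rather than canonical definitions.

```python
# A tiny, obviously incomplete starter catalogue of informal fallacies;
# a real one would run past a hundred entries, each with examples and probes.
INFORMAL_FALLACIES = {
    "argumentum ad populum": "treating popularity as evidence of truth",
    "tu quoque": "deflecting a charge by pointing at the accuser's own conduct",
    "ad hominem": "attacking the person instead of the claim",
    "false dilemma": "presenting two options as if no middle ground exists",
    "red herring": "diverting the argument to an irrelevant issue",
    "equivocation": "shifting between meanings of a word mid-argument",
    "hasty generalization": "drawing a broad conclusion from too few cases",
    "argumentum ad Hitlerum": "discrediting a position by associating it with Hitler or the Nazis",
    "post hoc ergo propter hoc": "assuming that what came first caused what came after",
    "moving the goalposts": "raising the standard of evidence after it has been met",
}
```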
We know that the outcome of our world closely relates to the quality of our conversations. We also know that our conversations are riddled with informal fallacies. We have the ability to create software that can unravel the latter, so why don’t we build it and use it to “certify” our leaders and commentators, if not ourselves? Again, I’m not talking about formal logic.
Having this handled by an AI might take the emotional, defensive reactions out of the equation.