How spam has trained me
For the last couple of decades, one common feature of a lot of spam has been nonstandard use of English. For example, when I get email that claims to be from a major American corporation but is full of nonstandard grammar and spelling, that’s a signal that the email is very unlikely to really be from that corporation.
And it now occurs to me that dealing with spam like that may have helped train me to consider certain uses of language, such as complete sentences with standard grammar and spelling, as a signal of authoritativeness. I’ve always had that kind of reaction; but the new-to-me thought this morning is that maybe many years of learning to detect spam have further strengthened my association between standard English and authoritativeness.
That association is problematic in various ways—in various contexts, it can be classist and/or racist and/or ableist, etc.
But setting that issue aside, it’s now a problem for me in another way:
It contributes to my gut reaction that AI-generated text sounds authoritative.
Or to put that in a shorter, punchier way:
All those years of spam may have made me more vulnerable to believing GPT’s lies.
For some further thoughts from me and others, see comments on the Facebook version of this post.