SinceritySlop and the death of personalized communication
Dealing with the burden of ultra-personalized LLM spam
“The key to success is sincerity. If you can fake that, you’ve got it made.”
— George Burns

For decades advertisers have wanted the ability to target and personalize messages for individual consumers. Now that capability is available to everyone. For free.
And it's a mess. Here is a discussion through the lens of this week’s personal experience.
I regularly get e-mails from prospective students wanting an inside track on graduate admission with research support, a paid post-doc, etc. Most of these are spam. (The most entertaining was from a few years ago: "you are the only professor I want to work for" -- sent to every professor in my department in the "to" line. Yes, we all noticed.)
We get a steady stream of queries and are overloaded in general. Nonetheless, in the past I would sometimes respond to a query that was clearly sincere and showed the sender had done their homework specific to my research group -- usually someone with relevant industry experience who wanted to go back to school for an advanced degree. Every once in a while the most impressive would get a reply, or possibly even an offer. It was usually easy to differentiate spam from genuine personal e-mail in a quick skim. I know all professors have to figure out how to handle these queries (approaches vary), and I imagine most people with any sort of public profile deal with the same. That is fine; it is an expected part of building one's personal brand and comes with the territory for some jobs.
This week I got yet another query, this time from a student seeking admission + full support who had not been accepted in the usual Fall admission cycle. It checked all the right boxes -- mentioned interest in my specific research area, gave specific references and discussion to show they had looked at my work, and explained what they wanted from a degree and why they thought I might be a great fit as an advisor. Not too wordy. Not too brief.
So much enthusiasm! So nicely written!
So sincere! … and … SO FAKE!
The easy tell was that one of the URLs to my work does not exist -- and points to a server that does not exist either. Clearly a chat "hallucination" (I think BS is a better term).
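As an aside, that particular tell is cheap to automate. Here is a minimal sketch of the idea in Python (my own toy illustration, not a feature of any real filter): pull the URLs out of a message and flag the ones that fail to resolve. A dead link to a claimed reference suggests a hallucinated citation; a live link, of course, proves nothing.

import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)>\]\"']+")

def dead_links(email_text: str, timeout: float = 5.0) -> list[str]:
    """Return the URLs in a message that fail to resolve."""
    bad = []
    for url in URL_PATTERN.findall(email_text):
        request = urllib.request.Request(url, method="HEAD")
        try:
            urllib.request.urlopen(request, timeout=timeout)
        except Exception:
            # DNS failure, 404, timeout, or malformed URL: flag for a human look.
            bad.append(url)
    return bad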
A few free AI detectors I tried clocked it at 50%-90% AI content. It looks like human-written intro boilerplate prefaced the chatbot material: who they are, where they are, and which semester they want to arrive are pasted in as a prelude to the chat-generated slop. Sincere-sounding, but just babble meant to get me to engage with a student whose real interests are impossible to know, and who seems to be trolling for a professor to hire them regardless of topic, despite a failed application process.
I don’t personally fault them for trying to get a leg up in a highly competitive game. I’m sure there is lots of advice that this is the smart way to stand out from the pack, at least in principle. But they do lose points for seeking a research position via an e-mail with a made-up reference to my own work (which you’d think I would notice!). Regardless, it is a definite sign of the times.
We have entered the age of SinceritySlop.
Thinking about how undergrad college admission essays work, I can only imagine the battle admissions offices have been fighting on this. They have already had to deal with professional essay-for-hire companies. I’ll bet they already look back fondly on the good old days of noticing that an essay had been cribbed from a published set of example essays -- by a student who paid for a ghost-written custom essay (I’m not making that example up). That will only get worse each year with chat-generated and chat-"inspired" admission essays.
Do you accept a student who used an LLM to “inspire” their essay and/or help them express themselves -- who did some prompt engineering to personalize the essay, and who feels the result represents their thinking? If so, how do you differentiate that from an LLM essay from an applicant who was just phoning it in, trying to get into college without putting in the effort? It used to be that you could use a phone/video interview to check things out, but those are already being faked in tech job interviews (real-time video masquerade plus a chatbot interface to feed answers to questions).
The admission folks truly have my sympathy for having to deal with that mess. Of course admission processes can be made better. But when everything except standardized test scores is reverting to a mean of chat-generated BS, I think you’re going to find that test scores end up as the only objective differentiator -- which is exactly what colleges have been trying to get away from for years.
There are no doubt analogies to this dilemma across every profession. Hiring processes are already in disarray in some industries/positions, what with LLM-screened resumes, faked video interviews, and the like. These trends impose an increasingly large cost on everyone who is accustomed to receiving legitimate cold incoming contacts.
It used to be that I would read and consider e-mail clearly sent by someone who had spent their own time composing something worth reading. I guess that time has now passed. I don't have time to do this level of analysis on every e-mail I receive. So if it is a cold outreach e-mail, there is a risk it will go into the SinceritySlop bucket along with the tsunami of other material that has been ramping up -- and that will no doubt get dramatically worse over the next year. I can think of ways individuals might still get my attention, but you’ll need to figure them out yourself, lest they be automated too.
I guess we'll see AI detection showing up as a spam filter capability, but it will be tricky as always. Companies are already being hounded by management consultants to add SinceritySlop to their legitimate communications, so differentiation will be difficult. And one of the AI detectors I tried has a "humanize" button to rephrase the input to get around other AI detectors. E-mail spam detection is getting much harder as collateral damage.
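If I had to guess at the shape of such a filter, it might look something like this toy Python sketch. The ai_likelihood detector here is a deliberately crude stand-in (a handful of stock phrases); real detectors are far more elaborate and, as noted, easily evaded. The thresholds, folder names, and routing rule are all invented for illustration only.

STOCK_PHRASES = (
    "i hope this email finds you well",
    "i am writing to express my",
    "deeply resonates with",
    "aligns perfectly with",
)

def ai_likelihood(text: str) -> float:
    """Toy stand-in for a real AI-text detector: count stock phrases."""
    hits = sum(phrase in text.lower() for phrase in STOCK_PHRASES)
    return min(1.0, hits / 2)

def route(message_text: str, sender_known: bool) -> str:
    """Invented routing rule: quarantine high-scoring cold contacts."""
    if sender_known:
        return "inbox"  # known correspondents get the benefit of the doubt
    if ai_likelihood(message_text) > 0.8:
        return "sincerity_slop"  # cold contact + LLM-ish text: quarantine
    return "inbox"

print(route("I hope this email finds you well. Your research deeply "
            "resonates with me and aligns perfectly with my goals.",
            sender_known=False))  # prints: sincerity_slop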
I guess we’ll end up with chat programs reading our e-mail to tell us what to pay attention to. I’m sure I’m not the first to observe that the obvious endpoint process is:
1. Human sender types a 10-word prompt.
2. Sender LLM creates a polite 250-word e-mail.
3. Message is sent to the receiver.
4. Receiver LLM condenses the 250-word e-mail into a slightly different 10-word summary.
5. Human receiver reads the 10-word summary, which might or might not be sufficiently close to the sender's original 10-word prompt to clearly communicate the message.
Eventually, someone creates the movie: “You’ve got LLM-mail” and hilarity ensues.
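For the morbidly curious, here is that round trip as a toy Python sketch. The expand and condense functions are placeholders standing in for the sender's and receiver's LLMs; real systems would make API calls, and the received summary would only approximate the original prompt.

def expand(prompt: str) -> str:
    """Sender-side LLM stand-in: pad a terse prompt into polite filler."""
    filler = "I hope this message finds you well. " * 20  # ~250 words of fluff
    return f"Dear Professor, {filler}In short: {prompt}. Warm regards."

def condense(email: str) -> str:
    """Receiver-side LLM stand-in: boil the e-mail back down to a summary."""
    return email.split("In short: ")[1].split(".")[0]  # keep only the gist

prompt = "seeking funded research position, any topic, please respond soon"
print(condense(expand(prompt)) == prompt)  # True here; only approximately true with real LLMs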
I have seen a few reasonable uses for ChatGPT-style technology. I use it, sparingly. But not for my serious writing (not for the text of my posts, and not for books) because I believe it will corrode my ability to think and communicate clearly. Others will have different opinions, and that’s fine in principle.
However…
It is important to consider both the benefits and the costs when adopting new technology. I think we're only just beginning to see the negative social costs that will be imposed by LLM technology adoption. As with many high-tech innovations, there are incentives to over-adopt new technology when the adopter gets the benefits and others carry the burden of negative externalities. It is easy to get trapped into a variation on the Tragedy of the Commons: https://en.wikipedia.org/wiki/Tragedy_of_the_commons
I believe that SinceritySlop will carry a high societal cost, leading to pervasive corrosion of various forms of social communication and of the development of trust. That corrosion is happening in other ways as well, as a result of misuse, abuse, and thoughtless use of LLM capabilities. Because, as has been observed over the years:
“The key to success is sincerity.
If you can fake that, you’ve got it made.”
(Commonly credited to the comedian George Burns. For a more detailed attribution, see: https://quoteinvestigator.com/2011/12/05/fake-honesty/ )
P.S.: I haven’t hired/recruited in a while as I follow my glide-slope to impending retirement from the university. My CMU web page has said for a while that I am no longer recruiting students or postdocs, so I don’t leave them in the lurch when I retire. But the chat programs summarizing my research and publications for prospective students/postdocs do not seem to have figured that part out.
Social media like X and FB already destroyed the trust and the fascination I initially found in connecting with people I had never met before, and sometimes carrying on a meaningful conversation. That trust is totally gone thanks to those who run social media platforms by gaming the algorithms in pursuit of their “business growth.” LLMs are now doing something far more damaging, because we no longer know whom we are talking to, who wrote the content, or how it was created. Why should we even trust anyone, any claim, any suggestion?
It is interesting to note the quote’s evolution and tech-bro apotheosis as “Fake it ‘til you make it.”