LinkedIn Has a BS Problem, and It’s Not Just AI Slop
Though Harry Frankfurt could boast many remarkable achievements during his long lifetime—not least being named Professor Emeritus of Philosophy at Princeton University back in 2002—it was for his 2005 book On Bullshit that he perhaps became best known.
In it, the highly respected moral philosopher draws a clear distinction between lying and BSing—the first an act of conscious deceit, the second any form of communication intended to persuade without due regard for truth. And unlike a liar (who knows they're hiding the truth), it is the bullshitter—the person for whom concepts like deceit and truthfulness are essentially irrelevant—who emerges as the more insidious.
I bring it up only because we currently exist in a time of abundant BS, a phenomenon that proliferates unchecked through the myriad of social media channels that permeate our daily lives.
And that kind of crap is thriving on LinkedIn.
On any given day, the average user of the world's largest business platform is sluiced by a tidal wave of pseudo-coaching waffle. With tedious frequency, one finds oneself dragged involuntarily into a kind of discount business supermarket filled with self-important, AI-flavored quasi-wisdom packaged into oracular mini-bites. Given that recent studies describe clicks as LinkedIn's primary survival currency, it's reasonable to assume that these nuggets are designed less to improve the reader's professional prospects than simply to gain that all-important engagement flicker. This in turn allows people to live for another day under the capricious eye of the algorithm itself.
But the question is, why? When did half-baked wisdom become the de facto communication currency on a platform that once prided itself on its sheer, stripped-down utility?
The first conclusion must be that it’s simply a product of demand. After all (so the argument goes), these posts would be quickly downranked by the algorithm if the majority took exception to them or simply ignored them.
Yet this inference seems misguided. The high levels of engagement on these posts derive not from their profundity or transformative professional impact, but from the desire of commenters, likers, and reposters to have their own profile rankings temporarily boosted by interacting with a post that has blockbuster written all over it. Consider that vague urgings about authenticity or success, scribbled on Post-it notes and delivered by photogenic social media interns, will vastly outperform any market insight authored by a West Coast tech exec of some two decades' standing.
In which case, if the message itself has become little more than an arbitrary vehicle for racking up personal algo-points, our second explanation must lie within the algorithm itself.
And in this context, pseudo-wisdom serves several useful purposes. It creates a safe social media environment for a wider audience, its broad, generalized themes free from the type of controversies that have so blighted platforms such as X in recent months. It also acts as a catalyst for reams of non-committal, positive (and often automated) debate.
Add to this incentive structures such as the Top Voice program, creator mode, and follower-building mechanics that reward open-ended engagement on its own terms, and you have a template in which bland positivity (dressed in the loose trappings of educated thought) is prized above all else.
The final fuel for this fire is, I’m afraid to say, AI and automation.
It has never been easier, thanks to a widening variety of AI tools such as MagicPost and EasyGen, to research, recycle, and deliver one's message with the barest of human touches before it appears on a Tuesday morning feed. According to Originality.ai, an estimated 54 percent or more of long-form LinkedIn posts are now AI-generated (much of this output increasingly, and derisively, referred to as AI slop). And the easiest message to craft via this pipeline, needless to say, is the type of vague business advice that requires no unique insight, carries no tangible authority (other than one's follower count), and offers no opportunity for factual interrogation.
After all, who can argue with ephemeral statements about common leadership flaws, life-balance hacks, or how to find your inner entrepreneur?
But at the end of the day, maybe we're just paying too little attention to basic human needs. In their curiously titled study, "On the reception and detection of pseudo-profound bullshit," Gordon Pennycook and his colleagues describe how people are inherently wired to find vague statements meaningful, provided they're delivered with confidence.
In a jittery job market (exacerbated, ironically, by AI), people are quite naturally primed for easily digestible heuristics about success, resilience, and leadership, even when those heuristics derive not from a credited knowledge base but from a less-informed layman eager to entrench their own social media presence.
As Harry Frankfurt himself so accurately summed up: “It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.”