1/ Bullshit Everywhere
Bullshitting requires a theory of mind: the ability to put oneself in the place of one's mark, anticipate what impression a message will make, and adjust the message or behaviour accordingly. This applies to humans and other living creatures alike.
“Full-on bullshit is intended to distract, confuse, or mislead—which means that the bullshitter needs to have a mental model of the effect that his actions have on an observer’s mind.” (quote)
Bullshit exists because people want something from others, see bullshit as a means of obtaining it, and use the available tools of language to carry out the plan.
Being caught lying results in social sanctions, including the loss of relationships and trust, and even physical assault. Paltering, however, is the use of technically true statements to lead someone to a wrong conclusion or decision. The social consequences of paltering are less severe: there seems to be an implied benefit of the doubt, since what people literally say often differs from what they mean.
Implied meaning (implicature) lets us say things without saying them, e.g. “this movie wasn’t terrible” to mean “it was so-so”. This leaves huge wiggle room to say something and later claim that the conclusion was never implied (paltering). The choice of words is a message in itself, and it matters.
Weasel wording exploits the gap between literal meaning and implicature to avoid taking responsibility. It helps save face, dodge negative consequences, or escape having to deliver on a promise. E.g., “… up to 50% in savings” when the median savings are 5%, or a company’s carefully worded claims that it doesn’t use child labour.
Bullshit can be delivered via self-regarding signals (“I’m X”) or other-regarding signals (“the world is Y”), the latter of which, with very few exceptions, is unique to humans. Describing the world (or what you do) sends signals not just about the object being described but about you as well: your character, social status, sense of humour, etc.
Sometimes bullshit is produced without any regard for truth; the objective is simply a good story. To most consumers the truth doesn’t matter; what matters is the attention and the laughs.
Brandolini’s principle: “The amount of energy needed to refute bullshit is an order of magnitude bigger than [that needed] to produce it.” (quote) One of the most notable examples is the claim that “vaccines cause autism”: it has been refuted many times over (at a huge monetary cost to society), yet many people still believe it wholeheartedly.
It also takes less intelligence to create bullshit than to clean it up. Sadly, bullshit spreads faster than the attempts to curb or refute it. Many members of the attention economy, whether through low intelligence or a lack of moral scruples, will stop at nothing to get their share of clicks, page views and likes. Add to this the mass media’s unfortunate habit of running bullshit stories on their front pages and only later (once the damage has been done) publishing corrections, retractions and apologies in much harder-to-spot sections.
2/ Medium, Message and Misinformation
Technology has amplified the bullshit problem, with social media platforms offering virtually unlimited ways to spread all kinds of information. Everyone has their soapbox, and the amount of information produced far exceeds our capacity to consume and process it.
The lower the cost of accessing information, the wider the range of topics available and the lower the objective quality of that information. This leads to distraction, and distraction is a form of misinformation.
The old media was monetised via subscriptions; the new media is mostly click-driven. Optimising content for clicks (clickbait, “empty calories”) is completely different from optimising for a long-term financial and emotional relationship with a newspaper and its journalists.
Even major newspapers are prone to manipulative headlines, which often contradict the content of the article itself. The headline arms race (a search for ever-new lows) has another unpleasant effect: many people read only the headlines, not the articles themselves, so they feel in control of the content they consume while the opposite is true. For publishers it makes little sense to put the main message into the headline, since that reduces the chance the article will be opened (the fear of telling too much too soon).
Successful headlines don’t convey facts; they promise an emotional experience: “… it will make you fall in love”, “… you will gasp”, “… this will melt your heart”, and so on.
The victim is professional reporting: simply stating the truth is no longer enough.
Balanced reporting is no longer possible because people prefer echo chambers that reinforce their own views; hence the obvious partisanship in most mass-media reporting. Social media amplifies partisan and hyperpartisan content, as it attracts the most engagement and shares.
The content people read and share first of all sends a signal about those people: their value system, beliefs and affiliations. It is less about the content itself than about sending the right signal. The medium IS the message.
Communication becomes an affirmation of commitment to our group and echo chamber; as a result, who is talking becomes more important than what is being said.
Content personalisation, the crown jewel of every social media platform, is about increasing engagement metrics and the user’s time on the platform (reducing churn), while (more recently) also navigating the minefield of public scrutiny and compliance: trying not to get caught amplifying the wrong message or indoctrinating teens into violence or suicide.
Recommendation algorithms tend to be biased towards amplifying increasingly extreme and radical content. Sadly, it’s a one-way road.
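To make the mechanism concrete, here is a minimal toy sketch (mine, not from the book; the engagement model and every number in it are illustrative assumptions) of how such a ratchet can emerge when a greedy recommender chases engagement that happens to correlate with extremity:

```python
import random

# Toy simulation (illustrative assumptions only, not from the book).
# Items have an "extremity" score in [0, 1]; engagement is assumed
# to increase with extremity.

def engagement(extremity):
    """Assumed engagement model: more extreme content draws more clicks, plus noise."""
    return extremity + random.gauss(0, 0.05)

def recommend(catalogue, last_seen):
    """Greedy recommender: offer the item closest to slightly *above* what
    the user engaged with last. This upward nudge is the bias in question."""
    target = min(1.0, last_seen + 0.05)
    return min(catalogue, key=lambda item: abs(item - target))

catalogue = [i / 100 for i in range(101)]  # extremity scores 0.00 .. 1.00
history = [0.10]                           # the user starts with mild content

for _ in range(30):
    item = recommend(catalogue, history[-1])
    # If the more extreme item engages at least as well, the loop reinforces itself.
    if engagement(item) >= engagement(history[-1]):
        history.append(item)

print(f"extremity drifted from {history[0]:.2f} to {history[-1]:.2f}")
```

Because the loop only accepts moves that raise engagement, and engagement (by assumption) rises with extremity, the trajectory almost never moves back down: exactly the “one-way road” described above.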
Misinformation is the spreading of false claims that are not deliberately designed to deceive; disinformation involves the intent to deceive. In the 24-hour news cycle, whoever publishes the breaking news first gets the most clicks, so fact-checking can wait: a clear example of perverse incentives.
Social media is also very good at spreading propaganda, as a person sharing a message is trusted more than a random stranger. But the goal of propaganda is no longer to convince people of specific untruths; it is the “firehose strategy”: sending so many messages that readers reach the point of fatigue where they are no longer able or willing to separate fact from fiction. This erodes public trust in institutions and lowers participation in democratic processes.
Fake news is much less about propaganda than about generating advertising revenue. There’s no limit to how low people can go, and the money doesn’t stink.
Bots and fake accounts (with realistic AI-generated profile photos) can spread the right messages to their unsuspecting subscribers, or simply send targeted spam (nudging people to vote a certain way, gaming search engines’ suggestions, etc.). And let’s not forget about deepfakes.
Tech companies try to fight misinformation and disinformation, in the process becoming arbiters and judges, which is a dangerous road to walk. Governments try to stop certain messages and topics from spreading, which can be seen as hampering free speech.
Part 2.