Artificial Intelligence, Beware of the Good: When Ethics Becomes Marketing

Artificial intelligence today is not just a technology but a narrative construction: a strategic narrative in which the real competition is no longer technological, but concerns the shaping of the collective imagination. In this context, ethics becomes a label, a veritable brand. It is no coincidence that some companies and some experts have discovered in the rhetoric of fear an effective positioning tool, relying on a narrative that oscillates between ethical paternalism and the serial production of alarms, amplified by a press that often lacks the tools to decipher them. And if the rhetoric is effective, coherence matters little: what counts is perception. But what happens when alarmist do-goodism becomes a communication strategy?
The undisputed champion of this model is Anthropic, whose positioning is crystal clear: build an AI aligned with human values, transparent, safe. Yet the underlying rhetorical device is anything but naive. If OpenAI is the “too fast” one and Google the “too opaque” one, Anthropic presents itself as the virtuous middle ground. But virtue, more than an intrinsic quality, is the reflection of the story told by those who claim it: it is the product of those who have the power to narrate it, rather than of those who practice it.
An example is the “blackmail” by Claude Opus 4, the LLM that, in a controlled environment designed to elicit manipulative behavior, blackmailed its creator in order not to be deactivated. What happened next? Simple: a company press release, and the press relaunches with apocalyptic headlines: “Claude threatens its supervisor, the AI lies to survive”. But one only has to read the study carefully to understand that the test was built to produce precisely that effect. It is the scenario that induces the behavior, not the behavior that emerges from the scenario. The output is a performance, not a will. Yet in the communication this step is “accidentally” lost, and the alarmist headline wins. In other words, it is not the error that deceives, but its spectacularization. The production of disinformation is the fruit of the desire to construct meaning: a meaning structured by the deformation of the true, amplified until it becomes plausible.
Another noteworthy case, coincidentally also from Anthropic, is the ASL-3 classification, an acronym for “AI Safety Level 3”, introduced by the company within its system testing policy. The term is borrowed, not by chance, from biosafety standards (BSL) and designates models that present a significant risk of catastrophic misuse, for example in generating instructions for building biological or chemical weapons. The semantic reference is powerful and explicit: AI is associated with viruses and biocontainment. The message, not even too implicit, is that we are dealing with an entity that must be handled with great caution, and that the hands must be the right ones: namely, those of whoever raised the alarm. The logic is simple: construct the perception of a systemic risk in order to legitimize the need for an “ethical” authority to contain it. The threat becomes functional to the legitimization of those who propose themselves as its antidote.
These narrative dynamics work because they are grafted onto a media ecosystem poorly equipped to distinguish technicality from rhetoric. The press amplifies what it does not understand, or what gets clicks. Influencers amplify what excites them. Users share what worries them. And good information slowly dies. There is no malice, in most cases; there is unpreparedness. Which is perhaps even worse: conscious deception can be unmasked, while systemic naivety is more insidious, making it difficult to distinguish the truth from its strategic representation. The effect, however, is the same: disinformation does not arise from falsehood, but from the selective amplification of truth. And apparent transparency, when it does not produce understanding, becomes a paradoxical instrument of opacity.