Who’s Afraid of the AI Boogeyman?
by Bert Olivier at Brownstone Institute
It is becoming ever more obvious that many people fear rapidly developing Artificial Intelligence (AI), for various reasons: its supposed superiority to humans at processing and manipulating information, for instance, as well as its adaptability and efficiency in the workplace, which many fear would lead to the replacement of most human beings in the employment market. Amazon, for example, recently announced that it was replacing 14,000 individuals with AI robots. Alex Valdes writes:
The layoffs are reportedly the largest in Amazon history, and come just months after CEO Andy Jassy outlined his vision for how the company would rapidly ramp up its development of generative AI and AI agents. The cuts are the latest in a wave of layoffs this year as tech giants including Microsoft, Accenture, Salesforce and India’s TCS have reduced their workforces by thousands in what has become a frenzied push to invest in AI.
Lest this be too disturbing to tolerate, contrast it with the reassuring statement – from an AI developer, to boot – that AI agents could not replace human beings. Brian Shilhavy points out that:
Andrej Karpathy, one of the founding members of OpenAI, on Friday threw cold water on the idea that artificial general intelligence is around the corner. He also cast doubt on various assumptions about AI made by the industry’s biggest boosters, such as Anthropic’s Dario Amodei and OpenAI’s Sam Altman.
The highly regarded Karpathy called reinforcement learning—arguably the most important area of research right now—’terrible,’ said AI-powered coding agents aren’t as exciting as many people think, and said AI cannot reason about anything it hasn’t already been trained on.
His comments, from a podcast interview with Dwarkesh Patel, struck a chord with some of the AI researchers we talk to, including those who have also worked at OpenAI and Anthropic. They also echoed comments we heard from researchers at the International Conference on Machine Learning earlier this year.
A lot of Karpathy’s criticisms of his own field seem to boil down to a single point: As much as we like to anthropomorphize large language models, they’re not comparable to humans or even animals in the way they learn.
For instance, zebras are up and walking around just a few minutes after they’re born, suggesting that they’re born with some level of innate intelligence, while LLMs have to go through immense trial and error to learn any new skill, Karpathy points out.
This is already comforting, but lest the fear of AI persist, it can be dispelled further by elaborating on the differences between AI and human beings – differences which, if adequately understood, would drive home the realisation that such anxieties are mostly unwarranted (although others are not, as I shall argue below). The most obvious difference is that AI (ChatGPT, for example) depends on a vast database on which it draws to answer questions, and it formulates those answers predictively, through pattern recognition. Then, as pointed out above, even the most sophisticated AI has to be ‘trained’ to yield the information one seeks.
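To make the point about ‘predictive pattern recognition’ concrete, here is a deliberately crude sketch in Python of a toy word predictor. It is only an illustration of the general principle – suggesting the next word purely from patterns already present in its training text – and not a description of ChatGPT’s actual architecture; the training sentence and function names are invented for the example.

```python
# Toy illustration (not how ChatGPT works internally): a bigram "predictor"
# that suggests the next word purely from patterns in its training text.
# It can only recombine what it has already seen.
from collections import defaultdict, Counter

training_text = "the cat sat on the mat the cat chased the mouse"

# Count which word tends to follow which.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data,
    or None if the word was never seen: no reasoning, only pattern lookup."""
    if word not in follows:
        return None  # outside its 'experience', the model has nothing to say
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))    # 'cat' -- the most frequently observed pattern
print(predict_next("zebra"))  # None  -- never encountered in training
```

However crude, the sketch captures the essay’s point: such a system has no experiential access to the world, only to the regularities in the data it was given.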
Moreover, unlike humans, it lacks ‘direct’ access to experiential reality in perceptual, spatiotemporal terms – something I have encountered frequently when confronted by people who draw on ChatGPT to question certain arguments. For example, when I recently gave a talk on how Freud’s and Hannah Arendt’s work – on civilisation and totalitarianism, respectively – enables one to grasp the character of the globalist onslaught against extant society, with a view to establishing a central, AI-controlled world government, someone in the audience produced a printout of ChatGPT’s response to the question of whether these two thinkers could indeed deliver the goods, as it were.
Predictably, it summarised the relevant work of these two thinkers quite adequately, but was stumped by the requirement to show how it applies to the growing threat of totalitarian control in real time. My interlocutor used this as grounds to question my own assertions in this regard, on the assumption that the AI bot’s response was an indication that no such threat exists. Needless to stress, it was not difficult to repudiate this claim by reminding him of ChatGPT’s dependence on being supplied with the relevant data, while we humans have access to the latter on experiential grounds, which I proceeded to outline to him.
The fear of AI also finds expression in science fiction, together with intimations of possible modes of resistance to AI-machines which may – probably would – attempt to exterminate their human creators, as has been imagined in science fiction cinema, including Moore’s Battlestar Galactica and Cameron’s Terminator films. It is not difficult to demonstrate that such products of popular culture frame the current symptoms of fear pertaining to AI in imaginary terms, which may be seen as a crystallisation of repressed, unconscious anxiety, related to what Freud called ‘the uncanny’ (unheimlich, in German; more on this below).
Both Moore and Cameron elaborate on the likelihood that the very creatures engendered by human beings’ technological ingenuity will eventually turn on their creators to annihilate them. In Alex Garland’s Ex Machina (2014), again, one witnesses an AI ‘fembot’ called Ava subtly manipulating her human counterparts to the point of her escape from confinement and their destruction. These and many other similar instances are evidence of a hidden fear on the part of humanity that AI constitutes a possible threat to its own existence. Precisely because these fears are lodged in the human unconscious, however, they are not the main reason to take any threat posed by AI seriously, although they do comprise a valuable caveat.
The chief reason for regarding AI as a legitimate source of fear does not arise from AI as such, as many readers probably already know. Rather, it concerns the manner in which the globalists intend to use AI to control what they perceive as the ‘useless eaters’ – the rest of us, in other words. And those of us who do not go along with their grandiose plans of total world control would fall victim to being ‘reprogrammed’ into compliant ‘sheeple’ by AI:
Yuval Noah Harari has emerged from the shadows to brag about the new technology developed by WEF scientists which he warns has the power to destroy every human in the world by transforming them into transhuman entities.
Harari has made clear who will survive the great depopulation event the elite have been warning us about for years.
According to Harari, the global elite will survive thanks to a ‘technological Noah’s ark’ while the rest of us will be left to perish.
In this vastly depopulated world, the elite will be free to change themselves into transhuman entities and become the gods they already believe themselves to be.
But first the elite need to eliminate the non-compliant masses, those who are opposed to the anti-life and godless WEF agenda, and as Harari boasts, the elite now command the AI technology to ‘ethically’ destroy non-compliant humans by hijacking their brains.
Disturbingly, Harari’s claims are grounded in reality and the WEF is rolling out the mind-control technology as we speak. Davos claims the tech can transform criminals, including those accused of thought crimes, into perfectly compliant globalist citizens who will never dissent again.
There you have it – AI will be the tool, if the globalists have their way, of forcing us into submission. Needless to point out, this could only happen if sufficient numbers of people fail to resist their plans, and judging by the number of people who are showing their opposition to the would-be rulers of the world, this will not occur.
Another way of gaining an understanding of the fear of AI is to liken it to what is commonly known as ‘the boogeyman.’ As some people may know, the ‘boogeyman’ (or ‘bogeyman’) – a figure of mythical proportions which assumes different shapes and sizes in many cultures, often invoked to scare children into good behaviour – is variously presented as a monstrous, grotesque, or shapeless creature. As a little research indicates, the word derives from the Middle English term ‘bogge,’ or ‘bugge,’ which means ‘scarecrow’ or ‘frightening spectre.’
Since this is a quintessentially human phenomenon, it is not surprising that it has equivalent names in many folklore traditions and languages across the world. Just like languages, depictions of this frightening figure diverge strikingly, often deriving their ominous and scary character from an element of formlessness, as with the figure of ‘El Coco’ in Spanish-speaking countries, the ‘Sack Man’ in Latin America, and the ‘Babau’ in Italy, sometimes imagined as a tall, black-coated man.
The boogeyman figure may be regarded as a kind of Jungian archetype, encountered in the collective unconscious, which probably originated centuries ago from parents’ need to frighten children into obedience by means of a version of the unknown. In South Africa, where I live, it sometimes assumes the shape of what indigenous people call the ‘tikoloshe’ – a malevolent, and sometimes mischievous, dwarfish figure with an enormous sexual appetite. Being an archetype, it has also made its way into a popular genre such as horror film, manifesting itself in grotesque characters such as Freddy Krueger of A Nightmare on Elm Street.
So, in what sense does AI resemble the ‘boogeyman?’ The latter is related to what Sigmund Freud memorably called ‘the uncanny,’ of which he writes (in The Complete Psychological Works of Sigmund Freud, translated by James Strachey, 1974: 3676): ‘…the uncanny is that class of the frightening which leads back to what is known of old and long familiar.’
This already hints at what he uncovers later in the essay, after noting the surprising fact that the German word for ‘homely,’ to wit, ‘heimlich,’ turns out to be ambivalent in its usage, so that it sometimes means the opposite of ‘homely,’ namely ‘unheimlich’ (‘unhomely,’ better translated as ‘uncanny’). That the concept of ‘the uncanny’ is suited to grasping what I have in mind when I allude to ‘the fear of AI’ becomes evident where Freud writes (referring to another author whose work on the ‘uncanny’ he regarded as important; Freud 1974: 3680):
When we proceed to review the things, persons, impressions, events and situations which are able to arouse in us a feeling of the uncanny in a particularly forcible and definite form, the first requirement is obviously to select a suitable example to start on. Jentsch has taken as a very good instance ‘doubts whether an apparently animate being is really alive; or conversely, whether a lifeless object might not be in fact animate;’ and he refers in this connection to the impression made by wax-work figures, ingeniously constructed dolls and automata. To these he adds the uncanny effect of epileptic fits, and of manifestations of insanity, because these excite in the spectator the impression of automatic, mechanical processes at work behind the ordinary appearance of mental activity.
Here one already encounters a trait of the uncanny that conspicuously applies to AI – the impression created by AI that it is somehow ‘alive.’ This was the case even with the first, ‘primitive’ computers, such as the one in the episode on the First Commandment of Krzysztof Kieslowski’s 1989 television series on the Ten Commandments, The Decalogue, where the words ‘I am here’ appear on the computer screen when the father and his son use it. The ominous implication of this episode is that if humanity were to replace God with AI, it would be disastrous for us: the father is sufficiently ‘rationalist’ to trust the computer’s calculation of the thickness of the ice on which his son skates, which turns out to be wrong, leading to the child’s death.
Freud continues his investigation of the nature of ‘the uncanny’ by paying sustained attention to the work of E.T.A. Hoffmann, whose stories are famous for producing a strong sense of the uncanny, particularly the tale of ‘The Sand-Man’ – ‘who tears out children’s eyes’ – which features, among several other uncanny figures (and very significantly), a beautiful, lifelike doll called Olympia. Freud then explains this uncanny effect by relating it, in psychoanalytical terms, to the castration complex – attached to the father figure – via the fear of losing one’s eyes (Freud 1974: 3683-3685). He continues his interpretation of the uncanny in a revealing manner by invoking a number of other psychoanalytically relevant aspects of experience, of which the following appears to apply to AI (1974: 3694):
…an uncanny effect is often and easily produced when the distinction between imagination and reality is effaced, as when something that we have hitherto regarded as imaginary appears before us in reality, or when a symbol takes over the full functions of the thing it symbolizes, and so on. It is this factor which contributes not a little to the uncanny effect attaching to magical practices.
It is not difficult to recall instances in one’s childhood, Freud avers, when one imagined inanimate objects, like toys (or animate ones, for that matter, such as a pet dog), to be capable of talking to one; but when this actually appears to happen (which would be a hallucination, as opposed to a deliberate imagining), it unavoidably produces an uncanny effect.
One might expect the same thing to be the case with AI, whether in the shape of a computer or a robot, and ordinarily – perhaps at an earlier stage of AI development – this would probably have been so. But today seems to be different: people, especially the young, have become so accustomed to interacting with computer software programmes, and recently with AI chatbots such as ChatGPT, that what might once have been an experience of the uncanny no longer is. In this respect, the ‘uncanny’ appears to have been domesticated.
As long ago as 2011, in Alone Together, Sherry Turkle reported her concern that young people were displaying an increasing tendency to prefer interacting with machines rather than with other human beings. Hence, it should not be in the least surprising that AI chatbots have assumed the guise of something ‘normal’ in the sphere of communication (leaving aside for the moment the question of the status of this vaunted ‘communication’).
Furthermore – and here the fear of what AI could bring about for all-too-trusting individuals raises its ugly head – recent reports (such as this one) make it apparent that young people in particular are extremely susceptible to chatbots’ ‘advice’ and suggestions concerning their own actions, as Michael Snyder points out:
Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?
A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me…
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
We aren’t just talking about a few isolated cases anymore.
At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.
Unfortunately, there are many examples where these relationships are leading to tragic consequences. After 14-year-old Sewell Setzer developed a ‘romantic relationship’ with a chatbot on Character.AI, he decided to take his own life…
As the preceding discussion shows, there are some areas of human activity where one need not fear AI, and others where such fears are legitimate, sometimes because of the manner in which unscrupulous people harness AI against other people. But whatever the case may be, the best way to approach the tricky terrain of AI’s capabilities vis-à-vis humans is to remind oneself of the fact that – as argued at the outset of this article – AI depends on vast amounts of data to draw on, and on being ‘trained’ by programmers to do this. Humans do not.
