
Friendship with cybergolem: political history of AI - an essay

jan 2026
links: publication (in russian)

this essay was conceived as a PhTea talk (a kind of informal free-topic public talk at GSSI). in it, i aimed to articulate and argue for some of my suspicions about and aversion to AI in its widely circulating forms. afterwards, the material prepared for the talk grew into a proper text and was published in russian by the DOXA journal. i’m grateful to Armen, DOXA’s editor and my friend, for accepting my pitch and for his excellent editorial feedback. the English version below is my rough translation of the published text.


History of AI and AI critique

It’s 16th-century Prague. Maharal, the city’s chief rabbi, a renowned Kabbalist and Talmudic scholar, is increasingly concerned about the threat of persecution against his community. Seeking protection, he turns from literary to practical Kabbalah: with clay from the banks of the Vltava River, he molds a human figure and brings it to life through a magical ritual. Golem — a biblical hapax legomenon used in Psalm 139 to denote Adam, unfinished, still in the process of being created by God — becomes the name of the creature. The golem fulfills his purpose and protects the Jewish ghetto. At the end of the story, he becomes unnecessary or, in another version, spirals out of control and threatens to destroy the entire world. He is killed, or perhaps deactivated, returning to the state of inanimate matter.

The legend of the golem can be thought of as a chapter in the prehistory of artificial intelligence. In a Jewish theological and spiritual context, this story is particularly provocative — the creation of the golem explicitly parallels the creation of man. Perhaps that is why emphasis is often placed on the golem’s imperfections and an urgent need for it as a protector, as well as on the outstanding figure of the golem’s creator, be it the legendary Maharal of Prague, Elijah Ba’al Shem of Chełm, or the Vilna Gaon.

Mid-20th century. It’s becoming evident that computers are indispensable for the military — from ballistic calculations and cryptography to streamlining supply lines. The computer revolution is in full swing, at once technological, mathematical, and philosophical. In 1950, Alan Turing publishes “Computing Machinery and Intelligence,” in which he immediately dismisses the question “Can machines think?” as meaningless and proposes the famous “imitation game”. In it, a human examiner corresponds with two interlocutors — a machine and a human. A machine claiming to be intelligent must win the game by convincing the examiner that it is, in fact, human. In the article, Turing declares his firm optimism about the potential of “thinking machines” and thoroughly responds to counterarguments — from theological objections about the divine spark to a contemporaneous version of the “Can a robot write a symphony?” meme.

Even greater optimism is espoused by the organizers of the Dartmouth workshop, who coined the term “artificial intelligence” in 1956: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

It’s easy to see this as a kind of STEM guy arrogance, a belief in the omnipotence of mathematics and the natural sciences, and a corresponding dismissal of the humanities and, in general, less formal kinds of knowledge. But in the 1950s, this attitude at least had some grounding: over the previous 10–15 years, many calculations previously performed by human computers (or rather, “computeresses”, since this was a non-prestigious job done predominantly by women) had been handed over to electronic ones. It was tempting to project this triumph further.

And for some time, the triumph continued. This was the beginning of the era of symbolic AI, which relied on a rigorous mathematical description of the problem in a formal language. Such a description essentially made the relevant knowledge available to the computer, allowing an algorithmic solution. Thus, machines learned to play chess using tree-search algorithms, perform specialized reasoning using a knowledge base and inference rules, and plan the optimal operation of very complex systems. These are examples of tasks previously accessible only to humans; however, many others, such as recognizing handwritten text or freely using a natural language, remained beyond the reach of symbolic AI. It became evident that the notion of a complex task is vastly different for humans and computers: it’s tremendously hard, if not impossible, to program into a computer some skills that almost any human acquires in the first years of their life.
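To make the flavor of symbolic AI a little more concrete, here is a minimal sketch (a toy illustration, not any historical program) of the tree-search idea behind early game-playing systems: the rules of a game are encoded formally, and the machine simply searches the tree of possible moves for the best one.

```python
# A toy illustration of symbolic AI's tree-search idea, using the game of Nim
# (players alternate taking 1-3 sticks; whoever takes the last stick wins).
# The rules are encoded formally and the machine exhaustively searches the
# move tree. This is a generic minimax sketch, not any historical chess program.

def best_move(sticks, maximizing=True):
    """Return (score, move) for the side to act; score +1 means the maximizing player wins."""
    if sticks == 0:
        # The previous player took the last stick and won the game.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2, 3):
        if take > sticks:
            break
        score, _ = best_move(sticks - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(10)
print(f"With 10 sticks on the table, the first player should take {move} (predicted outcome: {score})")
```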

Despite early successes, widely shared optimism, and generous funding from the US military, contradictions emerged even in this classical era, and their echo is heard in today’s discussions. One of the early high-profile AI success stories is ELIZA, created by Joseph Weizenbaum in the mid-1960s. Often called the first chatbot, ELIZA is not much different from ChatGPT in its interaction format: the user types a message on a terminal, the program parses it and generates a response. Developed in an attempt to humanize computer interaction, as a potential replacement for cumbersome and unreadable punch cards, ELIZA worked almost too well.

Playing the role of a Rogerian psychotherapist, the program generated responses in a characteristic reflexive manner, simply rephrasing the user’s message into a clarifying question and seemingly guiding them to further reflection, perhaps even intimacy. As Weizenbaum writes, “the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world”. This role was chosen primarily due to technological limitations that prevented the program from operating a more complex script. But it was precisely this simplicity that highlighted the “Eliza effect”: despite the primitive nature of the algorithm, people very quickly began to interact with it as if it were a real therapist. Some were convinced that the responses were actually secretly written by a human, many protested against having their correspondence with ELIZA read, and some asked to be left alone with the program for a private conversation.
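For illustration only (this is not Weizenbaum’s actual ELIZA script, which used a richer keyword and decomposition-rule system), a Rogerian-style reflection can be sketched in a few lines: match a keyword, swap the pronouns, and bounce the statement back as a question. The keyword patterns below are hypothetical examples.

```python
# A toy sketch of Rogerian "reflection" - not Weizenbaum's actual ELIZA script.
import re

# Swap first- and second-person words so the reply mirrors the user.
PRONOUNS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}

# A few hypothetical keyword -> response templates.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bmy (.+)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(PRONOUNS.get(w, w) for w in fragment.lower().split())

def respond(message: str) -> str:
    text = message.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default mirroring prompt

print(respond("I feel ignored by my family"))
# -> "Why do you feel ignored by your family?"
```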

There is a notion of transference in psychoanalysis: a patient unconsciously projecting traits or emotions from previously experienced situations and relationships onto the therapist. Weizenbaum essentially discovered that transference can work on a non-human interlocutor, given a certain basic level of trust from the user and consistent mirroring to stimulate monologue. At the time, many saw this as a tantalizing opportunity for deep human-machine integration, but Weizenbaum, already in this original work, is cautious: “The whole issue of the credibility (to humans) of machine output demands investigation. Important decisions increasingly tend to be made in response to computer output. […] ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.”

Unlike his fellow computer science pioneers, blinded by their belief in the omnipotence of the exact sciences, Weizenbaum maintains a critical view of both his discipline and the wider political context. Growing up as a Jew in 1930s Germany, he witnessed the rise of the Nazi regime, emigrated to the United States, and later observed the labor and anti-racist movements in 1940s Detroit. Now, in the 1960s, he actively participates in campus protests against the Vietnam War, using his recently acquired professorship to lend credibility to the movement.

His activism influences his work: he grows increasingly critical of computing, openly clashes with many colleagues, and in 1976 publishes the polemical book “Computer Power and Human Reason: From Judgment to Calculation”. The book is not concerned merely with the technical challenges of developing AI systems; rather, it mounts a philosophical, psychological, and political critique of the field. Weizenbaum proposes a simple moral principle: “there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them”.

In support of this thesis, Weizenbaum distinguishes between the concepts of decision and choice. A decision is the result of a calculation and, in principle, can be reached by a computer or other machine. A choice, on the other hand, is the result of a judgment, which can only be made by a person on the basis of their experience, their personal and collective history, and their life in a community. A person is responsible for the choices they make and for the values those choices enact.

In response to ELIZA, some psychologists started to imagine fully automated psychotherapy. After all, how hard could it be? Perhaps we’d need to add a bit of sophistication, branch the algorithm further, expand the template database — incremental changes! For Weizenbaum, this is an example of a catastrophic, if not criminal, substitution of computation for judgment. Occupations such as psychotherapist or judge require human participation because they require human judgment and constitute a particular form of human contact. Similarly, the roles of artist and writer are fundamentally not reducible to the production of content — regardless of their technical complexity, art and writing are the result of judgment and bear the imprint of lived human experience.

Instrumental reason is the name Weizenbaum gives to the tendency to eliminate judgments from social life, replacing them with a transparent and rational algorithm. Many of his former colleagues pursued this goal explicitly, believing it would increase efficiency and eliminate human error and arbitrariness. Weizenbaum sees in this the implicit conservatism of instrumental reason: along with inefficiency and arbitrariness, it eliminates compromise, flexibility, and the potential for grassroots change. The computer revolution, for him, turns out to be a social counterrevolution, giving the ruling class tools to control, contain, and transform social tension. Today, in the world of Elon Musk and Peter Thiel, who openly embrace the new authoritarianism, it’s difficult not to appreciate Weizenbaum’s insight.

Modern AI and scaling

The era of symbolic AI ended, having greatly advanced the capabilities of computers but failing to achieve its ambitious goals: “to make machines use language, form abstractions and concepts, […] and improve themselves.” This failure was due both to the technical limitations of classical methods and to the shakiness of its philosophical foundation — the idea of the mind as a logical and formalizable system. In the 1990s, an alternative paradigm — connectionism — became dominant. It is based on the idea of the human brain as a multitude of neurons exchanging signals with one another.

Each individual neuron performs a rather basic function, essentially receiving, processing, and transmitting a simple electrical impulse. Complex mental phenomena — consciousness, memory, intelligence — arise, in this view, from specific configurations of an entire network of neurons. These configurations are not designed top-down; they cannot be created or edited at will; they arise organically during the system’s development under the influence of external stimuli. From the perspectives of both neurobiology and philosophy of mind, this model is rather simplistic, but it turned out to be easily translatable into a computer program, thus giving rise to artificial neural networks. The analogy between a program and the human brain, already used by Rosenblatt back in 1957, added weight to the term “artificial intelligence”: not only do machines solve human problems — although no objective definition for this category exists — but even their operating principles are inspired by the materiality of human intelligence.
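As a purely illustrative sketch of the connectionist building block (the weights below are arbitrary example values, not anything learned): an artificial neuron takes a weighted sum of its inputs and passes it through a simple nonlinearity, and a network is just many such units wired together.

```python
# A minimal, illustrative artificial neuron: weighted sum of inputs plus a
# bias, passed through a simple nonlinearity. The weights here are arbitrary
# example values; in a real network they are learned from data.
import math

def neuron(inputs, weights, bias):
    """One connectionist unit: a nonlinear activation of a weighted sum."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the output into (0, 1)

# A tiny two-layer "network": two hidden neurons feeding one output neuron.
def tiny_network(x):
    h1 = neuron(x, weights=[0.5, -1.2], bias=0.1)
    h2 = neuron(x, weights=[-0.7, 0.8], bias=-0.3)
    return neuron([h1, h2], weights=[1.5, -2.0], bias=0.2)

print(tiny_network([1.0, 0.0]))  # some number in (0, 1); meaningless until trained
```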

Unlike classical AI, neural networks and other connectionist models are not programmed but rather trained by example. By the late 1990s, for instance, a neural network had learned to recognize handwritten digits with high accuracy, though this required collecting tens of thousands of examples recognized and labeled by humans, now known as the MNIST database. Gradually, increasingly complex and large-scale models, using ever-larger training data sets, mastered new tasks inaccessible to symbolic systems: handwriting and speech recognition, image classification, text translation and generation, and discovering statistical patterns in human behavior and preferences.
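To make “trained by example” concrete, here is a hedged sketch (assuming scikit-learn is installed) that uses its small built-in 8×8 digits dataset as a stand-in for MNIST proper: the model is never given rules for what a “7” looks like, only labeled examples.

```python
# Illustrative sketch of "training by example" (assumes scikit-learn is
# installed). Uses sklearn's small built-in 8x8 digits dataset as a stand-in
# for the full MNIST database; the principle is the same: no rules are
# programmed, only labeled examples are shown.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1800 images, each 8x8 pixels, with human-provided labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small feed-forward neural network; its weights are adjusted to fit the examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```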

The relationships between model size and training data volume, on the one hand, and the model’s capabilities, on the other, were discovered empirically and termed “scaling laws”. Roughly, these laws state that an increase in model scale translates into a corresponding increase in capabilities. Practically, this means that oftentimes it’s not the most sophisticated architecture or optimization algorithm that wins — it’s the largest computing cluster and the largest training dataset.
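For a sense of their shape: in the commonly cited form (e.g., Kaplan et al., 2020), the test loss falls off as a power law in parameter count, dataset size, and training compute, with constants and exponents fitted empirically rather than derived from theory.

```latex
% Commonly cited power-law form of neural scaling laws (Kaplan et al., 2020):
% loss L as a function of parameters N, dataset size D, and compute C.
% N_c, D_c, C_c and the exponents \alpha are empirical fits, not universal constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

Lower loss translates, loosely, into greater capabilities, which is why the race rewards whoever can push N, D, and C the furthest.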

It’s not accidental, then, that connectionist AI only achieved true breakthroughs in the early 2010s. By this time, tech giants like Google and Facebook had perfected a business model that Shoshana Zuboff dubbed surveillance capitalism. In it, data about users — their interests, habits, history, and behavior — became the primary capital and was accumulated in huge quantities. Connectionist methods became the perfect lubricant for the surveillance-capitalist machine, extracting patterns from “big data” and effectively turning them into targeted advertising, recommendation algorithms, and attention-grabbing social media feeds. Big Tech profits also provided the funding needed to meet the ever-increasing computing demands of ever-larger AI models, which simultaneously became less accessible to non-profit academic researchers and small companies.

In this context, the ethical and political issues of AI — copyright abuse, environmental impact, large-scale sociopolitical and psychological consequences — appear less as peculiarities of a particular technology and more as consequences of the Big Tech business model. Until recently, the primary applications of AI were “analytical”, but problematic tendencies had already emerged: police use facial recognition algorithms to identify protesters via CCTV, and classifiers internalize and reproduce negative social dynamics such as racial profiling. For example, in the Netherlands, the Risk Indication System (Systeem Risico Indicatie) was designed to combine a wide array of government datasets in order to identify cases of benefit fraud. However, due to its instrumental approach and lack of transparency, it was criticized as a means of administrative pressure that essentially ghettoized already poor neighborhoods. After a successful public campaign, it was shut down by court order.

But most recently, it’s not analytical AI but generative AI that has taken center stage in the public consciousness: ChatGPT and other large language models for text generation, as well as diffusion models for image, audio, and video generation, such as Midjourney and Sora. Since 2022, these tools have quickly become the most visible and widely known examples of AI.

Generative AI became the next step in the scale race, demanding a vast increase over early-2010s “big data” models in training costs — now hundreds of millions of dollars — and in training data volumes — hundreds of terabytes of text alone. In her book “Empire of AI,” journalist Karen Hao describes the almost religious faith in scaling held by Ilya Sutskever, co-founder and former chief scientist of OpenAI. His aspiration was not just incremental progress but artificial general intelligence (AGI) — a hypothetical AI capable not only of solving a wide range of problems but also of active self-improvement. In the techno-utopian imagination, AGI would unleash exponential progress, usher in a new industrial revolution, and quickly become not only “general” but superhuman.

Sutskever’s core belief — that general AI can be achieved without substantial conceptual innovation, simply by scaling existing models — is radical and intellectually arrogant, yet it has resonated with Silicon Valley venture capitalists. Scaling is also the key strategy of a startup business. During its first few years, a startup operates at a loss, burning through investor money. The goal is to capture a significant market share, usually through a combination of technological innovation and price dumping, and only then to start making money. Most AI companies still follow this model: according to their own projections, OpenAI and Anthropic will spend more than they earn for at least the next 3-4 years, and this doesn’t include their enormous financial commitments to new datacenters. The profit is supposed to come from somewhere eventually, but from where?

Bullshit(ization) machine

A panacea, a solution to the climate crisis, the colonization of other planets — all this and more is regularly promised by AI evangelists as the benefits of looming AGI. Shameless marketing aside, the main promise of generative AI that investors are buying into is labor automation. In 2024, a group of researchers from OpenAI published a paper claiming that nearly half of all jobs could be automated using LLMs. This echoes an earlier claim predating generative AI: in 2013, a similar scale of automation was predicted, only with machine learning algorithms and robots. It has since become clear that those figures were exaggerated. Nevertheless, for many today, the prospect of being replaced by ChatGPT seems quite real. Before examining the validity of these fears, let’s take a closer look at generative AI and what it actually produces.

In a provocatively titled paper, “ChatGPT is bullshit,” the authors describe LLM-produced text as bullshit. This isn’t a dismissal or an insult, but a well-defined philosophical term, coined by philosopher Harry Frankfurt to describe a particular mode of utterance in which the speaker is indifferent to the truth of what they say and is concerned merely with achieving a practical goal, such as persuasion. Bullshit is distinct both from truth-telling, when the speaker seeks to convey a true fact or belief, and from lying, when the truth is deliberately concealed or distorted.

An unprepared student struggling to find the right words to scrape together a passing grade; a politician at a campaign rally stringing together key topics that, polls say, will resonate with swing voters — these are Frankfurt’s examples of bullshit. The paper’s authors argue that ChatGPT’s utterances are generated in a similar mode: what is optimized during model training is the statistical similarity of the generated text to the training corpus. Any notion of truth is wholly absent from this system. Perhaps one might hope that truth is encoded implicitly, automatically, because true statements are prevalent in the training data? OpenAI co-founder Sam Altman’s response to an earlier version of this criticism is in this dismissive vein. But this is merely a denial of the problem. Connectionist AI has abandoned the notion of truth by design — the neural network cares only about statistical patterns in the training data, which constitute its entire world.
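Concretely, the standard autoregressive training objective behind models like ChatGPT asks only that the model assign high probability to the next token of the training text:

```latex
% Standard autoregressive language-modeling loss: the parameters \theta are
% tuned to make the training corpus statistically likely, token by token.
% Whether the text is true plays no role in the objective.
\mathcal{L}(\theta) = -\sum_{t} \log p_\theta\left(x_t \mid x_1, \ldots, x_{t-1}\right)
```

Minimizing this loss rewards whatever continuations were frequent in the corpus, true or not; factual accuracy enters only insofar as it happens to correlate with those statistics.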

Another well-known work built around the concept of bullshit is David Graeber’s “Bullshit Jobs.” Graeber describes how, despite the exponential increase in labor productivity over the past century — primarily due to automation — people have hardly started working less. Instead, most workers (at least in the global North) are now engaged not in the production and distribution of goods but in auxiliary service work: administration, management, sales, and finance. According to Graeber, such jobs, which he terms bullshit jobs, are often perceived as meaningless by those performing them, and because of their ubiquity they cause systemic moral and psychological damage. The prevalence of bullshit jobs is not economic but political in origin: they don’t serve a genuine economic need but result from a particular system in which one must work in order to spend money in the free market.

It would seem that bullshit jobs should be especially prone to AI automation. After all, AI excels at writing business emails and maintaining pointlessly detailed corporate documentation (which is, in turn, read only through AI summaries). It unleashes florid cascades of epithets on task trackers, press releases, and PR statements. However, bullshit jobs play a structural role that has nothing to do with the tasks performed by the worker. AI can perhaps accelerate those tasks, but it will not displace the humans occupying the positions. The situation is different for real productive work. AI threatens to completely replace, or at least take over integral tasks of, illustrators, musicians, writers, engineers, and scientists — those who create something through intellectual and creative effort. On the other hand, teachers, doctors, therapists, and even simple customer service — all roles requiring direct human contact — are increasingly competing with AI chatbots.

In most cases, AI’s output is inferior to what it’s intended to replace. But it’s close enough to be serviceable, and it has the important advantage of being predictable, measurable, and repeatable. AI promises to do to jobs what standardized testing did to education[1] — make them more transparent and legible to regulatory agencies and quality control departments. And the workers in these professions, today engaged in actual labor, are to be placed on yet another level of the bullshit hierarchy. The new managerial underclass will be more precarious, worse paid, and just as alienated. AI automation turns out to be bullshitization.

However, contrary to the naive (and often biased) projections cited at the start of this section, automation isn’t happening overnight. Even in the absence of organized resistance from workers, bullshitizing jobs turns out to be non-trivial. In his book “Automation and the Future of Work,” Aaron Benanav analyzes various approaches to the automation discourse — from warnings of looming societal collapse to the utopia of a universal basic income — and finds them lacking. His alternative: thinking about technological innovation as embedded in complex social processes that we all participate in and, more importantly, can in some way direct. Benanav describes trends of de-skilling and increased worker surveillance — elements of what we here call bullshitization. They are already noticeable, for example, among taxi drivers, who have gone from being highly skilled experts in their neighborhoods to precarious Uber “partners” mandated to follow Google Maps directions. Generative AI may never become general or superintelligent, but it can certainly bring a similar shift to more and more professions.

The threat of AI is not that it will one day become too smart and replace us, but that it will provide businesses with increasingly attractive ways to cut corners on the creative, spontaneous, and interesting aspects of our work, leaving us more alienated and controlled. The hope lies in viewing progress not as a natural disaster arriving from the distant realm of science and technology, but as a process that doesn’t exist without us — that is, unless we accept such a strictly passive role. As the 2023 WGA strike demonstrated, resisting the destructive effects of AI is tied to labor union struggle and to other forms of self-organization and solidarity.

AI luddism

It is unlikely that the current generation of AI, created by Big Tech and funded by venture capital, will bring universal prosperity or solve global problems. However, it’s pointless to deny the impressive potential of the technology. For a brief time in November 2022, ChatGPT genuinely captured the public imagination and discourse: everyone rushed to experiment with the talking computer, bombarded it with absurd requests, and shared funny responses. For the first time in 10-15 years, encountering new technology felt like something truly novel. Perhaps this is another example of the Eliza effect, discovered 50 years earlier, but the effect itself can be read not as evidence of human gullibility, but as an indication of the radical openness of humans to communicate with the other.

It might help to read Weizenbaum’s critique through a cyberfeminist lens. Cyberfeminism is a strand of feminist philosophy[2] that explores, among other things, the liberatory potential of technologies. It criticizes the essentialization of differences and of antagonism between men and women and emphasizes the importance of plurality and mutual openness. In this context, Weizenbaum’s thesis that certain things ought not be done by computers can be read not as a conservative defense of the human ideal from a lowly imitation, but as an attempt to return the computer to its status as a genuinely non-human, truly alien and self-sufficient being.

Computer intelligence (however we define it) must inevitably be alien to us at least as much as animal intelligence — neither is capable of developing human values or making human judgments. However, they might be capable of radically different judgments, authentically computer and animal. The problem of AI, then, can be posed in this way: will we be able to make friends with this non-human intelligence, understand its values, principles, and existential stakes[3]? This may seem like a difficult or even hopeless task, but aren’t we doing the same thing when we build relationships with animals, across species boundaries, adapting them and being adapted by them? Perhaps the experience of interspecies communication and companionship suggests a way to think of AI not as a substitute, counterfeit, or imitation, but as a genuine non-human intelligence.

The possibility of friendship with a computer shouldn’t cancel out our suspicions about how it might be used to transform labor and society. In “Blood in the Machine”, journalist Brian Merchant attempts a rethinking of the Luddites, the 19th-century English textile workers who protested automation and are famous for sabotaging looms.

The Luddites are commonly portrayed as staunch conservatives, enemies of progress, entrenched in the past and seeking to keep everyone else there with them. However, as early as the 1960s, the Marxist historian E. P. Thompson, in his book “The Making of the English Working Class”, described the Luddites as one of the first instances of workers’ self-conscious political organization. The target of their protest was not technology — indeed, many of them were skilled workers who respected and embraced the new machines, which could make their work easier — but the new balance of power that factory owners sought to impose with these technologies. In terms of the previous section, they protested the bullshitization and consequent devaluation of their labor. And the notorious sabotage was a last resort, turned to only after other means of protest, such as negotiations and peaceful strikes, had failed. Merchant invites us to reevaluate the figure of the embittered, backward Luddite and to embrace it — we, as they did, should actively participate in deciding how new technology will transform our lives. The historical Luddites were defeated, but not by the unstoppable march of progress. They were crushed by brutal police and military repression, orchestrated by the state at the behest of industrialists. We, the Luddites of the 21st century, may still have a chance.

The clay figure is transformed into the golem when the rabbi inserts a tablet bearing the sacred, unpronounceable four-letter name of God into its mouth. In another version, it is animated by the word אמת (emet), “truth,” inscribed on its forehead; at the end of the story, the first letter is erased, leaving מת (met), “death.” In any case, the golem’s artificial intelligence is born through the mystical power of language. And although the golem, according to one rabbinic opinion, is not equal to a human being, most versions of the story depict him as a helper, protector, and sometimes a friend of his community and people. Some interpretations go on to explore his inner world, his torments and anxieties. If AI, molded from textual clay from the banks of the internet and animated by our collective faith in language games, becomes a modern golem, let it be our helper, our protector, our strange friend, and not the cruel servant of corporations and dictatorships.

  1. this comparison is courtesy of Aidan Walker

  2. I’m not quoting any particular author here, instead aiming to apply cyberfeminist concepts in broad strokes; however, two of the most relevant and personally important to me would be Donna Haraway and Sadie Plant

  3. as suggested also by Ben Tarnoff