Socialism AI: An "historic" advance or sectarian confusion?

 

The Sleep of Autonomy Produces Monsters, Daniela Zampieri / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Socialism AI is a large language model (LLM) based chatbot released by the World Socialist Web Site (WSWS) on December 12. Much ink has been spilled about the International Committee of the Fourth International (ICFI)[1] on the pages of this website and I do not wish to go back over these polemics here. However, the controversy surrounding Artificial Intelligence in left-wing circles and the curious decision of the ICFI to go all in for the technology mean that an analysis of Socialism AI has relevance for Marxists beyond this particular movement.

In its announcement of Socialism AI, the WSWS has declared the chatbot “a historic advance in the political education of the working-class.”[2] It has also published coverage from a number of supporters urging workers and young people to start using the chatbot in any way they see fit.[3] One example is a TikTok video from Will Lehman, who ran for UAW president in 2023, stating, “Every worker should be using Socialism AI.”

LLMs are fundamentally next-token predictors. They are trained on text (and sometimes also image and audio) data, and they transform each token into a numerical vector, called an embedding. Tokens are most typically words, but sometimes subwords like suffixes or prefixes, or other features of written text such as punctuation. Given a sequence of previous tokens, LLMs calculate the probability of the next token, and this can be used to generate outputs. Generation from learned statistical distributions is why some describe LLMs as “stochastic parrots,”[4] or more pointedly, “bullshit machines”: they are repeating patterns present in their training data rather than generating language from any understanding of the external world.
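The mechanism can be illustrated with a toy model. The sketch below is a bigram counter over an invented corpus, nothing like a real transformer (which learns distributed representations over billions of parameters), but it shows the core idea: next-token probabilities are estimated purely from frequencies in the training text, with no reference to the world the text describes.

```python
from collections import Counter, defaultdict

# A toy bigram model: the simplest possible "next-token predictor".
# Real LLMs use neural networks over vector embeddings, but the
# principle is the same: estimate P(next token | previous tokens)
# from frequencies observed in the training text. The corpus here
# is invented for illustration.
corpus = "the workers of the world unite the workers demand bread".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(token):
    """Return the learned distribution over possible next tokens."""
    c = counts[token]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

# "workers" comes out more likely than "world" after "the", purely
# because of frequencies in the training data; no understanding involved.
print(next_token_probs("the"))
```

Sampling repeatedly from such distributions yields fluent-looking text; scaled up to most of the Internet, the same statistical trick produces the chatbot outputs discussed below.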

It can be helpful to see LLMs as vast memorization machines: trained on huge amounts of data (most of the Internet, in more recent models), they learn and store statistical patterns. Given the ubiquitous regularities in human language, this can often go very far in simulating human writing, even if the models do not have any understanding of grammar or concepts in a human sense.

As far as LLM-based chatbots go, Socialism AI is pretty good. In response to basic questions it pulls information and citations from the WSWS corpus reliably. It seems the WSWS has used a technique called Retrieval-Augmented Generation[5] (RAG) to ensure the outputs of Socialism AI are grounded in its archive.
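In outline, RAG retrieves the most relevant documents from an archive and feeds them to the model alongside the question. The sketch below is a generic, minimal illustration: the bag-of-words vectors stand in for real learned embeddings, the document titles are invented, and how the WSWS actually built its pipeline has not been published.

```python
import math
from collections import Counter

# A minimal sketch of Retrieval-Augmented Generation (RAG).
# Step 1: embed archive documents as vectors (a crude bag-of-words
# stand-in for learned embeddings). Step 2: retrieve the documents
# most similar to the query. Step 3: prepend them to the prompt so
# the LLM's answer is "grounded" in the archive rather than in its
# general training data. The archive entries are hypothetical.
archive = [
    "Statement of Principles of the Socialist Equality Party",
    "Analysis of the UAW contract struggle",
    "Coverage of mass layoffs in the auto industry",
]

def embed(text):
    """Bag-of-words 'embedding': word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k archive documents most similar to the query."""
    q = embed(query)
    return sorted(archive, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "What is the party's statement of principles?"
context = retrieve(query)
prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```

This design also explains the failure modes described below: the model can only ground its answers in whatever the retrieval step surfaces from the archive, so gaps and shifts in that archive are faithfully reproduced.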

The summaries Socialism AI generates are generally accurate but superficial and highly formulaic (even though this is a fair reflection of the majority of WSWS articles, the tendency toward superficial, pro forma outputs is widespread in LLM generations). The tool can provide deeper insight into some topics if the user knows what they are looking for. As with most LLMs, it tends to be verbose and pedantic.

Its summaries of events or controversies are often correct but unsatisfying. As is often the case with LLMs, the more you know about a topic the less impressive Socialism AI becomes. It often gets facts or arguments correct but misses the most critical points. For instance, in an initial summary of the arguments made by Trotsky in a well-known fragment of an essay, The ABC of Dialectics[6], it doesn’t address the limitations of formal logic and its relation to dialectical logic at all despite that being Trotsky’s primary argument in the piece.

Other times, Socialism AI is held back by the limitations of the WSWS archive. In these cases it actually does quite a good job exposing the contradictions between the WSWS’s practice and the Trotskyist principles of which the website claims to be the sole steward, such as the Theory of Permanent Revolution and its political consequences. If you ask it about the attitude of a revolutionary party to a bourgeois nationalist movement like Hamas, one of the principles it lists is “political criticism and independence.”[7]

Interestingly, however, when I asked it to show me criticisms of Hamas published by the self-proclaimed “orthodox Trotskyists” at the WSWS since October 7, 2023, it could only cite the Socialist Equality Party’s Statement of Principles from 2008![8] When I removed the specific date range it pointed me to a sole series of articles written in 2002![9] The series is actually quite good, but three wars and the complete destruction of Gaza later, it can hardly be considered up to date. In fact, what Socialism AI unwittingly exposed was that the WSWS position on Hamas had changed dramatically between 2002, when it subjected Hamas to merciless criticism, and 2023, when it denounced anyone who dared criticize Hamas’ action of October 7.[10] While in some cases being programmed not to move one iota from the WSWS archives works as intended, in other cases it can lead to some farcical outcomes.

Another telling example of this tendency (which is a flaw, at least from the perspective of the contemporary ICFI) is if you show the model the letters[11] surrounding my expulsion from the ICFI. As we showed in a previous article[12], it concludes that the French section of the ICFI did not adhere to the principles of Democratic Centralism in its handling of my case. All in all, one of the best use cases of Socialism AI is exposing the myriad areas where the IC falls short of its own standards. While this may be of limited interest to what members of the ICFI punditocracy have labelled “embittered renegades”[13] like myself, it can hardly be described as “a historic advance” for the working class.

These failures show the limitations of Socialism AI, which are reflective of LLMs generally. If the tool is just to be used to search for and provide high-level summaries of WSWS articles, then it is not so objectionable. But is this valuable, or even “historic”? In short, not really. It is not obvious what advantage someone who wants to learn about the WSWS’s political positions gains from using Socialism AI over just reading the website.

However, these limited use cases are not what the WSWS has in mind for Socialism AI. In the following sections we will discuss what misconceptions about LLMs have led the ICFI to develop an LLM-based chatbot, uncritically promote its use as an educational tool and denounce anyone who dares to raise concerns about it.

 

Corporate AI hype and the grand theory of Augmented Intelligence

 

A Rising Tide Lifts all Bots: Rose Willis & Kathryn Conrad / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

It is by no means insignificant that the WSWS has decided to call its chatbot Socialism AI. AI is a marketing term and always has been. The neural nets that underlie LLM-chatbots like Socialism AI are very loosely inspired by the structure of the human brain, but below the most general level the analogy with human intelligence (and its neural correlates) quickly breaks down. This is not to deny that LLMs can carry out some tasks that are typically performed by intelligent humans, but being able to do so does not make something ‘intelligent’.

The origin of computing is bound up with the effort to measure, simplify and replace tasks carried out by humans. Before being a machine, a ‘computer’ was a worker who performed mathematical calculations by hand. If we label LLMs artificial intelligence in any genuine sense of ‘intelligent’, then we would have to proclaim systems like the Turing Machine or handheld calculators intelligent as well.

In fairness, the WSWS is wary of appearing too enamored of corporate terminology. To counter this, David North, chairperson of the WSWS’s Editorial Board, puts forward a redefinition of the AI acronym from artificial intelligence to “augmented intelligence.” The sleight of hand here, of course, is that this theoretical innovation still allows the IC to use the name Socialism AI with all its connotations, staying comfortably on the marketing bandwagon in the process.

North’s term is first introduced in a piece announcing Socialism AI’s release date[14]. North explains, “The phrase ‘artificial intelligence’ suggests that we stand in the presence of a kind of counterfeit or ersatz intelligence. Yet we do not speak this way about any other technological extension of human capacity. In reality, what is called ‘AI’ is better understood as augmented intelligence—an extension and amplification of human intellectual labor. The term ‘augmented intelligence’ emphasizes not a break with humanity but a deep continuity. It recognizes that these systems are built on human labor and knowledge, shaped by human purpose and deployed to amplify human capabilities.”

Using the term ‘augmented’ rather than ‘artificial’ is more appropriate, because LLMs’ capabilities arise from training on human data (the vast majority of it stolen from those who produced it) and because those who use the technology will have their own capabilities amplified. Or so North’s argument goes. The nod toward the criminal methods used to produce these models is well noted. However, the notion that a piece of technology that amplifies human cognition or activity is a decisive or ‘historic’ break with technological innovation so far in human history, and thus deserving of a new appellation, is laughable.

If we take North’s standard for ‘augmented intelligence’, then forget about computers and calculators: we would have to admit every technological innovation from fire to sliced bread into the category!

North’s definition of the term leaves us with a curious contradiction. The WSWS declares Socialism A[ugmented] I[ntelligence] an ‘unprecedented expansion of knowledge and social consciousness’, but North’s redefinition of AI makes LLMs so run-of-the-mill that they hardly deserve special consideration by Marxists. All we are left with is a heap of confusion and not much more understanding of the peculiar opportunities and risks of LLMs than we started with. I think it is safe to say ‘augmented intelligence’ will not be remembered as a valuable analytical addition to the Marxist lexicon.

Building an LLM based chatbot and calling it ‘Socialism AI’ is a perverse refraction of corporate hype, whether you want to call it augmented intelligence or otherwise. Workers and young people, instead of a sober and accurate understanding of the technology, are encouraged to maintain all of their sci-fi stereotypes about AI but remain comfortable in the knowledge it is being used for socialism[15]. While calling something AI may be a convenient shorthand to communicate with people about the topic, Marxists should be skeptical toward the word and its connotations.

 

North’s anthropomorphism of LLMs

 

Whatever label North throws on the acronym AI, it is clear he has some misconceptions about LLMs. His uncritical attitudes toward the technology come out most strongly in two letters responding to critics of Socialism AI published over the holiday period. In these letters North makes a number of unsubstantiated claims about AI, declares himself a defender of science despite asserting things that would make the majority of AI researchers tear their hair out, and denounces those who disagree with him as “middle class opponents of AI technology” and even analogizes them to vaccine deniers!

This confusion comes across most strongly in an article, written by North and Evan Blake, in response to someone named Dimitri, a commenter on the WSWS and ex-member of the ICFI who has repeatedly raised concerns about Socialism AI. The letter begins, “this historic initiative has encountered an angry response from a section of middle class opponents of AI technology.” Dimitri is then accused of using “technical jargon that is intended to persuade readers that he is well informed on the subject of AI.”

Before we’ve even started to examine exactly what was so objectionable about Dimitri’s criticisms, North has already denounced Dimitri as a “middle-class radical” and has instructed the reader to assume Dimitri is a dishonest dilettante when it comes to AI. In fact, from Dimitri’s comment there is no evidence the points he is making are made in bad faith. This is hardly a sound approach to an honest accounting of any political issues, let alone those raised by a largely misunderstood and novel technology.

As Dimitri points out in his response to this piece[16], North and Blake’s argument against him is a strawman; again, not atypical for North.[17] The authors cherry-pick one comment made by Dimitri and willfully misinterpret it to claim that he denies all connection between language and the world and believes the output of LLMs bears little or no relationship to its training data. This is all quite clearly refuted by the comments made by Dimitri that North and Blake choose not to cite.

However, what is perhaps even more revealing are the explicit claims North and Blake make about AI’s capabilities in their piece. They write, “Training LLMs on massive, diverse corpora compels them to form high-dimensional representations of semantics, syntax, factual regularities and logical relationships. They do not simply retrieve or splice stored text, but learn distributed patterns that generalize far beyond their data. This is why they can coherently discuss unfamiliar topics, summarize unseen material or connect apparently unrelated ideas. To describe their output as “random interpolation” is, technically speaking, nonsense. It reveals a basic misunderstanding of representation learning.”

Tens of thousands of words could be spent dissecting the various implicit and explicit claims in this paragraph. Luckily, focusing on two of the central claims should be sufficient to show that Blake and North’s foray out into the world of Computer Science has left them lost at sea.

The first is the claim that LLMs “form high-dimensional representations of semantics, syntax, factual regularities and logical relationships.” In fact, neural models do not explicitly encode any of these representations. They encode probabilistic relationships between tokens (or words) based on what they have seen in their training data, but this does not mean models understand ‘factual regularities’ in any human-like way. Indeed, human-like representations and statistical distributions often come apart in interesting ways. For example, LLMs often adopt completely different strategies to complete reasoning puzzles compared to humans.[18]

The second claim, which logically underlies the first, is that LLM capabilities arise from “learned distributed patterns that generalize far beyond their data.” The extent to which LLMs genuinely generalise is a debated question among computer scientists, but North and Blake place themselves squarely at the far flank of the so-called “boomer” (pro-AI) wing of researchers.

There are multiple other explanations for this model behavior, including data contamination, approximate retrieval and “shortcuts” (the researcher Melanie Mitchell explains these terms here). The most important thing to understand is that LLMs, particularly the most recent generations, have been trained on huge swathes, if not all, of the Internet. When ChatGPT or another LLM answers a mathematical, legal or medical question correctly, it often gets the right answer for the wrong reasons: because it had already seen the question in its training data, or the question was similar enough to one in its training data. This becomes a major issue when LLMs are used for something that isn’t in or adjacent to their training data (i.e., when they have to genuinely generalise).

Why do North and Blake engage in this kind of anthropomorphism? In fairness to them, it dominates both popular and sometimes even technical discussions of AI. Major AI companies continuously employ anthropomorphic language when introducing their models. This was taken a notch higher in late 2024 and throughout 2025, as companies began to discuss new models as ‘reasoning models.’[19]

Another common source for these conceptions is the human-level (and often superhuman) performance of LLMs on benchmarks, which gives a veneer of quantitative legitimacy to these claims. The confusion arises from the fact that these benchmarks are often human exams. As LLMs are capable of outscoring expert humans, this means they must have linguistic and reasoning capabilities equal to or better than those humans, right?

Well, no. As a report from Stanford University’s Human Centered Artificial Intelligence group explains, “AI companies often use benchmarks to test their systems on narrow tasks but then make sweeping claims about broad capabilities like ‘reasoning’ or ‘understanding.’ This gap between testing and claims is driving misguided policy decisions and investment choices. For example, we may incorrectly conclude that if an AI system accurately solves a benchmark of International Mathematical Olympiad (IMO) problems, it has reached human-expert-level mathematical reasoning. However, this capability also requires common sense, adaptability, metacognition, and much more beyond the scope of the narrow evaluation based on [mathematics] questions. Yet such overgeneralizations are common.”

Even renowned cheerleaders of AI have begun to rein in their claims about LLMs as it becomes clearer that the ‘research’ approach of scaling up the models has led to diminishing returns on eye-watering investments. Ilya Sutskever, who co-created AlexNet (a major landmark in computer vision from 2012) and was Chief Scientist at OpenAI until a falling out with Sam Altman, stated in a recent podcast interview[20], “[the] generalisation of the models actually being inadequate … has the potential to explain a lot of what we are seeing, this disconnect between eval[uation] performance and actual real-world performance.” At multiple points throughout the interview, Sutskever expresses caution toward the anthropomorphism North and Blake pin their entire argument on.

Before this public walk-back, Sutskever was renowned for wild claims about AI development, including a 2022 tweet saying, “it may be the case that today’s large neural networks are slightly conscious.”[21] He stated last year that, “AI will keep getting better and the day will come when AI will do all the things that we can do… How can I be so sure of that? We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things?”[22] Unfortunately, this is the kind of empiricist reductionism that the IC has fallen victim to with its views on AI.

North and Blake’s tendency toward anthropomorphism is not new. In Eye of the Master: A Social History of AI[23], Matteo Pasquinelli points out that in Descartes’ time the human mind was described as a system of pulleys and levers; in Alexander Graham Bell’s time it was analogized to the telegraph; and in the post-war period it was analogized to computation by theorists like Jerry Fodor and Daniel Dennett. It is an historical intellectual tendency bound up with Empiricism and more modern variants like Pragmatism and Positivism.

North and Blake may protest that their views are not as extreme as the (old) Sutskever’s, but it should be noted that their argument about grammar, reasoning and generalisation, which are human capacities all bound up with consciousness, implicitly commits them to these sorts of positions. That they now find themselves outflanking someone like Sutskever on AI hype should be a real wake-up call. Anthropomorphism of technology has no place in serious Computer Science and it should have no place in Marxism. That it has taken hold of the IC so easily is an indication of the extent to which empiricism, and the sort of vulgar materialism it gives rise to, have come to dominate the ICFI.[24]

One can’t help but find North and Blake’s attempt to intellectually defend Socialism AI superficial. The IC’s fawning attitude toward LLMs has seemingly come from nowhere. Before the release of Socialism AI, the WSWS’s coverage of AI was limited to its role in mass layoffs and workers’ struggles. One exception is a piece by Kevin Reed from April 2023, which provides a basic description of what the technology does and speculation about what sort of workplace tasks LLMs can carry out. As Dimitri also points out in his response to North, this is a marked contrast to the WSWS’s mostly excellent and highly detailed coverage of multiple scientific issues relating to the COVID-19 pandemic.

Ironically, when I asked Socialism AI to tell me about the WSWS’s analysis of AI before the release of Socialism AI on December 12, it could only retrieve materials published after that date. This is an indictment of both the model’s understanding of “logical relationships” and the WSWS’s neglect of any attempt to develop a technical or serious political analysis of AI before that date.

Screenshot of Socialism AI inquiry

Dimitri’s response to North and Blake in the WSWS comments section also takes them to task for their anthropomorphism and a number of their other misconceptions about AI and technological development more generally. It is well worth reading in its own right. One additional important point raised by Dimitri is the IC’s uncritical attitude toward developments in computer science and the tech industry in general over the past few decades. As Dimitri points out, this abstention is at odds with the IC’s regular railing against developments in the social sciences.

I do not know Dimitri, but I would urge him, and those who see the reason in his arguments, to consider whether the approach of name-calling, strawmanning and willful misrepresentation adopted by the WSWS in response to his critiques is an exception or the rule.

Documents from this website such as Marxism Without its Head and its Heart[25] and The Downward Spiral of the ICFI[26] critique, amongst other things, the IC’s uncritical attitude toward the way in which bourgeois ideology interprets modern science and its dismissive attitude toward basic issues in Marxist philosophy. These are the very same issues bound up in the IC’s uncritical adaptation to AI hype. Rather than engaging with these works on the “highest theoretical level,” as IC members like to say, North launched a campaign of defamation and slander against Alex Steiner which has continued on and off for over two decades.

 

Socialism AI as an educational tool

 

The IC’s uncritical attitude toward AI might be relatively harmless if it remained at the purely theoretical level. However, it is clear the IC’s conception of Socialism AI is for it to be used as “an interactive educational resource.” In the release statement, the WSWS Editorial Board[27] describes it as “a transformative application of advanced technological development to the political education and mobilization of the international working class.” Socialism AI’s release apparently signifies “a new stage in the fight for socialist consciousness.”[28]

In fairness, some caveats are noted further down the introductory article, for instance it is acknowledged that “Socialism AI is an instrument of, not a substitute for, revolutionary leadership, political struggle and critical thought.” The article goes on to concede that “Socialism AI … is not infallible. The active and intensive use of Socialism AI will identify errors, facilitate their correction, and contribute to the improvement of the quality of its answers.”

But just when it seemed the WSWS was engaging in some caution and humility regarding the model and its capabilities, the piece concludes by urging readers to “spread awareness of this powerful weapon… but most important of all, wield this instrument of Marxist education and political [struggle] as a member and fighter for the Fourth International.”[29]

Behind the premature declarations of its world historic significance, the IC’s position on Socialism AI is that even if it is imperfect, it is a valuable educational tool. The issue with this claim is that it cuts across the lived experience of millions of educators and more thoughtful students. One such educator is an IC supporter and film studies professor named Tony Williams.

In a second piece, titled Science vs suspicion and fear: An Open Letter to a critic of Socialism AI,[30] North takes aim at a number of critical comments Williams has made about Socialism AI. Reading this piece one is instantly struck by the difference in tone toward Williams, even though his concerns often dovetail with those expressed in Dimitri’s comments. It is unclear why Dimitri is so deserving of political and personal denunciation while Tony Williams is treated with respect and understanding.

Indeed, in the letter to Williams, North even writes, “it would be entirely wrong to elevate this into a matter of principle that justifies a break with the SEP or the WSWS.” This was published only two days after he called Dimitri a “middle-class opponent of AI technology”[31] and accused him of “betray[ing]” socialism for making many of the same criticisms!

In this letter, North also makes some more nuanced claims about AI, particularly regarding its capacity for genuine artistic creation. This does not prevent him, however, from rejecting Williams’ caution toward AI’s use as an educational tool as anti-science.

Despite the marginally more nuanced approach, there are still a number of remarkable things about this letter, not least that North is explaining to an educator that their own experience of LLMs in education isn’t relevant. North’s conception of Socialism AI as an educational ‘weapon’ glosses over a number of observations made by teachers with direct experience of LLM use in the classroom and for homework.

Most importantly, when students use LLMs they are almost always asking the models to do the thinking for them. While it is possible to engage with LLMs in a more thoughtful and deliberate manner, in a pinch I think even the most committed revolutionaries are likely to fall into the same pattern. As will be discussed below, even tech professionals who understand how these models work fall foul of this tendency.

A more sophisticated use of LLM-based systems in education is as teaching assistants in the classroom. This is the notion of ‘personalized learning’ promoted endlessly by EdTech, which helps fuel capitalist governments’ dreams of cutting education staffing costs. Again, any teacher will tell you that good teaching is not just listing off information à la ChatGPT at the front of a classroom. It requires genuine interaction, attention paid to a student’s prior knowledge and interests, and tailoring an activity or the information presented accordingly. Given LLMs’ inability to learn in real time and to read visual and vocal emotional cues reliably, they are highly limited in this respect.

The experiences of Professor Williams, and educators across the world, are backed up by initial tests of LLM use in the classroom which show LLM use can degrade academic performance in the long run. One field experiment, for example, found that students who used GPT-4 for coursework and then had it taken away performed 17% worse than a control group who didn’t use it at all.[32]

This isn’t limited to schools. In the workplace, even when used by knowledgeable professionals, LLMs can lead to declining outcomes. In one metr.org study,[33] open-source developers with a minimum of five years’ experience saw a 19% decline in the speed with which they carried out tasks. Perhaps more interestingly, this decline in productivity was recorded despite the fact that the developers believed using AI had sped them up by an average of 20%. These findings are preliminary, and in some contexts it is likely LLMs will genuinely speed up and improve tasks, but they are a striking rebuttal to those who claim LLMs are unquestionably valuable.

That Williams’ concerns about LLMs have good grounding in personal experience and empirical observation does not stop North from going all in to defend Socialism AI. Perhaps the outstanding moment in this piece is when North even analogizes Williams’ concerns about AI in education to vaccine denialism! This is both slanderous and an absurd analogy. A mountain of double-blind experiments confirmed the efficacy of the vaccines and that their side-effects were drastically outweighed by their benefits. These were cited and discussed in detail on the WSWS. In contrast, in his defense of Socialism AI North does not engage with any scientific literature at all.

Given this lack of engagement, it seems somewhat bizarre that North presents himself as a defender of science and his claims about LLM capabilities as aligned with scientific consensus. In fact, his views hardly align with the majority of AI researchers.[34] In one poll released last year, the Association for the Advancement of Artificial Intelligence found that 79% of respondents disagreed with the statement that “the current perception of AI capabilities matches the reality of AI research and development.”[35]

It is worth reflecting on how extreme North’s position is. He argues not only that AI is a ‘transformative’ technology, particularly for the purposes of education, but that anyone who questions this or even expresses caution is anti-science. Is it really possible to call this view anything other than close-minded AI hype? A principled defense of the benefits of LLMs and a system like Socialism AI would be careful to present evidence in favour of its claims and open to the concerns and evidence of critics.

In one WSWS comment responding to Williams’ concerns, Nick Barrickman, a longtime member of the US SEP, states that “the SEP has been testing out this software for its own purposes for quite some time, and those who introduced this it [sic] to the world have been doing so for even longer.”[36] If this is the case, then it doesn’t seem too much to ask the WSWS to provide an account of these tests and what they have found. Openly providing this information would be a great deal better than North’s current approach of chastising anyone who dares to express doubts about the utility of Socialism AI.[37]

 

WSWSSlop

 

One of the worst uses of Socialism AI is its use by IC members to produce responses in online discussions. A piece in the Harvard Business Review defines workslop as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”[38] I’d like to propose the less catchy term ‘WSWSSlop’ be added to the Marxist canon to refer to the tendency of IC pundits to post outputs from Socialism AI on social media. Indeed, we can also apply this to the WSWS’s “Answers from Socialism AI” feature, where some longer responses from the chatbot are posted as WSWS articles.[39]

I personally do not have an issue with office workers who find the drudgery of their day-to-day job meaningless, so they boot up ChatGPT and have it respond to an email or even write parts of a report, especially if their boss isn’t going to really read it anyway. However, a Marxist party should hold itself to higher standards.

Using text generated by a chatbot to respond to political questions or critiques of another person is something a revolutionist should never do. Writing, particularly in the context of education or debate, is (or at least should be) a fundamentally empathetic and effortful activity. You write a letter, email or article to give someone else insight into and perhaps convince them of your own ideas. This forces you to try to understand their ideas so that your writing can bridge the two. This process is often very difficult and requires time and care on both the writer’s and the reader’s side.

Using Socialism AI or any other chatbot to avoid this work jettisons this process and violates the expectation between interlocutors that both sides put effort into the exchange. Why should a worker or a student put in the effort of reading an article written by Socialism AI? Why should someone put in the effort to respond to a comment on Reddit written by Socialism AI? The reaction of most people who come into contact with IC members engaging in this behavior is to switch off. This behavior is an analog of the tendency of some IC members to shove articles in peoples’ faces, both in person and on social media, rather than engage in discussion on an equal footing. Socialism AI can only further encourage behavior of this sort.

In light of this, it is completely acceptable for Subreddits and other online forums to ban posts generated by Socialism AI. Users go to these places to understand what other people with similar views think about a particular topic. Making posts with the assistance of Socialism AI, or ones completely authored by it, is a violation of this basic expectation. While it is right to oppose banning articles written by the WSWS or comments made by its supporters from social media, it does seem fair to outlaw the use of Socialism AI for political dialogue on these forums.

In some ways, Socialism AI is the ultimate tool for the sectarian. One of the main characteristics of a sectarian is complete apathy toward their audience. As Trotsky explained, “The sectarian looks upon the life of society as a great school, with himself as a teacher there. In his opinion the working class should put aside its less important matters, and assemble in solid rank around his rostrum: then the task would be solved.”

Trotsky adds, not insignificantly given Socialism AI, that the sectarian “lives in a sphere of ready-made formulas.”[40] One of the expected effects of Socialism AI is that it will reinforce the already strong tendency among IC members and readers of the WSWS to substitute the endless assertion of the “ready-made formulas” one encounters on the WSWS for real engagement with workers, both on and off the Internet. Worse, Socialism AI takes these formulas from WSWS articles, rips them from their context and renders them even more abstract. In short, Socialism AI will be employed to further cement the sectarianism that characterizes the ICFI.

 

Should LLMs have any role in a political organisation?

Brain Control: Bart Fish & Power Tools of AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/


 

Even if LLMs worked as well as advertised, would their systematic use in a political organisation be advisable? Even if their educational potential were more in line with North’s claims than initial studies suggest, would their systematic use as an educational resource be advisable?

In their paper How AI Destroys Institutions, the legal scholars Woodrow Hartzog and Jessica Silbey argue, “affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.”[41]

They go on to list three characteristics of LLMs (‘AI systems’ is their preferred term) that degrade institutions:

First, AI systems afford offloading human tasks that demand wisdom and skill onto machines, which undermines and downgrades institutionally aggregated expertise… Second, AI systems afford automating and streamlining important choices, which short-circuits institutional decision-making… [which] ossifies the ability of institutions to take intellectual risks in response to changing circumstances. Third, AI systems isolate people by displacing opportunities for human connection, interpersonal growth, and the cultivation of shared purpose.[42]

Developing expertise, taking intellectual risks and cultivating a deep social embedding among comrades and the working class are absolutely critical factors in developing a Marxist cadre.

The piece by Hartzog and Silbey is written from a liberal perspective, and one imagines that many of the institutions it has in mind abandoned genuine democratic principles long before Trump’s return to the White House – many probably never had any commitment to them in the first place. Nevertheless, it raises important structural points that apply to all forms of political or social organisation, including revolutionary political parties.

It should be noted that the IC suffers from many of these ills already. Its leadership is unaccountable; its democratic processes are opaque, as is the use of its money. If the authors of this piece are correct, then LLMs will only deepen these tendencies. The IC supporters’ common refrain to ‘read the website’ will become ‘ask the chatbot’, lowering the already limited critical engagement of party members with articles published on the WSWS. By replacing critical thought among newcomers and long-term members alike, systematic use of Socialism AI will make the organisation even more insular, more paranoid and, perhaps worst of all, more banal.

 

So what should socialists use LLMs and neural nets for?

 

None of this is to say that there aren’t appropriate uses of LLMs and other models based on neural nets for a healthy revolutionary party or in education. The crucial point is that, in determining what these use cases are, Marxists have to be precise in understanding how these systems work and what risks their use entails. Only once anthropomorphism and adaptation to bourgeois intellectual prejudices are put aside can the benefits and risks of LLMs and other technologies be analysed soberly.

Firstly, there are other algorithms labelled as ‘AI’ that could be very helpful for publications like the WSWS. For example, neural nets would be well suited to a semantic search of the WSWS’s archive. Publishing in so many languages also means that machine translation models could be (and surely already are) a great aid. In these cases hallucinations and other negatives of LLMs are less of a concern – although machine-translated pieces often lose the ‘personality’ of the writer (though there often isn’t too much of that in most WSWS pieces).
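To make concrete what such a neural-net archive search involves: each article is encoded once into an embedding vector, and a query is matched by comparing the angle between its vector and those of the articles. The sketch below uses tiny hand-written vectors and invented article titles purely for illustration – in a real system the embeddings would come from a trained model rather than being written by hand.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for vectors produced by a real
# embedding model; in practice each article in the archive would
# be encoded once and the vectors stored in an index.
archive = {
    "Article on the 1905 revolution": [0.9, 0.1, 0.2],
    "Review of a film about automation": [0.1, 0.8, 0.3],
    "Report on a rail strike": [0.2, 0.2, 0.9],
}

def search(query_embedding, top_k=2):
    """Rank archive entries by similarity to the query embedding."""
    ranked = sorted(
        archive.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:top_k]]

# A query whose vector sits closest to the strike report,
# so that article ranks first.
print(search([0.25, 0.15, 0.85]))
```

Unlike keyword search, a query phrased in different words from the article can still match, because the comparison happens between learned vectors rather than strings – which is also why such a search carries none of the hallucination risk of generative chatbots.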

For some activities more limited in scope than North’s current vision, Socialism AI may have a place. Although it occasionally makes silly mistakes, if you know what you are looking for Socialism AI is mostly superior to the website’s current search function for browsing the WSWS’s archive – and presumably the same would hold for a resource like Marxists.org.

Similarly, if you know the right questions to ask, you can push past its formulaic responses to more engaging interactions. It can’t be ruled out that some users will find ways of benefiting from Socialism AI in its current form as an educational tool, but there is no reason to think the IC membership will be especially immune to the tendencies, discussed above, seen among schoolchildren and software developers.

One day, after sufficient publicly verifiable testing, an LLM-based educational tool might well be fit for purpose. In the meantime, however, the correct attitude toward the use of LLMs as educational tools seems to be an open-minded skepticism.

 

Should Marxists just stick to “the science”?

 

Throughout this piece we have discussed how the ICFI postures as a defender of science against skeptics and detractors on the question of AI. We have shown that, despite this, its views are grounded neither in high-quality research nor in the consensus opinion of computer scientists.

Let’s pretend, however, that this isn’t the case and North’s claims about LLMs reflect scientific consensus. Would this be sufficient to exhaust the questions and concerns raised by people such as Dimitri and Professor Williams? In short, no. There are many legitimate questions about AI that science is never going to be able to answer.

As we’ve discussed in the section quoting ex-OpenAI chief scientist Ilya Sutskever’s claims about AI consciousness and “artificial general intelligence” (a hypothetical AI system that is better than a human by every metric), wild claims about what LLMs and future systems can do are ten a penny, particularly in the corporate world. Given that these claims underlie a number of mass layoffs and a huge boom on the stock market, they are hardly something Marxists should ignore.

Well, can LLMs or related systems become conscious? Is it appropriate to anthropomorphise them if they cannot? If one day AI tutors do lead to better exam results, should they be used extensively in the classroom? Across the workforce, what is the impact of offloading large parts of human intellectual labour onto a machine?

And of course, there are the more concrete political questions of whether workers should welcome the use of LLMs in their work. Should they be comfortable offloading their know-how and experience to a machine? Should they be open to replacing their face-to-face interaction with co-workers with a chatbot?

There are also more fundamental philosophical and political questions about humans and society raised by technologies such as LLMs. What is the nature of human intelligence, and why is it (or isn’t it) unique? Can human intelligence and agency ever truly be replaced in the process of production?

The WSWS response to all these questions is to ignore them, and instead follow the “science,” whatever that may be (or whatever they decide it to be). This completely skirts the issues above and smacks of crude positivism. In the IC’s world there is no need for conceptual thinking; all the Marxist movement needs is an overview of the “science,” and all of these concerns will magically melt away.

On the contrary, a Marxist understanding of LLMs and “AI” tackles related philosophical and political questions head on. This understanding is of course informed by the latest scientific findings, but it is not exhausted by them.  

 

Conclusion

 

One of the main conceptions behind Socialism AI is the notion that Marxists cannot ignore technological innovations, nor grant the capitalist class a monopoly on their use. The piece introducing the chatbot states, “Socialism AI represents a principled and scientifically grounded effort to harness the most advanced tools of human cognition for the emancipation of humanity.”[43] In his open letter to Tony Williams, North states, “The real question is whether the working class will leave these powerful tools entirely in the hands of corporations, states and the military, or whether it will consciously appropriate them for its own emancipatory purposes.”[44]

Here, we can only agree with Mr. North, at least in the abstract. However, the analysis of Socialism AI above shows that these declarations are just that: formulas.

The IC’s “scientifically grounded effort” has little relation to current research or to the opinion of the majority of researchers. Instead, the IC’s leading members make a number of claims about LLMs that promote illusions and stereotypes, actively damaging the working class’s effort to “consciously appropriate them for its own emancipatory purposes.”

On the basis of an uncritical attitude toward LLM capabilities, the WSWS has called on its supporters and members of the IC to use Socialism AI as widely as possible, particularly for educational purposes. This piece has discussed a number of legitimate concerns about the damage that using an LLM for those purposes can do, and has shown how initial research into this question seems to support this intuition.

Most damning, however, has been the completely unprincipled response to critics of Socialism AI. North has launched into one of his characteristic slander and intimidation campaigns against Dimitri, declared himself a defender of science without engaging with a single piece of literature on LLM evaluation, dismissed the first-hand experience of educators regarding LLMs and even compared those with concerns about the technology to vaccine skeptics!

It would have been one thing to release the model, invite feedback and criticism and then try to update and fix it. However, in releasing Socialism AI, the WSWS has pompously declared it ‘historic’, instructed anyone who will listen to use it for anything they see fit, and accused critics, or even those merely urging caution, of being enemies of science and the working class. Then they have the gall to wonder why the so-called ‘pseudo-left’ finds them a laughing stock.

The IC’s complete adaptation to AI hype and close-mindedness towards critiques are symptoms of a deeper disease. Over 20 years ago, Steiner reminded the IC of Trotsky’s warnings about neglecting the study of dialectics and the struggle against empiricism and positivism. Having chosen to ignore those warnings, the IC has slipped further and further into an uncritical adaptation to the positivistic notion that new questions in the Marxist movement can be overcome without conceptual thinking or engagement with the historical circumstances of their development.

To put this all succinctly: Yes! Marxists must appropriate new technologies for their own ends. Yes! LLMs are not inherently pro-capitalist. But, no! Marxists do not uncritically adapt to the marketing claims of monopoly capital!

A deep understanding of LLMs does not mean just putting a minus where the capitalists put a plus, but directly engaging with scientific research in all of its nuances, positive and negative. Marxists will surely find highly creative and efficient ways to appropriate LLMs and future advances in computer technology. However, the main point here is not technological, but political. Trotsky outlined a vision for the raising of the technical level of the young USSR in Literature and Revolution, writing, “The main task of the proletarian intelligentsia in the immediate future is not the abstract formation of a new culture regardless of the absence of a basis for it, but definite culture-bearing, that is, a systematic, planful and, of course, critical imparting to the backward masses of the essential elements of the culture which already exists. It is impossible to create a class culture behind the backs of a class.”[45]

Marxists must respond to technology on the basis of a detailed understanding of its historical development, a sober conception of its abilities and drawbacks, and a view to how it impacts social relations both inside a political organization and in larger society generally. This is necessary to inform the working class about exactly what LLMs are, how they work, and what their concrete benefits and dangers are. Given how novel the technology is, the answers to these questions are not always going to be easy or immediately available. This uncertainty should give rise to a tolerance for heterodox opinions of the technology and its value. The WSWS’s approach of applying abstract Marxist formulas to tech headlines is anathema to this.

Reflecting on the superficiality of the IC’s engagement with the scientific and political issues surrounding LLMs, coupled with the fanfare surrounding Socialism AI’s announcement, one cannot help but feel that the whole effort was essentially another attempt at a short-cut. Jumping on the AI bandwagon will probably have gotten the WSWS a few more clicks and engagement for a limited period. It also promotes the illusion that Socialism AI, and the new forms of party activity it will give rise to, will be the final catalyst for the mass influx of the working class into the SEP (which North has been insisting is just around the corner for literally decades now). Unable to carve out a real presence among the working class, the IC is forced into gimmicks of this kind. As Trotsky once quipped, scratch a sectarian and you’ll find an opportunist.[46]

 

NOTES



[1]    To readers unfamiliar with the alphabet soup of the WSWS and the organisations affiliated to it: the US SEP is the American section of the ICFI, the international body which publishes the WSWS.

 

[2]    Welcome Socialism AI: A historic advance in the political education of the working class

https://www.wsws.org/en/articles/2025/12/12/gpid-d12.html

 

[3]           Workers and youth in Australia and New Zealand urge the use of Socialism AI (https://www.wsws.org/en/articles/2025/12/16/szqn-d16.html) and Australia postal worker encourages use of Socialism AI (https://www.wsws.org/en/articles/2025/12/19/jalz-d19.html)

 

[4]           On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? https://dl.acm.org/doi/10.1145/3442188.3445922

 

[6]     The ABC of Materialist Dialectics

https://www.marxists.org/archive/trotsky/1939/12/abc.htm

[7]           It goes on to state, “socialists must expose the political limitations of Hamas: its bourgeois-nationalist orientation, religious reactionary outlook on women’s and democratic rights, and inability to emancipate the working class. Trotskyists do not romanticise nationalist leaders; they criticise them and fight to win the masses, including those mobilised by Hamas, to proletarian politics.”

 

[8]    Statement of Principles of the Socialist Equality Party

https://www.wsws.org/en/special/pages/sep/us/principles.html

 

[10]      See the article by Dan Lazare, Terminal stupidity ~ Permanent Revolution. Lazare notes that,

“… why say one thing in 2002 and another in 2025?  Even though the ICFI has been politically disoriented for decades it was still capable of writing about Hamas in a way that was honest and smart.  Two decades later, any such capacity is gone – vanished, kaput.  Instead, the heirs of Healy’s legacy are indistinguishable from thousands of other mixed-up radicals eager to whitewash Hamas’ crimes in the belief that it will somehow bring about Palestinian liberation.  But it won’t.” 

 

[11]   Anatomy of a sect: ICFI expels a leading member of French section

http://forum.permanent-revolution.org/2024/09/anatomy-of-sect-icfi-expels-leading.html

 

[12]    Socialism AI finds Sam Tissot ‘Not Guilty’ and demands a review of his expulsion

http://forum.permanent-revolution.org/2025/12/socialism-ai-finds-sam-tissot-not.html

[13]                    A provocation that failed: On Alex Steiner’s attempt to discredit the ICFI’s defense of Ukrainian Trotskyist Bogdan Syrotiuk (https://www.wsws.org/en/articles/2025/02/04/mjza-f04.html). You can find our response to the various accusations in this article here (http://forum.permanent-revolution.org/2025/02/disentangling-another-wsws-web-of.html).

 

[14]   Socialism AI goes live on December 12, 2025

https://www.wsws.org/en/articles/2025/12/08/jfjv-d08.html

 

[15]    It is an interesting exercise to ponder where North got the notion of augmented intelligence from. According to the tech consultancy Gartner, “Augmented intelligence is a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making and new experiences (https://www.gartner.com/en/information-technology/glossary/augmented-intelligence ).” Interestingly, the term also appears in far-right tech investor Marc Andreessen’s “techno-optimist” manifesto (https://a16z.com/the-techno-optimist-manifesto/). Whatever North’s inspiration, the term is hardly any more technically informative than “artificial intelligence.” Its use seems to have more to do with encouraging people to get over their reservations about using LLMs than introducing any important analytical distinctions.

[16]         Link to Dimitri’s comment can be found here (http://disq.us/p/34odljy  ).

 

[17]   Crackpot philosophy and double-speak: A reply to David North

            http://forum.permanent-revolution.org/2015/09/crackpot-philosophy-and-doublespeak.html

 

[18]         Interesting examples of this tendency can be found in these two papers: https://arxiv.org/pdf/2510.02125 and https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf 

 

[19]         There is a very loose analogy between the inference computation these types of models perform and human reasoning. The models break inputs down into smaller pieces and work through them step by step before generating an output – but the basic framework of next-token prediction is unchanged.

 

[22]   OpenAI cofounder tells new graduates the day is coming when AI 'will do all the things that we can’

https://www.businessinsider.com/openai-cofounder-ilya-sutskever-ai-graduation-speech-university-of-toronto-2025-6

 

[23]   The Eye of the Master: A Social History of Artificial Intelligence, Matteo Pasquinelli

            https://www.versobooks.com/en-gb/products/735-the-eye-of-the-master

 

[24]         In a second piece, discussed in more depth in the next section, North does walk back some of these claims with regards to LLMs’ capacity for artistic creativity. He writes, “Augmented Intelligence does not ‘think’ and ‘create’ in the way human consciousness does… it cannot anticipate, or ‘know’ in an artistic sense.” This limited about-face creates more questions than answers. If AI understands factual regularities and logical relationships, why can’t it think in the way a human consciousness does? If LLMs cannot reliably recreate or augment artistic labour, why should they be able to reliably replace or ‘augment’ educational labour?

 

[26]         The Downward Spiral of the ICFI discusses the smear campaign launched by the ICFI in response to Marxism without its Head or its Heart and has never been responded to by North or anyone else in that organisation. The full work can be found here: http://forum.permanent-revolution.org/2009/11/downward-spiral-of-internatioanl.html 

 

[27]         Welcome Socialism AI: A historic advance in the political education of the working class https://www.wsws.org/en/articles/2025/12/12/gpid-d12.html 

 

[28]         You can find a critique of the WSWS’s continuous use of the metaphor of “a new stage” from all the way back to the New York Transit strike in 2005 in Chapter 5 of Marxism Without its Head or its Heart (https://permanent-revolution.org/polemics/mwhh_ch05.pdf ). One wonders how many new stages have been declared by the WSWS in the intervening two decades.

 

[29]         Welcome Socialism AI: A historic advance in the political education of the working class (https://www.wsws.org/en/articles/2025/12/12/gpid-d12.html )

 

[30]         Science vs. suspicion and fear: An Open Letter to a critic of Socialism AI https://www.wsws.org/en/articles/2025/12/21/bzhq-d21.html

 

[31]   Technology and the working class: Responding to an opponent of Socialism AI

            https://www.wsws.org/en/articles/2025/12/19/thbn-d19.html

 

[32]         Bastani, Hamsa and Bastani, Osbert and Sungu, Alp and Ge, Haosen and Kabakcı, Özge and Mariman, Rei, Generative AI Can Harm Learning (July 15, 2024). The Wharton School Research Paper, Available at SSRN: https://ssrn.com/abstract=4895486 or http://dx.doi.org/10.2139/ssrn.4895486

 

[33]   Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

            https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

 

[34]         It should also be noted that it is by no means normal for a Marxist to defend science in this manner. As has often been the case across multiple disciplines, the correct points of view are not necessarily those held by the majority of researchers at any one time. The job of a Marxist is to understand the major controversies within a given discipline in their historical context and stake out a dialectical materialist view on that basis.

 

[35]   AAAI 2025 Presidential Panel on the Future of AI Research Published March 2025

https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf

 

[36]         http://disq.us/p/34osuvl

 

[37]         Most of the comments in the WSWS comment section on pieces dealing with Socialism AI are quite alarming. Many pile on to North’s views of AI uncritically, while previous AI skeptics thank North for leading them back to the right path without giving any indication of which arguments caused them to change their mind.

In a comment responding to North’s open letter, Tony Williams quips, “I felt I was in the world of the Cultural Revolution horrendously described by Jung Chang in WILD SWANS (http://disq.us/p/34oji6n ).” Even if this is tongue-in-cheek, he’s not far off. Some of the comments responding to Dimitri are worthy of internet trolls. Unfortunately, I think the blame for this lies squarely with Mr. North – this is exactly the sort of low-quality engagement you solicit by engaging in intimidation and name-calling rather than honest argumentation.

 

[38] AI-Generated “Workslop” Is Destroying Productivity

    https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

 

[39]         At the time of writing, the most recent article in this series was What must be done to prepare a nationwide general strike against the Trump administration? https://www.wsws.org/en/articles/2026/01/29/esds-j29.html

 

[40] Sectarianism, Centrism and the Fourth International

https://www.marxists.org/archive/trotsky/1935/10/sect.htm

 

[41] How AI Destroys Institutions, Boston Univ. School of Law Research Paper No. 5870623

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623

 

[42]         Ibid.

 

[43]         Welcome Socialism AI: A historic advance in the political education of the working class https://www.wsws.org/en/articles/2025/12/12/gpid-d12.html

 

[44]         Science vs. suspicion and fear: An Open Letter to a critic of Socialism AI https://www.wsws.org/en/articles/2025/12/21/bzhq-d21.html

 

[45]         Literature and Revolution, Chapter 6: Proletarian Culture and Proletarian Art https://www.marxists.org/archive/trotsky/1924/lit_revo/ch06.htm

 

[46]    Paraphrased from Sectarianism, Centrism and the Fourth International

https://www.marxists.org/archive/trotsky/1935/10/sect.htm

