Confusing AI with OpenAI: Separating Tech from Its Capitalist Controllers

Recently, social media has been awash with “Ghibli-fied” versions of netizens’ selfies, family pictures and travel memories, created using OpenAI’s latest image-generation capabilities. Some have taken a more controversial turn, such as the image posted by the White House on social media of the arrest of an alleged drug dealer, or the cartoonification of an image from the Babri Masjid demolition on Indian Twitter. The trend sparked a wave of online debate about the ethics of AI, with fans quoting Studio Ghibli co-founder Hayao Miyazaki’s well-known disgust at and opposition to AI’s appropriation of artists’ styles. Miyazaki had famously called such technologies an “insult to life itself.”

AI, copyright and invisible labour

This is not the first time that ethical issues, especially around copyright and intellectual property rights, have dogged AI; rather, this has been the overwhelming debate in recent years, ever since OpenAI released ChatGPT, powered by GPT-3.5, for public use. Several dozen lawsuits have been filed in the United States over the last few years alleging that AI developers misused copyrighted material to train their models. For example, a group of 17 authors, including prominent novelists like George R.R. Martin and John Grisham, sued OpenAI in mid-2023 for ingesting their books into ChatGPT’s training data without consent. In another case, The New York Times and other publishers sued OpenAI, claiming the company “exploited the newspaper’s content without permission or payment” when scraping text to train its models. AI companies argue that training-data use falls under legal exceptions like fair use, citing precedents such as the Google Books case and text-and-data-mining allowances. However, courts have yet to offer a definitive ruling on whether training AI models on copyrighted data constitutes infringement.

Perhaps the most significant instance of challenging Big AI-Tech has been that of Suchir Balaji. The young Indian-origin engineer and whistleblower helped build OpenAI’s flagship models but grew uneasy with the company’s data practices. Balaji was concerned that scraping billions of online writings without permission “violated copyright law”. He had spoken to The New York Times and the Associated Press, offering to testify and providing what he said were “unique and relevant documents” on OpenAI’s willful infringement of copyrighted material. Shockingly, on November 26, 2024, Balaji was found dead in his San Francisco apartment at age 26. His untimely and suspicious death has cast a chilling shadow over the ongoing copyright battles.

Even before Balaji’s whistleblowing came to public attention, significant ethical concerns had already surfaced, chief among them the use of pirated archives like LibGen and Sci-Hub by major AI firms such as Meta and OpenAI to train their early models. Meta reportedly downloaded 81.7 terabytes of data via torrents from LibGen, Sci-Hub and similar databases, while, in a parallel lawsuit, authors suing OpenAI obtained records indicating that OpenAI’s early models (before 2022) were also trained on content from LibGen. The hypocrisy is hard to miss: while the original platforms for pirated knowledge face relentless legal challenges, AI companies using the same data to train their models for free are hailed as technological pioneers. Sci-Hub and LibGen have long been pursued by academic publishers like Elsevier; US courts have issued multi-million-dollar judgments against them and banned their domains. In India, where researchers have mobilized in support of Sci-Hub’s mission to democratize knowledge, a high-profile lawsuit by major publishers is ongoing. The lesson, it seems, is that profit motives will always trump the need for accessible information.

As such, one might be inclined to assume that AI companies are staunchly opposed to copyright law and the restrictions imposed by intellectual property regimes, since these hinder access to quick, free data. Access to large amounts of data leads to better machine-learning models and hence higher profits for the companies. However, a cinematic irony played out recently with the launch of the Chinese state-of-the-art model DeepSeek, which outperformed American AI models while using significantly less time and fewer resources. This immediately raised suspicions: how could a smaller player leapfrog the likes of OpenAI? In early 2025, OpenAI accused DeepSeek of industrial espionage by algorithm, claiming DeepSeek “distilled” its AI models, meaning DeepSeek used outputs from OpenAI’s systems (such as via the ChatGPT API) to train its own model. This technique, called model distillation, in which a smaller “student” model learns from the outputs of a larger “teacher” model, led OpenAI to decry the practice as intellectual property theft. OpenAI’s response was to collaborate more closely with the U.S. government to “protect our IP” and treat advanced models as strategic assets. The nationalist undertone is clear: American AI capital must be shielded from Chinese rivals.
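The mechanics are simple enough to sketch in a few lines. Below is a minimal, illustrative Python/PyTorch sketch of distillation; the toy “teacher” and “student” networks and the random data are placeholders of my own, not anyone’s actual systems. The student is trained to imitate the teacher’s output distribution, typically via a KL-divergence loss on temperature-softened outputs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a larger "teacher" and a smaller "student" classifier.
# (In the alleged DeepSeek scenario, the teacher's outputs would come
# from a deployed model's API rather than a local network.)
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

for step in range(100):
    x = torch.randn(16, 32)      # placeholder input batch

    with torch.no_grad():        # the teacher is only queried, never trained
        teacher_logits = teacher(x)

    student_logits = student(x)

    # KL divergence between softened teacher and student distributions:
    # the student learns to reproduce the teacher's behaviour.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that the procedure needs only the teacher’s outputs, never its weights or training data, which is why mere API access was at the heart of OpenAI’s accusation.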

While Sci-Hub’s founder, Alexandra Elbakyan, argues that knowledge should be freely available – “if the law forbids that, perhaps the law should change” – AI corporations, in contrast, want exclusive rights to monetize that knowledge once it is fed into their models. While Aaron Swartz, the young techie who tried to provide open access to paywalled research papers from JSTOR, took his own life at the age of 26 after being hounded relentlessly by federal prosecutors, AI stalwarts are commended as transformative tech revolutionaries. Suddenly, copyrighted content and the consent of authors and artists become minor collateral in the cause of technological advancement.

Furthermore, there is much to be examined behind the much-touted “intelligence” of these machines. Behind the illusion of “intelligence” are thousands of human laborers: labeling data, moderating toxic content, and training models for cents per hour. Workers in Kenya, for example, were paid less than $2 an hour to label graphic and traumatic content for OpenAI’s safety filters. These workers are sometimes called the “ghost workers” of AI, because their contributions are invisible in the final product. This hidden labor force is largely outsourced to developing countries or engaged as gig workers with few protections. Scholars Mary L. Gray and Siddharth Suri, in their book Ghost Work, document how this on-demand, invisible workforce has become a new global underclass: workers have no job security, perform mind-numbingly repetitive tasks, and are subject to constant algorithmic surveillance and rating. They effectively function as humans acting like machines to make the machines seem smarter. The marvel of AI, it turns out, is assembled on the backs of real human labor rendered invisible and on the stolen works of artists and writers.

The (true) nature of AI

However, this brings us to a critical juncture, where one wonders: is AI doomed to inherently “steal” content and labour? Is AI inherently exploitative? Do technological advancements need speed-bumps for the sake of greater humanity? Does AI inevitably violate privacy or naturally concentrate power? Or, to generalise, will machines (thinking machines or otherwise) always be exploitative of human labour? Will machines always displace labour, immiserate labour, squeeze the gains that were meant to go to labour? Careful consideration reveals that debates on the ethics and implications of AI are not new at all; they are merely the latest subset of the massive corpus of existing work on labour and capitalist relations. Capitalism is not merely a backdrop here – it is the engine. Under capitalism, technologies are developed not for human flourishing but for profit, efficiency, and control. To imply some sort of mythical technological determinism is to absolve and overlook the capitalist machinery in which all technology today is ensconced.

One must keep in mind that technology neither develops nor functions in a vacuum; rather, it takes shape within an economic system that is today brutally capitalist. Far from being neutral, AI’s trajectory is guided by oligopolistic tech corporations closely intertwined with state and even military power. The Marxian perspective suggests that any technology under capitalism will be put to exploitative use: owners of capital deploy machines to maximize profit and control labor, rather than to benefit humanity impartially. It is the same debate that arose as far back as the cusp of the Industrial Revolution and the invention of the cotton-spinning mill. When the mill first emerged, it was hailed as a technological marvel, but the machine became “not a means to lighten labor but a means to lengthen it” (Marx, Capital, Volume I). Writing as early as 1845, Engels documented in The Condition of the Working Class in England how skilled weavers were reduced to mere appendages of machines. Technology in itself did not necessitate this misery; its insertion into a capitalist economy did. Capitalists have historically used technology as a managerial weapon against the working class. Just as the power loom impoverished the artisan, the chatbot undermines the call-center worker. Is one to conclude that the machine displaced labour of its own will, or did the owners of the machines stand to gain from labour displacement and immiseration?

The profit motive drives AI companies to gather data without consent (to better target ads or train models), to prioritize speed to market over safety, and to externalize costs like environmental damage and labor exploitation – none of which is exclusive to the AI industry; it is the norm for all industries under capitalism. It is a poignant irony that OpenAI began as a nonprofit with a mission of benevolent AI, only to morph into a capitalist enterprise valued at tens of billions of dollars, now mired in lawsuits and controversies. The technology did not change its stripes; the economic context did. A machine-learning algorithm doesn’t decide to rip off artists or underpay workers – corporations do. “Cloud storage” and data centres do not decide to occupy and divert local water resources; the corporations that own them choose to impoverish local communities in exchange for cheap resources. AI does not displace labor because it must, by some natural tendency; it displaces labor because its owners can use it to reduce wage costs and increase surplus value. As Evgeny Morozov and other contemporary theorists argue, today’s “tech feudalism” is just capitalism wearing new digital clothes, not a fundamentally new epoch.

The scenario of West African or South Asian workers performing cheap labor for Western tech firms is a direct descendant of older forms of imperialist exploitation – from colonial plantations to today’s electronics factories. Whether it is cobalt miners in the Congo digging up material for smartphone batteries or clickworkers in the Philippines screening social media content, the pattern is consistent: global capital seeks the lowest-cost labor and dumps the heaviest burdens on those least able to resist. AI, as currently pursued, unfortunately extends this pattern rather than breaking it. In the Industrial Revolution, children lost fingers and miners got black lung; in the AI revolution, workers lose their sanity staring at trauma or suffer ergonomic injuries from ceaselessly entering data to feed AI models. The common denominator is the capitalist mandate to reduce labor costs and increase output, no matter the human consequence. Capitalism has likewise historically externalized environmental costs. Early factory owners did not care that they were dumping dye into rivers or filling city air with coal soot; those costs were borne by the public in disease and ecological damage. In the same way, today’s AI leaders operate in a largely deregulated space where there is little penalty for emitting carbon or draining water. The climate impact of AI is “no one’s problem” in the corporate calculus; or rather, it will be humanity’s problem down the line, not the company’s problem today.

It is not merely in their manner of operation that AI enterprises resemble capital of old; one can find in the very beginnings of the AI endeavour an echo of the beginnings of capitalism itself. AI companies, backed by billions in venture capital, have carried out a form of digital primitive accumulation of collective knowledge, akin to what early capitalists did to land: expropriating, privatising and enclosing it. Without consent, compensation, or accountability, firms like OpenAI and Meta have vacuumed up vast corpora of copyrighted and creative works to feed their models. This is primitive accumulation by other means – not through the sword or enclosure acts, but through web crawlers and machine-learning pipelines. David Harvey, in The New Imperialism, called this “accumulation by dispossession,” describing how late capitalism continues to expand through new enclosures, privatizations, and expropriations of public goods. AI enterprises’ practices are a textbook example of this logic. The labor of writers, the imagination of artists, the knowledge of researchers – these are not treated as contributions to a shared cultural project, but as raw inputs to be mined. The tragedy is compounded by the hypocrisy: the very companies that have pirated millions of works are also lobbying for stronger intellectual property protections – for themselves. Certainly, AI needs data to train and improve its underlying models; that is the only “tendency” of AI, the technology. That the data is stolen, pirated, acquired and then sold at a price is a tendency of the owners of “AI capital”.

The conflation of AI and OpenAI (or the use of “AI” as shorthand for the AI techno-oligarchy) has led many to believe that the technology itself must be put to an end if human life and integrity are to be valued. In several critiques, the differentiation between “AI and OpenAI” is missed entirely, anthropomorphising AI into something with vile anti-human(ity) ambitions. Phrases like “AI is stealing,” “AI is replacing us,” “AI is lying” abound. But AI is not a person, nor a political actor; it is a tool, a set of mathematical architectures and data pipelines that is owned by persons and political actors. When critics focus on “AI” in the abstract, as though it were a rogue demon or a sentient villain, they let capital off the hook. This discursive displacement is politically devastating. The confused critique of AI is easily manipulated to look like enmity toward technological advancement itself, so that any attempt to critique the actual political economy of AI gets dismissed as “anti-technology,” “neo-Luddite,” or “standing in the way of progress.” In this way, the language itself becomes a tool of ideological control.

Similarly, the popular Hollywood refrain that “AI might destroy humanity” is a preemptive misdirection. These narratives, while entertaining, train us to look for danger in the wrong places. While we worry about hypothetical robot rebellions, real harms occur: when data laborers in the global south are paid pennies to label traumatic content for AI training, when workers are displaced by automation with no adequate social support, when artists and creators have their work scraped without consent or compensation, when communities are subjected to algorithmic discrimination in housing, healthcare, and criminal justice, or when tech companies do not think twice before exploiting and polluting natural resources. None of these outcomes is inherent to the technology; they follow from deliberate decisions by Big Tech owners about labor conditions and capital flows, decisions that determine how these systems operate and who benefits from them. In short: the fantasy of AI as a dangerous god serves to conceal the reality of capital as a dangerous master.

Technologies embody specific power arrangements and social relationships. The question is not whether to accept or reject “AI” in some abstract sense, but rather: whose interests does this specific implementation serve? What values does it encode? How does it distribute benefits and harms? Breaking free from this conflation requires a more precise vocabulary and a materialist analysis of the political economy of AI. By separating AI as a technological possibility from AI as currently implemented by specific corporations, we can imagine and advocate for alternative technological futures.

Once freed from abstract criticisms of technology, the critique naturally redirects towards the capitalist ownership that dictates technology’s exploitative use. Yet critics who identify capitalism as the root of technological exploitation are met with another common fallacy: the assertion that without capitalism, innovation would cease.

Could we have AI without OpenAI?

If AI, or any technology for that matter, has no inherent traits and is defined by its ownership and socio-economic context, one may ask: what does non-capitalist-controlled technology look like? Will a non-capitalist society see technological advancement or innovation when massive profit accumulation is not the driving motive? This question is not just theoretical. It reveals how deeply capitalism has naturalized itself – so deeply, in fact, that many cannot imagine scientific progress unless it is wearing a corporate logo. But this is a historical illusion. Innovation, even in its most world-changing forms, has never been driven solely, or even primarily, by profit. As Mariana Mazzucato documents in The Entrepreneurial State (2013), the most “revolutionary” capitalist innovations often rest on decades of state-funded risk-taking and basic research. Once the groundwork is laid by public institutions, capital swoops in to monopolize, commodify, and enclose. In other words, the role of profit is not to produce innovation but to seize it. Moreover, the assumption that capitalism rewards innovation is increasingly untrue. Today’s corporate tech landscape is dominated not by innovation but by rent-seeking, IP hoarding, and monopolistic control.

So what might technology look like in a society driven not by profit, but by human need, ecological balance, and democratic control? The ethical ills that appear to be caused by technology can be addressed by changing the ownership and governance of that technology. Instead of proprietary systems controlled by a few big firms, we could have openly shared AI research and models whose benefits (and decision-making) are collectively held. Rather than AI being used to cut jobs and boost stock prices, it would be used to eliminate drudgery and make good on the promise of automation: that labor-saving inventions lighten the toil of everyone. We already see hints of different models in the world of open-source software and volunteer-driven projects. For example, the distributed volunteer effort that produced BLOOM, a large language model released by the international BigScience research collaboration, shows that alternatives to the Big Tech model are possible. BLOOM was trained on public research infrastructure with an emphasis on transparency and diversity of training data, not secret corporate data vaults. While BLOOM is still under a responsible-use research license, it demonstrates how a cooperative approach can succeed where each contributor has a say in the project.
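To illustrate what “openly shared” means in practice: anyone can download and run a BLOOM checkpoint locally with open-source tooling, no proprietary API or vendor permission required. A minimal sketch, assuming the Hugging Face transformers and torch packages are installed, using the smallest public checkpoint so it runs on ordinary hardware:

```python
# Minimal sketch: running an openly released model locally.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "bigscience/bloom-560m" is the smallest public BLOOM checkpoint;
# the weights download openly, with no API key or paid account.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Knowledge wants to be", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The weights, tokenizer, and training documentation are all inspectable, so no single vendor controls access; the contrast with a closed, metered API is precisely the point.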


In a non-capitalist world, technology forms the infrastructure that handles the burdensome, repetitive, dangerous, and dull work that has historically drained human potential. In such a world, the displacement of labor would no longer mean despair but liberation. Collective public ownership of technology would ensure that AI’s motives are realigned to public well-being. As Herbert Marcuse wrote in One-Dimensional Man, in a liberated society “technology would cease to be the instrument of domination and become the instrument of liberation”. This is the horizon we must reclaim – not anti-technology, but post-capitalist technology. Not the end of machines, but the end of machines as tools of capitalism.

Hanan Irfan is a data science and AI engineer with a postgraduate background in heterodox economics from Jawaharlal Nehru University.
