Ruthless Criticism

Translation of excerpt from the article “‘Künstliche Intelligenz’ – die neue Wunderwaffe in der Konkurrenz um Weltmarkt und Weltmacht” from GegenStandpunkt 4-23

Artificial Intelligence:
New weapons of competition

AI programs can be trained with a huge and constantly expanding database to process a task-specific selected data set in such a way that they assign tags specified by a programmer with a high degree of accuracy or generate another statistically corresponding data set from it. So there you have it, the new universal technology that can be used to automate human activities that previously fell under the category of “mental labor,” or at least required a conscious and decisive subject. The applicability of the technology, which replaces activities involving recognition, understanding and decision-making, seems almost limitless.

Applicable to commerce and manufacturing, transportation, credit agencies of all kinds, medicine, finance, government administration, jurisprudence and warfare, AI software shows how much schematism, mindless rule-following and routine sorting of cases into ready-made boxes make up the intellectual activities that underpin the capacities of a modern nation – from the productivity of its economy, to the efficiency of its governmental and social institutions, to its military might.

However, the planned or already installed applications of this type of software also show who benefits from it and for what purpose. Apart from the small helpers built into cell phones and the assistance systems for cars that have already become commonplace, normal citizens experience this progress mainly as a danger: the automation of all kinds of brain work threatens millions of people’s source of income. It is a tool of capital for cheapening the labor factor, i.e. for further separating wage-earning humanity from the wealth that this collective produces, and a tool of political rule in its various fields of social administration, surveillance and control, as well as in its deployment of power externally.

Rationalizing production

The AI sector can make an interesting offer to capitalist industry and its insatiable need to reduce the manufacturing costs of its products in order to accommodate a larger profit margin in the sales price. The task of saving on paid employment, i.e. remuneration for labor, has its measure and limit in the investment cost of the machinery used to make labor superfluous: it must be less than the cost of the labor that it replaces. AI programs are expensive to create and require large computers and databases, but once created they are a mundane piece of software that can be purchased cheaply if used widely enough, and they do not require any expensive hardware on the part of the user.

On the labor side, the fact that it can be replaced by an AI program implies a tremendous narrowing of the mind, namely the reduction of brain work to a single function. The capitalist organization of work has already done a lot to people before the new art of programming can prove what it does for profit. Testing, classifying and sorting activities that a worker performs by sight or some other form of detection are now being taken away from many – along with their income. “Where a person can make a decision based on images, artificial intelligence can do the same.” (FAZ, January 5, 2018)

A human quality inspector, for example, has to inspect workpieces on a conveyor belt in order to distinguish unflawed specimens from those with dents or paint damage. The perceptual task leads directly to a decision whether the product should be rejected. This dull routine work demands attention and concentration, strains the – simultaneously underutilized – mind and does not tolerate distraction or fatigue. The resulting errors in judgment are written down by capital in the list of defects of its labor power – apart from its main shortcoming, the wages it costs. The AI program promises a remedy.
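How little is left of the inspector’s “judgment” can be made concrete with a minimal sketch: a classifier that reproduces past accept/reject decisions by finding the closest labeled example. All feature names, numbers and thresholds here are invented for illustration; a real system would extract features from camera images with a trained neural network rather than take two hand-picked measurements.

```python
# Toy sketch: automated visual inspection reduced to its statistical core,
# a 1-nearest-neighbor classifier over labeled past decisions.
# All data is invented for illustration.

def train_inspector(examples):
    """examples: list of (features, label) pairs, label 'ok' or 'reject'."""
    return list(examples)  # nearest-neighbor "training" just keeps the data

def inspect(model, features):
    """Classify a workpiece by its closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda ex: dist(ex[0], features))
    return label

# Labeled past decisions: (dent depth in mm, paint-gloss score)
training = [
    ((0.0, 0.95), "ok"),
    ((0.1, 0.90), "ok"),
    ((2.5, 0.40), "reject"),   # deep dent, dull paint
    ((1.8, 0.85), "reject"),
]

model = train_inspector(training)
print(inspect(model, (0.05, 0.92)))  # near the "ok" examples -> "ok"
print(inspect(model, (2.0, 0.50)))   # near the "reject" examples -> "reject"
```

The point of the sketch: the “decision” is nothing but a distance comparison against recorded prior verdicts; no attention, no fatigue, and no understanding anywhere in the loop.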

The better-paid jobs in management or in the medical and insurance sectors are also the focus of this form of rationalization. The special expertise of these employees is replaced by a statistical model. The insurance industry is discovering that AI machines can significantly reduce its expenses for clerks, speed up processing, and make it easier to detect errors and fraud. The automated submission of claims via an app on the insurance customer’s cell phone has been around for some time. The identified “automation gap” lies in the subsequent evaluation of photos. This is where AI software steps in. The recorded decisions on applications from the last few decades, condensed into an AI model, make the clerk’s experience largely superfluous. It is sufficient for them to correct the errors afterwards, which are reported back by the recipients of the incorrect decisions – the inherent error-proneness of AI is not a problem given its higher capitalist benefit. Where expensive skilled labor cannot be replaced altogether, its productivity can be increased and the number of clerks required reduced by decreasing routine tasks – not to relieve them, which goes without saying from a capitalist perspective, but to burden one employee with the work done by two workers in the past. All business correspondence and communication with customers becomes subject to automation with such tools.

Even higher-level professions that previously required a degree and judgment in dealing with language and what it relates to are being rendered superfluous by AI programs, devaluing the corresponding qualifications. The work of a translator is increasingly being shifted to programs. Such AI models for language translation are trained by pairing all the digitally available texts of two languages. Statistics do the rest. The AI model assigns each word in one language to one in the other language. The result may give the impression that the way a human translator works is being replicated by a machine. But the human works differently: he understands a sentence and expresses its meaning in the other language, which of course he also understands. The AI program, by contrast, determines what, according to its training corpus, the most likely word sequence is, because it is the one that occurs most frequently in the statistically evaluated word groups of other translations.

Intrinsic limits

The intrinsic limits of the “data-based” approach are also evident in this application: when faced with a formulation that does not occur in the training data or a linguistic phrase that is semantically plausible but linguistically unusual, the machine delivers a meaningless or garbled result. In general, the translation programs available today require correction by a human interpreter afterwards to produce a good translation, but in everyday life the automatically generated version is often sufficient.
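Both the frequency-based assignment and its intrinsic limit can be sketched in a few lines. This toy reduces “translation” to picking the most common counterpart of each word in an invented aligned corpus; anything absent from the training data yields a garbled placeholder, since no statistics exist for it. Real systems work on word sequences, not isolated words, but the principle is the same.

```python
from collections import Counter

# Toy sketch of purely statistical translation: each source word is
# mapped to its most frequent counterpart in an (invented) corpus of
# aligned German-English word pairs.

aligned_pairs = [
    ("haus", "house"), ("haus", "house"), ("haus", "home"),
    ("hund", "dog"), ("hund", "dog"), ("hund", "hound"),
]

counts = {}
for src, tgt in aligned_pairs:
    counts.setdefault(src, Counter())[tgt] += 1

def translate_word(word):
    """Return the most common counterpart, or a placeholder for a word
    absent from the training data -- the intrinsic limit in miniature."""
    if word not in counts:
        return "<?>"  # no statistics, no result
    return counts[word].most_common(1)[0][0]

def translate(sentence):
    return " ".join(translate_word(w) for w in sentence.split())

print(translate("haus hund"))   # most frequent pairings: "house dog"
print(translate("haus katze"))  # "katze" never seen: "house <?>"
```

Nothing in the procedure distinguishes a correct rendering from nonsense; both are just lookups in a frequency table, which is why a formulation missing from the table produces garbage rather than an admission of ignorance.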

Generative language models (such as the Generative Pretrained Transformer, GPT for short) go one step further. They collect the statistical use of words across all textually available material in a language, independent of any particular application. Given a coherent text of currently up to 4,000 words, they generate the word that most frequently follows it in the training data set. This language model is a preliminary product that needs to be trained further for specific applications with specific types of text. With a little special training, it can, for example, replace countless call center employees with their already limited repertoire of answers. All these applications have one thing in common: the generated text is a mindless statistical product. Often enough, the reader can assign a meaning to it, but that is then his work. So it is quite inappropriate to trust in the accuracy of the generated answers. That is not the task of the program. It delivers a text that is typical in the context of the given keywords.
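The “mindless statistical product” can be demonstrated with the smallest possible generative model: a bigram table that, from an invented training text, always emits the word that most often followed the previous one. Real models condition on thousands of words of context and on learned parameters rather than raw counts, but the character of the output is the same.

```python
from collections import Counter, defaultdict

# Toy sketch of generative text: a bigram model that emits, for each
# word, its most frequent successor in the (invented) training text.

corpus = "the customer calls the customer calls the hotline".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, length):
    """Emit the most frequent successor at each step; no meaning involved."""
    words = [start]
    for _ in range(length):
        successors = follows.get(words[-1])
        if not successors:
            break
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # -> "the customer calls the customer"
```

The output is fluent and typical of the corpus, and precisely for that reason says nothing: whether the generated sentence is true, false or empty is a question the procedure never poses.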

Text generators can also be used, for example, to automate the work of journalists. If, say, the available sports coverage from the past is chosen as the training material for the AI model and current results from games and competitions are linked to the most common phrases and sentence sequences to form readable articles, then the ability to automate this writing reveals how much “intellectual assembly line work” has been carried out so far in the so-called creative professions. It is probably no great loss that this intellectual activity is being replaced. The livelihoods of the people who have carried it out go toward increasing the earnings of the publishing companies, and the reader reads what he always reads anyway.

Automated state

The subjugation of the population to the rule of the state, and thus to the capitalist economic order that it imposes on them, takes place in practice by subsuming the entire life of the citizens under the law. For all types of acquisition, modes of existence and all areas of life, laws stipulate what citizens are allowed to do and must do and what they are entitled to. The state takes care of this; in the event that the rules are violated and disregarded, the police and judiciary get involved.

However, the day-to-day rule of law is carried out by offices and departments where citizens register themselves and their cars, make their identity official, declare and pay their taxes, submit applications if they want to build something, draw a pension, enroll a child in school, or receive any other kind of help. As soon as the dealings with the state bureaucracy that are required of citizens can be handled digitally as a collection of data, administrative practices and even the administration of justice can, in principle, be automated with the help of AI software. After all, it is a matter of classifying the individual with his socially relevant characteristics and concerns – not fundamentally different from sorting workpieces – as a case of legal regulations or administrative provisions and applying the same to him. An algorithm that has been trained with the data from previous administrative practices will also make the most likely, i.e. the most common, assignment to date, and update it for the new case – much faster, of course, and therefore on more cases than the case handler who has become redundant as a result.
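That case handling is “not fundamentally different from sorting workpieces” can be put in code: condense past decisions into a table and always reproduce the most common prior ruling for a case type. The case features and rulings below are invented for illustration; they stand in for whatever attributes an agency records.

```python
from collections import Counter

# Toy sketch of automated case handling: past administrative decisions
# (applicant features -> ruling) condensed into a lookup that always
# reproduces the most frequent prior ruling. All data is invented.

past_decisions = [
    (("employed", "resident"), "approve"),
    (("employed", "resident"), "approve"),
    (("employed", "resident"), "deny"),
    (("unemployed", "resident"), "deny"),
    (("unemployed", "resident"), "deny"),
]

rulings = {}
for case, ruling in past_decisions:
    rulings.setdefault(case, Counter())[ruling] += 1

def decide(case):
    """Return the most frequent past ruling for this case type."""
    return rulings[case].most_common(1)[0][0]

print(decide(("employed", "resident")))    # prior practice: "approve"
print(decide(("unemployed", "resident")))  # prior practice: "deny"
```

By construction, the procedure extrapolates whatever pattern sits in the record of prior decisions, which is also why it perpetuates the biases of that record, a point taken up below.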

The democratic state, which never has enough money for its tasks and ambitions, has the same interest as the capitalists in freeing its coffers from the livelihoods of its employees, making their work more productive and therefore cheaper for itself – even if they do not produce a surplus. So even a long period of training and professionalism do not necessarily protect clerks and decision-makers in government agencies from being replaced by an AI program.

However, it is precisely in this branch of the vigorously pursued digitization of public administration that politicians are encountering a pitfall of AI that is seriously slowing down progress. “Administrative authorities that use AI may have difficulties complying with the obligation to provide reasons under EU administrative law. According to this law, authorities are obliged to justify their decisions to those affected so that they can defend themselves against them if necessary . . . This is a threshold that modern AI systems . . . do not meet according to the current state of the technology. Their use by the public administration is therefore excluded.” (FAZ, August 3, 2022)

However, AI algorithms as sovereign decision-makers deserve mistrust not just because of their inherent opacity and unpredictability, but also because of some results that violate the current dictate of non-discrimination. This is also in the nature of things: since the assessments and decisions made by AI are extrapolations from previous practice, they also perpetuate the prejudices and biases that have taken effect and are a necessary part of the authoritative decisions made by officials about other people. For example, US AI predictions on the recidivism of offenders can only be expected to confirm the negative judgment about people who are both black and poor that is widespread among the police and authorities.

This can’t be permitted in a constitutional state. On the other hand, because political rule must be rationalized, the EU Commission doesn’t stop at excluding AI from legal and administrative practices, but calls for the invention of new algorithms that enable its decisions to be traced and retrospectively assessed. State actions must be able to stand up to the criterion of justice. Under no circumstances should the programmed procedures of governance undermine the basic trust that citizens have in their authorities. A “trustworthy AI” must guarantee that state decisions are accepted. And last but not least, the branded product from Europe should boost the export business.

The second benefit of AI technology for the state is to increase its grip on members of the public. There are limits to their willingness to cooperate and obey the law, and some deviant behaviors escape state control. Automated data checks can help in a thousand places. An AI-enabled “Financial Intelligence Unit” combs through forms for the tax office looking for tax offenses. Applications for reduced-work-hours compensation at the Federal Employment Agency are run through an AI program to detect more cases of fraud than before. The digital traces that citizens constantly generate through their use of the internet, whether they want to or not, are suitable for the sovereign control of the citizenry. Other forms of surveillance within the country – such as video cameras in public spaces – are experiencing an enormous upswing. So far, it has not been the techniques for eavesdropping on or filming passers-by, customers, or public transportation users that have led to bottlenecks in surveillance, but the analysis of the recorded data. AI software not only makes the work of the police and security services cheaper, but also makes it possible to use the massive amounts of image, sound and data material that would be impossible to manage with the manpower of the agencies. The fact that this is now possible with biometric facial recognition and other technologies in turn fuels the agencies’ hunger for data, which is called “intelligence” in English.

What applies to the state domestically applies even more to espionage externally. Algorithms are used to analyze the internet and mobile communications traffic siphoned from all over the world. Based on this global communication data, the USA is developing an AI model in line with its status to predict unrest of all kinds and everywhere – especially unrest that it has not instigated itself.

Intelligent war machinery

At least as important as all the economic and domestic governmental reasons for a modern nation to not miss out on AI developments is its military potential. The ability of AI programs to carry out the steps from data evaluation to reaction more comprehensively and faster than any human processor makes them the current miracle weapon in military technology. Decades of research policy have had an effect here, and the funding programs launched by militarily powerful states leave no doubt about their ambitions.

In order to secure one’s own superiority, it is important to emancipate oneself from the limits of human perception and reaction capabilities. For reconnaissance of the enemy, it is crucial to search for recurring or new patterns on satellite images or radar screens and to deduce at lightning speed where the enemy is located, in what numbers, and with what equipment. AI programs promise to give military command personnel the ability to determine the options for action available in battle in terms of target selection and “means of action” more quickly. Drones, guided weapons of all kinds, unmanned ships and submarines have long been on the market as “intelligent weapons.”

If the superpowers refuse to outlaw autonomous weapons, they will probably also fight their competitive battle for weapons technology superiority in this area and trust their well-funded AI research to gain a decisive advantage. The authors of a manifesto[1] warn the military apparatuses and states they work for that automatic weapons could slip out of their control. The US Joint Chiefs of Staff has responded to these concerns and given assurances: There must and will always be a human at the end of the decision-making chain. That is reassuring: only humans kill ethically. But the researchers’ doubts also remain: can the military leaders ensure that they retain control over their automatic killing machines and the designs for manufacturing them, despite the extremely easy way to proliferate them? “Easy access to powerful AI systems increases the risk of one-sided, malicious use. As with nuclear and biological weapons, it only takes one irrational or malicious actor to cause damage on a large scale. Unlike previous weapons, AI systems with dangerous capabilities could be easily disseminated by digital means. . . . Malicious actors could repurpose AI to be highly destructive, which in itself poses an existential risk and increases the likelihood of political destabilization.”

Political opponents – whether states or militant groups – do not have to “repurpose” anything in order to unleash the destructive potential of appropriated AI warfare technology, but the fact that, once created, it could fall into the wrong hands is its permanent shortcoming which demands all the more control and supervision from good, namely established, actors. Criticism of the perfection of the arsenal of destruction could not be more affirmative: it warns that an opponent could become the danger for “us” that “we” want to be for them. AI specialists that they are, they warn of AI dangers that are not those of AI at all. But that’s not all: in an adventurous twist, they reverse the relationship and declare their devices themselves to be subjects that they believe have roughly the same destructive purposes as the “malicious and irrational actors” from whom they want to see them protected. What makes them think that? “Because we don’t understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we’ve designed that might play some devastating role in our survival as a species.”[2]

The fact that one can’t see how the millions of parameters interact to produce the result invites the author to speculate: he does not want to openly say that this is the case, but it could be that the software, in the course of its constant optimization, is making the transition to something completely unique and beyond our control. No further argument is needed to imagine technology as a new invasive species that competes with humanity for survival.

The authors of the manifesto want to bring precisely this gap under control, but fear being outwitted by the software they have constructed: “Models exhibit unexpected, qualitatively different behavior as their competence increases. The sudden emergence of capabilities or goals could increase the risk of humans losing control of advanced AI systems.”

Unexpectedly, the automaton becomes an agent that wields power against humanity: “AIs that acquire significant power can become particularly dangerous if they are not aligned with human values. Power-hungry behavior can also encourage systems to feign good behavior, conspire with other AIs, outmaneuver supervisors, and so on. From this perspective, inventing machines that are more powerful than us is playing with fire.”

Machine subjects?

The second source of the fear about losing control and the hypostatization of machines into subjects that revolt against humanity is the comparison of the abilities of the two: artificial super intelligence is supposed to be much more powerful and smarter than us; if not today, then soon. This makes sense to people who have always understood and defined thought and will – simply: intelligence – as “problem solving,” as a transformational practice that creates an output adapted to a situation from an input. In this way – on the basis of the false equation of thinking with its useful services – comparisons can be made with machines which also achieve this type of transformation with their probability values.

From there, all kinds of extensions into the world of science fiction are made: a super-intelligence wipes out humanity; or, in the world of socio-psychological total alienation scenarios, warnings point to the danger of “humanity slacking” and falling into a new master-servant dialectic in relation to its devices: it allows itself to be served so well by them that it can no longer do or know anything itself and apathetically allows itself to be governed by its machines. Although it is created by human hands, in the end the technology triumphs over them through the extensive services it makes them dependent on.

Fortunately, the warnings are being heard by those who know they are responsible for saving humanity. These are the very political decision-makers financing the thing and bringing it into the world. The industry leader Open-AI is offering them its help with the big rescue operation: “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”[3] The only way to counter the terrible dangers of super-smart AI is even more, still more sophisticated AI. Really clever, these self-critical guys – or did a power-hungry AI come up with that?


[1] A Center for AI Safety in the USA has written a call for political control and regulation of AI, which hundreds of prominent scientists from various nations and business people in the AI industry have signed (June 2023). In this “manifesto,” “8 Examples of AI Risk” are invoked, from which the following quotes are taken.

[2] Michael Osborne: Professor of Machine Learning at Oxford, The Guardian, May 30, 2023

[3] Open-AI: Introducing Superalignment, July 5, 2023