
150 layoffs in one second: the algorithms that decide who should be fired


You will be fired by an algorithm. It sounds like an ominous prophecy, but that is the fate awaiting most people employed in this hectic first third of the 21st century: being hired and fired by machines, without any human intermediary. Many of them will likely go through this cycle of creative destruction several times over careers that promise to be turbulent. It is the end of the lifelong employment that was common until the close of the 20th century.

In August, Xsolla, the Russian subsidiary of a software and interactive services company based in Los Angeles, carried out an innovative restructuring of its team that attracted the attention of media outlets around the world. Without warning, it decided to fire 150 of the 450 employees at its offices in Perm and Moscow, acting solely on the recommendation of a work-efficiency algorithm that deemed them “unproductive” and “insufficiently committed” to the company’s goals.

It was not the impact of the pandemic, nor the usual “structural reasons”. This time, the cause invoked to justify the mass layoffs was the cold judgment of an artificial intelligence program powered by big data. The move was so drastic and unusual that the company’s CEO and founder, Alexander Agapitov, rushed to tell the Russian edition of Forbes that he did not fully agree with the machine’s verdict, but was obliged to abide by it under the internal protocols agreed with his shareholders’ meeting. He even offered to help the laid-off workers find new jobs as quickly as possible because, in his opinion, they are, for the most part, “good professionals.”

The Xsolla case is one of many examples of modern companies with a disruptive vocation that are incorporating artificial intelligence into their decision-making. What is relatively new is that the roles the machine has taken on this time are none other than those of the director of operations and the human resources and talent management departments.

In ‘Modern Times’ (1936), Charles Chaplin already warned of the dangers of machines at work. Getty Images

That machines would eventually replace human workers is something the nineteenth-century British Luddites already knew, and that Charles Chaplin showed us quite eloquently in the film Modern Times (1936). What we didn’t expect was that the machines would become our bosses.

There is at least one well-known precedent. In 2019, Amazon, the mother of all of today’s disruptive companies, attracted the attention of Bloomberg for its tendency to fire employees based on computerized criteria. On that occasion, one of those affected, Stephen Normandin, was interviewed by the magazine and became a symbol of this seemingly cold and dehumanized procedure.

Normandin, 63, a US Army veteran based in Phoenix, Arizona, had been working for several months as a contract delivery driver for Jeff Bezos’ company when he received an e-mail informing him that his contract had been terminated. The algorithm that tracked his daily activity had found him unfit for the job. A machine had just fired him.

Normandin, who described himself to Bloomberg as “an old-school guy” with a “bombproof” work ethic, considered it a personal affront. For him, it was a “disrespectful and abusive” dismissal, as well as an undeserved one. No one came to explain what criteria had led the artificial intelligence to question his commitment and his competence: “I worked 12-hour shifts at a community restaurant for Vietnamese refugees in Arkansas,” he said. “I’ve proven several times that I’m a disciplined and responsible person. I don’t deserve to be dismissed without being heard, without my circumstances being taken into account and without being given any explanation.” In his opinion, the algorithm fired him because of his age, ignoring factors such as his willingness to work and his excellent physical and mental health, but his attempts to prove this before an arbitration court were unsuccessful.

Spencer Soper, who wrote that article, considers Normandin’s struggle against the machine “a lost war”, the result of a “sinister misunderstanding”: “Men like him continue to appeal to the culture of effort and the dignity of work, while companies like Amazon base their model on the growing automation of production processes and on work routines that almost entirely exclude the human factor.”

Jeff Bezos, founder of Amazon and Blue Origin. JIM WATSON (AFP)

In an interview with CNBC, Jeff Bezos stated that the only business decisions essential to leave in human hands are “strategic ones”. The others, the “everyday” decisions, however important, should preferably be made by artificial intelligence algorithms, because they act “taking into account all relevant information and without emotional interference”. For the CEO of Amazon, “artificial intelligence optimizes processes and, in the medium and long term, it will create many more jobs than it destroys”. Specific cases, more or less regrettable from a human point of view, such as that of Stephen Normandin, would be mere side effects of a revolution that shows no sign of stopping.

For Fabián Nevado, a specialist in labor law and adviser to the Union of Journalists of Catalonia, “it is morally unacceptable for an algorithm to fire you based on general criteria that take no account of your personal circumstances and, above all, for no human being to bother to communicate the dismissal in person, with a minimum of respect and empathy”.

Nevado does not think that this type of case can only occur in poorly regulated labor markets, such as those of Russia and the United States. “On the contrary, in Spain, contrary to what people believe, dismissal is permitted. What happens is that the reasons for the dismissal must be argued and, if there is no agreement, a judge ends up deciding whether they are convincing or not.” And it is perfectly legal for companies to use artificial intelligence to monitor the performance of their employees, as long as they do so in accordance with the Organic Law on the Protection of Personal Data. “In any case, whoever does the firing is always an employer, a human being or a group of them,” says Nevado. “But the machine can be the tool used to justify a dismissal. In fact, this is already happening in many cases.”

Ultimately, it is a judge who decides, just as in professional football it is the referee who has the final word on most recommendations from the VAR, the controversial tool that was supposed to revolutionize sporting justice forever. What is clearly unacceptable, according to the expert, “is that neither the heads of department nor the human resources departments take responsibility for the dismissal, that they hide behind algorithms and other technological innovations to evade responsibility and further dehumanize working relationships”. If the trend continues, Nevado predicts “a pretty bleak future” for human resources departments.

So bleak that they will disappear in the medium term if the idea takes hold that talent management (hiring, firing, salary increases, disciplinary proceedings, incentives…) can be left entirely in the hands of machines. “And not just this department,” he adds. “Many heads of department will also be at risk, especially those whose salary depends on their ability to supervise the workers under their responsibility.” In a world of innovative entrepreneurs, state-of-the-art management technology and an interchangeable workforce, foremen are surplus to requirements.

Frank Pasquale, a professor at Brooklyn Law School in New York, addresses these issues in his book New Laws of Robotics. For this intellectual, who defines himself as “a humanist with technological competence”, artificial intelligence must never supplant human experience and reasoning in “areas that have clear ethical implications”. In other words, a machine can never decide who to shoot at or who to fire, because it will do so based exclusively on efficiency criteria. Decisions of this kind cannot be automated. They cannot be dissociated from a process of “responsible reflection”, an exclusively human tool. For Pasquale, the “digital boss” will always be a tyrant, because it dehumanizes people, “turning them into mere tools and denying them their status as rational and free creatures”.

The General Union of Workers of Spain points out in its working document Algorithmic Relations in Labor Relations that the barrier against algorithms that fire people must be clear regulation requiring, first of all, disclosure of the criteria used by the artificial intelligence. “The precautionary principle must be applied,” says José Varela, the union’s head of digitization, because algorithms, like any product of human intelligence, make mistakes. Moreover, they are not concerned with whether their decisions will have a negative impact on “the security of people or their fundamental rights”. In other words, if an algorithm is going to fire us, we should at least demand that it first demonstrate that it knows what it’s doing.
