Over recent months, the rapid development and deployment of AI applications such as ChatGPT (OpenAI), Bard (Google), Copilot (Microsoft, OpenAI), and others have begun to make themselves felt in the labour market. Indeed, employees in technology companies have started to voice concerns about the impact of AI on employment. Unsurprisingly, the most vulnerable appear to be those roles that are easily automated. According to the World Economic Forum’s report Jobs of Tomorrow: Large Language Models and Jobs, the sectors identified as most at risk today are information technology, finance, and sales.
But can an LLM truly produce the same work as a human being? To what extent can an LLM replace a person? This question brings us directly to the title of this entry: the Turing Test.
The Turing Test
The Turing Test, proposed by Alan Turing in his 1950 article Computing Machinery and Intelligence, evaluates whether a machine can exhibit behaviour indistinguishable from that of a human being during an interaction. Turing presented this as an alternative to the original, far more challenging question: Can machines think? He discarded that formulation because it required defining the highly complex notion of “thinking”—a question which remains unanswered even today.
To pass the test, an artificial intelligence must engage in a written interaction with a person such that an external evaluator, reading the exchange, cannot tell which of the two interlocutors is the machine and which is the human being. If the evaluator cannot distinguish between them, the algorithm is deemed to have passed the test and, therefore, to exhibit intelligent behaviour. The word exhibit is crucial: the test does not guarantee genuine intelligence, only the ability to simulate human-like responses convincingly.
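The trial just described can be sketched as a small simulation. The function names and the dictionary-based transcript format below are my own illustrative choices, not part of Turing's paper:

```python
import random

def run_trial(judge, human_reply, machine_reply, questions):
    """One round of the imitation game: the judge reads answers from two
    unlabelled interlocutors and guesses which one is the machine."""
    # Randomly assign the machine to slot "A" or "B" so the judge
    # cannot rely on position.
    machine_slot = random.choice(["A", "B"])
    transcript = {}
    for q in questions:
        if machine_slot == "A":
            transcript[q] = {"A": machine_reply(q), "B": human_reply(q)}
        else:
            transcript[q] = {"A": human_reply(q), "B": machine_reply(q)}
    guess = judge(transcript)        # the judge returns "A" or "B"
    return guess == machine_slot     # True if the machine was identified
```

Over many such trials, the machine is said to pass if the judge's identification rate stays close to the 50% expected from pure guessing.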
Turing explored in detail various philosophical, mathematical, and practical objections to the possibility that machines and humans might be indistinguishable, including arguments about consciousness, creativity, predictability, and computational limitations. Let us recall that his paper was published in 1950, only three years after the invention of the transistor—a fundamental component for the later development of microelectronics and, consequently, computing, which was then still in its infancy. Even in that context, Turing’s knowledge and brilliance enabled him to argue that many of these obstacles might eventually be overcome through technological progress and a broader understanding of human cognition. He concluded that none of the objections were definitive, and that machines might one day overcome such limitations and become indistinguishable from a human being in conversation.
Turing Test Passed
As recently as May 2024, a scientific team at the University of California, San Diego, demonstrated that GPT-4, the model behind ChatGPT, is statistically indistinguishable from a human being in short text conversations. In other words, GPT-4 has passed the Turing Test. Although passing the test is not proof of true intelligence, it is a milestone in its own right and a clear sign of the algorithm’s ability to interpret prompts and generate seemingly coherent responses.
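What "statistically indistinguishable" means can be made concrete with an exact binomial test: if judges identify the machine no more often than chance would predict, their accuracy is consistent with guessing. The 54-out-of-100 figure below is a hypothetical illustration, not the study's actual data:

```python
import math

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: probability, under chance rate p,
    of an outcome at least as extreme as k successes in n trials."""
    pk = math.comb(n, k) * p**k * (1 - p)**(n - k)
    # Sum the probabilities of all outcomes no more likely than the
    # observed one (the standard two-sided construction).
    total = sum(
        math.comb(n, i) * p**i * (1 - p)**(n - i)
        for i in range(n + 1)
        if math.comb(n, i) * p**i * (1 - p)**(n - i) <= pk + 1e-12
    )
    return min(total, 1.0)

# Hypothetical example: judges identify the machine in 54 of 100 trials.
p_value = binom_two_sided_p(54, 100)
# A large p-value means the accuracy cannot be distinguished from the
# 50% expected by guessing, i.e. judges cannot reliably tell.
print(round(p_value, 3))
```

In other words, "passing" is not a yes/no property of a single conversation but a statistical statement about judges' accuracy over many trials.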
At this point, I asked my robotic colleague R. D. Olivaw to generate information about the impact of artificial intelligence on the labour market once the Turing Test has been passed. You will find this artificially generated content, along with my commentary, below.
Applications of AI in the Labour Market
Passing the Turing Test can be seen as the point at which AI has advanced so far that certain tasks it performs are indistinguishable from those carried out by humans.
Key examples include:
- Automation of cognitive tasks: Algorithms capable of responding to queries, drafting documents, or processing financial data have already proven competitive. Tools such as ChatGPT (OpenAI, 2023) and Bard (Google) can generate coherent and relevant text across a variety of contexts. In fact, part of this very entry has been generated by AI. That said, as we shall see later, such content should not be treated as a final product.
- Partial or complete substitution: Areas such as customer service and basic copywriting are being rapidly transformed by these advances (Autor et al., 2020). Personally, I believe that replacing customer service wholesale would harm consumers. However, I can see the first tier of customer support—those focused on simple problems—being replaced by AI, while leaving more complex issues to human operators supported by AI. In that way, AI remains a tool rather than an autonomous worker.
Impact on the Labour Fabric
Different authors speculate on how AI integration will profoundly reshape the nature of work.
A. Redefinition of Roles
According to Frey and Osborne (2013), 47% of jobs in the United States are at risk of automation due to advances in AI and robotics. This, however, is a clear example of an AI error on the part of my colleague R. D.: in 2013 large language models (LLMs) as we know them today did not exist, and so we cannot fully understand the context in which the authors ventured such a bold prediction. This is precisely the kind of lack of contextualisation that AI frequently suffers from. It is why, despite well-structured and seemingly sensible information, one must always verify AI-generated content for accuracy.
Routine and repetitive tasks are the first to be automated, while human roles evolve towards more strategic and creative activities. I also believe that new roles will emerge, just as the arrival of the internet created entirely new occupations. In many ways, that shift marked a paradigm change: as certain jobs disappeared, others emerged from nowhere and are now integral to society.
B. Productivity and Specialisation
McKinsey Global Institute (2017) estimated that by 2030 nearly 14% of the global workforce would need to change occupational category owing to automation. Personally, I think it is too early to make such precise forecasts, and indeed, we have already seen wide variability in estimates. Of course, AI will affect the labour market, but at present it is difficult to pin down a concrete figure.
AI complements humans, enhancing productivity and freeing up time for more complex tasks. I broadly agree, though with caveats. This very entry has been written more quickly thanks to R. D.’s contributions. Yet, as we have seen, the information is not wholly correct. For now, we should view AI as a very particular kind of tool—extremely useful, but also capable of error if used indiscriminately and without review.
C. Ethical and Social Challenges
- Technological unemployment: Brynjolfsson and McAfee (2014) argue that the “second machine age” may generate significant inequalities if mitigation measures are not taken. That is true, but it is hardly a new phenomenon. Lack of access to knowledge has always fuelled inequality.
- Inequality in adoption: According to the World Economic Forum (2020), countries and sectors with less access to technology risk falling behind, widening global divides. Again, this is not entirely new.
However, one point that R. D. omitted—yet which I consider strategically crucial—is technological sovereignty. We take for granted our access to AI: after all, answers are only a click away. But where does the algorithm truly reside? Who controls the datasets used for training? Who controls access to the servers? Imagine a geopolitical tension between North America and Europe: would European citizens lose access to LLMs? If information and tools remain in the hands of a few corporations, could this mark the beginning of an era where major companies supplant nations?
Towards Human–Machine Collaboration
Many authors argue that collaboration between humans and AI is essential to maximise the potential of these technologies. The literature suggests:
- Complementarity: Davenport and Kirby (2016) emphasise that humans should focus on skills machines cannot easily replicate, such as creativity and empathy. I see their point, but again, they assume AI-generated information is inherently accurate and useful. We are not yet at that stage. For now, AI only appears accurate and useful. It is worth remembering that AI possesses neither consciousness nor intelligence: it processes information and generates responses based on patterns in that information.
- Lifelong learning: Continuous education and development will be key to preparing the workforce to collaborate with automated systems (OECD, 2019). This is neither new nor alarming. Looking back at the technological advances introduced during the 20th century, adaptability and ongoing learning have always been essential workforce skills.
Beyond the Turing Test
In the future, central questions will revolve around:
- The ethics of deploying AI in sensitive roles requiring contextual judgement (IEEE Global Initiative, 2020).
- Responsibility: who bears accountability when AI-generated outcomes have negative consequences?
- Ensuring fairness in distributing the benefits and risks of automation (Ford, 2015).
Yet let us be realistic: has any technology in history ever been equitably distributed in terms of benefits and risks? Has any company or nation ensured equal access to technological progress across all social classes? Sadly, money rules here.
Conclusions
Artificial intelligence has achieved a crucial milestone: passing the Turing Test. This breakthrough, far from proving true intelligence, highlights its ability to carry out tasks with striking effectiveness. However, the unbridled enthusiasm of some quarters—often rooted in simple ignorance—has led many to exaggerate its scope, confusing usefulness with perfection. AI does not replace critical thought, nor does it guarantee flawless results. That is why it remains essential to carefully review every output, particularly when incorporating this technology into both professional and everyday life.
Let us remember: the tool is valuable, but human judgement is irreplaceable.