Superintelligence by Aqdas Afzal


AS if things were not hard enough for universities with declining enrolments and soaring costs, a new spectre has descended over them in the shape of ChatGPT. ChatGPT is an artificial intelligence (AI) chatbot from Silicon Valley that can write essays, solve maths problems, and draft research papers. University faculty are worried about ChatGPT for an obvious reason: if students use AI to complete their assignments, how can professors ensure that actual learning is taking place in universities?

ChatGPT is only one chapter in the larger story of how AI-powered technological developments will impact society. Economists have written extensively about the labour-displacing nature of these AI-powered technologies for some time now. At least in the medium term, it will be hard to retrain workers in new skills so that they can find employment. Social unrest will invariably follow. When new technologies were introduced during the Industrial Revolution, Luddite rebellions sprang up in the textile-manufacturing areas of Nottinghamshire and Lancashire and had to be brutally put down by the British army.

Given the speed at which AI is developing, the bigger issue now is not how many jobs are going to be lost to AI-powered technologies. The more important issue, certainly from a long-term perspective, is whether AI will one day equal and then quickly surpass human intelligence, subsequently enslaving and then exterminating all humans. Is there something we can do about it?

There is no consensus on how long it will take for AI to reach Artificial General Intelligence (AGI), or the same level of intelligence as that of a human brain, with estimates ranging from as early as 2029 to the end of this century. But once AGI is reached, many scientists believe it will then be a very short leap for AI entities to attain singularity, or superintelligence, through exponential rates of recursive self-improvement.

Facing fierce competition, superintelligent AI entities may even weaponise themselves to exterminate their competitors. Many futurists, like Judea Pearl, have warned against such a dystopian future, likening AI to breeding a new species of superintelligent animals. It is understandable, then, that some voices have called for halting progress in AI beyond a certain point.

While these dystopian scenarios carry instructive value, the talk of stopping progress in AI is alarmist. For starters, we appear to be very far from even attaining AGI, and thus AI entities reaching superintelligence is not a foregone conclusion. Second, the nature of work will invariably change in the years to come, with many jobs, such as those of financial analysts, truck drivers and radiologists, becoming almost extinct. While these tasks will be performed by AI entities, new jobs will also open up, just as they did during the Industrial Revolution.

Even if we assume that superintelligent AI entities will appear in the not-too-distant future, machines do not stand a chance against humans, for three main reasons. First, non-reproducible factors like land and energy will remain scarce in the future, creating the same kind of competition we see in the world today. One might even predict that the ensuing distributive conflict would leave superintelligent AI entities no choice but to begin inter-machine warfare…

Second, unlike humans, machines do not have the ability to feel or perceive themselves as being part of communities that are, as Benedict Anderson argued, purely figments of imagination. In Imagined Communities, Anderson argued that nations are “imagined” because even the citizens of the smallest country never get to know all their fellow citizens. Yet this sense of belonging to a community or nation enables humans to exhibit altruistic behaviour and perform heroic feats, with some even laying down their lives for the greater cause.

AI entities, by contrast, will operate only on a selfish, individual utility- and productivity-maximisation calculus, because without it machines will not be able to attain superintelligence in the first place. Even after attaining superintelligence, lacking the ability to imagine themselves as part of a greater whole, machines would have no choice but to forge ahead alone.

Third, the power of imagined communities, or ‘myths’ as Yuval Noah Harari calls them in Sapiens, also enables humans to cooperate and to create alliances and coalitions in pursuit of their goals. At times, to strengthen cooperation, politicians make tactical retreats or drop long-standing demands. AI entities would be hard-pressed to ensure cooperation with each other, as tactical retreats would appear irrational to a superintelligent AI entity whose algorithm is hardwired to win every single time, at any cost.

In time, professors will find ways to counter ChatGPT through oral examinations and group work. There is already talk of apps that can detect text created with AI. Humanity has nothing to fear from AI. Rather, policymakers in Pakistan need to conceptualise and suggest ways through which the country’s overall technological frontier can be enlarged. One way to achieve this is by requiring all students to study mathematics until age 18, something Rishi Sunak, the British PM, has proposed. This makes sense, as future workers are going to require analytical skills in a world awash with data. Let’s hope some of these ideas can find their way to Pakistan.

Courtesy DAWN