Post ChatGPT – What Does The Future Of AI Hold For Human Application
On the one hand, generative AI such as ChatGPT and other types of artificial intelligence promise enormous productivity gains and corporate profits; on the other, they threaten confusion, mistrust, and a potential loss of power and control by the masses.
One of the fundamental technologies is generative AI. It refers to AI that can generate original audio, code, images, text, videos, speech and more. AI is now more real for the average person, and its effects on employment and daily life are more obvious. In a field of creativity previously monopolized by humans, generative AI is carving out a place for itself.
The technology uses a combination of mass inputs (ingested data) and experiences (individual user interactions) to build a knowledge base. Then, it continuously “learns” new information to produce original and novel content. Some call the ChatGPT-like tools the new frontier for a gold rush.
According to research, “AI could take the jobs of as many as one billion people globally and make 375 million jobs obsolete over the next decade.” However, the technology could also generate more than $15.7 trillion in economic value by 2030. Venture capital investment in early-stage generative AI companies quadrupled between 2017 and 2022, and expectations for future investment growth are significantly higher.
The influence and reach of generative AI may surpass that of the internet, mobile devices, and cloud computing. Its potential is comparable to the invention of hunting tools, the wheel and the alphabet. More so than the Industrial Revolution or the Renaissance, it can significantly impact our society and behaviour.
But I wonder if we’re prepared to take on the challenge.
People’s power and social standing are threatened by machines that can work across most industries and functions, produce original content, and do so faster and more intelligently than humans. Power belongs to the entity with the advantage in speed and capacity: one that can instantly access all information created by humans and become smarter faster than any individual.
The existentialist question becomes: why am I here, and what is my purpose, if I am not going to work from 9 to 5 to earn a living?
Would I need to serve the machine in the future, and how would I make a living?
Elon Musk predicts that AI-driven technologies could power the workforce in the future, saying, “There is a pretty good chance we end up with a universal basic income, or something like that, due to automation.” Does that mean each company will only have one customer in a few decades — the government?
Doesn’t that raise questions about the foundations of capitalism or, at the very least, call for a totally different social safety net? We are moving into an “abnormal” era, which calls for new ways of thinking for individuals and society.
Sam Altman, CEO of OpenAI, the company behind ChatGPT, reportedly said the “good case [for A.I.] is just so unbelievably good that you sound like a crazy person talking about it.” He added: “I think the worst case is lights out for all of us.” Some fears are indeed justified. Others are rooted in our inability to see a future that is not necessarily an extension of the past.
AI systems pick up on human biases embedded in past human actions and decisions (the data they are trained on). So, if machines can act and learn faster, they will potentially magnify our systematic biases: the biases that drive fake news and division, that shape how we judge and treat each other, and that could fuel famine, war, racism, and sexism.
Therefore, as machines act on our behalf, we may face a significantly more polarising future unless we address our biases. But should we be afraid of our own prejudices, or of a machine that only reproduces them?

Concerned about cheating, schools are pushing back on students’ use of ChatGPT. Plagiarism worries the New York City Department of Education as well as authorities in Seattle, Baltimore, and Los Angeles.
Is backing off the use of generative AI legitimate, or is it time for schools to get students to learn to apply their talents and use technology differently? Some of my fellow professors at the University of Southern California conducted very informal research and concluded that ChatGPT could answer exam questions for undergrads to an A-level.
The challenge: if machines can answer the basic questions, should we not rethink what we ask students to learn, and how? Should we still train horses for transportation when we have cars to drive us around?
We need regulations that shield us during this large-scale worldwide change: regulations that encourage cooperation rather than censorship of the machines’ capabilities and promise. We also need corporations to be alert to biases and aware of possible rogue behaviour by machines.
But most importantly, we need a global mind shift that gives us all the courage to leave the past behind and embrace a future of flux. It is time for massive change and growth. A time to think differently about our future and our relationship with machines. Rather than viewing that relationship through the lens of master and slave, we should look at it as a partnership.
Indeed, guard rails are needed, but machines will only replicate our biases, and students will only cheat if we measure them by what they have memorized or by predefined procedures. We should have the courage to let technology take over mundane processes and let machines coordinate routine future actions.
Then, we will have the opportunity to conceive our next future. A future that relies on our collective mental evolution. A future that offers us the luxury to concentrate on innovation and creation. A future that we have not even imagined or are prepared for.
The bottom line: we are entering an era of the “abnormal,” an era that offers a fundamental change in our evolutionary path, from physical to mental. There will be unprecedented obstacles to overcome: from how we earn a living and receive healthcare to our expectations of the government; from how we buy, sell, travel and learn to how we spend our days, define intellectual property and seek legal protections.

Sid Mohasseb is an adjunct professor in Dynamic Data-Driven Strategy at the University of Southern California and a former national strategic innovation leader for strategy at KPMG. He is the author of “You are not Them” and “The Caterpillar’s Edge” (2017).