The Next Stage of AI is Here: Now What?

By Esha Rana
In March 2023, I attended a conference on artificial intelligence (AI) for digital creatives at St. Paul's Bloor Street in Toronto with my colleagues. It was a rainy, windy day—typical March weather for the city, I was told—and after fighting the elements, I arrived at the venue. I quickly grabbed a corner seat in the middle of the room and whipped out my notebook and pen, ready to take notes and soak in what the future of AI—and, consequently, the world—would be.
The opportunity came to us courtesy of Humber Press and the Office of Research & Innovation (ORI), a department that is already an enthusiastic supporter of innovation in technology. With a growing number of start-ups, accelerators and research institutes continually pushing the boundaries of AI capabilities in Canada, we wanted to see what some of the country's technology forerunners had imagined and created with the latest iteration of AI.
Exploring AI with notable concerns
The conference opened with a demonstration and presentation about a game that would teach children the basics of machine learning. Subsequent speakers demonstrated other possibilities with AI, namely creative brainstorming with ChatGPT, image creation and generation, and the creation and navigation of virtual realities. Each presentation opened another door to what the collective future of humanity would consist of.
Although not demonstrated, there were other benefits discussed in the panel discussion at the end. The latest iteration of AI promised speed, efficiency and problem-solving. You could code faster, write faster and ask ChatGPT to provide answers for personal and global issues (even climate change). Businesses could save money by automating mundane tasks while employees would be free to do more value-added work.
The conference, overall, was a good glimpse into where AI is headed, but there was a lack of discussion about how these technologies would affect people in the writing and graphics industries, specifically:
- What would possible economic and compensation models look like?
- How would copyright issues be handled?
- What would be the long-term effects of such technology on creativity and cognition?
The conference rested largely on the unspoken assumption that the next iteration of AI could only lead to good. Optimism is a fine lens to look through, but adopting a researcher's mindset and weighing the pros against the cons would be a better way for us all to proceed.
This peek into the future of AI and society prompted a lot of internet trawling, book flipping and video watching to piece together the complete picture: What is the other side of the coin? And how can we best prepare for a future we have no blueprint for?
Consequences of the AI we already have
Initially, AI was relegated to inventions we read about in the news or had occasional contact with: self-driving cars; chatbots and virtual agents; IBM's Deep Blue, the chess-playing computer that defeated then world champion Garry Kasparov. AI wasn't a big part of the everyday public consciousness.
Things changed with the advent of social media, specifically its algorithms, which make it easy to get trapped in an echo chamber of one’s likes and opinions. These algorithms were our first introduction to and contact with artificial intelligence in something as close and personal as a cell phone.
All social media platforms launched with fanfare about how easy it would be to forge and maintain connections with friends and family. Years later, that same ease—now a core tenet of several platforms—has let loose a host of problems that many people struggle with: overstimulation, social comparison, stress, continuous distraction and an inability to focus for long periods. At a societal level, misinformation, disinformation, trolling and polarization are rampant.
These harmful effects have tainted and eclipsed the benefits that came with the advent of social media. And now that Pandora's jar has been opened, social media companies remain silent about how their algorithms are built on the same principles as Las Vegas slot machines—something they failed to mention in their marketing pitches. Ambitions for engagement and profit are disguised as missions to enable community building and give people a voice. People, meanwhile, are left grappling with how to use social media in a way that minimizes its insidious effects on their well-being while still reaping its benefits; navigating this space requires continuous caution and awareness.
The other side of the next stage of AI
It is all but certain that AI will penetrate even deeper into our public and personal spheres. It has already done so in educational institutions, where students rely on ChatGPT to craft essay answers and write personal statements, sidestepping the need to apply themselves creatively or cognitively.
In this evolving stage of AI adoption, it is imperative to note that even the developers of next-stage AI systems cannot fully understand, predict or control the extent of their capabilities.
Once the systems become more sophisticated, there is no telling how they might be used and/or misused by people and at what scale.
In their video ‘The AI Dilemma,’ Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, predict that large language models could lead to exponential scams, the breakdown of banking and secure computing, automated cyberweapons, automated lobbying, automated loopholes in law, fake reality, synthetic relationships and a collapse of trust, among other things. Deepfakes—which use machine learning and artificial intelligence to generate extremely convincing fake audio and video—could easily cause socio-political chaos.
In the race for market dominance, technology companies have yet to fully address these possibilities. It is equally concerning that the laws surrounding AI are being developed far more slowly than the technology itself.
The way forward
Enthusiasts of technological progress might see any reservations about AI as impediments to innovation. But persistent caution is not about a refusal to innovate and move ahead. Rather, it is concern about the consequences of creating and deploying technologies that can change the working model of our society.
There are two fail-safe ways to channel this caution and concern productively:
Continued research
In an open letter published on March 22, 2023, the Future of Life Institute calls for a pause on giant AI experiments. “Having succeeded in creating powerful AI systems,” the letter says, “we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”
This “AI summer” is ripe for aggressive research into the various facets of AI and how it will interact with society. Thanks to social media, the first iteration of AI, we are already aware of the wrong turns technology can take. This existing body of knowledge, combined with evidence and data from pre-emptive research, can help us devise better laws, policies and strategies so that robust AI systems do not seismically destabilize the status quo.
Continued conversation
In 1983, amid Cold War fears of nuclear catastrophe, ABC News’ program ‘Viewpoint’ hosted a panel discussion of government officials, thinkers and scientists, including Carl Sagan, William F. Buckley Jr., Robert S. McNamara, Henry Kissinger, Brent Scowcroft and Elie Wiesel. The discussion was inspired by the movie The Day After but tackled topics like nuclear war, nuclear deterrence and how they related to tensions between the East and the West.
Colleges and Institutes Canada (CICan) has initiated discussions and brought together researchers and educators to discuss the new iteration of AI, the changes teachers have made to their grading criteria and how they encourage students to integrate the technology into their education. Similar conversations and public forums about the different aspects of AI are the need of the hour.
Given how entangled technology already is with our lives—and will continue to be—critical thinking and decision-making about proposed and already-developed inventions is imperative. An awareness of ethics and the social good, coupled with nuanced analysis, research and discussion, will become increasingly important as we not only deal with the ramifications of AI but also continue to innovate.
The joint pursuit of transparency, fairness and inclusivity represents the shared responsibility of technologists, policymakers and society to collaborate on the ethical and effective application of AI. Technology experts must concentrate on ethical algorithms and unbiased data sets, while legislators should prioritize accountability and human welfare in their laws. An informed and empowered society must keep both accountable. In this symbiotic ecosystem, we can work together to navigate the revolutionary potential of AI while preserving our moral integrity and our democratic institutions and systems.
The Office of Research & Innovation continually seeks to support and encourage curiosity and newness. If you have an idea or interest in researching AI and its possibilities, effects, rhetoric, etc., please get in touch with us!