What's next for AI? My predictions for the next 10 years


Explainable AI, optical AI, analog AI, and meta-learning. I predict that the drive to optimize massive generative models will culminate in generative AI becoming a commodity, and that these four areas will be the next frontiers of innovation in machine learning.

Introduction

Machine learning and Artificial Intelligence (AI) – often used interchangeably1 – have undeniably changed our world. From revolutionizing healthcare diagnostics to powering supervised-driving cars, AI has woven itself into the fabric of our lives. But even as AI applications proliferate, an even more advanced future awaits us, driven by advancements in four areas: Optical AI, Analog Computing, Meta-Learning, and Explainable AI. Each of these fields holds great potential to push the boundaries of what’s possible and to redefine the impact – and even the meaning – of artificial intelligence.

My fascination with cutting-edge technologies began with Optical AI. A friend in Boston who works at a photo-electronics (photonics) company explained to me how they’re creating hardware and software that runs entirely on light instead of electricity. Imagine a transparent sheet of glass-like material integrated into everyday objects that instantly recognizes and classifies anything placed in front of it – a child’s toy, a rare bird in flight, or even a medical anomaly. This is the captivating vision of Optical AI, where light itself performs complex computations, eliminating the need for power-hungry, complex electronics. I was entranced by this vision of a post-electronic world, and sought to understand other, similar technologies.

While the concept of light-based computation is captivating, current limitations include the difficulty of implementing complex non-linear operations, which form the backbone of deep learning models. Advances in materials science and novel optical architectures are needed to overcome these limitations, and integrating these systems with existing electronic infrastructure poses considerable challenges of its own before they can feel seamless to end users.
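
To see why non-linear operations matter so much, here is a minimal sketch (in Python with NumPy, purely my own illustration – it has nothing to do with any actual optical system): without a nonlinearity between them, any stack of linear layers collapses into a single linear layer, so a purely linear device, optical or otherwise, can never match a deep network.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two "layers" applied with no nonlinearity between them...
W1 = rng.standard_normal((8, 8))
W2 = rng.standard_normal((8, 8))
x = rng.standard_normal(8)

deep = W2 @ (W1 @ x)        # pass through both layers in sequence
collapsed = (W2 @ W1) @ x   # ...equal one pre-multiplied layer
print(np.allclose(deep, collapsed))  # True: depth without nonlinearity adds nothing
```

This is exactly why optical hardware that can only do linear transforms (lenses, interferometers) needs some non-linear element before it can host a real deep network.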

Beyond speed and efficiency, Analog Computing offers a fundamentally different approach to processing information. By harnessing the continuous nature of physical phenomena like voltages and currents, or even physical movement2, analog systems promise unparalleled speed and energy efficiency. While noise and scalability pose challenges, the future lies in specializing these systems for specific tasks, maximizing their advantages. Imagine dedicated devices for real-time image processing, financial fraud detection, or scientific simulations – all operating with minimal power consumption and potentially surpassing the speed of conventional computers. Here is an example of what the future might look like, with conventional computers mostly out of the picture in the learning and inference process of machine learning.

Despite the allure of speed and energy efficiency, noise and precision remain significant hurdles. Implementing robust error correction mechanisms and developing techniques for reliable control of analog signals are critical for ensuring the accuracy and reproducibility of computations. Scaling these systems to tackle large-scale problems while maintaining manageable complexity is another significant challenge.
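
As a toy illustration of the noise problem (a simulation I wrote for this post, not a model of any real analog hardware), here is a sketch of a matrix-vector multiply – the workhorse operation of neural networks – corrupted by additive device noise, showing how the error grows with the noise level:

```python
import numpy as np

rng = np.random.default_rng(42)

def analog_matvec(W, x, noise_std=0.05):
    """Toy model of an analog matrix-vector multiply: the exact product is
    corrupted by additive Gaussian noise, mimicking device variation."""
    exact = W @ x
    scale = noise_std * np.abs(exact).mean() + 1e-12
    return exact + rng.normal(0.0, scale, exact.shape)

W = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
exact = W @ x

for noise in (0.01, 0.05, 0.2):
    noisy = analog_matvec(W, x, noise_std=noise)
    rel_err = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
    print(f"noise level {noise:.2f} -> relative error {rel_err:.3f}")
```

Error correction and calibration are, in effect, attempts to push that noise term down far enough that the cheap, fast analog multiply is still worth having.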

But what if AI could not only process information but also learn how to learn itself? This is the essence of Meta-Learning. This captivating field aims to equip AI systems with the ability to adapt and improve with minimal data, similar to how humans learn new skills by building upon existing knowledge. This opens doors to tackling entirely new challenges and potentially accelerating AI’s progress in fields where data is scarce or constantly evolving. However, it’s vital to remember that meta-learning doesn’t seek to replicate human learning in its entirety; rather, it seeks to extract core principles that can enhance AI’s own learning capabilities. Here’s an early paper on what these meta-learning systems might look like.

It’s also important to remember that while the ability to “learn to learn” holds immense promise, current algorithms still rely on significant human guidance in defining tasks and selecting appropriate learning strategies. Developing more autonomous and generalizable meta-learning approaches capable of adapting to diverse scenarios without extensive human intervention remains a big hurdle. Moreover, potential biases and errors in the training data used for meta-learning algorithms may be amplified, leading to unintended, potentially disastrous consequences. The old axiom of “garbage in, garbage out” still applies, and with these black-box systems it’s difficult to validate every output within existing frameworks.
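
For the curious, here is a minimal first-order MAML-style sketch of the “learning to learn” idea (my own toy example in Python/NumPy; the task family, learning rates, and loop sizes are illustrative choices, not from any particular paper): a shared initialization is trained so that a single gradient step adapts it well to any new task drawn from the family.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def sample_task():
    """A toy task family: 1-D linear regression with slope drawn from [1, 3]."""
    slope = rng.uniform(1.0, 3.0)
    X = rng.standard_normal((20, 1))
    return X, slope * X[:, 0]

# First-order MAML-style loop: adapt to each task with one inner gradient
# step, then nudge the shared initialization using the post-adaptation gradient.
w = np.zeros(1)
inner_lr, outer_lr, tasks_per_batch = 0.1, 0.01, 5
for _ in range(500):
    meta_grad = np.zeros_like(w)
    for _ in range(tasks_per_batch):
        X, y = sample_task()
        w_task = w - inner_lr * mse_grad(w, X, y)  # inner-loop adaptation
        meta_grad += mse_grad(w_task, X, y)        # first-order outer gradient
    w -= outer_lr * meta_grad / tasks_per_batch

print("meta-learned initialization:", w)  # settles near the middle of the slope range
```

The meta-learner never memorizes any single task; it learns a starting point from which every task in the family is one cheap adaptation step away – the essence of learning with minimal data.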

Finally, there’s Explainable AI, which addresses a critical concern: the opacity of current deep learning models. Often working as “black boxes” that offer no clear explanation of why a given choice was made, these models deliver impressive results but leave us wondering “how” and “why.” Explainable AI aims to demystify these processes, making such models transparent, interpretable, and ultimately more trustworthy. This would engender trust in AI-driven decisions and unlock further innovation by allowing us to understand the internal workings of these systems.

That said, while providing explanations for AI decisions is important, understanding what constitutes a “good” explanation and how to translate complex machine learning processes into human-understandable terms remains a challenge. Ensuring that explanations are not misleading or easily manipulated is a whole different ball game, one that demands careful design and evaluation of explainability methods.
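
As one concrete flavor of explainability, here is a sketch of permutation importance, a standard model-agnostic technique (the “black box” model and every name below are my own toy setup): shuffle one feature at a time and measure how much the model’s error degrades, revealing which inputs the model actually relies on.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in "black box": a fixed model where only the first two of four
# features actually matter (weights 3, -2, 0, 0 are hidden from the explainer).
true_w = np.array([3.0, -2.0, 0.0, 0.0])
def black_box(X):
    return X @ true_w

X = rng.standard_normal((500, 4))
y = black_box(X)

def permutation_importance(model, X, y):
    """Score each feature by how much shuffling it degrades the model's
    mean squared error -- a simple, model-agnostic explanation technique."""
    base_err = np.mean((model(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to y
        scores.append(np.mean((model(Xp) - y) ** 2) - base_err)
    return np.array(scores)

scores = permutation_importance(black_box, X, y)
print("importance per feature:", scores.round(2))  # first two dominate, last two ~0
```

The appeal of this family of methods is that they need no access to the model’s internals – which is precisely what makes them applicable to the opaque models the paragraph above worries about.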

As we stand at the edge of these advancements, it’s difficult to predict with certainty precisely where we will end up. These are just a few visions of the potential of these converging fields. Navigating this future responsibly requires careful consideration of ethical implications: we must ensure equitable access to these advancements, mitigate potential biases, and safeguard against misuse. Transparency, collaboration, and responsible development are a must if we want to harness these technologies for a fair and democratic future.

Image by freepik.

  1. There is a long and contentious argument among academics about the meanings of these two terms and the differences between them. We’ll skip all of that for the purposes of this essay. 

  2. Mechanical computers have a surprisingly long history, especially in warfare, where they were mostly used for guidance systems.

Shirish Pokharel, Innovation Engineer, Mentor
