Ask a Caltech Expert: Yaser Abu-Mostafa on Artificial Intelligence
ChatGPT has rocked the general public's awareness, perception, and expectations of artificial intelligence (AI). In this Q&A, adapted from his Watson Lecture delivered on May 24, 2023, computer scientist Yaser Abu-Mostafa explains the history of AI and explores its risks and benefits.
Amid warnings that "AI will kill us all," or boasts that "AI will solve all our problems," a closer look at the science behind the technology can help us identify what is realistic and what is speculative, and help guide planning, legislation, and investment.
Highlights from the lecture are below. The questions and answers have been edited for clarity and length.
How did AI grow into the technology we know today?
The artificial intelligence (AI) we see today is the product of the field's journey from simple, brute-force methodologies to complex, learning-based models that closely mimic the human brain's functionality. Early AI was effective for specific tasks like playing chess or Jeopardy!, but it was limited by the necessity of pre-programming every possible scenario. These systems, though groundbreaking, highlighted AI's limitations in flexibility and adaptability.
The transformative shift occurred in the 1980s with the move from brute-force to learning-based approaches. This pivot was inspired by a deeper understanding of the learning process in the human brain. This era ushered in the development of neural networks: systems capable of learning from unstructured data without explicit programming for every scenario.
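To make that contrast concrete, here is a minimal sketch (an illustration of the idea, not anything from the lecture) of a single artificial neuron learning the logical AND function from examples rather than being programmed with an explicit rule for each case; it assumes only NumPy:

```python
import numpy as np

# Four example inputs and the outputs we want the neuron to learn (AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights: adjusted by learning, not hand-programmed
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # repeated passes over the examples
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = target - pred
        w += lr * err * xi               # nudge the weights toward the answer
        b += lr * err

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

The point is not the arithmetic but the workflow: the behavior comes from the data, not from a programmer enumerating scenarios.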
The historical development of AI reflects a continual effort to mirror the essence of human intelligence and learning. This evolution underscores the field's original goal: to create machines that can learn, adapt, and potentially think with a level of autonomy that was once the realm of science fiction.
What is the difference between discriminative and generative models in AI, and how is each type used?
The distinction lies in their approach to understanding and generating data. Discriminative models aim to categorize or distinguish between different types of data inputs. A common application of discriminative models is in facial recognition systems, where the model identifies the person a particular face belongs to by learning from a dataset of labeled faces. This capability is applied in security systems, personalized user experiences, and verification processes.
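As a concrete, entirely hypothetical illustration, the sketch below trains a minimal discriminative classifier; it assumes scikit-learn, and the made-up 2-D points stand in for real face embeddings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical "embeddings" for two people: person A clusters near (0, 0),
# person B clusters near (3, 3). Real systems would use learned features.
person_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
person_b = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
X = np.vstack([person_a, person_b])
y = np.array([0] * 50 + [1] * 50)  # 0 = person A, 1 = person B

# A discriminative model learns the boundary between the classes,
# i.e. p(label | input), not how the faces themselves are distributed.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.8, 3.1]]))        # -> [1], i.e. person B
print(clf.predict_proba([[2.8, 3.1]]))  # class probabilities
```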
On the other hand, generative models are designed to generate new data that resembles the training data. These models learn the underlying distribution of a dataset and can produce novel data points with similar characteristics. A notable application of generative models is in content creation, where they can generate realistic images, text, or even data for training other AI models. Generative models can contribute to fields such as pharmaceuticals, where they can help in discovering new molecular structures.
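In miniature, the generative counterpart looks like the sketch below (again a hypothetical illustration, assuming only NumPy): it estimates the distribution of the training data, then draws brand-new samples from that estimate, which is what far richer generative models do at vastly greater scale:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training data: 1-D measurements from some unknown process.
training_data = rng.normal(loc=5.0, scale=2.0, size=1000)

# "Training": estimate the parameters of the data's distribution.
mu, sigma = training_data.mean(), training_data.std()

# "Generation": draw novel samples that resemble the training set
# without copying any individual training point.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)
print(new_samples)
```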
Do you worry about AI systems going rogue?
The perceived threat of rogue AI systems is a topic of considerable debate, fueled more by speculative fiction and theoretical scenarios than by the current capabilities and design of AI technologies. The concern revolves around the potential for AI systems to act autonomously in ways not intended or predicted by their creators, potentially causing harm to individuals, societies, or humanity at large. However, understanding the nature of this threat requires a nuanced consideration of what AI currently is and what it might become.
AI, as it exists today, operates within the confines of specific tasks it is designed for, lacking consciousness, desires, or intentions. AI has no intentions—no good intentions, no bad intentions. It learns what you teach it, period.
AI systems, including the most advanced neural networks, are tools created, controlled, and maintained by humans. The notion of AI going "rogue" and acting against human interests overlooks the practical and logistical constraints involved in developing and training AI systems. These activities require substantial human oversight, resources, and infrastructure, from gathering and preprocessing data to designing and adjusting algorithms. AI systems do not have the capability to access, manipulate, or control these resources independently.
In my opinion, the potential misuse of AI by humans poses a more immediate and practical concern. The development and deployment of AI in ways that are unethical, unregulated, or intended to deceive or harm, such as in autonomous weaponry, surveillance, or spreading misinformation, represent real challenges.
These issues underscore the importance of ethical AI development, robust regulatory frameworks, and international cooperation to ensure AI technologies are used for the benefit of humanity.
Why is regulating the deployment and development of AI challenging? What suggestions do you have for effective regulation to prevent misuse?
One significant hurdle is the pace at which AI technologies progress, outpacing regulatory frameworks and the understanding of policymakers.
The diverse applications of AI, from health care to autonomous vehicles, each bring their own set of ethical, safety, and privacy concerns, complicating the creation of a one-size-fits-all regulatory approach.
Additionally, the global nature of AI development, with contributions from academia, industry, and open-source communities worldwide, necessitates international cooperation in regulatory efforts, further complicating the process.
An effective regulatory framework for AI must navigate the delicate balance between preventing misuse and supporting innovation. It should address the ethical and societal implications of AI, such as bias, accountability, and the impact on employment, while also fostering an environment that encourages technological advancement and economic growth.
I have one suggestion for legislation that may at least put the brakes on the explosion of AI-related crimes in the coming years, until we figure out what tailored legislation is possible: make the use of AI in a crime an aggravating circumstance. Carrying a gun in and of itself may not be a crime. However, if you commit a robbery, it makes a lot of difference whether you are carrying a gun or not. It's an aggravating circumstance that makes the penalty go up significantly, and it stands to reason, because now there is a greater threat to life. By classifying the utilization of AI in criminal activities as an aggravating factor, the legal system can impose harsher penalties on those who exploit AI for malicious purposes.
Why is it crucial for the global community to actively pursue AI research and innovation?
The future of AI should not be dictated by a handful of entities but developed through a global collaborative effort. Just as scientific endeavors like the LIGO project brought minds together to achieve what was once thought impossible [detecting gravitational waves], AI research demands a similar collective effort. We stand on the brink of discoveries that could redefine our understanding of intelligence, biology, and more. It's essential that we pursue these horizons together, ensuring the benefits of AI are shared widely and ethically.
Pausing or halting development efforts could inadvertently advantage those with malicious intent. If responsible researchers and developers were to cease their work in AI, it does not equate to a universal halt in AI advancement. If you put a moratorium on the development of AI, the good guys will abide by it and the bad guys will not. So, all we are achieving is giving the bad guys a "head start" to further their own agendas, potentially leading to the development and deployment of AI systems that are unethical, biased, or designed to harm. The development of AI technologies by those committed to ethical standards, transparency, and the public good acts as a counterbalance to potential misuse.
What potential does AI hold for the future, especially in terms of enhancing human capabilities rather than replacing them?
AI's role in automating routine and repetitive tasks frees humans to focus on more creative and strategic activities, thus elevating the nature of work and enabling new avenues for innovation. By taking over mundane tasks, AI allows individuals to engage more deeply with the aspects of their work that require human insight, empathy, and creativity.
This shift not only has the potential to increase job satisfaction but also to drive forward industries and sectors with fresh ideas and approaches. The promise of AI lies not in replacing human capabilities but in significantly augmenting them, opening up a future where humans and machines collaborate to address some of the most pressing challenges facing the world today.
You can submit your own questions to the Caltech Science Exchange.