This is an extract from “Getting smart about the future of AI,” a report produced by MIT Technology Review Insights in association with Intel.
While the concept of AI seems new to many, it has been around for decades. The phrase “artificial intelligence” was coined in 1956, said Genevieve Bell, distinguished professor of engineering and computer science at the Australian National University, during a presentation at the school in March 2018. The branch of computer science it came to describe was wildly popular for several years, with researchers optimistic that a fully intelligent machine was imminent—“and then not so popular because it turned out it was really hard,” Bell said. “It required all this computing that people didn’t have.”
“What does it mean to have systems that will be autonomous? How will we feel about that? What will it mean?”
Genevieve Bell, Distinguished Professor, Australian National University
Today, the supporting technologies have matured to the point that AI is practical and effective. As it becomes widespread, it raises societal and ethical dilemmas, and questions about its impact are becoming urgent. “What is all that technology going to mean for human beings, our systems, our institutions, our organizations, and our countries?” asked Bell.
AI observers agree that society must confront overarching philosophical and practical questions as it plans for the future of AI. Here’s how Bell breaks them down in her research:
AUTONOMY: Will AI systems be autonomous, and should they be?
AGENCY: Who will set limits and controls and ensure they are consistently applied?
ASSURANCE: How will humans accommodate safety, risk, liability, trust, privacy, and ethics?
MEASUREMENT: How will we measure the effectiveness of AI systems?
HUMANITY: How will humans interact with AI-driven systems?