Interview: Artificial intelligence will support people, not replace them
Barbara Grosz, Higgins Professor of Natural Sciences at Harvard University, has worked since the 1970s at the highest level of artificial intelligence (AI) research. Here she talks to Creating Chemistry about her passion for the field, and why it would be a terrible mistake to use AI to replace people.
Creating Chemistry: Artificial intelligence seems to be everywhere today, but what exactly is it?
Professor Barbara Grosz: They say that if you ask six researchers to define AI, you will get seven different definitions, so I’ll give you mine. Artificial intelligence is both a field of study and a set of computational methods. As a field of study, the focus is on what I would call a computational understanding of intelligent behavior. By a computational understanding, I mean determining the kinds of cognitive processes and representations that are needed to produce intelligent behavior, then determining how to realize those in a computer system. The computational methods are then the algorithms, even the mathematics, but also the computational structures that you need to actually operationalize that understanding.
Creating computer systems that can communicate freely with people is a key challenge in artificial intelligence. How has your work on natural-language processing aided in the pursuit of this goal?
When I started, there were a lot of people working on syntactic processing, meaning the structure of sentences, and semantic processing, which is how meaning is built. Everybody knew, in some sense, that context, dialogue and pragmatics mattered, but they had no idea how to handle these factors computationally. So, one of the first things I did was what later became known as “Wizard of Oz” experiments. I put two people in two different rooms, communicating with teletype machines. I told one of them that they were talking with a computer, and I asked them to complete a task. The transcripts generated in those experiments revealed that these kinds of “task-oriented dialogues” have a structure, that the structure parallels the task, and that the way in which we talk is affected by that structure. Once I’d developed a computational model of these task-oriented dialogues, the next question was how to generalize from those to other types of conversation. That led me, with colleagues, to the development of intentional models and to speech-act theory, which other people in AI have picked up on.
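To make the idea of dialogue structure paralleling task structure concrete, here is a minimal sketch in Python. Everything in it is invented for illustration, including the TaskSegment class and the assembly example; it is not Grosz’s actual computational model, only a toy rendering of the observation that segments of talk attach to steps of the task.

```python
# Toy illustration only: the classes and the assembly task are invented,
# not Grosz's actual formalism. The point is that segments of dialogue
# attach to steps of the task, so the dialogue tree mirrors the task tree.
from dataclasses import dataclass, field

@dataclass
class TaskSegment:
    """A stretch of dialogue tied to one step of the underlying task."""
    task_step: str
    utterances: list[str] = field(default_factory=list)
    subsegments: list["TaskSegment"] = field(default_factory=list)

dialogue = TaskSegment(
    task_step="assemble the pump",
    subsegments=[
        TaskSegment("attach the platform",
                    ["First, bolt the platform to the base.",
                     "OK, the platform is on."]),
        TaskSegment("install the pump",
                    ["Now set the pump on the platform.",
                     "Which way should the outlet face?"]),
    ],
)

def print_structure(segment: TaskSegment, depth: int = 0) -> None:
    """Print the segment tree, showing how it parallels the task."""
    print("  " * depth + segment.task_step)
    for utterance in segment.utterances:
        print("  " * (depth + 1) + "- " + utterance)
    for sub in segment.subsegments:
        print_structure(sub, depth + 1)

print_structure(dialogue)
```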
What happens when there are multiple agents communicating with each other?
In a dialogue, you can’t assume that every participant understands everything about the other participants’ knowledge or intentions. When you have multiple participants working together, you have to model not only their individual plans but also the way participants interact and the way their plans are interwoven. One thread of my work has been the development of those models. Another has been to use the theoretical models as inspiration for design, or as constraints on what parts of systems you have to build. An example is a project I’m working on with a pediatrician at Stanford Medical School involving children with complex conditions. Those children may see upwards of 12 or 15 care providers, who may have little detailed knowledge about each other’s work. Today’s electronic health record systems do nothing to help these providers coordinate care delivery. We are using our multi-agent systems theory of collaboration as an analytic lens to see what is going on when care providers and patients, or in this case parents, try to work together, to identify the missing pieces, and to understand what systems we could design to help them work more effectively as a team. One of those pieces is something that ensures all participants can see what the goals are, so they know what they are trying to achieve. Another relates to improvements in the way team members exchange information.
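As a purely illustrative sketch of the first missing piece mentioned above, the hypothetical Python structure below keeps team goals in one shared plan that every member can see, with their own responsibilities flagged. All class names and fields are assumptions made for illustration; this is not the system being built with Stanford.

```python
# Hypothetical sketch only, not the system under development: a shared
# plan in which every team member can see every goal, so no part of the
# plan stays private to one provider. All names and fields are invented.
from dataclasses import dataclass, field

@dataclass
class TeamGoal:
    description: str
    responsible: set[str]        # providers committed to this goal
    status: str = "open"         # "open" or "achieved"

@dataclass
class CareTeamPlan:
    patient: str
    goals: list[TeamGoal] = field(default_factory=list)

    def add_goal(self, description: str, responsible: set[str]) -> None:
        self.goals.append(TeamGoal(description, responsible))

    def view_for(self, member: str) -> list[str]:
        """What one provider sees: every goal, with their own flagged."""
        return [
            f"{'*' if member in goal.responsible else ' '} "
            f"{goal.description} ({goal.status})"
            for goal in self.goals
        ]

plan = CareTeamPlan(patient="patient-A")
plan.add_goal("Stabilize seizure medication", {"neurologist", "pediatrician"})
plan.add_goal("Schedule feeding-tube review", {"GI specialist"})
for line in plan.view_for("pediatrician"):
    print(line)
```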
There seem to have been huge leaps in the development and application of AI technologies in recent years. What is driving that progress?
Many of the most important ideas in AI, like neural networks or text mining, have been around since the 1960s, but the computers of the time weren’t powerful enough, so they just didn’t work. Now, thanks to video games and the development of powerful graphical processing units, there is a lot more computing power out there. That’s enabled the machine-learning community to develop what’s known as deep learning, which involves neural networks with many more layers. That’s made an enormous difference in a range of areas of AI. Deep learning is not by itself sufficient, though. It’s unlikely even to handle all the vision and natural-language problems, but it has made a huge difference to what AI systems can currently do.
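A minimal sketch of the “many more layers” point, assuming nothing beyond NumPy: the same simple fully connected layer is stacked to an arbitrary depth, which is exactly what cheap parallel hardware made practical. The widths, depths and ReLU activation are illustrative choices, not a description of any particular system.

```python
# Minimal illustration only: a feed-forward network whose depth is a
# parameter. Layer widths, random weights and the ReLU activation are
# arbitrary illustrative choices, not any real system's design.
import numpy as np

rng = np.random.default_rng(0)

def forward(x: np.ndarray, n_layers: int, width: int) -> np.ndarray:
    """Push input x through n_layers fully connected ReLU layers."""
    h = x
    for _ in range(n_layers):
        # Fresh random weights per layer; a real network would learn these.
        W = rng.normal(scale=1.0 / np.sqrt(h.shape[0]),
                       size=(width, h.shape[0]))
        h = np.maximum(0.0, W @ h)   # linear map followed by ReLU
    return h

x = rng.normal(size=64)
print(forward(x, n_layers=2, width=64).shape)    # a shallow network
print(forward(x, n_layers=50, width=64).shape)   # a "deep" stack of layers
```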
What challenges still need to be overcome before we see truly “natural” conversations between computers and humans?
One major challenge is getting good data. There’s a lot more data around today than a few years ago, but it’s not always the right sort of data. If you want to learn about natural language, you need to study real dialogue. Twitter is not good data for that, because it isn’t like real dialogue, nor are the sort of rudimentary interactions we currently have with Siri and similar personal assistant systems. It will be difficult to get the right sort of data and to do it ethically, because you need permission from the participants if you are going to study their conversations. It will also be difficult to ensure you collect data from a full range of people. You can’t just collect data from college sophomores, as psychology research has traditionally done, nor just from today’s heavy users of social media, nor just from English speakers. Even within a single country there will be different dialects and different cultural influences on conversational structures.
Will developments in other areas of computer science, like quantum computers, have a significant impact in the field of AI?
There’s no question that when they get quantum computing to work, it will allow us to solve problems we can’t solve now. But I can’t tell you what those problems will be. How much these technologies help to increase reasoning capabilities depends on how well we come to understand how to get systems to reason at the higher cognitive levels involved in human intelligent behavior.
Where do you think AI technologies will have the biggest impact in future and what are the implications for people’s jobs and roles?
I don’t want to do any crystal ball gazing, but there is a lot of interest in using AI to improve education and healthcare delivery. I also think the autonomous vehicle arena is going to see a lot of change. With regard to healthcare and education, I think there’s a huge ethical question for society at large. We could build those systems to complement and work with physicians and teachers, or we could try to save money by having them replace people. It would be a terrible mistake to replace people. There are great things that AI systems will be able to do in terms of processing large amounts of data, but that doesn’t give you the same view into a patient. What patients need is a physician who knows them in depth. The same is true in education. Rather than trying to replace teachers, you can design systems to support them. If you have 30 or 40 students working on computer systems, the teacher can’t track them all, but a computer can. It can detect when students are not paying attention or are running into difficulty, and it can alert the teacher to who needs help and why. That’s exactly the kind of system we’ve been building.
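As a purely illustrative sketch of that kind of alerting, the toy Python rule below scans hypothetical per-student activity logs and reports who may need help and why. The field names and thresholds are invented; this is not the system Grosz’s group has built.

```python
# Illustrative sketch only, with invented field names and thresholds;
# not the classroom system described above. A simple rule flags which
# students may need the teacher's attention, and says why.
from dataclasses import dataclass

@dataclass
class StudentActivity:
    name: str
    seconds_idle: int      # time since the student's last interaction
    recent_errors: int     # wrong attempts on the current exercise

def students_needing_help(classroom: list[StudentActivity],
                          idle_limit: int = 120,
                          error_limit: int = 3) -> list[str]:
    """Return alert messages naming who needs help and why."""
    alerts = []
    for s in classroom:
        if s.seconds_idle > idle_limit:
            alerts.append(f"{s.name}: inactive for {s.seconds_idle}s")
        elif s.recent_errors >= error_limit:
            alerts.append(f"{s.name}: {s.recent_errors} errors on current task")
    return alerts

classroom = [
    StudentActivity("A", seconds_idle=15, recent_errors=0),
    StudentActivity("B", seconds_idle=300, recent_errors=0),
    StudentActivity("C", seconds_idle=20, recent_errors=4),
]
for alert in students_needing_help(classroom):
    print(alert)
```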
The widespread use of AI will raise important risks and ethical issues. How can they be addressed?
All human activities carry risks, and managing those risks requires a combination of design, policies and regulations. I believe we have to deal with ethics at the moment of design. That means we need to teach our students to consider ethical issues in design and how to address those issues. And industry needs to make ethical issues and ethical design as important as the design of efficient algorithms. We need industry to form partnerships – as it is now doing – to share best practices, and we need to have technology people, social scientists and cognitive scientists, as well as lawyers, in the room when regulations are written.
What potential areas of application for AI technologies do you personally find most exciting?
I believe there is a tremendous opportunity for AI to help people in low-resource communities around the world have better lives, and also to help the environment, if we make that a priority. There are some people working on AI for such settings today in a variety of applications, including education systems and healthcare delivery systems. I think that’s very exciting. It may not make a lot of money for anybody right away, but the long-term economic benefits from raising the level of health and education in low-resource communities and improving the environment will be much more important.