Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience and to shine a light on some of these leaders. In this series, publishing Fridays, we’re diving deeper into conversations with this year’s winners, whom we honored recently at Transform 2021. Check out last week’s interview with a winner of our AI responsibility and ethics award.
Think of an AI technology, and Dr. Nuria Oliver was likely working on it decades ago when it still felt like science fiction. Her research and inventions have ignited advancements across the industry, and now drive many of the products and services we use every day.
But while Oliver, the winner of our AI Research Award, has published more than 150 scientific papers and earned 41 patents, she doesn’t believe in technology advancement for its own sake. Above all, she is today focused on responsible AI and “developing technology that’s on our side, that really has our interests and our well-being as the main objective function.”
“To me, progress is an improvement to the quality of life for all people, all the beings on the planet, and the planet itself — not just some people,” she told VentureBeat. “So I think it’s very important before we invest in any technology, to think whether that development is continuing progress. Or if it’s not, maybe we shouldn’t do it.”
Oliver is putting this belief into action beyond her own research, speaking regularly on the topic and creating the Institute for Humanity Centric AI, a nonprofit focused on the impact of AI. She’s also leading efforts to bring more women into the industry, and asks any young girls who may be reading this to consider the opportunities in the field. Oliver herself was the first woman computer scientist in Spain to be named an ACM Distinguished Scientist and an ACM Fellow. She was also the first woman scientific director of R&D at Telefonica, and continues making waves today as the chief scientific advisor of the Vodafone Institute.
We’re thrilled to offer Oliver this much-deserved award. We recently caught up with her to learn more about her research and discuss responsible AI, the challenges in the industry, and how business leaders can make sense of the quickly evolving field.
This interview has been edited for brevity and clarity.
VentureBeat: How did you become an AI researcher? And what interests you most about the work?
Dr. Nuria Oliver: I discovered AI when I was studying telecommunications engineering in Spain. It’s a six-year degree, and when I was in the third or fourth year, a professor from the math department asked me to write a paper for an international conference. I chose to write about neural networks and human intelligence versus artificial intelligence, and I became fascinated with the topic. And so I decided to do my master’s thesis project on computer vision. My PhD in the U.S. was also on AI. So I guess it all started in my third year of university, but I think before that what really fascinated me, and still fascinates me about AI, is also human intelligence.
VentureBeat: Of all your inventions and research, is there one that sticks out to you as the most impactful for the field of AI? Or the most impactful in another way?
Oliver: That’s like asking someone if they have a preferred child. But I guess my main area of expertise is building computational models of human behavior and building intelligent interactive systems that understand humans. And in terms of a landmark project, I would say the work I did on modeling human interactions using machine learning techniques, because that was one of the early works on detecting and modeling human interactions. I also built a system that was able to predict the most likely maneuver in a car before anyone was talking about autonomous driving — like back in 1999. So that was also a really complex but very exciting project.
I’m also proud of the first project I did at MIT, which was a real-time facial expression recognition system. That exists commercially today, but it was like science fiction back then in 1995. All the work I’ve done on the intersection between mobile phones, health, and wellness has also been really exciting, because it was sort of trying to really change the way we perceived phones. A lot of that work has also become mainstream today with wearables. And then finally, I would say all the work I’ve done on using data and AI for social good. That’s an area that I’m very passionate about, and I feel it’s had a lot of impact. I created the area for using data and AI for social good at Telefonica, and then again at Vodafone.
VentureBeat: Well that’s an amazing body of work, and it sounds like you’re always ahead of your time. So what are you working on now that we might see more of in the future? Is there any emerging area of research that you really have your eye on right now?
Oliver: I’m very interested in developing technology that’s on our side, that really has our interests and our well-being as the main objective function. And this is not the case today. Why don’t we design technology that suggests we turn it off if it’s having a negative impact on us? Why is the expectation that the technology we use is designed to maximize the amount of time that we spend using it? I’m also working a lot on some of the key challenges of AI systems that are used for decision making: algorithmic bias, discrimination, opacity, violations of privacy, the subliminal manipulation of human behavior. Right now, I don’t think the impact is necessarily positive. So that’s a big area of focus of my work right now, and I recently created a nonprofit foundation called the Institute for Humanity Centric AI. A lot of the work I just described is part of the research agenda of this new foundation.
VentureBeat: You mentioned some of the big ones like bias and privacy, but I’m wondering what you think are some of the lesser-known hurdles in AI research today.
Oliver: There are different types of challenges. This is a very active research area, so there are a lot of technical challenges. In addition to what we already discussed, there’s inferring causality versus correlation. For a lot of big, important problems, we want to understand the causal relationships between different factors, but that is very difficult to do with many of today’s methods, which are very good at finding correlations but not necessarily causation. There are challenges related to data access and combining data from different sources. And for many impactful use cases, like helping with a natural disaster or even the pandemic, you want to be able to make decisions in real time.
And then there are more human-related issues in terms of education and capacity building. I’ve been saying for 10 years now that we should really transform the compulsory education system so it’s more aligned with the 21st century. I think the education system in many countries is from the second industrial revolution, but we’re in the fourth industrial revolution. I also think we need to invest more in developing human skills that have been very important for our own survival: our social intelligence, emotional intelligence, creativity, our ability to work together, to adapt. And beyond formal education, I think it’s very important to invest in upskilling and reskilling programs for professionals whose jobs are being impacted by AI. I think there’s a connection there with some of the other VentureBeat awards, like the AI Mentorship Award Katia Walsh won. And then also investing in education for the general population and policymakers, so we can actually make informed decisions about this very important discipline of AI.
And I mentioned it briefly, but there are many challenges related to the data: accessing, sharing, analyzing, ensuring quality, and privacy implications. Because even if the data is non-personal, you can infer personal attributes like political views, sexual orientation, gender, or age. And of course, there are many barriers related to the governance of these systems and the ethical frameworks necessary to make sure the huge power AI has is actually going to be used for social good. I always say we shouldn’t confuse technological development with progress.
VentureBeat: There are new AI papers and findings coming out every day, and like you said, advancements aren’t always progress. So what advice do you have for technical professionals and decision makers for how they can keep up, understand changes in the field, and parse what research is truly impactful?
Oliver: That’s a very good question, because the field has grown exponentially to the point where papers are being published constantly. And in fact, many influential papers aren’t even published in scientific conferences anymore; they’re published in open repository systems like arXiv without any peer review. So I think it’s important to understand that this work is incremental. If you’re a practitioner or a business leader, understand the main concepts and both the capabilities and limitations of existing AI systems. Try to think of how they can benefit your business without necessarily going into all the details of the latest papers.
VentureBeat: Throughout the conversation, we’ve been touching on this idea of responsible and ethical AI. What do you feel is the role of AI researchers in regards to this and preventing the potential harms of these technologies? How is the responsibility the same or different from that of entrepreneurs and enterprises?
Oliver: Increasingly, leading machine learning conferences are asking for a clear ethical discussion of the implications of the work. So that’s really a step in the right direction. Many universities are now including ethics in the computer science degree as well. My main message here would be that if you’re using AI, develop a human-centric approach from the beginning. Take into account the direction the field and legislation are going. I think Europe is recognizing that if there is no regulation of AI systems, the negative unintended consequences of these systems can be pretty bad. And as I said, you know, we might not have progress at all.