Illinois alum, entrepreneur and philanthropist Vilas Dhar Visits the College

4/17/2025 Bruce Adams


For someone who advises the UN and US government, meets the Pope, and manages a multi-million-dollar foundation, Vilas Dhar says his “preferred way” to be introduced is as “a small-town boy from Urbana with some curiosity about the world.”

Vilas Dhar is an Illinois native, a University High School graduate, and a University of Illinois Urbana-Champaign Grainger College of Engineering alumnus with a dual bachelor’s degree in biomedical engineering and computer science.

Vilas Dhar is an entrepreneur, technologist, human rights advocate, and a leading global voice on equity in a tech-enabled world. As President and Trustee of the Patrick J. McGovern Foundation, Vilas champions a new social compact for the digital age that prioritizes individuals and communities in the development of new products, inspires economic and social opportunity, and empowers the most vulnerable.

He described his “real sense of homecoming” at “a place that has shaped me in so many ways” at the beginning of a fireside chat with Rashid Bashir, Dean of the Illinois Grainger College of Engineering, and Venetria K. Patton, Harry E. Preble Dean of the College of Liberal Arts & Sciences, on Friday, April 11, 2025. The three discussed AI and data for the greater good in a lively conversation enhanced by questions from the audience.

Photo Credit: Heather Coit/Illinois Grainger Engineering
From left to right: Dean Rashid Bashir of The Grainger College of Engineering, Vilas Dhar, Dean Venetria K. Patton of the College of Liberal Arts & Sciences, and Director Nancy M. Amato of the Siebel School of Computing and Data Science

Q&A

Questions and answers from the fireside chat have been edited for content and length.
See the full fireside chat


Dean Patton: What first sparked your belief that technology must serve people and not the other way around?

Vilas Dhar: I grew up about four miles from here, and I remember being eight years old and running through NCSA's supercomputers. I got to see the amazing things that we were doing with technology and this idea that our creativity and our inspiration were unfettered by the physical world we knew but rather were made possible by what we might envision for the future.

A second part of it was growing up in rural Illinois in the 1980s and recognizing the complex political power dynamics. You saw this incredible divide between the deeply resourced schools and those that weren't. A third part of this was in those summers when we could go back to visit my family in very rural parts of India in places that felt about as far away from those supercomputers as you could imagine. No electricity, no running water, and one phone for an entire village. Seeing that no matter what we were capable of, if we couldn't apply it to and bring it to the challenges that people faced in the real world, we were missing the whole point.

Dean Bashir: So you have started from here and then gone to a global stage. You presented at the World Economic Forum, at the UN, and, as I said, in other places. What do you think is the most pressing issue around AI and governance and regulating the concerns that we all have?

Dhar: I can give you a technical answer, but I want to start from a different place. The biggest challenge, I think, actually has nothing to do with AI itself, but it's more broadly a state of the world that we're in now, where I think for 30-plus years, we have given over a lot of decisions about technology, but also how that technology shapes our society and our conception of democratic participation and justice to a very small set of people and institutions. Whether we've done it intentionally or not, the world that's been shaped for us hasn't necessarily been shaped for purpose. It's been shaped for power and profit.

You look at the world today and say, who's creating AI? Where is it being resourced? Who's making decisions about how the technology is being developed? I hate to say it, but I don't feel like that's a societal conversation that's happening. We're all seeing it happen to us. Maybe, in some cases, it's happening for us. Very rarely is it happening by us.

Think of the promise that existed at the time when we started talking about what the World Wide Web would be: a place of open information, an intellectual space where people could connect and share ideas, where you could have discovery and hold all of human knowledge. [It was] not necessarily a place where you could buy your next set of sneakers.

Patton: What role should higher education play in shaping the ethical frameworks around AI, and how can engineering and liberal arts collaborate to effectively advocate and teach our ethical technology and digital dignity?

“I think there's no such thing as ethical technology. I think ethics, by definition, are a human function; our ability to make moral choices about the world is never something that we can encourage a technical system to do because [it] fundamentally abrogates our responsibility to be the ones who make the moral choice.”

I suppose my question is how do we train our college graduates, not merely those who study the humanities, but those who study engineering and the sciences, those who study any number of fields, to understand how to make ethical choices in a tech-enabled world? And yet, it's not necessarily a conversation that's been had in all the right rooms. I think it's been had on this campus. But we can no longer isolate the set of skills that higher education trains somebody for from the set of skills that allow us to make meaningful decisions about shaping our society.

We as a society spent 10,000 years building mythologies, stories, and philosophies about why we shouldn't aspire to godlike powers. Almost every faith-based tradition has a story of somebody who aspired to too much. A classic example is Icarus. We spent 10,000 years telling these stories that said to us, be wary of godlike powers. And then, about 30 years ago, we decided to forget all of that and build a bunch of things that somehow made us more powerful than we ever might have imagined we would be.

I don't have any answers, but I have a lot of questions, and I think that we should be asking those questions. We should be asking them right at the heart of our engineering programs. 


Dhar: One of the questions I might ask you all is, what role might higher education have, even after you pass graduates through the halls of this institution, to bring them back together and re-expose them to those foundational questions after they've had a chance to go out and test them in the real world? And how can we help you do that?

Bashir: That's a great question. I think we need to create those forums with our alumni, industry leaders, and foundation leaders. One of the challenges we have in our curriculum is that you have four years, four-plus years, and how much do you add in? How do you add while keeping the foundational knowledge that we want to impart to the students, yet talk about these very important topics? I've been thinking a lot about this. Do we need to do this, given the climate we are in today? How do we do it so that people feel safe and welcome so they feel that they can have this conversation? This is an ongoing question and debate we're all thinking about a lot.

Patton: How do we train the models to be more ethical, more human, more conscious of the values that we aspire to? Because the current models are being trained on the current data, and that is going to reflect the world in a certain way.

Dhar: I sat on a stage like this with the CEO of one of these big companies, and he sat next to me, and with zero self-awareness, he said, ‘AI represents the very best of humanity.’ I had to say, ‘I'm sorry, let's just take a step back from that because, as I understand it, your company's model is mostly trained on Reddit data.’ [After the audience expressed its amusement, he noted] Most models are trained on the products of one billion people from the geographic north with digital connectivity engaging in a specific internet dynamic, not seven billion other people and their cultural wisdom and life experiences living on the frontlines of issues and events that are changing the world. [The answer is that we need to] invest in more representative data sets. These are not hard things to do. They're constrained not by technological capacity, but by political will and economic resources.

[On a trip to nearby Philo, IL, I met a farmer using AI-enabled planters. When I asked how his neighbors reacted, he said,] “People are scared of AI, but they're not scared of an extra 6% of profit. They're not scared that they get to go out in the field, and instead of spending 15 hours making sure they know where the tractor is going, they get to make important decisions about how the seed is being planted and what they know about farming.”

It's not really a conversation about AI. It's a conversation that starts with what the outcome is. Does it transform somebody's life? Then can you translate it back to the technology? Why do we do such a bad job of having this conversation? I have a hypothesis: It's because the folks out in Silicon Valley, many friends of mine, I've been one of them, don't spend a lot of time in Philo, IL. They're not talking to farmers and asking [about] technologies that [solve problems] but are instead building technologies that are great in their own right and saying, "I'm going to give you what I have. You figure it out for yourself." If that's the primary way we build things, I think there's something fundamentally broken in our technology development model…. If we could put the capacity to build and design new technologies proximate to the communities that need them, you'd have a very different conversation.

Bashir: We can talk about how AI can expand capacity, helping people do new things and be more productive. But there's also fear that it can take over some jobs.

“What we're observing quite clearly is that maybe it's less that jobs will go away than that tasks will go away: things that people do today that automated systems and AI systems are much better at. And that set of things that AI systems are better at is getting bigger by the day. There is a massive disruption happening in our workforce. The question of whether that means jobs will disappear is, honestly, a matter of speculation by many people, and I'm not sure that anybody has a better sense than anybody else.”

What is coming is going to be very uncomfortable for us. What it means is we're heading into a time of extreme transformation of the political, social, and economic underpinnings of our world. And in that world, we need to band together and figure out common approaches.

A little too heavy for a Friday afternoon, I think!




This story was published April 17, 2025.