CS professors Daniel Kang and Bo Li join Schmidt Sciences AI Safety Science Program

March 5, 2025 | Bruce Adams

Schmidt Sciences has launched a new AI Safety Science program and selected 27 projects that “develop the fundamental science critical to understanding the safety properties of AI systems.” The $10 million program is devoted to foundational research.

Two of the featured projects will be led by faculty from the Siebel School of Computing and Data Science in the University of Illinois Urbana-Champaign’s Grainger College of Engineering.

CS professor Daniel Kang “will assess the ability of AI agents to perform complex cybersecurity attacks.” CS professor Bo Li “will design and develop a virtual environment with advanced red teaming algorithms for automatically evaluating AI systems and AI agents, with a focus on exploring what level of access to AI models is needed for different levels of evaluation.”

Kang says his project is “related to creating benchmarks of cybersecurity capabilities of AI agents.” Schmidt Sciences will provide computational support from the Center for AI Safety and API access from OpenAI. The project builds on Kang’s cybersecurity research, which has drawn attention for exposing weaknesses in AI systems that enable content manipulation, such as deepfakes.

The Schmidt Sciences team “seeks out researchers pursuing early-stage, high-risk hypotheses.” The nonprofit Schmidt Sciences was founded by Eric Schmidt, who led Google as CEO for a decade, served as its executive chairman for four years, and later served as executive chairman of Alphabet.

Eric Schmidt wrote in the Wall Street Journal on January 26, 2024:

"Today’s large language models, the computer programs that form the basis of artificial intelligence, are impressive human achievements. Behind their remarkable language capabilities and impressive breadth of knowledge lie extensive swaths of data, capital, and time. Many take more than $100 million to develop and require months of testing and refinement by humans and machines. They are refined, up to millions of times, by iterative processes that evaluate how close the systems come to the “correct answer” to questions and improve the model with each attempt. 

What’s still difficult is to encode human values.”

“The science of AI safety is a crucial new field underfunded by philanthropy, commercial AI labs, and the government,” said Michael Belinsky, a director of Schmidt Sciences’ AI and Advanced Computing Institute and lead of the AI Safety Science program. “We are proud to support these dedicated researchers as they work to ensure that AI is safe and aligned with human values.”


This story was published March 5, 2025.