
AI & Data Science Stars Seminar Series
The AI & Data Science Stars Seminar Series fosters interdisciplinary collaboration by connecting emerging scholars and recent recipients of NSF CAREER awards (and comparable awards from other agencies) across diverse research fields, with a particular focus on the research directions of Data Science (DS) faculty and related areas such as Computer Science and Information Systems. The series strengthens the identity of the DS department and of the Ying Wu College of Computing (YWCC) as a hub for innovation and thought leadership. It provides an open platform for sharing cutting-edge research, inspiring students and early-career researchers, and fostering professional networks between academia and industry.
Spring 2025 Speakers
Lecture: Automatic Discovery of Algorithms and Neural Architectures in Scientific Machine Learning
George Em Karniadakis, Brown University
April 7, 2025, 1 PM - 2 PM
Location: GITC 2121
Click here to sign up
Brief Bio:
George Em Karniadakis is the Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics at Brown University, with additional appointments at MIT and PNNL. He is a member of the National Academy of Engineering and a Vannevar Bush Faculty Fellow. He received his M.S. and Ph.D. from MIT in 1987. He was a Lecturer in the Department of Mechanical Engineering at MIT, then joined the Center for Turbulence Research at Stanford/NASA Ames, and subsequently moved to Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program in Applied and Computational Mathematics. He was a Visiting Professor at Caltech in 1993 and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. After becoming a full professor in 1996, he continued to serve as a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is a Fellow of the AAAS (2018-), the Society for Industrial and Applied Mathematics (SIAM, 2010-), the American Physical Society (APS, 2004-), and the American Society of Mechanical Engineers (ASME, 2003-), and an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the SES G.I. Taylor Medal (2024), the SIAM/ACM Prize in Computational Science and Engineering (2021), the Alexander von Humboldt Award (2017), the SIAM Ralph E. Kleinman Prize (2015), the J. Tinsley Oden Medal (2013), and the CFD Award (2007) from the U.S. Association for Computational Mechanics.
Abstract
We will first review deep neural operators, which we will use as foundation models for scientific machine learning tasks. Then, we will design two classes of ultra-fast meta-solvers for linear systems arising after discretizing PDEs by combining neural operators with either simple iterative solvers, e.g., Jacobi and Gauss-Seidel, or with Krylov methods, e.g., GMRES and BiCGStab, using the trunk basis of DeepONet as a coarse preconditioner. The idea is to leverage the spectral bias of neural networks to account for the lower part of the spectrum in the error distribution, while the upper part is handled easily and inexpensively using relaxation methods or fine-scale preconditioners. We create a Pareto front of optimal meta-solvers using a plurality of metrics, and we introduce a preference function to select the solver most suitable for a specific scenario. This automation for finding optimal solvers can be extended to neural architectures for predicting time series as well as to nonlinear systems and other setups, e.g., finding the best meta-solver for space-time in time-dependent PDEs.
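As a rough illustration of the hybrid principle described in the abstract, the sketch below pairs damped Jacobi relaxation, which cheaply damps the upper (high-frequency) part of the error spectrum, with a coarse correction over a small basis of smooth modes, standing in for the DeepONet trunk basis used as a coarse preconditioner in the talk. The function names, the sine basis, and the toy Poisson problem are illustrative assumptions, not the speaker's implementation.

```python
# Minimal sketch of a two-level "meta-solver": damped Jacobi for the upper
# spectrum plus a coarse correction over a small basis for the lower spectrum.
import numpy as np

def hybrid_solve(A, b, V, num_iters=100, omega=0.8):
    """Alternate damped-Jacobi sweeps with a Galerkin coarse correction.

    A : (n, n) symmetric positive-definite matrix from a PDE discretization
    b : (n,)   right-hand side
    V : (n, m) basis whose columns span the smooth (low-frequency) modes, m << n
    """
    x = np.zeros_like(b)
    D_inv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
    A_c = V.T @ A @ V                 # small m x m coarse operator
    for _ in range(num_iters):
        # Fine-scale step: damped Jacobi cheaply damps high-frequency error.
        r = b - A @ x
        x = x + omega * D_inv * r
        # Coarse step: solve the projected system to remove smooth error.
        r = b - A @ x
        x = x + V @ np.linalg.solve(A_c, V.T @ r)
    return x

# Toy usage: 1D Poisson matrix with low-frequency sine modes as the coarse basis.
n, m = 100, 15
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)
t = np.arange(1, n + 1) / (n + 1)
V = np.column_stack([np.sin(np.pi * (j + 1) * t) for j in range(m)])
x = hybrid_solve(A, b, V)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

In the same spirit, the coarse correction could instead be used as a preconditioner inside a Krylov method such as GMRES or BiCGStab, which corresponds to the second class of meta-solvers mentioned in the abstract.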
The series is open to the public; no registration fee is required.
Lecture: Convincing Experts to (not) Trust ML Models
Eric Wong, University of Pennsylvania
March 14, 2025, 11:30 AM - 12:30 PM
Location: GITC 2121
Click here to sign up
Brief Bio:
Eric Wong is an Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania. He researches the foundations of robust systems, building on elements of machine learning and optimization to debug, understand, and develop reliable systems. He is a recipient of an NSF Early Career award and an AI2050 Early Career award.
Abstract
ML systems have a long history of being unreliable, so should we trust these models today? In this talk, we will discuss the challenges and opportunities facing trustworthy machine learning. On the one hand, ML systems are prone to manipulation, as exemplified by our research on jailbreaks for large language models. We will show how this jailbreaking procedure can be generalized beyond safety to automatically find the weaknesses of large language models, a procedure called task elicitation. On the other hand, ML systems may contain a gold mine of information waiting to be discovered, but conventional explanations lack reliability. We will show how to create explanations with practical yet provable guarantees in the language of experts, with applications in assisting real-time surgery and enabling new discoveries in cosmology.
The series is open to the public; no registration fee is required.
Lecture: The Usefulness of Fibonacci Codes
Shmuel T. Klein, Bar-Ilan University
Click here to sign up
Brief Bio:
Shmuel T. Klein is a Professor Emeritus and former Head of the Computer Science Department at Bar-Ilan University, near Tel Aviv. His main research interest is in Data Compression. He has published 2 books, more than 100 scientific papers, and several patents.
The Usefulness of Fibonacci Codes
The talk explores a sub-field of Data Science concerned with the algorithmic aspects of data compression and coding. Several properties and applications of Fibonacci codes are presented. These are fixed codeword sets that use binary representations of integers based on the Fibonacci sequence rather than on powers of 2. Applications range from robust data compression and faster modular exponentiation to boosting the compression performance of rewriting codes for flash memory. No previous knowledge is assumed.
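As a small, self-contained illustration (not taken from the talk), the sketch below encodes a positive integer in a Fibonacci code: the integer is written as a sum of non-consecutive Fibonacci numbers (its Zeckendorf representation), and a terminating 1-bit is appended so that every codeword ends in "11" and contains no other "11". This is what makes the code a prefix code and gives it the robustness exploited in the applications above.

```python
# Illustrative Fibonacci (Zeckendorf-based) encoder and decoder.

def fib_encode(n: int) -> str:
    """Return the Fibonacci codeword of a positive integer n as a bit string."""
    assert n >= 1
    fibs = [1, 2]                      # F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits, remainder = [], n
    for f in reversed(fibs[:-1]):      # greedy Zeckendorf: largest Fibonacci first
        if f <= remainder:
            bits.append("1")
            remainder -= f
        else:
            bits.append("0")
    bits.reverse()                     # least significant Fibonacci number first
    return "".join(bits) + "1"         # terminating bit produces the "11" marker

def fib_decode(code: str) -> int:
    """Decode a single Fibonacci codeword (ending in '11') back to an integer."""
    fibs = [1, 2]
    while len(fibs) < len(code) - 1:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for bit, f in zip(code[:-1], fibs) if bit == "1")

# Example codewords: 1 -> "11", 2 -> "011", 3 -> "0011", 4 -> "1011", 11 -> "001011"
for n in [1, 2, 3, 4, 11]:
    assert fib_decode(fib_encode(n)) == n
    print(n, fib_encode(n))
```

Because every codeword ends in the unique pattern "11", a decoder can resynchronize after a bit error by scanning for the next "11", which is one source of the robustness mentioned in the abstract.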
The series is open to the public; no registration fee is required.
Inaugural Lecture: Emerging Legal Frameworks for AI
David Opderbeck, Seton Hall University Law School
Click here to sign up
Brief Bio:
David Opderbeck is Professor of Law and Co-Director of the Gibbons Institute of Law, Science & Technology and Institute for Privacy Protection at Seton Hall University Law School. His legal scholarship focuses on artificial intelligence, cybersecurity, data privacy, and intellectual property law. He develops and teaches innovative courses in technology law, including Cybersecurity Law and Policy, Artificial Intelligence and the Law, and a Data Privacy and Security Lab. He also leads the Law School's Data Privacy and Security Compliance Program. In the core law school curriculum, he has taught Property Law, Constitutional Law, and Torts. He is also a Faculty Associate with the Berkman-Klein Center for Internet & Society at Harvard University. Prior to his career in academia, Professor Opderbeck was a Partner in the Intellectual Property / Technology practice at McCarter & English, LLP, where he began practicing cyber and intellectual property law in the early years of the public Internet.
Emerging Legal Frameworks for AI
There is general consensus about some basic principles of AI ethics and policy, including transparency, fairness, explainability, privacy, security, and accountability, but it remains unclear how these broad goals can be incorporated into positive law. The European AI Act is the most prominent and extensive example of AI-specific law. It embodies a regulatory framework that is in many ways similar to the EU's General Data Protection Regulation (GDPR), which previously set the global pace for comprehensive privacy regulation. A number of U.S. states have enacted or are in the process of enacting AI laws, which are mostly issue- or sector-specific. During the Biden Administration, the U.S. federal government began to develop policy positions relating to AI, but there has not yet been any sustained movement toward comprehensive legislation, and the incoming Trump Administration's priorities relating to AI are unclear. This talk will survey the existing legal landscape and highlight some difficulties policymakers face in this domain.
The series is open to the public; no registration fee is required.
Learn more about our graduate programs:
M.S. in Data Science
Ph.D. in Data Science
M.S. in Artificial Intelligence