March 21, 2022
BATON ROUGE, LA – Artificial intelligence (AI) and machine learning (ML) technologies
play an increasingly large role in our society today, including in high-stakes decision-making
systems for lending, employment screening, and criminal justice sentencing.
However, a growing challenge with AI and ML systems is avoiding the unfairness they
can introduce, which may lead to discriminatory decisions. Finding a solution to that
problem is the aim of a project by LSU Computer Science Associate Professor Mingxuan
Sun, University of Iowa Computer Science Associate Professor Tianbao Yang, and University
of Iowa Associate Professor of Business Analytics Qihang Lin.
The work is part of a grant from the National Science Foundation ($500,000) and Amazon
($300,000). Yang serves as principal investigator on the project, and Sun and Lin
are co-principal investigators.
The researchers’ objectives are to design new fairness measures and to develop numerical
algorithms for solving the resulting optimization problems with fairness guarantees. More specifically,
they will develop scalable stochastic optimization algorithms for optimizing a broad
family of rank-based, threshold-agnostic objectives.
Learning to rank means selecting the set of top-k answers or items with the highest
ranking scores according to a given scoring function. Ranking algorithms have many applications,
such as selecting top-k job candidates, predicting top-k crime hotspots, and recommending
top-k items to users.
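As an illustration only (not the project's own code), top-k selection under a given scoring function can be sketched in a few lines of Python; the candidate names and scores below are invented:

```python
# Hypothetical sketch: given a scoring function, "learning to rank"
# selection returns the k items with the highest ranking scores.
def top_k(items, score_fn, k):
    """Return the k items with the highest scores under score_fn."""
    return sorted(items, key=score_fn, reverse=True)[:k]

candidates = ["alice", "bob", "carol", "dave"]
scores = {"alice": 0.9, "bob": 0.4, "carol": 0.7, "dave": 0.2}
print(top_k(candidates, scores.get, k=2))  # → ['alice', 'carol']
```

The fairness question the researchers study arises in exactly this step: which groups the scoring function systematically places inside, or outside, the top-k.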
“Most current machine-learning approaches are based on optimizing traditional objectives,
such as accuracy in the training data, which are insufficient for addressing the minority
bias of training data,” Sun said. “In many domains, the data is highly skewed over
different classes. For example, a historical data bias or stereotype exists that
most software engineers are young males. An unfair ML system would recommend a software
engineer position only to young males.”
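Sun's point about skewed training data can be shown with a small, entirely hypothetical sketch: a model that always predicts the majority class scores high on overall accuracy while failing the minority class completely.

```python
# Invented numbers for illustration: 95% of examples belong to the
# majority class (label 1), 5% to the minority class (label 0).
labels = [1] * 95 + [0] * 5
predictions = [1] * 100  # model simply predicts the majority class

# Overall accuracy looks strong...
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# ...but the minority class is never identified at all.
minority_recall = sum(p == 0 and y == 0
                      for p, y in zip(predictions, labels)) / 5

print(accuracy)         # 0.95
print(minority_recall)  # 0.0
```

This is why, as the quote notes, optimizing accuracy on the training data alone is insufficient to address minority bias.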
Sun added that the project will also include integrating the research team’s techniques
into education analytics to address fairness and ethical concerns of predictive models,
in particular, the “perpetuating biases toward under-represented minority students,
first-generation college students, and female students in STEM courses.”
“Our goal is to ensure more fairness between different demographic groups in applications
such as recommendations, top-k hotspot predictions, and students’ performance predictions,” Sun said.
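One common way to quantify fairness between demographic groups, which this sketch uses purely for illustration (the project's own measures may differ), is to compare each group's selection rate in the top-k, a criterion often called demographic parity. All names and group labels below are invented:

```python
from collections import Counter

def selection_rates(selected, group_of, population):
    """Fraction of each demographic group that appears in `selected`."""
    pop = Counter(group_of[x] for x in population)
    sel = Counter(group_of[x] for x in selected)
    return {g: sel.get(g, 0) / pop[g] for g in pop}

group_of = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
rates = selection_rates(["a", "b"], group_of, ["a", "b", "c", "d"])
print(rates)  # {'G1': 1.0, 'G2': 0.0} — a large gap signals unfairness
```

A large gap between the groups' rates, as in this toy example, is the kind of disparity that fairness-aware ranking objectives aim to reduce.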
Like us on Facebook (@lsuengineering) or follow us on Twitter and Instagram (@lsuengineering).
Contact: Joshua Duplechain
Director of Communications
225-578-5706 (o)
[email protected]