Dr Fabio Motoki from the Norwich Business School has had his research on the political bias of large language models highlighted in the Stanford Institute for Human-Centered Artificial Intelligence's 2024 AI Index Report.
Stanford University's Human-Centered Artificial Intelligence (HAI) institute has published its highly influential AI Index Report annually since 2017. The report provides a curated overview of the latest global trends and developments in artificial intelligence, serving as an invaluable resource for policymakers, researchers, journalists, executives, and members of the public interested in this rapidly evolving field. The 2024 report has a deep focus on Responsible AI and was launched on April 19 at the Hoover Institution office in Washington, D.C.
The inclusion of research from the University of East Anglia (UEA) in this prestigious report is being celebrated as a significant achievement by the authors. Dr Motoki, who led the UEA research, remarks: "The report is highly selective, so my co-authors and I were profoundly flattered when we were approached by Anka Reuel, the editor of the report's Responsible AI chapter. We think it recognizes the relevance of our pioneering approach to studying ChatGPT bias from an applied social sciences perspective."
As the Highlights section of the report puts it, the paper “find(s) a significant bias in ChatGPT toward Democrats in the United States and the Labour Party in the U.K. This finding raises concerns about the tool’s potential to influence users’ political views, particularly in a year marked by major global elections.”
In a chapter dominated by contributions from prestigious US universities, as well as tech companies such as Google DeepMind and Anthropic, the AI Index Report places UEA in a distinctive group as one of just a handful of UK institutions featured, alongside Cambridge, Oxford, and Queen's University Belfast.
Image: Dr Motoki (right) talking about human values and AI at the FT Future of AI 2023 event (credit: Financial Times)