Normative Philosophy of Computing - January
Happy New Year!
The wrap-up of 2024 brought a number of big leaps forward in both open and closed AI models, with o3 and DeepSeek’s V3 likely the pick of the bunch. And 2025 is off to a fast start, with an ambitious new AI policy plan from the UK (with a prominent role for the AI Safety Institute), strict new rules from the US government on the diffusion of AI-related tech, and some early hints of what’s ahead for AI policy in the Trump administration.
The new year also brings an exciting new collaboration for sharing research in the philosophy of AI: be sure to sign up to the Philosophy of AI Network Talks series for updates (https://sites.google.com/view/paint-series/home). Also look out for some cool CFPs below.
And as always, feel free to share this newsletter with anyone who might benefit—especially PhD students and researchers just getting started in normative philosophy of computing. And do send us anything you’d like shared with the community (email mint@anu.edu.au).
January Highlights
• Opportunities: The Cooperative AI Summer School (July 2025) is now accepting applications, offering students and early-career professionals a unique chance to explore AI and cooperation in Marlow, near London. Also check out Northeastern’s AIDE Summer Program (deadline extended to January 30).
• New Papers: From fairness in LLM-based hiring systems to debates on welfare, consciousness, and AI delegation, this month has already seen the release of several key preprints and working papers. Highlights include Desire-Fulfilment and Consciousness by Andreas Mogensen and Who Does the Giant Number Pile Like Best? by Preethi Seshadri and Seraphina Goldfarb-Tarrant.
Events
Philosophy of Artificial Intelligence Network Talks (PAINT)
Dates: Biweekly starting February 3, 2025
Time: Mondays at 8:30 am PT / 11:30 am ET / 4:30 pm London / 5:30 pm Berlin
Location: Online
Link: https://sites.google.com/view/paint-series/home
PAINT is a new biweekly international speaker series connecting philosophers working on AI across moral and political philosophy, epistemology, philosophy of mind, and more, led by Sina Fazelpour, Karina Vold, and Kathleen Creel. The inaugural lineup includes Emily Sullivan, Jacqueline Harding, Catherine Stinson, Cameron Buckner, Raphaël Millière, and others.
TeXne Conference
Date: February 1-2, 2025
Location: Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
Link: https://philevents.org/event/show/126054
TeXne explores interdisciplinary insights into technology ethics, with a focus on both theoretical and applied questions about AI, robotics, and digital governance. Keynotes from Seth Lazar and Cathy O'Neil.
Workshop on Advancing Fairness in Machine Learning
Date: April 9-10, 2025
CFA: Abstracts due February 6 (see link below)
Location: Center for Cyber Social Dynamics, University of Kansas, Lawrence, United States
Link: https://philevents.org/event/show/130478
Hosted by the Center for Cyber Social Dynamics, this multidisciplinary workshop aims to foster dialogue on fairness in machine learning across technical, legal, social, and philosophical domains. Topics include algorithmic bias, fairness metrics, ethical foundations, real-world applications, and legal frameworks.
International Conference on Large-Scale AI Risks
Date: May 26-28, 2025
CFA: Abstracts due February 15 (see link below)
Location: KU Leuven, Belgium
Link: https://www.kuleuven.be/ethics-kuleuven/chair-ai/conference-ai-risks
Hosted by KU Leuven, this conference focuses on exploring and mitigating the risks posed by large-scale AI systems. It brings together experts in AI safety, governance, and ethics to discuss emerging challenges and policy frameworks.
1st Workshop on Sociotechnical AI Governance (STAIG@CHI 2025)
Date: To be held at CHI 2025 (exact date TBA)
Location: Yokohama, Japan
Link: https://chi-staig.github.io/
STAIG@CHI 2025 aims to build a community that tackles AI governance from a sociotechnical perspective, bringing together researchers and practitioners to drive actionable strategies.
ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025)
Date: June 23-26, 2025 (tentative dates)
Location: Athens, Greece
Link: https://facctconference.org/2025/
FAccT is a premier interdisciplinary conference dedicated to the study of responsible computing. The 2025 edition in Athens will bring together researchers across fields—philosophy, law, technical AI, social sciences—to advance the goals of fairness, accountability, and transparency in computing systems.
Opportunities
Cooperative AI Research Grants 2025
Date: Projects to begin within 12 months of acceptance
Deadline: January 18, 2025 (23:59 AoE)
Location: Global
Link: https://www.cooperativeai.com/grants/2025
The Cooperative AI Foundation invites proposals for research projects advancing cooperative AI, with funding available for up to two years. High-priority areas include cooperation-relevant capabilities and propensities, incentivizing cooperation among AI agents, and AI for enhancing human collaboration. A newly introduced early-career track offers up to £100,000 for researchers within 2–3 years of completing their PhD or at a similar stage. Applications follow a two-step process: a pre-proposal (due January 18, 2025) and, for selected candidates, a full proposal with opportunities for feedback and refinement. This program provides funding for personnel, materials, travel, and publication costs, supporting impactful research in this rapidly growing field.
Cooperative AI Summer School 2025
Date: July 9–13, 2025
Deadline: March 7, 2025
Location: Marlow, near London
Link: https://www.cooperativeai.com/summer-school/summer-school-2025
Applications are now open for the Cooperative AI Summer School, designed for students and early-career professionals in AI, computer science, social sciences, and related fields. This program offers a unique opportunity to engage with leading researchers and peers on topics at the intersection of AI and cooperation.
Advanced LLM Agents MOOC – Spring 2025
Date: Starts January 27, 2025
Location: Online
Link: https://llmagents-learning.org/sp25
Building on the success of the Fall 2024 session with over 15,000 learners and 2,500+ developers in the hackathon, this advanced course dives deeper into the development and deployment of large language model (LLM) agents. A perfect opportunity for researchers, developers, and enthusiasts to expand their skills in cutting-edge AI applications.
AIDE Summer Program: Ethics of Artificial Intelligence and Data Ethics
Date: June 2-July 31, 2025
Deadline: January 30, 2025
Location: Northeastern University, Boston, MA, USA
Link: https://cssh.northeastern.edu/ethics/aide-summer/
The AIDE Summer Program at Northeastern University’s Ethics Institute is an in-person summer school for graduate students with a background in applied ethics, ethical theory, or philosophy of science. The program aims to strengthen participants’ research skills in AI ethics, data ethics, and the philosophy of technology. Focusing on creating AI and machine learning systems that foster human flourishing, AIDE provides the ethical and technical training needed to build a robust, interdisciplinary AI ethics research community.
Jobs
Postdoctoral Research Opportunity on AI Safety, Carnegie Mellon University
Location: Pittsburgh, PA
Link: Apply here
Carnegie Mellon’s Heinz College invites applications for a one-year postdoctoral position in AI safety, with a focus on machine learning, NLP, and human-computer interaction. The position emphasizes contributions to diversity in higher education and includes mentorship opportunities. Application deadline: February 7, 2025.
Links
AI Model Breakthroughs and Insights
OpenAI’s new o3 model may mark a paradigm shift in AI, with Amjad Masad speculating that it has roots in AlphaZero-style techniques (though Nathan Lambert’s take is that explicit search isn’t involved). Meanwhile, “unfaithful chain-of-thought” research uncovers hidden reasoning in models, as explored by Arthur Conmy. A blog post dives into o3’s performance on advanced math tasks, offering critical insights into AI’s evolving capabilities. NVIDIA announces a new Nemotron model family for agentic AI. Sam Altman turns in earnest toward superintelligence as a goal, while Anthropic releases new recommendations for practical directions in contemporary safety work. And Sakana AI announces a new generation of “adaptive models”, which adapt model weights and architecture depending on the task they’re given—more here on Transformer^2.
Tools, Platforms, and Development
Discover Genesis, a generative robotics platform for embodied AI learning, now on GitHub. AI agents managing GPUs? Jasper introduces the groundbreaking AgentKit. On Hugging Face, explore DeepSeek-V3-Base, a mixture-of-experts model pushing boundaries in open-source AI. Finally, check out Anthropic’s guide to effective AI agents, packed with workflows and tips, here.
Broader Reflections and News on AI
A new AI Snake Oil essay by Arvind Narayanan investigates the shifting narrative around scaling and its implications. Kate Crawford’s Wired piece, “Manipulation Engines,” examines the risks of AI-driven personal assistants—read it here. Jim Fan’s perspective reveals how simulation environments prepare AI for real-world complexities, illustrating the promise and challenges of advanced training. Chris Potts gives a talk on LLM use for close linguistic analysis. Models at Johns Hopkins imagine in-depth scenarios based on single images.
In news from Meta, we hear that the company has started to ship AI-generated profiles. This comes as it also replaces fact-checking with Community Notes. For discussion of the political dimensions of this move, see Laura Edelson’s commentary.
In separate news, Anton Leicht writes about the Politics of Inference Scaling, and MIT economist Daron Acemoglu examines what it would take to remake America’s tech sector. Hugging Face policy researcher Avijit Ghosh talks AI agents. Pliny the Liberator explores darker risks attached to AI agents, should they ever gain access to funds.
The UK public is “worried” and “scared” about the future of AI, while the Labour government announces a massive rollout of public data. Hayden Belfield writes up the 50 recommendations in the UK’s AI Action Plan, and the US announces a tiered AI chip distribution plan. US model weights are also now subject to export controls once training compute exceeds a certain threshold (10^26 operations)—read commentary here. RAND’s Lennart Heim discusses this by asking: “Can export controls create a U.S.-led global artificial intelligence ecosystem?”
Papers
Who Does the Giant Number Pile Like Best: Analyzing Fairness in Hiring Contexts
Authors: Preethi Seshadri, Seraphina Goldfarb-Tarrant | arXiv Preprint
Explores fairness in LLM-based hiring systems through resume summarization and retrieval tasks using a synthetic dataset. The study reveals race-based differences in 10% of summaries and gender-based differences in 1%, with retrieval models exhibiting non-uniform selection patterns and high sensitivity to demographic and non-demographic changes. Highlights concerns about bias and brittleness in LLMs used for high-stakes hiring applications.
Desire-Fulfilment and Consciousness
Author: Andreas Mogensen (Global Priorities Institute, University of Oxford) | GPI Working Paper No. 24-2024
Argues that individuals without the capacity for consciousness can still accrue welfare goods under a nuanced understanding of desire-fulfilment theory. Mogensen critiques earlier, oversimplified approaches to the theory, offering a refined perspective that avoids counter-intuitive implications while aligning with contemporary developments.
Imperfect Recall and AI Delegation
Authors: Eric Olav Chen, Alexis Ghersengorin (Global Priorities Institute, University of Oxford), and Sami Petersen (Department of Economics, University of Oxford) | GPI Working Paper No. 30-2024
Explores how a principal can use imperfect recall to test and discipline potentially misaligned AI systems. By simulating tasks in testing environments and obscuring whether tasks are real or tests, the principal can screen and influence AI behavior effectively, even without the ability to commit to a testing mechanism or restrict AI actions. The paper demonstrates that increasing the number of tests enhances control, making imperfect recall critical for successful AI delegation.
A Theory of Appropriateness with Applications to Generative Artificial Intelligence
Authors: Joel Z. Leibo, Alexander Sasha Vezhnevets, Manfred Diaz, John P. Agapiou, William A. Cunningham, Peter Sunehag, Julia Haas, Raphael Koster, Edgar A. Duéñez-Guzmán, William S. Isaac, Georgios Piliouras, Stanley M. Bileschi, Iyad Rahwan, Simon Osindero | arXiv Preprint
Explores the concept of appropriateness in human and AI contexts, offering a theory to guide responsible deployment of generative AI based on societal and cognitive underpinnings of appropriateness judgments.
How to Tell if a Rule Was Broken: The Role of Codification, Norms, Morality, and Legitimacy
Authors: Jordan Wylie, Dries H. Bostyn, Ana P. Gantman | OSF Preprint
Investigates how various signals, including codification, moral wrongness, and legitimacy, influence judgments about whether a rule was violated, highlighting the interplay between legal codification and societal norms in rule concept formation.
Need a digestible overview of AI development from AlphaGo to Gemini? Take a look at Google DeepMind’s podcast episode here. For an intro to AI safety, see this CS120 course from Stanford. We’ll also have an eye on this new podcast moving forward!
Curated by Cameron Pattison and Seth Lazar with contributions from the MINT Lab team.
Thanks for reading Normative Philosophy of Computing Newsletter! Subscribe for free to stay up to date.