Acceleramus
Our last email went out Jan 16. Anyone want to try to summarise the AI news since then? It’s a daunting thought. From DeepSeek’s R1 on Jan 20 to Figure’s Helix model, released just before this writing, there has been a tonne of news on the frontier AI side. Meanwhile, the US government is done with AI safety (among other things); OpenAI has begun a $500b compute project called ‘Stargate’ (and as of this writing has 400m weekly active users on ChatGPT); the UK AI Safety Institute is now the AI Security Institute; the EU has binned its AI Liability Directive but is investing massively in public AI infrastructure (or at least has some impressive press releases); *everyone* has some kind of ‘deep’ product out, from deep research to deep search; and Grok 3 is plausibly the newest frontier model (where are the weights, Elon)… If you feel like you’re struggling to keep up, don’t worry: so is everyone else.
My big highlight from the last month? DeepSeek showing that only base models of a certain size and capability can be directly trained to use test-time compute efficiently, solving problems better by expending more tokens on chain of thought. But smaller models can be taught to do something similar (albeit not as well) by distilling capabilities from the larger models. This is both an invaluable insight into what the real achievement of scaling pre-trained models is—that it creates a model that can learn new skills using reinforcement learning—and a promising sign that we won’t be trapped into dependence on extremely computationally intensive models that centralise power and control, because we can distil capabilities from larger models into smaller ones (which might, for example, run on device).
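If you’d like to see what ‘distilling capabilities’ amounts to in practice, here is a minimal sketch in Python using Hugging Face transformers. Treat it as an illustration of the general idea only: the model names, prompt, and training settings are placeholder assumptions on my part, not DeepSeek’s actual pipeline.

    # A minimal sketch of capability distillation (not DeepSeek's actual recipe):
    # a large "teacher" reasoning model writes chain-of-thought traces, and a
    # small "student" model is fine-tuned to imitate them. Model names, prompts,
    # and hyperparameters are illustrative placeholders.
    import torch
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    TEACHER = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # stand-in teacher
    STUDENT = "Qwen/Qwen2.5-0.5B"                        # small student

    def generate_traces(prompts, max_new_tokens=512):
        """Sample chain-of-thought answers from the teacher model."""
        tok = AutoTokenizer.from_pretrained(TEACHER)
        model = AutoModelForCausalLM.from_pretrained(TEACHER, torch_dtype=torch.bfloat16)
        traces = []
        for p in prompts:
            ids = tok(p, return_tensors="pt").input_ids
            out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True)
            traces.append(p + tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True))
        return traces

    def distill(traces):
        """Supervised fine-tuning of the student on the teacher's traces."""
        tok = AutoTokenizer.from_pretrained(STUDENT)
        if tok.pad_token is None:
            tok.pad_token = tok.eos_token
        model = AutoModelForCausalLM.from_pretrained(STUDENT)

        def tokenize(batch):
            enc = tok(batch["text"], truncation=True, max_length=1024, padding="max_length")
            enc["labels"] = enc["input_ids"].copy()  # standard causal-LM objective
            return enc

        data = Dataset.from_dict({"text": traces}).map(tokenize, batched=True,
                                                       remove_columns=["text"])
        args = TrainingArguments(output_dir="student-distilled",
                                 per_device_train_batch_size=1,
                                 num_train_epochs=1, logging_steps=10)
        Trainer(model=model, args=args, train_dataset=data).train()
        return model

    prompts = ["Q: What is 17 * 24? Think step by step.\nA:"]
    student = distill(generate_traces(prompts))

The key point of the sketch: the student never does reinforcement learning itself; it simply imitates reasoning traces the larger model has already learned to produce.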
February Highlights
• Opportunities: There are lots! Check out the Oxford workshop on collective agency and AI, and a number of summer opportunities for PhD students. Also some promising new postdoctoral positions, and the Paris Conference on AI and Digital Ethics to look out for.
• New Papers: Happy to share my ‘Governing the Algorithmic City’, published this month, which pairs nicely with Elsa Kugelberg’s ‘Dating Apps and the Digital Sexual Sphere’. I missed Jacqueline Harding and Nathaniel Sharadin’s paper on capabilities last year, so I’m including that, as well as a number of new papers and preprints from Christian Tarsney, Rob Long, David Chalmers, and others.
Events
Philosophy of Artificial Intelligence Network Talks (PAINT)
Dates: Biweekly starting March 3rd, 2025
Time: Mondays at 8:30 am PT / 11:30 am ET / 4:30 pm London / 5:30 pm Berlin
Location: Online
Link: PAINT Series Website
PAINT is a new biweekly international speaker series connecting philosophers working on AI across moral and political philosophy, epistemology, philosophy of mind, and more, led by Sina Fazelpour, Karina Vold and Kathleen Creel. The inaugural lineup includes Emily Sullivan, Jacqueline Harding, Catherine Stinson, Cameron Buckner, Raphaël Millière, and others.
AI for Animals 2025
Date: March 1 & 2, 2025
Location: University of California, Berkeley
Link: https://www.aiforanimals.org
Sessions will cover topics like animal consideration in AI models, advocacy and technology, and interspecies communication. Other areas include animal law, veterinary medicine, precision livestock farming, digital minds, artificial agents, and more. Featured guests include Jeff Sebo, Rob Long, Jonathan Birch, Peter Singer, and many others.
Workshop on Advancing Fairness in Machine Learning
Date: April 9-10, 2025
Location: Center for Cyber Social Dynamics, University of Kansas, Lawrence, United States
Link: https://philevents.org/event/show/130478
Hosted by the Center for Cyber Social Dynamics, this multidisciplinary workshop aims to foster dialogue on fairness in machine learning across technical, legal, social, and philosophical domains. Topics include algorithmic bias, fairness metrics, ethical foundations, real-world applications, and legal frameworks.
Workshop on Bidirectional Human-AI Alignment
Date: April 27, 2025 (ICLR 2025 workshop)
Location: Hybrid (In-person & Virtual)
Link: https://bialign-workshop.github.io/#/
This interdisciplinary workshop redefines the challenge of human-AI alignment by emphasizing a bidirectional approach—not only aligning AI with human specifications but also empowering humans to critically engage with AI systems. Featuring research from Machine Learning (ML), Human-Computer Interaction (HCI), Natural Language Processing (NLP), and related fields, the workshop explores dynamic, evolving interactions between humans and AI.
International Conference on Large-Scale AI Risks
Date: May 26-28, 2025
Location: KU Leuven, Belgium
Link: https://www.kuleuven.be/ethics-kuleuven/chair-ai/conference-ai-risks
Hosted by KU Leuven, this conference focuses on exploring and mitigating the risks posed by large-scale AI systems. It brings together experts in AI safety, governance, and ethics to discuss emerging challenges and policy frameworks.
1st Workshop on Sociotechnical AI Governance (STAIG@CHI 2025)
Date: To be held at CHI 2025 (exact date TBA)
Location: Yokohama, Japan
Link: https://chi-staig.github.io/
STAIG@CHI 2025 aims to build a community that tackles AI governance from a sociotechnical perspective, bringing together researchers and practitioners to drive actionable strategies.
ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025)
Date: June 23-26, 2025 (tentative dates)
Location: Athens, Greece
Link: https://facctconference.org/2025/
FAccT is a premier interdisciplinary conference dedicated to the study of responsible computing. The 2025 edition in Athens will bring together researchers across fields—philosophy, law, technical AI, social sciences—to advance the goals of fairness, accountability, and transparency in computing systems.
Open Opportunities
Diverse Intelligences Summer Institute 2025
Location: St Andrews, Scotland
Link: https://disi.org/apply/
Deadline: Rolling from March 1, 2025
The Diverse Intelligences Summer Institute (DISI) invites applications for their summer 2025 program, running July 6-27. Two tracks are available: The Fellows Program seeks scholars from fields including biology, anthropology, AI, cognitive science, computer science, and philosophy for interdisciplinary research. The Storytellers Program welcomes artists working in visual arts, writing, theater, dance, music, and podcasting. Both tracks focus on exploring diverse intelligences through collaborative work. Applications reviewed on a rolling basis starting March 1.
Cooperative AI Summer School 2025
Date: July 9–13, 2025
Deadline: March 7, 2025
Location: Marlow, near London
Link: https://www.cooperativeai.com/summer-school/summer-school-2025
Applications are now open for the Cooperative AI Summer School, designed for students and early-career professionals in AI, computer science, social sciences, and related fields. This program offers a unique opportunity to engage with leading researchers and peers on topics at the intersection of AI and cooperation.
The Paris Conference on AI & Digital Ethics
Date: June 16-17, 2025
Location: International Conference Centre, Sorbonne University, Paris
Link: https://paris-conference.com/call-for-papers-2025/
Abstract Deadline: March 15, 2025
The third edition of this cross-disciplinary conference focuses on threats to political systems and emerging solutions to rebuild trust in AI-powered societies. The conference features four tracks: controlling cyber-influence, countering information manipulation, exploring blockchain for civic trust, and rebuilding social cohesion. Open to PhD holders and candidates from academia and industry across computational philosophy, ethics, political theory, computer science, international relations, and law. Selected papers will be published in the conference's academic journal. Full paper (5,000 words) due July 31, 2025.
CFP: Artificial Intelligence and Collective Agency
Dates: July 3–4, 2025
Deadline: March 27, 2025
Location: Institute for Ethics in AI, Oxford University (Online & In-Person)
Link: https://philevents.org/event/show/132182?ref=email
The Artificial Intelligence and Collective Agency workshop explores philosophical and interdisciplinary perspectives on AI and group agency. Topics include analogies between AI and corporate or state entities, responsibility gaps, and the role of AI in collective decision-making. Open to researchers in philosophy, business ethics, law, and computer science, as well as policy and industry professionals. Preference for early-career scholars.
Ethics Institute 2025 Summer Research Internship
Deadline: March 31, 2025
Location: Boston, MA (in-person for at least 50% of the internship)
Stipend: $6,000–$8,000 per month
Link: Full details and application
Prof. Sina Fazelpour is inviting applications for 12-week Research Internship positions during Summer 2025 (June–August or July–September). Interns will collaborate on developing concepts, methodologies, or frameworks to enhance AI evaluation and governance.
This position is open to current PhD students and recent PhD graduates with a demonstrated interest in AI ethics and governance. Applicants from diverse fields—including philosophy, cognitive science, computer science, statistics, human-computer interaction, network science, and science & technology studies—are encouraged to apply.
Intro to Transformative AI 5-Day Course
Location: Remote
Link: https://bluedot.org/intro-to-tai
Deadline: Rolling (Next cohorts: March 3-7, 10-14, 17-21, 24-28)
BlueDot Impact offers an intensive course on transformative AI fundamentals and implications. The program features expert-facilitated group discussions and curated materials over 5 days, requiring 15 hours total commitment. Participants join small discussion groups to explore AI safety concepts. No technical background needed. The course is free with optional donations and includes a completion certificate.
ACM FAccT 2025 Doctoral Colloquium
Location: Athens, Greece (In-person)
Link: https://facctconference.org/2025/callfordc
Deadline: February 12, 2025 (AoE)
The ACM Conference on Fairness, Accountability and Transparency (FAccT) invites applications for their 2025 Doctoral Colloquium on June 23. Open to PhD, JD, MFA, and other terminal degree students researching fairness, accountability, and transparency in socio-technical systems. Fields include computer science, philosophy, sociology, law, and psychology. The program features mentoring sessions and panels with senior researchers. Applications require a research summary (250 words), career goals statement (250 words), and CV. Priority travel funding available for accepted students. Co-chaired by Kendra Albert, Emily Black, and Roger A. Søraa.
There are also several free short courses on AI agents released this month: Hugging Face’s AI Agents Course offers a comprehensive free program through May 2025 covering agent fundamentals, frameworks like LangChain, and real-world applications, with certification options available upon completion. DeepLearning.AI’s Building Towards Computer Use provides a focused 1.5-hour introduction to computer-using AI applications, with hands-on examples taught by Anthropic’s Colt Steele. The Advanced LLM Agents MOOC builds on its successful Fall 2024 run with an in-depth Spring 2025 program for developers and researchers looking to master state-of-the-art LLM agent development.
Jobs
University of Guelph Assistant Professor, Ethics or Applied Ethics
Location: Guelph, Ontario, Canada N1G 2W1
Link: https://careers.uoguelph.ca/job/Guelph-Assistant-Professor
Deadline: Rolling review of applications begins March 6, 2025.
The area of specialization is Ethics or Applied Ethics, and the successful candidate will engage in research related to their specialization and teach courses ranging from large service courses to small graduate seminars. The teaching commitment for this position is 5 semester courses per year at the undergraduate and graduate levels.
Sloan Foundation Metascience and AI Postdoctoral Fellowship
Location: Various eligible institutions (US/Canada preferred)
Link: https://sloan.org/programs/digital-technology/aipostdoc-rfp
Deadline: April 10, 2025, 5:00pm ET
Two-year postdoctoral fellowship ($250,000 total) for social sciences and humanities researchers studying AI's implications for science and research. Fellows must have completed PhD by start date and not hold a permanent/tenure-track position. Research focuses on how AI is changing research practices, epistemic/ethical implications, and policy responses. Key areas include AI's impact on scientific methods, research pace, explainability, and human-AI collaboration in science. Includes fully-funded 2026 summer school. Application requires research vision statement, approach description, career development plan, CV, mentor support letter, and budget. UK-based applicants should apply through parallel UKRI program.
Post-doctoral Researcher Positions (2)
Location: Trinity College Dublin, Ireland
Link: https://aial.ie/pages/hiring/post-doc-researcher/
Deadline: Rolling basis
The AI Accountability Lab (AIAL) is seeking two full-time post-doctoral fellows for a 2-year term to work with Dr. Abeba Birhane on policy translation and AI evaluation. The policy translation role focuses on investigating regulatory loopholes and producing policy insights, while the AI evaluation position involves designing and executing audits of AI systems for bias and harm. Candidates should submit a letter of motivation, CV, and representative work.
Papers
Dating Apps and the Digital Sexual Sphere
Author: Elsa Kugelberg | American Political Science Review
This paper examines dating apps as powerful intermediaries in the “digital sexual sphere,” shaping intimacy initiation through architecture, moderation, and amplification. Applying a liberal egalitarian framework, Kugelberg argues that apps must respect users’ claims to noninterference, equal standing, and choice improvement. While dating apps offer opportunities for justice, their design often reinforces existing inequalities, warranting regulation and reform.
Governing the Algorithmic City
Author: Seth Lazar | Philosophy & Public Affairs
Examines how algorithmic systems that mediate our social relationships raise novel questions for political philosophy. Lazar introduces the concept of the "Algorithmic City"—the network of algorithmically mediated social relations that now shape much of our lives. He argues that while this algorithmic governance shouldn't be eliminated, it must be justified against standards of procedural legitimacy, proper authority, and substantive justification. The paper shows how political philosophy must update its theories of authority, procedural legitimacy, and justificatory neutrality to account for algorithmic governance's distinctive features.
What is it for a Machine Learning Model to Have a Capability?
Authors: Jacqueline Harding & Nathaniel Sharadin | British Journal for the Philosophy of Science (Preprint)
Harding and Sharadin examine what it means for an ML model to possess a capability, developing a Conditional Analysis of Model Abilities (CAMA): a model has a capability to X if it would reliably succeed at X if it tried. They operationalize this framework to distinguish genuine abilities from coincidental successes, offering a principled approach to model evaluation. CAMA clarifies ML assessment practices and improves fairness in inter-model comparisons.
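Put compactly (my gloss on the summary above, in standard counterfactual notation, not the authors’ exact formulation), the schema is:

    \[
      \text{CAMA:}\quad M \text{ has the capability to } X
      \;\iff\;
      (M \text{ tries to } X) \mathrel{\Box\!\!\rightarrow} (M \text{ reliably succeeds at } X)
    \]

where the box-arrow is the counterfactual conditional (‘if it were the case that…, it would be the case that…’).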
Construct Validity in Automated Counterterrorism Analysis
Author: Adrian K. Yee | Philosophy of Science
This paper examines the use of machine learning models in counterterrorism analysis, arguing that their application suffers from significant methodological flaws. Yee critiques the operationalization of “terrorist” in artificial intelligence systems, highlighting issues of construct legitimacy, criterion validity, and construct validity. He contends that machine learning models should not be used to identify general classes of terrorists or predict future attacks due to the high risks of false positives and methodological bias. Instead, AI should, at most, be limited to identifying specific individuals with sufficient supporting data.
Models of Rational Agency in Human-Centered AI: The Realist and Constructivist Alternatives
Authors: Jacob Sparks, Ava Thomas Wright | AI and Ethics (Preprint)
This paper examines different approaches to modeling human rational agency in Human-Centered AI systems, arguing that the dominant economic model of human rationality is insufficient compared to realist and constructivist alternatives. Using chatbot fine-tuning as a case study, the authors demonstrate how different philosophical models of human rationality lead to distinct design choices with important practical implications for AI development.
Deception and Manipulation in Generative AI
Author: Christian Tarsney | Philosophical Studies
A timely analysis of AI deception, arguing for stricter standards on AI-generated content compared to human communication. Tarsney develops new frameworks for identifying deception and manipulation based on their influence on human beliefs and choices under “semi-ideal” conditions. The paper proposes solutions such as “extreme transparency” requirements and “defensive systems” that provide contextual information about AI-generated content—particularly relevant for AI safety and alignment research.
Key Concepts and Current Beliefs about AI Moral Patienthood
Author: Robert Long | Preprint
Originally an internal document for Eleos AI Research, this paper offers a foundational framework for assessing AI moral status and welfare. Long examines how AI systems might exhibit consciousness, sentience, and agency—three key features potentially relevant to moral patienthood. The work highlights the need for precise evaluation methods and outlines promising research directions for the emerging field of AI welfare studies.
Propositional Interpretability in Artificial Intelligence
Author: David J. Chalmers | Preprint
This paper introduces propositional interpretability, a framework for explaining AI behavior by mapping its internal states to propositional attitudes—such as beliefs, desires, and probabilities—essential for AI safety, ethics, and cognitive science. Chalmers explores thought logging as a key challenge and evaluates existing interpretability methods (e.g., causal tracing, probing, sparse auto-encoders) for their potential to systematically track an AI system’s reasoning over time.
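For readers unfamiliar with the interpretability methods mentioned, here is a minimal, purely illustrative sketch of a probe in Python (with random stand-in data rather than real activations, and not the paper’s own proposal): train a simple classifier to decode a candidate propositional attitude from a model’s hidden activations.

    # Toy sketch of a linear probe: can a candidate attitude (here, a made-up
    # "treats the statement as true" label) be decoded from hidden activations?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Stand-in for hidden activations: one row per statement, one column per unit.
    activations = rng.normal(size=(1000, 256))
    # Stand-in labels, deliberately made a simple function of a few units.
    labels = (activations[:, :8].sum(axis=1) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(activations, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))
    # High decoding accuracy suggests the information is linearly present in the
    # activations; it does not by itself show the model uses it as a belief.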
Suicide, Social Media, and Artificial Intelligence
Authors: Susan Kennedy and Erick Jose Ramirez | Oxford Handbook of Philosophy of Suicide Preprint
A comprehensive examination of the ethical challenges surrounding algorithmic suicide prevention on social media platforms. The authors argue that suicide is a complex phenomenon with varied meanings and rationality across cultures, making it ill-suited for algorithmic intervention. They show how current AI approaches to suicide prevention necessarily embed controversial normative assumptions about suicide's relationship to mental illness and rationality. The paper raises crucial questions about the ethics of imposing culturally-specific values through algorithmic systems deployed globally.
Aspirational Affordances of AI
Authors: Sina Fazelpour, Meica Magnani | Preprint
Fazelpour and Magnani introduce the concept of aspirational affordances to examine how AI influences imagination and agency, particularly in shaping cultural and epistemic possibilities. They argue that AI’s role in defining aspirations is distinct from traditional media due to its persuasive, ecological, and concentrated nature. The paper also introduces aspirational harm, a novel category of AI-induced harm that limits groups’ conceptual resources for imagining alternative futures. Through case studies, the authors illustrate how AI-generated aspirational affordances can reinforce existing social constraints, necessitating careful scrutiny of AI’s role in shaping imagination and identity.
Links
OpenAI has unveiled Operator, ChatGPT Gov and Deep Research. Anthropic has launched Citations. Researchers have introduced “Humanity’s Last Exam” (still basically a knowledge Q&A), MathArena, IssueBench (for measuring bias in LLM writing assistance), and ENIGMAEVAL (multimodal reasoning), all released this month. A New York Times report exposes the vast scope of AI-powered surveillance in U.S. immigration enforcement. Discussions about AI autonomy take a new turn as Andy Ayrey’s AI foundation establishes legal oversight for Truth Terminal, a project seeking to give AI systems financial and intellectual independence.
Some recent AI Safety/Ethics papers to look out for: Fully Autonomous AI Agents Should Not be Developed; AI Language Model Rivals Expert Ethicist in Perceived Moral Expertise; Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models; Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development.
Need a quick survey of DeepSeek v3 and R1? Take a look at this Simons Institute lecture. For a broader introduction to ChatGPT and related models, see Andrej Karpathy’s deep dive here.
Intro, Highlights, and Links by Seth Lazar, with editorial support from Cameron Pattison; Events, Opportunities, and Paper Summaries by Cameron Pattison, with curation by Seth; additional link-hunting support from the MINT Lab team.
Thanks for reading Normative Philosophy of Computing Newsletter! Subscribe for free to stay up to date.