Aidan Kierans

Ph.D. Student in Computer Science and Engineering
University of Connecticut

📧 aidan.kierans [at] uconn.edu
LinkedIn

About

I am a Ph.D. student in Computer Science and Engineering at the University of Connecticut, focusing on AI alignment and technical governance. My work spans AI safety research, policy coordination, and technical red teaming. I am advised by Prof. Shiri Dori-Hacohen in the Reducing Information Ecosystem Threats (RIET) Lab, and I am the founder of Beneficial and Ethical AI at UConn (BEACON).

I currently serve as a founding member of the Leadership Council for the National Policy Advocacy Network, coordinating AI safety communication and advocacy across U.S. university student groups. I also contribute to AI safety through technical work as part of the OpenAI Red Teaming Network and as a member of the Future of Life Institute's AI Safety Community Researchers.

Publications

Catastrophic Liability: Managing Systemic Risks in Frontier AI Development

Kierans, A., Ritticher, K., Sonsayar, U., Ghosh, A.
Presented at the Technical AI Safety Conference (TAIS 2025); submitted to the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2025)
[Poster] [Preprint]

ReNorM: Aligning AI Requires Automating Reasoning Norms

Kierans, A.
Submitted to NeurIPS 2025 Position Paper Track
[Preprint]

Quantifying Misalignment Between Agents: Towards a Sociotechnical Understanding of Alignment

Kierans, A., Ghosh, A., Hazan, H., Dori-Hacohen, S.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2025)
[Preprint]

Benchmarked Ethics: A Roadmap to AI Alignment, Moral Knowledge, and Control

Kierans, A.
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 964-965)
[Paper]

Bootstrap percolation via automated conjecturing

Bushaw, N., Conka, B., Gupta, V., Kierans, A., Lafayette, H., Larson, C., McCall, K., Mulyar, A., Sullivan, C., Taylor, S., Wainright, E., Wilson, E., Wu, G., Loeb, S.
Ars Mathematica Contemporanea, 23(3), P3-06 (2023)
[Paper]

Quantifying Misalignment Between Agents

Kierans, A., Hazan, H., Dori-Hacohen, S.
NeurIPS ML Safety Workshop 2022
[Paper]

Education

Ph.D. in Computer Science and Engineering (in progress), University of Connecticut
Advisor: Prof. Shiri Dori-Hacohen, Reducing Information Ecosystem Threats (RIET) Lab

Selected Experience

Leadership Council – Founding Member, National Policy Advocacy Network (March 2025 – Present)
Coordinating AI safety communication and advocacy by U.S. university student groups with support from the Center for AI Policy. Co-organizing a national policy summit for students to design AI governance solutions.

Independent Contractor, OpenAI Red Teaming Network (Jan 2024 – Present)
Participating in red teaming efforts to assess risks and safety profiles of OpenAI models. Red-teamed OpenAI's Computer-Using Agent (CUA) model, "Operator", at early development stages, directly informing risk assessments and System Card outputs.

Graduate Research Assistant, UConn Reducing Information Ecosystem Threats (RIET) Lab (June 2022 – Present)
Leading research on AI alignment that has won multiple awards, including the NeurIPS ML Safety Workshop AI Risk Analysis Award and the ICLP 2023 Best Poster Award. Currently prototyping a knowledge graph for alignment research.

Founder & President, Beneficial and Ethical AI at UConn (BEACON) (Feb 2024 – May 2025)
Created and ran fellowship programs introducing students to AI safety research and governance. Scaled to a 6-person organizing team with 4 concurrent fellowship cohorts. Supported by the Open Philanthropy University Group Fellowship.

Pathways to AI Policy Selected Participant, Wilson Center's Science and Technology Innovation Program (Dec 2024 – Apr 2025)
Attended Executive AI Labs seminars to understand government perspectives on AI policy and trained to engage with policymakers on AI issues.

Teaching Assistant & Contractor, ML Alignment & Theory Scholars (MATS) (June 2024 – Aug 2024)
Facilitated weekly "AI Safety Strategy" sessions for MATS 6.0, guiding emerging scholars through AI threat models and research prioritization.

Google Policy Fellow, Center for Democracy & Technology (CDT) (June 2023 – Aug 2023)
Supported CDT at a Senate committee hearing on AI and human rights. Created CDT's "AI Policy Coordination Tracker" database and engaged in high-level discussions with policy teams at Meta and Google.

Selected Conferences & Presentations

• Congressional Exhibition on Advanced AI: Presented an AI scheming demonstration at a Capitol Hill event organized by the Center for AI Policy

• Technical AI Safety Conference 2025 (TAIS): Presented a poster on "Catastrophic Liability: Managing Systemic Risks in Frontier AI Development"

• AAAI Conference on Artificial Intelligence (AAAI-25): Paper accepted in Special Track on AI Alignment; served as Program Committee member

• Technical Innovations for AI Policy Conference: Presented an AI scheming demonstration at the inaugural Washington, D.C. conference hosted by FAR.AI

• NIST AISIC Workshop: Provided feedback on NIST's early-stage standards development for AI evaluation and system documentation

• Center for Human-Compatible AI Workshop (CHAI 2024): Presented posters on "Quantifying Misalignment Between Agents" and "Reinforcement Learning from Oracle Feedback"

• The Safe and Trustworthy AI Workshop (ICLP 2023): Won Best Poster Award; served as Junior Program Committee member

• NeurIPS ML Safety Workshop 2022: Won AI Risk Analysis Award for "Quantifying Misalignment Between Agents"

Notable Projects

Knowledge Graph for AI Alignment
Developing a prototype knowledge graph to support thesis research on automating moral reasoning for AI systems.

Safety Assurance Index (SAI)
Developed a white paper on open-source standards for AI safety documentation and verification, securing 3rd place at the Apart Research AI Safety Entrepreneurship Hackathon (2025).

Congressional Exhibition on Advanced AI
Led a small team to create and present an AI risk demonstration on Capitol Hill at a Center for AI Policy event.

AI Safety Learning Community
Taught a month-long AI safety seminar series for UConn faculty and staff, with support from UConn's Center for Excellence in Teaching and Learning.

Selected Awards and Recognition

• AI Safety & Assurance Startup Hackathon (Apart Research) 3rd Place Prize (2025)

• Open Philanthropy University Group Fellowship for BEACON (2024-2025)

• Selected Participant in the Wilson Center's Pathways to AI Policy program (2024-2025)

• UConn Engineering Entrepreneurship Hub Graduate Entrepreneurship Fellow (2024-2025)

• The Safe and Trustworthy AI Workshop (ICLP 2023) Best Poster Award

• UConn Computer Science and Engineering Predoctoral Fellowship (2023)

• NeurIPS ML Safety Workshop AI Risk Analysis Award (2022)

• Vitalik Buterin PhD Fellowship in AI Existential Safety (2022 finalist)

• UConn School of Engineering Next Gen Scholar GE Graduate Fellowship for Innovation (2021-2022)

• Member of the Future of Life Institute's AI Safety Community Researchers

Teaching and Communication

Instructor Support Volunteering, UConn Center for Excellence in Teaching and Learning (Jan 2025 – May 2025)
Taught an AI safety "learning community" seminar series for faculty and staff. Delivered seminars on "AI Safety Literacy: From Awareness to Action" and provided drop-in feedback on AI-related questions in education.

Fellowship Facilitator, BEACON (Jan 2024 – Dec 2024)
Facilitated Technical AI Safety Fellowship and AI Policy Fellowship, guiding multiple cohorts through structured reading groups. Mentored undergraduate research projects, with one student joining the RIET Lab.

Teaching Assistant, ML Alignment & Theory Scholars (MATS) (Jun 2024 – Aug 2024)
Led weekly "AI Safety Strategy" sessions for MATS 6.0, guiding scholars through discussions on AI threat models and evaluating research proposals.

Media and Outreach

• The Conversation: Published article "Getting AIs working toward human goals — study shows how to measure misalignment"

• UConn Daily Campus: Featured in "Artificial Intelligence poses novel social threats, researchers prepare for the worst"

• Future of Life Institute: Presented talk on "Quantifying Misalignment Between Agents: Towards a Sociotechnical Understanding of Alignment"

• UConn CETL: Led "mAI dAI" seminars on AI alignment and malicious misuse for university instructional staff

• BEACON Panel: Convened and facilitated "Beneficial, Ethical AI at UConn: a student-led conversation" for UConn CETL

Contact

Feel free to reach out to me at aidan.kierans [at] uconn.edu or connect with me on LinkedIn.