Ph.D. Student in Computer Science and Engineering
University of Connecticut
📧 aidan.kierans [at] uconn.edu
LinkedIn
I am a Ph.D. student in Computer Science and Engineering at the University of Connecticut, focusing on AI alignment and technical governance. My research has been recognized with awards from workshops at NeurIPS and ICLP, and I have published at AAAI on quantifying misalignment.
I am advised by Prof. Shiri Dori-Hacohen in the Reducing Information Ecosystem Threats (RIET) Lab. I founded Beneficial and Ethical AI at UConn (BEACON), which has grown to serve over 30 students across six concurrent fellowship cohorts.
My work spans technical AI safety research, policy development, and risk assessment. I contribute to frontier AI safety through the OpenAI Red Teaming Network, where I have identified vulnerabilities in early-stage computer-using agent systems. I also serve on the Leadership Council of the Collegiate Coalition for AI Policy, coordinating AI safety advocacy across university student groups nationwide.
Catastrophic Liability: Managing Systemic Risks in Frontier AI Development
Kierans, A., Ritticher, K., Sonsayar, U., Ghosh, A.
Presented at the Technical AI Safety Conference (TAIS) 2025. Submitted to the AAAI/ACM Conference on AI, Ethics, and Society (AIES) 2025. Accepted for poster presentation at ACM EAAMO 2025.
[Poster] [Preprint]
ReNorM: Aligning AI Requires Automating Reasoning Norms
Kierans, A.
Submitted to the International Association for Safe and Ethical AI (IASEAI) Conference 2026
[Preprint]
Quantifying Misalignment Between Agents: Towards a Sociotechnical Understanding of Alignment
Kierans, A., Ghosh, A., Hazan, H., & Dori-Hacohen, S.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2025), Special Track on AI Alignment
[Paper] [Preprint]
Benchmarked Ethics: A Roadmap to AI Alignment, Moral Knowledge, and Control
Kierans, A.
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 964-965)
[Paper]
Quantifying Misalignment Between Agents
Kierans, A., Hazan, H., Dori-Hacohen, S.
NeurIPS ML Safety Workshop 2022 (Won AI Risk Analysis Award)
[Paper]
Bootstrap percolation via automated conjecturing
Bushaw, N., Conka, B., Gupta, V., Kierans, A., Lafayette, H., Larson, C., McCall, K., Mulyar, A., Sullivan, C., Taylor, S., Wainright, E., Wilson, E., Wu, G., Loeb, S.
Ars Mathematica Contemporanea, 23(3), #P3.06 (2023)
[Paper]
Co-Founder | Leadership Council, Collegiate Coalition for AI Policy (CCAP) (March 2025 – Present)
Co-founded a non-profit organization that establishes pathways for students into AI policy advocacy.
Independent Contractor, OpenAI Red Teaming Network (Jan 2024 – Present)
Participating in red-teaming efforts to assess the risks and safety profiles of OpenAI models. Red-teamed OpenAI's Computer-Using Agent (CUA) model, "Operator," at early development stages, directly informing risk assessments and the published System Card.
Graduate Research Assistant, UConn Reducing Information Ecosystem Threats (RIET) Lab (June 2022 – Present)
Published first-author work on quantifying misalignment, which won the NeurIPS ML Safety Workshop AI Risk Analysis Award (2022) and the ICLP Best Poster Award (2023) before appearing as a full paper in the AAAI-25 Special Track on AI Alignment. Served on program committees for 6+ workshops and conferences, reviewing 10+ papers. Currently developing knowledge graph infrastructure for philosophy publications to support research on automating moral reasoning.
Founder & President, Beneficial and Ethical AI at UConn (BEACON) (Feb 2024 – May 2025)
Built fellowship program from zero to 30+ students across six concurrent cohorts with 80%+ overall retention and 100% retention after week 2. Scaled organizing team to 6 graduate and undergraduate students. Two undergraduate fellows launched independent research projects; one joined RIET Lab and co-presented at CHAI-24. Secured Open Philanthropy University Group Fellowship funding.
Graduate Teaching Assistant, ENGR 1195: AI Literacy, University of Connecticut (Aug 2025 – Dec 2025)
Analyzed engagement patterns across 500 students from engineering and other disciplines, identifying key factors influencing student interactions with AI. Generated statistical reports on student responses for the Provost's assessment and course improvement recommendations.
Teaching Assistant & Contractor, ML Alignment & Theory Scholars (MATS) (Summer 2024 & 2025)
Facilitated weekly "AI Safety Strategy" curriculum for MATS cohort 6.0, guiding emerging researchers through threat model analysis and research prioritization. Evaluated scholars' research proposals for technical feasibility and alignment with AI safety priorities.
Google Policy Fellow, Center for Democracy & Technology (CDT) (June 2023 – Aug 2023)
Ensured the technical accuracy of CDT CEO testimony for a Senate committee hearing on AI and human rights. Authored briefing materials on AI open-sourcing risks and participated in policy discussions with Meta and Google policy teams. Developed an automated data extraction pipeline and created a comprehensive "AI Policy Coordination Tracker" database cataloging 150+ CDT policy publications, reducing future data collection time by 90%.
Graduate Entrepreneurship Fellow, UConn Engineering Entrepreneurship Hub (Aug 2024 – Jul 2025)
Conducted 15+ interviews with senior technologists, policy counsel, and government officials—including the Assistant Director of AI R&D at the White House OSTP—to identify gaps in AI safety auditing infrastructure. Synthesized findings into recommendations for transparency standards, reporting requirements, and whistleblower protections in AI development.
Lead Data Scientist, OpenPrinciples (Jan 2022 – Dec 2022)
Redesigned the semantic search pipeline, doubling recommendation accuracy over the baseline. Fine-tuned GPT-3 on a domain-specific dataset to generate contextually relevant life-principle recommendations. Developed a cosine-similarity-based extraction system to automatically identify high-value quotes from unstructured text.
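To illustrate the quote-extraction approach above, here is a minimal sketch of embedding-based cosine-similarity filtering. The embedding model, threshold, and seed principles are placeholder assumptions for illustration, not OpenPrinciples' actual stack:

```python
# Minimal sketch of cosine-similarity quote extraction (illustrative only;
# the model, threshold, and seed principles are assumptions).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Seed statements describing the kind of "life principle" worth surfacing.
seed_principles = [
    "Focus on what you can control.",
    "Seek feedback early and often.",
]

def extract_quotes(sentences, threshold=0.5):
    """Return (sentence, score) pairs whose embedding is close to any seed."""
    seed_emb = model.encode(seed_principles, convert_to_tensor=True)
    sent_emb = model.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(sent_emb, seed_emb)  # shape: (n_sentences, n_seeds)
    return [
        (s, float(scores[i].max()))
        for i, s in enumerate(sentences)
        if float(scores[i].max()) >= threshold
    ]

text = [
    "The weather was mild that spring.",
    "Concentrate your energy on the things within your power.",
]
for quote, score in extract_quotes(text):
    print(f"{score:.2f}  {quote}")
```

The key design choice this sketch reflects is ranking candidate sentences against a small set of exemplar "principles" rather than keyword matching, so paraphrases of a principle still score highly.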
• ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO 2025): Poster presentation on "Catastrophic Liability: Managing Systemic Risks in Frontier AI Development"
• AI Law and Policy Workshop (Vista Institute): Selected participant in Washington, DC workshop on AI governance, cybersecurity, and biosecurity law
• NIST AISIC Workshop on AI Standards Zero Drafts Pilot Project: Provided technical feedback on early-stage standards for AI evaluation and system documentation
• Congressional Exhibition on Advanced AI: Led team that designed and presented AI scheming demonstration at Capitol Hill event organized by the Center for AI Policy
• Technical Innovations for AI Policy Conference: Presented AI scheming demonstration at inaugural Washington, D.C. conference hosted by FAR.AI
• Technical AI Safety Conference 2025 (TAIS): Presented poster on "Catastrophic Liability: Managing Systemic Risks in Frontier AI Development"
• AAAI Conference on Artificial Intelligence (AAAI-25): Published full paper in Special Track on AI Alignment; reviewed 4 papers as Program Committee member
• Center for Human-Compatible AI Workshop (CHAI 2024): Presented first-author poster on "Quantifying Misalignment Between Agents"; co-presented "Reinforcement Learning from Oracle Feedback"
• The Safe and Trustworthy AI Workshop (ICLP 2023): Won Best Poster Award for "Quantifying Misalignment Between Agents"; served as Junior Program Committee member
• NeurIPS ML Safety Workshop 2022: Won AI Risk Analysis Award for "Quantifying Misalignment Between Agents"
Machine Ethics and Reasoning (MERe) Workshop (July 2025)
Led a 4-person organizing team for an interdisciplinary virtual workshop attracting 75+ applicants from philosophy and computer science. Delivered plenary talk on "Computational Approaches to Moral Reasoning." Managed parallel OCR processing of 100K+ PDFs and the technical requirements for a collaborative annotation interface.
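For context on the OCR step above, here is a minimal sketch of the parallel-processing pattern. The pdf2image/pytesseract toolchain, paths, and worker count are assumptions for illustration, not the workshop's actual pipeline, and the sketch requires the poppler and tesseract system packages:

```python
# Minimal sketch of parallel OCR over a PDF corpus (illustrative;
# paths, worker count, and toolchain are assumptions).
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

import pytesseract
from pdf2image import convert_from_path

def ocr_pdf(pdf_path: Path) -> str:
    """Rasterize each page of one PDF, OCR it, and concatenate the text."""
    pages = convert_from_path(str(pdf_path), dpi=300)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

def ocr_corpus(pdf_dir: Path, out_dir: Path, workers: int = 8) -> None:
    """Fan PDFs out across worker processes; write one .txt per input file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    pdfs = sorted(pdf_dir.glob("*.pdf"))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for pdf, text in zip(pdfs, pool.map(ocr_pdf, pdfs)):
            (out_dir / f"{pdf.stem}.txt").write_text(text)

if __name__ == "__main__":
    ocr_corpus(Path("pdfs"), Path("ocr_output"))
```

Because OCR is CPU-bound, process-level parallelism (rather than threads) is the natural way to keep a large corpus like this moving.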
Moral Knowledge Graph for AI Alignment (Ongoing)
Creating the first large-scale dataset of philosophical arguments for machine understanding of moral reasoning. Building knowledge graph infrastructure to support thesis research on automating reasoning norms.
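To make the knowledge-graph idea concrete, here is a minimal sketch of how philosophical arguments could be stored as a typed graph. The node and edge schema is an assumption for illustration, not the project's actual design:

```python
# Minimal sketch of an argument knowledge graph (illustrative; the
# node/edge schema is an assumption about the project, not its design).
import networkx as nx

kg = nx.MultiDiGraph()

# Nodes: positions, claims, and thought experiments, typed via attributes.
kg.add_node("utilitarianism", kind="position")
kg.add_node("the experience machine", kind="thought_experiment",
            source="Nozick (1974)")
kg.add_node("hedonism is false", kind="claim")

# Edges: typed argumentative relations between nodes.
kg.add_edge("the experience machine", "hedonism is false",
            relation="supports")
kg.add_edge("hedonism is false", "utilitarianism",
            relation="challenges")

# Query: which claims challenge a given position?
for u, v, data in kg.in_edges("utilitarianism", data=True):
    if data["relation"] == "challenges":
        print(f"{u!r} {data['relation']} {v!r}")
```

Typed edges like "supports" and "challenges" are what would let downstream models traverse chains of philosophical argument rather than treating the literature as flat text.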
Safety Assurance Index (SAI) (2025)
Developed white paper proposing open-source standards for AI safety documentation and verification. Won 3rd place at Apart Research AI Safety & Assurance Startup Hackathon.
AI Safety Learning Community (Jan 2025 – May 2025)
Designed and delivered a month-long AI safety seminar series, "AI Safety Literacy: From Awareness to Action," for UConn faculty and staff, with support from UConn's Center for Excellence in Teaching and Learning. The series focused on understanding and addressing AI safety challenges in educational contexts.
Congressional Exhibition on Advanced AI (2024)
Proposed, designed, and led small team to create AI risk demonstration presented on Capitol Hill at Center for AI Policy event. Demonstration illustrated potential for AI scheming behavior to congressional staff and policymakers.
• AI Safety & Assurance Startup Hackathon (Apart Research) 3rd Place Prize (2025)
• Open Philanthropy University Group Fellowship for BEACON (2024-2025)
• Selected Participant in the Wilson Center's Pathways to AI Policy program (2024-2025)
• UConn Engineering Entrepreneurship Hub Graduate Entrepreneurship Fellow (2024-2025)
• The Safe and Trustworthy AI Workshop (ICLP 2023) Best Poster Award
• UConn Computer Science and Engineering Predoctoral Fellowship (2023)
• NeurIPS ML Safety Workshop AI Risk Analysis Award (2022)
• Vitalik Buterin PhD Fellowship in AI Existential Safety (2022 finalist)
• UConn School of Engineering Next Gen Scholar GE Graduate Fellowship for Innovation (2021-2022)
• Member of the Future of Life Institute's AI Existential Safety Community
Instructor Support Volunteer, UConn Center for Excellence in Teaching and Learning (Jan 2025 – May 2025)
Designed and taught month-long "AI Safety Literacy: From Awareness to Action" seminar series for UConn faculty and staff. Convened panel discussion "Beneficial, Ethical AI at UConn: a student-led conversation," earning letter of recognition from CETL. Provided consultation on AI integration in educational contexts through drop-in office hours.
Fellowship Facilitator, Beneficial and Ethical AI at UConn (BEACON) (Jan 2024 – Dec 2024)
Facilitated Technical AI Safety Fellowship and AI Policy Fellowship for multiple cohorts, guiding students through structured curricula introducing AI safety research and governance. Mentored undergraduate research project on bias reduction; advised undergraduate fellow who subsequently joined RIET Lab and collaborated on CHAI-24 poster presentation.
Teaching Assistant, ML Alignment & Theory Scholars (MATS) (Jun 2024 – Aug 2024)
Led weekly "AI Safety Strategy" sessions for MATS 6.0, guiding emerging scholars through curated readings on AI threat models and research prioritization. Evaluated research proposals for threat model analysis and technical feasibility.
• The Conversation: Published article "Getting AIs working toward human goals – study shows how to measure misalignment" communicating research findings to a general audience
• UConn Daily Campus: Featured in "Artificial Intelligence poses novel social threats, researchers prepare for the worst" discussing AI safety research
• Future of Life Institute: Presented invited talk "Quantifying Misalignment Between Agents: Towards a Sociotechnical Understanding of Alignment" for AI Existential Safety Community
• UConn Center for Excellence in Teaching and Learning: Led "mAI dAI" seminar series on AI alignment and malicious misuse for university instructional staff
Feel free to reach out at aidan.kierans [at] uconn.edu or connect with me on LinkedIn.