David Akinboro

I research reasoning, interpretability, and safety in AI systems.

I recently completed my MS at Cornell, where I was advised by Claire Cardie and developed a search-augmented reinforcement learning framework for legal AI; I defended my thesis on September 5, 2025. Previously, I worked on AI safety evaluation and red-teaming at Invisible Technologies.

I'm interested in problems at the intersection of causal reasoning, model interpretability, and ontology construction for LLMs, and in building systems that make AI safety tools accessible beyond well-resourced institutions.


Selected Projects

LegalReasoner: A Framework for Legal Document Analysis and Reasoning

Developing AI systems that model explicit relationships between legal concepts, cases, and statutory provisions.

GraphRAG for Education: Addressing Knowledge Representation Gaps

Inspired by Harvard's work with CS50.ai, this AI teaching assistant maps conceptual relationships within educational content.
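
To give a feel for what mapping conceptual relationships buys at retrieval time, here is a minimal GraphRAG-style sketch with toy data and illustrative names (not the project's actual code): concepts mentioned in a question are matched to graph nodes, and the surrounding subgraph, rather than isolated text chunks, is serialized into the prompt context.

```python
# Minimal GraphRAG-style retrieval sketch (toy data, illustrative names).
import networkx as nx

# Toy concept graph for an intro programming course.
G = nx.DiGraph()
G.add_edge("recursion", "base case", relation="requires")
G.add_edge("recursion", "call stack", relation="uses")
G.add_edge("merge sort", "recursion", relation="example of")


def graph_context(question: str, graph: nx.DiGraph, hops: int = 1) -> str:
    """Build a textual context from concepts mentioned in the question
    and their graph neighborhood."""
    mentioned = [c for c in graph.nodes if c in question.lower()]
    lines = set()
    for concept in mentioned:
        # Pull the concept's neighborhood, ignoring edge direction.
        nearby = nx.ego_graph(graph, concept, radius=hops, undirected=True)
        for u, v, data in graph.subgraph(nearby.nodes).edges(data=True):
            lines.add(f"{u} --{data['relation']}--> {v}")
    return "\n".join(sorted(lines))


# The returned relations would be prepended to the LLM prompt as context:
print(graph_context("Why does recursion need a base case?", G))
```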

Automated Unit Test Generation: Making Reliable Code Less Painful

Built a systematic pipeline that generates Python unit tests from docstrings, then uses mutation testing to verify the tests actually catch bugs.
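
As a flavor of how such a pipeline fits together, here is a minimal, self-contained sketch with hypothetical helper names (not the project's code): doctest examples in a docstring become unit tests, and a toy AST mutation checks that those tests actually fail on a buggy variant.

```python
# Sketch: docstring examples -> unit tests -> mutation check (illustrative only).
import ast
import doctest
import inspect
import textwrap
import unittest


def clamp(x, lo, hi):
    """Clamp x into the closed interval [lo, hi].

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    """
    return max(lo, min(x, hi))


def tests_from_docstring(func, impl=None):
    """Build a unittest.TestCase from func's doctest examples, run against impl."""
    impl = impl if impl is not None else func
    examples = doctest.DocTestFinder().find(func)[0].examples
    cases = {}
    for i, ex in enumerate(examples):
        def check(self, ex=ex):
            expected = eval(ex.want.strip())
            actual = eval(ex.source.strip(), {func.__name__: impl})
            self.assertEqual(expected, actual)
        cases[f"test_example_{i}"] = check
    return type(f"Test_{func.__name__}", (unittest.TestCase,), cases)


class SwapMinForMax(ast.NodeTransformer):
    """A toy mutation operator: replace calls to min() with max()."""
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "min":
            node.func.id = "max"
        return node


def mutate(func):
    """Return a mutated copy of func with SwapMinForMax applied to its source."""
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    tree = ast.fix_missing_locations(SwapMinForMax().visit(tree))
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return namespace[func.__name__]


def mutant_killed(func):
    """True if the generated tests fail on the mutant, i.e. they catch the bug."""
    suite = unittest.TestLoader().loadTestsFromTestCase(
        tests_from_docstring(func, impl=mutate(func)))
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return not result.wasSuccessful()


if __name__ == "__main__":
    print("mutant killed:", mutant_killed(clamp))  # expect: True
```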

Research

Search-Augmented Reinforcement Learning for Legal Reasoning

MS Thesis, Cornell 2025 | Advisor: Claire Cardie | Defended September 5, 2025

  • First framework integrating legal database access during RL training episodes (not just inference)
  • Multi-task evaluation with jurisdictional compliance across all US legal systems
  • 10.5pp improvement on LegalBench through tool-assisted training with GRPO (see the sketch after this list)
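
The GRPO ingredient referenced above is easy to sketch in isolation. Below is a generic illustration of the group-relative advantage, with a placeholder 0/1 reward rather than the thesis's actual reward setup or training loop.

```python
# Generic sketch of GRPO's group-relative advantage (not the thesis code):
# sample several responses per prompt, score each with a reward, and
# normalize each reward against its group's mean and standard deviation.
from statistics import mean, pstdev


def group_relative_advantages(rewards, eps=1e-6):
    """Advantage of each sampled response relative to its own group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]


# Example: four sampled answers to one legal query, scored 0/1 for correctness.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
# -> roughly [1.0, -1.0, -1.0, 1.0]
```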

Legal Reasoning Interpretability Framework

Cornell 2024-2025

  • Developed "glass box" framework combining fact extraction, causal mapping, and attention analysis
  • Enables transparent reasoning validation in high-stakes legal domains
  • Built evaluation protocols for professional acceptance of AI legal reasoning

AI Safety Evaluation & Red-Teaming

Invisible Technologies, 2022-2023

  • Developed red-teaming protocols and evaluation frameworks for LLM safety
  • Conducted adversarial testing: prompt injection, jailbreak attempts, bias detection
  • Trained safety evaluators on systematic vulnerability assessment

Research Interests

Problems I'm currently exploring:

  • Causal reasoning in language models: How can we enable models to construct and reason over explicit causal graphs rather than pattern-matching?
  • Ontology and knowledge representation: Building graph-based ontologies for LLMs that enable structured reasoning over domain knowledge
  • Interpretability for safety: Developing interpretability methods that enable actual safety validation, not just post-hoc explanation
  • Red-teaming and adversarial robustness: Systematic approaches to finding failure modes in reasoning systems
  • Democratizing AI safety infrastructure: Making evaluation frameworks, red-teaming protocols, and safety tools accessible to resource-constrained domains

📄 Read more about my approach to causal reasoning and neuro-symbolic AI in this article.

Teaching Experience

CS 2800: Discrete Structures

Graduate Teaching Assistant, Fall 2023 & Fall 2024

CS 1110: Introduction to Python

Graduate Teaching Assistant, Spring 2024

CS 1700: Elements of Artificial Intelligence

Graduate Teaching Assistant, Spring 2025

Current Focus

I'm forming a research group focused on democratizing access to AI safety tools: evaluation frameworks, red-teaming protocols, and data infrastructure for resource-constrained domains. Current work spans legal reasoning, health data systems, and open-source safety tooling.

I'm actively looking for research positions where I can contribute to interpretability, causal reasoning, and safety infrastructure development.

Let's Connect

If you’d like to discuss research ideas or explore AI safety and reasoning, I’d love to hear from you.