David Akinboro

I build AI systems by first listening to people

This conviction was shaped by my time as student body president, acting as a voice for thousands, and later as a sales engineer, translating client needs into technical solutions. Today, I apply that same principle to my research as an MS in Computer Science student at Cornell University, where I work with Claire Cardie on NLP, enabling language models to reason through complex problems rather than pattern-match.

This blend of research and engineering lets me bridge prototype and production, translating ambiguous requirements into robust, scalable AI solutions. I am committed to building and deploying responsible AI systems that help us shape our world.

Contact Me

Fun Projects

Experiments at the crossroads of ML, NLP, and software engineering. Some sparked by coursework, others by genuine curiosity.

LegalReasoner: A Framework for Legal Document Analysis and Reasoning

Developing AI systems that model explicit relationships between legal concepts, cases, and statutory provisions.

GraphRAG for Education: Addressing Knowledge Representation Gaps

Inspired by Harvard's work with CS50.ai, this AI teaching assistant maps conceptual relationships within educational content.
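
A minimal sketch of the core idea, using networkx: concepts become nodes, prerequisite relationships become edges, and "retrieval" walks the graph rather than matching keywords. The concepts and edges below are illustrative, not the project's actual schema.

```python
import networkx as nx

# Toy concept graph for an intro CS course: edges point from prerequisite
# to concept. Illustrative only, not the project's real schema.
G = nx.DiGraph()
G.add_edge("variables", "loops")
G.add_edge("loops", "list comprehensions")
G.add_edge("functions", "recursion")
G.add_edge("loops", "recursion")

def retrieve_context(concept: str) -> list[str]:
    """Collect everything upstream of a concept, so the tutor can explain
    a question about it in terms of what the student already knows."""
    return sorted(nx.ancestors(G, concept))

print(retrieve_context("recursion"))  # ['functions', 'loops', 'variables']
```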

Automated Unit Test Generation: Making Reliable Code Less Painful

Built a systematic pipeline that generates Python unit tests from docstrings, then uses mutation testing to verify the tests actually catch bugs.
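
The mutation-testing step is the interesting part: deliberately break the code and check that the generated tests notice. Below is a stripped-down sketch of that check, with one hand-written mutation operator and a docstring-derived test standing in for the full pipeline.

```python
import ast

# Mutate "+" to "-" in a function's AST, re-run a test generated from the
# docstring, and confirm the test fails. One hard-coded mutation stands in
# for a real operator set.

SRC = '''
def add(a, b):
    """Return a + b."""
    return a + b
'''

class AddToSub(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()  # inject the bug
        return node

def test_passes(src: str) -> bool:
    """Exec the source, then run the docstring-derived test."""
    ns = {}
    exec(src, ns)
    try:
        assert ns["add"](2, 3) == 5  # generated from "Return a + b"
        return True
    except AssertionError:
        return False

mutant = ast.unparse(AddToSub().visit(ast.parse(SRC)))
print(test_passes(SRC), test_passes(mutant))  # True False: the test kills the mutant
```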

Current Research

🔍 Building Transparent AI Systems for Legal Reasoning

Legal AI systems often operate as "black boxes," which is insufficient for practitioners who require transparent and justifiable reasoning.

Our Legal Reasoning Interpretability Framework pioneers "glass box" legal AI where every conclusion traces back to specific evidence through three core innovations:

  • Fact Extraction: Identify which case elements drove each inference
  • Causal Mapping: Trace how facts link to legal outcomes
  • Attention Analysis: Surface the evidence the model weighted most heavily (see the sketch after this list)
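
To make the attention-analysis step concrete, here is a rough sketch of ranking input tokens by received attention using Hugging Face transformers. The base model (bert-base-uncased) and the mean-over-heads aggregation are placeholder choices, not the framework's actual setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

text = "The defendant breached the contract by failing to deliver the goods."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Last-layer attentions: (batch, heads, query, key). Average over heads and
# query positions to get one score per token. Caveat: special tokens like
# [CLS]/[SEP] often dominate raw attention; real analysis corrects for that.
scores = out.attentions[-1].mean(dim=1).mean(dim=1).squeeze(0)
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
for t, s in sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])[:5]:
    print(f"{t:12s} {s:.3f}")
```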

The challenge goes beyond accuracy - it's building trust through transparency. Legal professionals need to understand not just what the AI concluded, but how it arrived there and when to question its judgment.

🛡️ Interpretable Legal Reasoning with Search-Augmented Reinforcement Learning: A Multi-Task Framework (Working Paper)

Legal AI systems hallucinate confidently, providing incorrect answers about judicial reasoning or precedents with no detection mechanism. Our research addresses this through three key innovations:

Tool Access During Training: AI systems learn optimal research strategies during training itself, developing sophisticated information-seeking behaviors that mirror professional legal research patterns.
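
Schematically, a training rollout looks something like the loop below: the policy may emit SEARCH actions, retrieved text is appended to its context, and reward arrives only on the final answer. Every component here is a stub; the real system replaces them with a language-model policy, a legal retriever, and a learned reward.

```python
# Stub corpus standing in for a legal retrieval backend.
CORPUS = {
    "miranda": "Miranda v. Arizona (1966) requires warnings before custodial interrogation.",
}

def retrieve(query: str) -> str:
    return CORPUS.get(query.lower(), "no results")

def policy(context: str) -> str:
    # Stub policy: search once, then answer from the retrieved passage.
    if "custodial interrogation" not in context:
        return "SEARCH(miranda)"
    return "ANSWER: warnings are required before custodial interrogation"

def rollout(question: str, max_steps: int = 4) -> float:
    context = question
    for _ in range(max_steps):
        action = policy(context)
        if action.startswith("SEARCH("):
            context += "\n" + retrieve(action[len("SEARCH("):-1])
        else:
            return 1.0 if "custodial" in action else 0.0  # stub terminal reward
    return 0.0  # ran out of steps without answering

print(rollout("When are Miranda warnings required?"))  # 1.0
```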

Multi-Task Legal Evaluation: Specialized evaluation systems assess legal reasoning across multiple competencies simultaneously - from constitutional analysis to precedent application - ensuring comprehensive rather than narrow legal competence.
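
As a toy illustration, a multi-task harness pairs each competency with its own prompt and scorer, and reports per-task results rather than a single number. The task names, prompts, and scorers below are invented placeholders.

```python
def score_constitutional(answer: str) -> float:
    return 1.0 if "amendment" in answer.lower() else 0.0

def score_precedent(answer: str) -> float:
    return 1.0 if " v. " in answer else 0.0

# One (prompt, scorer) pair per legal competency.
TASKS = {
    "constitutional_analysis": ("Which amendment governs ...?", score_constitutional),
    "precedent_application": ("Which case controls ...?", score_precedent),
}

def evaluate(model) -> dict[str, float]:
    return {name: scorer(model(prompt)) for name, (prompt, scorer) in TASKS.items()}

# A stub "model" that only handles precedent questions:
stub_model = lambda prompt: "Erie v. Tompkins" if "case" in prompt else "not sure"
print(evaluate(stub_model))
# {'constitutional_analysis': 0.0, 'precedent_application': 1.0}
```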

Jurisdictional Compliance Training: Automatic detection and maintenance of compliance across all US jurisdictions, preventing inappropriate legal generalizations while ensuring contextually appropriate advice.

Our goal: developing AI systems that legal professionals can trust because they're transparent about their reasoning, honest about limitations, and equipped with safety mechanisms that flag uncertainty and escalate to human oversight when appropriate.

Teaching Experience

CS 2800: Discrete Structures

Graduate Teaching Assistant, Fall 2023 & Fall 2024

CS 1110: Introduction to Python

Graduate Teaching Assistant, Spring 2024

CS 1700: Elements of Artificial Intelligence

Graduate Teaching Assistant, Spring 2025

Let's Connect

If you’d like to discuss research ideas or explore AI safety and reasoning, I’d love to hear from you.