PyHuman Research
Advancing the science of human-centric AI through open research, collaborative projects, and shared knowledge.
Featured Publications
PyHuman: A Framework for Human-Centric Deep Learning
We introduce PyHuman, a comprehensive framework for building interpretable and human-aligned deep learning models. Our approach combines explainability techniques with human feedback mechanisms to create more trustworthy AI systems.
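The abstract does not specify PyHuman's interface, but the pattern it describes, pairing model explanations with a human review-and-correct cycle, can be sketched generically. The snippet below is an illustrative sketch only, not PyHuman's actual API: the data, the explain and human_label helpers, and the confidence threshold are all invented for the example.

```python
# Hypothetical sketch (not PyHuman's actual API): a minimal human-in-the-loop
# cycle where a reviewer corrects low-confidence predictions, the model is
# refit, and per-feature contributions serve as a simple explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two informative features, binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X[:100], y[:100])

def explain(model, x):
    """Per-feature contribution for a linear model: weight * feature value."""
    return model.coef_[0] * x

def human_label(x):
    """Stand-in for a human reviewer; here it simply applies the true rule."""
    return int(x[0] + 0.5 * x[1] > 0)

# Review loop: route uncertain predictions to the 'human', collect corrections.
pool = X[100:]
probs = model.predict_proba(pool)[:, 1]
uncertain = np.abs(probs - 0.5) < 0.15            # low-confidence cases only
corrections_X = pool[uncertain]
corrections_y = np.array([human_label(x) for x in corrections_X])

# Retrain on the original data plus the human-corrected examples.
X_new = np.vstack([X[:100], corrections_X])
y_new = np.concatenate([y[:100], corrections_y])
model = LogisticRegression().fit(X_new, y_new)

print("contributions for first test point:", explain(model, pool[0]))
```

Routing only low-confidence predictions to the reviewer keeps the human workload small while still steering the model with targeted feedback, which is the core idea the paper's framing suggests.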
Evaluating Human Understanding of AI Explanations
This paper presents a comprehensive evaluation of how well humans understand different types of AI explanations, with implications for designing more effective interpretable systems.
Bias Detection in Human-AI Collaborative Systems
We explore novel methods for detecting and mitigating bias in systems where humans and AI work together, and show significant improvements in fairness metrics.
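The abstract does not name the specific metrics, but a common way to quantify such bias is the demographic parity gap, the difference in positive-decision rates between groups. The sketch below illustrates that standard metric on synthetic data; the scores, threshold values, and group-specific mitigation are invented for the example and are not taken from the paper.

```python
# Illustrative only: demographic parity gap, one common fairness metric,
# computed on synthetic human-AI decision scores.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)             # sensitive attribute (0 or 1)
scores = rng.uniform(size=1000) + 0.1 * group     # scores biased by construction

# A single global threshold inherits the score bias...
gap_before = demographic_parity_gap((scores > 0.55).astype(int), group)

# ...while group-specific thresholds are one simple mitigation.
thresholds = np.where(group == 1, 0.65, 0.55)
gap_after = demographic_parity_gap((scores > thresholds).astype(int), group)

print(f"parity gap: {gap_before:.3f} -> {gap_after:.3f}")
```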
Research Areas
Explainable AI
Making AI decisions transparent and interpretable to humans
Human-in-the-Loop Learning
Integrating human feedback into machine learning systems
AI Fairness & Ethics
Ensuring AI systems are fair, unbiased, and ethically sound
Interactive Machine Learning
Building ML systems that learn from user interactions
Trust & Safety
Developing trustworthy and safe AI systems
Human-AI Interfaces
Designing intuitive interfaces for human-AI collaboration
Research Collaborations
Stanford University
8 researchers
MIT CSAIL
12 researchers
University of Toronto
6 researchers
Google Research
15 researchers
Open Datasets
Join Our Research Community
Collaborate with leading researchers in human-centric AI. Share your work, access datasets, and contribute to the future of ethical AI.