Human-Centered AI and Explainability
Human-centered AI (HCAI) prioritizes augmentation, user agency, and inspectability over automation maximalism (George, 2025).
1. Augmentation vs replacement
- replacement logic: optimize task completion by removing human involvement,
- augmentation logic: improve capability while preserving control and accountability.
HCAI adopts augmentation as the default design target.
2. Risk classes
- opacity of model decisions,
- bias amplification,
- hallucination propagation,
- automation-induced deskilling,
- autonomy erosion through over-personalized defaults.
3. Explainability as interaction requirement
Explainability supports three user-facing objectives:
- prediction trust calibration,
- traceability of system behavior,
- decision comprehension (George, 2025).
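The three objectives can be made concrete as fields on a prediction record: a calibrated confidence score supports trust calibration, a trace of model version and inputs supports traceability, and a short user-facing rationale supports comprehension. A minimal sketch (class and field names are hypothetical, not from the source):

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedPrediction:
    """Hypothetical record pairing a model output with explanation fields."""
    label: str          # the model's decision
    confidence: float   # calibrated score, supports trust calibration
    rationale: str      # short user-facing reason, supports comprehension
    trace: dict = field(default_factory=dict)  # model/inputs, supports traceability

p = ExplainedPrediction(
    label="approve",
    confidence=0.82,
    rationale="Income and repayment history match approved profiles.",
    trace={"model": "v3.1", "features_used": ["income", "history"]},
)
print(p.label, p.confidence)
```

A UI built on such a record can decide per objective what to surface: the confidence for calibration, the trace for audit, the rationale for the end user.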
A practical design objective is to answer, for each automated action, three operational questions: who decides, who can override, and where an explanation is required before the action proceeds.
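The three operational questions can be encoded as a gate that names the decider and the overrider, and blocks an automated action until an explanation is attached. A minimal sketch under these assumptions (all names hypothetical):

```python
from typing import Optional

class ActionGate:
    """Hypothetical human-in-the-loop gate: an action runs only if an
    explanation is attached, and a named human can always override."""

    def __init__(self, decider: str, overrider: str):
        self.decider = decider      # who decides
        self.overrider = overrider  # who can override
        self.log = []               # audit trail of every attempt

    def execute(self, action: str, explanation: Optional[str],
                override: bool = False) -> str:
        if override:  # human override always wins
            self.log.append((self.overrider, "override", action))
            return f"{action}: overridden by {self.overrider}"
        if not explanation:  # explanation required before action
            self.log.append((self.decider, "blocked", action))
            return f"{action}: blocked (no explanation)"
        self.log.append((self.decider, "ran", action))
        return f"{action}: ran ({explanation})"

gate = ActionGate(decider="system", overrider="clinician")
print(gate.execute("send_alert", explanation=None))
print(gate.execute("send_alert", explanation="risk score above threshold"))
```

The log makes the traceability objective concrete: every blocked, executed, or overridden action is attributable to a named party.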
References
George, C. (2025). Introduction to Human-Computer Interaction. Lecture 5: Human-Centred AI.