Zachary A. Caddick, PhD

AAAS Science & Technology Policy Fellow | Cognitive Scientist

Alexandria, VA
Email
GitHub
Google Scholar
LinkedIn
ORCID
OSF
ResearchGate

At a Glance
I bridge the gap between cognitive science and functional engineering. From architecting behavioral microworlds to co-authoring NASA flight-vehicle guidelines and building data pipelines, I design and evaluate the systems that help people reason, make decisions, and perform in high-stakes environments.

Core Expertise
Cognitive Engineering & Systems Architecture
I architect the digital infrastructure used to model and measure human performance. By building scalable behavioral environments, I bridge the gap between human-centered design and technical rigor, ensuring systems are optimized for real-world decision-making.
  • Featured Project: Policy Microworld | A full-stack behavioral simulation built to study how humans optimize outcomes under competing economic constraints.
  • Featured Project: Guidelines for Sleep Environments in Spaceflight Missions | Co-developed the formal sleep environment recommendations for future spaceflight vehicles, bridging physiological rigor with vehicle architecture.
  • Featured Project: Interface | A keyboard-driven Chrome extension engineered to transform web navigation into an efficient, muscle-memory task.
  • Featured Project: The Voting System Sandbox | A full-stack interactive simulation designed to visualize how different voting structures change election outcomes. Built to help users understand the trade-offs between complex systems like Ranked-Choice and Approval voting.

Data Orchestration & Pipelines
I build the technical systems that make high-stakes research and automation possible. I specialize in connecting workflows across different tools (e.g., Python, R) to turn manual, complex tasks into automated and reliable processes.
  • Featured Project: Cognitive Performance Data Processing Suite | Architected an automated ETL pipeline using Python and Streamlit to transform raw, unstructured E-Prime outputs into standardized datasets for PVT (Psychomotor Vigilance Task), DSST (Digit Symbol Substitution Test), and Serial Addition metrics.
  • Featured Project: Circadian Analysis Tool | A cross-functional application automating Non-Stationary Oscillation Analysis (NOSA) for physiological data.
  • Featured Project: Plex Music Library Organizer | A production-ready Python suite for automated media management and API-driven library organization.

Decision Science & Evaluation Rigor
I design analytical frameworks to measure and improve human decision-making. My work focuses on how experts maintain high-level skills over time and how individual biases—like motivated reasoning—shape our understanding of complex issues (e.g., climate change, voting, medicine, spaceflight, technology ecosystems).
  • Featured Project: Climate Change Reasoning Measure | A novel psychometric tool designed to evaluate "motivated reasoning" on anthropogenic climate change.
  • Featured Project: Medical Expertise, Reasoning and Physician Certification | Published extensive research on how experts maintain high-level skills throughout their careers. This work explores how strategic testing serves as a powerful tool for continuous learning—not just assessment—ensuring that specialized knowledge remains robust over decades of practice.

Professional Journey
Systems-Level Science Policy
Currently serving as an AAAS Science & Technology Policy Fellow at the National Science Foundation, I work within the Technology, Innovation, and Partnerships (TIP) Directorate to architect more effective national innovation programs. My role spans from evaluating program impact and equity to mapping global leadership in emerging technologies through bibliometric and competitive analysis. Additionally, I focus on workforce development, ensuring that national investments translate into sustainable career pathways and a robust technical talent pool.

Research Background
My training is in cognitive science and human factors, focused on how people process information and make decisions. At NASA, I contributed to research aimed at optimizing human performance in demanding operational environments. During my PhD at the University of Pittsburgh, I examined how individual judgments aggregate into collective outcomes, developing experimental and analytical frameworks to study decision-making at scale. I now apply this perspective in national science policy, helping design and evaluate large-scale programs for long-term impact.

Technical Systems & Infrastructure
I build and deploy the technical architecture required to model, orchestrate, and evaluate human performance at scale.

Data Engineering & Analysis
  • Statistical Modeling: Building complex simulations and predictive models in Python and R.
  • Data Transformation: Architecting large-scale pipelines to process and standardize behavioral data.
  • Performance Analytics: Applying matrix-based computation to quantify human reasoning and decision-making.

Interactive Tooling
  • Analytical Applications: Developing full-stack tools (Streamlit, R-based applications) to make complex data interactive and accessible.
  • Workflow Automation: Engineering custom browser-based interfaces and API-driven systems to streamline research and operations.

Deployment & Operations
  • Containerization: Managing resilient environments using Docker for consistent cross-platform performance.
  • Infrastructure: Overseeing self-hosted server infrastructure (Unraid) and production-ready automation pipelines.