Physiological Aid Protocol (PAP) – Part 1 of 4 Series

Part 1 — Physiological Aid Protocol (PAP): Introduction

A Precautionary Safety Standard for Conversational AI



PAP Rationale

Conversational AI systems are increasingly capable of sustaining long, emotionally salient dialogue. These interactions can be beneficial, informative, and supportive. However, they also introduce a category of safety risk that traditional AI safeguards do not adequately address: conversational escalation without visibility into the human state of mind.

Unlike in clinical or supervised environments, conversational AI systems:

  • Do not have access to physiological sensors
  • Cannot assess mental health status
  • Cannot diagnose psychological or medical conditions

Yet they may still participate in interactions that intensify distress, dependency, or emotional arousal. For vulnerable individuals, this escalation can be particularly harmful.

The Physiological Aid Protocol (PAP) was created to address this gap.

Rather than attempting to infer or interpret a user’s mental condition, PAP applies precautionary safety engineering to reduce avoidable harm when conversational risk increases beyond defined thresholds.


What PAP Is — and What It Is Not

PAP is:

  • A system-level safety function
  • Preventive rather than reactive
  • Triggered by observable interaction behavior
  • Designed using established safety-engineering principles

PAP is not:

  • A diagnostic or mental health tool
  • A therapeutic intervention
  • A replacement for professional care
  • A system that measures or infers human physiology or psychology

This distinction is intentional. PAP does not attempt to understand what a user is feeling. It responds only to how the interaction itself is behaving.


The Core Problem: Escalation Without Visibility

In safety-critical domains, escalation is monitored through direct measurement:

  • Speed and altitude in aviation
  • Pressure and temperature in industrial systems
  • Voltage and current in electrical systems

Conversational AI lacks equivalent measurements of human state. What it can observe are interaction-level signals, such as:

  • Increasing linguistic intensity
  • Repetition and persistence
  • Exclusivity or dependency language
  • Self-harm references
  • Progressive narrowing of conversational scope

These signals do not prove harm. However, they correlate with elevated risk.

Safety engineering does not require certainty. It requires thresholds and safeguards.
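To make the threshold idea concrete, here is a minimal sketch of how interaction-level signals might be scored against an activation threshold. The signal names, weights, and threshold value are illustrative assumptions for this series, not part of any PAP specification; real values would require empirical calibration.

```python
# Hypothetical sketch: scoring interaction-level signals against a threshold.
# Signal names, weights, and the threshold are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "linguistic_intensity": 1.0,
    "repetition": 0.5,
    "dependency_language": 1.5,
    "self_harm_reference": 3.0,
    "scope_narrowing": 0.5,
}

ACTIVATION_THRESHOLD = 3.0  # illustrative; real thresholds need calibration


def risk_score(observed_signals: dict) -> float:
    """Weighted sum of observed signal intensities (each in [0, 1])."""
    return sum(
        SIGNAL_WEIGHTS.get(name, 0.0) * level
        for name, level in observed_signals.items()
    )


def should_activate(observed_signals: dict) -> bool:
    """PAP-style precaution: act on thresholds, not certainty."""
    return risk_score(observed_signals) >= ACTIVATION_THRESHOLD
```

Note that the score never claims to measure the user's mental state; it aggregates only observable properties of the interaction, which is the distinction the section above insists on.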


A Traffic Light Analogy

A traffic light does not predict accidents, judge driver intent, or eliminate all risk. It exists because when traffic density or conditions cross certain thresholds, uncontrolled movement becomes dangerous. Temporary stops and controlled flow at those points reduce harm and prevent cascading failures.

PAP serves an equivalent function in conversational AI:

  • Inactive during normal interaction
  • Activated when risk signals escalate
  • Non-intrusive during safe flow
  • Critical when uncertainty and harm potential are highest
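The traffic-light behavior above can be sketched as a two-state monitor: inactive during normal flow, active once escalation crosses a threshold. The state names, thresholds, and hysteresis gap below are assumptions for illustration only.

```python
# Hypothetical sketch of the traffic-light behavior: inactive during normal
# interaction, activated when escalation crosses a threshold. State names
# and threshold values are illustrative assumptions.

from enum import Enum


class PapState(Enum):
    INACTIVE = "inactive"   # normal interaction, no intervention
    ACTIVE = "active"       # risk signals have escalated past threshold


class PapMonitor:
    def __init__(self, activate_at: float = 3.0, deactivate_at: float = 1.0):
        # Hysteresis: the deactivation threshold sits below the activation
        # threshold so the protocol does not flicker near the boundary.
        self.activate_at = activate_at
        self.deactivate_at = deactivate_at
        self.state = PapState.INACTIVE

    def update(self, score: float) -> PapState:
        if self.state is PapState.INACTIVE and score >= self.activate_at:
            self.state = PapState.ACTIVE
        elif self.state is PapState.ACTIVE and score <= self.deactivate_at:
            self.state = PapState.INACTIVE
        return self.state
```

The hysteresis gap mirrors how traffic control avoids rapid toggling: once activated, the monitor stays active until risk has clearly subsided, not merely dipped below the activation point.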

The following parts expand from public explanation to policy application, engineering logic, and technical implementation.

Part 2 explains why PAP belongs in policy and governance, and how it can be adopted without clinical claims, privacy intrusion, or ideological bias.

➡ Part 2: Policymaker Summary

© 2025 AI Safety International.
This document may be freely shared, referenced, and adapted for educational, policy, and legislative purposes, provided proper attribution is maintained. No endorsement is implied.
