AI Therapy Sessions
Chat, voice, video — how sessions flow, conversation tokens, safety layer.
AI Therapy Sessions are the core of the therappai experience. Through a small set of API calls, your application can let users send messages and receive supportive, therapeutic responses in real time. Sessions can be delivered through chat, voice, or video depending on your integration needs.
How AI therapy works
At a high level, every session follows the same pattern:

1. The user sends a message (text, audio, or a mixture, depending on the mode).
2. therappai processes the message, evaluating tone, intent, context, and wellbeing signals.
3. An AI therapist response is generated, grounded in CBT/DBT principles and supportive conversation design.
4. The response is returned to your application as text, audio, or a rendered video/avatar clip.
Your app chooses how to display it: chat bubbles, voice playback, video frame, etc.
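The steps above boil down to a single request/response round trip per turn. Here is a minimal sketch in Python: `send_fn` stands in for whatever HTTP client your app uses, and the `"message"` and `"reply"` field names are illustrative assumptions, not the documented schema.

```python
from typing import Callable

def run_turn(send_fn: Callable[[dict], dict], user_message: str) -> str:
    """One conversational turn: send the user's message, return the reply text.

    `send_fn` posts a payload to therappai and returns the decoded JSON
    response. The "message" and "reply" field names are assumptions.
    """
    response = send_fn({"message": user_message})  # steps 1-3 happen server-side
    return response.get("reply", "")               # step 4: your app renders this

# Example with a stubbed transport (no network involved):
def fake_send(payload: dict) -> dict:
    return {"reply": f"Thanks for sharing: {payload['message']}"}
```

Swapping `fake_send` for a real HTTP call is the only change needed to go live, which also makes this loop easy to unit-test.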
Session Modes
1. Chat Therapy
The simplest and most flexible option. Your app sends a text message, and the API responds with a therapist reply.
Best for:
- chat interfaces
- onboarding conversations
- lightweight support flows
- low-bandwidth environments
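A chat-mode call might look like the following sketch, which builds (but does not send) the request. The base URL is a placeholder and the JSON field names are assumptions; only the `/chatting/text` path comes from this guide, so verify the full schema against the API reference.

```python
import json
import urllib.request

API_BASE = "https://api.therappai.example"  # placeholder host, not the real one

def build_chat_request(access_token: str, message: str) -> urllib.request.Request:
    """Build a POST to the text chat endpoint (field names are assumptions)."""
    body = json.dumps({"message": message}).encode()
    return urllib.request.Request(
        f"{API_BASE}/chatting/text",
        data=body,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is one call away:
# with urllib.request.urlopen(build_chat_request(token, "I had a rough day")) as r:
#     reply = json.load(r)
```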
2. Voice Therapy
Voice sessions accept an audio file and return a generated response based on the speech content.
Best for:
- users who prefer to speak more naturally
- hands-free interactions
- accessibility use cases
Note: Your app handles recording and sending the audio file.
3. Video Therapy (AI Avatar)
For a more immersive experience, you can call the AI avatar endpoints to generate video responses. You supply:
- a message
- an avatar ID
- context/persona
- a session token
The API returns a video segment of the avatar delivering the therapist’s response.
Best for:
- higher-engagement experiences
- wellness apps
- structured lesson or coaching flows
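The four inputs listed above can be packaged into one request body. The field names below are assumptions chosen to mirror the list, not the documented schema, so check them against the avatar endpoint reference before use.

```python
import json

def build_avatar_payload(message: str, avatar_id: str,
                         persona: str, session_token: str) -> str:
    """Assemble a JSON body for a /liveavatar/... request.

    Field names are illustrative assumptions; consult the endpoint
    reference for the real schema.
    """
    return json.dumps({
        "message": message,
        "avatar_id": avatar_id,
        "context": persona,
        "session_token": session_token,
    })
```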
Session Context
Sessions are stateless per request — meaning you do not need to hold open a “session object” on the server.
Instead:
- You can store conversation history on your end if your UI requires it.
- The API can accept short context snippets to improve continuity.
- For video therapy, you may need to request a “session token” before sending/receiving clips.
This keeps integration simple, predictable, and free of long-running connections.
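Because sessions are stateless, continuity comes from whatever context snippet you pass along. One simple client-side approach, with an assumed turn format, is to send only the most recent turns that fit a size budget:

```python
def build_context_snippet(history: list[dict], max_chars: int = 1000) -> list[dict]:
    """Return the most recent turns whose combined text fits a character budget.

    `history` is your app's own record of the conversation, e.g.
    [{"role": "user", "text": "..."}, {"role": "therapist", "text": "..."}];
    this turn format is an assumption, not an API requirement.
    """
    snippet: list[dict] = []
    used = 0
    for turn in reversed(history):       # walk backwards from the newest turn
        used += len(turn["text"])
        if used > max_chars:
            break                        # budget exhausted: drop older turns
        snippet.insert(0, turn)          # keep chronological order
    return snippet
```

A character budget is a crude but dependency-free proxy for token limits; swap in a real tokenizer if you need precision.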
Safety Layer (Important)
Every message processed by the API goes through a safety and wellbeing filter, which screens for:
- distress signals
- crisis language
- self-harm indicators
- abusive or harmful content
If risk-related content appears, your app will receive a safe, supportive response rather than raw model output.
This layer is designed to protect both your users and your platform, and it requires no setup on your side.
Important: therappai does not automatically contact emergency services. You decide what to do with risk-related signals or Crisis Buddy information.
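Since the decision about risk-related signals is yours, your app needs a routing point for them. How such signals are flagged is not specified here, so the `"risk_flag"` field and the `log_for_review` helper below are hypothetical placeholders for your own product policy:

```python
def handle_response(response: dict) -> str:
    """Decide what to show based on an assumed risk flag in the response.

    therappai already returns a safe, supportive message in risky cases;
    the "risk_flag" and "reply" field names here are illustrative assumptions.
    """
    if response.get("risk_flag"):
        # The API never contacts emergency services for you, so this is
        # where your own policy kicks in (e.g. surface local resources).
        log_for_review(response)  # hypothetical helper in your codebase
    return response.get("reply", "")

def log_for_review(response: dict) -> None:
    """Stub: record flagged exchanges for your safety/clinical workflow."""
    pass
```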
Typical Integration Flow
Here’s how developers usually integrate AI therapy:

1. Create or log in the user
2. Get an access token
3. Display the chat/voice/video UI
4. When the user sends a message, send it to /chatting/text or /chatting/voice, or to /liveavatar/... for video
5. Display the AI response
6. Loop from step 4
You can combine this with:
- content recommendations
- mood check-ins
- daily routines

to build a complete wellbeing experience.
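Putting the flow together, the sketch below maps each UI mode to its endpoint family and runs the send/display loop with a stubbed transport so it stays self-contained. The `/liveavatar/...` path is left as the placeholder the docs use (fill in the real path from the avatar reference), and the payload/reply field names are assumptions.

```python
from typing import Callable

def choose_endpoint(mode: str) -> str:
    """Map a UI mode to the endpoint family described in this guide."""
    if mode in ("text", "voice"):
        return f"/chatting/{mode}"
    if mode == "video":
        return "/liveavatar/..."  # placeholder path; see the avatar reference
    raise ValueError(f"unknown mode: {mode}")

def session_loop(send_fn: Callable[[str, dict], dict], mode: str,
                 user_messages: list[str]) -> list[str]:
    """Steps 4-6 of the flow: send each message, collect each reply.

    `send_fn(path, payload)` is your HTTP transport; the "message" and
    "reply" field names are assumptions.
    """
    replies = []
    for msg in user_messages:
        response = send_fn(choose_endpoint(mode), {"message": msg})
        replies.append(response.get("reply", ""))
    return replies

# Stubbed transport for illustration (no network):
def fake_transport(path: str, payload: dict) -> dict:
    return {"reply": payload["message"].upper()}
```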