Integrating Video Therapy
Calling the video avatar generation / playback endpoints.
Video Therapy allows you to offer a more immersive wellbeing experience using therappai’s AI-powered therapist avatars. Your application sends a text message, and the API returns a generated video clip of the AI therapist delivering the response.
This feature is ideal for high-engagement moments such as onboarding, guided sessions, structured lessons, reflection exercises, or supportive check-ins.
This page explains how to integrate video therapy end-to-end.
How Video Therapy Works
The process is simple:
Request a session token (if required by your implementation)
Send a message to the Live Avatar endpoint
therappai generates a spoken response
A video file or stream URL is returned to your app
You render the avatar video in your UI
Repeat for each interaction
Video clips are generated per message — there is no long-running session connection.
1. Request a Session Token (if needed)
Some implementations require a short-lived session token for generating multiple videos in a conversation.
POST /liveavatar/session/
Authorization: Bearer ACCESS_TOKEN

The response typically includes a token you’ll pass into subsequent video calls.
If your integration doesn’t require session grouping, you can skip this.
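The token request above can be sketched in Python with only the standard library. The base URL and the `token` field name in the response are assumptions for illustration; substitute the values from your own deployment.

```python
import json
import urllib.request

API_BASE = "https://api.therappai.com"  # hypothetical base URL; use your deployment's host

def auth_headers(access_token: str) -> dict:
    """Build the Authorization header used by every liveavatar call."""
    return {"Authorization": f"Bearer {access_token}"}

def request_session_token(access_token: str) -> str:
    """POST /liveavatar/session/ and return the short-lived session token."""
    req = urllib.request.Request(
        f"{API_BASE}/liveavatar/session/",
        method="POST",
        headers=auth_headers(access_token),
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["token"]  # field name assumed; check your actual response shape
```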
2. Send a Video Therapy Request
To generate a video response from the avatar, call:
POST /liveavatar/generate/
Authorization: Bearer ACCESS_TOKEN
Content-Type: application/json

Example body:
{
"message": "I'm struggling with motivation today. Can you help?",
"session_token": "optional-session-token",
"avatar_id": "default",
"voice_id": "default"
}

Common Parameters
message — the user’s text input
session_token — (optional) link multiple videos together
avatar_id — choose the avatar
voice_id — choose the voice persona
Use default IDs unless you’re customizing the integration.
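The request above can be wrapped in a small helper. Building the payload is separated from sending it so the optional `session_token` is only included when present; the base URL is an assumption for illustration.

```python
import json
import urllib.request

API_BASE = "https://api.therappai.com"  # hypothetical base URL; use your deployment's host

def build_generate_payload(message, session_token=None,
                           avatar_id="default", voice_id="default"):
    """Assemble the JSON body for POST /liveavatar/generate/."""
    payload = {"message": message, "avatar_id": avatar_id, "voice_id": voice_id}
    if session_token:
        payload["session_token"] = session_token  # optional: groups clips into one session
    return payload

def generate_video(access_token, message, **kwargs):
    """Send the request and return the parsed JSON response."""
    data = json.dumps(build_generate_payload(message, **kwargs)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/liveavatar/generate/",
        data=data,
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```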
3. Receive the Video URL
A successful response returns a URL for the generated clip, typically alongside a text transcript.
Example:
{
"video_url": "https://cdn.therappai.com/videos/clip_12345.mp4",
"transcript": "I'm sorry to hear you're struggling..."
}

Recommended handling:
preload the video for smooth playback
show a “preparing video…” indicator
autoplay with audio (if allowed)
provide mute/unmute options
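Parsing the response into a small typed structure keeps the rest of your code from depending on raw dictionaries. The `transcript` field is treated as optional here, which is an assumption; verify against the responses you actually receive.

```python
from dataclasses import dataclass

@dataclass
class AvatarClip:
    """One generated avatar response: the playable clip and its transcript."""
    video_url: str
    transcript: str

def parse_clip(response: dict) -> AvatarClip:
    """Extract the fields shown in the example response above."""
    return AvatarClip(
        video_url=response["video_url"],
        transcript=response.get("transcript", ""),  # assumed optional; keep for accessibility
    )
```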
4. Render the Video in Your UI
Developers typically display video therapy using:
a floating video bubble
a full-screen avatar
a card-style player
a standard <video> component
a mobile native video player
Keep these UX tips in mind:
show a loading indicator while the video generates
allow users to replay the clip
provide a text transcript for accessibility
fade between clips for smooth transitions
5. Repeat the Loop
For each new user message:
Send a request to /liveavatar/generate/
Receive a new video_url
Render the avatar response
Continue the conversation
You control history, UI layout, and session context on your side.
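The loop above can be sketched as a single driver function. It takes your input, playback, and generation functions as parameters (the `generate_video` callable is assumed to be your wrapper around POST /liveavatar/generate/), and keeps the conversation history on your side, as the docs note.

```python
def run_conversation(access_token, get_user_message, play_video, generate_video):
    """Drive the per-message loop: read input, generate a clip, play it, repeat."""
    history = []
    while True:
        message = get_user_message()
        if not message:  # empty input ends the conversation
            break
        response = generate_video(access_token, message)
        history.append({"user": message, "transcript": response.get("transcript")})
        play_video(response["video_url"])
    return history
```

Because each clip is generated per message with no long-running connection, this loop is the entire session: no sockets to manage, just one request per turn.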
Optional Enhancements
Typing / Generating Indicator
Show a “therappai is preparing your video…” message during generation.
Transcript Display
Offer both video and text responses for accessibility.
Fallback to Chat
If video generation fails, fall back to chat therapy automatically.
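A minimal sketch of that fallback, assuming a `generate_video` callable that raises on network failure and a `chat_fallback` callable wrapping your chat therapy integration (both names are illustrative):

```python
import urllib.error

def respond(access_token, message, generate_video, chat_fallback):
    """Try video generation first; fall back to text chat on failure."""
    try:
        clip = generate_video(access_token, message)
        return {"mode": "video",
                "video_url": clip["video_url"],
                "transcript": clip.get("transcript")}
    except (urllib.error.URLError, KeyError):
        # Generation failed or the response was malformed: degrade to chat.
        return {"mode": "chat", "text": chat_fallback(access_token, message)}
```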
Video Caching
Cache the last few clips for smooth navigation and replay.
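One way to cache "the last few clips" is a small least-recently-used map from message to clip URL, sketched here with the standard library:

```python
from collections import OrderedDict

class ClipCache:
    """Keep the most recent N clip URLs keyed by message, so replays are instant."""

    def __init__(self, max_clips=5):
        self.max_clips = max_clips
        self._clips = OrderedDict()

    def put(self, message, video_url):
        self._clips[message] = video_url
        self._clips.move_to_end(message)        # mark as most recently used
        while len(self._clips) > self.max_clips:
            self._clips.popitem(last=False)     # evict the oldest clip

    def get(self, message):
        url = self._clips.get(message)
        if url is not None:
            self._clips.move_to_end(message)    # a replay also counts as recent use
        return url
```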
Safety Layer (Important)
Every message goes through therappai’s wellbeing and safety system, which checks for:
distress language
harmful statements
self-harm indicators
crisis cues
Even in video mode, the avatar will always deliver a safe, supportive, and grounded therapeutic response.
therappai will never contact emergency services on its own. Your application is responsible for deciding when to surface Crisis Buddy or other support resources.
Minimum Viable Video Integration
Login → get access token
Build a video player component in your UI
POST to /liveavatar/generate/ with the user’s message
Play the returned video
Loop for ongoing messages
This gives you a fully working AI video therapy experience.