# Integrating Video Therapy

Video Therapy allows you to offer a more immersive wellbeing experience using therappai’s AI-powered therapist avatars. Your application sends a text message, and the API returns a generated **video clip** of the AI therapist delivering the response.

This feature is ideal for high-engagement moments such as onboarding, guided sessions, structured lessons, reflection exercises, or supportive check-ins.

This page explains how to integrate video therapy end-to-end.

***

## **How Video Therapy Works**

The process is simple:

1. **Request a session token**\
   (if required by your implementation)
2. **Send a message to the Live Avatar endpoint**
3. **therappai generates a spoken response**
4. **A video file or stream URL is returned to your app**
5. **You render the avatar video in your UI**
6. **Repeat for each interaction**

Video clips are generated per message — there is no long-running session connection.

***

### **1. Request a Session Token (if needed)**

Some implementations require a short-lived session token to group multiple video generations into a single conversation.

```http
POST /liveavatar/session/
Authorization: Bearer ACCESS_TOKEN
```

The response typically includes a token you’ll pass into subsequent video calls.

If your integration doesn’t require session grouping, you can skip this.
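
As a minimal sketch, a session-token request might look like the following. The base URL and the `session_token` response field name are assumptions for illustration — substitute the host and field names from your therappai account:

```typescript
// Assumption: replace with your actual therappai API host.
const API_BASE = "https://api.example.com";

// Small helper so every call builds the same auth headers.
function authHeaders(accessToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  };
}

// Request a short-lived token for grouping multiple video generations.
async function createVideoSession(accessToken: string): Promise<string> {
  const res = await fetch(`${API_BASE}/liveavatar/session/`, {
    method: "POST",
    headers: authHeaders(accessToken),
  });
  if (!res.ok) throw new Error(`Session request failed: ${res.status}`);
  const data = (await res.json()) as { session_token: string };
  return data.session_token;
}
```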

***

### **2. Send a Video Therapy Request**

To generate a video response from the avatar, call:

```http
POST /liveavatar/generate/
Authorization: Bearer ACCESS_TOKEN
Content-Type: application/json
```

Example body:

```json
{
  "message": "I'm struggling with motivation today. Can you help?",
  "session_token": "optional-session-token",
  "avatar_id": "default",
  "voice_id": "default"
}
```

#### Common Parameters

* **message** — the user’s text input
* **session\_token** — (optional) link multiple videos together
* **avatar\_id** — choose the avatar
* **voice\_id** — choose the voice persona

Use default IDs unless you’re customizing the integration.
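
A sketch of this call in client code — the body fields mirror the example above; the base URL is a placeholder for your therappai host:

```typescript
// Assumption: replace with your actual therappai API host.
const API_BASE = "https://api.example.com";

interface GenerateOptions {
  sessionToken?: string;
  avatarId?: string;
  voiceId?: string;
}

// Pure helper: build the JSON body, falling back to the default IDs.
// JSON.stringify drops session_token entirely when it is undefined.
function buildGenerateBody(message: string, opts: GenerateOptions = {}) {
  return {
    message,
    session_token: opts.sessionToken,
    avatar_id: opts.avatarId ?? "default",
    voice_id: opts.voiceId ?? "default",
  };
}

async function generateVideo(
  accessToken: string,
  message: string,
  opts: GenerateOptions = {},
): Promise<{ video_url: string; transcript: string }> {
  const res = await fetch(`${API_BASE}/liveavatar/generate/`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildGenerateBody(message, opts)),
  });
  if (!res.ok) throw new Error(`Video generation failed: ${res.status}`);
  return res.json();
}
```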

***

### **3. Receive the Video URL**

A successful response returns a URL for the generated clip (or, in some configurations, the video file itself).

Example:

```json
{
  "video_url": "https://cdn.therappai.com/videos/clip_12345.mp4",
  "transcript": "I'm sorry to hear you're struggling..."
}
```

#### Recommended handling:

* preload the video for smooth playback
* show a “preparing video…” indicator
* autoplay with audio (if allowed)
* provide mute/unmute options
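
The validation-then-preload pattern might look like this sketch. The `video_url` and `transcript` fields match the example payload above; `preloadClip` is browser-only and assumes a DOM:

```typescript
interface ClipResponse {
  video_url: string;
  transcript?: string;
}

// Pure helper: reject malformed payloads before touching the player.
function parseClipResponse(data: unknown): ClipResponse {
  const clip = data as Partial<ClipResponse>;
  if (typeof clip.video_url !== "string" || !clip.video_url.startsWith("http")) {
    throw new Error("Response is missing a usable video_url");
  }
  return { video_url: clip.video_url, transcript: clip.transcript };
}

// Browser-only: start buffering the clip so playback begins smoothly.
function preloadClip(url: string): HTMLVideoElement {
  const video = document.createElement("video");
  video.preload = "auto";
  video.src = url;
  video.load();
  return video;
}
```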

***

### **4. Render the Video in Your UI**

Developers typically display video therapy using:

* a floating video bubble
* a full-screen avatar
* a card-style player
* a standard `<video>` component
* a mobile native video player

Keep these UX tips in mind:

* show a loading indicator while the video generates
* allow users to replay the clip
* provide a text transcript for accessibility
* fade between clips for smooth transitions
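
A browser-only sketch of a card-style player with a transcript line for accessibility — element structure and layout here are illustrative, not part of the API:

```typescript
// Pure helper: the attributes we want on every avatar <video> element.
function playerAttributes(url: string) {
  return { src: url, autoplay: true, playsInline: true, controls: true };
}

function renderAvatarClip(container: HTMLElement, url: string, transcript: string): void {
  const video = document.createElement("video");
  const attrs = playerAttributes(url);
  video.src = attrs.src;
  video.autoplay = attrs.autoplay;
  video.playsInline = attrs.playsInline;
  video.controls = attrs.controls; // lets users replay and mute/unmute

  // Text fallback for accessibility, using the transcript from the response.
  const caption = document.createElement("p");
  caption.textContent = transcript;

  container.textContent = "";
  container.append(video, caption);
}
```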

***

### **5. Repeat the Loop**

For each new user message:

1. Send a request to `/liveavatar/generate/`
2. Receive a new `video_url`
3. Render the avatar response
4. Continue the conversation

You control history, UI layout, and session context on your side.
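
One way to sketch that loop with client-side history — the `generate` callable stands in for the POST to `/liveavatar/generate/`, and the shape of each history entry is your design choice, not an API requirement:

```typescript
interface Turn {
  role: "user" | "avatar";
  text: string;
  videoUrl?: string;
}

// Pure helper: history is append-only and kept entirely on your side.
function appendExchange(
  history: Turn[],
  userText: string,
  transcript: string,
  videoUrl: string,
): Turn[] {
  return [
    ...history,
    { role: "user", text: userText },
    { role: "avatar", text: transcript, videoUrl },
  ];
}

async function runTurn(
  history: Turn[],
  userText: string,
  generate: (msg: string) => Promise<{ video_url: string; transcript: string }>,
): Promise<Turn[]> {
  const clip = await generate(userText); // POST /liveavatar/generate/
  // Render clip.video_url here, then record both sides of the exchange.
  return appendExchange(history, userText, clip.transcript, clip.video_url);
}
```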

***

### **Optional Enhancements**

#### **Typing / Generating Indicator**

Show a “therappai is preparing your video…” message during generation.

#### **Transcript Display**

Offer both video and text responses for accessibility.

#### **Fallback to Chat**

If video generation fails, fall back to chat therapy automatically.
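
A sketch of that policy — both callables are injected so the fallback logic stays testable; wiring them to the actual video and chat endpoints is up to your integration:

```typescript
type VideoReply = { mode: "video"; videoUrl: string; transcript: string };
type ChatReply = { mode: "chat"; text: string };

// Try video generation first; on any failure, degrade gracefully to text.
async function respondWithFallback(
  message: string,
  tryVideo: (msg: string) => Promise<{ video_url: string; transcript: string }>,
  tryChat: (msg: string) => Promise<string>,
): Promise<VideoReply | ChatReply> {
  try {
    const clip = await tryVideo(message);
    return { mode: "video", videoUrl: clip.video_url, transcript: clip.transcript };
  } catch {
    // Video generation failed — answer with plain chat therapy instead.
    return { mode: "chat", text: await tryChat(message) };
  }
}
```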

#### **Video Caching**

Cache the last few clips for smooth navigation and replay.
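
A small fixed-size cache is enough for this; capacity and keying (by message ID here) are design choices, not API requirements:

```typescript
// Keeps the most recent clips, evicting the oldest once capacity is hit.
class ClipCache {
  private order: string[] = [];
  private clips = new Map<string, string>(); // message ID -> video URL

  constructor(private capacity = 5) {}

  put(id: string, url: string): void {
    if (this.clips.has(id)) {
      // Re-inserting an existing clip refreshes its position.
      this.order = this.order.filter((k) => k !== id);
    } else if (this.order.length >= this.capacity) {
      const oldest = this.order.shift()!;
      this.clips.delete(oldest);
    }
    this.order.push(id);
    this.clips.set(id, url);
  }

  get(id: string): string | undefined {
    return this.clips.get(id);
  }
}
```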

***

### **Safety Layer (Important)**

Every message goes through therappai’s wellbeing and safety system, which checks for:

* distress language
* harmful statements
* self-harm indicators
* crisis cues

Even in video mode, the avatar will always deliver a **safe, supportive, and grounded** therapeutic response.

therappai will **never** contact emergency services on its own.\
Your application is responsible for deciding when to surface Crisis Buddy or other support resources.

***

### **Minimum Viable Video Integration**

1. Login → get access token
2. Build a video player component in your UI
3. POST to `/liveavatar/generate/` with the user’s message
4. Play the returned video
5. Loop for ongoing messages

This gives you a fully working AI video therapy experience.
