Ship live translations
with confidence
A production-ready full-stack Node.js + React application for seamless EN↔RU↔UK live auto-detect translation with voice synthesis.
⚙ Installation: Set up the project locally with Docker, Redis, and LibreTranslate in minutes.
▦ Architecture: Understand the STT → Translation → TTS pipeline and real-time Socket.io communication.
▶ Live Translation: Stream from YouTube or microphone with automatic EN/RU/UK language detection and voice output.
📜 Biblical Simulator: Test the full pipeline with AI-generated biblical passages in King James, Church Slavonic, or Ukrainian style.
🎤 Voice Training: Clone custom voices from microphone recordings or YouTube videos using ElevenLabs IVC.
Prerequisites
- Node.js 20+: Runtime for backend and build tools
- Docker + Docker Compose: For Redis and LibreTranslate services
- yt-dlp + ffmpeg: Required for YouTube audio extraction
- ElevenLabs API Key: For speech-to-text and text-to-speech
Clone & Configure
git clone https://github.com/Pzharyuk/live-translator-node.git && cd live-translator-node
cp .env.example .env
Edit .env and set your API key:
ELEVENLABS_API_KEY=sk-your-key-here
ADMIN_PASSWORD=your-secure-password
Start Infrastructure
# Start Redis + LibreTranslate
docker compose -f docker-compose.local.yml up -d
# Wait for LibreTranslate to download language models (~500 MB)
docker logs -f $(docker ps -qf "name=libretranslate") 2>&1 | grep -i "running"
Start Backend
cd backend
npm install
npm run dev # nodemon watches for changes
Start Frontend
cd frontend
npm install
npm run dev # Vite hot-reload on localhost:5173
✓ You're all set!
Open http://localhost:5173 — log in with user / changeme and you will be redirected to /translate. Admin panel: http://localhost:5173/admin (admin password: admin123).
System Overview
The frontend and backend communicate bidirectionally over Socket.io; the backend fans out to external services:
- Frontend: React 19 + Vite, Socket.io client, Web Audio API
- Backend: Express + Socket.io, TypeScript, service layer
- ElevenLabs: Scribe v2 (STT), TTS streaming, voice cloning
- Translation providers: Google Translate (Cloud API), LibreTranslate (self-hosted), DeepL (premium API), Claude / Anthropic (AI)
- Redis: feature flags, settings store
- Google Gemini: Biblical Simulator, sermon generation, voice training text
- DeepL: free & pro tiers, auto endpoint detection
Data Flow
1. Audio input (mic / YouTube / simulator)
2. PCM 16-bit LE @ 16 kHz, streamed as Socket.io chunks
3. ElevenLabs Scribe v2 WebSocket STT
4. Commit merge buffer (2.5 s VAD aggregation)
5. Translation provider (Google / LibreTranslate / DeepL / Claude)
6. ElevenLabs TTS (voice synthesis, streaming)
7. Audio playback (queued, with a 600 ms pause between segments)
Key Architecture Decisions
Two-layer Language Detection
LibreTranslate's /detect endpoint returns zero confidence for short Cyrillic phrases. The app therefore uses script-based pre-detection (Unicode range 0x0400–0x04FF = Cyrillic) combined with ElevenLabs Scribe's language_code output for reliable EN/RU/UK auto-detection.
VAD Commit Merging
Voice Activity Detection can fire aggressively on speaker breathing. Commits are buffered for 2.5 seconds before translation to merge fragments into meaningful phrases.
Feature Flag Merging
YAML config defaults are merged with Redis runtime overrides. Redis values take priority, falling back to YAML if Redis is unavailable.
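A minimal sketch of that merge, assuming flags live under flag:<name> keys as described in the Redis section (function name and "true"/"false" string storage are illustrative):

import Redis from "ioredis";
type Flags = Record<string, boolean>;
async function mergedFlags(redis: Redis, yamlDefaults: Flags): Promise<Flags> {
  const merged: Flags = { ...yamlDefaults };       // start from YAML defaults
  try {
    for (const name of Object.keys(yamlDefaults)) {
      const override = await redis.get(`flag:${name}`); // stored as "true"/"false" (assumed)
      if (override !== null) merged[name] = override === "true"; // Redis wins
    }
  } catch {
    // Redis unavailable: YAML defaults stand
  }
  return merged;
}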
API Key Hierarchy
Keys resolve in order: Runtime Cache → Redis → Config File → Empty. This allows hot-swapping keys without restarts.
Connection Lifecycle
- Client sends start_session with source type (mic or youtube) and optional voiceId
- Backend opens a WebSocket to wss://api.elevenlabs.io/v1/speech-to-text/realtime
- For YouTube: spawns yt-dlp | ffmpeg child processes to extract PCM audio
- For microphone: awaits audio_chunk events from the frontend
Audio Streaming
Audio chunks are sent to Scribe as JSON messages:
{
  "message_type": "input_audio_chunk",
  "audio_base_64": "UklGR..." // PCM 16-bit LE, 16kHz, mono
}
Scribe Responses
| Response Type | Meaning | Action |
|---|---|---|
| partial_transcript | Live partial text (speculative) | Emitted as non-final transcript event |
| committed_transcript | VAD fired; phrase is complete | Buffered for the commit merge window |
Commit Merge Buffer
After receiving a committed_transcript, the backend waits 2.5 seconds (COMMIT_MERGE_MS) to collect additional commits before translating. This prevents fragmented translations from aggressive VAD.
Stability Timeout
If VAD stalls (no new commits), a fallback timer (STABILITY_TIMEOUT_MS, default 3000 ms) fires to translate whatever new text has accumulated, preventing indefinite silence.
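A simplified sketch of the two timers working together; translate() stands in for the real dispatch, and the real service tracks more per-session state:

const COMMIT_MERGE_MS = 2500;      // merge window after each commit
const STABILITY_TIMEOUT_MS = 3000; // fallback when VAD stalls
declare function translate(text: string): void; // real dispatch lives elsewhere
let pending: string[] = [];
let mergeTimer: ReturnType<typeof setTimeout> | undefined;
let stabilityTimer: ReturnType<typeof setTimeout> | undefined;
function flush() {
  clearTimeout(mergeTimer);
  clearTimeout(stabilityTimer);
  mergeTimer = stabilityTimer = undefined;
  const text = pending.join(" ").trim();
  pending = [];
  if (text) translate(text);
}
function onCommittedTranscript(text: string) {
  pending.push(text);
  clearTimeout(mergeTimer); // every commit restarts the merge window
  mergeTimer = setTimeout(flush, COMMIT_MERGE_MS);
  stabilityTimer ??= setTimeout(flush, STABILITY_TIMEOUT_MS); // armed once per utterance
}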
Text Validation
Before translation, text is validated against EN/RU/UK character regex patterns. This filters out hallucinated text from the STT model (common with silence or background noise).
Provider Chain
The system supports multiple translation providers with automatic fallback:
- LibreTranslate (default): Self-hosted, no API key required. Runs in Docker alongside the app. Best for privacy and cost.
- Google Translate (Cloud API): Google Cloud Translation API v2, fast and deterministic. Requires an API key.
- DeepL (premium): High-quality translations. Supports both free and paid API tiers. Auto-detects the endpoint.
- Claude (AI): Anthropic's Claude for context-aware translations. Uses claude-haiku-4-5 for speed.
Fallback Logic
1. Try primary provider (admin-selected)
2. If primary fails → try configured fallback
3. If fallback fails → try LibreTranslate (last resort)
4. If all fail → emit error event
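In code, the chain reduces to trying each provider in priority order; a sketch, with the TranslateFn shape assumed:

type TranslateFn = (text: string, src: string, tgt: string) => Promise<string>;
async function translateWithFallback(
  text: string, src: string, tgt: string,
  primary: TranslateFn, fallback: TranslateFn, libre: TranslateFn,
): Promise<string> {
  for (const provider of [primary, fallback, libre]) {
    try {
      return await provider(text, src, tgt);
    } catch {
      // move on to the next provider in the chain
    }
  }
  throw new Error("all translation providers failed"); // surfaces as the error event
}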
Language Detection
The app uses a two-layer auto-detection approach:
Layer 1: Script-based Pre-detection
Before calling any translation API, the backend checks Unicode character scripts (see the sketch after this list):
- Cyrillic characters (Unicode 0x0400–0x04FF): if more than 50% of matched letters are Cyrillic, the text is detected as Russian
- Latin characters: detected as English
- This avoids low-confidence results from LibreTranslate's /detect endpoint on short text
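A minimal version of the Layer 1 check (preDetect is an illustrative name, not the actual function):

function preDetect(text: string): "ru" | "en" | null {
  const cyrillic = (text.match(/[\u0400-\u04FF]/g) ?? []).length; // Cyrillic block
  const latin = (text.match(/[A-Za-z]/g) ?? []).length;
  const total = cyrillic + latin;
  if (total === 0) return null;                 // no letters to judge by
  return cyrillic / total > 0.5 ? "ru" : "en";  // >50% Cyrillic → treat as Russian
}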
Layer 2: STT Language Code
When the auto_language_detect flag is enabled, ElevenLabs Scribe returns a language_code with each transcript commit. The backend uses this to correctly route EN/RU/UK without relying solely on script detection.
Note: For LibreTranslate, Russian and Ukrainian Cyrillic text are both passed with source ru, since LibreTranslate handles Ukrainian acceptably via the Russian model. DeepL and Claude distinguish Ukrainian natively and handle uk as a proper source language.
Language Gating
Detected languages are checked against the admin-approved pool. If a detected language isn't in the allowed set, the translation is rejected to prevent hallucinated language outputs.
TTS Pipeline
After translation, the text is sent to ElevenLabs TTS:
const stream = await client.textToSpeech.stream(voiceId, {
  text: translatedText,
  model_id: "eleven_multilingual_v2",
  output_format: "mp3_44100_128",
  voice_settings: {
    stability: 0.5,
    similarity_boost: 0.75,
    style: 0.0,
    speed: 1.0,
    use_speaker_boost: true
  }
});
Audio Delivery
TTS audio is streamed to a Buffer, then emitted as a base64-encoded MP3 via the tts_audio Socket.io event.
Frontend Playback Queue
The frontend maintains an audio queue to prevent overlapping playback (see the sketch below):
- Received tts_audio events are queued
- Each segment plays to completion before the next starts
- A configurable pause (600ms default) is inserted between segments
- The pause duration is controlled by tts_segment_pause_ms (adjustable in admin)
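A browser-side sketch of such a queue; the Audio-element approach and names are illustrative, with the 600ms default from above:

const queue: string[] = [];
let playing = false;
let pauseMs = 600; // updated from tts_segment_pause_ms
function enqueue(base64Mp3: string) {
  queue.push(base64Mp3); // fed by the tts_audio event
  if (!playing) void drain();
}
async function drain() {
  playing = true;
  while (queue.length > 0) {
    const audio = new Audio(`data:audio/mpeg;base64,${queue.shift()}`);
    await new Promise<void>((resolve) => {
      audio.onended = () => resolve();
      void audio.play();
    });
    await new Promise((r) => setTimeout(r, pauseMs)); // inter-segment pause
  }
  playing = false;
}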
Microphone Input
- User selects the "Mic" tab and chooses a TTS voice
- Browser captures audio via the Web Audio API's ScriptProcessor (see the capture sketch below)
- PCM 16-bit LE at a 16kHz sample rate is sent to the backend via Socket.io
- Backend pipes audio to the ElevenLabs Scribe v2 Realtime WebSocket
- Language is auto-detected (EN/RU/UK), text translated and synthesized
- TTS audio is returned and played back with inter-segment pauses
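A sketch of the capture step; the socket wiring is assumed, and ScriptProcessor (deprecated in favor of AudioWorklet, but still functional) matches the description above:

async function startMic(socket: { emit: (ev: string, data: ArrayBuffer) => void }) {
  const ctx = new AudioContext({ sampleRate: 16000 });
  const media = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(media);
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    const f32 = e.inputBuffer.getChannelData(0);
    const pcm = new Int16Array(f32.length);
    for (let i = 0; i < f32.length; i++) {
      const s = Math.max(-1, Math.min(1, f32[i]));
      pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff; // Float32 [-1, 1] → Int16
    }
    socket.emit("audio_chunk", pcm.buffer); // matches the audio_chunk event above
  };
  source.connect(processor);
  processor.connect(ctx.destination); // ScriptProcessor needs a sink to fire
}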
YouTube Input
- User pastes a YouTube URL (live stream or video)
- Backend spawns yt-dlp | ffmpeg child processes
- Audio is extracted as a PCM stream (16kHz, 16-bit LE, mono)
- Piped to Scribe v2, same pipeline as microphone
- Stream ends when the YouTube content ends or the user stops
User Interface
The user view features a dark cavern theme with:
- Waveform visualizer — Canvas-based bar chart with orange gradient and cyan tips
- Transcript display — White translated text scrolls upward with fade masks
- Partial transcript — Shown in italic orange while STT is processing
- Source tabs — Toggle between Mic and YouTube (controlled by feature flags)
How It Works
The backend uses yt-dlp and ffmpeg as child processes to extract audio from YouTube URLs:
yt-dlp (best audio) → ffmpeg (PCM 16kHz 16-bit LE mono) → Scribe v2
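A sketch of the spawn wiring with standard yt-dlp/ffmpeg flags; the actual service may use different options:

import { spawn } from "node:child_process";
function extractPcm(url: string): NodeJS.ReadableStream {
  const ytdlp = spawn("yt-dlp", ["-f", "bestaudio", "-o", "-", url]); // audio to stdout
  const ffmpeg = spawn("ffmpeg", [
    "-i", "pipe:0",        // read yt-dlp's stdout
    "-f", "s16le",         // raw PCM 16-bit little-endian
    "-acodec", "pcm_s16le",
    "-ar", "16000",        // 16 kHz
    "-ac", "1",            // mono
    "pipe:1",
  ]);
  ytdlp.stdout.pipe(ffmpeg.stdin);
  return ffmpeg.stdout; // pipe this into the Scribe WebSocket
}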
Supported Sources
- Live streams — Translates in real-time as the stream progresses
- Regular videos — Processes the full audio track
- Any URL supported by yt-dlp (YouTube, etc.)
Requirements
Both yt-dlp and ffmpeg must be installed and available in the system PATH. On macOS:
brew install yt-dlp ffmpeg
⚠
Feature Flag Required
YouTube input is controlled by the youtube_input feature flag. Enable it in the admin panel to show the YouTube tab in the user view.
Overview
The Biblical Transcript Simulator is an admin-only feature that generates biblical text passages using Google's Gemini API (gemini-2.5-flash), then routes them through the full translation pipeline. This provides a hands-free way to test STT → Translation → TTS without a live audio source.
Language Styles
| Language | Style | Example |
|---|---|---|
| en | King James English | "In the beginning was the Word..." |
| ru | Church Slavonic Russian | "В начале было Слово..." |
| uk | Traditional Ukrainian | "На початку було Слово..." |
Flow
- Admin selects language (EN/RU/UK)
- Backend calls Gemini 2.5 Flash with streaming
- Gemini generates 6-8 biblical passages, 3-5 sentences each
- Stream is buffered until 140+ characters AND complete sentences (see the sketch after this list)
- Chunks emitted with 1800ms smooth pacing between them
- Each chunk flows through the standard pipeline:
  - Emitted as transcript (isFinal: true)
  - Auto-translated via configured provider
  - TTS synthesized and audio returned
- Frontend plays audio with standard inter-segment pause
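The buffering rule itself is small enough to sketch independently of the Gemini client; the sentence-ending regex is an assumption:

const MIN_CHARS = 140;  // minimum buffered length
const PACE_MS = 1800;   // smooth pacing between chunks
let buffer = "";
async function onDelta(delta: string, emit: (chunk: string) => void) {
  buffer += delta;
  const endsSentence = /[.!?][»")\]]?\s*$/.test(buffer); // assumed boundary check
  if (buffer.length >= MIN_CHARS && endsSentence) {
    emit(buffer.trim()); // enters the pipeline as a final transcript
    buffer = "";
    await new Promise((r) => setTimeout(r, PACE_MS));
  }
}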
💡
Feature Flag
Enable biblical_simulator in the admin feature flags panel. The Gemini API key is configured via the GEMINI_API_KEY environment variable or set at runtime in the admin API Keys panel.
Overview
Voice Training uses ElevenLabs' Instant Voice Cloning (IVC) API to create custom voices from audio samples. Once cloned, the voice appears in the voice selector immediately.
From Microphone
- Open the Voice Training section in the admin panel
- Click Generate Text to get an AI-generated reading passage (via Gemini) — gives the speaker natural, phonetically diverse text to read aloud
- Record multiple audio clips using your browser microphone while reading the generated text
- Provide a name for the voice
- Clips are uploaded to ElevenLabs IVC API
- Cloned voice is available for TTS immediately
- Click Preview Voice to hear the cloned voice speak a sample sentence via TTS
From YouTube
- Paste a YouTube URL in the Voice Training section
- Backend extracts N × 30-second clips via yt-dlp + ffmpeg
- Clips are uploaded to the ElevenLabs IVC API (see the sketch below)
- Resulting voice is stored in your ElevenLabs account
⚠
ElevenLabs Account
Cloned voices are stored in your ElevenLabs account, not locally. Ensure your plan supports voice cloning.
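For reference, the upload step can be sketched against ElevenLabs' add-voice (IVC) endpoint. The app goes through the official SDK, so treat this raw HTTP form as illustrative only:

async function cloneVoice(name: string, clips: Buffer[], apiKey: string) {
  const form = new FormData();
  form.append("name", name);
  clips.forEach((clip, i) =>
    form.append("files", new Blob([clip], { type: "audio/mpeg" }), `clip-${i}.mp3`),
  );
  const res = await fetch("https://api.elevenlabs.io/v1/voices/add", {
    method: "POST",
    headers: { "xi-api-key": apiKey }, // content-type is set by FormData
    body: form,
  });
  if (!res.ok) throw new Error(`IVC upload failed: ${res.status}`);
  return (await res.json()) as { voice_id: string };
}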
Concepts
| Concept | Description |
|---|---|
| Active Language Pair | The current pair used for translation (e.g., EN ↔ RU, EN ↔ UK, or RU ↔ UK). Set by admin. |
| Available Languages | The pool of languages viewers can select from (if user_language_selector is enabled). |
Admin Controls
- Change the active language pair via the admin panel
- Changes broadcast to all connected clients in real-time
- Manage the available languages pool for viewer selection
Viewer Selection
When the user_language_selector feature flag is enabled, viewers can override the admin-set language pair by selecting their own preferred languages from the available pool.
Overview
Two people can video call each other through the app, each speaking their own language. The app transcribes, translates, and synthesizes speech in real-time so each participant hears the other in their language.
Feature flag: Video call is gated behind the video_translation flag. Enable it in the admin panel or set video_translation: true in your YAML config.
How It Works
- Create a room — Person A selects their language, picks a TTS voice, and clicks "Create Room". A 6-character room code is generated.
- Share the code — Person A shares the room code with Person B (copy button provided).
- Join the room — Person B enters the code, selects their language and TTS voice, and clicks "Join".
- WebRTC connection — The app establishes a peer-to-peer video connection via WebRTC (signaled through Socket.io). Video flows directly between browsers.
- Audio translation — Each participant's microphone audio is simultaneously:
- Sent to the peer via WebRTC (but muted on their end)
- Captured as PCM chunks and sent to the backend via Socket.io for STT
- Translation pipeline — Each participant has their own independent Scribe STT session. Transcribed text is translated to the other participant's language, then synthesized via ElevenLabs TTS and sent back to the peer.
- Playback — The peer hears the TTS translation instead of the raw audio. Translated transcript is displayed below the video.
Architecture
Person A (Browser) Server Person B (Browser)
├─ getUserMedia ├─ Socket.io ├─ getUserMedia
├─ WebRTC P2P ═══video═══►│ (signaling) ◄═══ ├─ WebRTC P2P
│ │ │
├─ PCM chunks ──Socket.io─►├─ ScribeA(STT) │
│ │ ↓ translate │
│ │ ↓ TTS ───────────►├─ Plays TTS
│ │ │
│ Plays TTS ◄─────────────├─ ScribeB(STT) ◄───├─ PCM chunks
│ (remote video muted) │ ↓ translate │ (remote video muted)
└──────────────────────────┴────────────────────┘
Socket Events
| Event | Direction | Purpose |
|---|---|---|
| video_create_room | C→S | Create a new room with language + voice |
| video_room_created | S→C | Returns the 6-char room code |
| video_join_room | C→S | Join an existing room |
| video_room_joined | S→C | Sent to both participants, triggers WebRTC |
| video_signal_offer/answer/ice | C↔S | WebRTC signaling relay |
| video_audio_chunk | C→S | PCM audio for STT processing |
| video_transcript | S→C | Transcript sent to the speaker |
| video_translation | S→C | Translation sent to the listener |
| video_tts_audio | S→C | TTS audio sent to the listener |
| video_leave_room | C→S | Leave the room |
| video_room_closed | S→C | Notify the peer when the other participant leaves |
Room Lifecycle
- Rooms are stored in Redis under video_room:{code} with a 4-hour TTL (see the sketch below)
- Maximum 2 participants per room
- When one participant disconnects, the other is notified and the call ends
- Scribe sessions are automatically cleaned up on disconnect
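A sketch of that persistence with ioredis; the Room shape is an assumption:

import Redis from "ioredis";
const redis = new Redis();
interface Room {
  code: string;           // 6-character join code
  participants: string[]; // socket ids, max 2
}
async function saveRoom(room: Room) {
  await redis.set(`video_room:${room.code}`, JSON.stringify(room), "EX", 4 * 3600); // 4h TTL
}
async function loadRoom(code: string): Promise<Room | null> {
  const raw = await redis.get(`video_room:${code}`);
  return raw ? (JSON.parse(raw) as Room) : null;
}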
Feature Flags
Feature flags control the availability of major features across the application. They are stored in config/application.yaml with defaults that can be overridden at runtime via Redis. Admins can toggle flags live using the GET/POST /admin/flags endpoints without restarting the server.
| Flag | Default | Description |
|---|---|---|
| youtube_input | true | Enable YouTube audio streaming as an input source for transcription. |
| mic_input | true | Enable microphone audio input for real-time speech recognition. |
| auto_language_detect | true | Automatically detect the source language of transcribed text during translation. |
| user_language_selector | false | Allow end-users to select their preferred language pair for translation. |
| audio_device_selector | true | Enable users to choose their audio input device from available options. |
| video_translation | true | Enable real-time translation for video calls via the /video route. |
| video_voice_cloning | false | Premium feature: show the Clone Voice button in the /video lobby for instant voice training. |
| remote_audio_source | false | Enable the /audio-source route for headless remote audio relay to broadcasts. |
| broadcast | false | Enable the /broadcast public receiver page for viewing live translated broadcasts. |
| translate | false | Enable the /translate live translator page for admin-controlled broadcast translation. |
Storage & API
Feature flags are persisted in Redis with keys prefixed flag:. At startup, the application merges YAML defaults with any Redis overrides, so flags can be changed at runtime without restarting. All connected clients receive the merged flag set via the feature_flags Socket.IO event upon connection.
Admin Endpoints
# Fetch all flags (merged: YAML defaults + Redis overrides)
GET /admin/flags
→ { "flags": { "youtube_input": true, "broadcast": false, ... } }
# Get a single flag
GET /admin/flags/:flag
→ { "flag": "broadcast", "value": false }
# Set a flag (persists to Redis; broadcasts to all connected clients)
POST /admin/flags/:flag
Body: { "value": true }
→ { "flag": "broadcast", "value": true }
→ WebSocket broadcast to all clients: { "feature_flags": { ... } }
File Structure
| File | Purpose |
|---|---|
| config/application.yaml | Base defaults for all environments |
| config/application-local.yaml | Local development overrides (localhost URLs) |
| config/application-prod.yaml | Production overrides (Docker service names) |
The APP_ENV environment variable (local or prod) determines which overlay file is loaded on top of the base config.
Full Configuration Reference
server:
  port: 3001
  cors_origin: "http://localhost:5173"
elevenlabs:
  api_key: "${ELEVENLABS_API_KEY}"
  default_voice_id: "kxj9qk6u5PfI0ITgJwO0"
  tts_model: "eleven_multilingual_v2"
  tts_settings:
    stability: 0.5
    similarity_boost: 0.75
    style: 0.0
    speed: 1.0
    use_speaker_boost: true
  stt_model: "scribe_v2"
anthropic:
  api_key: "${ANTHROPIC_API_KEY}"
deepl:
  api_key: "${DEEPL_API_KEY}"
libretranslate:
  url: "http://libretranslate:5000"
  api_key: ""
redis:
  host: "redis"
  port: 6379
  password: ""
feature_flags:
  youtube_input: true
  mic_input: true
  auto_language_detect: true
  user_language_selector: false
  audio_device_selector: true
  video_translation: false
  video_voice_cloning: false
  broadcast: false
audio:
  sample_rate: 16000
  channels: 1
  chunk_duration_ms: 250
translation:
  source_lang: "auto"
  target_lang_en: "en"
  target_lang_ru: "ru"
  provider: "libretranslate"
  fallback: "libretranslate"
Environment Variable Interpolation
YAML values using ${VAR_NAME} syntax are automatically replaced with the corresponding environment variable at startup.
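The substitution amounts to a one-line regex replace (sketch):

function interpolate(value: string): string {
  return value.replace(/\$\{(\w+)\}/g, (_, name) => process.env[name] ?? "");
}
interpolate("${ELEVENLABS_API_KEY}"); // → the env var's value, or "" if unset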
TTS Settings
Configure text-to-speech behavior via the admin panel or REST API. Settings are persisted to Redis and applied immediately to active sessions.
API Endpoints
curl -X GET http://localhost:3001/admin/tts-settings \
  -H "Cookie: connect.sid=YOUR_SESSION" \
  -H "Content-Type: application/json"
Response:
{
  "settings": {
    "stability": 0.5,
    "similarity_boost": 0.75,
    "style": 0.0,
    "speed": 1.0,
    "use_speaker_boost": true
  }
}
curl -X POST http://localhost:3001/admin/tts-settings \
  -H "Cookie: connect.sid=YOUR_SESSION" \
  -H "Content-Type: application/json" \
  -d '{
    "stability": 0.6,
    "similarity_boost": 0.8,
    "speed": 1.1
  }'
Response:
{
  "settings": {
    "stability": 0.6,
    "similarity_boost": 0.8,
    "style": 0.0,
    "speed": 1.1,
    "use_speaker_boost": true
  }
}
TTS Settings Reference
| Setting | Range | Default | Description |
|---|---|---|---|
| stability | 0.0 – 1.0 | 0.5 | Controls variance in voice generation; lower = more variable & expressive, higher = more consistent. |
| similarity_boost | 0.0 – 1.0 | 0.75 | Amplifies adherence to the selected voice's characteristics; higher = more similar to the original voice. |
| style | 0.0 – 1.0 | 0.0 | Exaggerates style descriptors of the voice; 0 = no exaggeration, higher = more pronounced style. |
| speed | 0.5 – 2.0 | 1.0 | Playback speed multiplier; 1.0 = normal speed, < 1.0 = slower, > 1.0 = faster. |
| use_speaker_boost | true / false | true | Enhances voice clarity & presence; requires higher stability for best results. |
Model Configuration
| Setting | Value | Description |
|---|---|---|
| tts_model | eleven_multilingual_v2 | ElevenLabs multilingual TTS model; supports 29+ languages with natural prosody. |
| default_voice_id | kxj9qk6u5PfI0ITgJwO0 | Fallback voice for TTS when no explicit voice is specified; must be a valid ElevenLabs voice ID. |
| stt_model | scribe_v2_realtime | ElevenLabs real-time speech-to-text model; provides low-latency partial & committed transcripts. |
STT Timing Settings
Fine-tune speech-to-text buffering, VAD parameters, and dispatch thresholds.
curl -X GET http://localhost:3001/admin/stt-timing \
  -H "Cookie: connect.sid=YOUR_SESSION"
Response:
{
  "settings": {
    "commit_merge_ms": 2500,
    "stability_timeout_ms": 3000,
    "tts_segment_pause_ms": 0,
    "max_accumulation_ms": 30000,
    "vad_threshold": 0.5,
    "vad_silence_threshold_secs": 1.5,
    "min_speech_duration_ms": 100,
    "min_silence_duration_ms": 100,
    "flush_on_sentence_boundary": false,
    "min_chars_before_dispatch": 400
  }
}
curl -X POST http://localhost:3001/admin/stt-timing \
  -H "Cookie: connect.sid=YOUR_SESSION" \
  -H "Content-Type: application/json" \
  -d '{
    "commit_merge_ms": 2000,
    "min_chars_before_dispatch": 300
  }'
| Setting | Range | Default | Description |
|---|---|---|---|
| commit_merge_ms | 100 – 10000 | 2500 | Milliseconds to buffer VAD commits before flushing for translation; merges sentence fragments. |
| stability_timeout_ms | 500 – 10000 | 3000 | Milliseconds to wait for stable partial text before dispatching if VAD doesn't fire. |
| tts_segment_pause_ms | 0 – 5000 | 0 | Pause duration between TTS audio segments; allows viewers to distinguish separate sentences. |
| max_accumulation_ms | 5000 – 120000 | 30000 | Maximum time to accumulate words during continuous speech before force-dispatching for translation. |
| vad_threshold | 0.0 – 1.0 | 0.5 | Voice activity detection sensitivity; higher = stricter noise filtering, lower = more sensitive to speech. |
| vad_silence_threshold_secs | 0.1 – 5.0 | 1.5 | Duration of silence (seconds) before VAD triggers a commit; lower = snappier response. |
| min_speech_duration_ms | 50 – 500 | 100 | Minimum speech duration (milliseconds) before VAD considers it valid speech; filters clicks & pops. |
| min_silence_duration_ms | 50 – 500 | 100 | Minimum silence gap (milliseconds) recognized as a speech boundary. |
| flush_on_sentence_boundary | true / false | false | When enabled, split transcripts at sentence boundaries (.?!;) for more natural chunking. |
| min_chars_before_dispatch | 100 – 1000 | 400 | Minimum character count before a transcript chunk is sent for translation; prevents tiny fragments. |
Notes
- Real-time updates: Changes to TTS settings apply immediately to new TTS requests; ongoing audio playback uses the settings from when the request started.
- VAD tuning: Adjust vad_threshold and vad_silence_threshold_secs together for optimal speech detection in noisy environments.
- Accumulation timer: max_accumulation_ms ensures continuous speech (sermons, lectures) is periodically dispatched even when VAD or stability timers never fire.
- Sentence boundaries: Enable flush_on_sentence_boundary for more natural transcript chunking and audio alignment.
- Minimum dispatch size: Increase min_chars_before_dispatch to reduce API calls and TTS overhead for very short utterances.
STT Timing Settings
Control how long the Speech-to-Text (STT) engine waits before translating captured audio.
These settings affect transcription buffering, stability detection, and VAD (Voice Activity Detection) behavior.
Configuration Table
| Setting | Default | Description |
|---|---|---|
| commit_merge_ms | 2500 | Buffer VAD commits for this duration before translating (ms); merges short speech fragments into coherent chunks. |
| stability_timeout_ms | 3000 | Wait for stable (unchanged) partial text before translating if VAD doesn't fire (ms). |
| tts_segment_pause_ms | 0 | Pause between consecutive TTS audio segments (ms); the frontend uses this value. |
| max_accumulation_ms | 30000 | Maximum time to accumulate words during continuous speech before force-dispatching for translation (ms). |
| vad_threshold | 0.5 | VAD noise filter strictness: 0–1 (higher → stricter, filters more background noise). |
| vad_silence_threshold_secs | 1.5 | Seconds of silence required before VAD commits the current utterance. |
| min_speech_duration_ms | 100 | Ignore speech segments shorter than this duration (ms); filters out clicks and noise. |
| min_silence_duration_ms | 100 | Minimum silence gap (ms) between detected speech segments. |
| flush_on_sentence_boundary | false | When true, dispatch text at sentence endings (.?!;) instead of all at once; improves responsiveness for natural speech. |
| min_chars_before_dispatch | 400 | Minimum characters before a chunk is sent for translation (prevents tiny fragments from triggering TTS). |
Tuning Guide
- Faster response time: Lower commit_merge_ms (e.g., 1000–1500), lower stability_timeout_ms (e.g., 1500–2000), enable flush_on_sentence_boundary (sketched below).
- Better accuracy (fewer fragments): Raise commit_merge_ms (e.g., 3000–4000), increase min_chars_before_dispatch (e.g., 500–800).
- Continuous speech (sermons, long utterances): Lower max_accumulation_ms (e.g., 10000–15000) to dispatch chunks more frequently, enable flush_on_sentence_boundary.
- Noisy environment: Raise vad_threshold (e.g., 0.7–0.9), increase min_speech_duration_ms (e.g., 200–300).
- Quiet environment: Lower vad_threshold (e.g., 0.2–0.4), lower vad_silence_threshold_secs (e.g., 0.8–1.0).
- TTS pacing: Adjust tts_segment_pause_ms to add breathing room between audio clips (e.g., 200–500ms for natural rhythm).
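For reference, a sketch of the boundary split that flush_on_sentence_boundary implies (the actual implementation may differ):

function splitOnSentenceBoundary(text: string): { ready: string[]; rest: string } {
  const parts = text.split(/(?<=[.?!;])\s+/); // punctuation followed by whitespace
  const rest = parts.pop() ?? "";             // trailing fragment stays buffered
  return { ready: parts, rest };
}
splitOnSentenceBoundary("First point. Second point. And then");
// → { ready: ["First point.", "Second point."], rest: "And then" }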
API Endpoints
GET /admin/stt-timing
→ Returns current STT timing settings
Example response:
{
  "settings": {
    "commit_merge_ms": 2500,
    "stability_timeout_ms": 3000,
    "tts_segment_pause_ms": 0,
    "max_accumulation_ms": 30000,
    "vad_threshold": 0.5,
    "vad_silence_threshold_secs": 1.5,
    "min_speech_duration_ms": 100,
    "min_silence_duration_ms": 100,
    "flush_on_sentence_boundary": false,
    "min_chars_before_dispatch": 400
  }
}
---
POST /admin/stt-timing
Updates one or more STT timing settings. Send only the fields you want to change.
Example request:
{
  "commit_merge_ms": 1500,
  "flush_on_sentence_boundary": true,
  "max_accumulation_ms": 15000
}
Example response:
{
  "settings": {
    "commit_merge_ms": 1500,
    "stability_timeout_ms": 3000,
    "tts_segment_pause_ms": 0,
    "max_accumulation_ms": 15000,
    "vad_threshold": 0.5,
    "vad_silence_threshold_secs": 1.5,
    "min_speech_duration_ms": 100,
    "min_silence_duration_ms": 100,
    "flush_on_sentence_boundary": true,
    "min_chars_before_dispatch": 400
  }
}
Notes
- All timing values are in milliseconds unless otherwise noted (VAD silence is in seconds).
- Settings are persisted to Redis and survive server restarts.
- Changes apply immediately to new sessions; existing sessions use the settings they loaded at connection time.
- The stability timeout & accumulation timer are fallbacks for when VAD (Voice Activity Detection) doesn't fire reliably during continuous speech.
- Sentence boundary flushing requires flush_on_sentence_boundary: true and works by splitting on punctuation (.?!;) followed by whitespace.
Authentication is JWT cookie-based. Admin access requires is_admin=true, or a user whose assigned role(s) contain the relevant permissions. Middleware: adminAuth guards all endpoints; requirePermission(...) is added for specific operations.
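A hypothetical sketch of those two guards; the real middleware lives in the backend source, and req.user population from the JWT cookie is assumed:

import type { Request, Response, NextFunction } from "express";
function adminAuth(req: Request, res: Response, next: NextFunction) {
  const user = (req as any).user; // populated from the JWT cookie (assumed)
  if (user?.is_admin || user?.permissions?.length) return next();
  res.status(403).json({ error: "admin access required" });
}
const requirePermission = (perm: string) =>
  (req: Request, res: Response, next: NextFunction) => {
    const user = (req as any).user;
    if (user?.is_admin || user?.permissions?.includes(perm)) return next();
    res.status(403).json({ error: `missing permission: ${perm}` });
  };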
API Keys Management
Retrieve status of all configured API keys (elevenlabs, anthropic, deepl, libretranslate, google).
Update one or more API keys.
Body: {
  "elevenlabs"?: string,
  "anthropic"?: string,
  "deepl"?: string,
  "libretranslate"?: string,
  "google"?: string
}
Retrieve the current Anthropic API key.
Voice Management
Scan and list all available ElevenLabs voices with their IDs, names, and categories.
Get the list of voice IDs allowed for users (null → all voices allowed).
Set the whitelist of voice IDs available to users; broadcasts to all connected clients.
Body: {
  "voiceIds": string[]
}
TTS & STT Settings
Retrieve current TTS settings (stability, similarity_boost, style, speed, use_speaker_boost).
Update TTS settings and persist to Redis.
Body: {
  "stability"?: number,
  "similarity_boost"?: number,
  "style"?: number,
  "speed"?: number,
  "use_speaker_boost"?: boolean
}
Get STT timing & VAD parameters (commit merge, stability timeout, sentence boundary flushing, etc.).
Update STT timing & VAD parameters; affects live transcription behavior.
Body: {
  "commit_merge_ms"?: number,
  "stability_timeout_ms"?: number,
  "tts_segment_pause_ms"?: number,
  "max_accumulation_ms"?: number,
  "vad_threshold"?: number,
  "vad_silence_threshold_secs"?: number,
  "min_speech_duration_ms"?: number,
  "min_silence_duration_ms"?: number,
  "flush_on_sentence_boundary"?: boolean,
  "min_chars_before_dispatch"?: number
}
Retrieve video call STT/TTS settings (stability_ms, commit_merge_ms, translation_provider).
Update video call settings.
Body: {
  "stability_ms"?: number,
  "commit_merge_ms"?: number,
  "translation_provider"?: "libretranslate" | "claude" | "deepl" | "google"
}
Languages
Get the current active language pair [source, target].
Set the active language pair; broadcasts update to all clients.
Body: {
  "languages": [string, string] // exactly 2 language codes
}
Get the pool of languages users can select from.
Set the language pool and broadcast to all clients.
Body: {
  "languages": string[]
}
Translation Provider
Get the active translation provider & list of available providers.
Switch the active translation provider (google, deepl, claude, libretranslate).
Body: {
  "provider": "google" | "deepl" | "claude" | "libretranslate"
}
Get the current Claude model for translation & list of available models.
Set the Claude model for translation.
Body: {
  "model": string // e.g., "claude-3-5-sonnet-20241022"
}
Audio Device
Get the admin-selected audio input device (overrides viewer selection).
Set admin-forced audio device; broadcasts to all clients.
Body: {
  "deviceId"?: string,
  "label"?: string
}
Feature Flags
Get all feature flags merged from YAML config & Redis overrides.
Get a single feature flag value.
Set a feature flag & broadcast to all clients.
Body: {
  "value": boolean
}
Broadcast Schedule
Get the list of scheduled broadcast events.
Update the broadcast schedule.
Body: {
  "events": [
    {
      "id": string,
      "title": string,
      "datetime": string, // ISO 8601
      "description"?: string
    }
  ]
}
TTS & Translation Preview
Generate and return MP3 audio from text using specified voice (admin testing).
Body: {
  "text": string,
  "voiceId"?: string
}
Generate a random biblical sermon snippet via Gemini Flash (admin testing).
Body: {
  "apiKey"?: string,
  "language"?: "ru" | "uk" | "en",
  "sentences"?: number
}
Voice Training & Cloning
Clone a voice from browser mic recordings (base64-encoded audio blobs).
Body: {
  "name": string,
  "clips": string[], // base64-encoded audio chunks
  "mimeType"?: string
}
Clone a voice from YouTube URL via yt-dlp & ffmpeg (extracts N×30s clips).
Body: {
  "name": string,
  "youtubeUrl": string,
  "clipCount"?: number,
  "startOffset"?: number
}
Monitoring & Logs
Get hallucination detection statistics & log.
Clear the hallucination log.
Get recent translation & TTS timing logs.
Clear the translation log.
Get current broadcast translation queue depth.
Broadcast Session History
List all broadcast sessions with metadata.
Get detailed session transcript & timing data.
Export session transcript in JSON, CSV, or TXT format.
Query: ?format=json | csv | txt
User Management
List all users (requires user_management permission); passwords stripped.
Update user admin status &/or assigned roles (requires user_management permission).
Body: {
  "isAdmin"?: boolean,
  "roleId"?: string | null,
  "roleIds"?: string[]
}
Force reset a user's password (requires user_management permission).
Body: {
  "password": string // min 6 chars
}
Delete a user account (requires user_management permission); cannot delete yourself.
Roles & Permissions
List all available permissions (requires user_management permission).
List all roles & their permissions (requires user_management permission).
Create a new role (requires user_management permission).
Body: {
  "name": string,
  "permissions": string[] // subset of all available permissions
}
Update a role's name & permissions (requires user_management permission).
Body: {
  "name": string,
  "permissions": string[]
}
Delete a role (requires user_management permission).
SDK
Uses the official @elevenlabs/elevenlabs-js SDK (v2). The client is lazy-loaded on first use.
Speech-to-Text (Scribe v2 Realtime)
Connects via native WebSocket to wss://api.elevenlabs.io/v1/speech-to-text/realtime. Handles:
- VAD-based commit buffering with configurable merge window
- Stability timeout fallback for stalled VAD
- Text validation (EN/RU/UK character regex filtering)
- Partial and final transcript emission
Text-to-Speech
Uses client.textToSpeech.stream() with the eleven_multilingual_v2 model. Audio is collected into a Buffer and emitted as base64 MP3.
Voice Management
- client.voices.getAll(): fetches all voices from the account
- Admin can filter which voices are available to viewers
- Voice cloning via IVC API (from recordings or YouTube)
Key File
backend/src/services/elevenlabs.service.ts
Provider Details
Google Translate
Google Cloud Translation API v2. Fast (~200ms), deterministic, and reliable. Requires GOOGLE_TRANSLATE_API_KEY with the Cloud Translation API enabled in Google Cloud Console. Ensure the API key has no HTTP referrer restrictions (server-side requests have no referrer).
File: backend/src/services/google-translate.service.ts
LibreTranslate
Self-hosted in Docker. No API key required by default. Provides language detection and translation via REST API.
File: backend/src/services/libretranslate.service.ts
DeepL
Premium translation API. Auto-detects free vs. paid endpoint based on the API key format.
File: backend/src/services/deepl.service.ts
Claude (Anthropic)
AI-powered translation using claude-haiku-4-5 for speed. Includes language detection and auto-flip logic.
File: backend/src/services/claude-translate.service.ts
Routing
Provider routing is handled by backend/src/services/translation.provider.ts:
- Try admin-selected primary provider
- On failure, try configured fallback provider
- LibreTranslate is always the last-resort fallback
Connection
Uses ioredis with automatic retry strategy. Falls back to in-memory/YAML defaults if Redis is unavailable.
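A connection sketch in that spirit; ioredis's retryStrategy option is real, while the backoff values are an assumption:

import Redis from "ioredis";
const redis = new Redis({
  host: process.env.REDIS_HOST ?? "redis",
  port: 6379,
  retryStrategy: (attempts) => Math.min(attempts * 200, 2000), // ms until next retry
});
redis.on("error", (err) => {
  console.warn("redis unavailable, falling back to YAML defaults:", err.message);
});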
Key Patterns
| Pattern | Example | Purpose |
|---|---|---|
| flag:<name> | flag:youtube_input | Feature flag boolean values |
| setting:<name> | setting:tts_settings | JSON settings objects |
Key File
backend/src/services/redis.service.ts
Local Development
Use docker-compose.local.yml for Redis and LibreTranslate only (backend/frontend run natively):
docker compose -f docker-compose.local.yml up -d
Production
Use docker-compose.yml for all services:
docker compose up -d --build
Services
| Service | Image | Port | Notes |
|---|---|---|---|
| frontend | node:24-alpine + Nginx | 80 (exposed) | Serves React build, proxies API/WS to backend |
| backend | node:24-alpine | 3001 (internal) | Express + Socket.io server |
| redis | redis:7-alpine | 6379 (internal) | Feature flags and settings store |
| libretranslate | libretranslate/libretranslate | 5000 (internal) | Self-hosted translation engine |
Configuration
ELEVENLABS_API_KEY=sk-your-production-key
ADMIN_PASSWORD=strong-secure-password
FRONTEND_URL=https://translate.example.com
APP_ENV=prod
REDIS_PASSWORD=redis-secret
Deploy
docker compose up -d --build
Reverse Proxy
When running behind Nginx or another reverse proxy:
- Set LISTEN_PORT in .env (e.g., 8080)
- Proxy pass to localhost:8080
- Important: ensure WebSocket upgrades are forwarded for the /socket.io/ path
server {
    listen 443 ssl;
    server_name translate.example.com;
    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Monitoring
# Check all services
docker compose ps
# View backend logs
docker compose logs -f backend
# Health check
curl http://localhost:3001/api/health