Ship live translations
with confidence
A production-ready, full-stack Node.js + React application for live EN↔RU↔UK translation with automatic language detection and voice synthesis.
- ⚙ Installation: set up the project locally with Docker, Redis, and LibreTranslate in minutes.
- ▦ Architecture: understand the STT → Translation → TTS pipeline and real-time Socket.io communication.
- ▶ Live Translation: stream from YouTube or microphone with automatic EN/RU/UK language detection and voice output.
- 📜 Biblical Simulator: test the full pipeline with AI-generated biblical passages in King James, Church Slavonic, or Ukrainian style.
- 🎤 Voice Training: clone custom voices from microphone recordings or YouTube videos using ElevenLabs IVC.
Prerequisites
- Node.js 20+: runtime for backend and build tools
- Docker + Docker Compose: for Redis and LibreTranslate services
- yt-dlp + ffmpeg: required for YouTube audio extraction
- ElevenLabs API Key: for speech-to-text and text-to-speech
Clone & Configure
git clone https://github.com/Pzharyuk/live-translator-node.git && cd live-translator-node
cp .env.example .env
Edit .env and set your API key:
ELEVENLABS_API_KEY=sk-your-key-here
ADMIN_PASSWORD=your-secure-password
Start Infrastructure
# Start Redis + LibreTranslate
docker compose -f docker-compose.local.yml up -d
# Wait for LibreTranslate to download language models (~500 MB)
docker logs -f $(docker ps -qf "name=libretranslate") 2>&1 | grep -i "running"
Start Backend
cd backend
npm install
npm run dev # nodemon watches for changes
Start Frontend
cd frontend
npm install
npm run dev # Vite hot-reload on localhost:5173
✓
You're all set!
Open http://localhost:5173 — log in with user / changeme and you will be redirected to /translate. Admin panel: http://localhost:5173/admin (admin password: admin123).
1. Start the services: follow the Installation guide to get Docker services, backend, and frontend running.
2. Open the Admin Panel: navigate to http://localhost:5173/admin and enter the admin password.
3. Select a Voice: choose a TTS voice from the dropdown. The voice list is fetched from your ElevenLabs account.
4. Test with Text: use the free-text area in the admin panel to type a phrase. Click Translate to hear the TTS output instantly.
5. Go Live: open the user view at http://localhost:5173/translate. Select "Mic" as input, pick a voice, and click Start. Speak into your microphone and watch real-time translation appear with audio playback.
💡
Try the Biblical Simulator
For a hands-free demo, enable the biblical_simulator feature flag in admin, enter an Anthropic API key, select a language, and click "Generate". The system will produce biblical passages through the full STT → Translation → TTS pipeline.
System Overview
- Frontend: React 19 + Vite, Socket.io client, Web Audio API
- Backend: Express + Socket.io, TypeScript, service layer
- ElevenLabs: Scribe v2 (STT), TTS streaming, voice cloning
- Translation: Google Translate (Cloud API), LibreTranslate (self-hosted), DeepL (premium API), Claude / Anthropic (AI)
- Redis: feature flags, settings store
- Google Gemini: Biblical Simulator, sermon generation, voice training text
- DeepL: free & pro tiers, auto endpoint detection
Data Flow
1 Audio Input (Mic / YouTube / Simulator)
↓
2 PCM 16-bit LE @ 16kHz via Socket.io chunks
↓
3 ElevenLabs Scribe v2 WebSocket STT
↓
4 Commit Merge Buffer 2.5s VAD aggregation
↓
5 Translation Provider Google / LibreTranslate / DeepL / Claude
↓
6 ElevenLabs TTS Voice synthesis streaming
↓
7 Audio Playback Queued with 600ms pause
Key Architecture Decisions
Two-layer Language Detection
LibreTranslate's /detect endpoint returns 0-confidence for short Cyrillic phrases. The app uses script-based pre-detection (Unicode 0x0400–0x04FF = Cyrillic) combined with ElevenLabs Scribe's language_code output for reliable EN/RU/UK auto-detection.
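The script-based layer can be sketched as a small pure function. This is an illustrative reconstruction of the rule described above (count Cyrillic letters in U+0400–U+04FF against all matched letters; a >50% Cyrillic share routes to Russian), not the app's actual code; the function name is an assumption.

```typescript
// Sketch of script-based pre-detection: >50% Cyrillic letters => "ru",
// otherwise Latin text => "en". Returns null when no letters matched.
function detectSourceLanguage(text: string): "ru" | "en" | null {
  const letters = text.match(/[A-Za-z\u0400-\u04FF]/g) ?? [];
  if (letters.length === 0) return null; // digits/punctuation only
  const cyrillic = letters.filter((ch) => /[\u0400-\u04FF]/.test(ch)).length;
  return cyrillic / letters.length > 0.5 ? "ru" : "en";
}
```

Because this check never calls an external API, it is immune to the low-confidence `/detect` results on short phrases.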
VAD Commit Merging
Voice Activity Detection can fire aggressively on speaker breathing. Commits are buffered for 2.5 seconds before translation to merge fragments into meaningful phrases.
Feature Flag Merging
YAML config defaults are merged with Redis runtime overrides. Redis values take priority, falling back to YAML if Redis is unavailable.
API Key Hierarchy
Keys resolve in order: Runtime Cache → Redis → Config File → Empty. This allows hot-swapping keys without restarts.
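The hierarchy above amounts to a first-non-empty lookup across ordered sources. The following is a minimal sketch under that assumption; the source names and `resolveApiKey` helper are illustrative, not the app's actual identifiers.

```typescript
// Resolve a key by walking ordered sources; the first non-empty value wins.
type KeySource = (name: string) => string | undefined;

function resolveApiKey(name: string, sources: KeySource[]): string {
  for (const lookup of sources) {
    const value = lookup(name);
    if (value) return value; // first non-empty source wins
  }
  return ""; // all sources exhausted -> empty
}

// Example wiring: runtime cache -> Redis snapshot -> config file -> empty.
const runtimeCache: Record<string, string> = {};
const redisSnapshot: Record<string, string> = { elevenlabs: "sk-from-redis" };
const configFile: Record<string, string> = { elevenlabs: "sk-from-config" };

const key = resolveApiKey("elevenlabs", [
  (n) => runtimeCache[n],
  (n) => redisSnapshot[n],
  (n) => configFile[n],
]);
```

Hot-swapping works because writing a new key into the runtime cache or Redis changes the result of the next lookup without a restart.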
Connection Lifecycle
- Client sends start_session with source type (mic or youtube) and an optional voiceId
- Backend opens a WebSocket to wss://api.elevenlabs.io/v1/speech-to-text/realtime
- For YouTube: spawns yt-dlp | ffmpeg child processes to extract PCM audio
- For microphone: awaits audio_chunk events from the frontend
Audio Streaming
Audio chunks are sent to Scribe as JSON messages:
{
"message_type": "input_audio_chunk",
"audio_base_64": "UklGR..." // PCM 16-bit LE, 16kHz, mono
}
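Producing that message is a one-liner in Node. A hedged sketch, assuming Node's Buffer and a helper name of my choosing; the field names mirror the JSON example above.

```typescript
// Wrap a raw PCM chunk in the Scribe input message format shown above.
function toScribeMessage(pcm: Buffer): string {
  return JSON.stringify({
    message_type: "input_audio_chunk",
    audio_base_64: pcm.toString("base64"), // PCM 16-bit LE, 16 kHz, mono
  });
}
```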
Scribe Responses
| Response Type | Meaning | Action |
| --- | --- | --- |
| partial_transcript | Live partial text (speculative) | Emitted as a non-final transcript event |
| committed_transcript | VAD fired — complete phrase | Buffered for the commit merge window |
Commit Merge Buffer
After receiving a committed_transcript, the backend waits 2.5 seconds (COMMIT_MERGE_MS) to collect additional commits before translating. This prevents fragmented translations from aggressive VAD.
Stability Timeout
If VAD stalls (no new commits), a 3.5 second fallback timer (STABILITY_TIMEOUT_MS) fires to translate whatever new text has accumulated, preventing indefinite silence.
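The merge window can be sketched as a debounce-style buffer: each new commit restarts the timer, and when the window elapses the accumulated fragments are flushed as one phrase. This is an illustrative reconstruction; the class name and `onFlush` callback are assumptions, not the app's actual code.

```typescript
// Collect VAD commits; flush once no new commit arrives within mergeMs.
class CommitMergeBuffer {
  private parts: string[] = [];
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private mergeMs: number,
    private onFlush: (merged: string) => void, // e.g. send to translation
  ) {}

  push(commit: string): void {
    this.parts.push(commit);
    clearTimeout(this.timer); // a new commit restarts the merge window
    this.timer = setTimeout(() => this.flush(), this.mergeMs);
  }

  flush(): void {
    clearTimeout(this.timer);
    if (this.parts.length === 0) return;
    this.onFlush(this.parts.join(" "));
    this.parts = [];
  }
}
```

The stability timeout described above would be a second, longer timer that calls `flush()` even when no new commits arrive.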
Text Validation
Before translation, text is validated against EN/RU/UK character regex patterns. This filters out hallucinated text from the STT model (common with silence or background noise).
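A minimal sketch of such a check, assuming simple per-script character classes (the app's exact patterns are not shown in these docs): accept text only if it contains letters from the EN/RU/UK alphabets.

```typescript
// Reject text with no EN/RU/UK letters (likely STT hallucination on noise).
const SCRIPT_PATTERNS: RegExp[] = [
  /[A-Za-z]/,        // English (Latin)
  /[\u0400-\u04FF]/, // Russian / Ukrainian (Cyrillic, incl. і, ї, є, ґ)
];

function looksTranslatable(text: string): boolean {
  const trimmed = text.trim();
  if (trimmed.length === 0) return false;
  return SCRIPT_PATTERNS.some((re) => re.test(trimmed));
}
```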
Provider Chain
The system supports three translation providers with automatic fallback:
Default
LibreTranslate
Self-hosted, no API key required. Runs in Docker alongside the app. Best for privacy and cost.
Premium
DeepL
High-quality translations. Supports both free and paid API tiers. Auto-detects endpoint.
AI
Claude
Anthropic's Claude for context-aware translations. Uses claude-haiku-4-5 for speed.
Fallback Logic
1. Try primary provider (admin-selected)
2. If primary fails → try configured fallback
3. If fallback fails → try LibreTranslate (last resort)
4. If all fail → emit error event
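The four steps above can be sketched as a simple chain walk. The `Translator` signature and stub providers here are illustrative assumptions; real providers call external APIs.

```typescript
// Try each provider in order; first success wins, final failure throws.
type Translator = (text: string) => Promise<string>;

async function translateWithFallback(
  text: string,
  primary: Translator,    // admin-selected provider
  fallback: Translator,   // configured fallback
  lastResort: Translator, // LibreTranslate in the docs above
): Promise<string> {
  for (const provider of [primary, fallback, lastResort]) {
    try {
      return await provider(text);
    } catch {
      // provider failed -> try the next one in the chain
    }
  }
  throw new Error("all translation providers failed"); // -> emit error event
}
```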
Language Detection
The app uses a two-layer auto-detection approach:
Layer 1: Script-based Pre-detection
Before calling any translation API, the backend checks Unicode character scripts:
- Cyrillic characters (Unicode 0x0400–0x04FF) → if >50% of matched letters are Cyrillic, detected as Russian
- Latin characters → detected as English
- This avoids low-confidence results from LibreTranslate's /detect endpoint on short text
Layer 2: STT Language Code
When the auto_language_detect flag is enabled, ElevenLabs Scribe returns a language_code with each transcript commit. The backend uses this to correctly route EN/RU/UK without relying solely on script detection.
Note: For LibreTranslate, both Russian and Ukrainian Cyrillic text is passed with source ru since LibreTranslate handles Ukrainian text acceptably via the Russian model. DeepL and Claude providers distinguish Ukrainian natively and handle uk as a proper source language.
Language Gating
Detected languages are checked against the admin-approved pool. If a detected language isn't in the allowed set, the translation is rejected to prevent hallucinated language outputs.
TTS Pipeline
After translation, the text is sent to ElevenLabs TTS:
const stream = await client.textToSpeech.stream(voiceId, {
text: translatedText,
model_id: "eleven_multilingual_v2",
output_format: "mp3_44100_128",
voice_settings: {
stability: 0.5,
similarity_boost: 0.75,
style: 0.0,
speed: 1.0,
use_speaker_boost: true
}
});
Audio Delivery
TTS audio is streamed to a Buffer, then emitted as a base64-encoded MP3 via the tts_audio Socket.io event.
Frontend Playback Queue
The frontend maintains an audio queue to prevent overlapping playback:
- Received tts_audio events are queued
- Each segment plays to completion before the next starts
- A configurable pause (600ms default) is inserted between segments
- The pause duration is controlled by tts_segment_pause_ms (adjustable in admin)
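The queue behavior above can be sketched as follows. This is an illustrative model, not the app's actual frontend code: `play()` stands in for decoding and playing an MP3 blob in the browser, and the class name is an assumption.

```typescript
// Play queued segments strictly one at a time, with a pause between them.
class PlaybackQueue {
  private queue: string[] = [];
  private playing = false;

  constructor(
    private pauseMs: number, // tts_segment_pause_ms in the docs above
    private play: (segment: string) => Promise<void>,
  ) {}

  enqueue(segment: string): void {
    this.queue.push(segment);
    if (!this.playing) void this.drain();
  }

  private async drain(): Promise<void> {
    this.playing = true;
    while (this.queue.length > 0) {
      await this.play(this.queue.shift()!); // segment plays to completion
      await new Promise((r) => setTimeout(r, this.pauseMs)); // inter-segment pause
    }
    this.playing = false;
  }
}
```

Serializing playback this way is what prevents two TTS segments from overlapping when translations arrive faster than they can be spoken.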
Microphone Input
- User selects "Mic" tab and chooses a TTS voice
- Browser captures audio via Web Audio API's
ScriptProcessor
- PCM 16-bit LE at 16kHz sample rate sent to backend via Socket.io
- Backend pipes audio to ElevenLabs Scribe v2 Realtime WebSocket
- Language auto-detected (EN/RU/UK), text translated and synthesized
- TTS audio returned and played back with inter-segment pauses
YouTube Input
- User pastes a YouTube URL (live stream or video)
- Backend spawns yt-dlp | ffmpeg child processes
- Audio extracted as PCM stream (16kHz, 16-bit LE, mono)
- Piped to Scribe v2, same pipeline as microphone
- Stream ends when YouTube content ends or user stops
User Interface
The user view features a dark cavern theme with:
- Waveform visualizer — Canvas-based bar chart with orange gradient and cyan tips
- Transcript display — White translated text scrolls upward with fade masks
- Partial transcript — Shown in italic orange while STT is processing
- Source tabs — Toggle between Mic and YouTube (controlled by feature flags)
How It Works
The backend uses yt-dlp and ffmpeg as child processes to extract audio from YouTube URLs:
yt-dlp (best audio) → ffmpeg (PCM 16kHz 16-bit LE mono) → Scribe v2
Supported Sources
- Live streams — Translates in real-time as the stream progresses
- Regular videos — Processes the full audio track
- Any URL supported by yt-dlp (YouTube, etc.)
Requirements
Both yt-dlp and ffmpeg must be installed and available in the system PATH. On macOS:
brew install yt-dlp ffmpeg
⚠
Feature Flag Required
YouTube input is controlled by the youtube_input feature flag. Enable it in the admin panel to show the YouTube tab in the user view.
Overview
The Biblical Transcript Simulator is an admin-only feature that generates biblical text passages using Google's Gemini API (gemini-2.5-flash), then routes them through the full translation pipeline. This provides a hands-free way to test STT → Translation → TTS without a live audio source.
Language Styles
| Language | Style | Example |
| --- | --- | --- |
| en | King James English | "In the beginning was the Word..." |
| ru | Church Slavonic Russian | "В начале было Слово..." |
| uk | Traditional Ukrainian | "На початку було Слово..." |
Flow
- Admin selects language (EN/RU/UK)
- Backend calls Gemini 2.5 Flash with streaming
- Gemini generates 6-8 biblical passages, 3-5 sentences each
- Stream is buffered until 140+ characters AND complete sentences
- Chunks emitted with 1800ms smooth pacing between them
- Each chunk flows through the standard pipeline:
- Emitted as transcript (isFinal: true)
- Auto-translated via configured provider
- TTS synthesized and audio returned
- Frontend plays audio with standard inter-segment pause
💡
Feature Flag
Enable biblical_simulator in the admin feature flags panel. The Gemini API key is configured via the GEMINI_API_KEY environment variable or set at runtime in the admin API Keys panel.
Overview
Voice Training uses ElevenLabs' Instant Voice Cloning (IVC) API to create custom voices from audio samples. Once cloned, the voice appears in the voice selector immediately.
From Microphone
- Open the Voice Training section in the admin panel
- Click Generate Text to get an AI-generated reading passage (via Gemini) — gives the speaker natural, phonetically diverse text to read aloud
- Record multiple audio clips using your browser microphone while reading the generated text
- Provide a name for the voice
- Clips are uploaded to ElevenLabs IVC API
- Cloned voice is available for TTS immediately
- Click Preview Voice to hear the cloned voice speak a sample sentence via TTS
From YouTube
- Paste a YouTube URL in the Voice Training section
- Backend extracts N × 30-second clips via yt-dlp + ffmpeg
- Clips are uploaded to ElevenLabs IVC API
- Resulting voice is stored in your ElevenLabs account
⚠
ElevenLabs Account
Cloned voices are stored in your ElevenLabs account, not locally. Ensure your plan supports voice cloning.
Concepts
| Concept | Description |
| --- | --- |
| Active Language Pair | The current pair used for translation (e.g., EN ↔ RU, EN ↔ UK, or RU ↔ UK). Set by admin. |
| Available Languages | The pool of languages viewers can select from (if user_language_selector is enabled). |
Admin Controls
- Change the active language pair via the admin panel
- Changes broadcast to all connected clients in real-time
- Manage the available languages pool for viewer selection
Viewer Selection
When the user_language_selector feature flag is enabled, viewers can override the admin-set language pair by selecting their own preferred languages from the available pool.
Overview
Two people can video call each other through the app, each speaking their own language. The app transcribes, translates, and synthesizes speech in real-time so each participant hears the other in their language.
Feature flag: Video call is gated behind the video_translation flag. Enable it in the admin panel or set video_translation: true in your YAML config.
How It Works
- Create a room — Person A selects their language, picks a TTS voice, and clicks "Create Room". A 6-character room code is generated.
- Share the code — Person A shares the room code with Person B (copy button provided).
- Join the room — Person B enters the code, selects their language and TTS voice, and clicks "Join".
- WebRTC connection — The app establishes a peer-to-peer video connection via WebRTC (signaled through Socket.io). Video flows directly between browsers.
- Audio translation — Each participant's microphone audio is simultaneously:
- Sent to the peer via WebRTC (but muted on their end)
- Captured as PCM chunks and sent to the backend via Socket.io for STT
- Translation pipeline — Each participant has their own independent Scribe STT session. Transcribed text is translated to the other participant's language, then synthesized via ElevenLabs TTS and sent back to the peer.
- Playback — The peer hears the TTS translation instead of the raw audio. Translated transcript is displayed below the video.
Architecture
Person A (Browser) Server Person B (Browser)
├─ getUserMedia ├─ Socket.io ├─ getUserMedia
├─ WebRTC P2P ═══video═══►│ (signaling) ◄═══ ├─ WebRTC P2P
│ │ │
├─ PCM chunks ──Socket.io─►├─ ScribeA(STT) │
│ │ ↓ translate │
│ │ ↓ TTS ───────────►├─ Plays TTS
│ │ │
│ Plays TTS ◄─────────────├─ ScribeB(STT) ◄───├─ PCM chunks
│ (remote video muted) │ ↓ translate │ (remote video muted)
└──────────────────────────┴────────────────────┘
Socket Events
| Event | Direction | Purpose |
| --- | --- | --- |
| video_create_room | C→S | Create a new room with language + voice |
| video_room_created | S→C | Returns the 6-char room code |
| video_join_room | C→S | Join an existing room |
| video_room_joined | S→C | Sent to both participants, triggers WebRTC |
| video_signal_offer/answer/ice | C↔S | WebRTC signaling relay |
| video_audio_chunk | C→S | PCM audio for STT processing |
| video_transcript | S→C | Transcript sent to the speaker |
| video_translation | S→C | Translation sent to the listener |
| video_tts_audio | S→C | TTS audio sent to the listener |
| video_leave_room | C→S | Leave the room |
| video_room_closed | S→C | Notifies the peer when the other participant leaves |
Room Lifecycle
- Rooms are stored in Redis with key video_room:{code} and a 4-hour TTL
- Maximum 2 participants per room
- When one participant disconnects, the other is notified and the call ends
- Scribe sessions are automatically cleaned up on disconnect
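The docs only specify that room codes are 6 characters; a hypothetical generator might look like this. The alphabet here (which omits easily confused characters like 0/O and 1/I) is my assumption, not the app's actual implementation.

```typescript
// Hypothetical 6-character room-code generator.
const ROOM_CODE_ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";

function generateRoomCode(length = 6): string {
  let code = "";
  for (let i = 0; i < length; i++) {
    code += ROOM_CODE_ALPHABET[Math.floor(Math.random() * ROOM_CODE_ALPHABET.length)];
  }
  return code;
}
```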
Feature Flags
Feature flags control which routes and UI sections are enabled. Defaults are defined in config/application.yaml and can be overridden at runtime via Redis (admin panel). The system merges YAML defaults with any Redis overrides, allowing live feature toggling without redeployment.
| Flag | Default | Description |
| --- | --- | --- |
| youtube_input | true | Allow users to stream audio from YouTube URLs as an input source. |
| mic_input | true | Allow users to input audio from their microphone. |
| auto_language_detect | true | Automatically detect the source language; if false, the user must specify it. |
| user_language_selector | false | Allow users to change the active language pair; if false, the admin controls it. |
| audio_device_selector | true | Show the microphone device selection dropdown in the UI. |
| video_translation | true | Enable the /video route for real-time peer-to-peer video call translation. |
| video_voice_cloning | false | Premium feature — show the voice cloning option in the video call lobby. |
| remote_audio_source | false | Enable the /audio-source route for headless remote audio relay to broadcast. |
| broadcast | false | Enable the /broadcast route — a public receiver page for live translation streams. |
| translate | false | Enable the /translate route — the live translator page for personal translation sessions. |
Storage & Runtime Override
Flags are stored in Redis with the key prefix flag: (e.g., flag:broadcast). On each request, the server merges YAML defaults with Redis values, giving Redis overrides priority. This allows admins to toggle features live via the admin panel without restarting the server.
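The merge step can be sketched as a plain object spread with override filtering. This is a minimal model under the rules stated above (YAML is the base, a Redis value wins when present); the helper name and flag subset are illustrative.

```typescript
// Merge YAML defaults with Redis runtime overrides; Redis takes priority.
type FlagMap = Record<string, boolean>;

function mergeFlags(yamlDefaults: FlagMap, redisOverrides: Partial<FlagMap>): FlagMap {
  const merged: FlagMap = { ...yamlDefaults };
  for (const [flag, value] of Object.entries(redisOverrides)) {
    if (value !== undefined) merged[flag] = value; // Redis override wins
  }
  return merged;
}

const mergedFlags = mergeFlags(
  { broadcast: false, mic_input: true }, // from config/application.yaml
  { broadcast: true },                   // e.g. value of Redis key "flag:broadcast"
);
```

If Redis is unavailable, the overrides map is simply empty and the YAML defaults pass through unchanged.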
Admin API
Feature flags can be managed programmatically via the admin REST endpoints:
GET /admin/flags
→ Returns all flags merged from config + Redis
{
"flags": {
"youtube_input": true,
"mic_input": true,
"auto_language_detect": true,
"user_language_selector": false,
"audio_device_selector": true,
"video_translation": true,
"video_voice_cloning": false,
"remote_audio_source": false,
"broadcast": false,
"translate": false
}
}
GET /admin/flags/:flag
→ Read a single flag value
POST /admin/flags/:flag
→ Set a flag & broadcast update to all connected clients
{
"value": true
}
Returns:
{
"flag": "broadcast",
"value": true
}
When a flag is updated via POST /admin/flags/:flag, the server broadcasts a feature_flags event to all connected Socket.IO clients, ensuring instant UI synchronization across all pages.
File Structure
| File | Purpose |
| --- | --- |
| config/application.yaml | Base defaults for all environments |
| config/application-local.yaml | Local development overrides (localhost URLs) |
| config/application-prod.yaml | Production overrides (Docker service names) |
The APP_ENV environment variable (local or prod) determines which overlay file is loaded on top of the base config.
Full Configuration Reference
server:
port: 3001
cors_origin: "http://localhost:5173"
elevenlabs:
api_key: "${ELEVENLABS_API_KEY}"
default_voice_id: "kxj9qk6u5PfI0ITgJwO0"
tts_model: "eleven_multilingual_v2"
tts_settings:
stability: 0.5
similarity_boost: 0.75
style: 0.0
speed: 1.0
use_speaker_boost: true
stt_model: "scribe_v2"
anthropic:
api_key: "${ANTHROPIC_API_KEY}"
deepl:
api_key: "${DEEPL_API_KEY}"
libretranslate:
url: "http://libretranslate:5000"
api_key: ""
redis:
host: "redis"
port: 6379
password: ""
feature_flags:
youtube_input: true
mic_input: true
auto_language_detect: true
user_language_selector: false
audio_device_selector: true
video_translation: false
video_voice_cloning: false
broadcast: false
audio:
sample_rate: 16000
channels: 1
chunk_duration_ms: 250
translation:
source_lang: "auto"
target_lang_en: "en"
target_lang_ru: "ru"
provider: "libretranslate"
fallback: "libretranslate"
Environment Variable Interpolation
YAML values using ${VAR_NAME} syntax are automatically replaced with the corresponding environment variable at startup.
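The substitution itself is a simple regex pass over loaded string values. A sketch under that assumption (the behavior for missing variables, an empty string here, is my guess rather than documented behavior):

```typescript
// Replace ${VAR_NAME} placeholders with values from the environment map.
function interpolate(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{([A-Z0-9_]+)\}/g, (_, name) => env[name] ?? "");
}
```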
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| ELEVENLABS_API_KEY | Yes | — | ElevenLabs API key for speech synthesis & real-time STT. |
| ELEVENLABS_VOICE_ID | No | kxj9qk6u5PfI0ITgJwO0 | Default ElevenLabs voice ID when the user doesn't select one. |
| ANTHROPIC_API_KEY | No | — | Anthropic API key for the Claude translation provider & sermon generation. |
| GEMINI_API_KEY | No | — | Google Gemini API key for the biblical simulator & sermon generation (cheaper alternative to Anthropic). |
| GOOGLE_TRANSLATE_API_KEY | No | — | Google Cloud Translation API key; default translation provider. |
| DEEPL_API_KEY | No | — | DeepL API key for the translation provider fallback. |
| TRANSLATION_PROVIDER | No | google | Boot-time default translation provider (google \| deepl \| claude \| libretranslate); can be changed live in the Admin UI. |
| APP_ENV | No | local | Application environment: local (dev) or prod (Docker). |
| FRONTEND_URL | No | http://localhost | Frontend URL used for the CORS origin in production; set to your actual domain. |
| LISTEN_PORT | No | 80 | Port the frontend listens on. |
| REDIS_PASSWORD | No | — | Redis authentication password; leave empty for no authentication. |
| LIBRETRANSLATE_API_KEY | No | — | LibreTranslate API key if your instance requires authentication. |
| ADMIN_PASSWORD | No | admin123 | Legacy socket authentication password for the admin page; change in production. |
| APP_ADMIN_USERNAME | No | admin | Admin user seeded into the database on first boot. |
| APP_ADMIN_PASSWORD | No | admin123 | Admin user password; change in production. |
| APP_USERNAME | No | user | User-facing login username; change in production. |
| APP_PASSWORD | No | changeme | User-facing login password; change in production. |
| JWT_SECRET | Yes | — | Secret key for JWT session tokens; generate a strong random string via openssl rand -hex 32. |
| COOKIE_SECURE | No | true | Enable secure cookies (HTTPS only); set to true in production. |
| DB_PASSWORD | Yes | — | PostgreSQL database password for the "translator" user. |
TTS Settings
API Endpoints
GET /admin/tts-settings
Returns: { settings: TtsSettings }
POST /admin/tts-settings
Body: Partial<TtsSettings>
Returns: { settings: TtsSettings }
| Setting | Range | Default | Description |
| --- | --- | --- | --- |
| stability | 0.0 – 1.0 | 0.5 | Voice stability: lower = more expressive & variable, higher = more consistent & monotone. |
| similarity_boost | 0.0 – 1.0 | 0.75 | How closely the voice model mimics the original voice characteristics. |
| style | 0.0 – 1.0 | 0.0 | Exaggeration of voice style: 0 = neutral, higher = more dramatic/emotional. |
| speed | 0.1 – 2.0 | 1.0 | Playback speed multiplier for TTS audio output. |
| use_speaker_boost | boolean | true | Enable speaker boost for louder, more prominent voice output. |
STT Timing Settings
Controls speech recognition buffering, accumulation, & translation triggers.
API Endpoints
GET /admin/stt-timing
Returns: { settings: SttTimingSettings }
POST /admin/stt-timing
Body: Partial<SttTimingSettings>
Returns: { settings: SttTimingSettings }
| Setting | Range | Default | Description |
| --- | --- | --- | --- |
| commit_merge_ms | 0 – 10000 | 1500 | Time to buffer VAD commits before flushing to translation (merges short VAD fragments). |
| stability_timeout_ms | 0 – 10000 | 2000 | Time to wait for stable partial text before dispatching for translation (stability fallback timer). |
| tts_segment_pause_ms | 0 – 5000 | 0 | Pause between TTS audio segments (frontend-facing; controls playback timing). |
| max_accumulation_ms | 0 – 30000 | 8000 | Max time to accumulate words during continuous speech before force-dispatching for translation (prevents late dispatch during long sentences). |
| vad_threshold | 0.0 – 1.0 | 0.5 | Voice Activity Detection sensitivity: higher = stricter noise filter, lower = more permissive. |
| vad_silence_threshold_secs | 0.1 – 5.0 | 1.0 | Seconds of silence required before VAD triggers a commit event. |
| min_speech_duration_ms | 0 – 1000 | 100 | Ignore speech shorter than this duration (noise rejection). |
| min_silence_duration_ms | 0 – 1000 | 100 | Minimum silence gap required between detected speech segments. |
| flush_on_sentence_boundary | boolean | true | When true, split chunks at sentence boundaries (.?!;) instead of flushing all at once. |
| min_chars_before_dispatch | 1 – 1000 | 40 | Minimum characters required before a chunk is dispatched for translation (prevents tiny fragments). |
Video Call Settings
Separate STT/TTS timing for low-latency video call translation mode.
API Endpoints
GET /admin/video-settings
Returns: VideoCallSettings
POST /admin/video-settings
Body: Partial<VideoCallSettings>
Returns: VideoCallSettings
| Setting | Range | Default | Description |
| --- | --- | --- | --- |
| stability_ms | 100 – 5000 | 500 | Wait time for stable partial text before translating (optimized for low latency in video calls). |
| commit_merge_ms | 0 – 500 | 50 | Merge time for VAD commits in video mode (very short for responsiveness). |
| translation_provider | libretranslate \| claude \| deepl \| google | claude | Translation provider for video call sessions (can differ from the broadcast provider). |
STT Timing Settings
Configure speech-to-text timing parameters that control when transcripts are dispatched for translation. All values are in milliseconds unless otherwise noted.
Settings Table
| Setting | Default | Description |
| --- | --- | --- |
| commit_merge_ms | 1500 | Buffer VAD commits for this duration before translating (merges short speech fragments into coherent chunks). |
| stability_timeout_ms | 2000 | Wait for stable partial text (unchanged for this duration) before translating when VAD doesn't fire. |
| tts_segment_pause_ms | 0 | Pause between TTS audio segments sent to the frontend (prevents audio stuttering on slow connections). |
| max_accumulation_ms | 8000 | Force-dispatch accumulated words after this duration during continuous speech (ensures responsiveness without waiting for VAD). |
| vad_threshold | 0.5 | Voice Activity Detection sensitivity (0–1, higher = stricter noise filtering). |
| vad_silence_threshold_secs | 1.0 | Seconds of silence before VAD triggers a commit (Scribe parameter, sent as a WebSocket query parameter). |
| min_speech_duration_ms | 100 | Ignore speech shorter than this duration (Scribe parameter, filters noise). |
| min_silence_duration_ms | 100 | Minimum silence gap in milliseconds (Scribe parameter). |
| flush_on_sentence_boundary | true | When true, dispatch chunks at sentence boundaries (.?!) instead of waiting for a timeout (improves punctuation preservation). |
| min_chars_before_dispatch | 40 | Minimum characters required before a chunk is dispatched for translation (prevents tiny fragments like "um" or "uh"). |
API Endpoints
GET /admin/stt-timing
Retrieve current STT timing settings.
curl -X GET http://localhost:3001/admin/stt-timing \
-H "Cookie: auth_token=YOUR_JWT_TOKEN"
Response:
{
"settings": {
"commit_merge_ms": 1500,
"stability_timeout_ms": 2000,
"tts_segment_pause_ms": 0,
"max_accumulation_ms": 8000,
"vad_threshold": 0.5,
"vad_silence_threshold_secs": 1.0,
"min_speech_duration_ms": 100,
"min_silence_duration_ms": 100,
"flush_on_sentence_boundary": true,
"min_chars_before_dispatch": 40
}
}
POST /admin/stt-timing
Update one or more STT timing settings. All fields are optional — omitted fields retain their current value.
curl -X POST http://localhost:3001/admin/stt-timing \
-H "Content-Type: application/json" \
-H "Cookie: auth_token=YOUR_JWT_TOKEN" \
-d '{
"max_accumulation_ms": 10000,
"min_chars_before_dispatch": 50,
"vad_threshold": 0.6
}'
Response:
{
"settings": {
"commit_merge_ms": 1500,
"stability_timeout_ms": 2000,
"tts_segment_pause_ms": 0,
"max_accumulation_ms": 10000,
"vad_threshold": 0.6,
"vad_silence_threshold_secs": 1.0,
"min_speech_duration_ms": 100,
"min_silence_duration_ms": 100,
"flush_on_sentence_boundary": true,
"min_chars_before_dispatch": 50
}
}
Tuning Guide
- For snappy response time: reduce max_accumulation_ms (e.g., 5000) and min_chars_before_dispatch (e.g., 20). Trade-off: shorter fragments may feel choppy.
- For complete sentences: increase max_accumulation_ms (e.g., 12000) and min_chars_before_dispatch (e.g., 80). Trade-off: longer latency between speech and translation.
- To merge short pauses: increase commit_merge_ms (e.g., 2500). Useful when speakers pause mid-sentence to breathe.
- To handle continuous speech: enable flush_on_sentence_boundary and set max_accumulation_ms to a reasonable duration (8000–10000ms). This ensures translation happens naturally at sentence breaks rather than at timeouts.
- For noisy environments: increase vad_threshold (e.g., 0.7) and min_silence_duration_ms (e.g., 200) to filter background noise. Increase vad_silence_threshold_secs (e.g., 1.5) to require longer silence before VAD triggers.
- For quiet environments: lower vad_threshold (e.g., 0.3) to detect softer speech; reduce vad_silence_threshold_secs (e.g., 0.8).
- To reduce TTS audio gaps: increase tts_segment_pause_ms (e.g., 50–200) so playback waits briefly between chunks, allowing the next audio segment to buffer.
- For real-time latency: disable flush_on_sentence_boundary, reduce stability_timeout_ms (e.g., 800), and set max_accumulation_ms low (e.g., 3000).
How STT Timing Works
The STT pipeline uses multiple mechanisms to decide when a transcript fragment is ready for translation:
- VAD (Voice Activity Detection): Scribe detects silence and fires a committed_transcript event. These commits are buffered for commit_merge_ms to merge fragments split by brief pauses.
- Stability Timer: if VAD doesn't fire and partial text remains unchanged for stability_timeout_ms, translation triggers. This handles cases where the VAD parameters are too conservative.
- Accumulation Timer: during continuous speech (sermon, monologue), neither VAD nor stability fires reliably. The accumulation timer ensures translation happens at least every max_accumulation_ms, preventing long gaps. Text is split at sentence boundaries when flush_on_sentence_boundary is enabled.
- Sentence Boundary Detection: when enabled, the pipeline detects complete sentences (.?!) in partial text and dispatches them immediately, rather than waiting for VAD or timers. This preserves punctuation and improves natural breakpoints.
- Minimum Character Threshold: fragments shorter than min_chars_before_dispatch are held back to avoid tiny, noisy translations ("um", "uh", single words).
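The sentence-boundary flush can be sketched as a splitter that emits complete sentences and keeps the unfinished tail buffered. This is an illustrative reconstruction; the app's exact splitting logic is not shown in these docs, and trailing sentences without following whitespace stay buffered here by design.

```typescript
// Split partial text into dispatch-ready sentences (ending in . ? or !)
// plus an unfinished remainder to keep accumulating.
function splitAtSentenceBoundary(text: string): { ready: string[]; rest: string } {
  const ready: string[] = [];
  let rest = text;
  let match: RegExpMatchArray | null;
  // Repeatedly peel off the shortest leading "sentence + whitespace" prefix.
  while ((match = rest.match(/^(.*?[.?!])\s+/)) !== null) {
    ready.push(match[1]);
    rest = rest.slice(match[0].length);
  }
  return { ready, rest };
}
```

Combined with min_chars_before_dispatch, each entry in `ready` would additionally be checked for minimum length before being sent to translation.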
Settings Persistence
All STT timing settings are persisted to Redis under the key stt_timing. Changes made via the admin API take effect immediately for new sessions; active sessions continue with the settings at startup until manually restarted.
WebSocket Query Parameters
The following settings are sent as query parameters to the Scribe WebSocket endpoint:
vad_threshold → controls noise sensitivity
vad_silence_threshold_secs → silence duration before commit
min_speech_duration_ms → minimum speech fragment duration
min_silence_duration_ms → minimum silence gap
See the Scribe v2 Realtime documentation for full details on these ElevenLabs parameters.
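Putting the parameters above together, the WebSocket URL might be assembled like this. The base endpoint appears earlier in these docs; serializing the settings with URLSearchParams is my assumption about how the query string is built.

```typescript
// Build the Scribe realtime URL with the VAD-related query parameters.
function buildScribeUrl(params: {
  vad_threshold: number;
  vad_silence_threshold_secs: number;
  min_speech_duration_ms: number;
  min_silence_duration_ms: number;
}): string {
  const qs = new URLSearchParams(
    Object.entries(params).map(([k, v]) => [k, String(v)]),
  );
  return `wss://api.elevenlabs.io/v1/speech-to-text/realtime?${qs.toString()}`;
}
```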
Authentication: All endpoints require a valid JWT cookie (COOKIE_NAME). Admin endpoints enforce is_admin=true OR user roles with appropriate permissions. Roles & permissions are validated per-endpoint via requirePermission(...) middleware.
API Keys
Retrieve all API key statuses (shows which keys are configured, not values).
Update one or more API keys (elevenlabs, anthropic, deepl, libretranslate, google).
Body: {
"elevenlabs": "string",
"anthropic": "string",
"deepl": "string",
"libretranslate": "string",
"google": "string"
}
Retrieve the configured Anthropic API key (returns key value).
Voice Management
Scan & list all available ElevenLabs voices with metadata (name, voice_id, category, preview_url).
Get the list of voice IDs allowed for use by viewers (null = all voices allowed).
Restrict available voices to a specific list; broadcasts update to all connected clients.
Body: {
"voiceIds": ["voice_id_1", "voice_id_2"]
}
Feature Flags
Retrieve all feature flags (merged from YAML config defaults & Redis overrides).
Get the value of a single feature flag.
Set a feature flag value & broadcast updated flags to all connected clients.
TTS & STT Settings
Get current TTS settings (stability, similarity_boost, style, speed, use_speaker_boost).
Update TTS voice parameters for ElevenLabs.
Body: {
"stability": 0.5,
"similarity_boost": 0.75,
"style": 0.0,
"speed": 1.0,
"use_speaker_boost": true
}
Get STT timing settings (VAD thresholds, silence duration, accumulation timeout, sentence boundary flush).
Update STT timing & VAD parameters for live speech-to-text.
Body: {
"commit_merge_ms": 1500,
"stability_timeout_ms": 2000,
"tts_segment_pause_ms": 0,
"max_accumulation_ms": 8000,
"vad_threshold": 0.5,
"vad_silence_threshold_secs": 1.0,
"min_speech_duration_ms": 100,
"min_silence_duration_ms": 100,
"flush_on_sentence_boundary": true,
"min_chars_before_dispatch": 40
}
Get video call STT/TTS settings (stability_ms, commit_merge_ms, translation_provider).
Update video call translation pipeline settings.
Body: {
"stability_ms": 500,
"commit_merge_ms": 50,
"translation_provider": "claude"
}
Languages
Get the current active language pair (e.g., ["en", "ru"]).
Set the active language pair & broadcast to all viewers for real-time update.
Body: {
"languages": ["en", "ru"]
}
Get the pool of languages available for viewers to select from.
Update the available language pool & broadcast to all clients.
Body: {
"languages": ["en", "ru", "uk"]
}
Translation Provider
Get the currently active translation provider & list of available providers (google, deepl, claude, libretranslate).
Set the active translation provider (routing all translate calls through it).
Body: {
"provider": "google"
}
Get the currently selected Claude model & list of available Claude models.
Set the Claude model to use when translation provider is "claude".
Body: {
"model": "claude-3-5-sonnet-20241022"
}
Audio Device
Get the admin-selected audio input device (overrides viewer's local choice).
Set the admin-enforced audio input device & broadcast to viewers.
Body: {
"deviceId": "device_id_string",
"label": "Microphone Name"
}
Text-to-Speech Preview
Generate & return MP3 audio for a test text string in a given voice.
Body: {
"text": "Hello world",
"voiceId": "kxj9qk6u5PfI0ITgJwO0"
}
Voice Training & Cloning
Clone a voice from browser microphone recordings (base64-encoded audio blobs).
Body: {
"name": "My Custom Voice",
"clips": ["base64_audio_blob_1", "base64_audio_blob_2"],
"mimeType": "audio/webm"
}
Clone a voice from a YouTube video (server-side yt-dlp & ffmpeg extraction).
Body: {
"name": "My Cloned Voice",
"youtubeUrl": "https://www.youtube.com/watch?v=...",
"clipCount": 3,
"startOffset": 0
}
Sermon Generation
Generate a biblical sermon excerpt via Gemini 2.5 Flash (configurable language and sentence count).
Body: {
"apiKey": "optional_gemini_api_key",
"language": "en",
"sentences": 5
}
Broadcast Schedule
Get the list of upcoming broadcast events (with timestamps & descriptions).
Update the broadcast schedule (array of events with id, title, datetime, description).
Body: {
"events": [
{
"id": "evt_1",
"title": "Sunday Service",
"datetime": "2024-01-14T10:00:00Z",
"description": "Weekly sermon"
}
]
}
Monitoring & Diagnostics
Get hallucination detection stats & log (false transcript detections).
Clear the hallucination detection log.
Get recent translation audit log entries (original, translated, detected language, timing).
Clear the translation audit log.
Get real-time broadcast TTS queue depth & Redis Streams stats (pending, translated messages).
Session History
List all broadcast sessions from PostgreSQL (with metadata & transcript counts).
Get detailed transcript & metadata for a specific session.
Export session transcripts in JSON, CSV, or TXT format (download attachment).
User Management
Requires: user_management permission. List all users (password hashes stripped).
Requires: user_management permission. Update user admin status & assign roles.
Body: {
"isAdmin": true,
"roleIds": ["role_id_1", "role_id_2"]
}
Requires: user_management permission. Set a new password for a user (minimum 6 characters).
Body: {
"password": "newpassword123"
}
Requires: user_management permission. Delete a user account (cannot delete own account).
Roles & Permissions
Requires: user_management permission. List all available permissions (constants).
Requires: user_management permission. List all defined roles & their permissions.
Requires: user_management permission. Create a new role with specific permissions.
Body: {
"name": "Translator",
"permissions": ["broadcast_control", "voice_management"]
}
Requires: user_management permission. Update role name & permissions.
Body: {
"name": "Updated Role Name",
"permissions": ["broadcast_control"]
}
Requires: user_management permission. Delete a role.
Socket.IO Events
Server → Client Events
| Event | Payload | Description |
| --- | --- | --- |
| feature_flags | { [flag: string]: boolean } | Merged feature flags from YAML defaults & Redis overrides. |
| languages | { languages: string[] } | Current active language pair (source & target). |
| available_languages | { languages: string[] } | Pool of languages viewers can select from. |
| stt_timing | { tts_segment_pause_ms: number } | STT timing configuration for frontend audio pause behavior. |
| broadcast_status | { active: boolean, source?: string, pauseReason?: string \| null, skipSourceLang?: string \| null, voiceId?: string, orphaned?: boolean } | Global broadcast on/off status & metadata; sent to all clients. |
| broadcast_viewer_count | { count: number } | Number of viewers in the broadcast room. |
| remote_audio_sources | { sources: Array<{ socketId: string, label: string, deviceId: string }> } | List of registered remote audio sources for broadcast. |
| admin_audio_device | { deviceId: string, label: string } | Admin-selected audio device override for viewers. |
| transcript | { text: string, isFinal: boolean } | Live transcription for private session. |
| translation | { original: string, translated: string, detectedLanguage?: string } | Translated text for private session. |
| tts_audio | { audio: string } | Base64-encoded MP3 audio for private session TTS. |
| audio_level | { data: number[] } | Waveform data (downsampled PCM) for audio level visualization. |
| session_started | { source: 'mic' \| 'youtube' } | Private session successfully started. |
| session_stopped | {} | Private session ended. |
| stream_ended | {} | YouTube or broadcast stream ended naturally. |
| tts_clear_queue | {} | Clear pending TTS audio queue (broadcast pause or session stop). |
| error | { message: string } | Error notification (STT failure, translation error, etc.). |
| broadcast_transcript | { text: string, isFinal: boolean, skipped?: boolean } | Live transcription for broadcast viewers. |
| broadcast_translation | { original: string, translated: string, detectedLanguage?: string } | Translated text for broadcast viewers. |
| broadcast_tts_audio | { audio: string } | Base64-encoded MP3 audio for broadcast TTS. |
| broadcast_voice_changed | { voiceId: string } | TTS voice changed mid-broadcast. |
| admin_translate_result | { original: string, translated: string, detectedLanguage?: string, audio: string } | Result of admin instant translate test (private to admin socket). |
Client → Server Events
| Event | Payload | Description |
| --- | --- | --- |
| join_broadcast | {} | Viewer joins the broadcast room to receive live translation & audio. |
| leave_broadcast | {} | Viewer leaves the broadcast room. |
| set_languages | { languages: string[] } | Viewer selects language pair from available pool. |
| start_session | { source: 'mic' \| 'youtube', voiceId?: string, youtubeUrl?: string } | Start private translation session from mic or YouTube. |
| stop_session | {} | Stop private session. |
| change_voice | { voiceId: string } | Change TTS voice mid-session (live). |
| audio_chunk | { audio: string } | Base64-encoded PCM audio chunk from mic; routes to broadcast or private session based on context. |
| test_audio_chunk | { audio: string } | Audio chunk for testing (never routes to broadcast). |
| admin_start_broadcast | { voiceId?: string, source: 'mic' \| 'youtube' \| 'remote', youtubeUrl?: string } | Admin starts broadcast from mic, YouTube, or remote audio source. |
| admin_stop_broadcast | {} | Admin stops broadcast. |
| reclaim_broadcast | {} | Admin reclaims an orphaned broadcast after reconnect. |
| broadcast_pause | { reason: 'prayer' \| 'song' } | Pause broadcast (no STT audio sent to Scribe; clears TTS queue). |
| broadcast_resume | {} | Resume broadcast after pause. |
| broadcast_skip_lang | { lang: string \| null } | Skip translation/TTS for source language (e.g., human translator mode). |
| register_audio_source | { label: string, deviceId: string } | Register as remote audio source for broadcast. |
| unregister_audio_source | {} | Unregister as remote audio source. |
| admin_translate_test | { text: string, voiceId?: string, sourceLang?: string, targetLang?: string } | Admin instant translate & TTS test (private to admin socket). |
| start_biblical_sim | { anthropicApiKey?: string, geminiApiKey?: string, language: BiblicalLanguage, voiceId?: string } | Start biblical text simulator broadcast. |
| stop_biblical_sim | {} | Stop biblical text simulator. |
SDK
Uses the official @elevenlabs/elevenlabs-js SDK (v2). The client is lazy-loaded on first use.
Speech-to-Text (Scribe v2 Realtime)
Connects via native WebSocket to wss://api.elevenlabs.io/v1/speech-to-text/realtime. Handles:
- VAD-based commit buffering with configurable merge window
- Stability timeout fallback for stalled VAD
- Text validation (EN/RU/UK character regex filtering)
- Partial and final transcript emission
Text-to-Speech
Uses client.textToSpeech.stream() with the eleven_multilingual_v2 model. Audio is collected into a Buffer and emitted as base64 MP3.
Voice Management
- client.voices.getAll() fetches all voices from the account
- Admin can filter which voices are available to viewers
- Voice cloning via the IVC API (from recordings or YouTube)
Key File
backend/src/services/elevenlabs.service.ts
Provider Details
Google Translate
Google Cloud Translation API v2. Fast (~200ms), deterministic, and reliable. Requires GOOGLE_TRANSLATE_API_KEY with the Cloud Translation API enabled in Google Cloud Console. Ensure the API key has no HTTP referrer restrictions (server-side requests have no referrer).
File: backend/src/services/google-translate.service.ts
LibreTranslate
Self-hosted in Docker. No API key required by default. Provides language detection and translation via REST API.
File: backend/src/services/libretranslate.service.ts
DeepL
Premium translation API. Auto-detects free vs. paid endpoint based on the API key format.
File: backend/src/services/deepl.service.ts
Claude (Anthropic)
AI-powered translation using claude-haiku-4-5 for speed. Includes language detection and auto-flip logic.
File: backend/src/services/claude-translate.service.ts
Routing
Provider routing is handled by backend/src/services/translation.provider.ts:
- Try admin-selected primary provider
- On failure, try configured fallback provider
- LibreTranslate is always the last-resort fallback
Connection
Uses ioredis with automatic retry strategy. Falls back to in-memory/YAML defaults if Redis is unavailable.
Key Patterns
| Pattern | Example | Purpose |
| --- | --- | --- |
| flag:&lt;name&gt; | flag:youtube_input | Feature flag boolean values |
| setting:&lt;name&gt; | setting:tts_settings | JSON settings objects |
Key File
backend/src/services/redis.service.ts
Local Development
Use docker-compose.local.yml for Redis and LibreTranslate only (backend/frontend run natively):
docker compose -f docker-compose.local.yml up -d
Production
Use docker-compose.yml for all services:
docker compose up -d --build
Services
| Service | Image | Port | Notes |
| --- | --- | --- | --- |
| frontend | node:24-alpine + Nginx | 80 (exposed) | Serves React build, proxies API/WS to backend |
| backend | node:24-alpine | 3001 (internal) | Express + Socket.io server |
| redis | redis:7-alpine | 6379 (internal) | Feature flags and settings store |
| libretranslate | libretranslate/libretranslate | 5000 (internal) | Self-hosted translation engine |
Configuration
ELEVENLABS_API_KEY=sk-your-production-key
ADMIN_PASSWORD=strong-secure-password
FRONTEND_URL=https://translate.example.com
APP_ENV=prod
REDIS_PASSWORD=redis-secret
Deploy
docker compose up -d --build
Reverse Proxy
When running behind Nginx or another reverse proxy:
- Set LISTEN_PORT in .env (e.g., 8080)
- Proxy pass to localhost:8080
- Important: ensure WebSocket upgrades are forwarded for the /socket.io/ path
server {
    listen 443 ssl;
    server_name translate.example.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Monitoring
# Check all services
docker compose ps
# View backend logs
docker compose logs -f backend
# Health check
curl http://localhost:3001/api/health
Shipped
v0.1 – v0.2 — Core Translation Engine
- Real-time STT via ElevenLabs Scribe v2 Realtime
- Multi-provider translation (LibreTranslate, DeepL, Claude)
- TTS voice synthesis with ElevenLabs
- Microphone and YouTube live input
- Admin panel with feature flags, voice management, TTS tuning
- Biblical Transcript Simulator for pipeline testing
- Instant Voice Cloning from recordings and YouTube
Shipped
v0.3 — Audio Mixer & Device Selection
Browser-side audio device scanning with support for professional mixing consoles, virtual audio devices, and audio interfaces.
- Browser-side device enumeration with permission flow
- Virtual device detection (Loopback, BlackHole, VB-Audio, Voicemeeter, OBS)
- Categorized device picker (Microphones vs Mixers / Virtual Devices)
- Admin device override broadcast to all viewers via Socket.io
- Real-time feature flag broadcasting
Shipped
v0.7 — Broadcast Service
The /translate route is now a true broadcast service. Admins start one global translation session from the admin panel and all connected viewers receive the live output simultaneously.
- Single global broadcast session (one-to-many)
- Admin "Broadcast Control" panel — Start/Stop with source + voice selection
- Microphone and YouTube source both supported for broadcast
- All translation output (transcript, translated text, TTS audio) is io.emit'd to every viewer
- Viewer shows Waiting for broadcast to start… status when off air
- "On Air" / "Off Air" status pill visible to viewers in real-time
- Broadcast ownership tracked by admin socket ID; auto-stops on admin disconnect
- Biblical Transcript Simulator also broadcasts to all viewers
Shipped
v0.8 — Navigation, Broadcast FF & Transcript UX
Global persistent bottom navigation, feature-flag-gated route visibility, and a refined transcript reading experience.
- Persistent bottom navigation bar on all pages (/translate, /broadcast, /video, /admin)
- FF-gated nav links — Broadcast and Video Call entries only appear when their flags are enabled
- No extra socket connection: the nav reads flags from the page's existing useSocket call via props
- Nav renders a frosted dark background gradient so it never overlaps content
- /broadcast route is now public (no login required); gated inside the page by the broadcast feature flag
- broadcast feature flag added to YAML, backend config, and frontend FeatureFlags interface
- Transcript panel: newest translation is always at the top; older lines scroll down and fade out at the bottom
- Each new transcript entry animates in from above (transcriptIn keyframe)
- Removed duplicate "Video Call" button from /translate and /broadcast header bars
Shipped
v0.9 — Translation Pipeline Overhaul & Google Integration
Major improvements to translation chunking, provider support, and admin tooling.
- Google Translate as primary translation provider with automatic fallback chain
- Google Gemini 2.5 Flash for biblical simulator and sermon generation (replaces deprecated Gemini 2.0 Flash)
- Overhauled STT chunking: disabled aggressive sentence-boundary splitting, stability timer defers to accumulation during continuous speech, commit buffer defers when speaker has resumed
- Configurable sermon length (1–20 sentences) in admin UI
- Voice training: AI-generated reading text (Gemini) for mic recording sessions
- Voice training: preview playback of cloned voice after training via TTS
- Broadcast mute/unmute toggle (muted by default, replaces “Tap to enable audio” banner)
- Audio device auto-scan on page load with spinning refresh indicator
- Fixed admin Raw Server Logs auto-scroll toggle re-enabling on new messages
- Updated Claude model list: removed deprecated models, default is claude-haiku-4-5
- Docker images upgraded to Node.js 24 (Alpine)
Up Next
v0.4 — Direct Audio Interface Feed
Accept audio directly from professional mixing consoles and audio interfaces — bypass browser mic capture entirely for broadcast-quality input.
- Direct audio interface input (ASIO / Core Audio / ALSA)
- Multi-channel mixer feed support
- Low-latency audio routing (sub-100ms)
- Hardware device auto-discovery and selection
- Professional broadcast integration (NDI, Dante)
Shipped
v0.5 — Video Call Translation
WebRTC peer-to-peer video calls with real-time bidirectional translation. Two people speak different languages and hear each other translated via TTS.
- Built-in WebRTC video call with room codes
- Full-duplex translation (each person hears the other translated)
- Per-participant STT pipeline with independent Scribe sessions
- Video grid UI with local PiP and remote full-screen
- Mic/video mute controls, hang up, auto-cleanup on disconnect
- Feature-flagged behind video_translation
Shipped
v0.6 — Auth, Mobile & Voice Cloning in /video
- User-facing login page (/) with JWT cookie sessions (30-day sticky, HttpOnly)
- All app routes protected — redirect to login if unauthenticated
- Live translator moved to /translate
- Mobile-responsive UI across Translator, Admin, and Video Call views
- FaceTime-style full-screen in-call layout on mobile with safe-area insets
- "Clone Voice" button in /video lobby, gated by video_voice_cloning feature flag
- Voice cloning modal with mic recording or YouTube URL, admin-password gated
Planned
Future
- Additional language pairs beyond EN/RU/UK
- Speaker diarization (multi-speaker detection)
- Translation memory and glossary support
- Webhooks and API for third-party integrations
- Multi-tenant deployment with user accounts