Documentation

Everything you need to install, configure, and use TellMeMo

Overview

TellMeMo is an open-source AI platform that turns project chaos into clarity. Upload meetings, ask questions in plain English, get instant answers with sources. Generate summaries across your entire portfolio.

⚡ NEW: Live Meeting Intelligence

Revolutionary feature: Get answers to questions DURING meetings, not after. AI detects questions as they're spoken and finds answers in real-time using 4-tier discovery (RAG → Meeting Context → Live Monitoring → AI Fallback). No more "let me get back to you."

Learn more →

💡 What You'll Learn

  • Install TellMeMo in 10 minutes with Docker
  • Configure AI models and integrations
  • Use Live Meeting Intelligence for real-time answers
  • Upload content and generate summaries
  • Best practices for security and backups

TellMeMo vs Alternatives

The key differences that matter:

Feature TellMeMo Otter.ai Notion AI
Live Question Detection ✅ Real-time during meetings ❌ Post-meeting only ❌ No
Real-Time Answers ✅ 4-tier discovery (RAG + context + live + AI) ❌ No ❌ No
Open Source ✅ Yes ❌ No ❌ No
Self-Hosted ✅ Full control ❌ Cloud only ❌ Cloud only
AI Conversations ✅ Multi-turn with context ❌ Basic search ✅ Yes
Project Hierarchy ✅ Portfolio→Program→Project ❌ No ⚠️ Basic
Cost 💰 Free (+ ~$1/hr for live meetings) 💰💰 $20-40/user/mo 💰💰 $10-18/user/mo

🎯 Why TellMeMo?

  • Own your data - Self-host on your servers
  • Open source - Customize everything
  • Free forever - No vendor lock-in

Architecture Overview

TellMeMo is a containerized microservices architecture; every service runs in Docker:

┌─────────────────────────────────────────────────────────────────────────┐
│                          TellMeMo Architecture                          │
└─────────────────────────────────────────────────────────────────────────┘

                        ┌──────────────────┐
                        │      User        │
                        │    (Browser)     │
                        └────────┬─────────┘
                                 │ HTTP
                                 ▼
                        ┌──────────────────┐
                        │     Flutter      │
                        │     Frontend     │
                        │     (Docker)     │
                        │    Port 8100     │
                        └────────┬─────────┘
                                 │ HTTP/WebSocket
                                 ▼
                        ┌──────────────────┐
                        │     FastAPI      │
                        │     Backend      │
                        │     (Docker)     │
                        │    Port 8000     │
                        │                  │
                        │ • REST API       │
                        │ • WebSocket      │
                        │ • Authentication │
                        │ • RAG Processing │
                        └────────┬─────────┘
                                 │
         ┌───────────────────────┼───────────────────────┐
         │                       │                       │
         ▼                       ▼                       ▼
┌──────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│    PostgreSQL    │    │      Redis       │    │      Qdrant      │
│     (Docker)     │    │     (Docker)     │    │     (Docker)     │
│    Port 5432     │    │    Port 6379     │    │    Port 6333     │
│                  │    │                  │    │                  │
│ • Users          │    │ • Job Queue (RQ) │    │ • Vectors        │
│ • Projects       │    │ • Pub/Sub        │    │ • Embeddings     │
│ • Content        │    │ • Caching        │    │ • Search         │
└──────────────────┘    └──────────────────┘    └──────────────────┘
                                 │
                                 ▼
                        ┌──────────────────┐
                        │    Anthropic     │
                        │      Claude      │
                        │     API Key      │
                        │                  │
                        │ • LLM            │
                        │ • Reasoning      │
                        │ • Summaries      │
                        └──────────────────┘

Optional Services:
  • Sentry (error tracking - configured via backend .env)
  • Firebase Analytics (usage tracking - configured via frontend)

Core Components

  1. Frontend (Flutter Web): User interface running in Docker container on port 8100
  2. Backend (FastAPI): REST API, WebSocket server, authentication, and RAG processing in Docker
  3. PostgreSQL: Primary database for users, projects, content metadata, and conversations
  4. Redis: Job queue (RQ), pub/sub for real-time updates, and caching
  5. Qdrant: Vector database for semantic search and embeddings
  6. Claude API: Anthropic's LLM for AI reasoning and summary generation

Core Services

Service Port Purpose
Flutter Frontend 8100 Web application UI
FastAPI Backend 8000 REST API, WebSocket, Authentication, RAG
PostgreSQL 5432 Primary database (metadata, users, projects)
Redis 6379 Job queue (RQ), Pub/Sub, caching
Qdrant 6333 Vector database (embeddings, semantic search)
RQ Dashboard 9181 Job queue monitoring and management (optional)
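
The port assignments above can be sanity-checked from a small script. This is a sketch of ours, not part of TellMeMo; the service names and ports come from the table, and the helper names are our own:

```python
# Minimal port check for the core TellMeMo services listed above.
# (The optional RQ Dashboard on 9181 is omitted.)
import socket

CORE_SERVICES = {
    "Flutter Frontend": 8100,
    "FastAPI Backend": 8000,
    "PostgreSQL": 5432,
    "Redis": 6379,
    "Qdrant": 6333,
}

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def stack_status(host: str = "localhost") -> dict:
    """Map each core service name to whether its port is reachable."""
    return {name: check_port(host, port) for name, port in CORE_SERVICES.items()}

if __name__ == "__main__":
    for name, up in stack_status().items():
        print(f"{name}: {'up' if up else 'DOWN'}")
```

Run it after `docker compose up -d` to see which containers are accepting connections.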

💡 Authentication

TellMeMo includes built-in authentication handled by the FastAPI backend. No external authentication service (like Supabase) is required.

📊 Optional Integrations

  • Sentry: Add SENTRY_DSN to backend .env for error tracking
  • Firebase Analytics: Configure in frontend for usage analytics

These are completely optional and TellMeMo works fully without them.

Deployment

TellMeMo uses Docker and Docker Compose for easy, consistent deployment across all environments.

💡 What Gets Deployed

When you run docker compose up -d, the following core services start automatically:

  • Frontend UI - Flutter web app on port 8100
  • Backend API - FastAPI with authentication & RAG on port 8000
  • PostgreSQL - Primary database on port 5432
  • Redis - Job queue and pub/sub on port 6379
  • Qdrant - Vector database on port 6333

All services are containerized and managed by Docker Compose. No manual setup required!

Prerequisites

Required Software

  • Docker ≥20.10
  • Docker Compose ≥2.0

💡 Installing Docker & Docker Compose

Follow Docker's official installation guide for your operating system; recent Docker releases include Docker Compose v2 (docker compose) out of the box.

System Requirements

  • RAM: 8GB minimum (16GB recommended)
  • Disk Space: 20GB minimum
  • CPU: 4+ cores recommended
  • OS: Linux, macOS, or Windows with WSL2

Required API Keys

  • Anthropic API Key - Get it from Anthropic Console
    • Sign up for an account
    • Navigate to API Keys section
    • Create a new API key (starts with sk-ant-api03-)
  • Hugging Face Token (Optional) - Get it from Hugging Face Settings
    • Sign up for a Hugging Face account
    • Go to Settings → Access Tokens
    • Create a new token with "read" permissions
    • Used for downloading AI models

Quick Start

Get TellMeMo running in 5 minutes:

# 1. Make sure Docker and Docker Compose are installed
docker --version
docker compose version

# 2. Clone the repository
git clone https://github.com/Tell-Me-Mo/tellmemo-app.git
cd tellmemo-app

# 3. Create .env file with your API keys
cat > .env << EOF
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
HF_TOKEN=your-huggingface-token-here
EOF

# 4. Start all services
docker compose up -d

# 5. Wait for all services to start (this can take 4-5 minutes)
# You can monitor the logs with: docker compose logs -f

# 6. Access the application
# UI will be available at http://localhost:8100

⏱️ First Startup Takes Time

After running docker compose up -d, it can take 4-5 minutes for all services to fully start and initialize. This includes downloading embedding models, initializing the database, and setting up the vector database.

You can monitor progress with: docker compose logs -f

✅ That's it!

Your TellMeMo instance is now running at http://localhost:8100

Configuration

TellMeMo is configured via environment variables in the .env file. The basic installation only requires two API keys - everything else has sensible defaults.

🔴 Required Configuration

These are the only variables you must configure:

# Required - Anthropic API Key
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here

# Required - Hugging Face Token (for embedding model)
HF_TOKEN=your-huggingface-token-here

⚠️ Important for Production

For production deployments, you should also set a secure JWT secret:

# Generate with: openssl rand -hex 32
JWT_SECRET=your-jwt-secret-generate-with-openssl-rand-hex-32

If not provided, the backend will automatically generate a temporary JWT secret on startup. This works for development but should not be used in production (tokens won't persist across restarts).
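
If you prefer to generate the secret without openssl, the Python standard library produces an equivalent value:

```python
# Python equivalent of `openssl rand -hex 32` for generating JWT_SECRET:
# 32 random bytes rendered as 64 hexadecimal characters.
import secrets

jwt_secret = secrets.token_hex(32)
print(f"JWT_SECRET={jwt_secret}")
```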

🟢 Auto-Configured (No Changes Needed)

These work out of the box with Docker Compose:

# Database Configuration
POSTGRES_USER=pm_master
POSTGRES_PASSWORD=pm_master_pass
POSTGRES_DB=pm_master_db
POSTGRES_HOST=localhost
POSTGRES_PORT=5432

# Vector Database
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Backend API
API_HOST=0.0.0.0
API_PORT=8000

# Frontend
FRONTEND_URL=http://localhost:8100
API_BASE_URL=http://localhost:8000

🟡 Optional Features

Authentication Provider

# Choose: 'backend' (default) or 'supabase'
AUTH_PROVIDER=backend

# Only if using Supabase:
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-supabase-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-supabase-service-role-key
SUPABASE_JWT_SECRET=your-jwt-secret

Error Tracking (Sentry)

# Backend error tracking
SENTRY_ENABLED=false
SENTRY_DSN=your-sentry-dsn-here

# Frontend error tracking
FLUTTER_SENTRY_ENABLED=false
FLUTTER_SENTRY_DSN=your-flutter-sentry-dsn-here

Analytics (Firebase)

FLUTTER_FIREBASE_ANALYTICS_ENABLED=false

LLM Observability (Langfuse)

# Requires additional Docker services (Redis, ClickHouse, MinIO)
LANGFUSE_ENABLED=false

Logging (ELK Stack)

# Requires Elasticsearch, Logstash, Kibana
ENABLE_LOGSTASH=false

Transcription Service

# Choose: "whisper" (local), "salad" (Salad API), or "replicate" (incredibly-fast-whisper)
DEFAULT_TRANSCRIPTION_SERVICE=replicate

# Replicate (recommended - roughly 35x faster than local Whisper, per the numbers below)
REPLICATE_API_TOKEN=your-replicate-token-here

# Salad Cloud (cost-effective for batch processing)
SALAD_API_KEY=your-salad-api-key
SALAD_ORGANIZATION_NAME=your-org-name

# Performance comparison (30-min audio):
# - Replicate: ~20 seconds (~35x faster than local Whisper)
# - Salad: ~5-8 minutes
# - Local Whisper: ~696 seconds

⚙️ Advanced Settings

LLM Configuration

LLM_MODEL=claude-3-5-haiku-latest
MAX_TOKENS=4096
TEMPERATURE=0.7

Embedding Model

# Choose: all-MiniLM-L6-v2 (lightweight) or google/embeddinggemma-300m (better quality)
EMBEDDING_MODEL=google/embeddinggemma-300m
EMBEDDING_DIMENSION=768

RAG Settings

CHUNK_SIZE_WORDS=300
CHUNK_OVERLAP_WORDS=50
TOP_K_CHUNKS=5
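
To picture what these settings control: content is split into overlapping word windows before embedding. The sketch below illustrates the general technique with the default values; it is not TellMeMo's actual implementation, which may differ in detail:

```python
# Illustrative word-based chunking with overlap, mirroring CHUNK_SIZE_WORDS
# and CHUNK_OVERLAP_WORDS above. A sketch, not TellMeMo's actual code.

CHUNK_SIZE_WORDS = 300
CHUNK_OVERLAP_WORDS = 50

def chunk_words(text: str, size: int = CHUNK_SIZE_WORDS,
                overlap: int = CHUNK_OVERLAP_WORDS) -> list:
    """Split text into chunks of `size` words; consecutive chunks share `overlap` words."""
    words = text.split()
    step = size - overlap  # each new chunk starts 250 words after the previous
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

# A 700-word document yields chunks starting at words 0, 250, and 500.
doc = " ".join(f"w{i}" for i in range(700))
print([len(c.split()) for c in chunk_words(doc)])  # [300, 300, 200]
```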

File Upload

MAX_FILE_SIZE_MB=10

Multilingual Support

ENABLE_MULTILINGUAL=true
DETECT_LANGUAGE=true

Email Digest Configuration

# SendGrid Configuration (required for email digests)
SENDGRID_API_KEY=SG.your-sendgrid-api-key-here
EMAIL_FROM_ADDRESS=noreply@tellmemo.io  # Must be verified in SendGrid
EMAIL_FROM_NAME=TellMeMo

# Email Digest Settings
EMAIL_DIGEST_ENABLED=true
EMAIL_DIGEST_BATCH_SIZE=50
EMAIL_DIGEST_RATE_LIMIT=100  # Free tier: 100 emails/day

📧 Setting Up SendGrid

  1. Sign up for free at SendGrid (100 emails/day free)
  2. Verify your sender email address in SendGrid dashboard
  3. Create an API key with "Mail Send" permissions
  4. Add the API key to your .env file
  5. Restart the backend: docker compose restart backend

Without SendGrid: Email digests will be disabled but all other features work normally.

💡 Full Configuration Reference

For all available configuration options, see the .env.example file in the repository.

⚠️ Security Warning

  • Never commit .env file to version control
  • Always change default passwords before production deployment
  • Generate secure JWT secrets using: openssl rand -hex 32
  • Use strong passwords for database credentials

First-Time Setup

  1. Access the Application
    Open http://localhost:8100 in your browser
  2. Create an Account
    Sign up with email/password
  3. Create an Organization
    Set up your workspace
  4. Set Up Project Hierarchy
    • Create Portfolio (top level)
    • Create Programs under Portfolio
    • Create Projects under Programs

Common Use Cases

Real-world examples of how teams use TellMeMo in their daily workflows:

1. Weekly Executive Reporting

📊 Scenario

An executive needs to prepare a weekly summary of all projects across the portfolio for the board meeting.

How to do it in TellMeMo:

  1. Create your Portfolio hierarchy: Portfolio → Programs → Projects
  2. Throughout the week, upload meeting transcripts and status reports to each Project
  3. On Friday, navigate to your Portfolio in the sidebar
  4. Click "Generate Summary" button
  5. Select "Executive Summary" type
  6. Review the AI-generated summary that aggregates insights from all projects
  7. Click "Export as PDF" for your board presentation

✅ What you get

A high-level summary with key decisions, risks, milestones, and strategic insights from across your entire portfolio.

2. Finding Risks Across Projects

⚠️ Scenario

A project manager needs to identify and track all risks mentioned in quarterly meetings.

How to do it in TellMeMo:

  1. Upload all your Q4 meeting transcripts to the Project
  2. Go to the Chat or Query section
  3. Type: "What are all the risks mentioned in this quarter?"
  4. Review the AI response with citations to specific meetings
  5. For deeper analysis, generate a Technical Summary focused on risk mitigation
  6. Export the risk list to share with stakeholders

✅ What you get

A comprehensive list of all identified risks with references to when and where they were mentioned.

3. Having Conversations About Past Decisions

💬 Scenario

A team member needs to understand the context and reasoning behind a decision made weeks ago.

How to do it in TellMeMo:

  1. Navigate to your Project
  2. Open the Chat interface
  3. Start a conversation: "What was decided about the API redesign?"
  4. Ask follow-up questions: "Who raised concerns about backward compatibility?"
  5. Continue the conversation: "When is the migration deadline?"
  6. The system remembers the context across all your questions

✅ What you get

A natural conversation where you can dig deeper into topics without repeating context. Each answer includes citations to source documents.

4. Understanding Trends Across Multiple Projects

🔍 Scenario

A CTO wants to understand how different teams are approaching the same technical challenge.

How to do it in TellMeMo:

  1. Navigate to your Program (which contains multiple Projects)
  2. Open the Chat interface at the Program level
  3. Ask: "How are different teams approaching authentication?"
  4. TellMeMo searches across all Projects under this Program
  5. Review the unified answer with citations from each team
  6. Identify best practices and inconsistencies across teams

✅ What you get

Cross-project insights that help identify patterns, best practices, and opportunities for alignment.

5. Tracking Action Items from Meetings

✅ Scenario

A scrum master needs to track all action items and task assignments from sprint meetings.

How to do it in TellMeMo:

  1. Upload your sprint meeting transcript to the Project
  2. TellMeMo automatically extracts tasks, assignees, and due dates
  3. Navigate to the Tasks tab to see all action items
  4. Filter by assignee, status, or due date
  5. Mark tasks as complete as work progresses
  6. Ask questions like: "What are John's pending tasks?"

✅ What you get

Automatic task extraction with no manual entry required. All action items are linked to the source meeting transcript.

Available Summary Types

Summary Type Best For What It Includes
General All audiences Comprehensive overview of all content
Executive Leadership, stakeholders High-level decisions, risks, milestones, strategic insights
Technical Engineering teams Technical decisions, architecture, implementation details
Stakeholder Project stakeholders Impact analysis, timelines, resource requirements

Uploading Content

  1. Navigate to a Project
  2. Click "Upload Content"
  3. Select files to upload:
    • Audio files (for transcription)
    • Text transcripts
    • Documents (PDF, TXT)
    • Emails
  4. Content is automatically:
    • Transcribed (if audio)
    • Chunked and embedded
    • Stored in vector database

Querying with RAG

Ask natural language questions about your content:

  1. Navigate to "Ask a Question" or "Query"
  2. Enter your question in plain English
  3. System performs:
    • Semantic search across all content
    • Retrieves relevant chunks
    • Generates AI answer with Claude
    • Provides source citations
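
The retrieval step above can be pictured with a toy example. Hand-made vectors and cosine similarity stand in for the real embedding model and Qdrant; the function names and data are ours, for illustration only:

```python
# Toy illustration of semantic retrieval: rank chunks by cosine similarity
# to the query vector, then keep the top-k (TOP_K_CHUNKS in .env).
# Real TellMeMo uses an embedding model + Qdrant; these vectors are fake.
import math

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def top_k_chunks(query_vec, chunk_vecs: dict, k: int = 5) -> list:
    """Return the k chunk ids most similar to the query vector."""
    ranked = sorted(chunk_vecs,
                    key=lambda cid: cosine(query_vec, chunk_vecs[cid]),
                    reverse=True)
    return ranked[:k]

chunks = {
    "sprint-notes": [0.9, 0.1, 0.0],
    "q4-review":    [0.1, 0.9, 0.1],
    "ui-feedback":  [0.0, 0.2, 0.9],
}
print(top_k_chunks([0.8, 0.2, 0.1], chunks, k=2))  # ['sprint-notes', 'q4-review']
```

The retrieved chunks are then passed to Claude along with the question, which is how answers come back with source citations.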

💡 Example Questions

  • "What were the main decisions from last week's sprint planning?"
  • "What risks were mentioned in the Q4 review meetings?"
  • "Who is responsible for the authentication feature?"
  • "Summarize all feedback about the new UI design"

Live Meeting Intelligence

Revolutionary feature: Get answers to questions DURING your meetings, not after. TellMeMo's real-time intelligence detects questions as they're spoken and finds answers instantly.

⚡ What This Means

No more "let me get back to you." No more awkward pauses while someone searches for documents. Questions get answered in real-time while your meeting continues naturally.

How to Use Live Meeting Intelligence

  1. Start Your Meeting Session
    • Navigate to your Project
    • Click "Start Live Meeting"
    • Toggle "AI Assistant" ON
    • Allow microphone access when prompted
  2. Have Your Meeting Normally
    • AI listens in the background
    • Real-time transcription via AssemblyAI
    • Questions detected automatically as they're spoken
  3. Get Instant Answers
    • Answer cards appear on screen within seconds
    • Source shown (documents, earlier in meeting, or AI-generated)
    • Click to see full context
  4. Track Actions Automatically
    • AI detects commitments and task assignments
    • Extracts owner, deadline, and description
    • Completeness score shown (100% = has all details)

Four-Tier Answer Discovery

When a question is detected, TellMeMo searches in this order:

Tier 1: RAG Search (2-3 seconds)
📚 Searches all your uploaded documents
   • Meeting transcripts
   • PDFs, emails, notes
   • 50% confidence threshold
   • Source: "From Documents: filename.pdf"

Tier 2: Meeting Context (0.5 seconds)
💬 Searches earlier in THIS meeting
   • Maybe someone already answered it
   • Prevents duplicate questions
   • Source: "Earlier in Meeting: [10:08] Mike said..."

Tier 3: Live Monitoring (15 seconds)
👂 Waits to see if someone answers naturally
   • AI listens to the next 15 seconds
   • If answer appears, card shows it
   • Source: "Answered Live: Sarah just said..."

Tier 4: AI Fallback (2-3 seconds)
🤖 GPT-5-mini generates a best-guess answer
   • ONLY if all other tiers fail
   • Big warning: "AI-generated, verify this"
   • Source: "AI Answer (verify)" with disclaimer
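
The tier order above amounts to a short-circuit chain: try each tier in turn and stop at the first one that produces an answer. A sketch (the tier functions here are stand-ins; the real tiers query Qdrant, the live transcript, and so on):

```python
# Sketch of the four-tier short-circuit. Each tier either returns an
# (answer, source) pair or None; the first non-None result wins.
from typing import Callable, Optional, Tuple

Answer = Tuple[str, str]  # (answer text, source label)

def answer_question(question: str, tiers: list) -> Answer:
    for tier in tiers:
        result = tier(question)
        if result is not None:
            return result
    return ("No answer found", "none")

# Stand-in tiers: RAG misses (below the 50% confidence threshold),
# but the meeting context has the answer, so tiers 3 and 4 never run.
rag = lambda q: None
meeting_context = lambda q: ("1000 requests/hour", "Earlier in Meeting")
live = lambda q: None
ai_fallback = lambda q: ("Best guess", "AI Answer (verify)")

print(answer_question("What's our API rate limit?",
                      [rag, meeting_context, live, ai_fallback]))
```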

Real Example

Question Detected: "What's our API rate limit?"

  • Time to answer: 0.8 seconds
  • Source: Technical Specification v2.3, page 8
  • Answer: "1000 requests per hour per API key"
  • Result: Nobody stopped talking. Meeting continued.

Required API Keys

To use Live Meeting Intelligence, you need:

  • OPENAI_API_KEY - For GPT-5-mini streaming intelligence (~$0.15/hour)
  • ASSEMBLYAI_API_KEY - For real-time transcription with speaker diarization (~$0.90/hour)
# Add to your .env file
OPENAI_API_KEY=sk-proj-your-key-here
ASSEMBLYAI_API_KEY=your-assemblyai-key-here

Cost Per Meeting

Service Cost per Hour What It Does
AssemblyAI $0.90 Real-time transcription + speaker diarization
GPT-5-mini ~$0.15 Streaming intelligence (question detection, action tracking)
Total ~$1.05 Complete real-time meeting intelligence

For comparison: Otter.ai Business costs $20/user/month. If you have 10 users and 20 meetings/month, that's $200/month = $10 per meeting hour. TellMeMo is 10x cheaper.

Best Practices

  • Upload documents first - The more content in your knowledge base, the better Tier 1 (RAG) answers will be
  • Use a good microphone - Clear audio = better transcription accuracy
  • Review action items after - AI auto-extracts tasks, but verify owner and deadline
  • Verify AI-generated answers - If you see the Tier 4 fallback, double-check the answer
  • Check completeness scores - 100% means action has description + owner + deadline
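
The completeness score can be pictured as the fraction of the three expected fields that were extracted. The exact scoring rule in TellMeMo is not documented here, so the formula below is an assumption for illustration:

```python
# Illustrative completeness score for an extracted action item:
# the share of (description, owner, deadline) that are filled in.
# Assumption: equal weighting; TellMeMo's actual formula may differ.

def completeness(action: dict) -> int:
    """Return a 0-100 score based on which of the three fields are present."""
    fields = ("description", "owner", "deadline")
    filled = sum(1 for f in fields if action.get(f))
    return round(100 * filled / len(fields))

full = {"description": "Ship v2 API docs", "owner": "Sarah", "deadline": "Friday"}
partial = {"description": "Ship v2 API docs", "owner": "Sarah"}
print(completeness(full), completeness(partial))  # 100 67
```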

💡 Privacy & Security

Audio is processed in real-time and NOT stored on AssemblyAI servers. Only the transcript is saved to your database. All data stays within your control when self-hosting.

Troubleshooting

Questions Not Being Detected

  • Check microphone permissions in browser
  • Verify OPENAI_API_KEY is set correctly
  • Check browser console for errors
  • Make sure questions are clear and specific

No Answers Appearing

  • Verify you have documents uploaded (Tier 1 needs content to search)
  • Check ASSEMBLYAI_API_KEY is valid
  • Wait for all 4 tiers to complete (can take up to 20 seconds total)
  • Check backend logs: docker compose logs -f backend

High API Costs

  • Use Live Meeting Intelligence only for important meetings
  • Turn off AI Assistant for informal chats
  • Monitor usage in OpenAI and AssemblyAI dashboards
  • Set up billing alerts in both platforms

Generating Summaries

  1. Select a Project, Program, or Portfolio
  2. Click "Generate Summary"
  3. Choose summary type:
    • General Summary
    • Executive Summary
    • Technical Summary
    • Stakeholder Summary
  4. AI generates summary based on all content
  5. Export as PDF, CSV, or share with team

Email Digests

Stay informed without logging in. TellMeMo can send automated email digests with project updates, summaries, tasks, and risks directly to your inbox.

✨ What are Email Digests?

Email digests are automated, personalized summaries of your project activity delivered on a schedule you choose. They include:

  • Summary statistics (active projects, new summaries, pending tasks, critical risks)
  • Per-project breakdowns with recent updates
  • Direct links to view full content in TellMeMo
  • Beautiful HTML design optimized for all email clients

Email Types

1. Welcome Email

Automatically sent when you create an account:

  • Getting started guide with quick tips
  • Links to key platform features
  • Next steps for setting up your first project

2. Digest Emails

Scheduled delivery based on your preferences:

  • Daily: Every day at 8 AM UTC
  • Weekly: Every Monday at 8 AM UTC
  • Monthly: 1st of each month at 8 AM UTC

Content includes summaries, tasks, risks, and activities from all your projects.

3. Inactive User Reminders

If you haven't uploaded content in 7 days:

  • Friendly reminder to get started
  • Simple instructions for recording your first meeting
  • Sent only once (no repeated reminders)

Managing Email Preferences

How to access:

  1. Navigate to your Profile screen in the app
  2. Click "Notification Settings"
  3. Select "Email Digest Preferences"

Configuration options:

  • Enable/Disable: Turn email digests on or off with a toggle
  • Frequency: Choose daily, weekly, or monthly delivery
  • Content Types: Select what to include:
    • Meeting summaries
    • Tasks assigned to you
    • Critical risks
    • Project activities
    • Decisions and action items
  • Portfolio Rollup: Include high-level portfolio insights

Testing Your Digests

Preview Before Sending:

  1. Go to Email Digest Preferences
  2. Click "Preview Digest"
  3. See exactly what your email will look like
  4. No email sent - just a preview

Send Test Email:

  1. Go to Email Digest Preferences
  2. Click "Send Test Email"
  3. Receive a digest immediately to verify your settings
  4. Check your inbox to review formatting and content

Privacy & Data

⚠️ What's Included in Digests

All projects across all organizations: Email digests include content from all projects you have access to, regardless of organization. There is no filtering by organization or project in the initial release.

Privacy considerations:

  • Email content may contain sensitive project information
  • Emails are sent securely via SendGrid with TLS encryption
  • Email delivery tracking data stored temporarily in SendGrid
  • No permanent storage of email content on TellMeMo servers

Unsubscribing

You can unsubscribe from email digests at any time:

Via Email:

  1. Open any digest email
  2. Scroll to the footer
  3. Click "Unsubscribe from email digests"
  4. Instant opt-out (no login required)

Via Settings:

  1. Go to Email Digest Preferences in the app
  2. Toggle "Enable email digests" to off
  3. Click "Save Changes"

Smart Features

✅ No Spam Guarantee

Empty digest prevention: TellMeMo automatically skips sending digests if there's no new content to report. You'll only receive emails when there's something useful to share.

📊 Rate Limiting

TellMeMo respects SendGrid's free tier limits (100 emails/day). If you have a large team, consider upgrading to SendGrid's paid tier or implementing email batching strategies.

Technical Details

Delivery Infrastructure:

  • Email Provider: SendGrid (free tier: 100 emails/day)
  • Scheduler: APScheduler (runs in background)
  • Templates: Responsive HTML with plain text fallback
  • Authentication: JWT-based unsubscribe tokens (90-day expiration)
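
The expiring unsubscribe token works on the same principle as any signed, time-limited token. TellMeMo uses JWTs for this; the stdlib sketch below illustrates the idea with a plain HMAC signature and is not the actual implementation:

```python
# Sketch of an expiring, signed unsubscribe token (TellMeMo uses JWTs;
# this HMAC variant just shows the signing + expiry mechanics).
import base64, hashlib, hmac, json, time

SECRET = b"replace-with-JWT_SECRET"   # placeholder, not a real secret
NINETY_DAYS = 90 * 24 * 3600

def make_token(user_id: str, now: float = None) -> str:
    payload = json.dumps({"uid": user_id, "exp": (now or time.time()) + NINETY_DAYS})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def verify_token(token: str, now: float = None):
    """Return the user id if the token is authentic and unexpired, else None."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64).decode()
    except Exception:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    data = json.loads(payload)
    return data["uid"] if (now or time.time()) < data["exp"] else None

token = make_token("user-42")
print(verify_token(token))                                       # user-42
print(verify_token(token, now=time.time() + NINETY_DAYS + 10))   # None (expired)
```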

Delivery Times (UTC):

  • Daily: 8:00 AM UTC every day
  • Weekly: 8:00 AM UTC every Monday
  • Monthly: 8:00 AM UTC on the 1st of each month
  • Inactive check: 9:00 AM UTC daily

🔧 Admin Testing Endpoints

For administrators and developers, TellMeMo includes API endpoints to manually trigger digest jobs:

  • POST /api/v1/admin/email/trigger-daily-digest
  • POST /api/v1/admin/email/trigger-weekly-digest
  • POST /api/v1/admin/email/trigger-monthly-digest
  • POST /api/v1/admin/email/trigger-inactive-check
  • POST /api/v1/admin/email/send-digest/{user_id}

See the full API documentation at http://localhost:8000/docs for details.

Troubleshooting

Docker Services Won't Start

⚠️ Common Issue

Port conflicts with existing services running on your machine.

Solution:

# Check which services are using ports
lsof -i :5432  # PostgreSQL
lsof -i :6333  # Qdrant
lsof -i :8000  # Backend
lsof -i :8100  # Frontend

# Stop conflicting services or change ports in docker-compose.yml
docker compose down
docker compose up -d

Backend Startup Error: "Database Connection Failed"

⚠️ Error Message

sqlalchemy.exc.OperationalError: could not connect to server

Possible Causes:

  • PostgreSQL container not running
  • Incorrect database credentials in .env
  • Database not initialized

Solution:

# Check PostgreSQL is running
docker compose ps postgres

# Check logs
docker compose logs postgres

# Verify .env settings
cat .env | grep POSTGRES

# Restart database
docker compose restart postgres

# Wait 10 seconds for DB to be ready
sleep 10

# Restart backend
docker compose restart backend

Qdrant Connection Error: "Vector Database Unavailable"

⚠️ Error Message

QdrantException: Service unavailable

Solution:

# Check Qdrant container
docker compose ps qdrant

# Check Qdrant logs
docker compose logs qdrant

# Restart Qdrant
docker compose restart qdrant

# Verify Qdrant is accessible
curl http://localhost:6333/collections

Anthropic API Error: "Invalid API Key"

⚠️ Error Message

anthropic.AuthenticationError: Invalid API key

Solution:

# Verify API key in .env
cat .env | grep ANTHROPIC_API_KEY

# Format should be: sk-ant-api03-xxxxx
# Get your key at: https://console.anthropic.com/

# After updating .env, restart the backend
docker compose restart backend

Flutter Build Fails: "Target file not found"

⚠️ Error Message

Target file "lib/main.dart" not found

Solution (when running the Flutter frontend locally, outside Docker):

# Clean build cache
flutter clean

# Get dependencies
flutter pub get

# Rebuild
flutter run -d chrome --web-port 8100

WebSocket Connection Fails

⚠️ Error Message

WebSocket connection to 'ws://localhost:8000/ws/audio' failed

Possible Causes:

  • Backend not running
  • CORS configuration blocking WebSocket
  • Incorrect WebSocket endpoint

Solution:

# Verify backend is running
curl http://localhost:8000/

# Check CORS settings in .env
cat .env | grep CORS_ORIGINS
# Should include: http://localhost:8100

# Test WebSocket with wscat
npm install -g wscat
wscat -c ws://localhost:8000/ws/audio

Embedding Generation Slow or Fails

⚠️ Issue

Content upload hangs or embedding generation takes too long.

Possible Causes:

  • Large content files (>10MB)
  • Embedding model not loaded
  • Insufficient system resources

Solution:

# Check backend logs for embedding errors
docker compose logs backend | grep embedding

# Verify embedding model in .env
cat .env | grep EMBEDDING_MODEL
# Default: google/embeddinggemma-300m

# Check system resources
docker stats

# If needed, increase Docker memory limit:
# Docker Desktop → Settings → Resources → Memory (16GB recommended)

Performance Issues: Slow Query Responses

💡 Optimization Tips

  • Reduce MAX_TOKENS in .env (default: 4096, try 2048 for faster responses)
  • Use Claude 3.5 Haiku instead of Sonnet (faster, cheaper)
  • Limit search scope to specific Project instead of Portfolio
  • Reduce CHUNK_SIZE_WORDS for faster embedding processing

Docker Compose: "Network Error" During Build

⚠️ Error Message

failed to solve: error getting credentials

Solution:

# Clear Docker build cache
docker builder prune -a

# Rebuild without cache
docker compose build --no-cache

# Start services
docker compose up -d

Still Having Issues?

🔍 Debug Checklist

  1. Check all containers are running: docker compose ps
  2. Review logs for errors: docker compose logs -f
  3. Verify .env file has all required variables (ANTHROPIC_API_KEY, HF_TOKEN)
  4. Ensure API keys are valid and have correct permissions
  5. Check system resources (RAM, disk space, CPU)
  6. Verify network ports are not blocked by firewall
  7. Review backend logs: docker compose logs backend


Backup & Security

Database Backup

# Backup PostgreSQL
docker exec pm_master_postgres pg_dump -U pm_master pm_master_db > backup.sql

# Restore
docker exec -i pm_master_postgres psql -U pm_master pm_master_db < backup.sql
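
For scheduled backups, the pg_dump command above can be wrapped in a small script that writes timestamped files. The container, user, and database names match the commands shown; the script itself is a sketch of ours, not shipped with TellMeMo:

```python
# Wrap the pg_dump command above in a script producing timestamped backups.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def backup_filename(now: datetime = None) -> str:
    """Build a filename like tellmemo-backup-20250102-030405.sql (UTC)."""
    ts = (now or datetime.now(timezone.utc)).strftime("%Y%m%d-%H%M%S")
    return f"tellmemo-backup-{ts}.sql"

def run_backup(out_dir: str = "backups") -> Path:
    """Dump the database from the pm_master_postgres container to a file."""
    Path(out_dir).mkdir(exist_ok=True)
    out = Path(out_dir) / backup_filename()
    cmd = ["docker", "exec", "pm_master_postgres",
           "pg_dump", "-U", "pm_master", "pm_master_db"]
    with out.open("wb") as f:
        subprocess.run(cmd, stdout=f, check=True)
    return out

if __name__ == "__main__":
    print(f"Wrote {run_backup()}")
```

Pair it with cron (or the scheduler of your choice) for the automated backups the security checklist below calls for.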

Security Checklist

  • ✅ Change all default passwords
  • ✅ Generate secure encryption keys
  • ✅ Enable HTTPS with SSL/TLS
  • ✅ Configure firewall to restrict ports
  • ✅ Set up automated backups
  • ✅ Enable Sentry error tracking
  • ✅ Review CORS origins

API Reference

The backend API is available at http://localhost:8000

Interactive API Documentation

Interactive Swagger documentation for all endpoints is available at http://localhost:8000/docs once the backend is running.

Example API Request

curl -X GET "http://localhost:8000/api/v1/projects" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json"
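
The same request can be made from Python using only the standard library. The token is a placeholder you must replace with a real bearer token:

```python
# Python equivalent of the curl request above (stdlib only).
import json
import urllib.request

def build_request(token: str, base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Construct the authenticated GET request for the projects endpoint."""
    return urllib.request.Request(
        f"{base_url}/api/v1/projects",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

def list_projects(token: str, base_url: str = "http://localhost:8000"):
    """Fetch and decode the project list (requires a running backend)."""
    with urllib.request.urlopen(build_request(token, base_url)) as resp:
        return json.load(resp)

# print(list_projects("<token>"))  # needs a valid token and a running backend
```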

Need Help?

Can't find what you're looking for? We're here to help.