
Lira

Lira is a voice-first AI companion that provides real-time conversations, context-aware responses, and on-demand image generation. It listens, understands, and interacts naturally to assist users with daily tasks, emotional check-ins, and creative prompts.

Lira is a sleek Flutter mobile app (iOS/Android) that acts as your always-on voice buddy, like ChatGPT's voice mode but cozier.
It uses a cloned "grandma" voice to provide empathetic advice on daily life, emotional check-ins, quick planning (e.g., "Remind me about that meeting in Amharic?"), or just venting sessions.

  • Hands-free, real-time chat: speak naturally (even with Ethiopian accents); Lira listens live, thinks via AI, and responds in a warm, storytelling tone.
  • Privacy-first (mostly on-device), with optional Neuroviate integration for multicultural empathy.
  • Monetization can be added later via premium voices or third-party integrations.
  • Target audience: Busy individuals craving low-key wisdom, starting in Ethiopia/global diaspora.

🔥 Tech Stack & Free LLM Options

  • Frontend: Flutter (Dart)
  • Backend: Python (FastAPI)
  • AI & LLMs (free): Mistral, LLaMA, OpenRouter, HuggingFace
  • Speech Processing (free): Whisper, Vosk, Coqui TTS, flutter_tts


📸 Screenshots

(Screenshots: Lira Home Screen · Lira Voice Screen)


πŸ—οΈ UI Components

1. Home / Dashboard Screen (lib/screens/home_screen.dart)

  • User greeting with profile picture
  • "Good Morning" prompt
  • Main "Talk to AI assistant" card with Start Talking button
  • Voice and Image feature cards
  • Topics section with pill-shaped buttons
  • Information cards (Blood pressure, Sleep)
  • Bottom navigation bar with AI sparkle button

2. Voice Analysis Screen (lib/screens/voice_analysis_screen.dart)

  • "Listening..." indicator
  • Animated 3D orb visualizer with gradient colors
  • Live transcript display
  • Bottom control bar with timer, microphone button, and cancel button

3. Smart Chat Screen (lib/screens/smart_chat_screen.dart)

  • Chat interface with AI and user message bubbles
  • Sparkle icons for AI messages
  • Audio message bubbles with waveform visualization
  • Text input field with mic and add buttons
  • Pre-populated sample conversation

4. Shared Components

  • Gradient Background (lib/utils/gradient_background.dart): purple/pink gradient
  • Status Bar (lib/widgets/status_bar.dart): time, signal, WiFi, battery
  • Orb Visualizer (lib/widgets/orb_visualizer.dart): animated 3D sphere with swirling patterns

🎨 Design Features

  • Purple/pink gradient backgrounds matching app visuals
  • Rounded corners on all UI elements
  • Modern, clean aesthetic
  • Smooth animations on the orb visualizer
  • Consistent color scheme using #9B7EDE purple

🧠 Lira MVP Workflow

```mermaid
flowchart TD
    A[User speaks into Flutter app] --> B[Flutter captures audio]
    B --> C["Speech-to-Text (Whisper / Vosk / Coqui STT)"]
    C --> D[Text sent to Python FastAPI backend]
    D --> E["Backend queries free LLM (Mistral / LLaMA / OpenRouter)"]
    E --> F["AI generates agentic response (grandma voice style)"]
    F --> G[Text returned to Flutter app]
    G --> H["Text-to-Speech (Coqui TTS / flutter_tts)"]
    H --> I[Flutter plays AI voice response]
    I -->|conversation continues| A
```

Workflow explanation (a code sketch follows the list):

  1. User speaks → Flutter captures audio
  2. Audio → text via STT
  3. Python backend receives text → queries free LLM
  4. LLM generates empathetic, agentic response
  5. Text-to-speech converts AI text → voice
  6. Flutter plays voice back to user
  7. Conversation continues naturally
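
Steps 2–5 can be sketched as three backend helpers. This is a minimal illustration, not the repo's code: it assumes the faster-whisper, httpx, and Coqui TTS packages, and the helper names (transcribe, query_llm, synthesize) are hypothetical.

```python
# Illustrative STT -> LLM -> TTS pipeline (helper names are hypothetical).
import os

import httpx
from faster_whisper import WhisperModel
from TTS.api import TTS

stt_model = WhisperModel("base")                          # local Whisper (step 2)
tts_model = TTS("tts_models/en/ljspeech/tacotron2-DDC")   # Coqui TTS voice (step 5)

def transcribe(audio_path: str) -> str:
    """Step 2: audio file -> transcript."""
    segments, _info = stt_model.transcribe(audio_path)
    return " ".join(segment.text for segment in segments)

def query_llm(text: str) -> str:
    """Steps 3-4: send the transcript to a free OpenAI-compatible LLM API."""
    response = httpx.post(
        f"{os.environ['LLM_API_BASE_URL']}/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": os.environ["LLM_MODEL"],
            "messages": [
                {"role": "system",
                 "content": "You are Lira, a warm, grandmotherly companion."},
                {"role": "user", "content": text},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

def synthesize(text: str, out_path: str = "reply.wav") -> str:
    """Step 5: reply text -> audio file for Flutter to play back."""
    tts_model.tts_to_file(text=text, file_path=out_path)
    return out_path
```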

πŸ› οΈ Free Backend Setup (Python + LLM)

  • Python with FastAPI for REST API endpoints
  • Free LLM options: OpenRouter, HuggingFace Inference (Mistral, LLaMA, Grok, Qwen)
  • Speech-to-Text: Whisper (local) or Vosk
  • Text-to-Speech: Coqui TTS or flutter_tts
  • Conversation memory: store last 3–5 messages in RAM (privacy-first)

A fully free, subscription-less, privacy-friendly MVP; a minimal /chat sketch follows.
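
To make this concrete, here is a minimal sketch of the /chat route with the in-RAM memory described above. The request/response schema is an assumption for illustration, not the repo's actual backend/app code; the LLM call is stubbed so the sketch runs without an API key.

```python
# Minimal /chat sketch with short in-RAM memory (schema is illustrative).
from collections import deque

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Lira backend (sketch)")
memory: deque = deque(maxlen=5)  # privacy-first: last 5 messages, RAM only

class ChatRequest(BaseModel):
    message: str

class ChatResponse(BaseModel):
    reply: str

@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest) -> ChatResponse:
    memory.append({"role": "user", "content": request.message})
    # A real implementation would send the persona prompt plus `memory`
    # to OpenRouter/HuggingFace here; stubbed to stay self-contained.
    reply = f"(stub) Lira heard: {request.message}"
    memory.append({"role": "assistant", "content": reply})
    return ChatResponse(reply=reply)
```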


📂 Project Structure

```
Lira/
├── lib/
│   ├── screens/
│   │   ├── home_screen.dart
│   │   ├── voice_analysis_screen.dart
│   │   └── smart_chat_screen.dart
│   ├── widgets/
│   │   ├── status_bar.dart
│   │   └── orb_visualizer.dart
│   └── utils/
│       └── gradient_background.dart
├── assets/
├── backend/
│   ├── app/
│   │   ├── main.py        (FastAPI factory)
│   │   ├── routers/       (chat, stt, tts routes)
│   │   ├── schemas.py     (Pydantic models)
│   │   └── services/      (LLM provider abstractions)
│   └── requirements.txt
└── README.md
```

▶️ Getting Started

1. Clone the repository

```bash
git clone https://github.com/your-username/lira.git
cd lira
```

2. Install Flutter dependencies

```bash
flutter pub get
```

3. Run the app

```bash
flutter run
```

4. Backend setup

```bash
cd backend
pip install -r requirements.txt
cp .env.example .env
# Edit .env and add your LLM API key
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
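
With uvicorn running, you can smoke-test the API from Python. This assumes the illustrative /chat schema sketched earlier; adjust the payload to match the real routes in backend/app/routers/.

```python
# Quick smoke test against the local backend (payload matches the sketch above).
import httpx

response = httpx.post(
    "http://localhost:8000/chat",
    json={"message": "Good morning, Lira!"},
    timeout=30,
)
print(response.status_code, response.json())
```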

5. Configure environment

Edit backend/.env:

```
LLM_API_BASE_URL=https://openrouter.ai/api/v1
LLM_API_KEY=sk-...
LLM_MODEL=mistralai/mistral-7b-instruct
```
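
For reference, any OpenAI-compatible endpoint (OpenRouter included) can be reached with these three variables. A sketch assuming the openai Python package; everything except the variable names is illustrative.

```python
# How the .env values might be consumed (assumes the `openai` package).
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ["LLM_API_BASE_URL"],  # e.g. https://openrouter.ai/api/v1
    api_key=os.environ["LLM_API_KEY"],
)

completion = client.chat.completions.create(
    model=os.environ["LLM_MODEL"],  # e.g. mistralai/mistral-7b-instruct
    messages=[{"role": "user", "content": "Say hello like a kind grandmother."}],
)
print(completion.choices[0].message.content)
```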

6. Configure Flutter backend URL

Edit lib/config/api_config.dart and set your backend URL:

  • Local: http://localhost:8000
  • Android Emulator: http://10.0.2.2:8000
  • Physical device: http://YOUR_COMPUTER_IP:8000

📖 For detailed setup instructions, see SETUP.md


📌 Roadmap for MVP → Full App

  • Multi-language support (Amharic, English)
  • Premium voices & AI personality options
  • Push notifications & reminders
  • Advanced conversation memory & reasoning
  • Integrations with Neuroviate for multicultural empathy
  • Polished UI animations and orb visualizer

πŸ—£οΈ Voice + Agentic Integration Checklist

  • Audio capture (Flutter): use record or flutter_sound to stream PCM via WebSocket to the /stt endpoint. Buffer 1–2 s chunks for responsiveness.
  • Speech-to-Text (Python): replace the stub with Whisper (faster-whisper) or Vosk. Emit partial transcripts to the client so voice_analysis_screen.dart can display live text (see the sketch after this list).
  • Conversation hand-off: send the latest transcript plus last 5–10 turns to /chat. The backend keeps persona prompts + temperature settings server-side.
  • LLM provider config: switch models via the LLM_MODEL env var without touching Flutter. Supports OpenRouter, HuggingFace, or local inference once you point LLM_API_BASE_URL accordingly.
  • Text-to-Speech: call /tts with the assistant reply. Implement Coqui TTS (offline) or gTTS for a quick cloud option; Flutter plays via just_audio.
  • Memory + tools: use the conversation payload to pass lightweight memory for now; later, extend the backend to persist slots and emit tool_calls for reminders, journaling, etc.
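
To make the first two checklist items concrete, below is a minimal FastAPI WebSocket sketch for /stt that buffers incoming PCM chunks and emits partial transcripts. The transcribe_chunk helper is a hypothetical placeholder; a real implementation would plug in faster-whisper or Vosk there.

```python
# Sketch of a /stt WebSocket: buffer audio chunks, stream partial transcripts.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

def transcribe_chunk(pcm: bytes) -> str:
    """Hypothetical placeholder; swap in faster-whisper or Vosk here."""
    return f"[{len(pcm)} bytes transcribed so far]"

@app.websocket("/stt")
async def stt(websocket: WebSocket) -> None:
    await websocket.accept()
    buffer = bytearray()
    try:
        while True:
            chunk = await websocket.receive_bytes()  # 1-2 s PCM chunks from Flutter
            buffer.extend(chunk)
            # Emit a partial transcript so voice_analysis_screen.dart shows live text.
            await websocket.send_json({"partial": transcribe_chunk(bytes(buffer))})
    except WebSocketDisconnect:
        pass  # client ended the session
```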

🤝 Contributing

Pull requests welcome! Please open an issue for major changes.


📄 License

MIT License
