Lira is a sleek Flutter mobile app (iOS/Android) that acts as your always-on voice buddy, like ChatGPT's voice mode but cozier.
It uses a cloned "grandma" voice for empathetic daily-life advice, emotional check-ins, quick planning (e.g., "Remind me about that meeting in Amharic?"), or just venting sessions.
- Hands-free, real-time chat: speak naturally (even with Ethiopian accents); Lira listens live, thinks via AI, and responds in a warm, storytelling tone.
- Privacy-first (mostly on-device), with optional integration with Neuroviate for multicultural empathy.
- Monetization can be added later via premium voices or third-party integrations.
- Target audience: Busy individuals craving low-key wisdom, starting in Ethiopia/global diaspora.
- User greeting with profile picture
- "Good Morning" prompt
- Main "Talk to AI assistant" card with Start Talking button
- Voice and Image feature cards
- Topics section with pill-shaped buttons
- Information cards (Blood pressure, Sleep)
- Bottom navigation bar with AI sparkle button
- "Listening..." indicator
- Animated 3D orb visualizer with gradient colors
- Live transcript display
- Bottom control bar with timer, microphone button, and cancel button
- Chat interface with AI and user message bubbles
- Sparkle icons for AI messages
- Audio message bubbles with waveform visualization
- Text input field with mic and add buttons
- Pre-populated sample conversation
- Gradient Background (`lib/utils/gradient_background.dart`): purple/pink gradient
- Status Bar (`lib/widgets/status_bar.dart`): time, signal, WiFi, battery
- Orb Visualizer (`lib/widgets/orb_visualizer.dart`): animated 3D sphere with swirling patterns
- Purple/pink gradient backgrounds matching app visuals
- Rounded corners on all UI elements
- Modern, clean aesthetic
- Smooth animations on the orb visualizer
- Consistent color scheme using `#9B7EDE` purple
```mermaid
flowchart TD
    A[User speaks into Flutter app] --> B[Flutter captures audio]
    B --> C["Speech-to-Text (Whisper / Vosk / Coqui STT)"]
    C --> D[Text sent to Python FastAPI backend]
    D --> E["Backend queries free LLM (Mistral / LLaMA / OpenRouter)"]
    E --> F["AI generates agentic response (grandma voice style)"]
    F --> G[Text returned to Flutter app]
    G --> H["Text-to-Speech (Coqui TTS / flutter_tts)"]
    H --> I[Flutter plays AI voice response]
    I -->|User continues conversation| A
```
Workflow explanation:
- User speaks → Flutter captures audio
- Audio → text via STT
- Python backend receives text → queries free LLM
- LLM generates empathetic, agentic response
- Text-to-speech converts AI text → voice
- Flutter plays voice back to user
- Conversation continues naturally
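As a concrete reference, here is a minimal sketch of the text leg of that loop on the FastAPI side. The `query_llm` stub, the request/response fields, and the persona prompt are illustrative assumptions, not the project's actual code:

```python
# backend/app/routers/chat.py -- minimal sketch of the /chat round trip.
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

# Illustrative persona prompt, not the shipped one.
GRANDMA_PERSONA = (
    "You are Lira, a warm, storytelling grandmother. "
    "Reply with empathy, short anecdotes, and gentle advice."
)

class ChatRequest(BaseModel):
    message: str                  # latest transcript from the STT step
    conversation: list[str] = []  # last few turns, kept client-side

class ChatResponse(BaseModel):
    reply: str                    # text handed to the TTS step

async def query_llm(prompt: str) -> str:
    # Hypothetical stub: replace with the real provider call (services/).
    return "Oh sweetheart, let's take that one step at a time."

@router.post("/chat", response_model=ChatResponse)
async def chat(req: ChatRequest) -> ChatResponse:
    # Prepend the persona so the tone stays consistent across turns.
    prompt = "\n".join([GRANDMA_PERSONA, *req.conversation, req.message])
    return ChatResponse(reply=await query_llm(prompt))
```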
- Python with FastAPI for REST API endpoints
- Free LLM options: OpenRouter, HuggingFace Inference (Mistral, LLaMA, Grok, Qwen)
- Speech-to-Text: Whisper (local) or Vosk
- Text-to-Speech: Coqui TTS or flutter_tts
- Conversation memory: store the last 3–5 messages in RAM (privacy-first); see the sketch below
Fully free, no subscription required, and privacy-friendly MVP
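The RAM-only conversation memory can be as small as a bounded `deque`; a minimal sketch, assuming a 5-turn cap and illustrative names:

```python
# In-RAM conversation memory: nothing touches disk, so a restart wipes it.
from collections import deque

class ConversationMemory:
    def __init__(self, max_turns: int = 5):
        # Old turns fall off the left end automatically.
        self._turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self._turns.append(f"{role}: {text}")

    def context(self) -> str:
        # Joined into a prompt prefix for the LLM call.
        return "\n".join(self._turns)

memory = ConversationMemory()
memory.add("user", "I skipped my walk again today.")
memory.add("assistant", "That's alright, dear. Tomorrow is a fresh start.")
print(memory.context())
```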
```
Lira/
├── lib/
│   ├── screens/
│   │   ├── home_screen.dart
│   │   ├── voice_analysis_screen.dart
│   │   └── smart_chat_screen.dart
│   ├── widgets/
│   │   ├── status_bar.dart
│   │   └── orb_visualizer.dart
│   └── utils/
│       └── gradient_background.dart
├── assets/
├── backend/
│   ├── app/
│   │   ├── main.py       (FastAPI factory)
│   │   ├── routers/      (chat, stt, tts routes)
│   │   ├── schemas.py    (Pydantic models)
│   │   └── services/     (LLM provider abstractions)
│   └── requirements.txt
└── README.md
```
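Given that layout, `backend/app/main.py` would be a small application factory. This sketch assumes each router module exports an `APIRouter` named `router`:

```python
# backend/app/main.py -- sketch of the FastAPI application factory.
# Assumes routers/chat.py, routers/stt.py, and routers/tts.py each
# export an APIRouter named `router`, matching the tree above.
from fastapi import FastAPI

from app.routers import chat, stt, tts

def create_app() -> FastAPI:
    app = FastAPI(title="Lira backend")
    app.include_router(chat.router)
    app.include_router(stt.router)
    app.include_router(tts.router)
    return app

app = create_app()  # `uvicorn app.main:app` resolves to this instance
```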
Frontend (Flutter):

```bash
git clone https://github.com/your-username/lira.git
cd lira
flutter pub get
flutter run
```
Backend (FastAPI):

```bash
cd backend
pip install -r requirements.txt
cp .env.example .env
# Edit .env and add your LLM API key
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

Edit `backend/.env`:

```
LLM_API_BASE_URL=https://openrouter.ai/api/v1
LLM_API_KEY=sk-...
LLM_MODEL=mistralai/mistral-7b-instruct
```

Edit `lib/config/api_config.dart` and set your backend URL:
- Local: `http://localhost:8000`
- Android Emulator: `http://10.0.2.2:8000`
- Physical device: `http://YOUR_COMPUTER_IP:8000`
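To verify the `.env` values, a quick smoke test can hit the OpenAI-compatible `chat/completions` endpoint that OpenRouter exposes. This sketch assumes `python-dotenv` and `requests` are installed:

```python
# smoke_test.py -- confirm the .env values reach the LLM provider.
import os

import requests
from dotenv import load_dotenv  # pip install python-dotenv requests

load_dotenv()  # reads the LLM_* values from backend/.env

resp = requests.post(
    f"{os.environ['LLM_API_BASE_URL']}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
    json={
        "model": os.environ["LLM_MODEL"],
        "messages": [{"role": "user", "content": "Say hello like a grandma."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```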
For detailed setup instructions, see SETUP.md
- Multi-language support (Amharic, English)
- Premium voices & AI personality options
- Push notifications & reminders
- Advanced conversation memory & reasoning
- Integrations with Neuroviate for multicultural empathy
- Polished UI animations and orb visualizer
- Audio capture (Flutter): use `record` or `flutter_sound` to stream PCM via WebSocket to the `/stt` endpoint. Buffer 1–2 s chunks for responsiveness.
- Speech-to-Text (Python): replace the stub with Whisper (`faster-whisper`) or Vosk, as sketched after this list. Emit partial transcripts to the client so `voice_analysis_screen.dart` can display live text.
- Conversation hand-off: send the latest transcript plus the last 5–10 turns to `/chat`. The backend keeps persona prompts and temperature settings server-side.
- LLM provider config: switch models using the `LLM_MODEL` env var without touching Flutter. Supports OpenRouter, HuggingFace, or local inference once you point `LLM_BASE_URL` accordingly.
- Text-to-Speech: call `/tts` with the assistant reply (see the sketch after this list). Implement Coqui TTS (offline) or `gTTS` for a quick cloud option; Flutter plays via `just_audio`.
- Memory + tools: use the `conversation` payload to pass lightweight memory now; later extend the backend to persist slots and emit `tool_calls` for reminders, journaling, etc.
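For the Speech-to-Text step, the `/stt` stub could become a WebSocket that buffers PCM and transcribes with `faster-whisper`; the chunk size, sample rate, and model name here are assumptions, not project settings:

```python
# backend/app/routers/stt.py -- sketch of a streaming /stt endpoint.
import numpy as np
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from faster_whisper import WhisperModel  # pip install faster-whisper

router = APIRouter()
model = WhisperModel("base", device="cpu", compute_type="int8")

CHUNK_BYTES = 16000 * 2 * 2  # ~2 s of 16-bit mono PCM at 16 kHz (assumed)

@router.websocket("/stt")
async def stt(ws: WebSocket) -> None:
    await ws.accept()
    buffer = b""
    try:
        while True:
            buffer += await ws.receive_bytes()
            if len(buffer) < CHUNK_BYTES:
                continue  # keep buffering until ~2 s have arrived
            # int16 PCM -> float32 in [-1, 1], which faster-whisper accepts.
            audio = np.frombuffer(buffer, np.int16).astype(np.float32) / 32768.0
            segments, _ = model.transcribe(audio, language="en")
            await ws.send_text(" ".join(seg.text.strip() for seg in segments))
            buffer = b""
    except WebSocketDisconnect:
        pass  # client hung up; nothing to clean up in this sketch
```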
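And for the Text-to-Speech step, a quick `gTTS`-backed `/tts` route might look like this (the request model is an assumption; swap in Coqui TTS for the offline path):

```python
# backend/app/routers/tts.py -- sketch of a gTTS-backed /tts endpoint.
import io

from fastapi import APIRouter, Response
from gtts import gTTS  # pip install gTTS
from pydantic import BaseModel

router = APIRouter()

class TtsRequest(BaseModel):  # assumed request shape
    text: str         # the assistant reply coming back from /chat
    lang: str = "en"

@router.post("/tts")
def tts(req: TtsRequest) -> Response:
    buf = io.BytesIO()
    gTTS(text=req.text, lang=req.lang).write_to_fp(buf)  # MP3 bytes
    # Flutter can hand this MP3 straight to just_audio for playback.
    return Response(content=buf.getvalue(), media_type="audio/mpeg")
```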
Pull requests welcome! Please open an issue for major changes.
MIT License