XiaoxingAI
A local-LLM-enabled AI processing and automation system that ingests incoming data, analyses and stores it, and delivers results through configurable channels.
- Built a production-style AI platform using React, TypeScript, FastAPI, PostgreSQL, and Redis, supporting multi-user automation workflows and internal tooling for AI-driven operations.
- Developed a modular React dashboard for worker monitoring, live logs, prompt management, debugging, and user administration, with hot-reload configuration to improve operational efficiency.
- Implemented real-time bot interaction flows with conversation state, per-bot memory, and tool-routing logic, improving response consistency and enabling more reliable multi-session usage.
- Improved system reliability through structured logging, cache-backed deduplication, JWT-based access control, and concurrency-safe chat handling across multiple active bots.
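The concurrency-safe chat handling above can be sketched as per-bot conversation memory guarded by per-bot locks, so messages to different bots never contend. This is a minimal illustrative sketch, not the actual implementation; the class and method names are assumptions.

```python
import threading
from collections import defaultdict


class BotSessions:
    """Per-bot conversation memory with one lock per bot (illustrative).

    Concurrent messages to *different* bots proceed in parallel; messages
    to the *same* bot are serialised, keeping its history consistent.
    """

    def __init__(self) -> None:
        self._history: dict[str, list[str]] = defaultdict(list)
        self._locks: dict[str, threading.Lock] = defaultdict(threading.Lock)
        self._registry_lock = threading.Lock()  # guards the lock registry itself

    def _lock_for(self, bot_id: str) -> threading.Lock:
        # Creating a bot's lock must itself be thread-safe.
        with self._registry_lock:
            return self._locks[bot_id]

    def append(self, bot_id: str, message: str) -> list[str]:
        """Append a message to a bot's history and return a snapshot of it."""
        with self._lock_for(bot_id):
            self._history[bot_id].append(message)
            return list(self._history[bot_id])
```

A real deployment would persist the history and cap its length, but the locking shape stays the same.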
Problem
AI workflows often depend too heavily on external services and lack clear control over reliability, storage, and recovery.
The goal was to build a more flexible system that could combine local inference, automation, and observable processing behaviour.
Approach
Designed the system as a modular backend pipeline with storage, analysis, retry handling, and monitoring support.
Integrated a local LLM behind a thin abstraction so the architecture stays flexible and easy to experiment with.
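One way to keep local inference swappable with hosted APIs is a small backend protocol. A minimal sketch, assuming this shape; the class names and `get_backend` helper are illustrative, not the project's real API.

```python
from typing import Protocol


class ModelBackend(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


class LocalBackend:
    """Stand-in for a llama.cpp-backed local model (illustrative)."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


class ApiBackend:
    """Stand-in for a hosted-API model."""

    def complete(self, prompt: str) -> str:
        return f"[api] {prompt}"


def get_backend(name: str) -> ModelBackend:
    """Pick a backend by configuration key, so switching is a config change."""
    backends = {"local": LocalBackend, "api": ApiBackend}
    return backends[name]()
```

Because callers only see `ModelBackend`, switching between local inference and an API model is a configuration change rather than a code change.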
Implementation
AI processing pipeline
- Built a Python-based backend that processes incoming data through ingestion, analysis, and result delivery stages
- Structured the workflow so individual stages can evolve without coupling the rest of the system
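The staged pipeline above can be sketched as a list of composable stage functions over a shared item type. This is a hedged sketch of the pattern, not the project's actual code; the stage bodies are placeholders.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Item:
    raw: str
    analysis: str = ""
    delivered: bool = False


# A stage is any callable that takes an Item and returns an Item.
Stage = Callable[[Item], Item]


def ingest(item: Item) -> Item:
    item.raw = item.raw.strip()  # normalise incoming data
    return item


def analyse(item: Item) -> Item:
    item.analysis = f"summary of {item.raw!r}"  # placeholder for the model call
    return item


def deliver(item: Item) -> Item:
    item.delivered = True  # placeholder for configurable result delivery
    return item


def run_pipeline(item: Item, stages: list[Stage]) -> Item:
    """Run the item through each stage in order."""
    for stage in stages:
        item = stage(item)
    return item
```

Because stages share only the `Item` contract, any one of them can be rewritten, reordered, or replaced without touching the others, which is what keeps the system loosely coupled.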
Model integration and state handling
- Integrated llama.cpp for local inference while keeping support for API-based model switching
- Used SQLite to store system state, log activity, prevent duplicates, and support recovery after restart
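Using SQLite for both state and deduplication can hinge on a single primary-key constraint: inserting an item id succeeds exactly once. A minimal sketch of that idea, with an assumed table layout and function names:

```python
import sqlite3


def init_state(conn: sqlite3.Connection) -> None:
    """Create the processing-state table if it does not exist yet."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS processed (
               item_id    TEXT PRIMARY KEY,
               status     TEXT NOT NULL,
               updated_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )


def claim(conn: sqlite3.Connection, item_id: str) -> bool:
    """Record the item for processing; return False if it is a duplicate.

    The PRIMARY KEY makes the second insert fail, so a restart that replays
    old inputs cannot process the same item twice.
    """
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO processed (item_id, status) VALUES (?, 'pending')",
                (item_id,),
            )
        return True
    except sqlite3.IntegrityError:
        return False
```

After a restart, reopening the same database file restores the `processed` table, which is what makes recovery possible without replaying completed work.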
Observability and control
- Implemented retry logic, structured logging, and failure isolation for long-running tasks
- Built a lightweight React dashboard for monitoring behaviour, managing configuration, and debugging workflows
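The retry logic with structured logging can be sketched as a small wrapper that emits a JSON log line per failure and backs off exponentially. The function name and log fields are illustrative assumptions:

```python
import json
import logging
import time
from typing import Callable, TypeVar

T = TypeVar("T")

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


def with_retry(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.1) -> T:
    """Run fn, retrying on failure with exponential backoff.

    Each failure is logged as a structured (JSON) event so that a
    monitoring dashboard can filter and count them.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            log.warning(json.dumps({
                "event": "task_failed",
                "attempt": attempt,
                "error": str(exc),
            }))
            if attempt == attempts:
                raise  # failure isolation: the caller decides what to do next
            time.sleep(base_delay * 2 ** (attempt - 1))
    raise AssertionError("unreachable")
```

Because the final failure re-raises instead of being swallowed, one broken long-running task surfaces cleanly without taking down the surrounding pipeline.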
Result
Delivered a working AI automation system with local model support, persistent state tracking, and a monitoring interface for debugging and operations.
Lessons
- AI systems become much more usable when observability and recovery are treated as first-class concerns.
- Local model support is valuable, but the surrounding workflow design matters more than the model alone.