Mission
Our mission is to democratize artificial intelligence by developing open-source, practical solutions that tackle real-world challenges. We believe AI should be accessible to everyone, and we advance this goal through collaborative research, innovative tools, and community-driven development.
Our AI Research & Open-Source Projects
Discover our initiatives in agentic AI, LLM infrastructure, and practical machine learning. We build open-source tools that solve real-world problems through collaborative research and community-driven development.
Project list
- Model Gateway - A high-performance gateway for intelligent routing and management of Large Language Model requests across multiple providers. Features automatic failover, cost optimization, and request caching.
- Random Number MCP Server - An MCP server that generates random numbers using national weather data as entropy sources. Built with FastMCP framework and includes comprehensive test coverage.
- QuranLLM - An LLM-powered search application for the Quran that leverages artificial intelligence to enable intelligent querying and semantic search of Islamic scripture.
- Recovering text from an obsolete PDF - Recovers Bengali text from a legacy PDF memoir encoded in the proprietary SutonnyMJ ANSI font by running Tesseract 5.x's LSTM-based OCR on rasterized page images, achieving 98-99% accuracy. The machine learning approach bypassed the intractable problem of reverse-engineering ANSI-to-Unicode font mappings by treating visual glyph recognition as an image classification task, converting the document to clean UTF-8 Unicode text. Technical report
- Tangle - Deadlock and livelock detection for multi-agent AI workflows. Monitors agent interactions in real time, detects when agents are stuck or looping without progress, and triggers configurable resolution actions. Works as an embedded Python library or standalone FastAPI sidecar with native LangGraph and OpenTelemetry support.
- Reverb - A semantic response cache with knowledge-aware invalidation for LLM applications. Provides two-tier caching (exact match and semantic similarity) to reduce redundant LLM calls, with automatic invalidation when the underlying knowledge base changes. Built in Go with pluggable backends and standalone HTTP/gRPC servers.
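The failover behavior described for Model Gateway can be sketched in a few lines of Python. This is an illustrative sketch, not the gateway's actual code: the `route` function and the provider callables are hypothetical stand-ins for real provider clients.

```python
from typing import Callable

def route(prompt: str, providers: list[Callable[[str], str]]) -> str:
    """Try each provider in priority order; on failure, fall over
    to the next one (a simplified model of automatic failover)."""
    errors = []
    for call in providers:
        try:
            return call(prompt)
        except Exception as exc:
            errors.append(exc)  # remember why this provider failed
            continue
    raise RuntimeError(f"all providers failed: {errors}")

def flaky_provider(prompt: str) -> str:
    raise TimeoutError("upstream timed out")

def backup_provider(prompt: str) -> str:
    return f"echo: {prompt}"
```

Ordering the list by cost per token gives a crude form of the cost optimization the gateway advertises: cheaper providers are tried first, and more expensive ones only serve as fallbacks.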
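The idea behind the Random Number MCP Server, using external observations as an entropy source, can be illustrated as follows. The scheme shown (hashing observation strings with SHA-256 and reducing modulo the range) is an assumption for illustration, not the server's actual algorithm.

```python
import hashlib

def weather_random(observations: list[str], lo: int, hi: int) -> int:
    """Derive an integer in [lo, hi] from weather observations
    treated as an entropy source. Deterministic for identical input;
    fresh observations yield fresh numbers. Modulo bias is negligible
    when the range is tiny relative to the 64-bit hash prefix."""
    digest = hashlib.sha256("|".join(observations).encode()).digest()
    value = int.from_bytes(digest[:8], "big")
    return lo + value % (hi - lo + 1)
```

In the real server this function would sit behind an MCP tool endpoint so that LLM clients can request random numbers over the protocol.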
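Tangle's livelock detection, spotting agents that loop without progress, can be sketched by fingerprinting recent agent outputs and flagging repeats inside a sliding window. The `LoopDetector` class and its parameters are hypothetical illustrations of the idea, not Tangle's actual API.

```python
import hashlib
from collections import deque

class LoopDetector:
    """Flags an agent as looping when the same (agent, output)
    fingerprint recurs too often within a sliding window."""

    def __init__(self, window: int = 8, max_repeats: int = 3):
        self.window = deque(maxlen=window)  # recent fingerprints
        self.max_repeats = max_repeats

    def record(self, agent: str, output: str) -> bool:
        fp = hashlib.sha256(f"{agent}:{output}".encode()).hexdigest()
        self.window.append(fp)
        # True means this exact output has recurred enough times
        # recently to suggest a livelock rather than progress.
        return self.window.count(fp) >= self.max_repeats
```

A real monitor would hook this into the workflow's message stream and, on a positive signal, trigger one of the configurable resolution actions (interrupting the agent, escalating to a supervisor, and so on).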
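Reverb's two-tier lookup (exact match first, semantic similarity second) can be sketched in Python; the real project is written in Go, and both the `TwoTierCache` class and the bag-of-words embedding below are illustrative stand-ins for a production embedding model.

```python
import hashlib
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TwoTierCache:
    """Tier 1: exact match on a hash of the prompt.
    Tier 2: most similar cached prompt above a threshold."""

    def __init__(self, threshold: float = 0.8):
        self.exact = {}      # prompt hash -> response
        self.semantic = []   # (embedding, response) pairs
        self.threshold = threshold

    def _key(self, prompt: str) -> str:
        return hashlib.sha256(prompt.encode()).hexdigest()

    def get(self, prompt: str):
        hit = self.exact.get(self._key(prompt))
        if hit is not None:
            return hit
        query = embed(prompt)
        best, best_sim = None, 0.0
        for vec, response in self.semantic:
            sim = cosine(query, vec)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, prompt: str, response: str):
        self.exact[self._key(prompt)] = response
        self.semantic.append((embed(prompt), response))

    def invalidate(self):
        # Knowledge-aware invalidation, crudest form: drop every
        # cached answer when the underlying knowledge base changes.
        self.exact.clear()
        self.semantic.clear()
```

The sketch invalidates the whole cache on any knowledge change; a knowledge-aware implementation would instead track which entries depend on which documents and evict only those.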