Work stream 2: Edge Database & Algorithms
Leaders: @Rick Cao, @Qi Tang
Objective: The Edge Database work stream is dedicated to advancing database solutions tailored for edge computing environments. It targets improvements in data handling and storage capabilities on edge devices, enhancing local data processing and decision-making.
Approach: Efforts will include the development of lightweight, scalable database systems that support real-time data processing and analytics, pivotal for Edge AI Virtual Agents.
Background
Our initial proposal covers some use cases (Slide #4) of AI virtual agents on the edge.
We are open to more real-life use cases and edge business service ideas.
The current direction is to implement an AI agent LLM service (e.g. customer service on the edge).
Objectives
Two long-term goals of this work stream:
1) (Interoperability) Define a clean, standardized, and lightweight data and data-processing interface for AI LLM services on the edge
2) (Non-expert) Deliver a non-expert, on-premise, low-cost AI virtual agent solution for local and small businesses (e.g. FAQ / customer LLM services in a local store)
Features
Data content / context
Real-time information
Location-specific information
Event information
Function calling (_functions, Python scripts)
Input Data types
Unstructured: PDF, HTML, Audio, Image, Video, etc.
Structured: SQL, vector stores, knowledge graphs
Files: JSON, CSV
APIs
Off-the-shelf vector database and embedding models (a minimal sketch follows this list)
Vector database
LangChain
LlamaIndex
Sentence Transformers
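As an illustration of how these off-the-shelf pieces compose, here is a minimal sketch using Sentence Transformers for embedding and semantic search; the model name and corpus are illustrative assumptions, and an in-memory list stands in for a real vector database:

from sentence_transformers import SentenceTransformer, util

# A small embedding model, chosen here for edge-friendliness
# (an assumption for illustration, not a project decision).
model = SentenceTransformer("all-MiniLM-L6-v2")

# An in-memory corpus stands in for a real vector database.
corpus = [
    "Store hours are 9am to 6pm on weekdays.",
    "Returns are accepted within 30 days with a receipt.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Embed the query and run semantic search over the corpus.
query_embedding = model.encode("When are you open?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(corpus[hits[0][0]["corpus_id"]])  # -> the store-hours sentence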
Data Access Authentication
Public access
Private data (Enterprise use cases)
Protocols
Algorithms
semantic search
data chunking (a minimal sketch follows this list)
ranking
recommendation
LoRA adapters
data routing
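For the data chunking item above, a minimal sketch of fixed-size chunking with overlap; the window and overlap sizes are illustrative assumptions, not project defaults:

def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character windows for embedding/indexing."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap
    # Step forward by (chunk_size - overlap) so adjacent chunks share context.
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

document = "example text " * 1000  # placeholder for extracted PDF/HTML text
print(len(chunk_text(document)))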
Evaluation and analytics
Click-through rate (from the reference links; a minimal sketch follows this list)
User feedback
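A minimal sketch of computing click-through rate from logged reference-link events; the record shape below is a hypothetical assumption for illustration:

def click_through_rate(events: list[dict]) -> float:
    """CTR = answers whose reference links were clicked / answers that showed links."""
    shown = sum(1 for e in events if e.get("links_shown", 0) > 0)
    clicked = sum(1 for e in events if e.get("links_clicked", 0) > 0)
    return clicked / shown if shown else 0.0

events = [
    {"links_shown": 3, "links_clicked": 1},
    {"links_shown": 2, "links_clicked": 0},
]
print(click_through_rate(events))  # 0.5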
Experiment
An Example Scenario
Company A wants to leverage AI to build powerful agent-like services with its existing data (SQL, CSV, PDF, …)
Build a lightweight database for efficient retrieval
The service is successfully deployed and starts to receive customer feedback (e.g. analytics, LLM inputs/outputs stored)
Is there a way to refine the current solution to improve the services?
Store the LLM inputs/outputs efficiently and label them properly (a minimal sketch follows this list)
Train/tune a model to gain extra capability over the current one, given the new data
Data serialization/de-serialization for scalability
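A minimal sketch of the storage step above, assuming SQLite with a hypothetical schema and JSON for the serialization/de-serialization of per-record metadata:

import json
import sqlite3

conn = sqlite3.connect("llm_log.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS llm_log (
           id INTEGER PRIMARY KEY,
           prompt TEXT NOT NULL,
           response TEXT NOT NULL,
           label TEXT,   -- e.g. 'helpful' / 'unhelpful' from user feedback
           meta TEXT     -- JSON-serialized context, for scalability
       )"""
)
conn.execute(
    "INSERT INTO llm_log (prompt, response, label, meta) VALUES (?, ?, ?, ?)",
    ("When are you open?", "9am to 6pm on weekdays.", "helpful",
     json.dumps({"source": "faq.pdf", "latency_ms": 120})),
)
conn.commit()

# De-serialize later to assemble training/tuning data.
for prompt, response, label, meta in conn.execute(
        "SELECT prompt, response, label, meta FROM llm_log"):
    record = {"prompt": prompt, "response": response,
              "label": label, **json.loads(meta)}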
Timeline
Phase 1: (Prototyping) 1) adopt off-the-shelf solutions; 2) provide benchmark results; 3) develop the evaluation metrics for edge database/algorithm solutions
Phase 2: (Standardization) Based on the lessons learned, address scalability and interoperability
Version history
(08/20/2024) Initial proposal: outline of work scope, objectives, and approach
(11/05/2024) First design and implementation: https://docs.google.com/presentation/d/19I07kyHKRLvaatBtbagcpdoaicc_UWMcqMzsGc6wDic/edit#slide=id.g312466b3fbd_0_0
(11/19/2024) First demo
Before: the user selects the predefined product that most closely meets the need
Now: the user defines how they want to use the data, and the code/product is generated for this specific need automatically
#About:
- Data is converted into a retrievable format and hosted in the local directory
- Standalone retrieval code is generated for easy usage (including function calling from LLMs)
- Supports autonomous, customizable retrieval
#Usage:
- python worker.py "./demo/"
#Generated files:
- .cache/storage stores the generated index
- .cache/api.yaml is generated for creating an OpenAPI spec for external agent builds
- rag.py is generated for function calling by LLMs (a hypothetical sketch follows this list)
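A hypothetical sketch of what the generated rag.py could look like; it reloads the persisted index from .cache/storage via llama_index and assumes an LLM/embedding backend is configured, but it is not the actual generated code:

from llama_index.core import StorageContext, load_index_from_storage

def rag(query: str) -> str:
    """Retrieve an answer for `query` from the locally indexed data.

    Exposed as a standalone function so an external LLM can invoke it
    via function calling.
    """
    storage = StorageContext.from_defaults(persist_dir=".cache/storage")
    index = load_index_from_storage(storage)
    return str(index.as_query_engine().query(query))

if __name__ == "__main__":
    print(rag("What does the user manual say about setup?"))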
#To-Do-List:
- Refine the interface file format and code structure for scalability and robustness
- Add evaluator.py to evaluate the RAG performance
- Add "generate prompt" function to generate a prompt for LLM function calling
- Clean up the api.yaml file and demonstrate the function calling by external LLMs
- Support other common data formats: images / CSV / SQL ...
- License and legal
(02/25/2025) Initial tech stack:
Huggingface/smolagents is used for the underlying agent framework
Div99/agent-protocol (to be investigated) for the API framework
E2B (to be investigated) for sandboxed code execution
llama_index is used for the data retrieval framework
SQLAlchemy is used for the structured data retrieval framework (a minimal sketch of the retrieval pieces follows this list)
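A minimal sketch of the two retrieval pieces of this stack, llama_index for unstructured files and SQLAlchemy for structured tables; the paths, table, and connection string are illustrative assumptions, and the agent/API layers (smolagents, agent-protocol, E2B) are omitted:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from sqlalchemy import create_engine, text

# Unstructured: index local documents (assumes an embedding/LLM backend
# is configured for llama_index, e.g. via environment variables).
docs = SimpleDirectoryReader("./demo/").load_data()
index = VectorStoreIndex.from_documents(docs)
index.storage_context.persist(persist_dir=".cache/storage")

# Structured: query a local SQL database through SQLAlchemy.
engine = create_engine("sqlite:///local_store.db")
with engine.connect() as conn:
    rows = conn.execute(text("SELECT name, price FROM products LIMIT 5")).all()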
Note: What is a data agent? Should we hardcode a retriever in a few lines of code, or ask the LLM to generate the code? By analogy to autonomous driving, an L5 self-driving car can transport the user from place A to place B without any intervention (no need to learn to drive), even when the road environment has never been seen before. So perhaps the ultimate goal of this agent implementation is a no-code implementation: any code becomes a posteriori, a by-product, and if the code proves useful we may not need to write it again. Anyway, some thoughts I had when starting to code this project:
Minimalist: any code will be replaced by a future AI code agent anyway
An AI agent is bounded by its resources and environment (dependencies, low-level kernels, compute resources, data size, …), so the agent should be aware of those resources (just as a self-driving car should be aware of its environment)
LLMs are very good at creating ideas, plans, and implementations, but still lack intention, or a way to fit the user's intention
Some use cases to be done
User manual (50 pages of PDF) turned into a QA agent
Synthetic hospital patient documents for metadata retrieval
Sensor data CSV file into pandas code (a minimal sketch follows this list)
Regulatory/compliance document (PDF) into a compliance-check agent
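A minimal sketch of the kind of pandas code the sensor-CSV use case might generate; the column names and threshold are hypothetical:

import pandas as pd

# Load time-stamped sensor readings and compute hourly averages.
df = pd.read_csv("sensor.csv", parse_dates=["timestamp"])
hourly = df.set_index("timestamp").resample("1h")["temperature"].mean()

# Flag hours whose average temperature exceeds an illustrative threshold.
alerts = hourly[hourly > 30.0]
print(alerts.head())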
(03/18/2025) MVP release: GitHub - lfedgeai/eda: Data on-Prem, Code on-the-Fly
eda cli commands:
eda rag "please retrieve the data and build a rag system for ../examples/data/pdf"
eda run "you can ask any request here about the database"
eda learn "learn my database thoroughly ../examples/data/3gpp/"  # for large and complicated databases
eda mcp "please serve the data under ../examples/data/finance externally 0.0.0.0"
There are 3 levels of intelligence for the data retrieval agents:
Level 1: eda rag ⇒ rag.py Python code ⇒ most efficient (compute only, no LLM tokens)
Level 2: eda run ⇒ multi-step reflections ⇒ queries the LLM for better summarization/search
Level 3: eda learn ⇒ knowledge graph and MemGPT ⇒ costly, but full understanding of the documents