Work stream 2: Edge Database & Algorithms
Leaders: @Rick Cao, @Qi Tang
Objective: The Edge Database work stream is dedicated to advancing database solutions tailored for edge computing environments. It targets improvements in data handling and storage capabilities on edge devices, enhancing local data processing and decision-making.
Approach: Efforts will include the development of lightweight, scalable database systems that support real-time data processing and analytics, pivotal for Edge AI Virtual Agents.
Background
Our initial proposal covers some use cases (Slide #4) of AI virtual agents on the edge.
We are open to more real-life use cases and edge business service ideas.
The current direction is to implement an AI agent LLM service (e.g., customer service on the edge).
Objectives
Two long-term goals of this work stream:
1) (Interoperability) Define a clean, standardized, and lightweight data and data-processing interface for AI LLM services on the edge (see the interface sketch after this list)
2) (Non-expert) Deliver a non-expert, on-premise, low-cost AI virtual agent solution for local and small businesses (e.g., FAQ / customer LLM services in a local store)
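To make the interoperability goal concrete, here is a minimal sketch of what such a standardized interface could look like; EdgeRetriever, ingest, and retrieve are illustrative names, not a settled API:

from typing import Any, Protocol

class EdgeRetriever(Protocol):
    """Hypothetical minimal interface an edge LLM data service could standardize on."""

    def ingest(self, source: str) -> None:
        """Index a data source (file path, URL, or table) into the local store."""
        ...

    def retrieve(self, query: str, top_k: int = 5) -> list[dict[str, Any]]:
        """Return the top_k most relevant records, each with text and metadata."""
        ...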
Features
Data content / context
Real-time information
Location-specific information
Event information
Function calling (function definitions, Python scripts); a minimal sketch follows below
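As a sketch of the function-calling feature, an OpenAI-style tool schema paired with a local Python function the agent can dispatch to; get_store_hours and the schema are made-up examples, not part of the current design:

import json

# Hypothetical local function the edge agent can call.
def get_store_hours(day: str) -> str:
    hours = {"saturday": "10:00-16:00", "sunday": "closed"}
    return hours.get(day.lower(), "09:00-18:00")

# OpenAI-style tool schema describing the function to the LLM.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_store_hours",
        "description": "Return opening hours for a given day",
        "parameters": {
            "type": "object",
            "properties": {"day": {"type": "string"}},
            "required": ["day"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route an LLM tool call to the matching local Python function."""
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "get_store_hours":
        return get_store_hours(**args)
    raise ValueError(f"unknown tool: {tool_call['name']}")

print(dispatch({"name": "get_store_hours", "arguments": '{"day": "saturday"}'}))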
Input Data types
Unstructured: PDF, HTML, Audio, Image, Video, etc.
Structured: SQL, vector stores, knowledge graphs
Files: JSON, CSV
APIs
Off-the-shelf vector database and embedding models
Vector database
LangChain
LlamaIndex
Sentence Transformers (a minimal usage sketch follows below)
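A minimal sketch of this off-the-shelf path, assuming the Sentence Transformers library with brute-force cosine search standing in for a full vector database:

import numpy as np
from sentence_transformers import SentenceTransformer

# Embed a small document set with an off-the-shelf model (all-MiniLM-L6-v2 is one common choice).
model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Store hours: Mon-Fri 9:00-18:00.",
    "Returns are accepted within 30 days with a receipt.",
    "We ship within the EU only.",
]
doc_emb = model.encode(docs, normalize_embeddings=True)

# With normalized embeddings, the dot product equals cosine similarity.
query_emb = model.encode(["When are you open on weekdays?"], normalize_embeddings=True)
scores = doc_emb @ query_emb.T
for i in np.argsort(-scores[:, 0])[:2]:
    print(round(float(scores[i, 0]), 3), docs[i])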
Data Access Authentication
Public access
Private data (Enterprise use cases)
Protocols
Algorithms
Semantic search
Data chunking (see the sketch after this list)
Ranking
Recommendation
LoRA adapters
Data routing
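For the data-chunking item, a minimal fixed-size chunker with overlap; the 512/64 defaults are placeholders to be tuned per embedding model:

def chunk_text(text: str, size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character chunks so retrieval keeps local context."""
    if not 0 <= overlap < size:
        raise ValueError("overlap must be non-negative and smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks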
Evaluation and analytics
Click-through rate (from the reference links)
User feedback (a small aggregation sketch follows below)
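A small sketch of how these two signals could be aggregated; AnswerEvent is a made-up schema for illustration:

from dataclasses import dataclass
from typing import Optional

@dataclass
class AnswerEvent:
    """One served answer: whether a reference link was shown/clicked, plus any user rating."""
    link_shown: bool
    link_clicked: bool
    rating: Optional[int]  # e.g. 1 = thumbs up, 0 = thumbs down, None = no feedback

def summarize(events: list[AnswerEvent]) -> dict[str, float]:
    """Aggregate click-through rate and average user rating over logged events."""
    shown = [e for e in events if e.link_shown]
    rated = [e.rating for e in events if e.rating is not None]
    return {
        "ctr": sum(e.link_clicked for e in shown) / len(shown) if shown else 0.0,
        "avg_rating": sum(rated) / len(rated) if rated else 0.0,
    }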
Experiment
An Example Scenario
Company A wants to leverage AI to build a powerful agent-like service with its current data (SQL, CSV, PDF, …)
Build a lightweight database for efficient retrieval
The service is successfully deployed and starts to receive customer feedback (e.g., analytics and stored LLM inputs/outputs)
Is there a way to refine the current solution to improve the services?
Store the LLM input/output data efficiently and label it properly (see the logging sketch after this list)
Train / tune a model to gain extra capability over the current one, given the new data
Data serialization / deserialization for scalability
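For the storage and labeling step, a minimal sketch using JSON Lines (one record per interaction), which keeps logs append-only and cheap to serialize/deserialize at scale; the field names are placeholders:

import json
import time
from typing import Optional

def log_interaction(path: str, prompt: str, response: str,
                    label: Optional[str] = None) -> None:
    """Append one LLM input/output pair as a JSON line; labels can be filled in later for tuning."""
    record = {"ts": time.time(), "prompt": prompt, "response": response, "label": label}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")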
Timeline
Phase 1 (Prototyping): 1) adopt off-the-shelf solutions; 2) provide benchmark results; 3) develop the evaluation matrix for edge database/algorithm solutions
Phase 2 (Standardization): based on the lessons learned, address scalability and interoperability
Version history
(08/20/2024) Initial proposal: outline of work scope, objectives, and approach
(11/05/2024) First design and implementation: https://docs.google.com/presentation/d/19I07kyHKRLvaatBtbagcpdoaicc_UWMcqMzsGc6wDic/edit#slide=id.g312466b3fbd_0_0
(11/19/2024) First demo
Before: the user selects a predefined product that closely meets the need
Now: the user defines how they want to use the data, and the code / product is generated automatically for that specific need
#About:
- Data is converted into a retrievable format and hosted in the local directory
- Standalone retrieval code is generated for easy usage (including function calling from LLMs)
- Supports autonomous, customizable retrieval
#Usage:
- python worker.py "./demo/"
#Generated files:
- .cache/storage stores the generated index
- .cache/api.yaml is generated as an OpenAPI spec for building external agents
- rag.py is generated for function calling by LLMs (a sketch of its likely shape follows)
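For orientation, a guess at the shape of the generated rag.py, assuming a LlamaIndex index persisted under .cache/storage; the actual generated file may differ:

from llama_index.core import StorageContext, load_index_from_storage

# Reload the index that worker.py persisted to the local cache directory.
storage = StorageContext.from_defaults(persist_dir=".cache/storage")
index = load_index_from_storage(storage)

def rag(query: str) -> str:
    """Answer a query over the local index; exposed to LLMs via function calling."""
    return str(index.as_query_engine().query(query))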
#To-Do-List:
- Refine the interface file format and code structure for scalability and robustness
- Add evaluator.py to evaluate the RAG performance
- Add a "generate prompt" function to generate a prompt for LLM function calling
- Clean up the api.yaml file and demonstrate function calling by external LLMs
- Support other common data formats (images / CSV / SQL / ...)
- License and legal