
// TODO: update project name in all places.


  • Leaders: Wilson Wang, Tina Tsou
  • Objective: To design, develop, and deploy a robust Agent-as-a-Service (AaaS) platform that uses edge computing to run AI models locally on edge devices. By deploying machine-learning agents closer to data sources and end users, the platform improves performance, reduces latency, and scales efficiently, ensuring real-time processing of AI tasks.
  • Approach: Design and implement a scalable platform that leverages edge computing to deploy and manage AI models on edge devices, delivering real-time processing, reduced latency, and enhanced performance. The work proceeds through thorough requirements analysis, robust architecture design, seamless integration, and continuous monitoring and optimization.


Introduction

The Edge AaaS (Edge Agent-as-a-Service) project aims to revolutionize the deployment of AI agents on edge devices using a Function-as-a-Service (FaaS) system. This innovative approach enhances user access to AI models by leveraging the computational power and low latency of edge computing.

...

Integrating Edge Function-as-a-Service (FaaS) to serve AI agent workloads amplifies the advantages of edge computing and LLM-powered AI agents. Edge FaaS offers a fast, lightweight, serverless architecture for deploying and executing AI tasks at the network's edge: the serverless model eliminates the need to maintain dedicated infrastructure, letting developers focus on building and optimizing AI functionality, while resources are allocated dynamically and efficiently, reducing overhead and enabling rapid scaling to accommodate varying workloads. By executing AI processes closer to end users, Edge FaaS minimizes latency and accelerates response times, which is essential for real-time applications and services. The architecture also supports seamless updates and scaling, so AI agents always run the most current models and can handle increased demand without performance degradation. In short, serving AI agent workloads with Edge FaaS combines the speed, flexibility, and efficiency of serverless computing with the capabilities of LLMs and edge technology, delivering robust and responsive AI solutions.


Objectives

Primary Goals

1. Accelerate User Access to AI Models:

...

Utilize FaaS to enable dynamic management and scaling of AI services, ensuring the system can adapt to varying workloads and demands efficiently.
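One simple way to realize this kind of dynamic scaling is to derive a replica count from observed load. The sketch below assumes a queue-depth signal and a fixed per-replica capacity; both the function name and the scale-to-zero policy are illustrative, not a committed design.

```python
def desired_replicas(queue_depth: int, per_replica_capacity: int,
                     min_replicas: int = 0, max_replicas: int = 8) -> int:
    """Scale to zero when idle; otherwise one replica per capacity slice,
    clamped to the configured bounds."""
    if queue_depth <= 0:
        return min_replicas
    needed = -(-queue_depth // per_replica_capacity)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))
```

A control loop would call this periodically and reconcile the running instance count toward the returned value, which is how FaaS platforms typically absorb bursty workloads.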

Secondary Goals

1. Energy Efficiency:

Optimize the deployment and operation of AI agents on edge devices to minimize energy consumption, contributing to sustainability goals and reducing operational costs.
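One concrete lever for this goal is batching: waking an accelerator has a fixed energy cost, so grouping pending inferences amortizes it. The model below is a back-of-the-envelope sketch; the two cost constants are invented for illustration and would need to be measured on the target hardware.

```python
WAKE_COST_MJ = 50.0   # assumed fixed cost to wake the accelerator (millijoules)
ITEM_COST_MJ = 2.0    # assumed marginal cost per inference (millijoules)

def energy_mj(num_requests: int, batch_size: int) -> float:
    """Estimated energy: one wake-up per batch plus a per-request cost."""
    batches = -(-num_requests // batch_size)  # ceiling division
    return batches * WAKE_COST_MJ + num_requests * ITEM_COST_MJ
```

Under these assumed constants, serving 8 requests one at a time costs 416 mJ versus 66 mJ in a single batch, which is the kind of trade-off (energy versus added queueing latency) the platform would tune per device.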

...

Enable user-specific customization of AI models and services, allowing for personalized experiences and adaptability to diverse user needs, thereby enhancing user satisfaction and engagement.
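Per-user customization is often implemented as user overrides layered over platform defaults. The sketch below assumes a flat settings dictionary and rejects unknown keys; the specific setting names (`temperature`, `max_tokens`, `persona`) are hypothetical examples, not the platform's actual configuration schema.

```python
# Assumed platform-wide defaults; every user starts from these.
DEFAULTS = {"temperature": 0.2, "max_tokens": 256, "persona": "neutral"}

def resolve_config(user_overrides: dict) -> dict:
    """Merge per-user overrides over the defaults, rejecting unknown keys
    so a typo in a user profile fails loudly instead of being ignored."""
    unknown = set(user_overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown settings: {sorted(unknown)}")
    return {**DEFAULTS, **user_overrides}
```

Validating against the default schema keeps personalization safe: users can only tune knobs the platform exposes.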


Scope

In-Scope

1. Development of AI Agent Deployment Mechanisms on Edge Devices: Design, develop, and implement mechanisms for deploying AI agents on various edge devices to ensure efficient and effective operation.

...

6. Monitoring and Maintenance: Implement monitoring tools and maintenance protocols to ensure the continuous and optimal performance of AI agents and the FaaS platform.
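The monitoring item above could start with something as simple as a rolling latency window checked against a service-level objective. This is a minimal sketch, assuming a median-latency SLO; the class name, window size, and 200 ms threshold are placeholders for whatever metrics and thresholds the project settles on.

```python
import statistics
from collections import deque

class LatencyMonitor:
    """Rolling window of per-invocation latencies with a simple health check."""

    def __init__(self, window: int = 100, slo_ms: float = 200.0):
        # deque(maxlen=...) drops the oldest sample automatically.
        self.samples = deque(maxlen=window)
        self.slo_ms = slo_ms

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def healthy(self) -> bool:
        """Healthy when there is no data yet or the median meets the SLO."""
        if not self.samples:
            return True
        return statistics.median(self.samples) <= self.slo_ms
```

A maintenance loop could poll `healthy()` per device and trigger redeployment or alerting when it flips false.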

Out-of-Scope

1. Large Language Model (LLM) Internals and Implementations: The project will not cover the internal workings or implementation details of large language models.

...

4. End-User Application Development: Development of end-user applications that consume the AI services is out of scope. The project focuses on the backend deployment and management of AI agents.

Breakdown

TBD

Project Timeline

TBD

Resource Allocation

TBD

Risk Management

TBD

Communication Plan

TBD

Quality Assurance

TBD

Documents

TBD