// add more content
...
Components (see the type sketch after this list):
• Client: Initiates requests.
• API Servers: Located in both the data center (DC) and at the edge.
• Metadata Servers: Store and manage metadata.
• Workers: Execute tasks, located in both the DC and at the edge.
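To make the component roles concrete, here is a minimal Go sketch of them as plain types. Every identifier in it (Location, Worker, MetadataServer, APIServer, Peers) is an illustrative assumption; the description above does not name the platform's actual data structures.

```go
package main

import "fmt"

// Location distinguishes where a component runs: in the data center or at the edge.
type Location string

const (
	DC   Location = "data-center"
	Edge Location = "edge"
)

// Worker executes tasks inside a cluster.
type Worker struct {
	ID  string
	Loc Location
}

// MetadataServer stores and manages metadata for its cluster.
type MetadataServer struct {
	store map[string]string
}

// APIServer accepts client requests, consults its metadata server, and
// dispatches tasks to workers; Peers are API servers in other clusters.
type APIServer struct {
	Loc     Location
	Meta    *MetadataServer
	Workers []Worker
	Peers   []*APIServer
}

func main() {
	// One edge cluster and one DC cluster that know about each other.
	edge := &APIServer{Loc: Edge, Meta: &MetadataServer{store: map[string]string{}}, Workers: []Worker{{ID: "w-edge-1", Loc: Edge}}}
	dc := &APIServer{Loc: DC, Meta: &MetadataServer{store: map[string]string{}}, Workers: []Worker{{ID: "w-dc-1", Loc: DC}}}
	edge.Peers = []*APIServer{dc}
	dc.Peers = []*APIServer{edge}
	fmt.Println("edge peers:", len(edge.Peers), "dc peers:", len(dc.Peers))
}
```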
Scenarios (see the decision sketch after this list):
1. Request Relay:
• If the local API server lacks resources, it forwards the request to another cluster (red arrows between API servers).
2. Request Redirect:
• The API server instructs the client to resend the request to a different cluster (gray arrows looping back to the client and then to another API server).
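Both scenarios boil down to a placement decision in the local API server. The Go sketch below contrasts them, assuming a hypothetical hasCapacity() check and a static peer list; neither the capacity signal nor the peer-selection policy is specified above.

```go
package main

import "fmt"

type Request struct{ ID string }

// Cluster stands in for "an API server plus its workers"; capacity is a
// hypothetical stand-in for whatever resource signal the platform uses.
type Cluster struct {
	Name     string
	capacity int
	Peers    []*Cluster
}

func (c *Cluster) hasCapacity() bool { return c.capacity > 0 }

// relay: the local API server forwards the request to a peer cluster itself
// (red arrows between API servers).
func (c *Cluster) relay(r Request) string {
	for _, p := range c.Peers {
		if p.hasCapacity() {
			return fmt.Sprintf("request %s relayed from %s to %s", r.ID, c.Name, p.Name)
		}
	}
	return fmt.Sprintf("request %s: no peer of %s has capacity", r.ID, c.Name)
}

// redirect: the local API server only names a target cluster; the client is
// expected to resend the request there (gray arrows looping back to the client).
func (c *Cluster) redirect() (target *Cluster, ok bool) {
	for _, p := range c.Peers {
		if p.hasCapacity() {
			return p, true
		}
	}
	return nil, false
}

func main() {
	edge := &Cluster{Name: "edge", capacity: 0}
	dc := &Cluster{Name: "dc", capacity: 8}
	edge.Peers = []*Cluster{dc}

	req := Request{ID: "r1"}
	fmt.Println(edge.relay(req))
	if target, ok := edge.redirect(); ok {
		fmt.Printf("client told to resend request %s to %s\n", req.ID, target.Name)
	}
}
```

The usual trade-off applies: relay hides the second cluster from the client but proxies the request through the local API server, while redirect costs the client an extra round trip and then lets it talk to the better-placed cluster directly.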
Interactions (the full request path is sketched after this list):
• Client Requests (Gray Arrows): Clients send requests to API servers.
• Cluster Information Exchange (Red Arrows): API servers exchange cluster information between clusters.
• Metadata Storage (Green Arrows): API servers store and retrieve metadata from the metadata servers.
• Task Processing (Gray Arrows): API servers direct tasks to workers in their own cluster or relay them to other clusters as needed.
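The four interactions can be read as one request path: the client calls an API server, the server records and reads metadata, then runs the task on a local worker or relays it to a peer cluster. The Go sketch below walks that path end to end; the metadata key, the first-free-worker policy, and the single Peer field are illustrative assumptions, not the platform's actual API.

```go
package main

import "fmt"

// MetadataServer: metadata storage and retrieval (green arrows).
type MetadataServer struct{ store map[string]string }

func (m *MetadataServer) Put(k, v string) { m.store[k] = v }
func (m *MetadataServer) Get(k string) (string, bool) {
	v, ok := m.store[k]
	return v, ok
}

// Worker: task execution within a cluster (gray arrows).
type Worker struct{ ID string }

func (w Worker) Run(task string) string { return fmt.Sprintf("worker %s ran %q", w.ID, task) }

// APIServer ties the interactions together for one cluster.
type APIServer struct {
	Name    string
	Meta    *MetadataServer
	Workers []Worker
	Peer    *APIServer // relay target in another cluster (red arrows)
}

// Handle processes one client request (gray arrow from the client).
func (a *APIServer) Handle(task string) string {
	if prev, ok := a.Meta.Get("last-task"); ok { // metadata retrieval
		fmt.Printf("[%s] previous task was %q\n", a.Name, prev)
	}
	a.Meta.Put("last-task", task) // metadata storage
	if len(a.Workers) > 0 {       // task processing on a local worker
		return a.Workers[0].Run(task)
	}
	if a.Peer != nil { // relay to the peer cluster when no local worker is free
		return a.Peer.Handle(task)
	}
	return "no capacity available in any cluster"
}

func main() {
	dc := &APIServer{Name: "dc", Meta: &MetadataServer{store: map[string]string{}}, Workers: []Worker{{ID: "w1"}}}
	edge := &APIServer{Name: "edge", Meta: &MetadataServer{store: map[string]string{}}, Peer: dc}
	fmt.Println(edge.Handle("resize-image")) // the edge cluster has no worker here, so it relays to dc
}
```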
This setup enables efficient request processing and resource utilization by leveraging both edge and cloud resources.
// Shall we change the name to SAICE (Serverless AI enabled Cloud Edge Continuum Platform)?
// TODO: add more detailed diagrams as requested by Borui Li (李博睿)