Agenda
AI Agent platform in Gaming (Anfernee Guan, Wilson Wang)
Discuss usage of vivGrid to host OPEA (Ruoyu Ying, C.C. Fan, Yona Cao), 15 min
Release 1 readiness checking
Recording
https://bytedance.us.larkoffice.com/minutes/obusf8xcy5ln83s9q56zk88g?from=from_copylink
Summary
The meeting discussed various agent implementations, resource requirements, hosting options, etc. The main contents include:
Updates on AI Agent Platform for Gaming
NPC Agent Implementation: Planning to implement an NPC agent for the AI agent platform starting next year. It can help with player tasks and take advantage of the gaming platform's interfaces to complete custom tasks automatically.
Forum Comment Agent: People are already using similar agents for making comments. The platform can also handle this kind of task, and it is more straightforward than the NPC agent.
Further Discussions: Will have further discussions with Anthony to explore other possible agents for the platform.
Hosting of OPEA and Integration Work
Resource Requirements for OPEA: OPEA deployment may need GPUs, Intel or AMD CPUs, a container platform such as Docker or Kubernetes, and access to microservices with embedded models (such as a Llama model) through a REST API.
Search for Hosting Environment: The team is trying to identify a place to host OPEA and Spear for the integration work, and exploring whether existing resources can be utilized.
Uncertainty about vivGrid Integration: There is confusion regarding whether the work is an integration of vivGrid and OPEA or just hosting OPEA.
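The resource requirements above mention accessing OPEA microservices with embedded models through a REST API. As a rough sketch of what such a client call might look like (the endpoint host, port, route, and payload shape are illustrative assumptions, not details confirmed in the meeting):

```python
import json
from urllib import request

# Assumed endpoint for an OPEA-style microservice; the host, port, and
# route are placeholders that depend on the actual deployment.
OPEA_ENDPOINT = "http://localhost:8888/v1/chatqna"

def build_payload(question: str) -> dict:
    """Build a minimal chat-style request body (assumed shape)."""
    return {"messages": question}

def query_service(question: str, endpoint: str = OPEA_ENDPOINT) -> str:
    """POST the question to the microservice and return the raw response text."""
    body = json.dumps(build_payload(question)).encode("utf-8")
    req = request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Any environment that can expose such an HTTP endpoint (a VM or a cluster, as discussed below) would satisfy this access pattern.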
Providing Environments for OPEA Hosting
OPEA Hosting Target Unclear: The target of hosting OPEA is unknown and needs to be clarified.
Preferred Endpoint Options: An environment such as a cluster or a virtual machine is considered good enough for hosting.
Resource Leverage: OPEA will leverage resources such as hardware and accelerators to bring up services.
Sponsorship or POC: The environment may be provided as sponsorship or for building a POC.
Further Discussion: Details will be discussed with Ruoyu after the meeting, and some help from Tom and Wilson might be needed.
Work Stream Updates and Planning
Geo Distributed Cloud for AI Agent: Working on adding more APIs to the platform and releasing the Gemini API bridge to developers next week. The release planning table still needs to be filled in.
Edge Database: Boris is preparing the risk documents and is expected to finish by next Tuesday.
Edge Gateway: Updated the release planning table. Preparing a presentation for next week's release review.
Work Stream 6: Fukanasan was sick last week but is back. Jeff and he will take care of the update.
Chapters
00:38 Meeting Kickoff
01:48 Update on AI Agent Platform in Gaming from Wilson Wang
This section begins with Tom Qin asking about Guan's presence for the AI Agent platform in gaming. Wilson Wang then offers to give an update. He mentions updating the overall project design, and specifically the AI agent platform part, on the product page. He adds that two agents will be implemented for the AI agent platform, starting with the NPC agent, which can help players achieve tasks by using the gaming platform's interfaces.
04:08 Wilson Wang's Update on AI Agent Platform Features for Industry and Next Steps
This section is about Wilson Wang presenting updates regarding AI agents. They plan to implement an NPC agent starting next year, in discussion with Anthony. Existing agents like the Forum Comment Agent are already in use for making comments, and the AI Agent platform can handle similar tasks more straightforwardly. Tom Qin asks a quick question about Anthony, then inquires about API integration for the two features. With no further comments, they move to the next item.
06:42 Discussion on Hosting OPEA and Spear for Integration Work and Resource Needs
This section is about the integration of OPEA and Spear. Ruoyu Ying is looking for a place to host both for the integration work. They are considering using an environment such as the vivGrid cloud. Tom Qin mentions Cici and Yona. Yona Cao (曹昱) from vivGrid has done some research on OPEA. They have an API bridge and can provide shared GPUs, but they do not know how to deploy or host OPEA and want to know the resources needed.
09:06 Deployment architecture of OPEA and resource sharing for OPEA hosting
This section mainly focuses on the deployment architecture of OPEA. Deploying OPEA requires GPUs and Intel or AMD CPUs, along with a container platform such as Docker or Kubernetes to deploy the OPEA components. Microservices (including ones with embedded models such as a Llama model) can then be accessed via a REST API. Wilson also mentions Tina's query about using resources. Tom Qin notes that a Kubernetes cluster is needed, and Yona Cao offers to provide one but is unsure about the OPEA hosting target.
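Since the discussion points to a Kubernetes cluster with GPU access as the deployment target, a manifest for one OPEA component might look like the following sketch (the image name, namespace, port, and resource counts are illustrative assumptions, not details from the meeting):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opea-llm          # illustrative component name
  namespace: opea         # assumed namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opea-llm
  template:
    metadata:
      labels:
        app: opea-llm
    spec:
      containers:
        - name: llm-serving
          image: example/opea-llm:latest   # placeholder image
          ports:
            - containerPort: 8888          # REST API port (assumed)
          resources:
            limits:
              nvidia.com/gpu: 1            # request one GPU from the cluster
```

A Service or Ingress in front of this Deployment would then expose the REST endpoint mentioned above to the integration work.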
11:14 Discussion on Endpoint Building and Release 1 Readiness Checking
This section mainly focuses on the discussion of building endpoints. They consider using a cluster or a virtual machine provided by the vivGrid cloud for OPEA to bring up LLM or other services, for prototyping or a POC. There is also a question about whether this is sponsorship or for testing. They then move on to the third agenda item: release 1 readiness checking for all work streams and the related tasks and timelines.
14:50 Project Updates and Agenda Discussions in a Meeting
This section mainly focuses on updates regarding the different work streams in the project. Updates include releasing the Gemini API bridge next week in work stream 1 and finishing the risk documents by next Tuesday in work stream 2. Work stream 5 has updated the release planning table and is preparing for the release 1 review next week. For work stream 6, the member who was absent is back, and they know what to do. After discussing all agenda items, the meeting is concluded.