2025-04-29
Agenda
Release 1.2 Review: Work stream 3: SPEAR (Scalable and Performant Edge Agent Runtime), 15 min
Release 1.2 Review: Work stream 4: Federated Learning with Edge Lake, @Roy Shadmon, 15 min
Release 1.2 Review: Work stream 6: Physical AI, 15 min
Participants
@Wilson Wang @Chun Liu @Yike Gong @Mengyu Hang @Ning Jiang @Jianwen Pi @Jiayi Song @Lance Lim @Qingkun Li @Mo Li @Tina Tsou @Wenhui Zhang @Hao Xie @Xun Chen @Ye Xu @Ying Wang @Yizheng Jiao @Zixiong Zhang @Akram Sheriff @anfernee.guan @Ashwanth Gnanavelu @Yona @Jeff Brower @Jenny Yang @李博睿 @刘秉伟 @LU CHENG @Rachel Roumeliotis @Roy Shadmon @Ruoyu Ying @Sam Han @Shunli @Suhaas Teja Vijjagiri @Sumeet Solanki @TangQi @Tom Qin @Victor Lu @万里 @Yin Li @Yu Wang - AIG
Recording
https://bytedance.us.larkoffice.com/minutes/obus68656vo3rtp2wfn553rl?from=from_copylink
Summary
The meeting covered the progress and details of several work streams and projects; the main topics included:
Work Stream 3: Integrated with Appeal, added stream support and string test cases, provided REST API for OKR.
Work Stream 4: Presented federated learning on Edge Lake, explained its working process and architecture, including edge node setup and data management.
Work Stream 6: Discussed the Teddy Bear Project for autism, updated the wiki page, and worked on improving the data flow diagram.
Edge Lake Security: Addressed authentication and authorization issues in the edge computing environment, such as public-private key encryption and the use of query nodes.
API and Gateway: Considered how to handle API calls and security in a distributed edge computing setup without a central API gateway.
Chapters
01:20 Discussion on Project Progress, Ideas and Release 1.2 Review
This section begins with attendees arriving and discussing absentees. They then talk about AI projects, like the Teddy Bear Project needing training. Jeff mentions an R&D project on weight addressable memory. After that, they move to the meeting agenda, starting with work stream 3 of the release 1.2 review, where Wilson Wang shares updates. Finally, they prepare to start the review of work stream 4, with Roy Shadmon set to present on federated learning with Edge Lake.
23:16 Edge Lake: A Federated Learning Platform for Decentralized Model Training
This section focuses on Edge Lake's federated learning platform. It explains how data remains distributed at the edge rather than being moved to the cloud for model training. Federated learning and its benefits, such as data privacy and efficiency, are described. Adoption challenges are noted, though Roy states the platform addresses them. The process of running a training task on the platform is detailed: after the user submits the initial training application, the rest is automated. The platform is agnostic to ML libraries and supports various hardware configurations.
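The workflow described above, where each edge node trains on its own private data and only model weights are aggregated, can be sketched with a minimal federated-averaging loop. The function names and the toy update rule are illustrative assumptions, not the Edge Lake API:

```python
# Minimal sketch of federated averaging: each edge node updates the model
# on its local data, and an aggregator averages the resulting weights.
# Raw data never leaves the node; only weights are exchanged.

def local_update(weights, data, lr=0.1):
    """Toy local training step: nudge each weight toward the node's data mean."""
    mean = sum(data) / len(data)
    return [w + lr * (mean - w) for w in weights]

def federated_average(node_weights):
    """Aggregator step: element-wise mean of all nodes' weight vectors."""
    n = len(node_weights)
    return [sum(ws) / n for ws in zip(*node_weights)]

# Three edge nodes, each holding private local data.
global_weights = [0.0, 0.0]
node_data = [[1.0, 3.0], [2.0, 4.0], [0.0, 2.0]]

for _ in range(5):  # five communication rounds
    updates = [local_update(global_weights, d) for d in node_data]
    global_weights = federated_average(updates)
```

In a real deployment the aggregation location is itself a design choice (a point Victor raises below), and per-node contributions can be weighted by data volume or device capability.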
31:52 Victor Lu's Questions about Edge Lake Platform and Roy Shadmon's Answers
This section features Victor Lu asking multiple questions about Roy Shadmon's platform. Victor inquires about different aggregation levels, how to differentiate device configurations, security identification mechanisms, and how other projects interact with Edge Lake. Roy responds to each query, explaining choices for aggregation location, how to configure device contributions, security via public and private keys, and that Edge Lake supports various APIs for communication. He also offers a demo upon request.
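Roy's answer on security, nodes identified via public and private keys, can be illustrated with a textbook sign-and-verify sketch. This uses deliberately tiny RSA numbers for readability and is in no way the Edge Lake implementation; real systems use vetted cryptographic libraries and full-size keys:

```python
# Toy public/private-key authentication: a node signs a request with its
# private key, and a query node verifies it with the node's public key.
# Textbook RSA with tiny parameters -- for illustration only.

import hashlib

p, q = 61, 53
n = p * q          # modulus (3233)
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % lcm(p-1, q-1) == 1

def sign(message: bytes) -> int:
    """Sign a message digest with the private key d."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verify a signature with the public key (e, n)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"node-123:query")
ok = verify(b"node-123:query", sig)
```

A tampered message would produce a different digest and fail verification, which is how a query node can reject requests from unauthenticated peers.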
44:24 Discussion about Link Sharing and Inference in Federated Learning Platform on GitHub
This section begins with Roy asking Tina to put a link in the chat, which Tina does, indicating it's on the wiki page on Confluence. Jeff then has questions. He shares his exploration of the Edge Lake repository on GitHub, asking where the inference is, as he can't find it. Roy explains that the federated learning platform isn't fully open source yet; it has only been released on request. Jeff acknowledges this, understanding that while much is open source, some parts at a higher level may not yet be available.
46:09 Discussion on Handling Model Training and Inference in Work Streams
This section focuses on how model training is handled on the inference side. Jeff Brower asks Roy Shadmon about it, especially regarding the need for standardization. Roy explains that the training application file published by the user must contain key functions: the training process, including its inputs, outputs, and flow, and the inference process. These can be customized. He adds that users only need to define this file, and Edge Lake takes care of other aspects like networking. Then Tina Tsou indicates they may move on to work stream 6.
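The division of labor Roy describes, where the user defines the training and inference entry points in one application file and the platform handles networking and orchestration, might look like the following. The function names, signatures, and toy model are hypothetical, not the Edge Lake specification:

```python
# Hypothetical shape of a user-published training application file:
# the user defines train() and inference(); the platform invokes them
# and manages distribution, networking, and aggregation.

def train(model_state, local_batch):
    """User-defined training process: inputs, outputs, and flow."""
    lr = 0.01
    # Toy gradient: pull the weight toward the local batch mean.
    grad = sum(x - model_state["w"] for x in local_batch) / len(local_batch)
    return {"w": model_state["w"] + lr * grad}

def inference(model_state, sample):
    """User-defined inference process, kept in the same file."""
    return model_state["w"] * sample

# The platform would drive these calls; here we exercise them directly.
state = {"w": 0.0}
for _ in range(3):
    state = train(state, [1.0, 2.0, 3.0])
prediction = inference(state, 10.0)
```

The appeal of this contract is that standardization stays minimal: only the file's entry points are fixed, while the training logic inside them remains fully customizable.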
48:38 Discussion on Work Stream 6, Teddy Bear Project, Edge Lake Security and More
This section covers multiple topics. Jeff has no updates on Work Stream 6 except robotics. The Teddy Bear Project may be added to the physical AI work stream 6. Jeff is improving Hashem's data flow diagram. Victor raises questions about API security in edge computing, to which Roy offers solutions. Tina and Jeff discuss getting Noee and Hashem involved in the Teddy Bear Project via Slack. Finally, there is some audio difficulty while discussing the vote on release 1.2.
01:09:17 Meeting Concluded