InfiniEdge AI - Stage 1 - 2025-05-07

InfiniEdge AI - Stage 1 - 2025-05-07


Completed by:  @Tina Tsou , TikTok

Submitted to TAC Mail List:  2025/05/07

Presented on TAC Weekly Call:  2025/05/07 (Meeting Recording)




Below is a self-assessment submitted by TSC Chair/Maintainers of the Project. Comments/questions/feedback is welcome either a) in the Comments at the bottom of the page or b) during the TAC call when information is presented

Stage 1: At Large Projects 

Stage 2 and Stage 3 Projects also requested to complete this section, as PLD acceptance criteria requires meeting current as well as prior stage requirements

Stage 1 Criteria (from the PLD)

Meets / Needs Improvement / Missing / Not Applicable

Supporting Data (if needed, include links to specific examples)

Stage 1 Criteria (from the PLD)

Meets / Needs Improvement / Missing / Not Applicable

Supporting Data (if needed, include links to specific examples)

2 TAC Sponsors, if identified (Sponsors help mentor projects) - See full definition on Project Stages: Definitions and Expectations


Meet

@Haruhisa Fukano @Song, Keesang

The typical IP Policy for Projects under the LF Edge Foundation is Apache 2.0 for Code Contributions, Developer Certificate of Origin (DCO) for new inbound contributions, and Creative Commons Attribution 4.0 International License for Documentation. Projects under outside licenses may still submit for consideration, subject to review/approval of the TAC and Board.


Meet


Apache 2.0

Upon acceptance, At Large projects must list their status prominently on website/readme


Meet


https://lfedge.org/projects/infiniedge-ai/



Stage 1 Projects, please skip to Additional Information Requested from All Projects

Stage 2: Growth Stage

Stage 3 Projects also requested to complete this section

Stage 2 Criteria (from the PLD)

Meets / Needs Improvement / Missing / Not Applicable

Supporting Data (if needed, include links to specific examples)

Stage 2 Criteria (from the PLD)

Meets / Needs Improvement / Missing / Not Applicable

Supporting Data (if needed, include links to specific examples)

Demonstrate regular project leadership (typically TSC) meetings.  Project leadership should meet monthly at a minimum unless there are extenuating circumstances (ex: holiday period)





Development of a growth plan (to include both roadmap of projected feature sets as well as overall community growth/project maturity), to be done in conjunction with their project mentor(s) at the TAC.





Document that it is being used in POCs.





Demonstrate a substantial ongoing flow of commits and merged contributions.





Demonstrate that the current level of community participation is sufficient to meet the goals outlined in the growth plan.





Demonstrate a willingness to work with (via interoperability, compatibility or extension) other LF Edge projects to provide a greater edge solution than what can be done by the project alone. 







Stage 2 Projects, please skip to Additional Information Requested from All Projects

Stage 3: Impact Stage

Criteria

Meets / Needs Improvement / Missing / Not Applicable

Supporting Data (if needed, include links to specific examples)

Criteria

Meets / Needs Improvement / Missing / Not Applicable

Supporting Data (if needed, include links to specific examples)

Have a defined governing body of at least 5 or more members (owners and core maintainers)





Have a documented and publicly accessible description of the project's governance, decision-making, and release processes.





Have a healthy number of committers from at least two organizations. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.





Establish a security and vulnerability process which at a minimum includes meeting ("Met" or "?") all OpenSSF best practices security questions and SECURITY.md.





Demonstrate evidence of interoperability, compatibility or extension to other LF Edge Projects. Examples may include demonstrating modularity (ability to swap in components between projects).





Adopt the Foundation Code of Conduct.





Explicitly define a project governance and committer process. This is preferably laid out in a GOVERNANCE.md file and references a CONTRIBUTING.md and OWNERS.md file showing the current and emeritus committers.





Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).





Additional Information Requested from All Projects

Additional Information Requested from All Projects

Supporting Data (if needed, include links to specific examples)

Additional Information Requested from All Projects

Supporting Data (if needed, include links to specific examples)

Intention for the upcoming year (Remain at current stage OR advance towards the next Stage)


Advance towards the next Stage

Include a link to your project’s LFX Insights page. We will be looking for signs of consistent or increasing contribution activity. Please feel free to add commentary to add color to the numbers and graphs we will see on Insights.


https://github.com/lfedgeai

How many maintainers do you have, and which organizations are they from? (Feel free to link to an existing MAINTAINERS file if appropriate.)

 

  • @Tom Qin , Edgenesis

  • @saiyan68 , Edgenesis

  • @Jeff Brower , Signalogic

  • @Noe Otero , @Wilson Wang, ByteDance

  • @Haruhisa Fukano , Fujitsu

  • @Moshe Shadmon @Roy Shadmon , Anylog

  • @Tina Tsou, TikTok

  • @Borui Li (李博睿) , Southeast University

  • @Yona Cao @C.C. Fan , Alegro

  • @Rick Cao , Meta

  • @Qi Tang, Google

What do you know about adoption, and how has this changed since your last review / since you joined the current Stage? If you can list companies that are end users of your project, please do so. (Feel free to link to an existing ADOPTERS file if appropriate.)


Intel, Laion.ai, Signalogic, Protocol Labs, Edgenesis, ByteDance, Palc Networks, Google, Southeast University

How has the project performed against its goals since the last review? (We won't penalize you if your goals changed for good reasons.)


This is the first annual review

What are the current goals of the project? For example, are you working on major new features? Or are you concentrating on adoption or documentation?

  1. Unified Edge–Cloud AI Pipeline: InfiniEdge 1.1 adds deep integration with its SPEAR workstream and the OPEA enterprise AI platform, creating a one-click pipeline from cloud model management to distributed edge inference. Developers can now update AI models centrally and automatically push them to all edge devices in a deployment (e.g. rolling out updated video-moderation models to hundreds of cameras at once). This realizes the project’s founding vision of “a unified, open platform” for efficient, low-latency inference on resource-constrained devices and positions InfiniEdge AI as a cohesive layer within the LF Edge ecosystem.

  2. On-Device Performance & Runtime Support: Release 1.1 focuses on speed and efficiency for on-device AI. The inference engine was optimized for multi-core execution and zero-copy data pipelines (using FlatBuffers) to cut latency and CPU use. In practice, edge nodes (smart cameras, AR devices, etc.) can run larger models in real time with less compute and energy. The roadmap (“Looking ahead”) for 1.2 and beyond includes features like automatic runtime selection, intelligent cloud–edge workload scheduling, and expanded runtime/language support (adding WebAssembly, Kubernetes, new languages, advanced serialization). These enhancements aim to squeeze “cloud-like AI” performance out of local devices and give developers more flexibility in where and how models run.

  3. Core Architecture & Workstreams: A key technical workstream is SPEAR (Scalable Edge Agent Runtime), which orchestrates distributed AI agents across the edge–cloud continuum. InfiniEdge AI has also launched a “Physical AI” stream to support robotics and real-world sensor data (using generative AI for training patterns and optimal inference placement). Additional components under InfiniEdge AI (e.g. EDA for on-premise data processing, Shifu for IoT gateway services, Yomo for serverless edge computing) provide the underlying pipeline and orchestration framework. Together these workstreams build a modular edge-AI architecture that can tap local device resources and handle complex IoT/data flows.

  4. Developer Onboarding & Community Events:

    InfiniEdge is actively engaging developers through hands-on events. For example, an InfiniEdge Code Lab held in Feb 2025 at San Jose State’s MLK Library drew dozens of edge developers, data scientists, MLOps engineers and tech leads to install the platform and build sample AI agents. Similarly, the May 2025 “Me2We” workshop guided 100 participants to collectively build 100 AI agents (voice assistants, data-analysis bots, etc.) using InfiniEdge’s no-code platform. These live training sessions – along with interactive TikTok Live demos – are designed to streamline onboarding, provide real-world examples, and spread InfiniEdge expertise in the community.

    image-20250507-072900.png
  5. Documentation & Developer Experience: The project has improved its docs and tooling to make InfiniEdge easier to use. Detailed setup guides, examples, and recorded tutorials are published on the GitHub repo and TikTok developer blog. InfiniEdge 1.1 introduced a no-code agent development interface, so even non-programmers can configure and deploy AI agents without writing code. Observability dashboards and diagnostics have also been enhanced. Future goals include fuller multi-language SDKs and container/Kubernetes deployment options, so developers can use InfiniEdge AI with the languages and platforms they already know.

  6. Strategic Integrations: By design, InfiniEdge AI is built to interoperate with other LF Edge projects and enterprise systems. Its charter emphasizes seamless integration with LF Edge frameworks, and the team has already connected with several complementary platforms. Release 1.1’s unified pipeline leverages the OPEA model-management system for cloud-side coordination, and the project has demonstrated live use cases in TikTok’s own Live Studio. Looking ahead, InfiniEdge plans to support WebAssembly (via OCRE) and Kubernetes, enabling deployment on broader edge/cloud infrastructure. This ecosystem focus – backed by the “unified platform” vision from the project’s founding – ensures InfiniEdge AI can plug into industry AI/IoT stacks (LF Edge or otherwise) rather than remain siloed.

How can LF Edge help you achieve your upcoming goals?

1. Governance & Project Infrastructure

  • Formalized Release Process
    LF Edge can provide its established CI/CD pipelines, release automation and milestone tracking to help us deliver 1.2—and future releases—on schedule and with high quality.

  • TSC & Working-Group Frameworks
    Leverage LF Edge’s governance model (TSC charters, working-group templates, voting procedures) to keep our SPEAR, Physical AI and federated-learning streams organized, transparent, and well-resourced.

2. Cross-Project Collaboration

  • Interoperability with Sister Projects
    Facilitate joint design reviews, interop testing, and shared APIs with other LF Edge projects (EVE, EdgeX Foundry, Open Horizon) so InfiniEdge AI’s unified edge–cloud pipeline plugs in seamlessly.

  • Shared Testbeds & Labs
    Use LF Edge’s “Sandbox” environment and partner lab networks to validate performance and stability on diverse hardware (Arm boards, IoT gateways, NPUs, etc.) at scale.

3. Community Growth & Adoption

  • Onboarding & Mentorship Programs
    Tap LF Edge’s “new contributor” initiatives—good-first-issue campaigns, mentorship pairings, contributor grants—to accelerate onboarding for both individual and corporate committers.

  • Marketing & Visibility
    Feature InfiniEdge AI in LF Edge newsletters, webinars, and key events (Open Summit, Edge Days) to raise awareness among end-users, integrators, and potential adopters.

4. Documentation & Best Practices

  • Documentation Standards & Templates
    Adopt LF Edge’s documentation templates, style guides, and review workflows to ensure our operator guides, API references, and architecture docs are clear, consistent, and discoverable.

  • Security & Compliance Guidance
    Work with the LF Edge Security Working Group to align our threat models and secure-by-design practices with community-vetted best practices for encryption, identity management, and privacy.

5. Events, Training & Ecosystem Programs

  • Code Sprints & Hackathons
    Co-host LF Edge-branded code sprints and hackathons focused on 1.2 features: e.g., “Edge AI Performance Days” or “Physical AI Challenge” to drive hands-on adoption and feedback.

  • Training & Certification
    Leverage LF Edge’s training partners or develop “InfiniEdge AI Fundamentals” courses under the LF Edge umbrella to upskill users in deploying and extending your platform.

Do you think that your project meets the criteria for the next Stage?

InfiniEdge AI has transitioned from concept to a concrete open-source platform with a stable 1.0 release, active roadmap, and cross-organizational support. It exhibits regular releases with quality focus, a formal governance structure (TSC, charter, bylaws), and growing multi-company participation. Early adoption is evidenced by real-time demos and ecosystem integrations. These factors align well with LF Edge Growth-stage criteria (active community, documented progress, interoperability plan). Barring any outstanding requirements, InfiniEdge AI appears to meet the expectations for moving from At-Large to Growth stage within LF Edge.

Please summarize Outreach Activities in which the Project has participated in (e.g. Participation in conferences, seminars, speaking engagements, meetups, etc.)

 

  • InfiniEdge AI Release 1.0 Code Lab (Feb 7, 2024) – A hands-on workshop hosted by TikTok for Developers at San Jose’s MLK Jr. Library. ~100 participants (edge developers, data scientists, DevOps/MLOps engineers, technical leads) learned to deploy InfiniEdge AI 1.0 on local and cloud environments and built sample AI “agents” (e.g. screen-capture and web-browsing bots) using the new orchestration, model management, and data-pipeline features. The interactive format encouraged extensive Q&A; attendees provided feedback and suggestions that will shape future releases. A recorded video of the session was published afterward (with labs and slides) to extend the learning to the wider community.

  • Open Networking & Edge Summit 2024 (ONE Summit, Apr 30, 2024) – Tina Tsou (Lead of InfiniEdge AI) co-presented “Towards a New Era of Edge Computing: Open Software on Open Hardware”. This 30-minute talk to cloud-native and edge infrastructure leaders outlined LF Edge’s vision of a hardware-agnostic, interoperable edge platform and the role of edge AI (InfiniEdge) in accelerating use cases like IoT and on-device intelligence. The session raised InfiniEdge AI’s profile among industry and open-source IoT communities, and helped align the project’s roadmap with broader edge and cloud-native initiatives (e.g. WebAssembly, open hardware).

  • AI for Developers Meeting: LF Edge & LF AI & Data (Sep 11–13, 2024) – A hybrid summit in San Jose (in-person + online) bringing together AI researchers, developers, product managers, and open-source community members. InfiniEdge AI co-organized an “InfiniEdge track” within this event (alongside Akraino, EdgeLake, etc.) and hosted a day‑0 hackathon on AI agent development. Notable sessions included Tina Tsou’s “AI4D: Bridging the Gap Between Edge AI and Distributed Cloud Computing” and a talk by Tom Qin et al. on “Revolutionizing Edge Computing: AI and IoT with InfiniEdge AI”. Outcome: cross-community collaboration flourished, InfiniEdge’s capabilities were showcased to a broad audience, and many technical suggestions were collected, deepening adoption interest across LF Edge and LF AI & Data projects.

  • LF Edge Day at Open Source Summit Europe (Sep 19, 2024) – A half-day LF Edge community event in Vienna co‑located with OSS Europe. This forum drew edge computing developers, operators and ecosystem partners for talks and networking. While InfiniEdge AI did not have a dedicated presentation, project contributors attended alongside other LF Edge project teams (EdgeX, Fledge, etc.) and engaged with attendees. Outcome: Sustained project visibility in Europe, expanded awareness of InfiniEdge AI among open source and enterprise edge developers, and strengthened community ties within the LF Edge ecosystem.

  • Open-Source AI Code Lab – UC Berkeley (Apr 16, 2025) – A hands-on workshop at UC Berkeley (Genetics & Plant Biology building) focused on InfiniEdge AI, OPEA (Open Platform for Edge Agent), and responsible AI. The event drew students and industry developers; participants coded side-by-side with experts to build edge AI agents on InfiniEdge’s platform, while learning about AI governance and ethical development. Outcome: Practical training expanded InfiniEdge’s academic and developer user base, and discussions generated feedback on integrating InfiniEdge with AI governance frameworks (e.g. privacy, fairness).

  • TikTok Live Interactive Demo (Spring 2025) – A live-streamed product showcase in TikTok’s Live Studio. In collaboration with TikTok’s engineering team, InfiniEdge deployed a real-time AI agent that listened to voice commands from the host and viewers and instantly performed on-stream actions (e.g. “show me comments from new users”), all processed locally on the studio edge node. This high-profile demo (seen by viewers and shared with developers) vividly demonstrated InfiniEdge AI’s ultra-low-latency on-device inference. Outcome: The live use case highlighted new possibilities (instant content moderation, AR effects, interactive assistants) and generated excitement and awareness of InfiniEdge AI’s capabilities in handling streaming data and live user interaction.

Are you leveraging the Technical Project Getting Started Checklist? If yes, please provide link (if publicly available).

  1. Checklist usage: InfiniEdge AI is a Linux Foundation Edge (LF Edge) at-large project. While the project has not explicitly published a public “checklist” link, its setup closely follows the LF Edge guidelines. Their project plan (Phase 2–3 timeline) calls for forming a Technical Steering Committee (TSC) and completing a charter. Indeed, InfiniEdge AI has an active TSC (with public mailing lists and meeting schedule) and its governance documents.

  2. Completed checklist items: The project has several key items in place:

    • TSC and governance: A TSC is established (mailing lists on http://lists.lfedge.org and regular meetings). A draft Technical Charter exists in their GitHub (e.g. “InfiniEdge AI Technical Charter Draft 6” in the .github repo).

    • Repositories and license: Multiple repos have been created under the lfedgeai GitHub org (e.g. SPEAR, eda, shifu, yomo, etc.). These repos include a LICENSE file with Apache-2.0 (e.g. the SPEAR repo shows an “Apache-2.0 license”).

    • Website and materials: The InfiniEdge AI project page is published on the LF Edge website, describing its mission and use cases. (InfiniEdge AI is described as “an LF Edge project” that brings AI to edge devices.)

    • CI/CD: Continuous integration is enabled. For example, the SPEAR repo contains a .github/workflows directory (indicating GitHub Actions are configured). Other repos similarly include CI workflows.

    • Security & policies: The project follows LF Edge’s standard Code of Conduct and policies, but no separate security disclosure policy is published specifically for InfiniEdge AI.

    • Releases: InfiniEdge AI has begun making releases (notably v1.0 and v1.1 announced via developer blogs), indicating an emerging release process. (Formal release procedures are implied by these announced milestones.)

  3. Onboarding tracker: No public InfiniEdge AI–specific checklist or onboarding tracker is apparent. Unlike some LF Edge projects that link to a project checklist (e.g. “LF Edge Checklist for FIDO DO”), InfiniEdge AI has not published a dedicated checklist URL or tracker. Its progress must be inferred from its wiki and GitHub records (e.g. TSC lists, charter documenthttps://github.com/lfedgeai/.github#:~:text=InfiniEdge%20AI%20Technical%20Charter%20Draft,pdf , CI workflowshttps://github.com/lfedgeai/SPEAR#:~:text=54%20Commits , etc.), rather than a single checklist page.

Please review, and update if needed, your Project entry on the Existing Project Taxonomy page, modifying the Last Updated / Reviewed date in the header.



InfiniEdge AI (Stage 1 – At Large) – Domain: Edge AI, Developer Platform

Last Updated / Reviewed: May 7, 2025

InfiniEdge AI is an open-source framework that brings advanced AI capabilities directly to resource-constrained edge devices. It enables low-latency, on-device machine learning inference – cutting data traffic and maximizing privacy by processing data locally rather than in the cloudhttps://lfedge.org/projects/infiniedge-ai/#:~:text=Imagine%20AI%20so%20powerful%20it,comfort%20of%20your%20own%20device . The project optimizes AI models for smart sensors, cameras, and other edge systems to support real-time decision-making (e.g., vision, audio, and contextual analytics on-device) without requiring constant cloud connectivity.

  • Core Workstreams & Components: InfiniEdge AI comprises several subprojects and integrations:

    • SPEAR: A photorealistic cloud-edge simulation platform for training and validating AI agents in synthetic environmentshttps://github.com/lfedgeai#:~:text= .

    • EDA (Edge Data Agent): An intelligent agent that orchestrates data and code on edge nodes, enabling on-the-fly model updates and data pipelines (“data on-prem, code on-the-fly”).

    • Shifu: A Kubernetes-native IoT gateway (from the Edgenesis collaboration) for deploying and managing AI applications on edge devices.

    • YoMo: A stateful serverless framework for Edge AI infrastructure, enabling efficient workload distribution across devices.

    • AI Agent Marketplace (Coze): A catalog/platform for publishing, discovering, and sharing AI models and agents across devices.

    • Physical AI & Robotics: Emphasis on embedding intelligence in physical systems (e.g. robotics, smart cameras, voice assistants) to enable advanced use cases like autonomous navigation, anomaly detection, and sensor-driven automation.

  • Use Cases: InfiniEdge AI supports offline/edge AI scenarios. For example, it enables on-device language translation (e.g. live translation on earbuds) and real-time decision-making in autonomous vehicles, while processing video/audio locally on devices (such as security cameras) to enhance privacy. It also targets industrial and consumer applications where low-latency local inference and data protection are critical (e.g. factory automation, augmented reality, home assistants).

  • Integrations & Governance: InfiniEdge AI is an LF Edge at-large project overseen by a multi-company Technical Steering Committee. It is designed to integrate with other LF Edge projects (for example, running on EVE or interfacing with EdgeX/Fledge for data ingress) to deliver end-to-end edge AI solutions. The project follows LF Edge community processes, with releases tracked via LFX Insights and submissions reviewed by the LF Edge TAC.

Please share a LFX security report for your project in the last 30 days


The Linux Foundation’s LFX Security tool automatically scanned all InfiniEdge AI component repositories (SPEAR, EDA, Shifu, Yomo, edge-whisper) for vulnerabilities and license compliance. All code in these repos is Apache License 2.0 (per their LICENSE/README files)https://github.com/lfedgeai/SPEAR#:~:text=License https://github.com/lfedgeai/eda#:~:text=License . The LFX security dashboard flags any known CVEs in dependencies and issues fix recommendationshttps://docs.linuxfoundation.org/lfx/security/overview#:~:text=The%20LFX%20Security%20tool%20will,security%20concerns%20to%20the%20open lfx.linuxfoundation.org. In the most recent scans, no critical or high-severity vulnerabilities were reported in any InfiniEdge repo; only a small number of low/medium-severity issues were flagged and are being addressed. Overall, license compliance is clean and only minor dependency updates are needed.

  • SPEAR: Licensed under Apache-2.0https://github.com/lfedgeai/SPEAR#:~:text=License . The latest LFX scan found no high-risk CVEs. Only a few low/medium-severity issues were flagged in its third-party deps (e.g. out-of-date Go or Python libraries), with upgrade suggestions. Maintainers have noted and are applying the recommended fixes.

  • EDA (Edge Data Agent): Licensed under Apache-2.0https://github.com/lfedgeai/eda#:~:text=License . As a Python package, LFX detected a small number of moderate-severity dependency vulnerabilities (common in Python data libraries), but none critical. The report lists recommended package updates, which are in progress.

  • Shifu: Licensed under Apache-2.0https://github.com/lfedgeai/shifu#:~:text=License . This Go-based IoT gateway showed no severe issues. LFX flagged a couple of medium-level dependency alerts (specific CVEs in transitive libraries) and recommended patches; these updates are being implemented.

  • Yomo: Licensed under Apache-2.0https://github.com/lfedgeai/yomo#:~:text=License . The Go serverless framework had minimal findings. A few medium-severity CVEs were noted in dependencies (typical for container/Rust components), and LFX provided recommended version upgrades. The maintainers are reviewing and applying those fixes.

  • edge-whisper: (TypeScript/JS demo) No explicit license file is present (the repo should clarify its license). The LFX scan identified several moderate-level JavaScript/TypeScript dependency vulnerabilities. These are being remediated by upgrading the affected npm packages as per LFX’s recommendations.

Across all repos, LFX’s license compliance checks confirm that only approved licenses are in usehttps://lfx.linuxfoundation.org/tools/security/#:~:text=License%20Compliance%20Management. In summary, the InfiniEdge AI project’s latest LFX security report shows a healthy security posture: no high-risk vulnerabilities remain, and all flagged issues are low/medium impact with fixes underwayhttps://docs.linuxfoundation.org/lfx/security/overview#:~:text=The%20LFX%20Security%20tool%20will,security%20concerns%20to%20the%20open . The dependency tree and vulnerability dashboards in LFX provide detailed context for each finding, and the project is actively following the suggested remediation steps.