Apple's Acquisition of Xnor.ai Aims to Deliver TinyML to Edge Devices

The News: Last week, Apple was reported to have acquired Xnor.ai, a Seattle startup specializing in low-power, edge-based artificial intelligence (AI) tools, no doubt aiming to deliver TinyML to edge devices. Spun off from the Allen Institute for Artificial Intelligence, the three-year-old startup embeds AI on the edge, enabling facial recognition, natural language processing, augmented reality, and other ML-driven capabilities to be executed on low-power devices rather than relying on the cloud. (TechCrunch)
Analyst Take: Developers of AI applications for edge deployment are doing their work in a growing range of frameworks and deploying their models to myriad hardware, software, and cloud environments. This complicates the task of making sure that each new AI model is optimized for fast inferencing on its target platform, a burden that has traditionally required manual tuning. That’s why Apple’s acquisition of Xnor.ai was big news.
Over the past several years, DevOps tools have come to market to ensure that the toolchain automatically optimizes AI models for fast, efficient edge execution without significantly compromising model accuracy. Xnor.ai offers one such product portfolio. The startup provides technology that makes AI more efficient by allowing data-driven machine learning (ML), deep learning (DL), and other AI models to run directly on resource-constrained edge devices, including smartphones, Internet of Things (IoT) endpoints, and embedded microcontrollers, without relying on data centers or network connectivity.
Apple Optimizes AI for Edge Deployment
Apple’s acquisition of Xnor.ai takes Apple deeply into the “TinyML” revolution. This refers to a wave of new approaches that enable on-device AI workloads to be executed by compact runtimes and libraries installed on ultra-low-power, resource-constrained edge devices. These approaches are essential because, on such devices, many chip-level AI operations (such as the calculations used in training and inferencing) must be performed serially rather than in parallel, which is very time-consuming. These are also computationally expensive processes that rapidly drain device batteries. The usual workaround, uploading data to be processed by AI running in a cloud data center, introduces its own latencies and may therefore be a non-starter for performance-sensitive AI apps at the edge, such as interactive gaming.
Xnor.ai addresses these requirements by replacing AI models’ complex floating-point mathematical operations with simpler, rougher, less precise binary equivalents—the XNOR and bit-counting operations from which the company takes its name. Xnor.ai’s approach can boost the speed and efficiency at which AI models run by several orders of magnitude. Its technology enables fast AI models to run on edge devices for hours, leveraging only a single CPU core without appreciably draining device batteries. It achieves a trade-off between the efficiency and accuracy of the AI models and ensures that real-time device-level calculations stay within acceptable confidence levels.
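The core trick, described in the XNOR-Net research from which Xnor.ai emerged, can be illustrated in a few lines. The sketch below is a simplified toy, not Xnor.ai’s actual code: it shows how a dot product over real-valued vectors (many multiplies and adds) can be approximated by binarizing each vector to +1/-1, packing the signs into bitmasks, and then replacing all the multiplications with a single XNOR plus a bit count.

```python
# Toy sketch of an XNOR-Net-style binary dot product.
# Illustrative only; this is not Xnor.ai's implementation.

def binarize(x):
    """Map each real-valued element to +1 or -1 (its sign)."""
    return [1 if v >= 0 else -1 for v in x]

def to_bits(signs):
    """Pack a list of +1/-1 values into an integer bitmask, 1 bit per element."""
    mask = 0
    for i, s in enumerate(signs):
        if s == 1:
            mask |= 1 << i
    return mask

def xnor_dot(a_bits, b_bits, n):
    """Binary dot product: XNOR the masks, count agreeing bits, rescale.
    agreements - disagreements = 2 * popcount - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # mask off Python's infinite sign bits
    popcount = bin(xnor).count("1")
    return 2 * popcount - n

a = [0.9, -0.3, 0.5, -0.8]
b = [0.7, 0.2, -0.4, -0.6]
n = len(a)

# Full precision: n multiplies and n-1 adds.
exact = sum(x * y for x, y in zip(a, b))

# Binarized: one XNOR and one bit count, regardless of n.
approx = xnor_dot(to_bits(binarize(a)), to_bits(binarize(b)), n)
```

Because an XNOR and a population count are among the cheapest operations a CPU can perform, this substitution is what lets a binarized model run on a single low-power core; the cost is the loss of precision the article describes, which the production technique compensates for with learned scaling factors.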
Xnor.ai’s approach greatly reduces the CPU computational workloads typically associated with edge-based AI functions such as object recognition, photo tagging, and speech recognition and synthesis. The Apple acquisition of Xnor.ai suggests that AI-enabled runtimes and libraries could become standard features in future iPhones, Apple Watches, Apple TVs, and other devices.
Xnor.ai Helps Apple Field High-Performance AI for Mobile, Embedded, and IoT Platforms
Most likely, Apple will leverage Xnor.ai technology in future versions of Siri, achieving performance gains in natural language understanding and generation. In this way, Xnor.ai technology will be a necessary—albeit far from sufficient—tool to help Apple catch up with Amazon Alexa and Google Assistant in the edge-based virtual assistant market.
Bringing Xnor.ai into its product portfolio also provides Apple with an edge app development tool geared for a wide range of programmers, not just those who are knowledgeable about AI, DL, and ML.
Xnor.ai offers the AI2Go developer SDK, a self-service platform that allows programmers to easily drop AI-centric code and data libraries into device-based apps. The SDK provides a unified abstraction layer for model building, compilation, and training that frees developers from having to worry about target-device CPUs and AI accelerators.
In addition, Apple may use Xnor.ai’s Bundles to address industry-specific edge-AI opportunities. This refers to a family of industry-specific binary modules that each contain a domain-focused AI model and an inference engine. Currently available Bundles include modules for AI applications in smart home, retail, and automotive, with the relevant models pre-trained and optimized for those use cases. Bundles allow domain models to run on-device and be programmed with a few lines of implementation code.
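The “few lines of implementation code” pattern described above can be sketched as follows. This is a hypothetical illustration of the bundle concept (a pre-trained domain model packaged with its on-device inference engine); the names `load_bundle`, `Bundle`, and `infer` are invented for this sketch and are not the actual AI2Go/Bundles API.

```python
# Hypothetical sketch of the bundle pattern: a domain-focused model plus an
# inference engine, callable from a few lines of application code.
# Names are illustrative only, NOT Xnor.ai's real API.

class Bundle:
    """A pre-trained, device-optimized domain model with its inference engine."""

    def __init__(self, domain):
        self.domain = domain  # e.g. "smart-home", "retail", "automotive"

    def infer(self, frame):
        # A real bundle would run the optimized model on-device here;
        # this stub simply returns an empty list of detections.
        return []

def load_bundle(domain):
    """Load the pre-trained bundle for a given industry domain."""
    return Bundle(domain)

# Application code: on-device, cloud-free inference in a few lines.
bundle = load_bundle("retail")
detections = bundle.infer(frame=b"\x00" * (640 * 480))  # raw camera frame
```

The design point the article is making is that the developer never touches the model itself: domain expertise is packaged into the bundle, and the application supplies only input data and a handful of integration lines.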
Furthermore, the Xnor.ai acquisition gives Apple an in-house AI edge-device development team. Xnor.ai has developed a standalone AI chip capable of running for years on solar power or a coin-sized battery. It has also developed an AI-enabled device that can autonomously monitor grocery shelves.
The acquired technology will allow Apple devices to operate independently of the cloud and thereby give the company strategic leverage over such cloud titans as AWS, Microsoft Azure, and Google Cloud Platform.
Last but not least, Xnor.ai’s tools will help Apple address compliance concerns associated with safeguarding data privacy on edge devices. Xnor.ai technology keeps AI data secure on mobile devices rather than sending it to the cloud. It does this while maintaining acceptable performance and accuracy in edge-based AI apps.
Futurum Recommendations for Apple
Futurum recommends that Apple follow up the Xnor.ai acquisition by acquiring a family of tools for lifecycle management of “TinyML” DevOps workflows. This will be a critically important portfolio for Apple to build as the edge AI space bursts wide open in the coming 5G era.
Apple should explore acquiring DevOps tools to support management of datasets, development of algorithms, governance of model versions, and deployment and monitoring of device-optimized edge-AI models and code.
It would behoove Apple to seek out complementary partnerships in the TinyML ecosystem. One vendor that would be useful for Apple to explore partnering with, licensing from, or acquiring outright is Deeplite, whose first-to-market “neural architecture search” technology I discussed recently in my article “CES 2020: Consumer-Facing Opportunities Accelerate Evolution of the AI DevOps Toolchain.” Already, AWS has an open-source offering, AutoGluon (more on that in the article below), that can support an equivalent automation feature for making AI neural networks smaller, faster, and more energy efficient with minimal accuracy degradation.
To further its edge-facing TinyML capabilities, Apple should also explore relationships with several other startups discussed in that recent CES post (linked above). At that event, SiFive, Inc. and CEVA, Inc. announced that they are partnering to deliver systems-on-chip for a wide range of domain-specific AI applications on edge devices, spanning smart home, automotive, robotics, security, augmented reality, industrial, and IoT use cases. Finally, SensiML provides a toolkit for end-to-end development of on-device AI applications, covering data collection, labeling, automatic algorithm generation, and testing.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.