CES 2020: Consumer-Facing Opportunities Accelerate Evolution of the AI DevOps Toolchain
by James Kobielus | January 9, 2020

CES is not an enterprise-oriented tech event. But it hosts a wide range of exhibitors and features a fair number of products and technologies that are “dual-use,” in the sense that they have clear applications both in consumer and business domains, as I noted in my recent article CES 2020: It Might As Well Stand for Connected Ecosystems Showcase.

Chief among these versatile technologies is artificial intelligence (AI), which is abundantly on display at CES 2020. One of the most noteworthy trends in the consumer space is the embedding of AI into many products to support natural language processing, predictive analysis, and contextual recommendations. Indeed, it was not hard to find consumer products at CES 2020 that embed such popular AI-based technologies as Amazon Alexa and Google Assistant.

But a wide range of more platform-level AI technologies were also launched and discussed at CES 2020, and many of these had clear dual-use applicability to business applications of AI. In addition to Intel’s announcement of AI-optimized next-generation edge-oriented chipsets on Monday, Futurum Research learned that Arm is designing its Pelion software IP and device-embeddable operating system for a growing range of low-power, high-performance AI apps running on edge devices. In one-on-one discussions with Arm executives, we learned that the vendor has leveraged IP from its recent acquisitions of Stream Technologies and Treasure Data to ingest, store, and manage data to be used in building and training ML models that can execute transparently across CPUs, GPUs, and neural network processing units. It is also making further investments in tools enabling ML models to be dynamically updated on edge devices and support secure, distributed ML computations that span multiple nodes.
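The distributed ML computations that span multiple edge nodes, as Arm described them, can be illustrated with a minimal federated-averaging sketch. This is a generic illustration of the pattern, not Arm's actual tooling or API; the per-node datasets and the one-parameter linear model are hypothetical:

```python
# Federated-averaging sketch: each edge node trains locally, and only
# model weights (never raw data) travel to the central aggregator.

def local_update(weights, node_data, lr=0.1):
    """One gradient-descent step on a 1-D linear model y = w * x."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in node_data) / len(node_data)
    return w - lr * grad

def federated_average(weights_per_node):
    """Aggregate the node models by simple averaging (FedAvg)."""
    return sum(weights_per_node) / len(weights_per_node)

# Hypothetical per-node (x, y) datasets, all roughly following y = 2x.
nodes = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (3.0, 6.2)],
]

w = 0.0  # shared global model parameter
for _ in range(50):  # communication rounds
    w = federated_average([local_update(w, data) for data in nodes])

print(round(w, 1))  # converges near the true slope of 2
```

Each round, every node refines its copy of the model on local data and the aggregator averages the results, which is how a fleet of edge devices can jointly improve a model without centralizing sensitive sensor data.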
But that just scratches the surface of platform-level AI activities at CES 2020. Here are some other noteworthy launches, demonstrations, and discussions:

  • Automation of AI model optimization for target deployments: At its booth at CES, Montreal-based AI startup Deeplite discussed its “neural architecture search” technology for automating the creation of optimized neural-net architectures for the deep learning and machine learning models that go into diverse AI applications. Deeplite’s “Lightweight Intelligence” tool uses neural architecture search algorithms to make deep neural networks smaller, faster, and more energy efficient with minimal accuracy change—and without requiring manual inputs or guidance from scarce, expensive data scientists. A recent press release describes how Deeplite worked with Taiwanese firm Andes Technology to optimize large models for deployment to Andes’ RISC-V hardware. The reinforcement learning engine in Deeplite’s hardware-aware optimizer automatically found, trained, and deployed a new model less than 188KB in size, with only a 1 percent drop in neural-net inferencing accuracy.
  • Domain-optimized on-device AI systems on chip: SiFive, Inc., and CEVA, Inc. announced at CES that they are partnering to deliver systems on chip (SoCs) for a wide range of domain-specific AI applications on edge devices. The vendors are engaging in joint SoC development that combines their respective IP for ultra-low-power on-device inferencing in smart home, automotive, robotics, security, augmented reality, industrial, and IoT applications. They are optimizing SiFive’s RISC-V CPUs and CEVA’s digital signal and neural-network processors to support on-device inferencing workloads such as imaging, computer vision, speech recognition and sensor fusion. CEVA is also supplying a full development platform, compiler, tools, and libraries for development of deep learning applications.
  • Accelerated AI app development for IoT/edge endpoints: SensiML demonstrated its toolkit for easy integration of real-world edge AI inference models into existing IoT platforms. The toolkit provides an end-to-end development platform for data collection, labeling, automated algorithm generation, and testing for on-device AI applications. It supports Arm Cortex-M class and higher microcontroller cores, Intel x86 instruction set processors, and heterogeneous core QuickLogic SoCs and QuickAI platforms with FPGA optimizations. SensiML claims that its tool allows developers to build intelligent endpoints up to five times faster than hand-coded AI-based IoT solutions.
  • Automated AI real-world experiments: Allegro AI announced at CES an open-source DevOps tool that automates how data scientists run real-world experiments involving deployment of deep learning and machine learning models from cloud to edge. The new Allegro Trains Agent automates AI real-world experiments, version control of deployed DL/ML models, and resource management on the clusters that execute these models. It enables data scientists, researchers, and algorithm engineers to run, track, reproduce, and collaborate on successful ML/DL experiments.
  • Over-the-air programming of AI-powered smart cameras: AnyConnect and ASUS announced the availability of an AI-powered camera platform for secure edge applications running over Wi-Fi, 4G, and 5G networks. The new AnyConnect Smarter Camera Platform enables camera networks, such as for surveillance applications, that can make automated, instant decisions, notifications, and actions. The camera platform incorporates ASUS Tinker Edge T, a single-board computer (SBC) designed for AI applications and featuring the Google Edge TPU. It supports over-the-air programming of AI models for flexible camera use cases. It supports standard AI accelerators and frameworks including Google Edge TPU and TensorFlow Lite. The partners have also released a reference design for AI camera form factors including security cameras, dashcams, and bodycams.
  • Synthetic data to improve training of AI models for autonomous endpoints: QuEST Global demonstrated an AI-enabled advanced driver assistance system (ADAS) that had been developed by training deep learning models using synthetic (aka simulated) data representing various environmental conditions and terrains. QuEST Global reports that using synthetic training data enables the ADAS application to make automated inferences that are 25 percent more accurate than those of models trained using only actual image data.
  • Contextually adaptive in-app AI-facilitated user experiences: LG launched a conceptual framework for development of AI-enabled in-app experiences that are highly adaptive, contextual, and effective. LG’s new framework ties layered enhancements in AI-enabled user experience to specific technical capabilities. Experience efficiency uses AI to automatically adjust performance in user interactions in relation to pre-established sensory input parameters. Experience personalization accumulates data from interactions with the environment and users, uses AI to recognize patterns, and leverages the patterns to improve user task effectiveness. Experience reasoning uses AI causality learning to drive more reliably predictive outcomes for users. Experience exploration uses AI to develop new application capabilities through the automated forming and testing of hypotheses to uncover new inferences.
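Several of the items above revolve around shrinking trained models so they fit on resource-constrained edge hardware. The core idea behind one common technique, post-training quantization, can be sketched in a few lines. This toy example maps 32-bit float weights to 8-bit integers, cutting storage roughly fourfold with a bounded reconstruction error; it illustrates the general approach, not any particular vendor's optimizer:

```python
import random

# Post-training quantization sketch: linearly map float weights into the
# 0..255 integer range, storing only the ints plus a scale and offset.

def quantize(weights):
    """Affine quantization of a list of floats to 8-bit integer codes."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against a constant weight list
    codes = [round((w - lo) / scale) for w in weights]  # ints in 0..255
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [c * scale + lo for c in codes]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(1000)]  # stand-in model

codes, scale, lo = quantize(weights)
restored = dequantize(codes, scale, lo)

# Rounding to the nearest code keeps the error within one quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err < scale)  # prints True
```

Production tools layer far more sophistication on top of this (per-channel scales, pruning, architecture search, hardware-aware retraining), but the storage-versus-precision trade-off they navigate is the one shown here.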

In Futurum Research’s ongoing coverage of the AI DevOps toolchain for consumer and other applications, we will explore these and other innovative approaches for making cloud-to-edge inferencing more efficient, accurate, and flexible. What was noteworthy about the AI-centric discussions at CES 2020 is how thoroughly these practices are coming into the development and tuning of the intelligence being baked into every new consumer-focused product.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Related content:

CES 2020: It Might As Well Stand for Connected Ecosystems Showcase 

Intel Provides Big Updates on Project Athena at CES 2020

5 Key Themes That Will Dominate Headlines At CES 2020

Image Credit: UploadVR