The Internet of Things (IoT) is about to profoundly change the design of embedded systems—but probably not in the way you are thinking. The change will begin not in silicon or in algorithms but in business models. Yet it will quickly permeate every aspect of embedded design.
Early warnings of the shift began several years ago, when IBM—the quintessential hardware company—began to divest its hardware operations to focus on services. Today, we see the CEO of Apple saying he is focused on doubling the company’s services business—including the App Store and Apple Music—from last year’s $25 billion, which already exceeded Mac sales.
But what do IBM or Apple corporate strategies have to do with embedded design? The answer is illustrated in a recent product announcement from a very different kind of company.
An All-Hearing Speaker
Consider the Amazon Echo. At first glance it is a rather expensive amplified, wireless loudspeaker—a really simple embedded system. But to stop there would be to completely misunderstand Amazon’s business model for the Echo. And without getting the business model, you would conclude the Echo is the most hopelessly over-designed audio appliance in history. But it is not. It is a harbinger.
Reading Amazon’s literature, one learns that the Echo has not only a Bluetooth port like any other wireless speaker, but also a WiFi connection that allows it to play audio files directly from the Amazon cloud. The user interface for this connection is Amazon’s Alexa voice-recognition personal assistant.
But Alexa can do much more than front-end Amazon Music. It can also access other commercial music services. And Web services like Yelp or Google Calendar. And it can answer questions with search-based responses. And it gives access to thousands of commercial apps, and to Amazon shopping. In short, Alexa makes this wireless speaker a portal for the retail Web.
But wait—there’s more. Echo can also front-end smart-home IoT networks, giving you voice control over everything from lights to door locks to your furnace.
The point Amazon doesn’t advertise is that in providing all this, the Echo becomes the collection point for a huge amount of personal data. It observes what you listen to, what questions you ask, what you buy, and how you interact with your home. From its beam-forming microphone array and cloud-based postprocessing, it can infer the identities and locations of people in your home, and what media they are playing audibly. There are already tales of an Echo attempting to execute commands given by an Alexa commercial playing on a TV in another room.
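Amazon does not publish the Echo’s audio pipeline, but the core idea behind a beam-forming microphone array is simple to sketch. The array geometry, spacing, and sample rate below are hypothetical illustration values, not Amazon’s: a delay-and-sum beamformer time-aligns the channels for a chosen direction, so speech arriving from that direction adds coherently while sound from elsewhere does not.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
SAMPLE_RATE = 16000      # Hz, a common rate for far-field voice capture
MIC_SPACING = 0.03       # m, hypothetical spacing of a linear 7-mic array

def steering_delays(angle_deg, n_mics=7):
    """Per-microphone delays (in samples) that align a plane wave
    arriving from angle_deg (0 = broadside) across a linear array."""
    theta = math.radians(angle_deg)
    delays = []
    for m in range(n_mics):
        # Extra acoustic path length seen by mic m relative to mic 0
        extra_path = m * MIC_SPACING * math.sin(theta)
        delays.append(round(extra_path / SPEED_OF_SOUND * SAMPLE_RATE))
    return delays

def delay_and_sum(channels, delays):
    """Shift each channel by its steering delay and average the results,
    reinforcing sound from the steered direction."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]
```

Steering toward a talker then becomes a matter of scanning candidate angles and picking the one that maximizes output energy; the heavy lifting in a product like the Echo happens in the DSP and in the cloud.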
Amazon can use all of this data to refine its services to you—improving its response to questions, tweaking its music or audiobook offerings, proposing products and up- or cross-selling. But it can also, with your permission—you did read that fine print before you clicked, right?—collect data about you for use by third parties, including application developers, market researchers, or retailers. Did you really walk back to the TV when that hair-loss ad came on? Each of these uses of data is a potential revenue source for Amazon.
Accordingly, if you take an Echo apart, what you find is not just a Bluetooth chip and a power amplifier. There is the WiFi port. There is a substantial DSP chip, an array of seven microphones, 256 MB of DRAM, and 4 GB of NAND flash. All of this hardware contributes to the user interface, which reportedly can capture spoken voice from across the room even with the music turned up loud. But the cost was probably justified at least partly by anticipation of these new revenue streams.
A Layered Model
“Fine,” I hear you say. “But I don’t work for Amazon.” Consider that the Echo, and Google Home, and doubtless similar devices on the way from other giant companies with major Web retail presence, are not just harbingers of future consumer audio products. They are pointing out a path for many kinds of embedded systems.
Let’s look, for example, at the slowly emerging industrial IoT. Traditionally, an industrial controller is a microcontroller unit (MCU), digital signal processor (DSP), or FPGA with a number of inputs from the device under control, each input measuring a necessary state variable, and a number of outputs to various actuators (Figure 1). With the need to include the acronym IoT in the sales literature, some such designs now include a network interface of some sort, with the at least implied ability to log data to, and receive commands from, this network port. This ability may simply rest on top of the controller’s legacy functionality. Or it may be used, latencies permitting, to move some of the control functions to an external hub or even into the cloud.
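That layering can be made concrete with a small sketch. The hypothetical controller below (all names and the gain are illustrative, not from any real product) keeps its legacy proportional control path untouched; the IoT layer simply publishes each state sample through a callback that stands in for the network port.

```python
import json

class Controller:
    """A classic control loop with an IoT logging layer resting on top:
    the control path is unchanged, and telemetry rides alongside it."""

    def __init__(self, setpoint, gain, publish):
        self.setpoint = setpoint
        self.gain = gain
        self.publish = publish  # callback standing in for the network port

    def step(self, measurement):
        # Legacy proportional control: drive the error toward zero
        error = self.setpoint - measurement
        command = self.gain * error
        # IoT layer: log the state vector without touching the loop
        self.publish(json.dumps({"y": measurement, "u": command, "e": error}))
        return command
```

In a real system the `publish` callback would hand the record to a network stack, a local hub, or a cloud client; here it is deliberately abstract so the separation between control and logging stays visible.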
Now with a sideways glance at the Echo, let’s go a little further. If the controller is logging system state to the cloud, someone up there is building a huge, and potentially valuable, data set. Big-data analysis might detect prospective optimizations of the control loop, or trends that could predict needs for maintenance. Or it might spot anomalies that could indicate trouble elsewhere in the system, or even operator malfeasance. All of this is technical data that can be used within the system.
The trend is for data that normally would have been locked into a local control loop to be collected and disseminated more widely within the control system, or even beyond. Just how deeply the need to give IoT access to control variables has penetrated into the hardware is illustrated by motor control subsystems and chips from German motion experts Trinamic. Company director of marketing Jonas Proeger observes “That is a very typical use of our monitoring and sensorless control functionalities like stallGuard2, which is sensing the load on the motor during movement.”
In addition to locally controlling motor stalls, the motion controller can continuously report data to an external app. “You can monitor your system while it is in use. If there is a continuous increase in load over time you can expect that your mechanical system is wearing out. Some of our customers use this to automatically detect when it’s time to call the service technician.”
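Trinamic does not publish its customers’ service logic, so the following is only a minimal sketch of how such wear detection might work: fit a least-squares slope to a series of load readings, and flag the system when the load is trending upward. The threshold value is an arbitrary placeholder.

```python
def load_trend(samples):
    """Least-squares slope of motor-load samples over time.
    A sustained positive slope suggests the mechanics are wearing out."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def needs_service(samples, threshold=0.01):
    """Flag the equipment for a service call when load is trending up.
    The threshold is a hypothetical placeholder, not a Trinamic value."""
    return load_trend(samples) > threshold
```

A production version would average over many movements and filter out load spikes from normal operation, but the principle is the same: the data already exists in the control loop, and exporting it creates the new capability.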
In some cases, data collected at the point of control will be exported for use by other parts of the control network, or even by cloud apps, all in the name of improving system performance. But other uses of the data might have a very different client. State data analysis can give product planners and marketers insights into how the system is being used, and even where there are friction points between operators and equipment. Such data can lead to new features or services, and to further sales calls to enhance installed equipment.
Some of the data planners want, however, may lie beyond simple state information. It may require the controller to extend its sensors deeper into, or further beyond, the device under control. Perhaps information about the equipment’s surroundings is valuable, or data about the operator’s behavior. Think about that microphone array on the Echo, and consider that sound can carry a great deal of information about vibration, friction, shaft lashing, and other effects that might not be easy to extract from just winding currents on the motors.
Another interesting example of collateral information comes from a human-input technology developer called Quantum Interface. The company’s IP, instead of relying on mouse sliding and clicking, uses gesture input devices to capture, for example, the center of mass, velocity vector, and predicted trajectory of an operator’s hand in three-dimensional space.
Company founder and CTO Jonathan Josephson explains that such continuous, spatial motions can be more natural than pointing and clicking. They can create economies—for instance by allowing the system to observe an operator opening a valve rather than having to read a shaft encoder on the valve shaft. And maybe most important, such an interface can detect metadata—such as the operator’s feelings of urgency, uncertainty, or distraction—from the motion, even before a gesture is completed. Whether from whole-body motion, hand motion, or eye tracking, “the path tells us everything about you,” Josephson says.
The controller may even look beyond its operator for data. In many situations there is a threshold in how much state information you collect. Below the threshold it is more economical to measure individual state variables directly with analog sensors. Above the threshold, it makes more sense to capture many variables at once indirectly—for instance with video or audio—and then to use analytics to derive values for the individual variables.
The classic example comes from municipal services. It is far easier to derive parking-space occupancy from an overhead video camera than from sensors buried under each parking space. And once you have the video stream, you can further analyze it for traffic information, security information, and maybe even foot traffic and pedestrian behavior data for local merchants.
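That threshold is really just crossover arithmetic. With hypothetical costs (the figures below are illustrative, not from any real deployment), one camera plus per-variable analytics undercuts dedicated point sensors once the number of state variables is large enough:

```python
def cheaper_to_use_camera(n_variables, sensor_cost, camera_cost,
                          analytics_cost_per_var):
    """Crossover test for indirect sensing: above some number of state
    variables, one shared camera plus per-variable analytics costs less
    than a dedicated sensor per variable. All costs are hypothetical."""
    direct = n_variables * sensor_cost            # one sensor per variable
    indirect = camera_cost + n_variables * analytics_cost_per_var
    return indirect < direct
```

For the parking example: with a $120 buried sensor per space, a $500 camera, and $10 of analytics per space, four spaces still favor sensors, but ten spaces favor the camera; and the camera’s video stream then yields the traffic and pedestrian data for free.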
A New Line of Business
This last idea illustrates an important point. Sometimes the data available to a controller have nothing to do with the controller’s original function—they are available because of the accident of the controller’s location, or of other data the controller must gather. If you use a video camera to monitor on-street parking, you are also going to collect a lot of data about sidewalk goings-on. Less obviously, perhaps, if you are using video to replace a number of shaft encoders and linear position sensors, using a wide-angle lens and aiming the camera up a bit could bring a windfall of data about operator behavior, other operations on the factory floor, or customer behavior in a retail setting.
But why? None of this data has anything to do with the controller’s function. Now we come to the importance of thinking about alternative business models. Data is money. Often this extraneous data can create great value when analyzed, properly sanitized, and marketed to third parties. Operator data can yield information for functional-safety compliance, worker training, or even workplace security. Shop-floor video could yield clues about workflow, process inputs, even incidentals like lighting, supervisor behavior, and weather. The wealth of information you can get by watching retail customers needs no elaboration.
The point is that not only is this data valuable, but you—the system vendor, service provider, or equipment owner—can sell it. This is a valuable potential revenue stream, sometimes eclipsing the value of the embedded system’s original function. And that changes everything.
A New Machine
The presence of the IoT brings a new dimension to a wide range of embedded systems, from industrial controllers to vehicular subsystems to drones and consumer appliances. Now designers must consider not only the embedded system’s primary function, but the potential value of the data the system might gather.
This value can come from several sources:
- Data for improved control strategies, broader optimizations, or anticipatory maintenance
- Vital inside information about users and use cases for product planning and marketing
- Data that can be mined and sold to third parties
As the value and volume of collected information grow, these requirements begin to materially alter the system design (Figure 2). For example, the Amazon Echo, far from being a simple Bluetooth speaker, devotes significant cost to a microphone array, signal processing, WiFi, and Internet access. Much of this cost can be justified only by the other functions—retail transactions and Web traffic—the Echo generates.
Moving deeper into the IoT-influenced age, we may begin thinking of embedded systems not as independent fixed-function devices, nor even as clusters of sensors and actuators for cloud-based algorithms. Rather, they may become collectors of dense, high-volume data, with hardware dedicated to transporting streaming data, to local analytics functions, and to vastly enhanced security. They become the ears, eyes, and perhaps snouts of increasingly voracious big-data analyses, seeking the ultimate profitability of omniscience.
By Ron Wilson, Altera Corp