Ecosystem may be something of a misnomer, but it hits most of the right buttons, so it's a good way to begin.
In the future, nearly every electrical device will also be an electronic device, and nearly every electronic device will either implement voice control or (at a minimum) be connected to a network which provides that service for the space in which the device must operate. Voice control requires sophisticated signal processing. For an SoC with the power to handle voice control, the additional operations necessary to carry out the decoded voice commands will nearly always be trivial.
Say the command is "preheat the oven to 375 degrees". Once that command is received, parsed, and decoded, all that remains is to encode a relatively simple message and send it off to the oven. At a guess, this might represent 1/10,000 of the overall processing to carry out the voice command, certainly a very minor portion.
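To put that fraction in perspective, here is a minimal sketch, in Python, of everything left to do once the command has been decoded. The message format, the port, and the oven's address are all made up for illustration; a real appliance protocol would differ in the details but not in the scale of the work.

```python
import json
import socket

# Hypothetical result of the speech-decoding stage: a parsed intent.
intent = {"device": "oven", "action": "preheat", "temperature_f": 375}

# Encoding the command is a few bytes of JSON...
payload = json.dumps(intent).encode("utf-8")

# ...and sending it is a single datagram to the oven's (assumed) address.
OVEN_ADDR = ("192.168.1.40", 9000)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, OVEN_ADDR)
sock.close()
```

However the message is actually framed, it comes to a few dozen bytes of work set against the heavy signal processing the recognizer has already done.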
Devices that move around on their own will have more need for built-in voice control than stationary devices located where there is likely to be a room-sized network to handle it for them, as in the kitchen. They might accomplish this by wirelessly piping their audio input to a central server and receiving back the decoded command, but they will need substantial onboard processing power anyway, for vision and other senses, so having them handle voice control for themselves will seem quite natural.
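A minimal sketch of that hand-off, again in Python: the device ships its raw audio to the central server and gets back the decoded text. The hub's address, the endpoint, and the response format are assumptions for illustration, not any particular product's API.

```python
import urllib.request

# Assumed address of the household's central server.
HUB_URL = "http://hub.local:8080/decode"

def decode_command(audio_bytes: bytes) -> str:
    """POST raw microphone audio to the hub; return the decoded command text."""
    req = urllib.request.Request(
        HUB_URL,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode("utf-8")

# Usage, with audio captured from the device's own microphone:
# command_text = decode_command(captured_audio)
```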
Voice control and machine vision are, by today's standards, very processor intensive. "Processor" here doesn't necessarily refer to a CPU, but given the trend to SoCs it might refer to a specialized core on a chip with other cores, at least one of which is general purpose. Ideally, those specialized cores could be temporarily repurposed as needed.
In terms of sheer processing power, an SoC with both machine vision and machine hearing would run rings around the CPUs in today's desktop computers. One implication of this is that such a chip could handle many of the tasks desktops currently handle, in its spare time. If you have several such SoCs distributed among several devices in a network, these tasks could be distributed among them, further lightening the burden.
That is based only on available processing capabilities and doesn't take interface issues into account. Perhaps, for some purposes, like writing or coding, you'd rather sit in front of a big screen and interact with it using a keyboard, and for other purposes you might prefer to use a touchscreen tablet. It also neglects the need for a central hub in the network, to act as a local server, as an always-on connection to a cloud service, and as a gateway to the internet as a whole. But, important as that component will be, it won't represent much of an investment, something on the order of the router/switch hubs of today, and you might forget that it's there.
Even the large screen and keyboard is likely to eventually become a thinnish client, intended for use with a server, instead of a stand-alone machine(*), and the software it runs is likely to split into client and server components, partly because you won't want your e-self to be too tightly associated with any particular machine, and partly because you'll want it represented by always-on, internet-connected agents running on the server and in the cloud. Most of the purposes for which we use computers today will either be relegated to the central server (and/or the cloud) or will be handled by one or another of the devices connected to the network, including the big screen in the living room.
*(Laptops are likely to continue to be stand-alone machines, since they will still need to work independently when no network connection is available.)
Most of the consumer's dollar will go to devices that move around on their own, with their hefty processing capabilities and ever-growing mechanical sophistication, performing an ever-growing repertoire of tasks.
Sunday, January 31, 2010