As just about anyone who knows me can tell you, I'm into robots. But what I'm into is way beyond anything I could build myself, given current resources.
Once you get beyond a minimal level of robotic complexity, you start seeing advantages in breaking out parts of the computational load and keeping them local to the sensors and effectors they manage. That means distributed processors, which is fine until you try to get them to talk to each other, at which point you'll discover that you've become a pioneer, exploring poorly charted territory.
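To make that concrete, here's a minimal sketch of the pattern in Python: one node samples a sensor locally and streams readings to another node that would drive an effector. Plain UDP on a loopback socket stands in for whatever fabric actually links the boards, and the message layout, addresses, and rates are all assumptions made up for illustration.

```python
# A sketch of the basic division of labor between distributed processors:
# a sensor node samples locally and ships compact messages to the node
# that manages an effector. UDP over loopback is a stand-in transport;
# the address, message format, and 50 Hz rate are hypothetical.

import socket
import struct
import threading
import time

CONTROLLER_ADDR = ("127.0.0.1", 9750)  # hypothetical effector-controller node
MSG_FORMAT = "!Id"                     # sequence number + one sensor reading

def effector_controller(stop: threading.Event) -> None:
    """Receive sensor readings; a real robot would update an effector here."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(CONTROLLER_ADDR)
    sock.settimeout(0.5)
    while not stop.is_set():
        try:
            packet, _ = sock.recvfrom(struct.calcsize(MSG_FORMAT))
        except socket.timeout:
            continue
        seq, reading = struct.unpack(MSG_FORMAT, packet)
        print(f"controller: msg {seq}, joint angle {reading:.3f} rad")
    sock.close()

def sensor_node() -> None:
    """Sample a (simulated) sensor locally and stream readings out."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(5):
        reading = 0.1 * seq           # stand-in for a real sensor sample
        sock.sendto(struct.pack(MSG_FORMAT, seq, reading), CONTROLLER_ADDR)
        time.sleep(0.02)              # 50 Hz update rate, say
    sock.close()

if __name__ == "__main__":
    stop = threading.Event()
    rx = threading.Thread(target=effector_controller, args=(stop,))
    rx.start()
    time.sleep(0.1)                   # let the receiver bind first
    sensor_node()
    time.sleep(0.2)
    stop.set()
    rx.join()
```

In a real robot the transport underneath would be something far more capable than loopback UDP, but the division of labor stays the same: sample locally, ship compact messages across the link.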
It's not that no groundwork has been done; it's that there's nothing close to a single, standard approach to solving this relatively straightforward problem.
Nor is that so surprising, because until recently there hasn't been much need to solve it. Most devices had only a single CPU, or, if they had more than one, the processors were tightly integrated on the same circuit board, connected via address and data buses; most of the exceptions have been enterprise servers, with multiple processor boards plugged into a single backplane.
But the time is coming when, for many devices, the only convenient way to connect distributed computing resources will be flexible cables, because those resources will be mounted on surfaces that move relative to each other, separated by anywhere from a few centimeters to tens of meters. They'll still need a fast connection, though: both low latency and high data rates.
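To put rough numbers on "high data rates," here's a back-of-the-envelope calculation; the sensor parameters (an uncompressed 1080p camera, a 32-joint state bus) are assumptions chosen for illustration, not measurements from any particular robot.

```python
# Back-of-the-envelope numbers for what "high data rates" means here.
# All sensor parameters are assumptions picked for illustration.

def stream_rate_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Raw (uncompressed) data rate of a video stream in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

# One uncompressed 1080p color camera at 30 frames per second:
camera = stream_rate_mbps(1920, 1080, 24, 30)   # ~1493 Mbit/s

# A joint-state bus: say 32 joints, each reporting a 12-byte sample at 1 kHz:
joints = 32 * 12 * 8 * 1000 / 1e6               # ~3 Mbit/s

print(f"camera: {camera:.0f} Mbit/s, joints: {joints:.1f} Mbit/s")
```

That single raw camera stream already approaches 1.5 Gbit/s, which puts the requirement squarely in gigabit-class serial interconnect territory.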
From what I've seen so far, RapidIO is the leading contender in this space.