Monday, November 17, 2014
I have, on several occasions, remarked that it would be nice if certain, unnamed chip design houses (on at least one occasion I imprecisely used the word "vendors") would make their chips available to the startup/DIY/hobbyist/education market, in lot sizes appropriate to that market.
So what, you might ask, would prevent other companies from scooping up those chips and using them in competing products? There are two answers to that question.
First, you don't market your latest designs this way, particularly not when your own products are parts-supply constrained. Rather, as each design reaches the end of its life in your own product lineup, you let the fabrication line run just a bit longer to produce a few extra parts (thousands, tens of thousands, or hundreds of thousands), for use in repairing returned devices and for sale, at a significant markup, to alternative markets, such as the producers of small circuit boards like the Raspberry Pi, or even as single parts to supply houses like SparkFun.
That "significant markup" is the second reason why this would not aid the competition. While you may be able to cost parts included in your own products at a slim margin above the cost of production, there's no need to apply this practice to parts sold into the open market. You can charge several times, even ten times, the cost of production, and still be doing your customers a favor.
The only party who would stand to lose from this, so far as I can see, is Atmel, who has the lion's share of the market for processors used in boards sold to individuals and in small lots. However, since they already have the relationships for serving this market, as well as some potentially useful processor-design related IP, a great first step would be to buy Atmel.
In any case, as the size of this business grows, and it's sure to, it will become more reasonable to create custom designs better suited to it, combining cores honed for highly competitive mass markets with more generic I/O circuitry. It will also become more reasonable to make one's software development tools available for use in programming devices into which one's chips have been incorporated.
Not saying who I'm talking about/to here, but, if the shoe fits, please try it on.
Sunday, November 09, 2014
Let's use an imaginary example, to avoid confusion with real team loyalties, beginning with an imaginary sport, SpaceBall, played in zero gravity (in orbit until it becomes possible to cancel out gravity over a small portion of the Earth's surface, at which point the popularity of the sport takes off). Players navigate about a polyhedron-shaped arena using arm-powered flaps (wings), rebounding off trampoline-like walls, and manipulating the ball with their legs: holding it between their knees as they fly, dribbling it with little nudges as they accelerate and repeatedly catch up with it, or shoving or kicking it to pass it to another player, move it nearer to their own goal, or attempt a score.

Because the wings afford little control at low speed, the usual practice is to take full advantage of the walls to build up speed, so players can be seen flying through the arena in all directions. To add just a bit to the excitement, players have the option of storing part of the effort they exert in the form of compressed air, which can be released as a jet, accelerating them 'upwards', meaning in the direction of their heads.

Near collisions happen continuously, and actual collisions resulting in injuries are quite common. Also, although contact is technically forbidden, except to knock the ball from the grasp of the player who has possession of it, players frequently make intentional contact with their opponents, kicking them or slapping them with their arms. It being very difficult to distinguish between intentional and accidental contact, only the most obvious instances are penalized.
Because of the huge expense involved in building an arena, there are only a handful, essentially one per continent, and because of the huge investment required to build a competitive team, only the largest metropolises have their own, while most teams are franchises relying upon something other than specific geographical identity, such as a broader cultural identity, to build their fan bases. Monetizable fan bases are critical, so the investors who originally built or who later bought the teams can recoup their money and make a profit.
The appeal to broader cultural identity means that the teams become surrogates for actual inter-cultural tensions, with the outcome of specific contests frequently being portrayed in moral terms and the elation or dejection from a win or loss frequently spilling over into the streets.
Like the teams in this imaginary scenario, political parties have set themselves up as the champions of various cultural segments, usually multiple such segments, in an effort to patch together a plurality of voters. And even though they may be put off by the others with whom they find themselves lumped together, and by some of the positions taken by their team's candidates, voters usually hold their noses and vote for the team with which they most strongly identify. A lot of dark money goes to ensuring that any combinational irritations aren't felt strongly enough to keep them from doing so, and to magnifying the irritations that would be experienced by those switching to a different team affiliation.
This nose-holding propensity is what makes it possible for the deep pockets, essentially investors, to bankroll one team or another, in the assurance that victory will result in a more favorable state of (financial) affairs for themselves.
Rather than go on, providing real-world examples, I'm going to cut to the chase, which is that if you would like to help shrink the influence of big money on politics, one way to do it is to participate in MAYDAY.US, the crowd-funded effort to elect "a Congress committed to fundamental reform by 2016."
Friday, November 07, 2014
I don't talk much about this, but I do think vertical farming will be an increasingly important contributor to food production in the future, and that it will be highly mechanized almost from the outset. My concern is with the land that continues to be subject to the need for production and the desire for landscaping, pressures that vertical farming won't relieve soon. So long as we continue to manage land for our own purposes, we need to do a far better, far less destructive job of it!
Thursday, October 30, 2014
Sometimes you'd like to vote for a ballot issue, but it contains some fatal flaw, such as the use of debt to pay for something that ought to be funded out of current revenues, even if that means being patient. Yet voting against it seems like sending the wrong message, because it's the use of debt you're voting against, not the basic proposal itself.
Sunday, October 26, 2014
An extensive review article published on Nature's website, and described on the UC Davis news website, concludes that no-till farming only results in yield increases in dryland areas, and then only when combined with crop rotation and residue retention, and that it results in a yield reduction in moist climates.
While I have no reason to doubt the conclusions of the co-authors, as far as they go, I do have some concerns as to the scope of the comparisons they've made. However, not having read the full article, I can only pose questions and suggest considerations which may offset, or even outweigh, the modest yield reductions they've noted in moist climates.
It's hard to know where to start; this is such a complex subject. As practiced in western countries, no-till usually also means weed suppression by use of herbicides. It may or may not include residue retention, but if the residue is retained it is likely to be in rough form rather than finely chopped, or retained as the dung of the animals that grazed on it after harvest, never as well-distributed as the residue was in the first place. It may or may not include crop rotation, but almost certainly does not include polyculture (also called intercropping), which has become an all too rare practice.
Allow me to back up a bit and consider an assumption, as expressed by one of the co-authors: "The big challenge for agriculture is that we need to further increase yields but greatly reduce our environmental impacts." Certainly we need to vastly reduce the environmental damage being done by modern agriculture, but just how much do we really need to increase yields? Population growth estimates that fail to take into account the predictable reduction in fecundity that accompanies prosperity will result in alarmism, but the reality is that the benefits the global economy has to offer the poorest are slowly finding their way to every corner of the planet, and it's reasonable to think that the world population will plateau, if not at ten billion, then perhaps at eleven or twelve billion. Of course, there is hunger now, even starvation, much of it happening in the dryland areas surrounding the Sahara. Yield increases in this region would be particularly helpful, but are complicated by competing uses, as fuel and as animal feed, for the residues which should be left in the fields. Realistically, the bottom line comes down to this: can we afford to sacrifice long-term fertility for short-term gains in yield?
That question raises another: does the article published in Nature include any long-term studies, by which I mean at least twenty years, preferably longer? Not only does tillage gradually burn through (literally oxidize) soil organic matter, eventually affecting water absorption, water retention, and nutrient availability, and increasing the energy required for ongoing tillage as the soil becomes denser, but it also takes time for an ecosystem of animals and microbes to develop that can efficiently incorporate crop residues into the soil, particularly in fields that have a long history of routine tillage.
Were any options other than simply leaving residue in the field or grazing considered? Are there any cases of fine-chopping residue during harvest? What about initially removing everything but the stubble and returning it after processing it through animals (as feed), through anaerobic digestion (producing methane gas for fuel), and/or through composting?
Were the costs of production considered? No-till generally involves the cost of herbicide and its application, but tillage is an energy-intensive operation, and over the long term diesel will only become more expensive. If the fuel must be grown, shouldn't the percentage of the overall crop area required to grow it be deducted from the net yields? How does no-till look after performing that calculation?
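That deduction is simple arithmetic. The numbers below are entirely hypothetical, chosen only to show the shape of the calculation, not drawn from the article or from any dataset:

```python
# Hedged back-of-envelope sketch: adjust per-acre yields for the fraction
# of crop area hypothetically diverted to growing fuel for field operations.
# All figures are made-up placeholders, in relative units.

def net_yield_per_total_acre(gross_yield, fuel_acre_fraction):
    """Yield per acre of total land, after setting aside the share of
    acreage needed to grow fuel for tillage and other operations."""
    return gross_yield * (1.0 - fuel_acre_fraction)

# Hypothetical: tillage yields 5% more per planted acre, but needs 7% of
# the land for fuel crops; no-till needs only 1%.
till = net_yield_per_total_acre(1.05, 0.07)     # 0.9765
no_till = net_yield_per_total_acre(1.00, 0.01)  # 0.99
print(till, no_till)  # under these invented numbers, no-till comes out ahead
```

The point isn't the particular numbers, which I've invented, but that a modest gross-yield advantage can evaporate once the land cost of fuel is charged against it.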
Nor have we yet seen the full benefits of no-till, because we have yet to develop equipment appropriate to it. Western civilization is so accustomed to tillage that we tend to be blind to the assumptions stemming from the fundamental assumption that tillage is the foundation of agriculture. We see equipment built to perform tillage at work and don't think twice about it. There have been some adaptations – spraying equipment that is only as heavy as it needs to be for that purpose, and oversized tires for heavier equipment – but nearly all of the equipment in use, even in no-till operations, still deals with land as a bulk commodity, measured in acres per hour, rather than at the level of detail required to, for example, selectively harvest one crop while leaving several others, intermingled with it, undisturbed.
Until recently, this could only be accomplished by hand labor, but with the advent of computing using integrated circuits, and its combination with sensory hardware, sophisticated mechanisms, and software to match the problem space (together comprising the field of robotics), the question of whether such work can be mechanized has been transformed into one of how soon. A significant obstacle to this development is cultural, in that we've all but forgotten how to tend land in this manner, and may have to reinvent the practice in order to program the machines. Certainly many in our agricultural colleges and universities will require remedial education.
Sunday, October 12, 2014
James Gosling, famed software developer who has spent the last several years working at Liquid Robotics, was recently the featured speaker at a CMU Robotics Institute seminar. My purpose here is not to discuss that talk as a whole, but to focus in on particular issues he discussed which are more generally applicable.
At 52:10, he begins the discussion of fault management, describing, among other things, how LR relies heavily upon features of Java that support continuous operation in the face of problems that would cause software to stop abruptly in other environments.
At 54:30, he discusses communication modes and data prioritization, which is an issue for LR because real-time transmission can cost them as much as $1/kilobyte, for a data rate of ~50 baud.
At 57:46, he briefly discusses security issues, which he says he could have talked about at much greater length.
At 58:43, he mentions Java's write once run anywhere advantage, and how LR makes good use of it in writing and debugging their software.
At 1:05:17, he responds to a comment from the audience regarding inclusion of a basic feature, camera panning, the consequences of various approaches to crafting hardware to support it, and how LR has worked around the problem.
At 1:07:59 he launches into the topic of parts availability, or lack thereof, noting that chips LR would like to acquire are only available as part of circuit boards, or in large lots, which constrains their choices in hardware design.
This last item, the lack of availability of what are, in a volume context, standard parts, is my main motivation for going to the trouble of posting this. It holds back not only the development of robotics, but electronics startups of all sorts, and, to a lesser extent, hobbyists (because in most cases those complete boards are what they need).
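As an aside, the data-prioritization problem mentioned above (at 54:30) is easy to sketch. What follows is my own invention, not anything Liquid Robotics described: when transmission can cost a dollar per kilobyte, queue outgoing messages by priority and drain the queue only as the byte budget allows.

```python
import heapq

# Illustrative sketch only: strict-priority transmit queue for an
# expensive, low-bandwidth link. Priorities and messages are invented.

class PriorityUplink:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority

    def enqueue(self, priority, message):
        # Lower number = more urgent (0 is the most urgent).
        heapq.heappush(self._queue, (priority, self._seq, message))
        self._seq += 1

    def drain(self, byte_budget):
        """Send the most urgent messages first; stop as soon as the most
        urgent remaining message no longer fits in the budget."""
        sent = []
        while self._queue and len(self._queue[0][2]) <= byte_budget:
            _, _, msg = heapq.heappop(self._queue)
            byte_budget -= len(msg)
            sent.append(msg)
        return sent

uplink = PriorityUplink()
uplink.enqueue(2, b"routine telemetry")
uplink.enqueue(0, b"FAULT")          # most urgent
uplink.enqueue(1, b"position fix")
print(uplink.drain(byte_budget=24))  # [b'FAULT', b'position fix']
```

Note the deliberate design choice: the drain stops rather than skipping a too-large urgent message to send a cheaper, less urgent one, so urgent traffic can never be starved by chatter.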
Wednesday, October 08, 2014
While casting about for some way of putting the phenomenon of the Islamic State in context, it occurred to me that the history of Christianity provides a rough parallel – the Inquisition.
Sure, the Inquisition was organized more like a court than a military operation, and no one was guaranteed a place in Heaven for participating in it, but the idea of harsh punishment for heresy or apostasy was as much a part of it as it is today a part of the Islamic State.
One huge difference is that the Islamic State is, of necessity, also a civil authority, and that among its ambitions is the elimination of foreign influences from the territories it considers to be its domain. In that respect it is more like the war of reconquest (La Reconquista), which achieved ultimate success in 1492 and paved the way for the Inquisition.
Perhaps the Islamic State is like La Reconquista and the Inquisition rolled into one.
Friday, August 01, 2014
Wikipedia also has a fairly extensive article on UARTs, the electronic components found at both ends of most serial connections and responsible for encapsulating the complexities of making them work reliably, presenting simplified interfaces to the processors to which they are connected.
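To make concrete a little of what that encapsulation hides, here is a toy sketch (my own illustration, not anything from the Wikipedia article or a real UART's registers) of the common 8N1 framing a UART applies to each character: one start bit, eight data bits sent least-significant-bit first, one stop bit.

```python
# Toy model of 8N1 serial framing; a real UART does this in hardware,
# along with baud-rate timing, oversampling, and error flagging.

def uart_frame(byte):
    """Return the line-level bit sequence for one 8N1 character."""
    bits = [0]                                   # start bit pulls the idle-high line low
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit returns the line to idle
    return bits

def uart_unframe(bits):
    """Recover the byte, checking the start and stop bits."""
    if bits[0] != 0 or bits[9] != 1:
        raise ValueError("framing error")
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = uart_frame(ord("A"))  # 0x41
print(frame)                  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(uart_unframe(frame))    # 65
```

Everything a real UART adds on top of this – clock recovery, FIFOs, parity, break detection – is exactly the complexity the article describes it as encapsulating.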
Sunday, July 27, 2014
As just about anyone who knows me can tell you, I'm into robots. But what I'm into is way beyond anything I could build myself, given current resources.
Once you get beyond a minimal level of robotic complexity, you start seeing advantages to breaking out parts of the computational load, keeping them relatively local to the sensors and effectors they manage. This means distributed processors, which is fine, until you start trying to get them to talk to each other, at which point you'll discover that you've just become a pioneer, exploring poorly-charted territory.
It's not that there hasn't been any groundwork at all done, but there's nothing close to being a single, standard approach to solving this relatively straightforward problem.
Nor is that so surprising, because until recently there hasn't been much need to solve it. Most devices had only a single CPU, or, if more than one, then they were tightly integrated on the same circuit board, connected via address and data buses; most of the exceptions have been enterprise servers, with multiple processor boards all plugged into a single backplane.
But the time is coming when, for many devices, the only convenient way to connect distributed computing resources together will be via flexible cables, because they will be mounted on surfaces that move, relative to each other, and separated by anywhere from a few centimeters to tens of meters. But they'll still need fast connection, both low latency and high data rates.
From what I've seen so far, RapidIO is the leading contender for this space.
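For a feel of what even the simplest groundwork looks like, here is a minimal sketch of message framing over a raw byte-stream link, under a layout I've invented for illustration (2-byte length, payload, 4-byte CRC32). This is not RapidIO, and not any standard – just the kind of wheel every pioneer in this territory ends up reinventing:

```python
import struct
import zlib

# Invented frame layout: big-endian 2-byte payload length, the payload
# itself, then a big-endian 4-byte CRC32 so corruption on the cable is
# detected rather than silently acted upon.

def frame(payload: bytes) -> bytes:
    header = struct.pack(">H", len(payload))
    crc = struct.pack(">I", zlib.crc32(payload))
    return header + payload + crc

def unframe(data: bytes) -> bytes:
    (length,) = struct.unpack(">H", data[:2])
    payload = data[2:2 + length]
    (crc,) = struct.unpack(">I", data[2 + length:6 + length])
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupt frame")
    return payload

msg = frame(b"joint3: set_angle 47.5")  # hypothetical robot command
print(unframe(msg))                     # b'joint3: set_angle 47.5'
```

What standards like RapidIO add beyond this toy – switching, flow control, guaranteed delivery, low-latency hardware termination – is precisely why a single standard approach would be so welcome.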
Tuesday, June 24, 2014
People distrust authority, and for good reason.
There are many examples, both historical and contemporary, of authority being abused for the advantage (whether personal or collective) of those in authority and/or belonging to the power base behind the authority, or for reasons relating to unquestioned dogma. This is true across the board, whether that authority is religious, political, economic, or even scientific in nature.
There are also many examples, in each of these realms, of upstart movements and theories deserving of being smacked down. This background of nonsense is a problem in that it can be very hard to differentiate between a quack and the next Einstein, and broad suppression of quackery risks 'throwing out the baby with the bathwater'.
But beyond that, suppression feeds people's suspicion regarding authority, which plays into the hands of the quacks.
To me this appears to be an irresolvable quandary; the best we can do is to ensure that the public is as prepared as realistically possible to evaluate novel ideas for themselves, and to detect the whiff of quackery wherever it might turn up – even when it emanates from the halls of authority.
Tuesday, May 27, 2014
There is a rumor going around that Apple is (again/still) considering switching to its own ARM-based CPUs in at least its lower-end Macs.
First, consider that platform independence was one of the primary touchstones in the development of OS X, and, from the beginning, Apple maintained parallel PowerPC and Intel builds – for something like five years before finally deciding to take the plunge. That move was driven, in the end, by IBM's unwillingness to continue to invest in energy-efficient consumer versions of its POWER architecture, and by Motorola's disinterest in what it viewed as a niche market, alongside its heavy investment (eventually leading to heavy losses) in Iridium.
Driven by the need for reasonable performance in a very low energy package, Apple has developed its own line of processors, based on ARM, which they've made and sold by the millions, packaged in iPhones, iPods, iPads, AppleTVs, and perhaps even Airport Extremes. Because it owns the designs, the marginal cost of each additional unit is very low, and it's likely that they can assemble a circuit board bearing four, six, or even eight of their own A-series chips for what a single Intel processor costs them.
That Apple would maintain a parallel build of OS X on ARM is practically a given. Of course they do, and they would have been doing so from the moment they had ARM-based chips that were up to the task.
Does the existence of such a parallel build mean that a switch to ARM is imminent? No, but Intel had better watch out that it doesn't try to maintain profitability by hiking the prices of its processors even higher, because it's very possible that the point has already passed where Apple could get better performance for less money by using several of its own processors in place of one Intel processor.
And, don't forget that Apple has been through such a transition twice before; it would (will?) be as seamless as possible.
Sunday, May 11, 2014
That might seem like a strange question for someone like myself to be asking, but it's an important one.
It has become clear to many educators that some facility with data structures, algorithms, and user interfaces has become an important aspect of literacy. While this is a welcome development, it is nevertheless important to ask "to what end?"
Is it necessary, or even desirable, for all of today's K-12 students to grow up to be programmers? Clearly not. Not only are there many other positions which will need to be filled, but, beyond relatively trivial examples, programming is a subtle craft requiring a concurrence of aptitude, attitude, and knowledge to achieve useful results. Most people who are not professional programmers, even if they know enough to put together working code, are, in most instances, better off leaving the coding to the professionals.
Nevertheless, early exposure can tune one's attitude, and improve one's aptitude and one's chances for accumulating the necessary knowledge. At least as importantly, it will also serve to identify those with a particular gift for coding sooner than would otherwise be the case. But there is value in that exposure that has very little to do with preparation for direct involvement in future programming projects, and a great deal to do with learning to think rationally and to communicate with precision.
Those skills are generally applicable, in all manner of vocations, for reasons having nothing to do with computing, but they become particularly important as decisions formerly made and tasks formerly performed by humans become the purview of machines, whether computers or robots.
For each such real-world context into which some degree of automation is to be introduced, it is vital that there be at least one person who is adept in that domain, or able to interpret for those who are, and who possesses the clarity of thought and expression to guide those tasked with developing those cybernetic systems. Without such guidance, in the vast majority of cases, automation also means a sacrifice of competence, as even senior engineers are rarely domain experts outside of their own specialties, which may or may not apply to the project at hand.
By insisting that all students have some exposure to programming, we are improving the chances of such a person being available to guide the next expansion of the domain of automation, and the next, and the next, and thereby improve the chances that the knowledge and skills of contextual experts will be preserved in the process.
Tuesday, May 06, 2014
Internet backbone provider Level 3 reports that six of the internet service providers it connects to have allowed those connections to remain continuously congested, and that these same ISPs are insisting that Level 3 should be paying them for access to their networks.
[Insert sound of loud, annoying buzzer.]
The problem with this is that it's backwards. If anyone should be paying for access to a network, it ought to be the companies with subscriber income paying the backbone providers, not the other way around.
Wake up and smell the stench of irrational overreaching, people!
Friday, May 02, 2014
The 2014 Apple Worldwide Developer Conference (WWDC) opens June 2nd, in San Francisco, at which the company is widely expected to have something more to say about its reportedly health-related 'iWatch' product. Arch-competitor Samsung just announced its own health-related event five days earlier, also in San Francisco.
I have to wonder just who Samsung thinks is going to attend their event. Local tech journalists with nothing better to do, obviously. But suppose you were a tech journalist based somewhere further away than San Jose or Sacramento, weren't already planning to spend the week leading up to WWDC seeing the sights of San Francisco, and had invitations to both events but could only reasonably attend one of them – which one would you choose? For most, the choice would be obvious, and it wouldn't be Samsung.
We can presume the coverage of the Samsung event will come from 1) locals, 2) junior staffers sent by their editors, and perhaps 3) a vacationing pundit or two.
So why is Samsung going to the trouble, when the most likely outcome is that their event will serve as a volleyball set, merely lofting the ball for Apple's spike?
I see three ways in which Samsung stands to benefit.
If Apple makes no mention of anything resembling an 'iWatch' in the public keynote which opens WWDC, then, for a few weeks or months, Samsung looks like the company that's actually doing something about health, and gains a degree of credibility for being in the market from the beginning, when in fact they are very late entrants.
If, on the other hand, Apple does introduce the 'iWatch', Samsung's event will serve to focus even more attention on it than would have otherwise been the case, drumming up even more hype, and, presumably, expanding the size of the potential market for health-related devices in general, of which Samsung might reasonably expect to eventually inherit a sizable chunk.
However, the real coup for Samsung would be if the 'iWatch' project's state of readiness is such that Apple would prefer to delay its announcement but, having been thus challenged by Samsung, opts to go ahead with a pre-announcement even though product availability is still months away – thus providing Samsung with both a clear target and time enough to pull off one of their rapid cloning acts.
Very clever, actually.
Wednesday, April 23, 2014
Much has been made of the turnover at Apple since the death of Steve Jobs, with more than a few concluding that Apple's time of amazing success is over, to be replaced by either stagnation or decay. Steve was the source of innovation within the company, they argue, and without him Apple is doomed.
There's no doubt that Steve was a genius, in his own way, and that Apple's turnaround and rapid ascension to contend for the title of most valuable company in the world was, in no small part, his doing. On the other hand, however much he relished being surrounded by brilliant minds who could steal the spotlight from him, there is a strong tendency for such powerful leaders to become encrusted with others for whom the truth is whatever the leader says it is, and who contribute little more than amplification of that leader's insights and predilections.
No more. Those days are gone at Apple, or at least so dramatically altered as to require a wholesale changing of the guard. Tim Cook may not have Steve's charisma, but neither is he as susceptible to flattery, and, as long-time operations chief, he has a great deal of practice in peering through pretense to gauge whether a person, partner firm, or product proposal contributes to the company's health or degrades it.
Anyone who made a career of being a yes-man for Steve would have a very hard time of it in today's Apple, and I would like to suggest that this underlies the departure of at least a few from the company.
Sunday, March 30, 2014
Facebook obviously isn't going away, despite having paid far too much for WhatsApp.
So, please, before doing so becomes even more difficult, find a new name for it!
"Facebook" derives directly from the company's origins, but, frankly, it sucks as a name.
My preference, given their recent purchase of Oculus, would be "The Rift", but almost anything would be preferable to "Facebook".
Saturday, March 29, 2014
"Them" – we've all heard it, and probably said it, thousands of times: that vague reference to those who are really in control, whoever they might be.
I no longer believe in "Them", at least not in the sense of a single, mutually aware group occupying the top of the pecking order for all purposes.
Sure, there are people who wield more power than others, particularly in specific contexts, but there are millions of them, and taken together they are so far from being a united force in human affairs that the notion is frankly laughable. Even the backers of "Citizens United" only come close to actually being of one mind on a very narrow range of issues. Outside of that context, they're all over the map.
My advice? Spend less time worrying over what "They" might be up to, and more time and energy on figuring out what we all need to be doing in this epoch, and how you can contribute to that.
Thursday, March 27, 2014
UPDATE: Almost simultaneously with my posting this, Microsoft announced Word, Excel, and PowerPoint for iPad. While editing requires an Office 365 Home subscription, the free apps work as viewers without that, so they have essentially just shipped a free PowerPoint viewer for iPad. My recommendation? Get Keynote instead. It's fully functional for $10, and it also works as a PowerPoint viewer.
While all of us not in the business, or holding stock in one of the major broadband providers, would like them to both drop prices and raise bandwidth above the threshold where it becomes meaningless as a constraint on internet use, we shouldn't be holding our proverbial breath. Prices charged to consumers may come down, and overall bandwidth will surely continue to rise. But unless the FCC sees its way clear to declare data transmission a utility, and those who provide it common carriers, savings to consumers will likely be more than offset by charges to content providers for the full-speed network access they require to remain competitive – charges that will necessarily be passed along to consumers, except where the content providers' price structures already provide sufficient wiggle room to absorb them.
This mainly affects the delivery of streaming media, streaming video in particular, which needs uninterrupted bandwidth to perform as expected. Buffering can help, but to really be sure that a playback won't balk halfway through, the entire program or movie needs to be buffered, which is no longer streaming.
Part of the problem with both streaming media and play on demand is that each instance of delivery is a separate transmission. Multiple data centers allow a content provider to originate transmissions more locally, but thousands of store-and-forward nodes would be required to make them truly local, and the cost of so many network connections at that level could very well prove exorbitant.
An option as old as programmable VCRs is to record the programs you want from a broadcast stream, for later viewing. Digital equivalents exist, but my impression is that they do nothing to enhance quality, such as capturing files, complete with adequate error correction, out of the digital cable stream.
If you could capture a bit-perfect file from a broadcast stream, and I'm certain it's possible, that file could also be encrypted, facilitating paid high-quality content.
While I'm on the subject, I'd also like to mention the glaring absence of a standard multimedia format that combines video sequences, still photos, transitions, programmed graphics and animations, audio, and so forth, using no more data than is required for each. There's Flash, but if it were easy to use, why do we see narrated slideshows recorded as video? There's PowerPoint and Keynote, but the same objection applies. QuickTime may have come closer to providing a cross-platform solution than anything else, and if it were to be transformed into a player for Keynote files (including video as a media type) and made available on Android in addition to Apple's platforms and Windows, that might be the best available solution.
With this foundation, Apple would be in a position to challenge YouTube, by providing a better experience per bandwidth consumed, while providing yet another reason for content creators to own a Mac.
Sunday, March 16, 2014
I have a great deal of experience in a relatively narrow niche of public transit: cities with populations of around 100,000, with particular emphasis on the operation of a circulator route, with no fixed schedule, which connects major destinations and other bus routes, using GPS technology that was state of the art fifteen years ago.
At first glance, my job is all about the positions of buses relative to the other buses going the same direction (clockwise or counterclockwise), and the frequency with which they pass each stop along the route, but dig a little deeper and it ends up being mainly about people.
There are a few things I could say about the purely operational layer, without getting into personalities, but nothing very interesting, so there's no point in pursuing it, unless perhaps I were to transform it into a game.
Such a game might be an excellent way of training others to do what I do. It could also constitute a big step toward the creation of better tools to support that work, even automating parts of it, improving overall performance. However, considering my age and how focused I am on other things, it's doubtful that I'll ever get around to writing it.
In case you're motivated to take up this challenge, I'd just like to say that, in the ideal case, such a game wouldn't be tied to any particular geography, but would be configurable for whatever real (or imaginary) context the user chooses. Elements of the game might include: the number of buses on route (each direction, if bidirectional); traffic signals, the patterns in which consecutive signals are linked, and the alteration of those patterns through the day and by day of the week; the probability of having to wait through more than a single cycle of some particular signal due to backed-up traffic; the placement of stops and the probability of a passenger showing up at any particular stop at various times of the day/week; where the boarding passengers are likely to want to get off; inherent instabilities in the regularity of buses passing particular stops; and a toolkit of techniques for rectifying irregularities.
There's quite a lot more that could be included, but these are the most basic factors, and more than enough to take on for a first pass.
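The core dynamic such a game would need to capture is the feedback loop that makes unscheduled circulators bunch: a bus running a little behind finds more passengers waiting, dwells longer, and falls further behind, while the bus following it catches up. A minimal sketch in Python, with a single assumed feedback constant standing in for all the passenger and signal detail:

```python
def simulate_headways(headways, gain=0.05, steps=30):
    """Toy model of headway instability on a loop (circulator) route.

    `headways` are the gaps, in minutes, between consecutive buses.
    Each step, a bus with a wider-than-average gap ahead of it picks up
    more waiting passengers and dwells longer, widening the gap further,
    while a bus with a narrower gap closes up -- the classic bunching
    feedback.  `gain` is an assumed constant setting the strength of
    that feedback; the loop period is conserved automatically because
    the deviations from the mean headway sum to zero.
    """
    h = list(headways)
    mean = sum(h) / len(h)
    for _ in range(steps):
        h = [x + gain * (x - mean) for x in h]
    return h
```

With perfectly even spacing the route sits in an unstable equilibrium; perturb one headway by thirty seconds and the spread grows several-fold over thirty steps. A fuller game would layer stop placement, signal patterns, and time-of-day demand on top of this core instability, plus the dispatcher's toolkit for damping it.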
Wednesday, March 05, 2014
As Microsoft discovered with the Kinect, something developed for a particular context may prove very desirable in other contexts.
For example, consider the tiny cameras that are used in most cell phones, and particularly the better of these, like those used in iPhones. Despite their size, they are very capable, and they are made by the millions, taking full advantage of the economies of scale implied. Despite the rather complex design, with multiple lens elements, Apple's cost for one of these is a few dollars.
For another example, consider the M7 motion coprocessor in the iPhone 5s, also fabricated by the millions. It independently tracks motion, making it unnecessary for the CPU to be powered up to handle such tasks, extending battery life while making continuous tracking more practical.
Both of these technologies would be very helpful in many robotic applications. Sure, there might be alternatives for both, but would they be as thoroughly engineered, as efficient, as compact, or produced in anything like the same numbers?
If you want to take advantage of technologies developed for a mass market, the most direct path to doing so is to make use of the actual parts used in that market. Of course, a prerequisite for doing so is that those parts be made available outside of the supply stream for the products they were developed to be part of.
Sunday, March 02, 2014
Okay, some things are changing at PrimeSense.
According to this post on I Programmer, the OpenNI website is set to close on April 23rd, just over seven weeks from now. That's plenty of time for those involved to download the latest version of the SDK, and they'll want to, since the version sitting on the parallel GitHub site appears to be not the latest beta but, more likely, the latest stable release.
If you have another look at the OpenNI website, you'll notice that there are Windows and Linux versions of the SDK. Considering that Apple has no clear interest in supporting development of the software on these platforms, the wonder is that those versions are still there, months after their acquisition of PrimeSense, not that they have set a date for pulling the plug on the OpenNI website.
Anyone who cares to will be able to pursue development of the (necessarily forked) version of the SDK that will continue to be available through GitHub, on whatever platform they choose. Because OpenNI is middleware, it will retain value even if Apple were to cut off the supply of PrimeSense chips, since it should be possible to make it work with other sensing hardware.
For their part, Apple is sure to make a derivative version of the SDK available through their iOS and/or OSX frameworks (eventually both, undoubtedly), as part of some future version of Xcode.
In my view, Apple's enlightened self-interest would dictate that they should continue to make PrimeSense chips available, not the latest designs, of course, but each about two years after it first finds its way into Apple products, by which time it will have been reverse-engineered by competitors multiple times anyway. If Apple can maintain a technological lead, then their two-year-old designs should still be competitive with current designs from competitors, especially if priced at a low multiple of the cost of production. Likewise, they could safely contribute two-year-old frameworks to the GitHub-hosted OpenNI project, in the certainty that in doing so they would not be giving away any secrets.
By the same reasoning it could be to Apple's benefit to make their older SoC designs available as parts – say beginning with the A4, after it has been retired from Apple's product line – and to cooperate with smaller companies seeking to incorporate those chips into microcontrollers or similar products. This would be a way of recovering residual value from the expense of developing those designs in the first place.
Thursday, February 13, 2014
A rumor that Apple had acquired PrimeSense (the developer of the technology found in the original Kinect), which had been making the rounds for months, was finally publicly confirmed in late November, less than three months ago.
Immediately following that announcement, a wave of angst regarding the availability of Kinect-like technology passed through the tech community, and anyone with a stake in that availability scrambled to find alternatives, if they hadn't already begun that search based upon rumor alone.
Apparently the general presumption is, as is often the case when a larger company (like Apple) swallows a smaller one, that PrimeSense's ongoing business would be limited to the fulfillment of existing contracts, while all assets not needed for that would be busily assimilated into and repurposed for the needs of the mothership. After all, this is what happened to PA Semi, a few years ago.
That would be a reasonable expectation, except that, to judge by its website, PrimeSense is still very much in business.
Sure, their product roadmap is likely to have been altered as a result of the acquisition, and Apple is likely to reserve the newest, hottest technology for their own use, until it's no longer the newest and hottest, but they'd be fools to shut down a revenue stream they can basically get for free, since whatever they develop for their own needs is sure to find a persistent, ready market, if made generally available as parts.
Apple would, of course, be keen to secure the advantage of being able to differentiate their products from those of their competitors, so any company in the computer, smartphone, or tablet business, or any other business Apple is about to enter, would probably find the selection limited to technology that's no longer cutting edge, and others will likely find that Apple's contract stipulates OEM use only, with a prohibition on component resale.
That would be a problem for the hobbyist market and businesses that serve it, but Apple could mitigate this by allowing small-lot retail resale.
Additionally, allowing PrimeSense to engage in wholesale distribution of older component designs could provide Apple with an outlet for disposing of any component overstock that wasn't thoroughly specific to their own products. They might even discover a nice revenue stream in the sale of their SoCs and other chips, like the M7, for use in microcontrollers.
So, while it's nice to have options, and I can't blame anyone for looking for alternatives, don't forget that PrimeSense is there, since they may still turn out to be your best option.
Friday, February 07, 2014
Updated, see below.
You have to wonder what's up with the Woz. My theory is that he's set himself up as Apple's foil, contributing emphasis to what makes Apple Apple through personally contrasting with it at every opportunity.
Case in point: according to InfoWorld, Woz actually, publicly, suggested that Apple should consider building and marketing its own Android phone.
Personally, I can't imagine a quicker path to undermining everything the company stands for. Not only would such a project dilute Apple's focus on their own platform, but it would erode the market for that platform while at the same time devaluing the company's reputation for quality, through the marketing of an inherently inferior product.
And that's probably the point. In thinking through this suggestion, we are reminded why it is a nonstarter, as with so many other offhand comments regarding Apple's business model.
So the more sincere Woz is in his ranting, the better he serves as a model for all of the naysayer pundits who continue to bellow that Apple must lose its soul to preserve its success, making them all look foolish by association.
UPDATE (February 9th): It appears Woz was trolling.
Friday, January 17, 2014
Think triage. Take a collection of companies that Apple might be interested in buying, and each will fall into one of three groups: 1) yes, buy it now; 2) not currently a good fit; or 3) a good business partner, but one Apple would only need to buy to keep it from falling into the hands of a competitor.
An option, for that third category, would be for Apple to acquire an option to buy each company it considers vital to its own business. Then, if a competitor were to make a (verifiably legitimate) offer to buy one of these companies, Apple could interdict the purchase by matching the competitor's offer. If Apple elected not to match, it should at least receive evidence that the terms presented were the actual terms of the competitor's purchase, not a trumped-up figure. Moreover, if Apple declined the terms and the competitor then backed out of the deal, the target company should refund the cost of Apple's option, unless Apple responded with a counteroffer that the company accepted.
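The decision flow of that scheme is simple enough to state as a toy function. Everything here, names and outcomes included, is just my reading of the proposal, not real contract language:

```python
def resolve_offer(apple_matches, competitor_completes, counteroffer_accepted=False):
    """Toy model of the buy-option scheme sketched above.

    Returns a pair: who ends up owning the target company, and whether
    the option fee is refunded to Apple.
    """
    if apple_matches:
        return ("apple", False)        # Apple matches the offer and buys outright
    if competitor_completes:
        return ("competitor", False)   # terms proved genuine; no refund owed
    if counteroffer_accepted:
        return ("apple", False)        # Apple's counteroffer carried the day
    return ("independent", True)       # competitor walked away; fee refunded
```

The refund branch is what keeps a competitor from using a sham offer simply to force Apple into an expensive match.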
Wednesday, January 08, 2014
Korea, South Korea, is at the top of its game. That's not to say that a fall is imminent, nor even on the horizon, only that their current degree of engineering prowess and economic prosperity is unprecedented and, to use an over-worn expression, world class.
The other Korea, North Korea, by contrast, is decades behind the world community in almost every respect, a situation compounded by the malnourishment of its people.
This set of circumstances is very like that which existed in Germany (then West & East Germany, respectively), just prior to the dismantling of the Berlin Wall. For West Germany, reunification was initially a disruptive burden, owing to the poor economic condition of the east, but it also meant a suddenly expanded labor force sharing a common language.
In Korea reunification seems a remote possibility, at present, but a shift in the political wind in the north could change that rather quickly. And don't expect much in the way of resistance from China, since North Korea's persistent cult of the leader makes communism look worse than it otherwise would, and reunification would provide them with a more prosperous trading partner in place of the burden North Korea currently represents to them.
But the really interesting dynamic would be how the economy of Korea as a whole would benefit from reunification, through the combination of North Korean labor, which could be made far more potent by means of a little food aid, and South Korean industrial ability, which, automated as it is, can still make good use of less expensive labor.
Even more than in Germany, language would bind and lubricate Korean reunification, since the Korean language is not widely spoken outside the peninsula and is not strongly subdivided into dialects. Even without exclusionary laws, the common language would help ensure that South Korean business had special access to the North Korean labor pool.
One major question remains: can some degree of economic reunification proceed without a regime change in the north? If China were to signal that North Korea should look to its southern sister for assistance, that seeming impossibility would suddenly become a matter of how much, how soon, and for China it would be a matter of shrugging off a burden.
Wednesday, January 01, 2014
While acknowledging your right to disagree, for the present purpose I'm going to assume that the ‘theory of evolution’ (really more of an established principle) is essentially correct, along with all of the usual corollaries.
However, it does not follow that holding this view, however orthodox and realistically unassailable it may be, confers any evolutionary advantage whatsoever upon those who hold it. In fact the advantage, as measured by fecundity, may belong to those who hold another view, for example that humans have existed in our present form since the beginning of time, a view that approximately one third of Americans espouse. (Pew Research via Reuters)
While there are practical limits on how far one's internal model of the world may diverge from reality without negative repercussions upon one's chances for contributing to the gene pool of the future, social cohesion probably matters more than accuracy in something as esoteric as the origin of the human species. If access to resources depends in any tangible way upon echoing the views of those around you, however erroneous, voicing disagreement is likely to prove counterproductive, in terms of natural selection, even when your view is correct and theirs is not – perhaps especially then.
Nor should there be anything surprising about this. Science is a relatively young phenomenon. For most of the time since the emergence of humanity, we have dealt with our own unknown origins by telling stories, frequently quite fanciful stories involving magical beings, also frequently transforming those stories into dogma as some groups became dominant and others subservient. The need for some explanation, even if a vacuous one, might be considered a defining characteristic of the human species, as distinguished from chimps, for example, who lack complex language and presumably therefore also lack the need for explanations regarding questions they lack the capacity to pose, much less to contemplate at length.
And so we still do, traffic in stories that is. For most of us, most of what we know is composed, not of data and analysis, but of stories, sometimes based on data and analysis, more often not, or at least not directly so. Even so, the majority of us have had enough exposure to science to recognize reliable methods and reasonable conclusions, and make use of that general familiarity in filtering the stories to which we are exposed.
The lingering question is whether we should be concerned over higher rates of childbirth among people whose story filters are less well developed. Is there a risk of resultant devolution? Perhaps, if it were to persist for another few thousand years, but cultural evolution is happening far more quickly and there are far more pressing issues to worry about. We do, however, need to continue to put effort into bridging the gap between scientific methods and popular beliefs, working to improve everyone's story filters.
To be brief, government has no natural interest in protecting intellectual property of any sort unless doing so results in an improvement in the volume, quality, and/or relevance (largely a function of timeliness) of material passing into the public domain.