tag:blogger.com,1999:blog-326341422024-02-20T19:29:43.326-07:00Lacy Ice + HeatJohn Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.comBlogger406125tag:blogger.com,1999:blog-32634142.post-19322265556360340862021-11-06T07:55:00.004-06:002021-11-06T08:06:45.311-06:00Peer-to-peer networking, etc, etc, ...<p>
Among my collection of lesser obsessions (actually, at this point, a stronger obsession than my once-compelling interest in the martial arts) is a way of communicating among anything from code fragments to processor cores to entirely separate devices, over what is essentially a mesh network, although it may have hierarchical attributes layered over the peer-to-peer relationships.
</p><p>
<a href="https://twitter.com/harmonicLattice/status/1456635897726836750" target="_blank">This tweet</a>, and the thread in which it is embedded, should serve as a decent entry point into this constellation of ideas/interests, which reaches beyond such networks to include on-the-fly programmable gate arrays, radial bit-slice processors, plug-and-play robotics, and, potentially, a way of thinking about how memes and messages move through social networks.
</p><p>
A term of my own invention (so far as I'm aware) that you'll see mentioned is "cascading networks", which refers to an addressing scheme for navigating a network composed of nodes having ports, each dedicated to a single, typically bi-directional connection, either to another node or to code/hardware hosted by a node, including but not limited to the code/hardware of the node itself (its own kernel). The nutshell version: however many bits are needed to specify the port through which a message should be forwarded are stripped from the front of the header, and the corresponding return-address bits are appended at the end of the message. Under ideal conditions, this scheme should enable message forwarding with extremely low latency. Making such a network secure and robust would no doubt entail considerably more complication.
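</p><p>
For concreteness, here's a minimal sketch in Swift of how such forwarding might look. Everything in it is hypothetical and of my own choosing (the Message and Node types, modeling a link as a node-plus-port pair, port indices rounded up to whole bytes); it illustrates the scheme, it doesn't specify it.
</p>
<pre>
// Hypothetical sketch of cascading-network forwarding. Each node strips
// the leading port index from the route and appends the index of the port
// the message arrived on, so by delivery time the accumulated list,
// reversed, is a ready-made return address.
struct Message {
    var route: [UInt8]        // remaining outbound hops, one port index each
    var returnRoute: [UInt8]  // grows into the return address
    var payload: [UInt8]
}

final class Node {
    // Each port is either open or wired to (neighbor, port on that neighbor).
    var ports: [(node: Node, port: UInt8)?]

    init(portCount: Int) {
        ports = Array(repeating: nil, count: portCount)
    }

    func receive(_ message: Message, on inPort: UInt8) {
        var message = message
        message.returnRoute.append(inPort)          // record the way back
        guard !message.route.isEmpty else {
            deliver(message)                        // route exhausted: it's for us
            return
        }
        let outPort = message.route.removeFirst()   // strip the leading hop
        if let link = ports[Int(outPort)] {
            link.node.receive(message, on: link.port)
        }
    }

    func deliver(_ message: Message) {
        // Hand the payload to local code/hardware; a reply would use
        // Array(message.returnRoute.reversed()) as its route.
    }
}
</pre><p>
The low-latency claim comes from how little the forwarding step does: pop a few bits, push a few bits, and hand the message off, with no routing table to consult along the way.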
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-34154148499765902212021-10-25T10:03:00.003-06:002021-10-25T10:03:43.959-06:00<p>
It seems I have more strong interests (not quite obsessions) than are represented by my other three blogs.
</p><p>
This seems like as good a place as any to pursue those.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-75608904967892569902019-12-18T08:48:00.001-07:002019-12-18T08:52:38.262-07:00UI latency, and how to avoid it...<p>
What I left unsaid in my last post, in September, was that much of what initially drove my interest in SwiftUI was the hope that it would enable lower-latency interactions, by avoiding things like the responder chain and the delays that are built in for the purpose of distinguishing among gestures (whether a touch is actually the beginning of a swipe, for example).
</p><p>
The responder chain is about which view should respond to a click (in macOS) or a touch (in iOS and iPadOS). Events are first passed to the screen/window's root view, and then down a chain of more specific views until one possessing an appropriate gesture recognizer is found.
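</p><p>
To make that concrete, here's a toy model (hypothetical types of my own, not UIKit's actual machinery) of the two-part search: hit-testing downward to the most specific view under the touch, then walking back up the chain until some view claims the gesture.
</p>
<pre>
// Toy responder-chain model; the real frameworks do considerably more.
final class ToyView {
    var subviews: [ToyView] = []
    weak var superview: ToyView?
    var recognizers: Set<String> = []   // e.g. ["tap", "swipe"]
    var frame: ClosedRange<Double>      // 1-D "geometry" keeps the toy simple

    init(frame: ClosedRange<Double>) { self.frame = frame }

    // Downward pass: find the deepest subview containing the point.
    func hitTest(_ x: Double) -> ToyView? {
        guard frame.contains(x) else { return nil }
        for sub in subviews {
            if let hit = sub.hitTest(x) { return hit }
        }
        return self
    }

    // Upward pass: the first view in the chain with a matching recognizer.
    func responder(for gesture: String) -> ToyView? {
        if recognizers.contains(gesture) { return self }
        return superview?.responder(for: gesture)
    }
}
</pre><p>
Every level of nesting adds a step to both passes, which is part of why deep view hierarchies and latency travel together.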
</p><p>
The main source of built-in delay I know about relates to scroll views, and whether a touch is intended to scroll the overall content view or to interact with a child view contained within it.
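</p><p>
UIKit does expose knobs for some of this. A sketch using standard UIScrollView and UIGestureRecognizer properties (whether flipping them is wise depends entirely on the UI in question):
</p>
<pre>
import UIKit

// UIScrollView briefly withholds touches from its content so it can decide
// whether they're the start of a scroll. These properties trade that safety
// margin for lower perceived latency.
func configureForLowLatency(_ scrollView: UIScrollView) {
    scrollView.delaysContentTouches = false    // pass touches down immediately
    scrollView.canCancelContentTouches = true  // scrolling can still reclaim them
    scrollView.panGestureRecognizer.delaysTouchesBegan = false
}
</pre><p>
The catch, of course, is that the delay exists to prevent misreads, so removing it shifts the burden of disambiguation onto your own views.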
</p><p>
I'm now thinking that avoiding these sources of latency was probably a vain hope, since, as I understand it, (for now) SwiftUI translates the declarative code it enables into native UI entities for the platform(s) for which it is compiled. There may be some performance advantages, and there are likely to be more in the future, but there are some hard constraints. No matter the nature of the code, it will still be necessary to determine which view should respond, and to what type of gesture.
</p><p>
Happily, as a source of motivation, this hope has been replaced by some interest in the framework for its own sake, and in the other language features that enable it, although you might not know it from the paltry progress I've managed to make so far.
</p><p>
As for how to avoid latency, the most reliable answer seems to be the same as ever: keep your UI structure as flat as is reasonable, rather than going hog-wild with layering views within views within views. Some such layering is inevitable; just be moderate with it, and don't sweat the unavoidable milliseconds.
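</p><p>
In SwiftUI terms, the habit looks something like this (a contrived pair, but the point scales):
</p>
<pre>
import SwiftUI

// Two renderings of the same row. The flat one gives layout and
// hit-testing less tree to traverse on every interaction.
struct DeepRow: View {
    var body: some View {
        HStack { VStack { HStack { Text("Title") } } }  // gratuitous nesting
    }
}

struct FlatRow: View {
    var body: some View {
        Text("Title")                                   // same result, one view
    }
}
</pre><p>
Multiply that difference by every row of every list and it starts to matter.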
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-69471504280379795972019-09-03T09:11:00.001-06:002019-09-03T09:14:25.237-06:00Dog-paddling behind the bow wave<p>
Those who make it their business to be on the leading edge of the rapid evolution of Swift are deep into figuring out what the latest developments are all about, and have begun sharing the fruits of their study and experimentation with the wider community, myself included.
</p><p>
What I've gleaned from the sliver of this growing abundance I've actually managed to take in is more about general impressions than about, for example, precise syntax. This is typical for me, and I believe it pegs me as a 'field-dependent' learner. (I tend to build a cognitive framework first and fill in the detail later, rather than working from detail to generalities.)
</p><p>
Immediately following WWDC, most of the attention went to SwiftUI — an altogether new approach, at least within the Apple ecosystem, to user interface design and construction, coming with an interactive Xcode component that does for UI much the same thing playgrounds have done for Swift code since Swift was first introduced, and enabling a degree of code-sharing among platforms previously only dreamt about.
</p><p>
As interesting as this is, the technologies making it possible are arguably even more so.
</p><p>
Combine continues to mystify me, somewhat, but I get that it's about data synchronization, so your UI and your data store remain in continuous agreement, for example, although the utility of Combine goes far beyond UI synchronization.
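</p><p>
A minimal sketch of the sort of synchronization I have in mind, to the extent I understand the API so far (CurrentValueSubject and sink are standard Combine; the thermostat is just an example of mine):
</p>
<pre>
import Combine

// The model publishes changes; anything displaying the value subscribes,
// so the two can't silently drift apart.
final class Thermostat {
    let temperature = CurrentValueSubject<Double, Never>(20.0)
}

let thermostat = Thermostat()
let subscription = thermostat.temperature
    .removeDuplicates()                          // ignore no-op updates
    .sink { value in
        print("update the label to \(value)°")   // stand-in for real UI work
    }

thermostat.temperature.send(21.5)                // the sink fires with 21.5
</pre><p>
The same pipeline idea extends to merging, debouncing, and transforming streams of values from anywhere, which is where the "far beyond UI" part comes in.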
</p><p>
Property Wrappers allow commonalities among the behaviors of various properties to be expressed once, and applied in any specific case as a single line of code.
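</p><p>
For example, here's the commonality "clamp this value to a range" expressed once as a wrapper and then applied as a single annotation (a toy of my own, not a standard library feature):
</p>
<pre>
// A property wrapper expressing one behavior, written once.
@propertyWrapper
struct Clamped {
    private var value: Double
    let range: ClosedRange<Double>

    init(wrappedValue: Double, _ range: ClosedRange<Double>) {
        self.range = range
        self.value = min(max(wrappedValue, range.lowerBound), range.upperBound)
    }

    var wrappedValue: Double {
        get { value }
        set { value = min(max(newValue, range.lowerBound), range.upperBound) }
    }
}

struct Mixer {
    @Clamped(0.0...1.0) var volume: Double = 0.25   // the single line of code
}
</pre><p>
The same shape covers logging, thread-safety checks, UserDefaults-backed values, and so on; SwiftUI's @State and @Binding are property wrappers too.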
</p><p>
Function Builders I've barely even got a sense for, but they seem to be about reducing repetitive code by generalizing function definitions to accept varying numbers and/or types of parameters, and/or to return tuples containing varying numbers and/or types of members. (Don't try spending this, it might prove to be counterfeit!)
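</p><p>
For whatever it's worth, here's the smallest example I've been able to piece together, spelled @_functionBuilder with the underscore because, as of the 2019 betas, it isn't yet an official language feature (so, per the above, don't try spending this either):
</p>
<pre>
// The builder collects the bare expressions in a block into a single value.
@_functionBuilder
struct LinesBuilder {
    static func buildBlock(_ lines: String...) -> [String] {
        Array(lines)
    }
}

// A function taking a builder-annotated closure, SwiftUI-style.
func stanza(@LinesBuilder _ content: () -> [String]) -> [String] {
    content()
}

let lines = stanza {
    "so much depends"
    "upon a red wheel"
    "barrow"
}
// lines == ["so much depends", "upon a red wheel", "barrow"]
</pre><p>
SwiftUI's ViewBuilder, which is what lets you stack bare views inside a VStack closure, is the motivating case.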
</p><p>
While I have installed the betas of Catalina and Xcode 11, what coding I've had time for has mainly been in C (more precisely C code in .m files devoid of custom Obj-C classes), since I'm working toward building a new audio generator in C, which will live within an app otherwise written in Swift. I fully expect to get into SwiftUI and all that goes into it down the line, but I want to get that audio generator working first.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-84276878835394811522019-06-19T08:53:00.001-06:002019-07-19T07:35:00.535-06:00In the wake of WWDC<p>
WWDC has come and gone, but its wake is still rolling across the Swift language and Apple platform developer communities (lots of overlap there, but they're not identical).
</p><p>
Swift got a major facelift, in the form of SwiftUI, an Apple framework that shines a spotlight on recent changes to the language, property wrappers and opaque return types in particular.
</p><p>
Unfortunately, as I understand it, to dive into SwiftUI I'd need to install the beta version of macOS 10.15, which I'm not yet ready to do. Maybe I'll take the plunge when the public beta is released, but even that isn't guaranteed.
</p><p>
Other language features only require the beta of Xcode 11, which I probably will install when beta 3 becomes available. (I didn't wait for Beta 3.)
</p><p>
In any case, putting together blog posts is part of my learning process, so you can expect to see some of that here in coming months.
</p><p>
Meanwhile, there are better resources out there than this blog, and I'd encourage you to go have a look at <a href="https://swift.org" target="_blank">the Swift.org website</a> and Dave Verwer's <a href="https://iosdevdirectory.com" target="_blank">iOS Dev Directory</a>.
</p><p>
You can also follow me on Twitter at <a href="https://twitter.com/harmonicLattice" target="_blank">https://twitter.com/harmonicLattice</a>, just be aware that I sometimes tweet out of confusion rather than understanding.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-18125220617636082712019-02-09T07:07:00.001-07:002019-02-11T11:45:33.597-07:00Manual Memory Management in Swift<p>
While working my way through a <a href="https://www.fast.ai/2019/01/10/swift-numerics/" target="_blank">blog post by Jeremy Howard</a>, a missing piece in the puzzle of working with audio data in Swift bubbled up to the surface of my mind, specifically how Swift's handling of mutable value types using copy-on-write might result in multiple memory allocations while writing values into a buffer, exactly what you don't want in a real-time context.
</p><p>
This realization led me to open <a href="https://docs.swift.org/swift-book/LanguageGuide/TheBasics.html">The Swift Programming Language</a> in the Books app and search on the word "buffer" which led to this link, <a href="https://developer.apple.com/documentation/swift/swift_standard_library/manual_memory_management" target="_blank">Manual Memory Management</a>, which is clearly the right starting point for investigating the handling of buffers when copy-on-write is not what you want.
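</p><p>
As a first taste of what that page covers, here's a manually allocated buffer whose lifetime is entirely under the programmer's control, with no copy-on-write machinery involved (a sketch, with the numbers chosen arbitrarily):
</p>
<pre>
import Foundation

// Allocate once, ahead of time; no reallocation can sneak in while
// real-time code is writing samples.
let sampleCount = 512
let samples = UnsafeMutablePointer<Float>.allocate(capacity: sampleCount)
samples.initialize(repeating: 0, count: sampleCount)

// Writing is a plain store, with no Swift bookkeeping in the way.
for i in 0..<sampleCount {
    samples[i] = sinf(Float(i) * 0.1)
}

// The flip side of manual management: cleanup is on you. (Float is a
// trivial type, so deallocating without deinitializing is fine here.)
samples.deallocate()
</pre><p>
The "unsafe" in the type name is the fair price of that control.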
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-28475524644603447112019-02-02T09:29:00.001-07:002019-02-08T22:26:14.919-07:00My first "Real" Programming Project, and also Real-time Coding<p>
Back in the mid-late 80's and early-mid 90's, I went through a series of hastily foreshortened computing experiences. I was a bit late to the personal computing scene, so there was an element of catch-up involved, but, from the outset, I was really more interested in development than in personal use.
</p><p>
My first exposure to computing was a class in Fortran, spring semester 1972, work for which was done on a card punch machine and programs run by taking card decks to a counter that separated the computer operators from the rest of us, loading the deck into a card reader, then returning later to pick up the printout. In addition to Fortran, there was a bit of JCL at the beginning (and end?) of each deck. I also had a single experience with using a printing terminal (no screen), having figured it out enough to get started from the documentation on a nearby table.
</p><p>
I did well in Fortran, but bombed out of PL/I (Programming Language One) the following semester. This was probably more due to extracurricular complications than anything to do with that language, but that, combined with my initial failure to comprehend calculus, did put any notion I had of going into computer science on hold for more than a decade. (At that time, computer science was widely viewed as a branch of mathematics, and calculus was required for admission into the degree program.)
</p><p>
I dropped out after my second year in order to pursue an entirely different, non-academic interest, which involved moving to a different city in a different state. Nevertheless, what little I'd learned informed how I then saw the world, and I was beginning to view everything largely in terms of information.
</p><p>
I caught my first sight of a personal computer of some sort in late-summer 1976, just after the end of a summer program I'd attended at a small college in Vermont. I didn't get close enough to it to read any labels, and it's possible that it was merely a desk-sized word processor, but that thought didn't occur to me until much later. I thought I was looking at a stand-alone computer dedicated to use by a single person. That idea began worming its way into my consciousness; it prepared me to understand the significance of Moore's Law when I finally encountered it as such, and (re)awakened an interest in microprocessors and integrated circuitry in general, and in the programmable code behind digital computation. Clearly there was something there that would only become increasingly important, and would sooner or later enable things that were all but inconceivable at that time.
</p><p>
Even so, when I did go back to school full-time, in 1978, it was in biology, with the half-hearted intention of switching into engineering after completing my bachelor's degree. I've never really regretted my foray into biology, because without it I might never have found my way to (the layman's version of) <a href="https://en.wikipedia.org/wiki/Systems_theory" target="_blank">systems theory</a>, from which I gleaned a few key concepts, including that of an <a href="https://en.wikipedia.org/wiki/Open_system_(systems_theory)" target="_blank">open system</a>, <a href="https://en.wikipedia.org/wiki/Emergence" target="_blank">emergence</a>, and the elusive idea of a <a href="https://en.wikipedia.org/wiki/Attractor#Strange_attractor" target="_blank">strange attractor</a>.
</p><p>
All of this had nothing at all to do with what I was doing for a living at the time. So, shifting gears...
</p><p>
The first computer I actually owned was an Atari 600XL (<i>Great keyboard!</i>), which came with just 16 kilobytes of RAM, but I also purchased the external module that increased that to 64 kilobytes, along with a 300-baud modem, which I used to connect to a mainframe at the local university so I could work online and file programs for a course in Pascal.
</p><p>
Next I moved up to the Atari 1040 ST, and bought the developer's kit (Mark Williams C), but made the mistake of getting the lower-resolution color screen instead of the higher-resolution grayscale screen, so coding was difficult, and I was unfamiliar both with C and with operating system APIs in general, and GEM/TOS in particular (DRI's Graphics Environment Manager and the Tramiel Operating System, named for Jack Tramiel, who'd purchased Atari). I did learn a bit, but it was overwhelming, and I ended up selling that machine, including the dev kit, and switching to MS-DOS running on an Epson computer for my next go-round.
</p><p>
As I recall, I used that machine for both a class in data structures, using Pascal, and an accelerated course in C, but that whole period is a blur for me, so I might have the details wrong. I do remember owning a copy of Turbo C for it.
</p><p>
Next came an Amiga 500, which made the Epson superfluous, followed by an Amiga 3000, which made the 500 superfluous. I got the 3000 because I had this idea for a program I wanted to write. There was also a dongle involved, the name of which I don't remember, that converted the Amiga's video output to a standard SD video signal, recordable on VHS. In any case, as with my early attempts at programming the Atari ST, this project involved programming in C using operating system APIs, but this time I had a clue about C and the <a href="https://archive.org/details/1990-beats-steve-amiga-rom-kernel-ref-3rd" target="_blank">available documentation</a> was much better!
</p><p>
This project involved first creating a map from code, then scrolling it around (similar to how UIScrollView works in iOS) while the mouse cursor, positioned in the middle of the screen, traced out a path. To avoid artifacts, and as a means of establishing pacing, the map scrolling had to be done during the vertical blank, the time when the electron beam in the monitor moved back from the bottom to the top. I don't clearly recall how this worked, but it probably involved creating a callback function and passing a pointer to that to the OS, so it would be called when the vertical blank happened. In any case, this was my first exposure to time-constrained (real-time) processing.
</p><p>
I might have a hard time proving this, since, if I do still have the code for it squirreled away somewhere, it's probably on an Amiga-formatted floppy disk, but that program did work, and I was able to make a video recording. I was not, however, successful in getting anyone else interested in the project, so I also lost interest. At the same time, my interest in Amiga as a platform was ebbing. I'd recognized the limitations (in the absence of scale and adequate investment) of their custom-hardware approach, and was ready to jump ship.
</p><p>
I nearly forgot one chapter of this story, which was that I replaced the Amiga with one of the original Pentium machines, a Packard Bell Legend 100, as I recall, which came with Windows For Workgroups and a ridiculous graphical wrapper. It had an optical drive as well as a hard disk and a couple of floppy drives. This was a machine I could reasonably have used for development, but my heart wasn't in it, and this was at the time when the web was taking off. I picked up a bit of HTML, but otherwise mainly used that machine as a fancy terminal emulator.
</p><p>
I'd been casually following Steve Jobs's NeXT since the beginning, and around the time of Apple's 'acquisition' of NeXT I donated the Pentium, then, soon after that, switched up my whole situation, which was disruptive, so it wasn't until the iMac came along that I again owned a computer, and even longer before I got into Mac programming. But that's another story, for another time.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-24539544823903540892019-01-12T10:58:00.001-07:002019-04-03T08:18:01.980-06:00The Three Phases of Real-time Code<p>
Think of this as a novice's understanding, if you like. I make no claim to being any sort of expert, certainly not an expert in real-time coding, but I do have a tiny bit of experience, and have used this pattern, even if not quite intentionally.
</p><p>
First there's what you can do before the real-time code runs, to smooth the way for it and minimize the amount of work that has to be done by real-time code. In my own projects, this has mainly meant creating a precomputed table of sine values, enabling the calculation of sound samples based on sines via simple table lookups, but anything you can do ahead of time that reduces the CPU cycles needed to accomplish the real-time work is helpful.
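</p><p>
Sketched in Swift (the table size and names are just my choices):
</p>
<pre>
import Foundation

// Phase one: done once, before any real-time deadline exists.
let tableSize = 4096
let sineTable: [Float] = (0..<tableSize).map {
    sinf(2 * Float.pi * Float($0) / Float(tableSize))
}

// Phase two: inside the real-time code, a sample costs a lookup and an
// index update, with no transcendental function on the hot path.
func nextSample(phase: inout Int, step: Int) -> Float {
    let sample = sineTable[phase]
    phase = (phase + step) % tableSize
    return sample
}
</pre><p>
Interpolating between adjacent entries buys more accuracy for a given table size, at the cost of a few more of those precious cycles.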
</p><p>
The real-time code itself should be as simple as possible, avoiding anything that can be done beforehand or left for later, and altogether avoiding dynamic method calls (calls to code that cannot be inlined because the specific version of the method to be used cannot be determined until runtime). In Swift, if you need to call a class method, make the class a base class with no subclasses, use the "final" keyword on either the class or the method, and, if possible, within that method avoid accessing anything other than parameters and stored values.
</p><blockquote>
(Inserted 03April2019: Above, I should have said that the real-time <strong>algorithm</strong> should be as simple as possible. The code itself should be flat, so it executes inline, with a minimum of <i>jump</i> statements in the machine code. This may mean duplicating code to avoid indirection, and that's a reasonable tradeoff; performance matters more than code size in this context.)
</blockquote><p>
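To restate the dispatch point as code, a contrived sketch with names of my own choosing:
</p>
<pre>
// "final" means render() can be statically dispatched, and a small body
// can be inlined outright: no vtable lookup on the real-time path.
final class ToneSource {
    private var phase: Float = 0
    private let step: Float

    init(step: Float) { self.step = step }

    // Touches nothing but its parameters and stored properties.
    func render(into buffer: UnsafeMutablePointer<Float>, count: Int) {
        for i in 0..<count {
            buffer[i] = phase        // placeholder for real sample math
            phase += step
            if phase > 1 { phase -= 2 }
        }
    }
}
</pre><p>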
Avoid algorithms higher than O(n), or at the very worst O(n log n). Eliminate loops within loops, if you possibly can. Also avoid heap allocation and deallocation; whatever needs to be in the heap should be set up beforehand and left in memory until later. If need be, you can set a flag to indicate when an object in memory is no longer needed.
</p><p>
Don't run real-time code on the main thread. If using a framework that includes a real-time context, take advantage of that by making use of the appropriate thread it provides, as by putting your real-time code into a callback. If you're rolling your own real-time thread, you're way deeper into this than I am already!
</p><p>
Finally, any cleanup that doesn't have to be done in the real-time code shouldn't be; leave it for later, passing along just enough information from the real-time code to enable the cleanup code to do its job. You can pass information out of the real-time context to your other code by modifying values stored in variables defined in the scope enclosing the definition of the callback. Grouping these into a mutable struct, an instance of a final base class, or mutable static values seems like a good idea.
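</p><p>
Putting the last two paragraphs together in one sketch (the callback signature here is invented for illustration, not borrowed from any particular framework):
</p>
<pre>
// Shared state, set up before real time begins; the callback writes into
// it, and the cleanup code reads from it afterward.
final class GeneratorState {
    var phase: Float = 0
    var missedDeadlines = 0    // written in the callback, inspected later
}

// The framework would own the real-time thread; we only hand it a callback.
func makeRenderCallback(state: GeneratorState)
        -> (UnsafeMutablePointer<Float>, Int) -> Void {
    return { buffer, count in
        // Real-time phase: loads, stores, and arithmetic only.
        for i in 0..<count {
            buffer[i] = state.phase
            state.phase += 0.01
            if state.phase > 1 { state.phase -= 2 }
        }
    }
}

// Phase three, on a non-real-time thread, at leisure.
func cleanUp(after state: GeneratorState) {
    if state.missedDeadlines > 0 {
        print("missed \(state.missedDeadlines) deadlines")
    }
}
</pre><p>
The point of the shared GeneratorState object is exactly the one made above: the callback leaves just enough information behind for the cleanup code to do its job.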
</p><p>
And then there's testing. Think about what the worst-case scenario might be, and test for that. If your real-time code reliably returns within the time allowed under those conditions, it's time to test it on the slowest device that might be called upon to run it, and if that works you're golden!
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-54413143454953172812019-01-06T23:41:00.001-07:002019-02-11T07:49:17.330-07:00Swift: Origins and My Personal History with it<p>
Once upon a time there was a graduate student named <a href="https://en.wikipedia.org/wiki/Chris_Lattner" target="_blank">Chris Lattner</a>, who had a bright idea. That bright idea turned into <a href="https://en.wikipedia.org/wiki/LLVM" target="_blank">LLVM</a> (an acronym for Low Level Virtual Machine), essentially compiler technology, the popularity of which has been on a tear ever since.
</p><p>
Lattner went to work for Apple, but continued to be heavily involved in the development of LLVM, notably including in the extension of Clang (LLVM's C language front-end) to support C++. (In the LLVM world, a front-end turns what we commonly think of as computer code into <a href="https://en.wikipedia.org/wiki/LLVM#Intermediate_representation" target="_blank">LLVM-IR</a>, LLVM Intermediate Representation, from which it can then be further transformed by a back-end into machine code for some specific platform, after a bit of polishing while still in LLVM-IR.)
</p><p>
As I understand it, in the wake of that effort, Lattner thought there had to be a better way, and set out to create what has become known as <a href="https://en.wikipedia.org/wiki/Swift_(programming_language)" target="_blank">Swift</a> (<a href="https://swift.org" target="_blank">see also</a>). As you might guess from its name, one of the design goals for Swift was that it should run quickly, especially as compared with scripting languages like JavaScript that are interpreted, or compiled just-in-time, as they are executed, rather than being compiled ahead of time.
</p><p>
Another primary design goal was that it should be safe, immune to many of the categories of <a href="https://en.wikipedia.org/wiki/Software_bug" target="_blank">defects</a> that find their way into computer software. Yet another was that it should be modern, bringing together features from an assortment of other programming languages, features that make code easier to write and comprehend.
</p><p>
Something that may never have been a goal so much as an underlying assumption was that it should leverage LLVM. Of course it should; who would even think to question that, myself included! Swift's most tangible existence is as a <a href="https://en.wikipedia.org/wiki/Compiler#Front_end" target="_blank">front-end compiler</a> for LLVM, implementing and evolving in lockstep with an evolving language specification. (That front-end compiler determines what will and will not be allowed to progress to the LLVM-IR stage.)
</p><p>
But to get back to the story of its origin, Lattner worked alone on this for a while, then showed it to a few others within Apple, where it at first became something of a skunkworks project, then a more substantial project involving people from outside the company, but still keeping a low profile. In fact it kept such a low profile that when it was publicly introduced at <a href="https://developer.apple.com/videos/wwdc2014/" target="_blank">WWDC 2014</a> nearly everyone was taken completely off-guard.
</p><p>
That public introduction brings to mind another design goal, or maybe design constraint, which was that Swift had to be interoperable with <a href="https://en.wikipedia.org/wiki/Cocoa_(API)" target="_blank">Apple's existing APIs</a> (Application Programming Interfaces), otherwise it would have had a much more difficult time gaining traction. Being interoperable with the APIs meant being interoperable with <a href="https://en.wikipedia.org/wiki/Objective-C" target="_blank">Objective-C</a>, the language Apple began using when it acquired <a href="https://en.wikipedia.org/wiki/NeXT" target="_blank">NeXT</a> in 1997. I can't speak to how Swift might have turned out differently if that were not the case, but I'm relatively confident this requirement served to accelerate its development by dictating certain decisions, obviating the need for extended discussion. (Swift also inherited some of the features of Objective-C, notably including <a href="https://en.wikipedia.org/wiki/Automatic_Reference_Counting" target="_blank">automatic reference counting</a> and the use of both internal and external parameter labels in functions, initializers, and methods, which contribute to its readability.)
</p><p>
So it's June, 2014, and Swift has just been announced to the world. Despite the vast majority of the existing code base being in Objective-C, <a href="https://en.wikipedia.org/wiki/C_(programming_language)" target="_blank">C</a>, or <a href="https://en.wikipedia.org/wiki/C%2B%2B" target="_blank">C++</a>, both at Apple and at other companies providing software for Apple's <a href="https://en.wikipedia.org/wiki/Computing_platform" target="_blank">platforms</a>, the writing was plainly on the wall that Swift would eventually largely if not entirely displace Objective-C, just not right away. Since I didn't personally have a large Objective-C code base, and what I did have I'd basically neglected for over three years, I saw nothing to hold me back from diving into Swift, well nothing other than having very limited time to give to it.
</p><p>
However, as I got further into it, I discovered some details that muted my enthusiasm. Most importantly for my purposes was Swift's initial unsuitability for hard <a href="https://en.wikipedia.org/wiki/Real-time_computing" target="_blank">real-time</a> use, like synthesizing complex sound on the fly (my primary use-case). It still isn't really suitable for such use, but it is getting closer.
</p><p>
I also had quibbling issues, including the initial lack of a Set type (collections of unique elements), even though much of the functionality of sets had already been developed as part of the Dictionary type, and then, when a Set type was introduced, it felt like a step-child, with an initializer based on Array syntax (ordered collections which may have duplicate elements). I remember thinking, if Swift had started out with a Set type with its own syntax, it would have made far more sense for Dictionary syntax to be based on that rather than on arrays, since a Dictionary is essentially a Set with associated values, all of the same type. (There are only so many options for symbolic enclosure on most keyboards — parentheses, curly braces, square brackets, and angle brackets — and these were already fully subscribed — by expressions, blocks, arrays, and type specifications, respectively. <a href="https://en.wikipedia.org/wiki/Unicode" target="_blank">Other symbols</a> and combinations of symbols are available but would not be as straightforward to utilize, and, in any case, it's a bit late in the game to be making anything other than additive changes, alternative syntax that does not replace what already exists.)
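</p><p>
Concretely, the syntax I'm grumbling about:
</p>
<pre>
let array = [1, 2, 2, 3]        // Array literal: square brackets, natively
let set: Set = [1, 2, 3]        // Set: the same brackets, borrowed; only
                                // the type annotation says otherwise
let dict = ["a": 1, "b": 2]     // Dictionary: brackets again, with pairs
</pre><p>
The dictionary at least gets its own key-value pair syntax inside the brackets; the set gets nothing of its own at all.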
</p><p>
Another quibble revolved around numeric values not being directly usable as Booleans (true/false). In C, in a Boolean context, zero evaluates to false, and any other value evaluates to true. This can be very convenient and is one of my favorite features of that language. Yes, in Swift one can always use a construction like <span style="color:orange">numericValue != 0</span>, but when you're accustomed to just placing a numeric value in a Boolean context and having it evaluated as a Boolean, that feels slightly awkward. I get that using numerics as Booleans invites the use of complicated Boolean constructions, which can make code more difficult to read. There have been many times when I've had to parse what I had myself strung out onto a single line, by breaking it out over several lines, to be able to understand it and gauge its correctness. Even so, it initially annoyed me that I would have to give this up in Swift. (I've long since gotten over this and now prefer the absence of implicit type conversions. In Swift, with regard to types, what you see is what you get, and that's a good thing!)
</p><p>
But there were also aspects of Swift I liked right away! <a href="https://docs.swift.org/swift-book/LanguageGuide/TheBasics.html#ID330" target="_blank">Optionals</a>, for example, just made so much sense I was amazed they hadn't already been incorporated into every language in existence, and enums with the choice of <a href="https://docs.swift.org/swift-book/LanguageGuide/Enumerations.html#ID146" target="_blank">raw or associated values</a> were clearly a huge improvement over how enumerations work in C and Objective-C. Also, once it finally settled down, function/method declaration and call-site syntax hit the sweet spot for me. Likewise access control — with the exception of the descriptive but nevertheless slightly awkward 'fileprivate' keyword, though given how rarely I'm likely to have need for it, I can live with it, and I certainly have no intention of attempting to reopen that can of worms at this late date!
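</p><p>
A taste of both features, for anyone who hasn't met them (ordinary Swift, with the names invented for the example):
</p>
<pre>
// An enum with associated values: each case carries its own data.
enum Gesture {
    case tap(count: Int)
    case swipe(velocity: Double)
}

// An optional: either a Gesture or nil, and the compiler makes you
// say which cases you've handled.
func describe(_ gesture: Gesture?) -> String {
    switch gesture {
    case .tap(let count)?:      return "tap x\(count)"
    case .swipe(let velocity)?: return "swipe at \(velocity)"
    case nil:                   return "no gesture yet"
    }
}
</pre><p>
In C, the nil case would be a sentinel value and the associated data a separate tag-plus-union; here the compiler enforces all of it.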
</p><p>
Even though it initially seemed overdone, I like the emphasis on value types (<a href="https://academy.realm.io/posts/swift-gallagher-value-semantics/" target="_blank">value semantics</a>). I'm comfortable with a little bit of <a href="https://en.wikipedia.org/wiki/Indirection" target="_blank">indirection</a>, but begin to get nervous when dealing with pointers to pointers, and get positively fidgety when dealing with pointer arithmetic. (You know you want to keep those <a href="https://en.wikipedia.org/wiki/Call_stack#STACK-FRAME" target="_blank">stack frames</a> small, and pointers to objects allocated on the heap can be the most direct means to that end, but it can get out of hand.) Happily, Swift not only doesn't take away the reference option, it makes using it easier and safer, while also preventing value types from resulting in overly large stack frames and excessive copying.
</p><p>
I also like the notions of <a href="https://docs.swift.org/swift-book/LanguageGuide/Protocols.html" target="_blank">protocols</a> and <a href="https://docs.swift.org/swift-book/LanguageGuide/Generics.html" target="_blank">generics</a>, but in a fuzzier way, since I really don't completely comprehend either, and there are particular fundamental protocols, like Sequence, the existence of which I am vaguely aware of but not much more than that. I suppose these are the sorts of things I'll be delving into here going forward.
</p><p>
But to climb back up to my preferred edge-of-space point of view, I've had this sneaking suspicion all along that Swift is composed from <a href="https://en.wikipedia.org/wiki/Atomism" target="_blank">primitives</a> of some sort, although you might have to be a compiler adept to really understand what they are and how they fit together. To use the example of sets, dictionaries, and arrays, from above, the essential characteristic of a set is that its elements are unique; no two are identical. The Set type, as implemented, is unordered, but you might also have OrderedSet, and it is in fact quite possible to create such a type; moreover, Apple's Objective-C APIs already include both NSOrderedSet and NSMutableOrderedSet. Likewise you might want an unordered collection, the elements of which are not necessarily unique. Uniqueness and ordering are independent attributes of collections.
</p><p>
Besides Sequence, among the protocols I'm vaguely aware of, are such things as Collection, MutableCollection, Numeric, and so forth. While these are straining in this direction, they really aren't the primitives that exist (only?) in my imagination, which at this point I would expect to be written in C++ rather than Swift, or only expressible in LLVM-IR.
</p><p>
I'm more curious about this than is good for me, considering how unprepared I am to understand things at this level, but there's no point in fighting it. Show me bread and I see the ingredients and the processes applied to them to make the finished product. Show me a car and I see parts and the assembly process. It's in my nature, and so it's also likely to show up here, to the extent I make any headway in wrapping my head around such esoterica.
</p><p>
<a id="goMeta">(17Feb2019)</a> I just realized that there's a general principle to be extracted from this, which is that there's no point in fighting a tendency to dream about what might be. Instead, it's better to choose dreams that are both achievable and worth the effort, and build bridges to them from current reality.
</p><p>
Update: As if to add credibility to my hunch about Swift being built up from abstract and/or compiler-level primitives, on January 10th <a href="https://twitter.com/jeremyphoward" target="_blank">Jeremy Howard</a> published <a href="https://www.fast.ai/2019/01/10/swift-numerics/" target="_blank">High Performance Numeric Programming with Swift: Explorations and Reflections</a>, in which he states "Chris [Lattner] described the language to me as “syntax sugar for LLVM”, since it maps so closely to many of the ideas in that compiler framework."
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-82889878616263079132019-01-04T17:40:00.001-07:002019-01-10T07:33:03.027-07:00Filling in the Gaps with Swift<p>
They say the best way to learn is to teach.
</p><p>
Well, despite having dabbled in Swift for 4.5 years, I don't think I'm quite ready to teach it; nevertheless, I don't suppose it would do anyone any harm to watch over my shoulder as I attempt to wrap my own head around it.
</p><p>
I doubt that I'll entirely devote this blog to that purpose, but you can expect to see such posts begin to appear, directly.
</p><p>
For now, here's a link to an article by the amazingly prolific Paul Hudson, <a href="https://www.hackingwithswift.com/glossary" target="_blank">Hacking with Swift: Glossary of Swift Common Terms</a>. This article contains a few breezy, imprecise definitions, but the imprecision mostly won't matter while you're just setting out to learn the language. Just be aware that there'll be more detail to learn as you advance.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-83711118406670567402018-10-28T19:59:00.001-06:002018-11-01T08:17:59.725-06:00Other blogs<p>
In my last post here, I mentioned three other blogs, but I failed to link to them. Here are those links...
</p>
<ul>
<li><a href="https://harmonicratio.blogspot.com" target="_blank">Harmonic Lattice</a>, renamed from Harmonic Ratio, is about a project to make musical scales based on pure intervals easier to use (loosely based on Just Intonation).</li>
<li><a href="http://cultibotics.blogspot.com" target="_blank">Regenerative AgRobotics</a>, renamed from Cultibotics, is about the application of robotics to enabling the scalability of perennial polyculture.</li>
<li><a href="https://gentlemartialpractice.blogspot.com" target="_blank">Aging Gracefully Through Gentle Martial Practice</a> is about what my fascination with the martial arts has evolved into and the insights I've experienced along the way.</li>
</ul>
<p>
Outside of these long-term obsessions, I haven't had much to say lately. No idea whether that will change.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-84556188308889265522018-02-11T10:41:00.001-07:002018-02-11T20:36:59.556-07:00Repurposing this blog<p>
If you don't count <a href="https://people.well.com/user/satyr/" target="_blank">my well.com homepage</a>, this blog is my first public online endeavor still in existence, predating <a href="https://twitter.com/lacyiceplusheat" target="_blank">my original Twitter account</a> by a couple of years.
</p><p>
Nevertheless, it has fallen into neglect, owing in no small part to having lost my taste for the brashness with which much of it is written. Perhaps I've gotten over myself.
</p><p>
I'd been toying with the idea of closing it, or weeding out the more egregious posts (a bigger project than I really wanted to take on), but rather than either of those I think I'll simply repurpose it.
</p><p>
Henceforth you can expect less in the way of pompous broad strokes here, and, if anything, a bit more attention to detail, technical and otherwise.
</p><p>
I'm not setting out to be boring, but there's a ready supply of brashness to be found elsewhere, and I really don't feel as though I need to be contributing to it.
</p><p>
I have three other blogs for specific interests. As before, this one remains a catchall for whatever doesn't neatly fit one of those, so maybe it isn't so much a repurposing as a retuning.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-41709212154602318412017-03-11T09:51:00.001-07:002017-03-11T09:51:17.711-07:00Trump, the Cyber-coding language<p>
There ought to be a programming ("cyber"-coding) language that reflects Donald Trump's handling of information, both for the fun of creating and wielding it, and for the assistance it could provide in making his circumlocutions explicit.
</p><p>
Obviously, any such effort should be crowd-sourced, complete with a GitHub project. Unfortunately, I am neither a good enough programmer nor familiar enough with the ins and outs of open source software to contribute much of value to any such effort, but I do have a few suggestions.
</p><p>
Booleans should have four states: true, false (false but intended to be believed), crossed-fingers (truth optional, but not really intended to be believed), and indeterminate (something akin to Schrödinger's cat).
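</p><p>
In Swift, for want of a working Trump compiler, that Boolean might look like this:
</p>
<pre>
// The four-state Boolean, per the spec above.
enum TrumpBool {
    case trueValue                    // plain true
    case falseButMeantToBeBelieved    // false, intended to be believed
    case crossedFingers               // truth optional, belief not expected
    case indeterminate                // something akin to Schrödinger's cat
}
</pre><p>
A conditional would presumably be free to branch on whichever case polls best.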
</p><p>
Scalar values should have only two states: too small to care about and too big to measure (expressible using the sign bit).
</p><p>
Assertions should exist but have no effect when they fall flat.
</p><p>
The switch statement should be recast to perform an operation analogous to a bait-and-switch, perhaps simply ignoring the cases (there only for show) and always performing the default.
</p><p>
And, of course, it should be named "Trump", and the standard library or runtime system should be named "Bannon".
</p><p>
I doubt that attempting to turn this into a working language, one that actually produces compilable code, would be worth the effort, but in the role of prototyping pseudocode it might actually prove to be useful.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-88015295029515119062016-10-26T10:02:00.001-06:002016-10-26T10:02:27.014-06:00Earth covered, Terraced, Molded Dome Structures<p>
If you spin a vessel containing a liquid around the vertical axis, the lower/outer surface of the liquid will mold to the inner surface of the container, while the upper/inner surface of the liquid will form a parabolic cavity. Use a liquid that hardens to a solid, and this is a simple way to create a single-piece dome.
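</p><p>
(For the record: a liquid spun at a constant angular velocity ω settles into the paraboloid z(r) = z₀ + ω²r²/(2g), where r is the distance from the spin axis and g is the acceleration of gravity, so the depth and steepness of the cavity can be tuned simply by choosing the spin rate.)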
</p><p>
One advantage of a single-piece structure is that it can be very leak-resistant, and domes can be quite strong. The combination of these two characteristics makes molded domes ideal starting points for earth covered buildings, but to keep the earth from sliding off the dome, it's necessary to berm the sides thickly, so the surface of the earth covering slopes more gently than the dome itself.
</p><p>
However, if terrace-forming indentations are built into the mold, the resulting dome will be better at supporting its earth covering, and there will be less need for wide berming.
</p><p>
Unless drainage is built into the mold, or drilled into the dome after molding, heavy precipitation will result in overflow, with excess water from higher terraces flowing onto the soil retained by lower terraces, so it would make sense to use a sandier soil mix in the lower terraces, and plants that thrive in such an environment.
</p><p>
The mold can include a protrusion in the bottom to create a hole in the top of the dome for a skylight. Similarly, holes for windows and doors (with reinforced edges and overhangs) may also be designed into the mold, and hardware for mounting doors and windows fitted into the mold before molding.
</p><p>
Once in use, a growing mass of plant roots will help keep the earth covering in place.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-78857707776875126412016-09-03T17:00:00.001-06:002016-09-04T16:51:21.644-06:00Tipping Point or Bottleneck<p>
I love <a href="http://gladwell.com" target="_blank">Malcolm Gladwell</a>, as much as I love any man I've never met in person and to whom I am not closely related, but I wonder about the central metaphor of his book <a href="http://gladwell.com/the-tipping-point/" target="_blank">The Tipping Point</a> (published in 2000), although I do think the implication of leaving behind the possibility of going back to the way things were before is altogether accurate.
</p><p>
What for me seems to be missing from this metaphor is the limited capacity of any culture to process change. You might think of it as being analogous to inertia or friction, but I think it might better be characterized in terms of density and pressure.
</p><p>
It's as though we are being forced, by the pressure of innumerable events, into a conical channel with what at present remains a tiny opening at the pointy end, like the nozzle of an acetylene torch, being accelerated into an unpredictable future beyond anyone's control. The effect is rather like an extreme roller coaster, both exciting and terrifying.
</p><p>
Perhaps we should be reaching back 30 years further to the publication of <a href="https://en.wikipedia.org/wiki/Future_Shock" target="_blank">Alvin Toffler's Future Shock</a> to find the other side of the Tipping Point coin, and the explanation for why so many people are so ready to support such regressive public policies.
</p><p>
Afterthought: Perhaps an even more apt metaphor is <a href="https://youtu.be/-IfmgyXs7z8" target="_blank">quantum tunneling</a>, in this case between paradigms. Any individual has some probability of finding themselves in an alternative paradigm at any moment, and should they find a place there they may make the transition to that new paradigm permanent.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-43399841954394649522016-08-24T21:45:00.001-06:002016-08-24T21:45:53.083-06:00Apple less visionary under Tim Cook? Don't bet on it!<p>
On the excuse of Tim Cook's fifth anniversary as CEO of Apple, there has been a flurry (one might even say a feeding frenzy) of articles proclaiming that, under his leadership, Apple is less visionary than it was in the past, under Steve Jobs.
</p><p>
That's not the way it looks to me.
</p><p>
Sure, it's been quite a while since certain products have been updated, and, other than the much anticipated Apple Watch, most of the customer-facing news that has broken surface over the last few years has felt incremental rather than new and brilliant.
</p><p>
This is less true of developer-facing news, which has included the introduction and rapid evolution of Swift, and also less true of the underlying hardware technology, such as the A-series chips, which have dramatically improved every year since they were first publicly mentioned (the A4, used in the iPhone 4), in terms of sheer performance and also in terms of performance per watt.
</p><p>
Add to that the rumors that they're hard at work preparing an autonomous electric vehicle of some sort, and that they are also investing heavily in augmented reality.
</p><p>
To me it looks like Apple is laying the groundwork for bigger visions, perhaps even more profound visions, than it ever attempted under Steve Jobs. Time will tell.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-83460501107887522752016-06-28T09:24:00.001-06:002016-06-28T23:23:25.626-06:00Bring back the five-year plan!<p>
Okay, five may not be the right number, maybe seven, maybe four, maybe even just two, but the idea deserves another look.
</p><p>
The Soviet Union took a lot of jibes for its five-year plans, with their lofty targets and less than stellar fulfillment, at least that was the view of them we got growing up in the U.S. The Soviet example aside, the point of having such plans isn't so much to push progress as to control its collateral effects, which largely have to do with new stuff arriving piecemeal, instead of in a coordinated manner, each driven as if by an ambition of its own — and push-back born of what happens to the value of investments in displaced ways of doing things.
</p><p>
I know I should be providing examples at this point, but the noise around any particular interesting example is so deafening that it makes thinking about imposing a little discipline on progress very difficult — and that's near to the point, without that discipline chaos reigns.
</p><p>
What such 'plans' can offer is staged transitions, with new things that are interdependent arriving together, and together with provision for the retirement of old things. (For 'things' read infrastructure, technologies, practices, methods, regulations, arrangements, ...)
</p><p>
Of course nothing above the quantum level happens instantly, and there would need to be some overlap, say a two-year ramp-up period before a new plan takes effect, and another two-year period to tie up loose ends after it has been superseded.
</p><p>
Have a great idea that isn't quite ready? Maybe it gets pushed back to the middle of the next plan, maybe to the beginning of the following plan, but when it does roll out it will arrive as a complete idea, with thought having been given to how other things are affected, including who stands to profit from having their idea anointed and how the notion of <a href="https://en.wikipedia.org/wiki/Essential_patent" target="_blank">standards-essential</a> patents applies.
</p><p>
So who gets to say what each new plan should include and what it shouldn't, and how much advantage should those who play by the plan receive over those who choose to ignore it? Good questions, which I leave as an exercise for the reader.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-65680941788964003412015-11-22T09:04:00.001-07:002015-11-22T09:05:16.453-07:00Tai Chi for roboticists<p>
As roboticists struggle to create devices (especially humanoid devices) capable of moving about safely and elegantly in uncontrolled environments, it would help if they had a deep, visceral understanding of movement themselves.
</p><p>
This is something the practice of Tai Chi could help with. Tai Chi begins with static balance, and progresses very gradually to dynamic balance, although incorporating momentum from the outset, while it is still negligible, with the aim of developing exquisite awareness of it.
</p><p>
There are also health benefits, which can be achieved through many activities, but for assimilating the fundamentals of graceful movement, there is nothing better than Tai Chi.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-22201375303883843652015-06-12T10:47:00.001-06:002015-06-12T10:47:42.396-06:00hunger & crop subsidies<p>
Alternative uses for commodities that are directly consumable by people (wheat, maize, soya, etc.), such as the production of meat, fuel, and bioplastics, drive up the prices of those commodities, making them less affordable to those who can't afford anything else. Government subsidies contribute to the profitability of producing such commodities, but are inefficient as a means of keeping the prices to end consumers under control.
</p><p>
The solution would be to confine subsidies to shipments which actually go to direct human consumption, leaving other uses, including meat production, to compete in an open market. (Dairy and egg production might be subsidized at a lower rate than direct human consumption, although this begins to get complicated as laying hens, dairy cows, and the majority of male chicks and calves go to slaughter sooner or later, so such operations are a mixture.)
</p><p>
However, this approach raises the question of whether the grain that goes into a box of processed breakfast cereal should receive the subsidy. Since some processing (roasting, rolling, milling and/or grinding) renders many commodities more useful, it wouldn't make sense to preclude that, but there are other ways to approach this issue.
</p><p>
Subsidies could be limited to larger package sizes, say one kilogram (2.2 pounds) as a minimum, or to products where marketing overhead (advertising, packaging, etc.) and profit constituted no more than, say, 20% of the price to the end consumer. (That figure would need to be high enough to fund a distribution network, but not so high as to make that business lucrative enough to attract corruption.)
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-29736093292136375632015-05-19T11:04:00.001-06:002015-05-19T11:04:20.615-06:00Intel fails Apple again<p>
Well, obviously not just Apple, but Apple in particular.
</p><p>
Apple has chosen to ship new 15-inch MacBook Pro models with last year's (Haswell) processors, because the appropriate low-power, quad-core chips remain unavailable in the current generation (Broadwell) of Intel processors. With the first of the next generation (Skylake) processors arriving in August, it's likely that, for this particular product line, Apple will skip Broadwell altogether, and, once new MBPs ship with Skylake, all will be well once again, for awhile.
</p><p>
Meanwhile, ARM cores and Apple's implementations of them are closing in on the performance levels of Intel's products, while continuing to beat the pants off of Intel in terms of performance-per-watt, although Intel has made progress in that regard.
</p><p>
If current trends continue, at some point it won't make sense for Apple to continue to use Intel processors for some Mac line, probably beginning with the 12-inch MacBook or MacBook Airs, but once one line switches over, the others will surely follow, with the Mac Pro being the last holdout.
</p><p>
When that day comes, it's likely there won't be many tears shed at Apple.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-74828875295172030212015-05-07T10:20:00.001-06:002015-05-07T10:20:12.137-06:00state security, nexus of control, and government legitimacy<p>
If you believe, as I do, that the general well-being of the governed is the primary rationale for the existence of government, and the source of its legitimacy, certain things follow.
</p><p>
Among them, in the current environment, is the need for some state security apparatus, a sort of early warning system for any sort of threat to that general well-being, combined with some means to effectively head off those threats or respond in the event they cannot be averted.
</p><p>
But in a world in which state security is a given, and a culture unto itself, one of the most poignant questions to be asked is in whose interest it acts. This breaks down into three more specific questions, relating to the law authorizing the existence and activities of such agencies, the political appointees who run them, and the career agents who rise through the ranks to exert a degree of control.
</p><p>
Of these three, the agents, faced with harsh, pragmatic realities, are the least likely to bend to changes in the political wind, while the political appointees running the agencies are the most likely to do so. Law moves more slowly, but it too grows in reflection of the prevailing winds of the times. That's not to say that the agents are necessarily more interested in the general well-being of the governed than their bosses, but that can be one result.
</p><p>
This might make the agents seem hard-nosed and unresponsive, but they have a job to do. What's more material is how they conceive of that job, and how they are directed by law and agency administrators, whether it is truly in service of the general well-being or whether it is in service of something else, something more in line with the agenda of the Koch brothers.
</p><p>
Efforts to 'out' agents, like the recent mining of LinkedIn data, are sure to expose many well-intentioned people for every bad actor they uncover, and paint them all with the same brush. While it does possess a certain ironic quality, the net effect will be to thicken the wall between the agents and the general populace they serve. This is not useful.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-26716748246606178832015-04-05T12:39:00.001-06:002015-04-05T12:39:33.397-06:00Drought & desertification: Robots can help<p>
A NYTimes article published April 2nd, <a href="http://www.nytimes.com/interactive/2014/upshot/mapping-the-spread-of-drought-across-the-us.html">Mapping the Spread of Drought Across the U.S.</a>, leads off with an animated map supplied by the <a href="http://droughtmonitor.unl.edu/Home.aspx">National Drought Mitigation Center</a>, which shows the spread of drought conditions across the contiguous 48 states since late fall, 2014.
</p><p>
From that article: “Droughts appear to be intensifying over much of the West and Southwest as a result of global warming. Over the past decade, droughts in some regions have rivaled the epic dry spells of the 1930s and 1950s. About 37 percent of the contiguous United States was in at least a moderate drought as of March 31, 2015.”
</p><p>
There are two major ways in which robots can help with the effects of climate change, whether permanent or cyclical, upon food production.
</p><p>
Most immediately, robots can operate indoor production facilities using artificial light to produce high value, quickly maturing crops requiring moist environments. To operate most efficiently, that artificial light would be predominantly red and blue, since green light is mostly reflected away by plants, which is why they appear green to us. This might prove a stressful environment for human workers, but robots won't care.
</p><p>
The other way in which robots can help is in dry fields under the hot sun. This can be as simple as reflective umbrellas, nets, or horizontal shutters that shade the ground from the mid-day sun, but uncover it again in the late afternoon to allow cooling radiation into the night sky. Robots could also maintain drip-irrigation systems or make daily rounds to inject water into the soil near root crowns.
</p><p>
In principle, they could also perform planting, weeding, pest control, pruning, harvesting, and deal with plant materials left behind after harvest, and do it all working a mixture of annuals between and around standing perennials, although much of the technology needed for such a scenario remains to be developed.
</p><p>
On the other hand, given that level of utility, much becomes possible that currently is not. The weight of machinery can be kept entirely off of productive soil, rendering it more capable of holding water. Mulch can be applied at any time. When expected precipitation fails to materialize, plants can be pruned to reduce their leaf area and the amount of water they require. Windbreaks can be installed surrounding relatively small patches of land, in a manner not conducive to working them using tractors and conventional implements, but affording much better protection from drying winds as well as providing a secondary crop of woody fiber and habitat for wildlife. If planted in low berms, those windbreaks would also help to keep what moisture there is in the fields and eliminate water erosion.
</p><p>
The benefits of such technology aren't limited to coping with drought, of course. But given that drought is likely to be a widespread, persistent problem, it can help to keep marginal land, which might otherwise turn to desert, in sustainable production. It might even help to reclaim some land already lost to desertification, beginning with the construction of windbreak fences (like snow fences) to accumulate wind-blown dust, which would become the berms into which living windbreaks could be planted.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-27523346714580606412015-03-23T16:11:00.001-06:002015-03-26T15:19:21.110-06:00Bay Area commercial space vacancy spikelet following opening of Apple's campus 2<p>
While it's certain that Apple won't vacate all of the commercial property it has occupied over the last few years when its Campus 2 opens next year, some of those properties are sure to become surplus space and unnecessary expenses as far as the company is concerned. And while some of that space will be snatched up immediately by the growing collection of enterprises that participate in Apple's ecosystem or cater to its employees, there's still likely to be a spike in the commercial property vacancy rate.
</p><p>
Anticipating this, Cupertino and other nearby communities should be thinking about whether they want to allow those properties to sit idle, waiting for other suitable tenants to come along or for Apple to again outgrow its own facilities, or whether they should encourage conversion to other uses: housing, mixed use, indoor vegetable production, etc. This would be a good time to start examining, and if necessary reforming, their zoning ordinances, to clear away legal obstacles to alternative uses of what might otherwise become a problem.
</p><p>
Here's what a few communities are doing with abandoned shopping malls...
</p><p>
<iframe width="400" height="225" src="https://www.youtube.com/embed/vlImKDTpac8" frameborder="0" allowfullscreen></iframe>
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-81904194108028628102015-02-14T13:58:00.001-07:002015-02-14T14:11:10.460-07:00intro/blog from Robots.net (cultibot)<h4>
From my personal profile/blog on Robots.net<br />
(account name: cultibot)
</h4><h4>
26 Mar 2011 (updated 26 Mar 2011 at 02:45)<br />
Some Things Can't Be Done Without Robots
</h4><p>
I had pretensions of being a back-to-the-land hippy before I ever became seriously interested in robotics, but my brother successfully popped that bubble with a simple, unarguable observation: most people don't want to go back to subsistence farming. So far as that went, he was right, but that didn't make the abusive practices of modern agriculture acceptable. I didn't have an answer, but I kept looking for one.
</p><p>
I had a pretty good idea of what computing was about from an introductory CS class in which we wrote FORTRAN programs on card punches. At that scale there was no help to be found from that direction, but the advent of the microprocessor changed everything. Suddenly it became thinkable to have mobile devices, each with its own electronic brain. My mind reeled with the possibilities, but there were a million unknowns.
</p><p>
One thing was clear, though: if Moore's Law was even close to correct, it wouldn't be long before the speed of the electronics was no longer the hangup. The hangups would be the mechanical designs; the software, much of which would depend on transforming biological knowledge into computer code; and the chicken-and-egg problem of creating an industry and a market for that industry's products at the same time.
</p><p>
And that's pretty much where we are now. The speed of the electronics has so far exceeded the other pieces of the puzzle that even if we might wish for still more it's a moot point. We're not putting what's available to good use.
</p><p>
Remember, we're talking here about getting what we need from the land while, as a species, honoring the back-to-the-land aesthetic of living lightly upon it, not about people fleeing the cities to scratch out their personal livelihoods with whatever meager assemblage of skills they might manage to collect. That could be more destructive than factory farms.
</p><p>
The solution, really the only possible solution if we're to stop soil erosion, ground water and stream contamination, the loss of biodiversity, and the gutting of rural culture, is robots. That's right, robots.
</p><p>
Only by substituting machines which can be invested with some understanding of ecology, or which are at least well suited to play a role in an ecologically sound approach, for the dumb machines currently in use, can we have it all: our comfortable lives, a reliable supply of food of varied types, and a clear conscience.
</p><p>
I'd love to be telling you about all of the cool developments in cultivation robotics, how this team had succeeded in building a system that could differentiate between closely related species immediately upon sprouting, and how another had created a tiny robot that ran on the body fluids of the aphids it consumed. I wish I could report that the USDA had funded research into intermingling rare and endangered native species with crop species and making room for moderate wildlife populations without sacrificing too much commercial productivity. Heh, at least I can truthfully say it could happen, which seemed pretty far-fetched just one year ago.
</p><p>
Realistically, though, nearly all of that sort of work remains to be done, and it'll be a great ride when it finally does begin to happen!
</p><h4>
25 Feb 2011
Key term: Precision Agriculture
</h4><p>
In considering how robotics might be applied to agriculture, a current trend to watch goes by the name Precision Agriculture. This series of posts on AgLeader.com provides some idea of what's meant by the term and how it's used.
</p><h4>
25 Feb 2011 (updated 25 Feb 2011 at 20:11)<br />
Sony’s War On Makers, Hackers, And Innovators
</h4><p>
An article by Phillip Torrone on Make's blog declares Sony an enemy for all makers, hackers, and innovators and explores the company's long history of going after legitimate innovation, hobbyists, and competition.
</p><h4>
14 Feb 2011 (updated 14 Feb 2011 at 17:17)<br />
why I want to replace tractors
</h4><p>
Tractors are good for one thing: pulling something that's difficult to move, generally because moving it means displacing soil, turning over the top layer with a plow, slicing it and turning it slightly with a disc, or simply clawing through it with a harrow. They can, of course, be used to pull lighter loads, but their design is driven by the need to apply strain to a tow bar.
</p><p>
Displacing soil (tillage) might be termed the original sin, although overgrazing, the result of large herds of domestic animals moving too slowly or too frequently over marginal land, predates it. Through excessive aeration, tillage burns through humus (the organic content that, among other things, improves the ability of soil to retain water) and exposes the soil surface to wind and water erosion. It also consumes a considerable amount of energy, usually in the form of diesel fuel.
</p><p>
To make matters worse, mechanical tillage works best with the worst cropping practice, monoculture, where a single type of seed is sown over an entire field, effectively all at once, and the crop typically harvested by shearing off everything more than a few inches above ground level. It's a practice that's efficient in terms of the number of man-hours required per land area, but at a terrible cost.
</p><p>
Personally, though, I have another reason for wanting to replace tractors; they're dangerous. I grew up in a farming community, and, of the farmers I knew as a child, two were crushed by overturning tractors (inherently unstable because they're designed for traction), and another was killed by a falling disc section.
</p><p>
So please forgive me if I seem a little too zealous, too much in a hurry to retire a nineteenth century technology and replace it with something not yet available, something so different that it will require a systemic overhaul, one long overdue in my humble opinion.
</p><h4>
13 Feb 2011
An Initiative to Keep America's Robotics Roadmap Relevant
</h4><p>
Did you know the United States has a roadmap for robotics? It does! In 2006, a one-day workshop titled Science and Technology Challenges for Robotics was organized by George Bekey of USC, Vijay Kumar of UPenn, and Matthew Mason of CMU. A summary report of that workshop states: "There was an enthusiastic response to the workshop with over 85 participants. Discussions had to be cut short because of time constraints. This could clearly have been a two-day workshop. There were many volunteers who were ready to take on more responsibilities to promote the discipline." (Vijay Kumar has recently been interviewed on Robots Podcast and was mentioned on Robots.net even more recently.)
</p><p>
During the process which followed that workshop, Matthew Mason and Henrik Christensen of Georgia Tech collaborated on an essay which summarized the state of robotics and previewed the findings of the effort to produce a roadmap for robotics. (Before occupying the KUKA Chair of Robotics at Georgia Tech's College of Computing, Henrik Christensen was the founding Chairman of EURON, the European Robotics Research Network.)
</p><p>
The final roadmap report was presented in May 2009 before the Congressional Robotics Caucus; however, in the effort to produce that report, the call for the formation of an American Robotics Network (9th slide) appears to have fallen by the wayside.
</p><p>
On January 22nd, Professor Christensen posed the question "Are we ready for an American Robotics Network?" on his blog, saying that he had started a discussion regarding the organization of an American Robotics Network. He has also discussed the formation of such a network in a brief essay on his website. In the recent blog post, he says: "I would like to get this underway as soon as possible to make sure that we can leverage the momentum from a National Robotics Initiative. It will also be an important mechanism to make sure that we can maintain a push forward."
</p><h4>
12 Feb 2011 (updated 13 Feb 2011 at 03:37)<br />
a minimal-hardware approach to weeding
</h4><p>
The idea presented here applies only to weed seedlings. Weeds growing from tubers or invasive roots will need to be handled more aggressively, but seedlings, being poorly rooted, are vulnerable to methods that destroy their single meristem. Moreover, after a few years of careful weeding, seedlings would be the only type of weed remaining, except for those growing from runners invading around the perimeter of the plot from adjacent land, so this method would gradually become sufficient by itself.
</p><p>
In a nutshell, the idea is to use video imagery to locate seedlings, an expert system (the hard part) to distinguish between desirable seedlings and weeds, and a pulsed laser to dispatch the weeds: first confirm a clear path to the weed seedling (nothing in the way), focus on the portion of the seedling containing the meristem, and then deliver one or more relatively high-energy pulses to heat it sufficiently to render the meristem inert, so that its cells are no longer capable of growth and division. It isn't actually necessary to kill the meristematic tissue outright, just to inactivate it, so the higher-energy pulses used to accomplish this should not need to be so powerful that they present any danger of fire.
</p><p>
Of course, if the machine carrying out this task maintains, or has access to, a very detailed map of the plot, one that precisely locates and keeps an image archive of every seedling, then the next time it passes nearby it can simply check whether the plant appears to have withered, or whether it has recovered and continued to grow, in which case it may be time to call in heavier equipment. In this way it can build experience with just how much energy is required to stop the growth of a weed seedling of a particular type at a particular stage of development. Weeds that survive the surgical approach of the laser can be dealt with by more conventional mechanical methods.
</p><p>
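By way of illustration, here is a minimal sketch of that patrol loop in Python. Every interface named below (camera, classifier, turret, plot map) is a hypothetical placeholder for hardware and software that remains to be built, not any existing library:
</p><pre>
# Hypothetical sketch of the locate/classify/zap/verify loop described
# above; all object interfaces are assumed placeholders.

def patrol(camera, classifier, turret, plot_map, max_pulses=3):
    for seedling in camera.detect_seedlings():            # wide-angle pass
        record = plot_map.lookup(seedling.position)
        if record is not None and record.zapped:
            if not camera.appears_withered(seedling):     # it recovered
                plot_map.flag_for_mechanical_weeding(seedling.position)
            continue
        if not classifier.is_weed(camera.closeup(seedling)):  # the hard part
            continue
        if not turret.path_is_clear(seedling.meristem):   # nothing in the way
            continue
        turret.focus_on(seedling.meristem)
        for _ in range(max_pulses):
            turret.pulse()
            if camera.meristem_swelled(seedling):         # rapid partial frames
                break
        plot_map.record_zap(seedling)                     # re-check next pass
</pre><p>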
The video system should at least combine a wide-angle view with a telescopic view (needed to distinguish between weeds and desirable seedlings). Either or both might be binocular (stereo), for 3D capability, and the telescopic view in particular would benefit from the use of a sensor that could deliver partial frames very rapidly, to help assess the effectiveness of the laser pulses (how much does the meristem swell within the first tenth of a second?).
</p><p>
I call this a minimal-hardware approach because it involves little more than a pair of cameras, one wide-angle and the other telescopic (two pairs for stereo video at both focal lengths), and a laser, all on a mount with two rotational degrees of freedom, plus some means of moving that mount around a plot or field. The real complexity would be in the software that deciphers the video input, deciding which seedlings to zap and which to let live. A high-pressure water jet could be substituted for the laser, but such an arrangement would be more challenging mechanically, because the nozzle would need either to come within a few inches of the seedling or to use a significant amount of water to be effective. Too much water applied at high pressure might create other problems, for example by encouraging the growth of fungi.
</p><p>
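For the mount itself, aiming is just two rotations; here is a quick sketch of converting a target position (given in the mount's own coordinate frame, an assumption on my part) into pan and tilt angles:
</p><pre>
# Sketch: pan/tilt angles (radians) to aim at a point expressed in the
# mount's own coordinate frame, x/y horizontal and z up.
import math

def pan_tilt(dx: float, dy: float, dz: float) -> tuple[float, float]:
    pan = math.atan2(dy, dx)                   # rotate about the vertical axis
    tilt = math.atan2(dz, math.hypot(dx, dy))  # elevate above the horizon
    return pan, tilt
</pre><p>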
The knowledge necessary to distinguish between seedlings of various species would be an appropriate addition to the RoboEarth project.
</p><h4>
6 Feb 2011<br />
a compromise between rails and walking directly on the ground
</h4><p>
If the area to be covered by a farmbot is known, and limited, it might be tempting to outfit the land with rails and the machine with wheels to match, to keep the weight of the machine off the soil and improve its mobility, but in areas where production is constrained by low precipitation or short growing seasons this could prove uneconomic.
</p><p>
A possible compromise would be to use long, spider-like legs to step between the tops of posts standing a foot or two above the soil surface, or even just between low mounds of gravel. Providing this much infrastructure would not only prevent tracking and compression of the soil over most of the area, but would also help the machine locate itself in the field, since the posts or mounds would have known, static locations.
</p><p>
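Since the posts or mounds sit at known coordinates, a machine able to measure its distance to three or more of them can fix its own position by simple least squares. A sketch, leaving the ranging method itself open:
</p><pre>
# Sketch: 2D position from measured distances to posts at known spots,
# linearizing the range equations by subtracting the first one.
import numpy as np

def locate(posts: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """posts: (n, 2) known coordinates, n >= 3; dists: (n,) ranges."""
    p0, d0 = posts[0], dists[0]
    A = 2.0 * (posts[1:] - p0)
    b = (d0**2 - dists[1:]**2
         + np.sum(posts[1:]**2, axis=1) - np.sum(p0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Posts at (0,0), (10,0), (0,10) with ranges 5, sqrt(65), sqrt(45)
# place the machine at (3.0, 4.0).
</pre><p>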
While such machines might move more slowly than if they were equipped with wheels running on rails, the logistics of having several working the same field would be simpler, since they could just walk around each other.
</p><h4>
2 Feb 2011 (updated 2 Feb 2011 at 17:27)<br />
cascading distributed network
</h4><p>
Another such idea (taken through initial development as a thought experiment), in this case one that you'd have to be a chip hacker or microcode programmer to actually implement, first saw the light of day years ago, on The WELL, and then more recently in a topic in the Robots Podcast Forum (since closed).
</p><p>
This one is about very efficient addressing and message passing through a processor network having arbitrary topology, using only the minimum necessary number of bits for each step in a path, and automatically generating a return address, which can also serve to identify the source of the message.
</p><p>
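As a concrete illustration, here is a minimal sketch of the per-hop forwarding step, assuming for simplicity that every node uses the same fixed port-field width (the scheme itself would let each node consume only as many bits as its own port count requires); the representation and names are mine, purely hypothetical:
</p><pre>
# Minimal sketch of one forwarding step in a cascading network.
# Fixed-width port fields assumed; bits shown as a string for clarity.

def forward(message: str, arrival_port: int, port_bits: int = 4):
    """Strip the next-hop port from the head of the message and append
    the arrival port to the tail, so a return address accumulates as
    the message travels."""
    out_port = int(message[:port_bits], 2)               # where to send it next
    return_hop = format(arrival_port, f"0{port_bits}b")  # how to get back here
    return out_port, message[port_bits:] + return_hop
</pre><p>
Once the header has been entirely consumed, the message has arrived, and the accumulated tail, read field by field from the end, serves as a ready-made header for a reply, which also identifies the message's source.
</p><p>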
It's recently occurred to me that this idea might be particularly applicable to robotics, where machines might have a separate processor to control every major joint and sub-system, and need to pass messages directly between them without going through a central switch, to keep latency manageable.
</p><p>
Such a network could also accommodate situations where hardware needed to be hot-pluggable, added and removed as the situation required, since newly attached hardware would automatically acquire predictable addresses and, in the case of removal, remaining hardware would always have return addresses for use in sending "cannot deliver, that path is closed" messages.
</p><h4>
2 Feb 2011 (updated 2 Feb 2011 at 16:56)<br />
examples (and the limits) of design through imagination
</h4><p>
At the beginning of March 2009, two such ideas (designs or simulations running inside my head) had been taking up cerebral resources for some time, weeks or months. Since they weren't going to get any better in the absence of something more tangible, either a CAD model or a mockup, neither of which I had time for, I decided to offload them to one of my blogs, in the hope that someone else might benefit.
</p><p>
The first is essentially the miniature equivalent of inserting an air hose through the tread of a tire at a very shallow angle, nearly tangent, to create a dust barrier via the resulting airflow, with the idea of using it to keep dust off of camera lenses and the like.
</p><p>
The second had its origin in the knowledge that the closer you get to the pivot point of a lever, the more force is available. Applied to a robotic manipulator, this means that the outer tips of the 'fingers' should be more sensitive and delicate than segments closer to the 'wrist' (the point of attachment to the supporting arm). Conversely, it also means that those inner segments might be used where more force is needed, as in clipping through the stem of a woody shrub. Inconveniently, stems in need of clipping come at odd angles, so if a shear only operates in a single plane, that plane may need to be rotated as much as 90 degrees in moving from one clipping to the next. That might require repositioning the entire machine, which could slow down the operation considerably. Giving the manipulator a set of semi-rotatable digits that can pair in two different X-shaped configurations, 90 degrees opposed from each other, could provide as many as six shear planes without any rotation of the manipulator unit as a whole. This would allow a pruning robot to move from one clipping to the next with a simple repositioning of its digits.
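</p><p>
The underlying relationship is simple torque balance: for a given joint torque, the force available at a point along the digit scales inversely with that point's distance from the pivot. A quick sketch, with illustrative numbers:
</p><pre>
# Torque balance for a digit pivoting at the 'wrist': the same joint
# torque yields more force closer to the pivot. Numbers illustrative.

def available_force(joint_torque_nm: float, distance_m: float) -> float:
    return joint_torque_nm / distance_m

print(available_force(2.0, 0.02))  # 100.0 N close to the pivot
print(available_force(2.0, 0.10))  #  20.0 N out at the fingertip
</pre><p>
Hence the delicate work belongs at the tips and the shearing near the wrist.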
</p><h4>
30 Jan 2011 (updated 2 Feb 2011 at 16:08)<br />
Further Introduction
</h4><p>
Not mentioned in my intro is that I received a Bachelor's degree in biology in 1980. I'd hoped to return to school for a second degree in engineering, but that never happened, and I spent several very hard years essentially trying to punch my way out of a cognitive bag composed of academic categories, and the emotional baggage I attached to each.
</p><p>
The resolution I found came through the discovery of General Systems Theory, itself an academic category, but one that points to the general applicability of a collection of fundamental concepts. Thus armed, I approached learning with renewed confidence.
</p><p>
It wasn't long after this that I began to become obsessive about computer processors and software, always with an eye to how they might apply to robotics, since I was already interested in mechanizing and scaling up horticulture. Being possessed of a vivid imagination, at least with regard to machinery, I built many machines and set them running in my mind, frequently sharing descriptions of these designs with whoever would listen.
</p><p>
For me that was the missing ingredient: collaboration. With no one to share my enthusiasm, it was wet blankets wherever I turned. It's only recently that I've begun to feel like I might have found my tribe.
</p><p>
But I'm not a tinkerer; I'm out to change the world, by replacing big, dumb machines with smaller, smarter (wiser!) ones, beginning with agriculture.
</p><h4>
Original introduction:
</h4><p>
In 1976, I attended the Social Ecology Summer Program at Goddard College, Vermont. At the very end of that summer I saw my first personal computer, which, rightly or wrongly, I've long assumed was a pre-production Apple II, however unlikely that might seem. In any event, other experiences from that summer, combined with the realization that computing was about to become ubiquitous, formed in me the beginnings of a dream about using robotic machinery to transform agriculture (and land management in general) for the better.
</p><p>
This dream has persisted and grown more detailed and persuasive ever since, and, along with the increasing detail, I developed a general interest in the various technologies which together make up robotics. On The WELL, after years of scattered brainstorming and random proselytizing, I opened the Augmentation and Robotics Conference (augbot.ind). This conference has never been particularly active but it provided me with a venue where discussion of robotics was at least topical.
</p><p>
In the current, elaborated state of my dream, I now imagine intensive intercropping using soil-conserving no-till methods, combined with the protection of rare and endangered plant species and the provision of habitat for animals, all rolled together in a single system, which could also respond to weather forecasts and might even adjust itself for market conditions. Over the last few years I've shared most facets of this dream via my Cultibotics blog.
</p><p>
Another long-standing interest is automatic transportation systems, such as some of those described on the Innovative Transportation website.
</p><p>
I work as a transit dispatcher, using a GPS-generated display and voice communications to help keep a circulator bus route running smoothly.
</p>
John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0tag:blogger.com,1999:blog-32634142.post-24070715439704907652015-02-03T09:37:00.001-07:002015-02-03T09:37:56.891-07:00Crop-neutrality<p>
The US government currently favors production of certain crops, including corn (maize) and soybeans. A proposal authored by Tamar Haspel and published yesterday in The Washington Post (<a href="http://www.washingtonpost.com/lifestyle/food/unearthed-a-rallying-cry-for-a-crop-program-that-could-change-everything/2015/02/01/ea7988b2-a741-11e4-a06b-9df2002b86a0_story.html">Unearthed: A rallying cry for a crop program that could change everything</a>) would change that by shifting subsidies from support for particular crops to crop-neutral support.
</p><p>
While this isn't specifically about robotics, it would have the effect of making more money available for equipment to produce crops other than the handful that have traditionally been subsidized. Increasingly over time, that will mean robotic equipment, as the value added by sensors, processing, and flexible behavior becomes too compelling to forgo.
</p>John Paynehttp://www.blogger.com/profile/15673225286918013251noreply@blogger.com0