Saturday, February 09, 2019

Manual Memory Management in Swift

While I was working my way through a blog post by Jeremy Howard, a missing piece in the puzzle of working with audio data in Swift bubbled up to the surface of my mind: Swift's handling of mutable value types using copy-on-write might result in multiple memory allocations while writing values into a buffer, which is exactly what you don't want in a real-time context.

This realization led me to open The Swift Programming Language in the Books app and search for the word "buffer", which turned up the section Manual Memory Management, clearly the right starting point for investigating how to handle buffers when copy-on-write is not what you want.
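To make the concern concrete, here's a minimal sketch (not from the post; the variable names and sizes are my own) of how copy-on-write interacts with buffer writes:

```swift
// Copy-on-write: assignment shares storage; the first mutation
// through either variable triggers a full copy of the array.
var samples = [Float](repeating: 0, count: 1024)
let snapshot = samples        // no allocation here; storage is shared

// Writing through an unsafe mutable buffer pointer performs the
// uniqueness check (and any resulting copy) once, up front, rather
// than potentially paying for it on every element write.
samples.withUnsafeMutableBufferPointer { buffer in
    for i in buffer.indices {
        buffer[i] = Float(i) / Float(buffer.count)
    }
}
```

Whether any copy happens at all depends on whether the array's storage is uniquely referenced when the closure runs; the Manual Memory Management material covers the lower-level options.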

Saturday, February 02, 2019

My first "Real" Programming Project, and also Real-time Coding

Back in the mid-late 80's and early-mid 90's, I went through a series of hastily foreshortened computing experiences. I was a bit late to the personal computing scene, so there was an element of catch-up involved, but, from the outset, I was really more interested in development than in personal use.

My first exposure to computing was a class in Fortran, spring semester 1972, work for which was done on a card punch machine and programs run by taking card decks to a counter that separated the computer operators from the rest of us, loading the deck into a card reader, then returning later to pick up the printout. In addition to Fortran, there was a bit of JCL at the beginning (and end?) of each deck. I also had a single experience with using a printing terminal (no screen), having figured it out enough to get started from the documentation on a nearby table.

I did well in Fortran, but bombed out of PL/I (Programming Language One) the following semester. This was probably more due to extracurricular complications than anything to do with that language, but that, combined with my initial failure to comprehend calculus, did put any notion I had of going into computer science on hold for more than a decade. (At that time, computer science was widely viewed as a branch of mathematics, and calculus was required for admission into the degree program.)

I dropped out after my second year in order to pursue an entirely different, non-academic interest, which involved moving to a different city in a different state. Nevertheless, what little I'd learned informed how I then saw the world, and I was beginning to view everything largely in terms of information.

I caught my first sight of a personal computer of some sort in late-summer 1976, just after the end of a summer program I'd attended at a small college in Vermont. I didn't get close enough to it to read any labels, and it's possible that it was merely a desk-sized word processor, but that thought didn't occur to me until much later. I thought I was looking at a stand-alone computer dedicated to use by a single person, and that idea began worming its way into my consciousness and prepared me to understand the significance of Moore's Law, when I finally encountered it as such, and (re)awakened an interest in microprocessors and integrated circuitry in general, and in the programmable code behind digital computation. Clearly there was something there that would only become increasingly important, and would sooner or later enable things that were all but inconceivable at that time.

Even so, when I did go back to school full-time, in 1978, it was in biology, with the half-hearted intention of switching into engineering after completing my bachelor's degree. I've never really regretted my foray into biology, because without it I might never have found my way to (the layman's version of) systems theory, from which I gleaned a few key concepts, including that of an open system, emergence, and the elusive idea of a strange attractor.

All of this had nothing at all to do with what I was doing for a living at the time. So, shifting gears...

The first computer I actually owned was an Atari 600XL (great keyboard!), which came with just 16 kilobytes of RAM, although I also purchased the external module that expanded it to 64 kilobytes, along with a 300 baud modem, which I used to connect to a mainframe at the local university so I could work online and file programs for a course in Pascal.

Next I moved up to the Atari 1040 ST and bought the developer's kit (Mark Williams C), but made the mistake of getting the lower-resolution color screen instead of the higher-resolution grayscale screen, so coding was difficult, and I was unfamiliar both with C and with operating system APIs in general, and GEM/TOS in particular (DRI's Graphics Environment Manager and the Tramiel Operating System, named after Jack Tramiel, who'd purchased Atari). I did learn a bit, but it was overwhelming, and I ended up selling that machine, dev kit included, and switching to MS-DOS running on an Epson computer for my next go-round.

As I recall, I used that machine for both a class in data structures, using Pascal, and an accelerated course in C, but that whole period is a blur for me, so I might have the details wrong. I do remember owning a copy of Turbo C for it.

Next came an Amiga 500, which made the Epson superfluous, followed by an Amiga 3000, which made the 500 superfluous. I got the 3000 because I had this idea for a program I wanted to write. There was also a dongle involved, the name of which I don't remember, that converted the Amiga's video output to a standard SD video signal, recordable on VHS. In any case, as with my early attempts at programming the Atari ST, this project involved programming in C using operating system APIs, but this time I had a clue about C and the available documentation was much better!

This project involved first creating a map from code, then scrolling it around (similar to how UIScrollView works in iOS) while the mouse cursor, positioned in the middle of the screen, traced out a path. To avoid artifacts, and as a means of establishing pacing, the map scrolling had to be done during the vertical blank, the time when the electron beam in the monitor moved back from the bottom to the top. I don't clearly recall how this worked, but it probably involved creating a callback function and passing a pointer to that to the OS, so it would be called when the vertical blank happened. In any case, this was my first exposure to time-constrained (real-time) processing.

I might have a hard time proving this, since, if I do still have the code for it squirreled away somewhere, it's probably on an Amiga-formatted floppy disk, but that program did work, and I was able to make a video recording. I was not, however, successful in getting anyone else interested in the project, so I also lost interest. At the same time, my interest in Amiga as a platform was ebbing. I'd recognized the limitations (in the absence of scale and adequate investment) of their custom-hardware approach, and was ready to jump ship.

I nearly forgot one chapter of this story, which was that I replaced the Amiga with one of the original Pentium machines, a Packard Bell Legend 100, as I recall, which came with Windows For Workgroups and a ridiculous graphical wrapper. It had an optical drive as well as a hard disk and a couple of floppy drives. This was a machine I could reasonably have used for development, but my heart wasn't in it, and this was at the time when the web was taking off. I picked up a bit of HTML, but otherwise mainly used that machine as a fancy terminal emulator.

I'd been casually following Steve Jobs's NeXT since the beginning, and around the time of Apple's 'acquisition' of NeXT I donated the Pentium, then, soon after that, switched up my whole situation, which was disruptive, so it wasn't until the iMac came along that I again owned a computer, and even longer before I got into Mac programming. But that's another story, for another time.

Saturday, January 12, 2019

The Three Phases of Real-time Code

Think of this as a novice's understanding, if you like. I make no claim to being any sort of expert, certainly not an expert in real-time coding, but I do have a tiny bit of experience, and have used this pattern, even if not quite intentionally.

First there's what you can do before the real-time code runs, to smooth the way for it and minimize the amount of work that has to be done by real-time code. In my own projects, this has mainly meant creating a precomputed table of sine values, enabling the calculation of sound samples based on sines via simple table lookups, but anything you can do ahead of time that results in fewer CPU cycles of real-time work is helpful.
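In Swift, the precomputed-table idea might look something like this (a sketch; the table size and the nearest-lower lookup are my own choices, not anything from a particular project):

```swift
import Foundation

// Phase 1: done once, before any real-time code runs.
let tableSize = 4096
let sineTable: [Float] = (0..<tableSize).map {
    Float(sin(2.0 * Double.pi * Double($0) / Double(tableSize)))
}

// Phase 2: in the real-time code, a sample for a phase in 0..<1
// becomes a table lookup instead of a call to sin().
@inline(__always)
func sineSample(phase: Float) -> Float {
    let index = Int(phase * Float(tableSize)) % tableSize
    return sineTable[index]
}
```

A larger table, or linear interpolation between adjacent entries, trades a little more setup work for better accuracy.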

The real-time code itself should be as simple as possible, avoiding anything that can be done beforehand or left for later, and altogether avoiding dynamic method calls (calls to code that cannot be inlined because the specific version of the method to be used cannot be determined until runtime). In Swift, if you need to call a class method, make that class a base class with no subclasses, apply the "final" keyword to either the class or the method so the call can be statically dispatched, and, if possible, within that method avoid accessing anything other than parameters and stored properties.
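Here's what that advice might look like in practice (a hypothetical class, purely a sketch): marking the class final rules out subclassing, so the compiler knows exactly which implementation will run and can dispatch it statically, and potentially inline it.

```swift
// `final` rules out subclassing, so calls to nextPhase() can be
// statically dispatched rather than looked up at runtime.
final class Oscillator {
    private(set) var phase: Float = 0
    let increment: Float

    init(increment: Float) {
        self.increment = increment
    }

    // Touches nothing but stored properties.
    func nextPhase() -> Float {
        phase += increment
        if phase >= 1 { phase -= 1 }
        return phase
    }
}
```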

Avoid algorithms higher than O(n), or at the very worst O(n log n). Eliminate loops within loops, if you possibly can. Also avoid heap allocation and deallocation; whatever needs to be in the heap should be set up beforehand and left in memory until later. If need be, you can set a flag to indicate when an object in memory is no longer needed.
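A sketch of the preallocate-and-flag idea (all names invented here): the buffers are allocated before the real-time code runs, and "freeing" one in the hot path is nothing more than flipping a flag that later cleanup code can act on.

```swift
// Everything is allocated up front; claim() and release() involve
// no heap traffic at all.
final class BufferPool {
    private var buffers: [[Float]]
    private var inUse: [Bool]

    init(count: Int, size: Int) {
        buffers = Array(repeating: [Float](repeating: 0, count: size),
                        count: count)
        inUse = Array(repeating: false, count: count)
    }

    // Real-time side: hand out the index of a free, preallocated buffer.
    func claim() -> Int? {
        for i in inUse.indices where !inUse[i] {
            inUse[i] = true
            return i
        }
        return nil
    }

    // Real-time side: just set the flag; real cleanup happens later.
    func release(_ index: Int) {
        inUse[index] = false
    }
}
```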

Don't run real-time code on the main thread. If using a framework that includes a real-time context, take advantage of that by making use of the appropriate thread it provides, as by putting your real-time code into a callback. If you're rolling your own real-time thread, you're way deeper into this than I am already!

Finally, any cleanup that doesn't have to be done in the real-time code shouldn't be; leave it for later, passing along just enough information from the real-time code to enable the cleanup code to do its job. You can pass information out of the real-time context to your other code by modifying values stored in variables defined in the scope enclosing the definition of the callback. Grouping these into a mutable struct, an instance of a final base class, or mutable static values seems like a good idea.
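For instance (a sketch, with invented names), a callback can mutate a value captured from its enclosing scope, and the cleanup code reads it later, off the real-time thread:

```swift
struct RenderReport {
    var framesRendered = 0
    var underruns = 0
}

var report = RenderReport()

// The closure captures `report` from the enclosing scope, so these
// mutations are visible outside the callback.
let renderCallback: (Int, Bool) -> Void = { frames, underrun in
    report.framesRendered += frames
    if underrun { report.underruns += 1 }
}

renderCallback(512, false)
renderCallback(512, true)
// Later, the cleanup code inspects `report` at its leisure.
```

In a real multithreaded setting you'd also need to think about how the two threads synchronize access to this shared state, which I'm glossing over here.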

And then there's testing. Think about what the worst-case scenario might be, and test for that. If your real-time code reliably returns within the time allowed under those conditions, it's time to test it on the slowest device that might be called upon to run it, and if that works you're golden!

Sunday, January 06, 2019

Swift: Origins and My Personal History with it

Once upon a time there was a graduate student named Chris Lattner, who had a bright idea. That bright idea turned into LLVM (originally an acronym for Low Level Virtual Machine), essentially compiler technology, the popularity of which has been on a tear ever since.

Lattner went to work for Apple, but continued to be heavily involved in the development of LLVM, notably including in the extension of Clang (LLVM's C language front-end) to support C++. (In the LLVM world, a front-end turns what we commonly think of as computer code into LLVM-IR, LLVM Intermediate Representation, from which it can then be further transformed by a back-end into machine code for some specific platform, after a bit of polishing while still in LLVM-IR.)

As I understand it, in the wake of that effort, Lattner thought there had to be a better way, and set out to create what has become known as Swift. As you might guess from its name, one of the design goals for Swift was that it should run quickly, especially as compared with scripting languages like JavaScript that are interpreted or just-in-time compiled as they are executed, rather than being compiled ahead of time.

Another primary design goal was that it should be safe, immune to many of the categories of defects that find their way into computer software. Yet another was that it should be modern, bringing together features from an assortment of other programming languages, features that make code easier to write and comprehend.

Something that may never have been a goal so much as an underlying assumption was that it should leverage LLVM. Of course it should; who would even think to question that, myself included! Swift's most tangible existence is as a front-end compiler for LLVM, implementing and evolving in lockstep with an evolving language specification. (That front-end compiler determines what will and will not be allowed to progress to the LLVM-IR stage.)

But to get back to the story of its origin, Lattner worked alone on this for awhile, then showed it to a few others within Apple, where it at first became something of a skunkworks project, then a more substantial project involving people from outside the company, but still keeping a low profile. In fact it kept such a low profile that when it was publicly introduced at WWDC 2014 nearly everyone was taken completely off-guard.

That public introduction brings to mind another design goal, or maybe design constraint, which was that Swift had to be interoperable with Apple's existing APIs (Application Programming Interfaces), otherwise it would have had a much more difficult time gaining traction. Being interoperable with the APIs meant being interoperable with Objective-C, the language Apple began using when it acquired NeXT in 1997. I can't speak to how Swift might have turned out differently if that were not the case, but I'm relatively confident this requirement served to accelerate its development by dictating certain decisions, obviating the need for extended discussion. (Swift also inherited some of the features of Objective-C, notably including automatic reference counting and the use of both internal and external parameter labels in functions, initializers, and methods, which contribute to its readability.)

So it's June, 2014, and Swift has just been announced to the world. Despite the vast majority of the existing code base being in Objective-C, C, or C++, both at Apple and at other companies providing software for Apple's platforms, the writing was plainly on the wall that Swift would eventually largely if not entirely displace Objective-C, just not right away. Since I didn't personally have a large Objective-C code base, and what I did have I'd basically neglected for over three years, I saw nothing to hold me back from diving into Swift, well nothing other than having very limited time to give to it.

However, as I got further into it, I discovered some details that muted my enthusiasm. Most importantly for my purposes was Swift's initial unsuitability for hard real-time use, like synthesizing complex sound on the fly (my primary use-case). It still isn't really suitable for such use, but it is getting closer.

I also had quibbling issues, including the initial lack of a Set type (collections of unique elements), even though much of the functionality of sets had already been developed as part of the Dictionary type, and then, when a Set type was introduced, it felt like a stepchild, with an initializer based on Array syntax (ordered collections which may have duplicate elements). I remember thinking that if Swift had started out with a Set type with its own syntax, it would have made far more sense for Dictionary syntax to be based on that rather than on arrays, since a Dictionary is essentially a Set with associated values, all of the same type. (There are only so many options for symbolic enclosure on most keyboards — parentheses, curly braces, square brackets, and angle brackets — and these were already fully subscribed — by expressions, blocks, arrays, and type specifications, respectively. Other symbols and combinations of symbols are available but would not be as straightforward to utilize, and, in any case, it's a bit late in the game to be making anything other than additive changes, alternative syntax that does not replace what already exists.)
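The array-flavored syntax in question looks like this (a trivial sketch): only the type annotation distinguishes a Set literal from an Array literal, and the duplicates simply collapse.

```swift
// Same square-bracket literal; only the declared type differs.
let asArray: [Int] = [2, 3, 5, 7, 7, 5]   // keeps order and duplicates
let asSet: Set<Int> = [2, 3, 5, 7, 7, 5]  // unordered, duplicates collapse
```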

Another quibble revolved around numeric values not being directly usable as Booleans (true/false). In C, in a Boolean context, zero evaluates to false, and any other value evaluates to true. This can be very convenient and is one of my favorite features of that language. Yes, in Swift one can always use a construction like numericValue != 0, but when you're accustomed to just placing a numeric value in a Boolean context and having it evaluated as a Boolean, that feels slightly awkward. I get that using numerics as Booleans invites the use of complicated Boolean constructions, which can make code more difficult to read. There have been many times when I've had to parse what I had myself strung out onto a single line, by breaking it out over several lines, to be able to understand it and gauge its correctness. Even so, it initially annoyed me that I would have to give this up in Swift. (I've long since gotten over this and now prefer the absence of implicit type conversions. In Swift, with regard to types, what you see is what you get, and that's a good thing!)
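Concretely (a trivial sketch): the C-style implicit test doesn't compile in Swift, so the comparison has to be spelled out.

```swift
let count = 3

// In C: if (count) { ... }  -- any nonzero value counts as true.
// In Swift that's a compile error; the test must produce a Bool:
let hasItems = count != 0
```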

But there were also aspects of Swift I liked right away! Optionals, for example, just made so much sense I was amazed they hadn't already been incorporated into every language in existence, and enums with the choice of raw or associated values were clearly a huge improvement over how enumerations work in C and Objective-C. Also, once it finally settled down, function/method declaration and call-site syntax hit the sweet spot for me. Likewise access control — with the exception of the descriptive but nevertheless slightly awkward 'fileprivate' key word, but no more than I'm likely to ever have need for that I can live with it and certainly have no intention of attempting to reopen that can of worms at this late date!
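Two of those features side by side (a minimal, made-up illustration): an enum whose cases carry associated values, and an Optional unwrapped with `if let`.

```swift
// An enum case can carry associated values of differing types.
enum Reading {
    case raw(Int)
    case labeled(String, Double)
}

let reading: Reading = .labeled("temp", 21.5)

var summary = ""
switch reading {
case .raw(let value):
    summary = "raw \(value)"
case .labeled(let name, let value):
    summary = "\(name): \(value)"
}

// An Optional either holds a value or is nil; `if let` unwraps it.
var maybeUnits: String? = nil
maybeUnits = "celsius"
if let units = maybeUnits {
    summary += " \(units)"
}
```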

Even though it initially seemed overdone, I like the emphasis on value types (value semantics). I'm comfortable with a little bit of indirection, but begin to get nervous when dealing with pointers to pointers, and get positively fidgety when dealing with pointer arithmetic. (You know you want to keep those stack frames small, and pointers to objects allocated on the heap can be the most direct means to that end, but it can get out of hand.) Happily, Swift not only doesn't take away the reference option, it makes using it easier and safer, while also preventing value types from resulting in overly large stack frames and excessive copying.

I also like the notions of protocols and generics, but in a fuzzier way, since I really don't completely comprehend either, and there are particular fundamental protocols, like Sequence, the existence of which I am vaguely aware of but not much more than that. I suppose these are the sorts of things I'll be delving into here going forward.

But to climb back up to my preferred edge-of-space point of view, I've had this sneaking suspicion all along that Swift is composed from primitives of some sort, although you might have to be a compiler adept to really understand what they are and how they fit together. To use the example of sets, dictionaries, and arrays, from above, the essential characteristic of a set is that its elements are unique; no two are identical. The Set type, as implemented, is unordered, but you might also have OrderedSet, and it is in fact quite possible to create such a type; indeed, Apple's Objective-C APIs already include both NSOrderedSet and NSMutableOrderedSet. Likewise you might want an unordered collection, the elements of which are not necessarily unique. Uniqueness and ordering are independent attributes of collections.

Besides Sequence, among the protocols I'm vaguely aware of, are such things as Collection, MutableCollection, Numeric, and so forth. While these are straining in this direction, they really aren't the primitives that exist (only?) in my imagination, which at this point I would expect to be written in C++ rather than Swift, or only expressible in LLVM-IR.

I'm more curious about this than is good for me, considering how unprepared I am to understand things at this level, but there's no point in fighting it. Show me bread and I see the ingredients and the processes applied to them to make the finished product. Show me a car and I see parts and the assembly process. It's in my nature, and so it's also likely to show up here, to the extent I make any headway in wrapping my head around such esoterica.

(17Feb2019) I just realized that there's a general principle to be extracted from this, which is that there's no point in fighting a tendency to dream about what might be. Instead, it's better to choose dreams that are both achievable and worth the effort, and build bridges to them from current reality.

Update: As if to add credibility to my hunch about Swift being built up from abstract and/or compiler-level primitives, on January 10th Jeremy Howard published High Performance Numeric Programming with Swift: Explorations and Reflections, in which he states "Chris [Lattner] described the language to me as “syntax sugar for LLVM”, since it maps so closely to many of the ideas in that compiler framework."

Friday, January 04, 2019

Filling in the Gaps with Swift

They say the best way to learn is to teach.

Well, despite having dabbled in Swift for 4.5 years, I don't think I'm quite ready to teach it. Nevertheless, I don't suppose it would do anyone any harm to watch over my shoulder as I attempt to wrap my own head around it.

I doubt that I'll entirely devote this blog to that purpose, but you can expect to see such posts begin to appear, directly.

For now, here's a link to an article by the amazingly prolific Paul Hudson, Hacking with Swift: Glossary of Swift Common Terms. This article contains a few breezy, imprecise definitions, but they're mostly of a nature that won't be relevant as you're just setting out to learn the language. Just be aware that there'll be more detail to learn as you advance.

Sunday, October 28, 2018

Other blogs

In my last post here, I mentioned three other blogs, but I failed to link to them. Here are those links...

  • Harmonic Lattice, renamed from Harmonic Ratio, is about a project to make musical scales based on pure intervals easier to use (loosely based on Just Intonation).
  • Regenerative AgRobotics, renamed from Cultibotics, is about the application of robotics to enabling the scalability of perennial polyculture.
  • Aging Gracefully Through Gentle Martial Practice is about what my fascination with the martial arts has evolved into and the insights I've experienced along the way.

Outside of these long-term obsessions, I haven't had much to say lately. No idea whether that will change.

Sunday, February 11, 2018

Repurposing this blog

If you don't count my well.com homepage, this blog is my first public online endeavor still in existence, predating my original Twitter account by a couple of years.

Nevertheless, it has fallen into neglect, owing in no small part to having lost my taste for the brashness with which much of it is written. Perhaps I've gotten over myself.

I'd been toying with the idea of closing it, or weeding out the more egregious posts (a bigger project than I really wanted to take on), but rather than either of those I think I'll simply repurpose it.

Henceforth you can expect less in the way of pompous broad strokes here, and, if anything, a bit more attention to detail, technical and otherwise.

I'm not setting out to be boring, but there's a ready supply of brashness to be found elsewhere, and I really don't feel as though I need to be contributing to it.

I have three other blogs for specific interests. As before, this one remains a catchall for whatever doesn't neatly fit one of those, so maybe it isn't so much a repurposing as a retuning.

Saturday, March 11, 2017

Trump, the Cyber-coding language

There ought to be a programming ("cyber"-coding) language that reflects Donald Trump's handling of information, both for the fun of creating and wielding it, and for the assistance it could provide in making his circumlocutions explicit.

Obviously, any such effort should be crowd-sourced, complete with a GitHub project. Unfortunately, I am neither a good enough programmer nor familiar enough with the ins and outs of open source software to contribute much of value to any such effort, but I do have a few suggestions.

Booleans should have four states: true, false (false but intended to be believed), crossed-fingers (truth optional, but not really intended to be believed), and indeterminate (something akin to Schrödinger's cat).

Scalar values should have only two states: too small to care about and too big to measure (expressible using the sign bit).

Assertions should exist but have no effect when they fall flat.

The switch statement should be recast to perform an operation analogous to a bait-and-switch, perhaps simply ignoring the cases (there only for show) and always performing the default.
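In the spirit of the exercise, a Swift sketch of the four-state Boolean and the bait-and-switch statement (all of it invented here, of course):

```swift
enum TrumpBool {
    case believeMe          // true
    case fakeNews           // false, but intended to be believed
    case crossedFingers     // truth optional, not really meant to be believed
    case indeterminate      // something akin to Schrödinger's cat
}

// A switch whose cases are only for show: everything falls through
// to the default, regardless of the value passed in.
func baitAndSwitch(_ value: TrumpBool) -> String {
    switch value {
    default:
        return "the default, as always"
    }
}
```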

And, of course, it should be named "Trump", and the standard library or runtime system should be named "Bannon".

I doubt that attempting to turn this into a working language, one that actually produces compilable code, would be worth the effort, but in the role of prototyping pseudocode it might actually prove to be useful.

Wednesday, October 26, 2016

Earth covered, Terraced, Molded Dome Structures

If you spin a vessel containing a liquid around the vertical axis, the lower/outer surface of the liquid will mold to the inner surface of the container, while the upper/inner surface of the liquid will form a parabolic cavity. Use a liquid that hardens to a solid, and this is a simple way to create a single-piece dome.
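For reference (my addition, not part of the original description), the free surface of a liquid rotating at constant angular velocity ω settles into a paraboloid:

```latex
z(r) = z_0 + \frac{\omega^2 r^2}{2g}
```

where r is the distance from the axis of rotation, g is gravitational acceleration, and z_0 is the height of the surface at the center, which is why spin-casting yields a parabolic cavity.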

One advantage of a single-piece structure is that it can be very leak-resistant, and domes can be quite strong. The combination of these two characteristics makes molded domes ideal starting points for earth covered buildings, but to keep the earth from sliding off the dome, it's necessary to berm the sides thickly, so the surface of the earth covering slopes more gently than the dome itself.

However, if terrace forming indentations are built into the mold, the resulting dome will be better at supporting its earth covering, and there will be less need for wide berming.

Unless drainage is built into the mold, or drilled into the dome after molding, heavy precipitation will result in overflow, with excess water from higher terraces flowing onto the soil retained by lower terraces, so it would make sense to use a sandier soil mix in the lower terraces, and plants that thrive in such an environment.

The mold can include a protrusion in the bottom to create a hole in the top of the dome for a skylight. Similarly, holes for windows and doors (with reinforced edges and overhangs) may also be designed into the mold, and hardware for mounting doors and windows fitted into the mold before molding.

Once in use, a growing mass of plant roots will help keep the earth covering in place.

Saturday, September 03, 2016

Tipping Point or Bottleneck

I love Malcolm Gladwell, as much as I love any man I've never met in person and to whom I am not closely related, but I wonder about the central metaphor of his book The Tipping Point (published in 2000), although I do think the implication of leaving behind the possibility of going back to the way things were before is altogether accurate.

What for me seems to be missing from this metaphor is the limited capacity of any culture to process change. You might think of it as being analogous to inertia or friction, but I think it might better be characterized in terms of density and pressure.

It's as though we are being forced, by the pressure of innumerable events, into a conical channel with what at present remains a tiny opening at the pointy end, like the nozzle of an acetylene torch, being accelerated into an unpredictable future beyond anyone's control. The effect is rather like an extreme roller coaster, both exciting and terrifying.

Perhaps we should be reaching back 30 years further to the publication of Alvin Toffler's Future Shock to find the other side of the Tipping Point coin, and the explanation for why so many people are so ready to support such regressive public policies.

Afterthought: Perhaps an even more apt metaphor is quantum tunneling, in this case between paradigms. Any individual has some probability of finding themselves in an alternative paradigm at any moment, and should they find a place there they may make the transition to that new paradigm permanent.