Saturday, November 06, 2021

Peer-to-peer networking, etc, etc, ...

Among my collection of lesser obsessions (actually, at this point, a stronger obsession than my once-compelling interest in the martial arts) is a way of communicating among anything from code fragments to processor cores to entirely separate devices, over what is essentially a mesh network, although it may have hierarchical attributes layered over the peer-to-peer relationships.

This tweet, and the thread in which it is embedded, should serve as a decent entry point into this constellation of ideas/interests, which reaches beyond such networks to include on-the-fly programmable gate arrays, radial bit-slice processors, plug-and-play robotics, and, potentially, a way of thinking about how memes and messages move through social networks.

A term of my own invention (so far as I'm aware) you'll see mentioned is "cascading networks", which refers to an addressing scheme for navigating a network composed of nodes having ports, each dedicated to a single, typically bi-directional connection, either to another node or to code/hardware hosted by a node, including but not limited to the code/hardware of the node itself (its own kernel). The nutshell version is that, when forwarding a message, however many bits are needed to specify the receiving port are stripped away from the header, with the return address appended at the end of the message. Under ideal conditions, this scheme should enable message forwarding with extremely low latency. Making such a network secure and robust would no doubt entail considerably more complication.
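To make that concrete, here's a toy sketch in Swift of the forwarding rule. Every name in it is my own invention, it simplifies the header to one whole byte per hop rather than the minimal number of bits, and it assumes each hop appends the inbound port so a reply can retrace the path in reverse.

```swift
// A toy model of the "cascading network" forwarding rule described above.
// All names here are hypothetical; this illustrates the idea, it is not a real protocol.
struct Message {
    var route: [UInt8]        // remaining outbound port selectors, one per hop
    var returnRoute: [UInt8]  // return address, accumulated one hop at a time
    var payload: [UInt8]
}

struct Node {
    let portCount: Int

    /// Strip the leading port selector to choose the outbound port, and append
    /// the port the message arrived on, so a reply can retrace the path.
    func forward(_ message: Message, arrivedOnPort inPort: UInt8) -> (outPort: UInt8, message: Message)? {
        var message = message
        guard let outPort = message.route.first, Int(outPort) < portCount else {
            return nil  // no route left (we're the destination) or an invalid selector
        }
        message.route.removeFirst()
        message.returnRoute.append(inPort)
        return (outPort, message)
    }
}
```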

Monday, October 25, 2021

It seems I have more strong interests (not quite obsessions) than are represented by my other three blogs.

This seems like as good a place as any to pursue those.

Wednesday, December 18, 2019

UI latency, and how to avoid it...

What I left unsaid in my last post, in September, was that much of what initially drove my interest in SwiftUI is the hope that it will enable lower latency interactions, by avoiding things like the responder chain and delays that are built in for the purpose of distinguishing among gestures (whether a touch is actually the beginning of a swipe, for example).

The responder chain is about which view should respond to a click (in macOS) or a touch (in iOS and iPadOS). Hit-testing first walks down from the screen/window's root view to the most specific view under the event; if that view doesn't handle it, the event travels back up the chain of responders until one possessing an appropriate gesture recognizer or handler is found.

The main source of built-in delay I know about relates to scroll views, and whether a touch is intended to scroll the overall content view or to interact with a child view contained within it.
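For the scroll-view case specifically, UIKit does expose a couple of knobs on UIScrollView that trade gesture disambiguation for responsiveness; whether they would have bought me the latency I was after is another question. A minimal sketch:

```swift
import UIKit

let scrollView = UIScrollView()

// Deliver touches to child views immediately instead of waiting to see
// whether the touch is really the start of a scroll.
scrollView.delaysContentTouches = false

// Still allow a drag to cancel the touch and take over as a scroll.
scrollView.canCancelContentTouches = true
```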

I'm now thinking that avoiding these sources of latency was probably a vain hope, since, as I understand it, (for now) SwiftUI translates the declarative code it enables into native UI entities for the platform(s) for which it is compiled. There may be some performance advantages, and there are likely to be more in the future, but there are some hard constraints. No matter the nature of the code, it will still be necessary to determine which view should respond and to what type of gesture.

Happily, as a source of motivation, this hope has been replaced by some interest in the framework for its own sake, and in the other language features that enable it, although you might not know it from the paltry progress I've managed to make so far.

As for how to avoid latency, the most reliable answer seems to be the same as ever: keep your UI structure as flat as is reasonable, rather than going hog-wild with layering views within views within views. Some such layering is inevitable; just be moderate with it, and don't sweat the unavoidable milliseconds.

Tuesday, September 03, 2019

Dog-paddling behind the bow wave

Those who make it their business to be on the leading edge of the rapid evolution of Swift are deep into figuring out what the latest developments are all about, and have begun sharing the fruits of their study and experimentation with the wider community, myself included.

What I've gleaned from the sliver of this growing abundance I've actually managed to take in is more about general impressions than about, for example, precise syntax. This is typical for me, and I believe it pegs me as a 'field-dependent' learner. (I tend to build a cognitive framework first and fill in the detail later, rather than working from detail to generalities.)

Immediately following WWDC, most of the attention went to SwiftUI, an approach to user interface design and construction that (at least within the Apple ecosystem) is altogether new, coming with an interactive Xcode component that does for UI much the same thing playgrounds have done for Swift code since Swift was first introduced, and enabling a degree of code-sharing among platforms previously only dreamt about.

As interesting as this is, the technologies making it possible are arguably even more so.

Combine continues to mystify me somewhat, but I get that it's about data synchronization (keeping your UI and your data store in continuous agreement, for example), although the utility of Combine goes far beyond UI synchronization.
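Here's a minimal sketch of the kind of agreement I mean, using Combine's @Published and sink; the model type and values are made up for illustration.

```swift
import Combine

// A hypothetical model whose changes are published automatically.
final class VolumeModel {
    @Published var volume: Double = 0.5
}

let model = VolumeModel()

// Anything (a UI element, a logger, an audio engine) can subscribe and
// stay in agreement with the model as it changes.
let subscription = model.$volume
    .sink { newValue in
        print("volume is now \(newValue)")
    }

model.volume = 0.8   // prints 0.5 on subscription, then 0.8 on assignment
```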

Property Wrappers allow commonalities among the behaviors of various properties to be expressed once, and applied in any specific case as a single line of code.
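For example, here's a toy wrapper of my own that clamps whatever is assigned into a fixed range; nothing about it comes from the standard library, it just shows the mechanism.

```swift
// A property wrapper that clamps assigned values into a closed range.
@propertyWrapper
struct Clamped<Value: Comparable> {
    private var value: Value
    private let range: ClosedRange<Value>

    init(wrappedValue: Value, _ range: ClosedRange<Value>) {
        self.range = range
        self.value = min(max(wrappedValue, range.lowerBound), range.upperBound)
    }

    var wrappedValue: Value {
        get { value }
        set { value = min(max(newValue, range.lowerBound), range.upperBound) }
    }
}

struct Mixer {
    // The clamping behavior was written once, above, and is applied here
    // with a single annotation.
    @Clamped(0.0...1.0) var gain: Double = 1.0
}
```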

Function Builders I've barely even got a sense for, but they seem to be about reducing repetitive code by letting the body of a closure simply list any number of values, of possibly varying types, which the builder then gathers into a single composed result. (Don't try spending this, it might prove to be counterfeit!)
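For whatever it's worth, here's roughly the shape of the thing, written with the modern @resultBuilder spelling (at the time of this post the feature was the underscored @_functionBuilder, and was still called a function builder); the builder itself is just a toy of mine.

```swift
// A toy builder that collects the bare values listed in a closure into an array.
@resultBuilder
struct LinesBuilder {
    static func buildBlock(_ parts: String...) -> [String] {
        Array(parts)
    }
}

func lines(@LinesBuilder _ content: () -> [String]) -> [String] {
    content()
}

// The closure body is a plain list of values, with no commas or brackets,
// which is essentially what SwiftUI's ViewBuilder does with views.
let checklist = lines {
    "install the Xcode beta"
    "build the audio generator"
    "then SwiftUI"
}
```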

While I have installed the betas of Catalina and Xcode 11, what coding I've had time for has mainly been in C (more precisely C code in .m files devoid of custom Obj-C classes), since I'm working toward building a new audio generator in C, which will live within an app otherwise written in Swift. I fully expect to get into SwiftUI and all that goes into it down the line, but I want to get that audio generator working first.

Wednesday, June 19, 2019

In the wake of WWDC

WWDC has come and gone, but its wake is still rolling across the Swift language and Apple platform developer communities (lots of overlap there, but they're not identical).

Swift got a major facelift, in the form of SwiftUI, an Apple framework that shines a spotlight on recent changes to the language, property wrappers and opaque return types in particular.

Unfortunately, as I understand it, to dive into SwiftUI I'd need to install the beta version of macOS 10.15, which I'm not yet ready to do. Maybe I'll take the plunge when the public beta is released, but even that isn't guaranteed.

Other language features only require the beta of Xcode 11, which I probably will install when beta 3 becomes available. (I didn't wait for beta 3.)

In any case, putting together blog posts is part of my learning process, so you can expect to see some of that here in coming months.

Meanwhile, there are better resources out there than this blog, and I'd encourage you to go have a look at the Swift.org website and Dave Verwer's iOS Dev Directory.

You can also follow me on Twitter at https://twitter.com/harmonicLattice, just be aware that I sometimes tweet out of confusion rather than understanding.

Saturday, February 09, 2019

Manual Memory Management in Swift

While working my way through a blog post by Jeremy Howard, a missing piece in the puzzle of working with audio data in Swift bubbled up to the surface of my mind: specifically, how Swift's handling of mutable value types using copy-on-write might result in multiple memory allocations while writing values into a buffer, exactly what you don't want in a real-time context.

This realization led me to open The Swift Programming Language in the Books app and search on the word "buffer", which led to this link, Manual Memory Management, clearly the right starting point for investigating the handling of buffers when copy-on-write is not what you want.
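For my own future reference, here is roughly the shape I expect that to take, using UnsafeMutableBufferPointer so the buffer is allocated once and written in place, with no copy-on-write machinery involved; a sketch only, with made-up sizes.

```swift
import Foundation

// Allocate a fixed buffer once, outside any real-time code, and write samples
// into it in place: no copy-on-write, no reallocation.
let frameCount = 512
let buffer = UnsafeMutableBufferPointer<Float>.allocate(capacity: frameCount)
buffer.initialize(repeating: 0)

// Later, in the rendering code, fill it in place.
for i in 0..<frameCount {
    buffer[i] = 0.25 * sinf(Float(i) * 0.1)   // placeholder computation
}

// And only when it's genuinely no longer needed (not in the real-time path):
buffer.deallocate()
```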

Saturday, February 02, 2019

My first "Real" Programming Project, and also Real-time Coding

Back in the mid-to-late '80s and early-to-mid '90s, I went through a series of hastily foreshortened computing experiences. I was a bit late to the personal computing scene, so there was an element of catch-up involved, but, from the outset, I was really more interested in development than in personal use.

My first exposure to computing was a class in Fortran, spring semester 1972, work for which was done on a card punch machine and programs run by taking card decks to a counter that separated the computer operators from the rest of us, loading the deck into a card reader, then returning later to pick up the printout. In addition to Fortran, there was a bit of JCL at the beginning (and end?) of each deck. I also had a single experience with using a printing terminal (no screen), having figured it out enough to get started from the documentation on a nearby table.

I did well in Fortran, but bombed out of PL/I (Programming Language One) the following semester. This was probably more due to extracurricular complications than anything to do with that language, but that, combined with my initial failure to comprehend calculus, did put any notion I had of going into computer science on hold for more than a decade. (At that time, computer science was widely viewed as a branch of mathematics, and calculus was required for admission into the degree program.)

I dropped out after my second year in order to pursue an entirely different, non-academic interest, which involved moving to a different city in a different state. Nevertheless, what little I'd learned informed how I then saw the world, and I was beginning to view everything largely in terms of information.

I caught my first sight of a personal computer of some sort in late-summer 1976, just after the end of a summer program I'd attended at a small college in Vermont. I didn't get close enough to it to read any labels, and it's possible that it was merely a desk-sized word processor, but that thought didn't occur to me until much later. I thought I was looking at a stand-alone computer dedicated to use by a single person. That idea began worming its way into my consciousness; it prepared me to understand the significance of Moore's Law when I finally encountered it as such, and it (re)awakened an interest in microprocessors and integrated circuitry in general, and in the programmable code behind digital computation. Clearly there was something there that would only become increasingly important, and would sooner or later enable things that were all but inconceivable at that time.

Even so, when I did go back to school full-time, in 1978, it was in biology, with the half-hearted intention of switching into engineering after completing my bachelor's degree. I've never really regretted my foray into biology, because without it I might never have found my way to (the layman's version of) systems theory, from which I gleaned a few key concepts, including that of an open system, emergence, and the elusive idea of a strange attractor.

All of this had nothing at all to do with what I was doing for a living at the time. So, shifting gears...

The first computer I actually owned was an Atari 600XL (great keyboard!), which came with just 16 kilobytes of RAM, but I also purchased the external module that increased that to 64 kilobytes, along with a 300 baud modem, which I used to connect to a mainframe at the local university so I could work online and file programs for a course in Pascal.

Next I moved up to the Atari 1040 ST and bought the developer's kit (Mark Williams C), but made the mistake of getting the lower resolution color screen instead of the higher resolution grayscale screen, so coding was difficult, and I was unfamiliar with C, and with operating system APIs in general and GEM/TOS in particular (DRI's Graphics Environment Manager and the Tramiel Operating System, named after Jack Tramiel, who'd purchased Atari). I did learn a bit, but it was overwhelming, and I ended up selling that machine, including the dev kit, and switching to MS-DOS running on an Epson computer for my next go-round.

As I recall, I used that machine for both a class in data structures, using Pascal, and an accelerated course in C, but that whole period is a blur for me, so I might have the details wrong. I do remember owning a copy of Turbo C for it.

Next came an Amiga 500, which made the Epson superfluous, followed by an Amiga 3000, which made the 500 superfluous. I got the 3000 because I had this idea for a program I wanted to write. There was also a dongle involved, the name of which I don't remember, that converted the Amiga's video output to a standard SD video signal, recordable on VHS. In any case, as with my early attempts at programming the Atari ST, this project involved programming in C using operating system APIs, but this time I had a clue about C and the available documentation was much better!

This project involved first creating a map from code, then scrolling it around (similar to how UIScrollView works in iOS) while the mouse cursor, positioned in the middle of the screen, traced out a path. To avoid artifacts, and as a means of establishing pacing, the map scrolling had to be done during the vertical blank, the time when the electron beam in the monitor moved back from the bottom to the top. I don't clearly recall how this worked, but it probably involved creating a callback function and passing a pointer to that to the OS, so it would be called when the vertical blank happened. In any case, this was my first exposure to time-constrained (real-time) processing.

I might have a hard time proving this, since, if I do still have the code for it squirreled away somewhere, it's probably on an Amiga-formatted floppy disk, but that program did work, and I was able to make a video recording. I was not, however, successful in getting anyone else interested in the project, so I also lost interest. At the same time, my interest in the Amiga as a platform was ebbing. I'd recognized the limitations (in the absence of scale and adequate investment) of their custom-hardware approach, and was ready to jump ship.

I nearly forgot one chapter of this story, which was that I replaced the Amiga with one of the original Pentium machines, a Packard Bell Legend 100, as I recall, which came with Windows For Workgroups and a ridiculous graphical wrapper. It had an optical drive as well as a hard disk and a couple of floppy drives. This was a machine I could reasonably have used for development, but my heart wasn't in it, and this was at the time when the web was taking off. I picked up a bit of HTML, but otherwise mainly used that machine as a fancy terminal emulator.

I'd been casually following Steve Jobs's NeXT since the beginning, and around the time of Apple's 'acquisition' of NeXT I donated the Pentium, then, soon after that, switched up my whole situation, which was disruptive, so it wasn't until the iMac came along that I again owned a computer, and even longer before I got into Mac programming. But that's another story, for another time.

Saturday, January 12, 2019

The Three Phases of Real-time Code

Think of this as a novice's understanding, if you like. I make no claim to being any sort of expert, certainly not an expert in real-time coding, but I do have a tiny bit of experience, and have used this pattern, even if not quite intentionally.

First there's what you can do before the real-time code runs, to smooth the way for it and minimize the amount of work that has to be done by real-time code. In my own projects, this has mainly meant creating a precomputed table of sine values, enabling the calculation of sound samples based on sines via simple table lookups, but anything you can do ahead of time that results in fewer cpu cycles to accomplish real-time work is helpful.
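As a concrete example, this is the sort of precomputation I mean; the sizes and names are mine, a sketch rather than my actual code.

```swift
import Foundation

// Phase 1: done once, before any real-time code runs.
let tableSize = 4096   // a power of two, so wrapping can be a bitwise AND
let sineTable: [Float] = (0..<tableSize).map { i in
    sinf(2 * Float.pi * Float(i) / Float(tableSize))
}

// Phase 2: the real-time code reduces sin() to an index computation and a lookup.
@inline(__always)
func fastSine(phase: Float) -> Float {      // phase in 0..<1
    sineTable[Int(phase * Float(tableSize)) & (tableSize - 1)]
}
```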

The real-time code itself should be as simple as possible, avoiding anything that can be done beforehand or left for later, and altogether avoiding dynamic method calls (calls to code that cannot be inlined because the specific version of the method to be used cannot be determined until runtime). In Swift, if you need to call a method on a class, keep that class a base class, use the "final" keyword on either the class or the method, and, if possible, within that method avoid accessing anything other than parameters and stored values.

(Inserted 03April2019: Above, I should have said that the real-time algorithm should be as simple as possible. The code itself should be flat, so it executes inline, with a minimum of jump statements in the machine code. This may mean duplicating code to avoid indirection, and that's a reasonable tradeoff; performance matters more than code size in this context.)
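A small sketch of what I mean by giving the compiler every chance to dispatch statically and inline; the class here is hypothetical, not my actual generator.

```swift
import Foundation

// 'final' means no subclass can override render(into:), so the call can be
// dispatched statically and potentially inlined.
final class ToneGenerator {
    private var phase: Float = 0
    private let phaseIncrement: Float

    init(frequency: Float, sampleRate: Float) {
        phaseIncrement = frequency / sampleRate
    }

    // Touches only its parameter and stored properties: no dynamic calls,
    // no allocation, nothing that could block.
    func render(into buffer: UnsafeMutableBufferPointer<Float>) {
        for i in 0..<buffer.count {
            buffer[i] = sinf(2 * Float.pi * phase)
            phase += phaseIncrement
            if phase >= 1 { phase -= 1 }
        }
    }
}
```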

Avoid algorithms higher than O(n), or at the very worst O(n log n). Eliminate loops within loops, if you possibly can. Also avoid heap allocation and deallocation; whatever needs to be in the heap should be set up beforehand and left in memory until later. If need be, you can set up a flag to indicate when an object in memory is no longer needed.

Don't run real-time code on the main thread. If using a framework that includes a real-time context, take advantage of that by making use of the appropriate thread it provides, for example by putting your real-time code into a callback. If you're rolling your own real-time thread, you're already way deeper into this than I am!

Finally, any cleanup that doesn't have to be done in the real-time code shouldn't be; leave it for later, passing along just enough information from the real-time code to enable the cleanup code to do its job. You can pass information out of the real-time context to your other code by modifying values stored in variables defined in the scope enclosing the definition of the callback. Grouping these into a mutable struct, an instance of a final base class, or mutable static values seems like a good idea.
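In practice that can be as simple as the following; everything here is named by me for illustration, and real code would want to think about how these values are synchronized between threads.

```swift
// A preallocated slot the real-time code writes into; the actual cleanup and
// reporting happen later, from a normal thread.
final class RenderReport {
    var peakAmplitude: Float = 0
}

let report = RenderReport()   // lives in the scope enclosing the callback

// Inside the real-time callback: record the bare facts, nothing more.
func renderCallback(samples: UnsafeMutableBufferPointer<Float>) {
    var peak: Float = 0
    for sample in samples { peak = max(peak, abs(sample)) }
    report.peakAmplitude = peak   // no allocation, no locks, no I/O
}

// Later, outside the real-time context, act on what was recorded.
func cleanup() {
    if report.peakAmplitude > 1.0 {
        print("clipping occurred, peak was \(report.peakAmplitude)")
    }
}
```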

And then there's testing. Think about what the worst-case scenario might be, and test for that. If your real-time code reliably returns within the time allowed under those conditions, it's time to test it on the slowest device that might be called upon to run it, and if that works you're golden!

Sunday, January 06, 2019

Swift: Origins and My Personal History with it

Once upon a time there was a graduate student named Chris Lattner, who had a bright idea. That bright idea turned into LLVM (originally an acronym for Low Level Virtual Machine), essentially compiler technology, the popularity of which has been on a tear ever since.

Lattner went to work for Apple, but continued to be heavily involved in the development of LLVM, notably in the extension of Clang (LLVM's C language front-end) to support C++. (In the LLVM world, a front-end turns what we commonly think of as computer code into LLVM-IR, the LLVM Intermediate Representation, from which it can then be further transformed by a back-end into machine code for some specific platform, after a bit of polishing while still in LLVM-IR.)

As I understand it, in the wake of that effort, Lattner thought there had to be a better way, and set out to create what has become known as Swift. As you might guess from its name, one of the design goals for Swift was that it should run quickly, especially as compared with scripting languages like JavaScript that are interpreted or compiled on the fly as they are executed, rather than being compiled ahead of time.

Another primary design goal was that it should be safe, immune to many of the categories of defects that find their way into computer software. Yet another was that it should be modern, bringing together features from an assortment of other programming languages, features that make code easier to write and comprehend.

Something that may never have been a goal so much as an underlying assumption was that it should leverage LLVM. Of course it should; who would even think to question that, myself included! Swift's most tangible existence is as a front-end compiler for LLVM, implementing and evolving in lockstep with an evolving language specification. (That front-end compiler determines what will and will not be allowed to progress to the LLVM-IR stage.)

But to get back to the story of its origin, Lattner worked alone on this for awhile, then showed it to a few others within Apple, where it at first became something of a skunkworks project, then a more substantial project involving people from outside the company, but still keeping a low profile. In fact it kept such a low profile that when it was publicly introduced at WWDC 2014 nearly everyone was taken completely off-guard.

That public introduction brings to mind another design goal, or maybe design constraint, which was that Swift had to be interoperable with Apple's existing APIs (Application Programming Interfaces), otherwise it would have had a much more difficult time gaining traction. Being interoperable with the APIs meant being interoperable with Objective-C, the language Apple began using when it acquired NeXT in 1997. I can't speak to how Swift might have turned out differently if that were not the case, but I'm relatively confident this requirement served to accelerate its development by dictating certain decisions, obviating the need for extended discussion. (Swift also inherited some of the features of Objective-C, notably including automatic reference counting and the use of both internal and external parameter labels in functions, initializers, and methods, which contribute to its readability.)
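The label inheritance I mean looks like this; a trivial made-up function, but the call site reads almost like a sentence.

```swift
// 'between' and 'and' are the external labels callers see; 'lower' and
// 'upper' are the internal names used inside the body.
func clamp(_ value: Double, between lower: Double, and upper: Double) -> Double {
    min(max(value, lower), upper)
}

let level = clamp(1.3, between: 0.0, and: 1.0)   // level is 1.0
```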

So it's June, 2014, and Swift has just been announced to the world. Despite the vast majority of the existing code base being in Objective-C, C, or C++, both at Apple and at other companies providing software for Apple's platforms, the writing was plainly on the wall that Swift would eventually largely if not entirely displace Objective-C, just not right away. Since I didn't personally have a large Objective-C code base, and what I did have I'd basically neglected for over three years, I saw nothing to hold me back from diving into Swift (well, nothing other than having very limited time to give to it).

However, as I got further into it, I discovered some details that muted my enthusiasm. Most importantly for my purposes was Swift's initial unsuitability for hard real-time use, like synthesizing complex sound on the fly (my primary use-case). It still isn't really suitable for such use, but it is getting closer.

I also had quibbling issues, including the initial lack of a Set type (collections of unique elements), despite the fact that much of the functionality of sets had already been developed as part of the Dictionary type, and then, when a Set type was introduced, it felt like a step-child, with an initializer based on Array syntax (ordered collections which may have duplicate elements). I remember thinking, if Swift had started out with a Set type with its own syntax, it would have made far more sense for Dictionary syntax to be based on that rather than on arrays, since a Dictionary is essentially a Set with associated values, all of the same type. (There are only so many options for symbolic enclosure on most keyboards: parentheses, curly braces, square brackets, and angle brackets; and these were already fully subscribed, by expressions, blocks, arrays, and type specifications, respectively. Other symbols and combinations of symbols are available but would not be as straightforward to utilize, and, in any case, it's a bit late in the game to be making anything other than additive changes, alternative syntax that does not replace what already exists.)
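To illustrate the asymmetry I'm complaining about (the types are from the standard library, the sample values are mine):

```swift
// An Array has its own literal syntax...
let ports = [1, 2, 2, 3]                         // duplicates allowed, order preserved

// ...and so does a Dictionary...
let portNames = [1: "audio", 2: "midi", 3: "control"]

// ...but a Set is spelled with an Array literal plus a type annotation,
// which is the step-child treatment described above.
let uniquePorts: Set<Int> = [1, 2, 2, 3]         // stored as 1, 2, 3; order undefined
```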

Another quibble revolved around numeric values not being directly usable as Booleans (true/false). In C, in a Boolean context, zero evaluates to false, and any other value evaluates to true. This can be very convenient and is one of my favorite features of that language. Yes, in Swift one can always use a construction like numericValue != 0, but when you're accustomed to just placing a numeric value in a Boolean context and having it evaluated as a Boolean, that feels slightly awkward. I get that using numerics as Booleans invites the use of complicated Boolean constructions, which can make code more difficult to read. There have been many times when I've had to parse what I had myself strung out onto a single line, by breaking it out over several lines, to be able to understand it and gauge its correctness. Even so, it initially annoyed me that I would have to give this up in Swift. (I've long since gotten over this and now prefer the absence of implicit type conversions. In Swift, with regard to types, what you see is what you get, and that's a good thing!)
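In code, the difference I'm describing amounts to this (a trivial example of my own):

```swift
let remainingFrames = 512

// In C, a numeric value can stand in for a Boolean:  if (remainingFrames) { ... }
// In Swift, the test has to be spelled out, and the types stay visible.
if remainingFrames != 0 {
    print("still rendering")
}
```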

But there were also aspects of Swift I liked right away! Optionals, for example, just made so much sense I was amazed they hadn't already been incorporated into every language in existence, and enums with the choice of raw or associated values were clearly a huge improvement over how enumerations work in C and Objective-C. Also, once it finally settled down, function/method declaration and call-site syntax hit the sweet spot for me. Likewise access control, with the exception of the descriptive but nevertheless slightly awkward 'fileprivate' keyword; since I'm unlikely to ever have much need for it, I can live with it, and I certainly have no intention of attempting to reopen that can of worms at this late date!
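A quick sketch of the two features that won me over, with cases I've made up:

```swift
// An enum whose cases carry associated values of different types.
enum EngineEvent {
    case started
    case bufferUnderrun(missedFrames: Int)
    case failed(message: String)
}

// An Optional forces the "no value" case to be handled explicitly.
func describe(_ event: EngineEvent?) -> String {
    guard let event = event else { return "no event" }
    switch event {
    case .started:
        return "started"
    case .bufferUnderrun(let missedFrames):
        return "underrun, missed \(missedFrames) frames"
    case .failed(let message):
        return "failed: \(message)"
    }
}
```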

Even though it initially seemed overdone, I like the emphasis on value types (value semantics). I'm comfortable with a little bit of indirection, but begin to get nervous when dealing with pointers to pointers, and get positively fidgety when dealing with pointer arithmetic. (You know you want to keep those stack frames small, and pointers to objects allocated on the heap can be the most direct means to that end, but it can get out of hand.) Happily, Swift not only doesn't take away the reference option, it makes using it easier and safer, while also preventing value types from resulting in overly large stack frames and excessive copying.
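The distinction, reduced to a few lines (types of my own, purely for illustration):

```swift
// Value semantics: assignment copies, and the copy is independent.
struct Envelope { var attack: Double }
let a = Envelope(attack: 0.1)
var b = a
b.attack = 0.5           // a.attack is still 0.1

// Reference semantics: assignment shares, so both names see the change.
final class Patch { var name = "default" }
let p = Patch()
let q = p
q.name = "lead"          // p.name is now "lead" too
```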

I also like the notions of protocols and generics, but in a fuzzier way, since I really don't completely comprehend either, and there are particular fundamental protocols, like Sequence, of whose existence I am vaguely aware but not much more than that. I suppose these are the sorts of things I'll be delving into here going forward.

But to climb back up to my preferred edge-of-space point of view, I've had this sneaking suspicion all along that Swift is composed from primitives of some sort, although you might have to be a compiler adept to really understand what they are and how they fit together. To use the example of sets, dictionaries, and arrays from above, the essential characteristic of a set is that its elements are unique; no two are identical. The Set type, as implemented, is unordered, but you might also have an OrderedSet, and it is in fact quite possible to create such a type; moreover, Apple's Objective-C APIs already include both NSOrderedSet and NSMutableOrderedSet. Likewise you might want an unordered collection, the elements of which are not necessarily unique. Uniqueness and ordering are independent attributes of collections.

Besides Sequence, the protocols I'm vaguely aware of include such things as Collection, MutableCollection, Numeric, and so forth. While these are straining in this direction, they really aren't the primitives that exist (only?) in my imagination, which at this point I would expect to be written in C++ rather than Swift, or only expressible in LLVM-IR.

I'm more curious about this than is good for me, considering how unprepared I am to understand things at this level, but there's no point in fighting it. Show me bread and I see the ingredients and the processes applied to them to make the finished product. Show me a car and I see parts and the assembly process. It's in my nature, and so it's also likely to show up here, to the extent I make any headway in wrapping my head around such esoterica.

(17Feb2019) I just realized that there's a general principle to be extracted from this, which is that there's no point in fighting a tendency to dream about what might be. Instead, it's better to choose dreams that are both achievable and worth the effort, and build bridges to them from current reality.

Update: As if to add credibility to my hunch about Swift being built up from abstract and/or compiler-level primitives, on January 10th Jeremy Howard published High Performance Numeric Programming with Swift: Explorations and Reflections, in which he states "Chris [Lattner] described the language to me as “syntax sugar for LLVM”, since it maps so closely to many of the ideas in that compiler framework."

Friday, January 04, 2019

Filling in the Gaps with Swift

They say the best way to learn is to teach.

Well, despite having dabbled in Swift for 4.5 years, I don't think I'm quite ready to teach it; nevertheless, I don't suppose it would do anyone any harm to watch over my shoulder as I attempt to wrap my own head around it.

I doubt that I'll entirely devote this blog to that purpose, but you can expect to see such posts begin to appear, directly.

For now, here's a link to an article by the amazingly prolific Paul Hudson, Hacking with Swift: Glossary of Swift Common Terms. This article contains a few breezy, imprecise definitions, but they're mostly of a nature that won't be relevant as you're just setting out to learn the language. Just be aware that there'll be more detail to learn as you advance.