What I left unsaid in my last post, in September, was that much of what initially drove my interest in SwiftUI was the hope that it would enable lower-latency interactions by avoiding things like the responder chain and the delays built in to distinguish among gestures (whether a touch is actually the beginning of a swipe, for example).
The responder chain is about which object should respond to a click (in macOS) or a touch (in iOS and iPadOS). Hit-testing first walks down from the window's root view to the most specific view under the event; if that view (or one of its gesture recognizers) doesn't handle it, the event travels back up the chain of responders until something does.
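To make that concrete, here's a minimal UIKit sketch (the class name is mine, purely for illustration) that logs both halves of event delivery: hit-testing walking down the tree, and an unhandled touch traveling back up the chain.

```swift
import UIKit

// Illustrative only: a view that logs both phases of event delivery.
class LoggingView: UIView {
    // Phase 1, hit-testing: starting at the window, UIKit asks each view
    // which of its descendants contains the touch point, walking *down*
    // toward the most specific view.
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        let hit = super.hitTest(point, with: event)
        print("hitTest in \(type(of: self)) resolved to \(String(describing: hit))")
        return hit
    }

    // Phase 2, the responder chain: if this view doesn't consume the
    // touch, calling super forwards it *up* the chain (superview, view
    // controller, window, application) until some responder handles it.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("touchesBegan reached \(type(of: self))")
        super.touchesBegan(touches, with: event)
    }
}
```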
The main source of built-in delay I know about relates to scroll views, and whether a touch is intended to scroll the overall content view or to interact with a child view contained within it.
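At the UIKit level that delay is an actual, adjustable property. Here's a sketch of the knobs involved; whether loosening them is wise depends on your mix of gestures, so treat this as illustration rather than advice.

```swift
import UIKit

// Sketch of the scroll-view touch delays, for illustration.
let scrollView = UIScrollView()

// By default the scroll view briefly withholds touch-down events from its
// subviews so it can decide whether the gesture is a scroll. Turning this
// off delivers touches to children immediately, at the risk of, say,
// highlighting a button the user only meant to drag past.
scrollView.delaysContentTouches = false

// Related: whether the scroll view may cancel touches it has already
// delivered to a child once it decides the gesture is a scroll.
// true is the default; shown here for completeness.
scrollView.canCancelContentTouches = true
```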
I'm now thinking that avoiding these sources of latency was probably a vain hope, since, as I understand it, SwiftUI (for now) translates the declarative code it enables into native UI entities for the platform(s) it's compiled for. There may be some performance advantages, and there are likely to be more in the future, but there are some hard constraints. No matter the nature of the code, it will still be necessary to determine which view should respond, and to what type of gesture.
Happily, that hope has been replaced as a source of motivation by an interest in the framework for its own sake, and in the other language features that enable it, though you might not know it from the paltry progress I've managed to make so far.
As for how to avoid latency, the most reliable answer seems to be the same as ever: keep your UI structure as flat as is reasonable, rather than going hog-wild with layering views within views within views. Some such layering is inevitable; just be moderate with it, and don't sweat the unavoidable milliseconds.
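In SwiftUI terms, my guess at what that looks like (the view names are hypothetical) is something like this: both rows render the same content, but the second skips the wrapper layers that do nothing.

```swift
import SwiftUI

// Hypothetical example: the same row built two ways.

// Hog-wild: every wrapper is one more view for the system to resolve
// during layout and hit-testing.
struct NestedRow: View {
    var body: some View {
        VStack {
            HStack {
                VStack {
                    Text("Title")
                }
            }
        }
    }
}

// Flat: one container, same rendered result.
struct FlatRow: View {
    var body: some View {
        HStack {
            Text("Title")
        }
    }
}
```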