NSColor is dynamic, so its exact color value depends on the view's context. CGColor isn't dynamic, so resolving a CGColor produces a snapshot from the current context at the moment you ask for it. As your view's context changes, you'll be left with a stale value that might look very wrong compared to the surrounding UI.
The display cycle gives you the invalidation and update funnel points that you need to do this right. AppKit always sets up the necessary context (NSGraphicsContext, NSAppearance, etc.) before performing display callouts like -updateLayer.
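A quick sketch of that snapshot behavior, using NSAppearance's performAsCurrentDrawingAppearance (macOS 11+); the variable names here are mine, and this is an illustration rather than anything the frameworks require you to write:

```swift
import AppKit

// The same dynamic NSColor resolves to different CGColor snapshots
// depending on which appearance is current at resolution time, and
// neither snapshot updates after the fact.
let dynamicColor = NSColor.textColor

var lightSnapshot: CGColor?
NSAppearance(named: .aqua)?.performAsCurrentDrawingAppearance {
    lightSnapshot = dynamicColor.cgColor  // frozen against Aqua
}

var darkSnapshot: CGColor?
NSAppearance(named: .darkAqua)?.performAsCurrentDrawingAppearance {
    darkSnapshot = dynamicColor.cgColor   // frozen against Dark Aqua
}
// The two CGColors carry different component values; each is a
// one-time snapshot, which is why resolution belongs inside the
// display pass where AppKit has the right appearance current.
```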
Besides the obvious NSAppearance changes that can occur (vibrancy, Increased Contrast mode, etc.), there are more subtle ones. For example, in the Touch Bar there are specific elements of the UI that get white-balanced automagically as ambient light readings change. The glyphs and text drawn inside your "key caps" can shift pretty far toward blue, and this all happens through redisplay and dynamic NSColor resolution. The amount of context sensitivity is only going to increase over time.
A view’s relationship to its layer is part of that view’s contract for expressing its drawing. On one level there's an obvious encapsulation problem when you poke in and mess with someone's layer contents, but at a more fundamental level, it's just a fragile thing to do.
If a view specifically wants to describe itself with layer contents, it can say as much with -wantsUpdateLayer, and it's going to end up with its own backing layer for that purpose. That's great! Otherwise, it might not have its own backing layer at all... for example, the framework could hypothetically save memory by drawing several views into a single layer since nothing’s animating[1]. Or, the layer may have been thrown away and is going to be replaced in a subsequent display pass.
From outside the view itself, you don't know how that layer is being managed. Manipulating it from the outside is questionable indeed.
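For a view that does want to express itself with layer contents, the supported shape looks roughly like this (SwatchView is an invented name, and this is a minimal sketch, not a complete implementation):

```swift
import AppKit

final class SwatchView: NSView {
    // Opting in here is the view's own declaration that it draws via
    // layer contents; AppKit then guarantees it a backing layer and
    // calls updateLayer during the display cycle.
    override var wantsUpdateLayer: Bool { true }

    override func updateLayer() {
        // AppKit has already made the view's effective appearance
        // current, so re-resolving the dynamic NSColor on every display
        // pass yields the right snapshot. Never cache the CGColor
        // across appearance changes.
        layer?.backgroundColor = NSColor.controlAccentColor.cgColor
    }

    // When the view's own state changes, it invalidates and lets the
    // display cycle funnel back into updateLayer.
    func noteStateChanged() {
        needsDisplay = true
    }
}
```

The point is that all of this happens from inside the view's own contract; nothing outside the view ever reaches in and touches the layer directly.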
The layer tree is a side effect of the view hierarchy, not its model. This probably annoys UIKit people a lot, but this approach is extremely intentional. If the CoreAnimation layer tree becomes the display API, there's very little flexibility to express drawing any other way. In fact, there's very little flexibility to even improve the way CoreAnimation is used to represent the view hierarchy, because developers have made assumptions like "a stack view that draws nothing will nonetheless have a layer that I can mess with". A view-centric API allows the framework to make choices about backing display technologies that are backward-compatible, and possibly even transparent to individual app developers.
4. The fundamental drawing primitive, which is designed to be subclassed, shouldn't be able to randomly do its own possibly-incompatible drawing in contention with the expectations of its subclassers.
I think that about covers it.
[1] This is actually hypothetical; such behavior would need to be SDK link checked because it might break something.