@drewcrawford

This is actually the wrong conclusion. The entire purpose of the “inner class” is to provide value semantics while maintaining efficiency of implementation.

Is that an opinion? Or was there a meeting when the "entire purpose" of inner classes was established and I wasn't invited? Sure, they can be used for efficiency, but they can also be used for other purposes, including inner mutation.

Just for starters, Mike Ash seems to have uncovered that inner classes are used inside the standard library to provide destructors:

Destruction can be solved by using a class, which provides deinit. The pointer can be destroyed there. class doesn't have value semantics, but we can solve this by using class for the implementation of the struct, and exposing the struct as the external interface to the array.

So to say that the "entire purpose" of inner classes is efficiency is I think plainly false; they are used for many reasons. Efficiency may be one.

Moreover, following Mike Ash's outline in which Swift.Array calls malloc and free (read: wraps the kernel, read: mutates the free block chain), Swift.Array is directly analogous to my KernelSender. They both hide inner mutation "somewhere else" inside a struct, by moving the mutation to a place where you don't see it. So not only is inner-mutation-inside-structs an actual thing, it's being used inside the standard library.
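
To make that concrete, here is a sketch of Mike Ash's outline (the type names are mine, not the standard library's): the inner class owns the malloc'd memory and frees it in its deinit, while the struct is the value-typed interface the caller sees.

import Foundation   // for malloc/free

final class ArrayStorage {
    let buffer: UnsafeMutableRawPointer
    init(byteCount: Int) { buffer = malloc(byteCount)! }
    deinit { free(buffer) }                            // destruction lives on the class
}

struct TinyArray {
    private var storage = ArrayStorage(byteCount: 64)  // inner class, hidden behind the value type
    // the value-typed API the caller sees goes here
}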

I agree with you that inner mutability "feels" wrong in the CanSend example. But the suggestion that inner mutability should never ever be used is wrong, because of Swift.Array. So we need some rule to distinguish when inner-mutability-with-structs is bad from when it is okay. If the rule was "never do any mutation inside a struct" then we would not have value-typed-arrays to begin with.

(In fact, we didn't have value-typed-arrays to begin with. I have a serious theory that it was exactly the rigid application of "never have inner mutation" inside Swift that initially led to the silly behavior. Recall that in early betas, the value/reference behavior of arrays used to hinge on whether or not the operation caused a resize--which is exactly those cases in which the array implementation needed to mutate the free block chain.)
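
For what it's worth, the way the shipping Array squares this circle (inner mutation without giving up value semantics) is copy-on-write. A minimal sketch of the idea, using my toy code rather than the standard library's:

final class Storage {
    var elements: [Int]
    init(_ elements: [Int]) { self.elements = elements }
}

struct COWArray {
    private var storage = Storage([])
    mutating func append(_ x: Int) {
        if !isKnownUniquelyReferenced(&storage) {   // somebody else can see this buffer...
            storage = Storage(storage.elements)     // ...so copy it before we mutate
        }
        storage.elements.append(x)                  // inner mutation, value semantics preserved
    }
}

Which suggests the rule we're after: inner mutation inside a struct is fine so long as no other copy of the struct can observe it.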

No, do not write the “struct with inner class wrapper” so you can simply make it pseudo-mutable.

I will look forward to the patches you will submit to the standard library on this topic when it lands on GitHub later this year.

You see, MockSender does indeed have a lifecycle; it has a history of all messages sent that varies over time.

A lifecycle is not merely a description of something that varies over time. "Lifecycle" here is the OOP term of art:

In object-oriented programming (OOP), the object lifetime (or life cycle) of an object is the time between an object's creation and its destruction.

...and this is what the SO poster giving the advice meant: that if our type "has a lifecycle" (read: needs a destructor, much like Mike Ash's Array), then it should be a class, because only classes have destructors. I agree with you that this rule would be stupid (just for starters, it forbids Swift.Array), but let's be clear about what kind of stupid it is. He is not making some broad point about things that change over time; he is talking about an implementation detail, namely that deinit is something only a class gets.
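
Concretely (my throwaway example, not his):

final class Connection {
    deinit { print("tearing down") }    // classes get a deinitializer
}

struct Handle {
    // deinit { ... }                   // not allowed: an ordinary struct has no deinit
}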

Even if he were saying that things that change over time should be classes, that would be wrong. We can and do implement value types that change over time (see: the mutating func), so saying that the choice of a value type hinges on whether something changes over time would be inconsistent with, e.g., the standard library.
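
For instance (a throwaway value type of my own):

struct Counter {
    private(set) var count = 0
    mutating func increment() { count += 1 }   // a value type whose state varies over time
}

var counter = Counter()
counter.increment()   // perfectly legal, and it clearly "changes over time"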

This solution does exactly what we want. It also does it by maintaining that the protocol can be conformed to by immutable value types and by mutable reference types. This is a validation of the “Structs Philosophy™”; MockSender is not a value-type, don't try and make it one.
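
For readers without the original essay handy, the solution being endorsed has roughly this shape; the exact requirements of CanSend are my paraphrase, not a quote:

protocol CanSend {
    associatedtype Message
    func send(_ message: Message) throws
}

// Value-typed conformer: no stored state worth mutating at this level.
struct KernelSender: CanSend {
    func send(_ message: String) throws { /* wrap the syscall */ }
}

// Reference-typed conformer: the mock honestly has identity and a history.
final class MockSender: CanSend {
    private(set) var sentMessages: [String] = []
    func send(_ message: String) throws { sentMessages.append(message) }
}

The protocol itself is agnostic; each conformer picks value or reference semantics on its own.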

I agree with the solution, but it is hardly a validation of the Structs Philosophy™. If we have two choices and the oracle says "take this path, and if it doesn't work out (like it won't most of the time) try the other one" then it is a shitty oracle. You had one job.

We went down an initial path (implementing CanSend with a struct) which was a dead end. In this particular case (because it is a contrived example that fits on our screen) the path to the dead-end was short, but in the general case, long paths to dead-ends are possible, and they motivated the original essay. An algorithm that gives us the wrong-answer-first is a shitty algorithm.

Even if we adopt the rule "always choose classes" (which I do not advocate, but I know smart people who do), that would be strictly better than the Structs Philosophy™, because "most custom data constructs should be classes" (Apple), and an oracle that picks the most likely path first is far superior to an oracle that picks the unlikely path first.

Then of course there's the oracle I actually advocate (the Jeff Trick) which I think agrees with (nearly) all of our intuitions about reference and value types and only rarely (if ever?) sends us down dead ends.

@drewcrawford (Author)

I've thought of an argument that is both simpler and more elegant.

If we consider this case:

struct Foo {
    let a: Int
    func isEqual(b: Int) -> Bool { return b == a }
}

I think we would both agree that clearly isEqual should be declared non-mutating.

However, when we look at the x86 assembly, the claim is a lie:

cmp %eax, %ebx   # modifies zflags
jne addr         # modifies pc

The thing is, the idea of a "non-mutating function" is less a real thing and more of a fairy tale for children. There is no such thing, and there never has been. Maybe it exists in some kind of philosophical, platonic ideal sense, but in the real world, no.

You might not see what x86 has to do with Swift programs, but imagine for a moment that we're writing an x86 emulator. We might have this:

typealias Register = Int64
struct ISA {
    var zflags: Register = 0
    var pc: Register = 0
    mutating func cmp(_ a: Register, _ b: Register) {
        zflags = (a == b) ? 1 : 0       // the comparison writes zflags
    }
    mutating func jne(_ target: Register) {
        if zflags == 0 { pc = target }  // a taken branch writes pc
    }
}

Here we want to declare our instructions as mutating because someone working at this level of abstraction is going to want to know that zflags and pc may change.

Meanwhile, someone at a much higher level of abstraction will not:

//pseudocode
struct HighLevelOps {
    func compare(a: Int, b: Int) -> Bool { //non-mutating
        self.ISA.cmp(a, b)                 //inner mutation
        return (self.ISA.zflags & ZFLAGS_EQUAL) != 0
    }
}
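
As written, that pseudocode wouldn't compile: a non-mutating method on a struct can't call a mutating method on its own stored property. To actually keep compare non-mutating you need, fittingly, the inner-class trick. A minimal sketch, reusing the ISA above (the box type and the Bool return are mine):

final class ISABox { var state = ISA() }   // the class gives us somewhere to hide the mutation

struct HighLevelOps {
    private let box = ISABox()
    func compare(_ a: Register, _ b: Register) -> Bool {  // non-mutating, as far as the caller can tell
        box.state.cmp(a, b)                                // inner mutation, out of sight
        return box.state.zflags == 1
    }
}

The trade-off is the one this whole thread is about: copies of HighLevelOps now share a single box, so you either accept reference-ish behavior or layer copy-on-write on top.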

The insight from this exercise is that in a sufficiently large program, one with sufficiently deep levels of abstraction, either:

  1. Everything must be mutating (since the fundamental operations mutate, and we call them), or
  2. We must lie to the caller, pretend we don't mutate, and hope they don't notice

Now if you are writing a "shallow"-shaped program, one that has basically one or two levels of abstraction--say it's a GUI drawing app, or something--then it may be perfectly reasonable to never lie, because somebody is likely to notice in the average case.

However if you are working on a "deep" program, where there are dozens of layers of abstraction, then lying may be the only workable approach. It's likely that somebody, way the hell down in the basement, needs to mutate, but telling that to somebody on the surface is a failure of encapsulation.

I suspect this whole debate is arising because in the back of your mind you are considering a program that is shallow-shaped, and in the back of my mind I am considering a program that is deep-shaped, and the positions follow from those priors.
