This is actually the wrong conclusion. The entire purpose of the “inner class” is to provide value semantics while maintaining efficiency of implementation.

Is that an opinion? Or was there a meeting when the "entire purpose" of inner classes was established and I wasn't invited? Sure, they can be used for efficiency, but they can also be used for other purposes, including inner mutation.

Just for starters, Mike Ash seems to have uncovered that inner classes are used inside the standard library to provide destructors:

Destruction can be solved by using a class, which provides deinit. The pointer can be destroyed there. class doesn't have value semantics, but we can solve this by using class for the implementation of the struct, and exposing the struct as the external interface to the array.

So to say that the "entire purpose" of inner classes is efficiency is I think plainly false; they are used for many reasons. Efficiency may be one.

Moreover, following Mike Ash's outline in which Swift.Array calls malloc and free (read: wraps the kernel, read: mutates the free block chain), Swift.Array is directly analogous to my KernelSender. They both hide inner mutation "somewhere else" inside a struct, by moving the mutation to a place where you don't see it. So not only is inner-mutation-inside-structs an actual thing, it's being used inside the standard library.
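To make the pattern concrete, here is a hypothetical sketch of the struct-wrapping-a-class arrangement Mike Ash describes (the names TinyBuffer and Storage are invented for illustration, and the syntax is modern Swift rather than the 2015 dialect; this is not the real Swift.Array implementation):

```swift
struct TinyBuffer {
    // Only classes get deinit, so the struct delegates cleanup to a
    // private backing class, exactly as in the quote above.
    private final class Storage {
        let pointer: UnsafeMutablePointer<Int>
        let count: Int
        init(count: Int) {
            self.count = count
            pointer = UnsafeMutablePointer<Int>.allocate(capacity: count)
            pointer.initialize(repeating: 0, count: count)
        }
        deinit {
            pointer.deinitialize(count: count)
            pointer.deallocate()
        }
    }
    private let storage: Storage

    init(count: Int) { storage = Storage(count: count) }

    // Read-only access; a mutating path would additionally need
    // copy-on-write to preserve value semantics.
    subscript(i: Int) -> Int { storage.pointer[i] }
}
```

The struct is the external, value-typed interface; the pointer is destroyed in the class's deinit, which structs do not have.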

I agree with you that inner mutability "feels" wrong in the CanSend example. But the suggestion that inner mutability should never ever be used is wrong, because of Swift.Array. So we need some rule to distinguish when inner-mutability-with-structs is bad from when it is okay. If the rule was "never do any mutation inside a struct" then we would not have value-typed-arrays to begin with.

(In fact, we didn't have value-typed-arrays to begin with. I have a serious theory that it was exactly the rigid application of "never have inner mutation" inside Swift that initially led to the silly behavior. Recall that in early betas, the value/reference behavior of arrays used to hinge on whether or not the operation caused a resize--which is exactly those cases in which the array implementation needed to mutate the free block chain.)

No, do not write the “struct with inner class wrapper” so you can simply make it pseudo-mutable.

I will look forward to the patches you will submit to the standard library on this topic when it lands on GitHub later this year.

You see, MockSender does indeed have a lifecycle; it has a history of all messages sent that varies over time.

A lifecycle is not merely a description of something that varies over time. The relevant sense of "lifecycle" here is:

In object-oriented programming (OOP), the object lifetime (or life cycle) of an object is the time between an object's creation and its destruction.

...and this is what the SO poster giving the advice meant: that if our type "has a lifecycle" (read: needs a destructor, much like Mike Ash's Array), then it should be a class, because only classes have destructors. I agree with you that this rule would be stupid (just for starters, it forbids Swift.Array), but let's be clear about what kind of stupid it is. He is not making some broad point about things that change over time; he is talking about the implementation detail that structs lack deinit.

Even if he were saying that things that change over time should be classes, that would be wrong. We can and do implement value types that change over time (see: the mutating func), so saying that the choice of a value type hinges on whether something changes over time would be inconsistent with e.g. the standard library.
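For example, a minimal value type that changes over time via mutating func (modern syntax; Counter is an invented name):

```swift
// A value type whose state varies over time, with the mutation declared
// honestly in the type signature via `mutating`.
struct Counter {
    private(set) var count = 0
    mutating func increment() { count += 1 }
}

var c = Counter()  // must be `var`; calling increment() on a `let` Counter is a compile error
c.increment()
c.increment()
// c.count == 2
```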

This solution does exactly what we want. It also does it by maintaining that the protocol can be conformed to by immutable value-types and by mutable reference types. This is a validation of the “Structs Philosophy™”; MockServer is not a value-type, don't try and make it one.

I agree with the solution, but it is hardly a validation of the Structs Philosophy™. If we have two choices and the oracle says "take this path and if it doesn't work out (as it won't, most of the time) try the other one" then it is a shitty oracle. You had one job.

We went down an initial path (implementing CanSend with a struct) which was a dead end. In this particular case (because it is a contrived example that fits on our screen) the path to the dead-end was short, but in the general case, long paths to dead-ends are possible, and they motivated the original essay. An algorithm that gives us the wrong-answer-first is a shitty algorithm.

Even if we adopt the rule "always choose classes" (which I do not advocate, but I know smart people who do), that would be strictly better than the Structs Philosophy™ because "most custom data constructs should be classes" (Apple) and an oracle that picks the most likely path is far superior to an oracle that picks the unlikely path first.

Then of course there's the oracle I actually advocate (the Jeff Trick) which I think agrees with (nearly) all of our intuitions about reference and value types and only rarely (if ever?) sends us down dead ends.


owensd commented Jul 5, 2015

This is actually the wrong conclusion. The entire purpose of the “inner class” is to provide value semantics while maintaining efficiency of implementation.

Is that an opinion? Or was there a meeting when the "entire purpose" of inner classes was established and I wasn't invited? Sure, they can be used for efficiency, but they can also be used for other purposes, including inner mutation.

So to say that the "entire purpose" of inner classes is efficiency is I think plainly false; they are used for many reasons. Efficiency may be one.

It’s an opinion. Though, the fact that Swift.Array is using classes for its destructor is an implementation detail of using classes for constructing a value-type semantic interface.

I think it’s safe to say that using inner classes to simply break the mutability contract of a protocol is a poor reason that’s going to lead to all sorts of bad behavior (like the original arrays in Swift).

Moreover, following Mike Ash's outline in which Swift.Array calls malloc and free (read: wraps the kernel, read: mutates the free block chain), Swift.Array is directly analogous to my KernelSender. They both hide inner mutation "somewhere else" inside a struct, by moving the mutation to a place where you don't see it. So not only is inner-mutation-inside-structs an actual thing, it's being used inside the standard library.

I think the effect is the same as your type, but the purpose, and more importantly, the contract of the types are different. Swift.Array is not changing the mutability contract of the protocol. That's what your implementation choice is doing. This creates a confusing contract for the client, as even their let server instance is able to have its inner data change. That's just bad, and is, again, the source of many of the issues that the original Swift.Array had.

I agree with you that inner mutability "feels" wrong in the CanSend example. But the suggestion that inner mutability should never ever be used is wrong, because of Swift.Array. So we need some rule to distinguish when inner-mutability-with-structs is bad from when it is okay. If the rule was "never do any mutation inside a struct" then we would not have value-typed-arrays to begin with.

Right, I’m saying that I think that line is somewhere near the area of, “I really want to present a value-semantic type, but the performance is going to suck if I do, so let’s create an implementation with a backing inner class”. This gets us the Swift.Array and Swift.Dictionary types. However, the protocol also needs to be changed if value types are supposed to be able to modify the state of the object. In the Swift.Array case, the protocol would be updated and marked with mutating.
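A minimal sketch of that "backing inner class" arrangement, done the way Swift.Array does it, with copy-on-write so the wrapper still behaves as a value type (CowBox is an invented name, and isKnownUniquelyReferenced is the modern spelling of the uniqueness check; 2015-era Swift spelled it differently):

```swift
// A value-semantic struct backed by an inner class. Mutation is declared
// `mutating`, and shared storage is copied before being written, so no
// other copy of the struct ever observes the change.
struct CowBox {
    private final class Storage {
        var values: [Int]
        init(_ values: [Int]) { self.values = values }
    }
    private var storage = Storage([])

    var values: [Int] { storage.values }

    mutating func append(_ v: Int) {
        if !isKnownUniquelyReferenced(&storage) {
            storage = Storage(storage.values)  // copy-on-write
        }
        storage.values.append(v)
    }
}

var a = CowBox()
a.append(1)
let b = a        // b shares a's storage for now
a.append(2)      // storage is shared, so a copies before appending
// a.values == [1, 2]; b.values == [1]
```

The inner class buys the cheap copies; the mutating annotation and the uniqueness check keep the mutability contract intact.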

(In fact, we didn't have value-typed-arrays to begin with. I have a serious theory that it was exactly the rigid application of "never have inner mutation" inside Swift that initially led to the silly behavior. Recall that in early betas, the value/reference behavior of arrays used to hinge on whether or not the operation caused a resize--which is exactly those cases in which the array implementation needed to mutate the free block chain.)

Yeah, it had a bunch of strange rules. It was also completely inconsistent with how the dictionary worked as well, which made the implementation all the more confusing. It also violated the rule that I keep bringing up; it violated the mutability contract.

I will look forward to the patches you will submit to the standard library on this topic when it lands on GitHub later this year.

There’s no need, I think it fits in the criteria above.

Even if he were saying that things that change over time should be classes, that would be wrong. We can and do implement value types that change over time (see: the mutating func), so saying that the choice of a value type hinges on whether something changes over time would be inconsistent with e.g. the standard library.

Well, it depends on how pedantic we want to get, right? Even in your explanation of value types you talked about how logically it’s simply throwing the previous value away and replacing it with a new value. So, there really isn’t a lifetime of value types. They are this view of what once was. I think it could be argued that value types do not have lifetimes because they are not guaranteed to be the same instance, as a reference type is, when a mutation occurs.
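That reading of value mutation as wholesale replacement is observable in the language: a property observer on a struct-typed property fires for any mutation of the struct, because the whole value is logically swapped out (Point and Holder are invented names for illustration):

```swift
struct Point { var x = 0, y = 0 }

final class Holder {
    private(set) var replacements = 0
    var point = Point() {
        // didSet fires even for `point.x = 1`: the mutation is treated
        // as replacing the entire Point value.
        didSet { replacements += 1 }
    }
}

let h = Holder()
h.point.x = 1
// h.replacements == 1
```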

This solution does exactly what we want. It also does it by maintaining that the protocol can be conformed to by immutable value-types and by mutable reference types. This is a validation of the “Structs Philosophy™”; MockServer is not a value-type, don't try and make it one.

I agree with the solution, but it is hardly a validation of the Structs Philosophy™. If we have two choices and the oracle says "take this path and if it doesn't work out (as it won't, most of the time) try the other one" then it is a shitty oracle. You had one job.

We went down an initial path (implementing CanSend with a struct) which was a dead end. In this particular case (because it is a contrived example that fits on our screen) the path to the dead-end was short, but in the general case, long paths to dead-ends are possible, and they motivated the original essay. An algorithm that gives us the wrong-answer-first is a shitty algorithm.

But you didn't give the oracle all your information in the beginning regarding your MockServer<T>, so of course you got a questionable response out. Not only that, you already hit the first sign of problems with this approach when you tried to mutate the struct with a protocol that was non-mutating.

The questions you had at this juncture were these:

  1. Do I break the mutability contract on the protocol and hide the fact that my struct is actually going to be mutating data? or
  2. Do I move to a class to absolve myself of the mutability contract so that I can build the type how I want it to behave?

You chose option #1 and then tried to use that as the basis that the Structs Philosophy™ people were wrong. But they wouldn't have advised you down that path, as you are breaking one of the most important parts of the contract: immutability.


drewcrawford commented Jul 6, 2015

I think it’s safe to say that using inner classes to simply break the mutability contract of a protocol is a poor reason that’s going to lead to all sorts of bad behavior (like the original arrays in Swift).

I think this is the real dispute behind the whole disagreement. What, specifically, comprises the contract? What does that mean?

In the original example, we had this:

protocol CanSend {
    typealias Message
    func send(message: Message) throws
}

We have already agreed that conforming to this protocol with a mutating class is a reasonable thing to do. Since we agree on that, any consumer that uses this protocol must assume mutation is at least possible:

func sendMessage(a: CanSend) {
    a.send(Message()) //MAY MUTATE
}

Because that is true, I do not think there is any such thing as "break[ing] the mutability contract of a protocol". A protocol does not guarantee to callers that functions are non-mutating.

I think what you want is

protocol CanSend {
    typealias Message
    nonmutating func send(message: Message) throws
}

but Swift does not have that feature. Maybe it should; although I think there are edge cases that are non-trivial to solve. But it doesn't, and so there is no 'contract of non-mutability' being provided to clients that we could break even if we wanted to.
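To illustrate the point that the protocol offers callers no non-mutability guarantee: a class conformer can mutate freely even under a let binding (syntax modernized to associatedtype, throws dropped, and ClassSender invented for brevity):

```swift
protocol CanSend {
    associatedtype Message
    func send(_ message: Message)
}

final class ClassSender: CanSend {
    private(set) var sentMessages: [String] = []
    func send(_ message: String) {
        // Nothing in CanSend forbids this; the method is not (and cannot
        // be) marked nonmutating for a reference type.
        sentMessages.append(message)
    }
}

let sender = ClassSender()  // `let` pins the reference, not the state
sender.send("hello")
// sender.sentMessages == ["hello"]
```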

This creates a confusing contract for the client, as even their let server instance is able to have its inner data change. That's just bad,

Not bad at all. For example

let locale = NSLocale.autoUpdatingCurrentLocale()
locale.preferredLanguages() //not necessarily
locale.preferredLanguages() //the same value

NSLocale is implemented by a class for historical reasons but its animal spirit is very much a value type. In 2015 I think a struct with inner mutability is clearly the right way to re-implement this API, and the other ways are all obviously wrong. If you disagree I would be fascinated to hear competing proposals on this.

I think it could be argued that value types do not have lifetimes because they are not guaranteed to be the same instance, as a reference type, when a mutation occurs.

That is a really good point, thanks.

You chose option #1 and then tried to use that as the basis that the Structs Philosophy™ people were wrong. But they wouldn't have advised you down that path, as you are breaking one of the most important parts of the contract: immutability.

Well here we come back to: "what is the contract?" Because in my mind, there is no contract of non-mutability in a protocol. Swift does not even allow such a thing to exist.

But even if we try to infer one, I think the specific example in the post is surprisingly resistant to "naive" definitions of immutable contracts. Looking back at the example:

extension MockSender : CanSend {
    typealias Message = T
    func send(message: T) {
        self.sentMessages.append(message) //error: Immutable value 
        //of type '[T]' only has mutating members named append
    }
}

MockSender mutates sentMessages, but that is not part of the CanSend contract. So considering the client case

func sendMessage(a: CanSend) {
    a.send(Message())
    print("\(a.sentMessages.count)") //compile error
}

The client cannot know–and in fact has no way to find out–that the function actually mutated anything. Only certain privileged callers, who know the details of MockSender in particular, could even find out that any mutation has taken place.

If a contract breaks in the forest, and nobody's around to see it, is it really broken?


owensd commented Jul 6, 2015

Yes, we have different ideas on what mutability means. I don't agree with your assertions; I'll try and explain my point of view below.

I think it’s safe to say that using inner classes to simply break the mutability contract of a protocol is a poor reason that’s going to lead to all sorts of bad behavior (like the original arrays in Swift).

I think this is the real dispute behind the whole disagreement. What, specifically, comprises the contract? What does that mean?

I think the contract for protocols is contextual. The protocol means something different to value-type conformers and consumers than it does for reference-type conformers and consumers.

protocol CanSend {
    typealias Message
    func send(message: Message) throws
}

This protocol makes the following claims:

  1. For value types, send() is a function that must be implemented and cannot modify the underlying value type.
  2. For reference types, send() is a function that must be implemented.

This part of Swift we agree on, though: Swift cannot make a claim about mutability for reference types. However, it can, and does, make mutability claims about value types.

The other thing about Swift, and I think Swift.Array is a great case study for this, is that value-types should always behave like value types. When they do not, you get behavior that you simply cannot reason about. So, any value-type that uses internal reference types still needs to look, feel, and act like a value-type. Doing otherwise would violate a different contract that I would call the "value type semantics contract".

let locale = NSLocale.autoUpdatingCurrentLocale()
locale.preferredLanguages() //not necessarily
locale.preferredLanguages() //the same value

NSLocale is implemented by a class for historical reasons but its animal spirit is very much a value type. In 2015 I think a struct with inner mutability is clearly the right way to re-implement this API, and the other ways are all obviously wrong. If you disagree I would be fascinated to hear competing proposals on this.

Well, first of all, autoupdatingCurrentLocale is a static function, so by definition there is no state for the value type for us to even worry about changing. Secondly, preferredLanguages() is also a class method, so you cannot call it from an instance in Swift.

let locale = NSLocale.autoupdatingCurrentLocale()
NSLocale.preferredLanguages()
NSLocale.preferredLanguages()

Maybe you had something else in mind?

Well here we come back to: "what is the contract?" Because in my mind, there is no contract of non-mutability in a protocol. Swift does not even allow such a thing to exist.

But even if we try to infer one, I think the specific example in the post is surprisingly resistant to "naive" definitions of immutable contracts. Looking back at the example:

extension MockSender : CanSend {
    typealias Message = T
    func send(message: T) {
        self.sentMessages.append(message) //error: Immutable value 
        //of type '[T]' only has mutating members named append
    }
}

MockSender mutates sentMessages, but that is not part of the CanSend contract.

So this is where I disagree. If MockSender is a value-type, the protocol clearly marks the function send() as being unable to modify its own values. If MockSender is a reference-type, then the protocol neither enforces nor implies any mutability rules.

And I think this brings us full circle on one of the reasons people are arguing to use value-types. The type signatures for protocols and structs give the callers the ability to know what you accurately claimed about reference types: "The client cannot know–and in fact has no way to find out–that the function actually mutated anything."

If value types have value-type-semantics, that statement is absolutely knowable for value-types. This puts the burden of not bypassing the mutability contract (which, frankly, is just part of the value semantics contract) on protocol and type signatures.

Here's another example breaking the two things I talked about: mutability contract and the value-type semantics.

protocol P {
    var name: String { get }
    func say(message: String)
}

struct S : P {
    private class W {
        var name: String = ""
    }
    private let _wrapper = W()

    var name: String { return _wrapper.name }

    func say(message: String) {
        _wrapper.name = message
    }

    init(name: String) {
        _wrapper.name = name
    }
}

let s = S(name: "Bob")
s.name                   // "Bob"
s.say("WTF?")
s.name                   // "WTF?"

let ss = s
ss.name
ss.say("CRAP!")
ss.name                  // "CRAP!"
s.name                   // "CRAP!"

I'm mutating state with non-mutating functions on value types, and I'm not playing by the value semantic rules.


drewcrawford commented Jul 11, 2015

The protocol means something different to value-type conformers and consumers than it does for reference type conformers and consumers.

The problem is that a consumer does not (cannot) know if the thing is a value type or a reference type.

I mean if we have

protocol WhatAmI { }

func foo(a: WhatAmI) {
    //a: value type or reference type?
}

The language simply does not give us any way to specify whether a is a value type or a reference type. It conforms to the protocol, and that is all.

Now you could have

protocol SomeClassOnlyProtocol: class { }

but there is no equivalent protocol Foo: struct syntax, and at any rate the original example didn't use this. You could also have

func foo<A: WhatAmI where A: Struct>(a: A) { }

but again we are hypothesizing features that are not in the language.

Now looking at your bistable formalism:

  1. For value types, send() is a function that must be implemented and cannot modify the underlying value type.
  2. For reference types, send() is a function that must be implemented.

Even assuming this is a reasonable dichotomy (which I don't concede), since the consumer does not know whether we are in case 1 or 2, then the rest of it is useless. The "contract" you are constructing provides no usable guarantees.

There may very well be cases where we would like to provide some contract about mutability. But since we can't, the question of whether or not it is a good idea is moot and doesn't need to be litigated. The thing isn't possible. The end.

Secondly, preferredLanguages() is also a class method, so you cannot call it from an instance in Swift.

You're right. I meant to choose an instance method, I misread the docs.

let locale = NSLocale.autoUpdatingCurrentLocale()
locale.objectForKey(NSLocaleLanguageDirection) //not necessarily
locale.objectForKey(NSLocaleLanguageDirection) //the same value

If MockSender is a value-type, the protocol clearly marks the function send() as being unable to modify its own values. If MockSender is a reference-type, then the protocol neither enforces nor implies any mutability rules.

The thing is, we don't know. It's like Schroedinger's cat. Is CanSend a value type or is it a reference type? Yes.

So the meaning of the protocol is simply that the thing supplied has methods such-and-such, that take such-and-such arguments. We do not know if the thing has reference semantics, value semantics, or some new and exciting semantics to be announced in Swift 3. The fact that if we opened the box, we would find a particular thing, that follows particular rules, is irrelevant, because we have not done that. The box is closed. The behavior of the thing is the superposition of value types and reference types, which is a mutating behavior for every function call.

If value types have value-type-semantics, that statement is absolutely knowable for value-types.

Well it is not knowable, for starters, because of inner mutability. But I think perhaps you are trying to express this as a moral imperative rather than as a fact.

But I think, as a moral imperative, it is fairly unworkable. [argument removed]


drewcrawford commented Jul 11, 2015

I've thought of an argument that is both simpler and more elegant.

If we consider this case:

struct Foo {
    let a: Int
    func isEqual(b: Int) -> Bool { return b == a }
}

I think we would both agree that clearly isEqual should be declared non-mutating.

However, when we look at the x86 assembly, the claim is a lie:

cmp eax, ebx   ; modifies zflags
jne addr       ; modifies pc

The thing is, the idea of a "non-mutating function" is less a real thing and more of a fairy tale for children. There is no such thing, and there never has been. Maybe it exists in some kind of philosophical, platonic ideal sense, but in the real world, no.

You might not see what x86 has to do with Swift programs, but imagine for a moment that we're writing an x86 emulator. We might have this:

struct ISA {
    var zflags : Int64
    var pc : Int64
    mutating func cmp(a: Register, b: Register)
    mutating func jne(a: Register, b: Register)
}

Here we want to declare our instructions as mutating because someone working at this level of abstraction is going to want to know that zflags and pc may change.

Meanwhile, someone at a much higher level of abstraction will not:

//pseudocode
struct HighLevelOps {
    var isa: ISA
    func compare(a: Int, b: Int) -> Bool { //non-mutating
        self.isa.cmp(a, b) //inner mutation
        return (self.isa.zflags & ZFLAGS_EQUAL) != 0
    }
}

The insight from this exercise is that in a sufficiently large program, that has sufficiently-deep levels of abstraction, either:

  1. Everything must be mutating (since the fundamental operations mutate, and we call them), or
  2. We must lie to the caller, pretend we don't mutate, and hope they don't notice

Now if you are writing a "shallow"-shaped program, that has basically one or two levels of abstraction--say it's a GUI drawing app, or something--then it may be perfectly reasonable to never lie, because somebody is likely to notice in the average case.

However if you are working on a "deep" program, where there are dozens of layers of abstractions, then lying may be the only workable approach. It's likely that somebody, way the hell down in the basement, needs to mutate, but telling that to somebody on the surface is a failure of encapsulation.

I suspect this whole debate is arising because in the back of your mind you are considering a program that is shallow-shaped, and in the back of my mind I am considering a program that is deep-shaped, and the positions follow from those priors.
