
@jeremytregunna
Created February 11, 2014 19:37

Let's replace Objective-C

I liked the enthusiasm of the post Ash Furrow put up on his blog today, but I have a hard time seeing it as an honest proposal for a starting point. Let's face it: we have compatibility issues that we need to address. So, what can we do?

Even if we fully acknowledge that there are some things we can only reasonably do by breaking the ABI, and that we'd like some forward-looking design, let's start by deciding what base-level abstractions we need, irrespective of the language. This list is in no particular order:

  • The ability to call into Objective-C and assembly; let's face it, we need it.
  • A partial reference-tracking mechanism, maybe a full-blown collector to handle cycles, if justifiable.
  • Type safety with inference.
  • Constructs and flow-control operations designed for the way we write code today, and the way we will tomorrow.
  • Virtual machines are a fashion; we have real hardware to run on.

Legacy

Let's face it, there's a lot of code out there, and we're still going to want to use a lot of it. This code is written in Objective-C, C++, C and assembly, by and large. We need a way to call into this code from the new language proper, and a way of passing type information through to its runtime system (where applicable). This will be the hardest problem of them all, I'm sure.
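
To make that concrete, here is a rough illustration, not a proposal: the syntax below is Swift, used purely as a stand-in for whatever the new language might look like, and it assumes the compiler can import C and Objective-C declarations directly.

```swift
import Foundation

// Calling straight into legacy C: strlen comes from the C standard library,
// with its type information imported by the compiler rather than wrapped by hand.
let length = strlen("hello")

// Calling into an Objective-C class (NSDateFormatter underneath); the bridge
// carries type information both ways, so the call site stays fully checked.
let formatter = DateFormatter()
formatter.dateStyle = .medium
print("\(length) characters, today is \(formatter.string(from: Date()))")
```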

Memory management

When we made the transition from manual reference counting to automatic reference counting, all that did was replace one well-known cognitive load (the rules, plus remembering to apply them at the right time in the flow of your code) with another (those rules baked into a system, plus remembering where you still have to step in yourself). It's a subtle shift, but a very real one. If ARC also came with a runtime marker for candidate objects that we are reasonably sure might result in a retain cycle, it would lighten that new cognitive load further.
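
As a minimal Swift sketch of that residual load (the class names are made up for illustration): ARC frees these objects only because the programmer remembered the weak marker; forget it, and the cycle silently leaks with no warning from the runtime.

```swift
final class Child {
    // Without `weak`, Parent and Child would retain each other forever;
    // ARC has no runtime marker to flag this candidate cycle for us.
    weak var parent: Parent?
}

final class Parent {
    let child = Child()
    init() { child.parent = self }
}

var parent: Parent? = Parent()
parent = nil   // both objects are deallocated only because of `weak`
```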

Type safety

instancetype is a great start, but it only applies to return values, not parameters or local variables. It needs to be everywhere. Objective-C is a weakly typed system, and that's fine; there are API concerns in moving from that weak model to a stricter one, but it should be done, even if only in the form of "optional typing". This gives the compiler a better chance to spot potential problems before we ship code. Type systems aren't always right, so there has to be some way to overrule them (annotations), but by and large, such a new language should aim to minimize those.
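
As one possible shape for optional typing with inference, a sketch only, again using Swift syntax as the stand-in: the compiler infers types where it can, explicit annotations overrule it where it can't, and the weakly typed escape hatch is still there but visible at the call site.

```swift
// Inferred: the compiler knows `names` is [String] without being told.
let names = ["instancetype", "id"]

// Annotated: an explicit type where we want to overrule or document inference.
let retryCount: UInt8 = 3

// The weak-typing escape hatch still exists, but it is opt-in and checked.
let anything: Any = names
if let strings = anything as? [String] {
    print(strings.count, retryCount)
}
```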

Control flow

We need to stop worrying about the explicit management of flow in our programs. We need the language to promote thinking in terms of what you want to happen, rather than the steps involved in doing it. Part of this could involve language support for reactive styles of development, as well as other forms of declarative programming. I can't tell you what this should look like, and I'll avoid skewing the discussion with what I'd like it to look like. Suffice it to say, thinking about the way we perform work as individual steps works great in a single-threaded environment. It even scales a little to multiple threads working on independent sets of tasks. It does not scale to the level of concurrency we can make use of today, let alone what we'll be able to use tomorrow.
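
To illustrate the shift in style only (not a concrete proposal), compare spelling out the steps with declaring the result, once more using Swift as the stand-in:

```swift
let prices = [4.99, 12.50, 3.25, 20.00]

// Step-by-step: we own the loop, the accumulator, and the evaluation order,
// which is exactly the part that stops scaling across many cores.
var imperativeTotal = 0.0
for price in prices where price > 5.0 {
    imperativeTotal += price * 0.9
}

// Declarative: we state what we want; iteration order and scheduling are the
// implementation's problem, which leaves room for it to parallelize the work.
let declarativeTotal = prices
    .filter { $0 > 5.0 }
    .map { $0 * 0.9 }
    .reduce(0, +)

print(imperativeTotal, declarativeTotal)
```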

Virtual Machines

There has been some talk of building a new virtual machine for a new language to replace Objective-C. Unfortunately, virtual machines buy you nicer implementations of languages at the expense of costlier management of a fake machine. I'm not saying there's no benefit to an intermediate form your language can compile to, one that isn't tied to any single architecture or CPU implementation, but we already have that: LLVM. We do not really need another layer of indirection on top of it.

Let's face it, there's a reason we tend to advance in software by adding layers of indirection: it helps us get more things done. However, indirection for the sake of indirection isn't a good idea, and it can often harm our ability to get simple tasks done quickly. I've yet to see a compelling argument anywhere for a real virtual machine to support a language. The JVM and the CLR (which runs C#) derive most of their value from having a portable intermediate representation to target, not from the fake processor architecture they implement.
