@johnhamelink
Last active June 8, 2016 06:08
My Elixir Deployment Wishlist

Foreword

Based on my recent experience with deployment, I've become rather frustrated with the deployment tooling in Elixir. This document is the result of me thinking to myself, "I wish we had x...". It isn't meant to dishearten anyone who has built tooling for Elixir - thank you so much for what you've done. It's meant more as what I personally see as something that would help a lot of Erlang/Elixir newbies like myself get deploying quickly and efficiently.

1. Release files should be templates

It should be possible to add custom configuration to the bootstrap scripts. This would allow plugins to add extra steps to the startup / shutdown / upgrade procedure. One way to implement this would be to make all scripts which handle bootstrapping or controlling the machine .eex templates, which would allow other parts of the release system to inject new functionality where needed.
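As a rough sketch of what I mean (the template path, assigns and hook contents here are all made up):

# Hypothetical: at release build time, each control script is rendered from
# an .eex template, so plugins can contribute extra steps beforehand. The
# template would emit the hooks with <%= Enum.join(@pre_start_hooks, "\n") %>.
hooks = ["echo 'running plugin pre-start step...'"]

boot_script =
  EEx.eval_file("rel/templates/boot.sh.eex",
    assigns: [pre_start_hooks: hooks])

File.write!("rel/myapp/bin/myapp", boot_script)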

2. vm.args is generated at runtime

vm.args contains information about how the VM should run your Elixir app. If you don't know what the server is like beforehand, it can be hard to produce a vm.args that fits your needs. Creating the vm.args as part of the bootstrapping process would allow for more efficient use of the BEAM, as well as configuration of things like the name and sname, which often depend on the hostname of the machine.

It's been established that this is actually already possible. Better documentation around this would be great.
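For example, a vm.args generated at boot might end up looking like this (all values are placeholders):

## Node name derived from the machine's hostname at boot
-name myapp@ip-10-0-0-12.ec2.internal
-setcookie some_secret_cookie
## VM tuning based on the host's actual spec
+K true
+P 1048576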

3. Automatic init script generation

Init scripts remain largely the same across projects. It makes sense to provide defaults for each init type, in the same way foreman does.
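(For comparison, foreman generates these with a single command, e.g. foreman export upstart /etc/init.)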

We agree that this is not the responsibility of exrm directly, but rather that of a plugin.

4. Release configuration should be held in an exs file

The release config file should be easy to configure, and match closely the patterns that people are used to following in config.exs.

5. Every step is overridable and extendable

I think the phases of the release process should be simple to add to and reason about. I was imagining a config that has something like this in it:

def deploy_pipeline(foo) do
  foo
  |> PhoenixOverlay.compile_static_assets
  |> Exrm.build
  |> CustomOverlay.do_thing_a
  |> CustomOverlay.do_thing_b
  |> ExrmDebPlugin.package
  |> AptPlugin.release
end

Where PhoenixOverlay, ExrmDebPlugin and AptPlugin are external plugins, and CustomOverlay is some internal config specific to that app.

The release configuration file can override steps with custom ones, add new steps, and add plugin steps to the pipeline.

5.1 Output Plugins

External Output plugins would be able to take a successful build, and do something with it (exrm-rpm is already capable of this). This could be:

  • SCP the result to a server and upgrade
  • Build a Debian package - DONE! exrm_deb now exists.
  • Send a notification on Slack

5.1.1 Example A: Debian/RPM/Pacman Package plugin

Advantages over not being a plugin:

  • Can use information from mix.exs automatically
  • Upgrades can be handled automatically through apt hooks
  • Ability to create metapackages for umbrella apps

I've begun work on a deb plugin, which you can check out here: https://github.com/johnhamelink/exrm-deb

5.1.2 Example B: SCP Deploy plugin

Advantages:

  • Can detect automatically when an update is happening
  • Simple to set up when deploying to a non-cloud VPS or bare metal

5.1.3 Example C: Slack Notification

Advantages:

  • Can provide information about:
    • which apps changed (if in an umbrella)
    • what environment they were built to deploy to
@bitwalker

Thanks for the feedback! Stuff like this is how we improve as a community rather than stagnate, so you never need to fear providing constructive criticism (at least you shouldn't have to, but as we know, this is the internet).

  1. Modular boot scripts are something I've desired for some time now, but sadly I have not had the bandwidth to implement them; if you dig through the issues tracker for exrm, you'll notice that this has come up from time to time. I've also discussed this with @tsloughter, of relx, and we're in agreement that this is something much needed. As for getting it implemented, the biggest blocker is simply developer time. I would love to start on this one in particular first and foremost, but have found myself with very limited time as of late, leaving me only enough time to handle ongoing maintenance and bug fixes. I know deployment is a big issue for all of us using Elixir in production, but there hasn't been a great deal of contribution from the community. There have been contributions of course, but to use exrm as the example, I don't have anyone but myself actively working on it/supporting it. It would be great to have even one additional developer who is interested in helping out to split the work with, because it would open up the opportunity to get bigger features implemented.
  2. vm.args can be overridden with your own custom one, both at release packaging time, and/or during deployment/boot, depending on your needs. From what I've gathered in general for those that require custom vm.args, they are provisioning them with something like puppet or chef, so baking something into exrm itself has not presented itself as a particularly worthwhile endeavour. If you could go into further detail on what you'd like to see here, I can either provide better docs to cover your use case, or we can get an issue tracked to cover what features need to be implemented to solve blocking issues.
  3. In the past, distro-specific packaging mechanisms have been handled via exrm plugins (one such plugin is exrm-rpm). Considering the manpower issue I raised in point 1, it's not practical for me to make this part of exrm today, because I simply can't maintain init scripts for platforms I'm not using. I would prefer to see these be built as exrm plugins instead, and maintained by those motivated to support those platforms - not only does it keep the maintenance burden down on exrm, but it also means better support for the community.
  4. I have no issue with updating the config file to something .exs based, and in fact have intended to do this at some point, however the demand has been all but non-existent, so the priority has been quite low. If this is something you're interested in contributing, I'd gladly merge it, and the difficulty level is quite low, just requires time.
  5. Unless I'm misunderstanding you, this already exists today, via the plugin system. You can't override steps completely, but you can modify the release configuration just prior to it being handed off to relx (the before_release hook), which effectively is the same thing. I can't think of anything you would want to do by overriding steps that you can't do with a plugin.

5.1. You can do this via the after_release hook in a plugin. This is in fact how the exrm-rpm package works if I recall.

5.1.1. All plugins can access the current project's mix.exs, so I don't think that one is an issue. Handling upgrades through packaging hooks is also something that can be done with a plugin (exrm-rpm does this). The only one that (currently) cannot be done is creating a metapackage of an umbrella app, but that's because umbrella apps are not fully supported in exrm the way I would like them to be. When support for building a release of a full umbrella is added, then this becomes a non-issue.

5.1.2. Could you clarify why this plugin would need to be part of exrm and not a library? My primary red flag is how to support such things: I neither use this approach for deploys, nor do I have an adequate way to test it. I do not want to offer anything in exrm that I can't support, or that I'm not already pretty much on the hook for implicitly because of decisions early on in the project. The advantage to keeping plugins such as these external is that interested parties can build and maintain them properly. The caveat to all of this of course is that if exrm in some way prevents things like this from being built, then I'm open to developing additional hooks or configuration mechanisms to keep exrm out of the way as much as possible. There are certain limitations to how releases can be built and deployed, but as far as I'm able, I want to offer the capability to customize your releases to the extent possible.

5.1.3. Definitely would have to be an external plugin, but would be trivial to do via the after_release hook, since all the information you mentioned is present at that time.

Notes

  • I suspect that by plugins you meant hooks in the boot script for a release, in which case, my response to 1 is the most applicable. If I can find time to rebuild the boot scripts from scratch to support a more modular structure, I see no reason why we can't add a way to inject your own behaviour in the various subcommands. However it's important to note that at boot time, there isn't nearly as much information available as there was during the release packaging process. Some of this can be overcome with release plugins which create metadata files or something which can then be consumed by the boot script plugins, but obviously we need the boot script in play first.
  • I want to make sure you understand that most of my responses are based on my interpretation of the feature described, pending further information. If I seem to not be on the same page regarding some detail, please correct me and I'll re-evaluate.
  • I also want to be clear about my primary concerns with exrm at the moment:
    • reliability (I want building a release to be as rock solid as possible, and for issues to be something I can produce useful hinting for)
    • stability (I want to make sure that changes to the project do not require significant changes to users' existing workflows; when people get CI and the like set up, it can be a pain to go back and reconfigure everything if backwards-incompatible changes are made while they try to keep pace with project developments. This can be handled with a combination of proper versioning, deprecation cycles, etc., but part of stability is also not introducing mountains of complexity that cause unforeseen interactions when someone upgrades exrm - this is already a pretty major issue with the current boot script).
    • maintainability (I want to encourage users to contribute, and I want to do so by keeping the codebase relatively small and focused, delegating behaviour specific to an environment or configuration up to the community to develop and support; in this way exrm acts as a foundation upon which things can be built, rather than a monolithic beast which can never satisfy the needs of everyone fully)
    • usability (this one comes last in the list, but it's very important to me; I am also very aware of the current usability issues with exrm, though this is in large part due to the legacy of releases in general - I am very interested in improving this however possible)

My own list of issues

  • Manpower and/or lack of time for feature development
  • Docs really need some love
  • Boot script complexity has reached a level where I'm loath to introduce any new functionality unless it does not affect core behaviour. I see this being fixed by rewriting the boot scripts, but as I've noted, this is a time-intensive task, and one of significant complexity. It not only needs to be done right in order for it to achieve the goal of a more modular, more maintainable script, but it also needs to preserve the current behaviour (the only exception being where there is a deprecation cycle for it, or it requires minimal changes)
  • Better error reporting

@johnhamelink
Author

Thanks for getting back to me so quickly @bitwalker!

Some responses to some of the points listed in your response (I've also updated the initial post to reflect clarifications you've made):

Modular boot scripts are something I've desired for some time now, but sadly I have not had the bandwidth to implement them; if you dig through the issues tracker for exrm, you'll notice that this has come up from time to time. I've also discussed this with @tsloughter, of relx, and we're in agreement that this is something much needed. As for getting it implemented, the biggest blocker is simply developer time. I would love to start on this one in particular first and foremost, but have found myself with very limited time as of late, leaving me only enough time to handle ongoing maintenance and bug fixes. I know deployment is a big issue for all of us using Elixir in production, but there hasn't been a great deal of contribution from the community. There have been contributions of course, but to use exrm as the example, I don't have anyone but myself actively working on it/supporting it. It would be great to have even one additional developer who is interested in helping out to split the work with, because it would open up the opportunity to get bigger features implemented.

I'd be happy to explore this! Would you be able to explain how you envisioned this would work? If you could propose a plan, I'd gladly attempt to submit a PR 😄

vm.args can be overridden with your own custom one, both at release packaging time, and/or during deployment/boot, depending on your needs. From what I've gathered in general for those that require custom vm.args, they are provisioning them with something like puppet or chef, so baking something into exrm itself has not presented itself as a particularly worthwhile endeavour. If you could go into further detail on what you'd like to see here, I can either provide better docs to cover your use case, or we can get an issue tracked to cover what features need to be implemented to solve blocking issues.

In my use case, I'm using a pull-based deploy system, making use of user_data scripts on AWS EC2 servers. This means I can't know anything about the machine (including its hostname or spec) until boot time. It sounds like the current implementation would actually satisfy my use case, but I had no idea this was possible - I knew I could replace the vm.args at release packaging time, but of course in my case that's no use.

In the past, distro-specific packaging mechanisms have been handled via exrm plugins (one such plugin is exrm-rpm).

Wow, how'd I miss this? Might I suggest adding a section to the readme listing plugins that work with exrm? I'll submit a PR for this.

Considering the manpower issue I raised in point 1, it's not practical for me to make this part of exrm today, because I simply can't maintain init scripts for platforms I'm not using. I would prefer to see these be built as exrm plugins instead, and maintained by those motivated to support those platforms - not only does it keep the maintenance burden down on exrm, but it also means better support for the community.

I agree - this makes sense for everyone. This also seems like a good way for me to learn about the codebase. I will begin contributing by building a Debian plugin.

I have no issue with updating the config file to something .exs based, and in fact have intended to do this at some point, however the demand has been all but non-existent, so the priority has been quite low. If this is something you're interested in contributing, I'd gladly merge it, and the difficulty level is quite low, just requires time.

Hmm, just to clarify - which config file are you referring to in this case? I didn't make it clear I was referring to the next section.

Unless I'm misunderstanding you, this already exists today, via the plugin system. You can't override steps completely, but you can modify the release configuration just prior to it being handed off to relx (the before_release hook), which effectively is the same thing. I can't think of anything you would want to do by overriding steps that you can't do with a plugin.

So what I was thinking of when I wrote this section was a clear explanation of the phases of the release process. I was imagining a config that has something like this in it:

def deploy_pipeline(foo) do
  foo
  |> PhoenixOverlay.compile_static_assets
  |> Exrm.build
  |> CustomOverlay.do_thing_a
  |> CustomOverlay.do_thing_b
  |> ExrmDebPlugin.package
  |> AptPlugin.release
end

Where PhoenixOverlay, ExrmDebPlugin and AptPlugin are external plugins, and CustomOverlay is some internal config specific to that app. Does that provide better context?

5.1. You can do this via the after_release hook in a plugin. This is in fact how the exrm-rpm package works if I recall.

RE notes on 5.1.1 - 5.1.3: all of those suggestions were meant as examples of what the plugin system could be used for; I totally agree with you that none of them should become part of exrm! I was thinking about the different types of potential users and what they would look for in a release plugin: the user who's comfortable with something like Capistrano, the user who's deploying across a cluster of machines, etc.

I suspect that by plugins you meant hooks in the boot script for a release, in which case, my response to 1 is the most applicable. If I can find time to rebuild the boot scripts from scratch to support a more modular structure, I see no reason why we can't add a way to inject your own behaviour in the various subcommands.

That is most likely correct 😄

However it's important to note that at boot time, there isn't nearly as much information available as there was during the release packaging process. Some of this can be overcome with release plugins which create metadata files or something which can then be consumed by the boot script plugins, but obviously we need the boot script in play first.

That's interesting - I wasn't aware that this was the case. Why is that? Would it make more sense to build a dedicated metadata plugin that other plugins could query, making it easier for plugins to be maintained as exrm (and BEAM, I guess) changes?

I also want to be clear about my primary concerns with exrm at the moment:

  • reliability (I want building a release to be as rock solid as possible, and for issues to be something I can produce useful hinting for)
  • stability (I want to make sure that changes to the project do not require significant changes to users' existing workflows; when people get CI and the like set up, it can be a pain to go back and reconfigure everything if backwards-incompatible changes are made while they try to keep pace with project developments. This can be handled with a combination of proper versioning, deprecation cycles, etc., but part of stability is also not introducing mountains of complexity that cause unforeseen interactions when someone upgrades exrm - this is already a pretty major issue with the current boot script).
  • maintainability (I want to encourage users to contribute, and I want to do so by keeping the codebase relatively small and focused, delegating behaviour specific to an environment or configuration up to the community to develop and support; in this way exrm acts as a foundation upon which things can be built, rather than a monolithic beast which can never satisfy the needs of everyone fully)
  • usability (this one comes last in the list, but it's very important to me; I am also very aware of the current usability issues with exrm, though this is in large part due to the legacy of releases in general - I am very interested in improving this however possible)

I think we can all get behind that 😄

Manpower and/or lack of time for feature development

As I mentioned above, I can begin contributing plugins in order to learn how exrm works.

Docs really need some love

I agree.

Boot script complexity has reached a level where I'm loath to introduce any new functionality unless it does not affect core behaviour. I see this being fixed by rewriting the boot scripts, but as I've noted, this is a time-intensive task, and one of significant complexity. It not only needs to be done right in order for it to achieve the goal of a more modular, more maintainable script, but it also needs to preserve the current behaviour (the only exception being where there is a deprecation cycle for it, or it requires minimal changes)

I really like the way Ember handles deprecated code:

  • Release #1: Add the new functionality as a feature toggle
  • Release #2: Add a deprecation to the old functionality
  • Release #3: Switch the polarity of the feature toggle
  • Release #4: Remove old code

Thoughts?

Better error reporting

I agree, but I also have no idea how this would be implemented at this point 😄

@johnhamelink
Author

@bitwalker quick update. I've released an initial version of a .deb plugin: https://github.com/johnhamelink/exrm-deb

@slashdotdash

@johnhamelink Having to build a release on exactly the same architecture as the deployment target has caused me the most pain.

In reality it means pulling the source and building the release on the production box.

Perhaps this simply needs better documentation, as I've struggled with building a release using both an EC2 instance and VirtualBox/Vagrant locally, both using the same versions of Erlang, Elixir and Ubuntu. Neither release would run on the target production box.

@bitwalker

@johnhamelink I'm going to be replying piecemeal, as I think over some of these bits. First up, modular boot scripts:

I'd be happy to explore this! Would you be able to explain how you envisioned this would work? If you could propose a plan, I'd gladly attempt to submit a PR 😄

Sure, here's my general outline:

  • Refactor the script into a lean core (basically arg-parsing, env setup, global helper functions)
  • Extract command-handlers into sub-scripts which are imported in the core, to keep command-specific junk in their own files
  • Introduce a plugin structure:
    • A plugin is a directory of shell scripts named after commands plus a pre/post stage, e.g. pre_upgrade
    • When the core script runs, it determines what plugins are available by iterating a plugins directory relative to the boot script
    • When a command is executed, the pre- plugins are run, then the core command, then the post- plugins, by iterating over each of the plugins, determining whether a relevant command handler is available for that plugin and stage (pre/post)

This gives us the greatest flexibility for extension, and easier maintenance through better code organization and division into smaller, more maintainable units.

An example of the plugin bit I've just sketched up is something like the following (bear in mind this is super simplified to show the behaviour; the implementation will rely on this basic mechanism, but will obviously look different):

#!/bin/sh
# main.sh

__modules_path="$(dirname "$0")/plugins"
SOME_VAR="foobar"

# Source the named handler from every plugin directory that provides one.
__call_plugins() {
    __handler_to_call="$1"
    for plugin in "$__modules_path"/*; do
        if [ -d "$plugin" ] && [ -f "$plugin/$__handler_to_call" ]; then
            . "$plugin/$__handler_to_call"
        fi
    done
}

upgrade() {
    __call_plugins "pre_upgrade"
    echo "upgrading"
    __call_plugins "post_upgrade"
}

upgrade

# plugins/test/pre_upgrade
#!/bin/sh
echo "test: pre_upgrade: SOME_VAR=$SOME_VAR"

# plugins/test/post_upgrade
#!/bin/sh
echo "test: post_upgrade"

This all results in the following output:

$ ./main.sh
test: pre_upgrade: SOME_VAR=foobar
upgrading
test: post_upgrade

Thoughts?

@bitwalker

@slashdotdash

Having to build a release on exactly the same architecture as the deployment target has caused me the most pain.
In reality it means pulling the source and building the release on the production box.

Cross-compilation for another architecture is documented, and doesn't require you to build on the production box. You just pull the Erlang lib directory from the prod machine (or from a machine running the same OS/architecture) into your build machine, and provide a rel/relx.config containing {include_erts, "path/to/erlang/lib"}. Where you can run into trouble with this, of course, is with C extensions, where you need to provide the necessary cross-compilation toolchain for that extension. Some other alternatives:

  • Run a build machine of the same architecture as your prod machine. Build your releases there.
  • Run a Docker VM of the same architecture as your prod machine, build the release, then export it to wherever for deployment, or just deploy from the VM
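For reference, the rel/relx.config term mentioned above is just the following (the path is a placeholder for wherever you put the pulled-down Erlang lib directory):

%% rel/relx.config
{include_erts, "/path/to/erlang/lib"}.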

Perhaps this simply needs better documentation, as I've struggled with building a release using both an EC2 instance and VirtualBox/Vagrant locally, both using the same versions of Erlang, Elixir and Ubuntu. Neither release would run on the target production box.

Is this a reported issue I can look at? If the release won't run, even though the OS architecture is the same, then something else is likely the issue. At work, I run releases via ECS (I'm not doing hot upgrades), and haven't had any issues so far, though in that scenario I am running on my own image created via Docker.

@slashdotdash

@bitwalker Thank you for taking the time to reply.

I think you've actually identified the crucial step that I've been missing (copying the Erlang lib from the target box to the build box, as quoted below).

You just pull the Erlang lib directory from the prod machine

Will have another go at creating a release with exrm on a separate build box with this in mind. Hopefully it will solve my pain point.

@johnhamelink
Author

Here's my general outline:

  • Refactor the script into a lean core (basically arg-parsing, env setup, global helper functions)
  • Extract command-handlers into sub-scripts which are imported in the core, to keep command-specific junk in their own files
  • Introduce a plugin structure:
    • A plugin is a directory of shell scripts named after commands plus a pre/post stage, e.g. pre_upgrade
    • When the core script runs, it determines what plugins are available by iterating a plugins directory relative to the boot script
    • When a command is executed, the pre- plugins are run, then the core command, then the post- plugins, by iterating over each of the plugins, determining whether a relevant command handler is available for that plugin and stage (pre/post)

Ok, I like that strategy. I'll probably have a crack at this starting on Friday. As far as things like vm.args runtime configuration go, how would that work in this case? Could we run a .exs file that pulls in data about the system and produces a vm.args from a template?
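Something like this, perhaps (a very rough sketch - the paths and assigns are made up):

# Run by a pre-start hook: render vm.args from whatever the machine
# actually looks like at boot. Paths and assign names are hypothetical.
{:ok, hostname} = :inet.gethostname()

vm_args =
  EEx.eval_file("releases/0.0.1/vm.args.eex",
    assigns: [node_name: "myapp@#{hostname}"])

File.write!("releases/0.0.1/vm.args", vm_args)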

I've also added @bitwalker to a private gitlab project I've created which is effectively a skeleton demonstrating how our system is currently built and deployed. I hope it further helps to explain where I am with exrm and how it fits into my current usecase. If anyone else wants access, let me know and I'll add them (I'd rather not make it public right now though).

@slashdotdash: @andrewvy has mentioned this as being desirable too, so it's good to know that it's possible. I wonder if we could precompile and keep binaries in a repository somewhere, so that a plugin could re-run the mix release process across a matrix of architectures?

@bitwalker

@johnhamelink I will follow up on your other notes this weekend, perhaps tomorrow if I have a chance, work is keeping me busy during the week though for the most part.

As for vm.args runtime configuration, I'm open to suggestions, but some things to be aware of:

  • Once a release is running, vm.args changes won't take effect until the VM is restarted, so changes to this file won't be applied during hot upgrades/downgrades. My assumption is that this is fine, and templating out vm.args only needs to happen during start.
  • I'd like to make this optional, and keep the current behaviour as the default

I saw your invite, and will give the project a more in-depth look when I sit down to go over your notes this weekend.

Regarding a means of making cross-compiling easier, if there was such a repository available, we could add a flag to the mix release task which fetches the necessary architecture, and automatically uses it instead of the system Erlang. Scripting releases for multiple platforms at once would be pretty easy from there. I think we'd want more of a file server type repository like hex.pm rather than using git for that kind of thing though.

@johnhamelink
Author

Hey @bitwalker,

Just following up on your previous message - I know you're probably very busy, but did you get a chance to think about my notes above over the w/e?

RE cross-compilation, the wheels are in motion with regards to that. @andrewvy has begun work on it, and my company has sponsored an S3 bucket for at least a few months to host the binaries.

@andrewvy

andrewvy commented Mar 2, 2016

My question: would a repository of compiled ERTS for all architectures be helpful? I would imagine an accompanying exrm plugin for pulling from the repository would be useful. Also, what should be hosted? I'm not too familiar with architectures - should we compile ERTS for each distro?

Like this:

[screenshot: an example listing of ERTS builds organized per distro]

Or by architecture? x86, x86_64, armv5... etc.


We're tracking this through this GitHub issue: https://github.com/erts-io/erts_web/issues/3

@bitwalker

@johnhamelink That's great news! Thanks to you, @andrewvy and your company for taking that on! I'm going to take some time here to address your previous notes as well, following up shortly.

@andrewvy I'll reply in that issue thread :)

@bitwalker

@johnhamelink Ok, going back to your first reply:

Hmm, just to clarify - which config file are you referring to in this case? I didn't make it clear I was referring to the next section.

I was referring to the relx.config file used today for configuring the behaviour of the release. Moving to an exrm_config.exs file would mean I'd include both the things that used to go in relx.config and any new exrm-specific configuration. This config file would then be placed under the rel folder (I had quite a long discussion on this point in IRC a while back, and it was made quite clear that people didn't want release configuration under config).

Where PhoenixOverlay, ExrmDebPlugin and AptPlugin are external plugins, and CustomOverlay is some internal config specific to that app. Does that provide better context?

The way I would actually lay the configuration out (in order to support some other features) would probably be something more like this:

# Would include the helper functions
use ReleaseManager.Config

# A single app release
release :foo,
  version: get_in_app(:foo, :version), # Basically calls get_in(Mix.Project.config, [:version]) for the app :foo
  include_erts: "path/to/erts",
  # .. other relx.config legacy settings ..
  # These allow users to specify either raw functions or modules which adhere to the plugin behaviour;
  # they are then executed in the order defined, and could even be defined in this config file.
  # Each function receives the release Config struct as its only parameter, and if it returns a
  # modified struct, the modified version is passed to the next step.
  before_release: [&PhoenixOverlay.compile_static_assets/1, SomePlugin],
  before_package: [&CustomOverlay.do_thing_a/1, &CustomOverlay.do_thing_b/1],
  after_package: [&ExrmDebPlugin.package/1, &AptPlugin.release/1]

# An example of an umbrella app packaged as a single release
release :foobar,
  version: "1.0.0",
  apps: [:foo, :bar],
  ...

That's interesting - I wasn't aware that this was the case. Why is that? Would it make more sense to build a dedicated metadata plugin that other plugins could query, making it easier for plugins to be maintained as exrm (and BEAM, I guess) changes?

The primary lack of metadata I'm talking about is due to Mix not being available - even if you include it as an application in mix.exs, it is still limited in functionality because there is no mix.exs file with all your project's metadata. There is of course still the ability to get the release name, version, and static config (sys.config) for the app, so that's still a considerable amount, but not nearly the wealth of information that is represented by mix.exs (in my mind anyway). Ultimately we could probably just store plugin metadata in sys.config though, and use that as the source of truth. It wouldn't be available to the running release (only configuration for loaded applications is accessible), but it could be queried from an escript with a simple :file.consult + get_in.
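As a rough sketch of that last idea (the :exrm_plugins key and :my_plugin entry are hypothetical):

# From an escript: read plugin metadata back out of sys.config.
{:ok, [sys_config]} = :file.consult('releases/0.0.1/sys.config')
metadata = get_in(sys_config, [:exrm_plugins, :my_plugin])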

I really like the way Ember handles deprecated code:

Release #1: Add the new functionality as a feature toggle
Release #2: Add a deprecation to the old functionality
Release #3: Switch the polarity of the feature toggle
Release #4: Remove old code

Definitely a fan of this approach, with some caveats. I think I would take a slightly different approach with the boot script though:

  • Rewrite to use the modular structure, preserving old behaviour where possible
  • Where not possible, raise an error if we can determine the old behaviour is relied on, with a message on how to migrate
  • Where neither of the above are possible, document it as a breaking change, with clear instructions on migrating to the new version of exrm. Similar to how I did Timex 1.x -> 2.0.
  • Add feature flags for the older, less-desirable behaviours we want to replace, automatically enabled, with deprecation warnings
  • On the next release, disable all of the feature flags for older, less-desirable behaviour, but leave them there for those who still need them
  • On the next release, remove the old code, but leave the feature flags in, raising errors if they are used
  • On the next release, remove the feature flags entirely

So there is the potential for a few breaking changes early in the process (though I can't think of anything that fits the description), but for the most part everything would remain the same, with a very paced deprecation cycle. I dislike the idea of adding feature flags for new behaviour, and would rather do as I described, with old behaviour preserved via enabled-by-default feature flags, where setting those flags to false uses the new behaviour - that way, people upgrading don't have to change anything right away, but those who want the new features can disable the old behaviour on a case-by-case basis, and if they do so, they don't have to change anything going forward (when the flags are removed, they would just be ignored; no reason to warn or raise an error).
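To make the flag idea concrete, here's a minimal sketch (the flag name and functions are made up):

# Old behaviour preserved behind an enabled-by-default flag.
if Application.get_env(:exrm, :legacy_boot_script, true) do
  IO.puts(:stderr, "exrm: the legacy boot script is deprecated; set legacy_boot_script: false to opt in to the new one")
  build_legacy_boot_script()   # hypothetical
else
  build_modular_boot_script()  # hypothetical
end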

Thoughts?
