Setting up Mirth Connect for production

Mirth Server Setup

First, create an Ubuntu EC2 instance.

Then install everything Mirth Connect needs (following this video):

# Update Ubuntu
sudo aptitude update

# Safe upgrade of Ubuntu
sudo aptitude safe-upgrade

# Install rvm, with latest stable version of ruby
curl -sSL | bash -s stable --ruby

# Install rubygems
sudo apt-get install rubygems

# Update ruby gems
sudo gem update

# Switch on rvm
source /home/ubuntu/.rvm/scripts/rvm

# Make the ~/downloads folder
mkdir ~/downloads

# Change into ~/downloads/ folder
cd ~/downloads/

# Download Mirth Connect 3.0.1 installer
curl -O

# Install tasksel
sudo apt-get install tasksel

# Install LAMP server
sudo apt-get install lamp-server^

# Remove any existing versions of Java
sudo apt-get purge openjdk-\* 

# Install python software properties (required for next command)
sudo apt-get install python-software-properties

# Add apt-get repository source
sudo add-apt-repository ppa:webupd8team/java

# Add package listings from repository in previous command to listings
sudo apt-get update

# Now we can install Java
# So install Java...
sudo apt-get install oracle-java7-installer

# Now we're ready to install Mirth Connect
cd ~/downloads/

# Set the permissions to be able to run the installer script
sudo chmod a+x ~/downloads/

# Now run the Mirth Connect installer
sudo ~/downloads/

Opening ports

This is harder than it should be, mostly because changes either take a while to take effect, or there's something I don't know about how long it takes to open a port, or whether a reboot is required, or stuff like that.

Anyway, here is what seems to work (note that each new listener channel seems to need its own port). Go to the Security Groups page in the AWS console, select the security group attached to the EC2 instance you're using for Mirth, and add a rule allowing inbound TCP on the listener's port from all hosts. Then, on the instance, run sudo /sbin/iptables -A INPUT -p tcp --dport 6662 -j ACCEPT. Then pray it worked; if it didn't, try restarting iptables or rebooting the instance, or just wait and then do those two things. The point is, the security-group step and the iptables command together somehow work for me, but I have had frustrations before with it just. not. working. Good luck.
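To sanity-check the iptables side, listing the INPUT chain should show the new rule. And since plain iptables rules don't survive a reboot, saving them is worth doing too. This is a sketch: /etc/iptables.rules is an arbitrary path (not a system default), and you'd still need something like the iptables-persistent package, or an iptables-restore call at boot, to reload it.

```shell
# List the INPUT chain and confirm the port-6662 rule is present
sudo /sbin/iptables -L INPUT -n --line-numbers | grep 6662

# Save the current rules so they can be restored after a reboot
# (/etc/iptables.rules is an arbitrary path chosen here, not a default)
sudo sh -c '/sbin/iptables-save > /etc/iptables.rules'
```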

Monitoring the instance

Setting up monit is roughly going to follow this tutorial from Digital Ocean.


First install monit to monitor and restart Mirth Connect if it ever goes down:

sudo apt-get install monit

Not so fast: pid files

In order to use monit to monitor Mirth, however, we need to know the location of the .pid (process id) file for Mirth. A quick glance over the results turned up by sudo find / -type f -name '*.pid' shows that there is no .pid file for mcservice or mcserver (the Mirth daemons we care about), despite the fact that ps aux | grep mcserv shows that mcserver is running (good). Maybe Mirth will add .pid files in a later version, but for now, let's hack the scripts that start and stop the service/server so that they create/destroy a .pid file on start-up/shutdown. The approach we will take is mentioned and described in this thread: pidfile location in 2.1?.

So go to /usr/local/mirthconnect (or wherever you set your Mirth to install) and fire up sudo nano mcservice. Go down to the case statement that defines the behavior based on the $1 argument passed to mcservice, and for the "start" actions (start, start-launchd, and restart|force-reload), underneath the java call that starts mirth in the background, add:

echo $! > /var/run/

This has to go AFTER the java call that starts Mirth, because echo $! prints the PID of the most recently started background process (the java call runs in the background because it ends with &, which puts the command in the background).
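To see why the order matters, here is a tiny standalone demonstration of $!, using sleep as a stand-in for the backgrounded java command and a /tmp path rather than the real pid-file location (both placeholders, not what mcservice actually uses):

```shell
#!/bin/sh
# $! expands to the PID of the most recently backgrounded command, so the
# echo must come immediately after the command whose PID we want to record.
sleep 30 &                              # stand-in for the backgrounded java call
echo $! > /tmp/mirth-pid-demo.pid       # the "start" branch: record that PID
kill "$(cat /tmp/mirth-pid-demo.pid)"   # the "stop" branch: read the file, kill the process
rm /tmp/mirth-pid-demo.pid              # ...and remove the pid file
```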

For the "stop" actions (stop and restart|force-reload), underneath the java call that stops mirth, add:

rm /var/run/

Now, whenever you start or stop mcservice it will take care of creating and destroying a pid file. We are now ready to monitor the process with monit.

Back to monit...

To start monitoring, add programs and processes to the configuration file:

sudo nano /etc/monit/monitrc

The monitrc file is heavily commented, which nicely explains everything and what goes where. Here are the things I changed or added. Uncommented:

set httpd port 2812 and
  use address localhost  # only accept connection from localhost
  allow localhost        # only localhost to connect to the server and
  allow admin:monit      # require user 'admin' with password 'monit'
  allow @monit           # allow users of group 'monit' to connect (rw)
  allow @users readonly  # allow users of group 'users' to connect readonly


-  set daemon 120            # check services at 2-minute intervals
+  set daemon 60             # check services at 1-minute intervals
-  set alert
+  set alert


set mailserver port 587
    username "" password "UOENO"
    using tlsv1
    with timeout 30 seconds
check process mcservice with pidfile /var/run/
  start program = "/usr/local/mirthconnect/mcservice start" with timeout 60 seconds
  stop program = "/usr/local/mirthconnect/mcservice stop"
  if cpu > 60% for 2 cycles then alert
  if cpu > 80% for 5 cycles then restart

Start monit

Once you are done with the config file, check the syntax:

sudo monit -t

Just in case, give a quick

sudo monit reload

After resolving any possible syntax errors, start monitoring/running all the monitored programs. We are going to start monit with sudo because some of the commands we have it run require sudo privileges.

sudo monit start all

WAIT: Make sure monit runs on boot

According to this thread, commands put into /etc/rc.local are run as root, last, during boot. So adding the commands to start monit (without sudo) in that file should be all we need to do to have monit start at boot (and all the programs/processes we want to monitor to start at boot as well).

So fire up sudo nano /etc/rc.local and add

monit start all

Now our stuff will start whenever the system reboots.
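For reference, the edited /etc/rc.local ends up looking roughly like this. A sketch only: your distribution's stock file may contain other commands, and its final exit 0 should stay the last line.

```shell
#!/bin/sh -e
# /etc/rc.local -- executed as root at the end of boot.
# Starting monit here also starts everything monit monitors (e.g. mcservice).
monit start all
exit 0
```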

Moving from Derby to Postgres

The default database that Mirth Connect uses is Derby. However, Derby is not really meant for production use, so we will switch over to good ol' Postgres. Here is a nice intro to installing and using PostgreSQL on Ubuntu.

Let's start with a nice update of the apt-get repository:

sudo apt-get update

Next we'll go ahead and download Postgres and its helpful accompanying dependencies:

sudo apt-get install postgresql postgresql-contrib

Now postgres is installed.

Prep PostgreSQL for Mirth

By default Postgres uses ident (peer) authentication, tying each system user to a matching Postgres role. So to get going, we will first switch to the default postgres user, and then create our own role ("mirth").

$ sudo su - postgres
$ createuser --pwprompt
Enter name of role to add: mirth
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) y

Actually, a simpler way to do it would probably be this SO answer:

sudo -u postgres createuser -d -R -P APPNAME
sudo -u postgres createdb -O APPNAME APPNAME
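Using the role and database names assumed elsewhere in this guide (mirth for the role, and mirthdb to match the database name in Mirth's database.url), those APPNAME placeholders become:

```shell
# -d: the new role may create databases; -R: it may not create roles;
# -P: prompt for a password for the new role
sudo -u postgres createuser -d -R -P mirth

# Create the "mirthdb" database, owned by the "mirth" role
sudo -u postgres createdb -O mirth mirthdb
```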

According to this thread, once PostgreSQL is installed and the user/role we're going to use is created, we just have to update the properties file with the URL and credentials for the new database and start the server, and that will automatically create the schema for us. The default port for Postgres is 5432, so our URL is jdbc:postgresql://localhost:5432/mirthdb. Our user/pw is whatever we made in the createuser --pwprompt step.

Actually, the properties file itself contains example values that help with database.url. So open it with sudo nano /usr/local/mirthconnect/conf/ and replace:

database = derby
database.url = jdbc:derby:${dir.appdata}/mirthdb;create=true

with:

database = postgres
database.url = jdbc:postgresql://localhost:5432/mirthdb
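Before restarting Mirth, a quick sanity check that the role can actually connect over TCP is cheap (this assumes the mirth/mirthdb names from the createuser/createdb step; psql will prompt for the password):

```shell
# Connect as "mirth" to "mirthdb" over TCP and run a trivial query;
# getting a row back means the URL and credentials Mirth will use are good.
psql -h localhost -p 5432 -U mirth -d mirthdb -c 'SELECT 1;'
```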

This part was a little troublesome, I guess, but the gist of it is: get Postgres installed, create a mirth role, create a database owned by that role, update the config in /usr/local/mirthconnect/conf/, and get going.

If you had used your Mirth installation at all thus far, everything you saved is gone with the old database. So when you log in to Mirth Connect, it will be with the default username/password (admin:admin) again, you will be prompted to change it, and you'll have to re-create any of your channels.

Finally, let's make sure that monit is watching Postgres, and that all of this will work on boot.

... Friday evening, PostgreSQL starts on boot by default, and it's pretty solid software. I'll add it to monit later, but for now, I'm signing off. ...

Basically production ready. NIiiiiiiiiicee.

What is still being worked on:

Ideally all of this would be made into a couple nice shell scripts to cut away all this bs.


molsches commented Jul 3, 2014

This is awesome! I've set up mirth a bunch of times on Windows Server, but was first time for me for linux. This was super helpful. Beer or HIPAA compliance help anytime you need it on me.


cancan101 commented Mar 31, 2015

If you are using Postgres, why are you installing the LAMP stack?


ppazos commented Jul 17, 2016

@cancan101 maybe because of monit? but not sure.
