@dpwrussell
Last active August 8, 2017 05:30
OMERO User Meeting 2017 AWS Workshop


https://tinyurl.com/ydz73puv

What is Docker?

  • ...an open source project to pack, ship and run any application as a lightweight container.

  • An abstraction layer to "containerize" any application and allow it to run on any infrastructure
  • Used to containerize OMERO, OMERO.web and the additional components of OMERO.cloudarchive

What is AWS?

  • ...a secure cloud services platform, offering compute power, database storage, content delivery and other functionality.

  • Use S3 object storage to store the archive once dehydrated
  • Use EC2 to provide a scalable compute environment to hydrate archives
  • Use ECS to manage and deploy the services required by OMERO as Docker containers
  • Use LoadBalancers to provide endpoints for multiplexed OMERO.web and OMERO RO
  • Use EFS to provide shared storage for multiple OMERO and OMERO RO instances
  • Ensure the AWS CLI is installed (pip install awscli) and configured
  • Make sure to use "us-east-1" as the region for now, to eliminate region mismatches as a potential source of error.
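As a concrete sketch of the configured state, `aws configure` writes the two files below. This assumes the default profile; the key values are placeholders, not real credentials:

```ini
# ~/.aws/config
[default]
region = us-east-1

# ~/.aws/credentials
[default]
aws_access_key_id = <your_access_key_id>
aws_secret_access_key = <your_secret_access_key>
```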

Retrieving Docker Images

In the future we will specify exact versions of our Docker images to use, but for now we will just use the latest release. Docker Compose will actually do the pulls for us, but for reference:

docker pull dpwrussell/omero.cloudarchive
docker pull dpwrussell/omero-grid-web
docker pull postgres:9.4

Download or create Docker Compose file

The easiest way to orchestrate several Docker containers together is by using a Docker compose file. This specifies settings for each container and how the containers will interoperate.

  • Download docker compose YAML
  • Examine docker-compose.yml
  • docker-compose up
  • That's it. Go to http://localhost:8080 and you should see a running OMERO.
  • Note: Unfortunately, due to how OMERO user configuration works, the server must start, create the public-user, stop, finish the configuration, and then start again. So there may be a brief window where the OMERO server appears to be down, or is up but does not yet have the public user.
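For reference, the downloaded docker-compose.yml wires together the three images pulled above. The sketch below is an illustrative assumption of its shape — the real file's service names, environment variables, and ports may differ, so examine the downloaded copy rather than relying on this:

```yaml
version: '2'
services:
  db:
    image: postgres:9.4
    environment:
      POSTGRES_USER: omero       # assumed credentials; check the real file
      POSTGRES_PASSWORD: omero
  omero:
    image: dpwrussell/omero.cloudarchive
    depends_on:
      - db
  web:
    image: dpwrussell/omero-grid-web
    ports:
      - "8080:80"                # assumed mapping exposing OMERO.web on localhost:8080
    depends_on:
      - omero
```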

Upload an image for testing

The OMERO Docker container is now running on localhost, and you can connect to it with OMERO.insight. Alternatively, we can log in to the container and import an image with the CLI.

## Log in to the OMERO docker container
docker exec -it --user omero omerocloudarchivedocker_omero_1 /bin/bash
## Download an image from the web
wget <image_url>
## Import the image, use user: public-user, password: omero to import
## directly to the public user
~/OMERO.server/bin/omero import <image_file>

Prepare to dehydrate the archive

  • Check using the web interface that the image is imported correctly
  • Create a bucket in AWS S3 Console to dehydrate the archive into. I recommend a "subfolder" inside a bucket as it is easier to later make public with the AWS S3 Console. e.g. mybucket/test1

Dehydrate

  • On the machine (not inside the container) configured to access AWS, generate temporary credentials for the dehydration process.
aws sts get-session-token

Then inside the container, use the credentials and the S3 bucket to dehydrate the archive.

~/dehydrate <aws_access_key_id> <aws_secret_access_key> <aws_session_token> <s3_bucket>
  • Inspect the S3 bucket for new contents
  • Select the S3 bucket and click More -> Make Public
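`aws sts get-session-token` prints a JSON document, while `~/dehydrate` wants the three credential fields as separate arguments. A small helper (my own sketch, not part of the tutorial's tooling) can extract them with Python's stdlib json module; the JSON below is a placeholder with the same shape as a real response:

```shell
# Placeholder with the shape of a real `aws sts get-session-token` response
json='{"Credentials":{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"secretEXAMPLE","SessionToken":"tokenEXAMPLE"}}'

# Pull one field out of the Credentials object
field() { printf '%s' "$json" | python3 -c "import sys, json; print(json.load(sys.stdin)['Credentials']['$1'])"; }

key_id=$(field AccessKeyId)
secret=$(field SecretAccessKey)
token=$(field SessionToken)

# These would then be passed to: ~/dehydrate "$key_id" "$secret" "$token" <s3_bucket>
echo "$key_id $secret $token"
```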

Hydrate on AWS with CloudFormation

  • Launch the Cloudarchive CloudFormation template in us-east-1 and log in to the AWS CloudFormation Console if necessary.
  • Click Next
  • Leave most of the settings as default, but populate the KeyName, S3Bucket, SubnetIds, and VpcId.
  • Click Next. The CloudFormation stack should then be provisioned. This will take several minutes.
  • Once complete, click on the Outputs tab and copy the hostname of the web endpoint. Paste this into the browser. Again, this can take a few minutes to work correctly, as the load balancer monitors the health of the service and it takes a little time for the service to come up.

Cleanup

  • Select the CloudFormation stack in the AWS Console and go to Actions -> Delete Stack.
  • Ctrl+C the docker-compose process to stop the containers, then run docker-compose rm to remove them.
@r100gs
r100gs commented Jul 14, 2017

Thanks for the nice tutorial,

but what do you mean by "inside the container"?
Then inside the container, use the credentials and the S3 bucket to dehydrate the archive.

~/dehydrate <aws_access_key_id> <aws_secret_access_key> <aws_session_token> <s3_bucket>

I have everything set up, but I don't know how to do the dehydration correctly.
How can I get inside the container?

Cheers, Stefan

@r100gs
r100gs commented Aug 8, 2017

Hello,

If I try to start it, I get the following error messages.

What can I do?

Best regards,
r100gs

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/util/timeout.py", line 124, in _validate_timeout
    float(value)
TypeError: float() argument must be a string or a number, not 'Timeout'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 11, in <module>
    sys.exit(main())
  File "/usr/lib/python3.6/site-packages/compose/cli/main.py", line 68, in main
    command()
  File "/usr/lib/python3.6/site-packages/compose/cli/main.py", line 118, in perform_command
    handler(command, command_options)
  File "/usr/lib/python3.6/site-packages/compose/cli/main.py", line 926, in up
    scale_override=parse_scale_args(options['--scale']),
  File "/usr/lib/python3.6/site-packages/compose/project.py", line 388, in up
    warn_for_swarm_mode(self.client)
  File "/usr/lib/python3.6/site-packages/compose/project.py", line 614, in warn_for_swarm_mode
    info = client.info()
  File "/usr/lib/python3.6/site-packages/docker/api/daemon.py", line 90, in info
    return self._result(self._get(self._url("/info")), True)
  File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line 46, in inner
    return f(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 189, in _get
    return self.get(url, **self._set_request_timeout(kwargs))
  File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 521, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 440, in send
    timeout=timeout
  File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 582, in urlopen
    timeout_obj = self._get_timeout(timeout)
  File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 309, in _get_timeout
    return Timeout.from_float(timeout)
  File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/util/timeout.py", line 154, in from_float
    return Timeout(read=timeout, connect=timeout)
  File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/util/timeout.py", line 97, in __init__
    self._connect = self._validate_timeout(connect, 'connect')
  File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/util/timeout.py", line 127, in _validate_timeout
    "int or float." % (name, value))
ValueError: Timeout value connect was Timeout(connect=60, read=60, total=None), but it must be an int or float.
