Jupyter on DSE

On your client machine

As the root user

  1. Install DSE
  2. In the cassandra.yaml file, ensure the cluster name and datacenter match your analytics datacenter
  3. In the cassandra-env.sh file, add this configuration line toward the bottom: JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false". This makes your DSE node a coordinator only; it will not own any data. You can use this node to submit jobs to DSE locally without needing to know which node is the master (a sketch of steps 3-6 follows this list).
  4. Start DSE
  5. Install Python
  6. Install virtualenv
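
A minimal sketch of steps 3-6 on a package install of DSE (the config path, service name, and apt package names are assumptions; adjust them for your platform and package manager):

> echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"' >> /etc/dse/cassandra/cassandra-env.sh   # step 3
> service dse start                                                                                # step 4
> apt-get install -y python python-virtualenv                                                      # steps 5-6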

As the cassandra user

> virtualenv .jupyter
> source .jupyter/bin/activate
> pip install ipython
> pip install jupyter
> PYSPARK_SUBMIT_ARGS="$PYSPARK_SUBMIT_ARGS pyspark-shell" IPYTHON_OPTS="notebook --ip='*' --no-browser" dse pyspark
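
With --ip='*' the notebook listens on all interfaces, on Jupyter's default port (8888) unless told otherwise. If you want to pin the port explicitly, a variant of the same command (the port number here is just an example):

> PYSPARK_SUBMIT_ARGS="$PYSPARK_SUBMIT_ARGS pyspark-shell" IPYTHON_OPTS="notebook --ip='*' --port=8888 --no-browser" dse pyspark

Then open http://<node-address>:8888 in a browser on your client machine.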

Notes

You can use something like supervisord to keep Jupyter running in the background; one possible setup is sketched below.
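
A minimal sketch of a supervisord setup, run as root (the script path, the supervisord conf.d path, and the cassandra user's home directory are all assumptions; adjust them to your install):

# wrap the launch command in a small script so supervisord has a single program to run
cat > /home/cassandra/run-jupyter.sh <<'EOF'
#!/bin/bash
source /home/cassandra/.jupyter/bin/activate
export PYSPARK_SUBMIT_ARGS="$PYSPARK_SUBMIT_ARGS pyspark-shell"
export IPYTHON_OPTS="notebook --ip='*' --no-browser"
exec dse pyspark
EOF
chmod +x /home/cassandra/run-jupyter.sh
chown cassandra: /home/cassandra/run-jupyter.sh

# register it as a supervisord program and reload
cat > /etc/supervisor/conf.d/jupyter.conf <<'EOF'
[program:jupyter]
command=/home/cassandra/run-jupyter.sh
user=cassandra
autorestart=true
EOF
supervisorctl reread && supervisorctl update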

If you get a permission denied error when starting pyspark that looks like this: OSError: [Errno 13] Permission denied: '/run/user/505/jupyter', it is because XDG_RUNTIME_DIR points at your logged-in user's runtime directory rather than one the cassandra user can write to. In that case, add the following environment variable before starting pyspark: JUPYTER_RUNTIME_DIR="$HOME/.jupyter/runtime"
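
For example (as the cassandra user, reusing the runtime directory suggested above):

> mkdir -p "$HOME/.jupyter/runtime"
> JUPYTER_RUNTIME_DIR="$HOME/.jupyter/runtime" PYSPARK_SUBMIT_ARGS="$PYSPARK_SUBMIT_ARGS pyspark-shell" IPYTHON_OPTS="notebook --ip='*' --no-browser" dse pyspark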
