Tested with
Python 2.7, OS X 10.11.3 El Capitan, Apache Spark 1.6.0 & Hadoop 2.6
Download Apache Spark and build it, or download the pre-built version.
I suggest downloading the pre-built version with Hadoop 2.6.
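If you grabbed the pre-built package, extract the archive before moving on; assuming the default file name for the Spark 1.6.0 / Hadoop 2.6 build, for example
tar -xzf spark-1.6.0-bin-hadoop2.6.tgz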
Download and install Anaconda.
Once you have installed Anaconda, open your terminal and type
conda install jupyter
conda update jupyter
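To double-check that the install worked, you can print the version; it should print without errors
jupyter --version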
Open the terminal and type
echo 'export PATH=$PATH:/path_to_downloaded_spark/spark-1.6.0/bin' >> ~/.profile
echo "export PYSPARK_DRIVER_PYTHON=ipython" >> ~/.profile
echo "export PYSPARK_DRIVER_PYTHON_OPTS='notebook'" >> ~/.profile
Note the single quotes on the first line, so that $PATH is expanded when the file is sourced rather than hard-coded now, and the ~/ prefix, so the lines land in your home profile regardless of the current folder.
Now you can source it to make the changes available in this terminal
source ~/.profile
or quit your terminal (Cmd+Q) and reopen it.
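To verify the setup, you can print one of the variables you just exported; it should return ipython
echo $PYSPARK_DRIVER_PYTHON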
Now, using your terminal, go to whatever folder you want and type pyspark. For example
cd Documents/my_spark_folder
pyspark
Now the IPython notebook should open in your browser.
To check whether Spark is correctly linked, create a new Python 2
notebook inside IPython Notebook, type sc
and run that line.
You should see something like this
In [1]: sc
Out[1]: <pyspark.context.SparkContext at 0x1049bdf90>
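As an extra sanity check you can run a tiny job on the context; a minimal sketch using only core RDD calls (parallelize and sum), which distributes the numbers 0-99 and adds them back up
In [2]: sc.parallelize(range(100)).sum()
Out[2]: 4950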
@Nomii5007 you should not run the command that way. Follow the instructions, then move to the folder where you have your notebook (using the terminal) and type
pyspark
@jtitusj sorry, I never used it with Scala, so I cannot be of any help :(
@bsullins you are welcome :)
@BethanyG yep, I wrote this guide because I could not find any that really worked, so after all the struggle I thought it was worth sharing. Anyway, thanks for the update; could you please share the versions of Spark/Python/Hadoop etc. so that I can update the guide and give you credit?