Steps:
- wget https://gist.github.com/rom1504/67ada3dedbecc113ae2dbdfd9c642d83/raw/865fb35e00f21330b5b82aeb7c31941b6c18f649/spark_on_slurm.sh
- wget https://gist.github.com/rom1504/67ada3dedbecc113ae2dbdfd9c642d83/raw/865fb35e00f21330b5b82aeb7c31941b6c18f649/worker_spark_on_slurm.sh
- wget https://archive.apache.org/dist/spark/spark-3.3.1/spark-3.3.1-bin-hadoop3.tgz && tar xf spark-3.3.1-bin-hadoop3.tgz
- sbatch spark_on_slurm.sh
- create a venv, install pyspark into it, then run a small PySpark test script against the cluster
(you can download https://huggingface.co/datasets/laion/laion-coco/resolve/main/part-00000-2256f782-126f-4dc6-b9c6-e6757637749d-c000.snappy.parquet as an example parquet file)