
@cupdike
Created December 12, 2019 17:16
Helps debug connecting PyArrow to Kerberized HDFS. It took some doing to get this working, and the guidance found on the web isn't always helpful: useful error messages don't always bubble up from the driver. This script lets you experiment with the driver, LIBJVM_PATH, LD_LIBRARY_PATH, CLASSPATH, and HADOOP_HOME.
import os
import pyarrow
import sh

# Without this, you get an obscure error:
#   pyarrow.lib.ArrowIOError: HDFS list directory failed, errno: 2 (No such file or directory)
os.environ['CLASSPATH'] = str(sh.hadoop('classpath', '--glob'))
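# If the `sh` package isn't available, the same classpath can be built with the
# standard library instead (a sketch, assuming the `hadoop` CLI is on PATH):
#import subprocess
#os.environ['CLASSPATH'] = subprocess.check_output(
#    ['hadoop', 'classpath', '--glob']).decode().strip()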
# Not needed
#os.environ['HADOOP_HOME'] = '/opt/cloudera/parcels/CDH-<your version>/'

DRIVER_PATH = '/opt/cloudera/parcels/CDH-<your version>/lib64'
DRIVER = 'libhdfs'
os.environ['ARROW_LIBHDFS_DIR'] = DRIVER_PATH
USER = 'myuser'

# Not needed
#LIBJVM_PATH = '/usr/java/jdk1.8.0_121/jre/lib/amd64/server'
#os.environ['LD_LIBRARY_PATH'] = ':'.join(filter(None, [os.getenv('LD_LIBRARY_PATH'), LIBJVM_PATH, '/opt/cloudera/parcels/CDH-<your version>/lib64/']))

# Way to test whether a lib is accessible:
#import ctypes
#ctypes.CDLL('/'.join([LIBJVM_PATH, 'libjvm.so']))
# Suggest you do a kinit first just to be sure your ticket is good to go
KERB_TICKET = os.getenv('KRB5CCNAME')
#KERB_TICKET = '/tmp/krb5cc_<specific cache>'
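# To check the ticket non-interactively: `klist -s` exits nonzero when there is
# no valid, unexpired ticket (a sketch; assumes MIT Kerberos klist is installed):
#import subprocess
#if subprocess.call(['klist', '-s']) != 0:
#    raise RuntimeError('No valid Kerberos ticket; run kinit first')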
args = {
    'host': 'default',
    'user': USER,
    'kerb_ticket': KERB_TICKET,
    'port': 8020,
    'driver': DRIVER,
}
print('ARROW_LIBHDFS_DIR', os.getenv('ARROW_LIBHDFS_DIR'))
# print('HADOOP_HOME', os.getenv('HADOOP_HOME'))
# print('LD_LIBRARY_PATH', os.getenv('LD_LIBRARY_PATH'))
from pprint import pprint
pprint(args)
fs = pyarrow.hdfs.connect(**args)
pprint(fs.ls('/user/myuser'))
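# Once connected, the filesystem object plugs into the rest of PyArrow. For
# example, reading a Parquet file through it (a sketch; the path is hypothetical):
#import pyarrow.parquet as pq
#with fs.open('/user/myuser/example.parquet', 'rb') as f:
#    table = pq.read_table(f)
#    print(table.num_rows)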
cupdike commented Dec 17, 2019

Note that if you are trying to get this working with PySpark, make sure you also pass the needed env variables along to the executors, like this:

import os
from pyspark import SparkConf, SparkContext

conf = SparkConf()

# Make sure our env vars get set for the executors
conf.setExecutorEnv('CLASSPATH', os.getenv('CLASSPATH'))
conf.setExecutorEnv('ARROW_LIBHDFS_DIR', os.getenv('ARROW_LIBHDFS_DIR'))

sc = SparkContext(conf=conf)
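
A quick way to confirm the variables actually reached the executors is to read them back from inside a task, e.g. with a minimal sketch like this:

def check_env(_):
    import os
    # Both should come back set if setExecutorEnv worked
    return os.getenv('CLASSPATH') is not None, os.getenv('ARROW_LIBHDFS_DIR')

print(sc.parallelize([0], 1).map(check_env).first())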

If you end up needing this you owe me a beer because this was a real pain to figure out ;-)
