v1020-wn-252-13:aut ryandeschamps$ scala -version
Scala code runner version 2.12.4 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.
v1020-wn-252-13:aut ryandeschamps$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
v1020-wn-252-13:aut ryandeschamps$ ls
CONTRIBUTING.md LICENSE_HEADER.txt config metastore_db pyaut.zip target
LICENSE README.md derby.log pom.xml src
v1020-wn-252-13:aut ryandeschamps$ cd target/bin
-bash: cd: target/bin: No such file or directory
v1020-wn-252-13:aut ryandeschamps$ cd target/
v1020-wn-252-13:target ryandeschamps$ ls
apidocs checkstyle-cachefile classes.timestamp maven-status
aut-0.10.1-SNAPSHOT-fatjar.jar checkstyle-checker.xml generated-sources surefire-reports
aut-0.10.1-SNAPSHOT-javadoc.jar checkstyle-result.xml generated-test-sources test-classes
aut-0.10.1-SNAPSHOT-test-javadoc.jar checkstyle-suppressions.xml javadoc-bundle-options test-classes.timestamp
aut-0.10.1-SNAPSHOT.jar classes maven-archiver testapidocs
v1020-wn-252-13:target ryandeschamps$ cd ..
v1020-wn-252-13:aut ryandeschamps$ pyspark --jars target/aut-0.10.1-SNAPSHOT-fatjar.jar --driver-class-path target/aut-0.10.1-SNAPSHOT-fatjar.jar --py-files pyaut.zip
Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/ryandeschamps/maple/aut/target/aut-0.10.1-SNAPSHOT-fatjar.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/apache-spark/2.2.0/libexec/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2017-11-20 10:58:13,031 [Thread-2] WARN NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-11-20 10:58:18,409 [Thread-2] WARN ObjectStore - Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/
Using Python version 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014 21:07:35)
SparkSession available as 'spark'.
>>> sc
<SparkContext master=local[*] appName=PySparkShell>
>>> import RecordLoader
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named RecordLoader
>>> import sys
>>> modules.keys()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'modules' is not defined
>>> sys.modules.keys()
['numpy.core.info', 'ctypes.os', 'gc', 'logging.weakref', 'pprint', 'unittest.sys', 'numpy.core.umath', 'pyspark.sql.pyspark', 'string', 'SocketServer', 'numpy.lib.arraysetops', 'encodings.utf_8', 'pyspark.sys', 'json.encoder', 'subprocess', 'numpy.core.machar', 'unittest.StringIO', 'numpy.ma.extras', 'numpy.fft.fftpack_lite', 'shlex', 'dis', 'zlib', 'logging.threading', '_json', 'abc', 'numpy.lib.npyio', 'numpy.lib._compiled_base', 'pyspark.socket', 'pyspark.re', 'unittest.suite', 'pyspark.sql.time', '_ctypes', 'fnmatch', 'json.scanner', 'codecs', 'StringIO', 'weakref', 'numpy.core._internal', 'numpy.lib.arraypad', 'encodings.binascii', 'base64', '_sre', 'pyspark.pickle', 'pyspark.SocketServer', 'unittest.runner', 'select', 'ctypes._ctypes', '_heapq', 'numpy.lib.financial', 'binascii', 'tokenize', 'numpy.polynomial.chebyshev', 'cPickle', 'numpy.polynomial.hermite_e', 'pyspark.io', 'numpy.testing.utils', 'pyspark.find_spark_home', 'numpy.core.fromnumeric', 'unicodedata', 'numpy.ctypeslib', '_bisect', 'encodings.aliases', 'pyspark.sql.random', 'exceptions', 'sre_parse', 'pickle', 'py4j.inspect', 'pyspark.sql.window', 'numpy.random.warnings', 'logging.cStringIO', 'numpy.lib.polynomial', 'numpy.compat', 'numbers', 'numpy.core.records', 'strop', 'numpy.core.numeric', 'pyspark.__future__', 'numpy.lib.utils', 'py4j.compat', 'numpy.lib.arrayterator', 'os.path', 'pyspark.sql.calendar', '_weakrefset', 'unittest.traceback', 'unittest.os', 'pyspark.traceback_utils', 'functools', 'sysconfig', 'pyspark.types', 'pyspark.taskcontext', 'numpy.core.numerictypes', 'numpy.polynomial.legendre', 'numpy.matrixlib.defmatrix', 'tempfile', 'pyspark.sql', 'platform', 'py4j.java_collections', 'numpy.core.scalarmath', 'pyspark.warnings', 'numpy.linalg.info', 'unittest.functools', 'unittest.util', 'pyspark.heapq', 'decimal', 'numpy.lib._datasource', 'token', 'pyspark.py4j', 'py4j.finalizer', 'pyspark.atexit', 'numpy.linalg._umath_linalg', 'pyspark.sql.catalog', 'cStringIO', 'numpy.polynomial', 
'numpy.add_newdocs', 'encodings', 'pyspark.cloudpickle', 'unittest.re', 'encodings.stringprep', 'pyspark.platform', 'numpy.lib.numpy', 'numpy.random.threading', 're', 'math', 'pyspark.sql.base64', 'numpy.lib.ufunclike', 'pyspark.subprocess', 'ctypes.struct', 'py4j.threading', '_locale', 'logging', 'thread', 'traceback', 'pyspark.random', 'pyspark.accumulators', 'pyspark.sql.column', '_collections', 'numpy.random', 'numpy.lib.twodim_base', 'pyspark.sql.warnings', 'array', 'ctypes.sys', 'posixpath', 'pyspark.operator', 'numpy.core.arrayprint', 'types', 'numpy.lib.stride_tricks', 'pyspark.cStringIO', 'numpy.lib.scimath', 'json._json', 'pyspark.heapq3', '_codecs', 'numpy.__config__', '_osx_support', 'copy', 'pyspark.sql.py4j', 'hashlib', 'pyspark._heapq', 'pyspark.sql.conf', 'numpy.lib.nanfunctions', 'unittest.weakref', 'stringprep', 'posix', 'pyspark.sql.group', 'pyspark.sql.readwriter', 'pyspark.sql.json', 'sre_compile', '_hashlib', 'numpy.lib.shape_base', 'numpy._import_tools', 'logging.collections', '__main__', 'numpy.fft.info', 'unittest.result', 'encodings.codecs', 'pyspark.cProfile', 'pyspark.pyspark', 'unittest.difflib', '_ssl', 'numpy.lib.index_tricks', 'warnings', 'encodings.ascii', 'pyspark.rddsampler', 'pyspark.sql.sys', 'pyspark.shutil', 'json.sys', 'pyspark.conf', 'future_builtins', 'pyspark.files', 'pyspark.signal', 'linecache', 'numpy.linalg.linalg', 'numpy.lib._iotools', 'py4j', 'pyspark.resultiterable', 'random', 'unittest.types', 'datetime', 'logging.os', 'ctypes._endian', 'encodings.encodings', 'unittest.pprint', 'numpy.random.mtrand', 'pyspark.sql.context', 'numpy.linalg', 'logging.thread', 'cProfile', 'numpy.lib._version', 'pyspark', 'repr', 'numpy.version', 'pyspark.java_gateway', '_lsprof', 'numpy.lib.type_check', 'keyword', 'bisect', 'pyspark.version', 'pydoc', 'threading', 'pyspark.zlib', 'numpy.fft.helper', 'locale', 'atexit', 'pyspark.sql.re', 'calendar', 'pyspark.storagelevel', 'py4j.signals', 'pyspark.sql.itertools', 
'numpy.testing.decorators', 'fcntl', 'unittest.case', 'pyspark.sql.decimal', 'pyspark.sql.functions', 'numpy.lib.function_base', 'Queue', 'numpy.lib.info', 'ctypes', 'pyspark.serializers', 'json.re', 'pyspark.select', 'unittest.signal', 'itertools', 'numpy.fft.fftpack', 'opcode', 'pstats', 'pyspark.math', 'unittest', 'pyspark.rdd', 'pyspark.struct', 'unittest.collections', 'pkgutil', 'imp', 'pyspark.numpy', 'sre_constants', 'json', 'numpy.core.function_base', '_random', 'numpy', 'numpy.ma', 'logging.atexit', 'pyspark.bisect', 'pyspark.status', 'encodings.re', 'numpy.lib', 'pyspark.util', 'pyspark.pstats', 'json.decoder', 'copy_reg', 'numpy.core', 'site', 'pyspark.dis', 'pyspark.sql.math', 'io', 'shutil', 'encodings.utf_32_be', 'py4j.version', 'encodings.hex_codec', 'pyspark.sql.types', 'unittest.time', 'numpy.polynomial.polyutils', 'json.json', 'sys', 'numpy.compat._inspect', 'pyspark.os', '_weakref', 'difflib', 'encodings.idna', 'unittest.warnings', 'pyspark.shuffle', 'heapq', 'struct', 'numpy.random.info', 'pyspark.cPickle', 'numpy.testing', 'collections', 'unittest.main', 'zipimport', 'pyspark.sql.__future__', 'pyspark.collections', 'pyspark.copy', 'signal', 'numpy.random.operator', 'pyspark.functools', 'numpy.core.multiarray', 'numpy.ma.core', 'pyspark.weakref', 'logging.traceback', 'py4j.java_gateway', 'numpy.matrixlib', 'pyspark.opcode', 'pyspark.marshal', 'pyspark.sql.streaming', 'pyspark.sql.collections', 'UserDict', 'inspect', 'pyspark.traceback', 'numpy.polynomial.laguerre', 'logging.sys', 'pyspark.sql.dataframe', 'unittest.loader', '_functools', 'socket', 'numpy.core.memmap', 'py4j.py4j', 'numpy.linalg.lapack_lite', 'os', 'marshal', 'pyspark.statcounter', '__future__', 'numpy.core.shape_base', '__builtin__', 'pyspark.sql.utils', 'operator', 'json.struct', 'errno', '_socket', 'numpy.core._methods', 'pyspark.profiler', '_warnings', 'encodings.__builtin__', 'unittest.fnmatch', 'py4j.protocol', 'pwd', 'numpy.core.getlimits', '_sysconfigdata', '_struct', 
'numpy.fft', 'numpy.random.numpy', 'logging.time', 'pyspark.sql.threading', 'pyspark.threading', 'logging.warnings', 'pyspark.itertools', 'logging.codecs', 'numpy.compat.py3k', 'numpy.polynomial._polybase', 'numpy.polynomial.hermite', 'contextlib', 'pyspark.sql.functools', 'numpy.polynomial.polynomial', 'numpy.core._dotblas', '_io', 'grp', 'pyspark.shlex', 'numpy.core.defchararray', 'pyspark.gc', 'pyspark.sql.session', '_abcoll', 'pyspark.sql.array', 'pyspark.sql.abc', 'pyspark.sql.datetime', 'pyspark.broadcast', 'genericpath', 'stat', 'pyspark.context', 'unittest.signals', 'ctypes.ctypes', 'numpy.lib.format', 'readline', 'numpy.testing.nosetester', 'pyspark.tempfile', 'encodings.unicodedata', 'pyspark.join', 'time']
>>> spark = SparkSession.builder.appName("extractLinks").getOrCreate()
>>> sc = spark.sparkContext
>>> rdd = RecordLoader.loadArchivesAsRDD("/Users/ryandeschamps/maple/aut/src/test/resources/arc/example.arc.gz", sc, spark)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'RecordLoader' is not defined
>>> import RecordLoaderPythonHelper
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named RecordLoaderPythonHelper
>>> import io.archivesunleashed.spark.matchbox.RecordLoader
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named archivesunleashed.spark.matchbox.RecordLoader
>>> dir(io)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'io' is not defined
>>> dir(pyspark.io)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'pyspark' is not defined
>>> io.__dict__.keys()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'io' is not defined
>>> _io.__dict__.keys()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name '_io' is not defined
>>> py4j
<module 'py4j' from '/usr/local/Cellar/apache-spark/2.2.0/libexec/python/lib/py4j-0.10.4-src.zip/py4j/__init__.py'>
>>> io
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'io' is not defined
>>> spark
<pyspark.sql.session.SparkSession object at 0x10366cd50>
>>> spark.RecordLoader
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'SparkSession' object has no attribute 'RecordLoader'
>>> dir(spark)
['Builder', '__class__', '__delattr__', '__dict__', '__doc__', '__enter__', '__exit__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_createFromLocal', '_createFromRDD', '_inferSchema', '_inferSchemaFromList', '_instantiatedSession', '_jsc', '_jsparkSession', '_jvm', '_jwrapped', '_repr_html_', '_sc', '_wrapped', 'builder', 'catalog', 'conf', 'createDataFrame', 'newSession', 'range', 'read', 'readStream', 'sparkContext', 'sql', 'stop', 'streams', 'table', 'udf', 'version']
>>> import spark.RecordLoader
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named spark.RecordLoader
>>> quit()
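The run of ImportErrors above comes from treating a JVM class (or a file inside pyaut.zip) as a top-level Python module: a zip shipped with `--py-files` only becomes importable once it is on `sys.path`. A stdlib-only sketch of that mechanism, with a made-up one-line module standing in for the real pyaut.zip contents:

```python
# Stdlib-only illustration of what --py-files arranges: Spark ships the zip
# to each worker and prepends it to sys.path, after which plain `import`
# finds modules inside it via zipimport.  The zip built here is a stand-in,
# not the real pyaut.zip layout.
import os
import sys
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "pyaut.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("RecordLoader.py", "ANSWER = 42\n")

try:
    import RecordLoader              # zip not on sys.path yet
except ImportError:
    found_before = False
else:
    found_before = True

sys.path.insert(0, zip_path)         # what --py-files does on the workers
import RecordLoader                  # now resolved from inside the zip

print(found_before, RecordLoader.ANSWER)
```

This is why `sys.modules.keys()` above shows pyspark and numpy but nothing from pyaut.zip: the interactive driver never put the zip on its own `sys.path`.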
v1020-wn-252-13:aut ryandeschamps$ pyspark --jars target/aut-0.10.1-SNAPSHOT-fatjar.jar --driver-class-path target/aut-0.10.1-SNAPSHOT-fatjar.jar --py-files src/main/python/*.py
Running python applications through 'pyspark' is not supported as of Spark 2.0.
Use ./bin/spark-submit <python file>
v1020-wn-252-13:aut ryandeschamps$ pyspark --jars target/aut-0.10.1-SNAPSHOT-fatjar.jar --driver-class-path target/aut-0.10.1-SNAPSHOT-fatjar.jar
Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/ryandeschamps/maple/aut/target/aut-0.10.1-SNAPSHOT-fatjar.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/apache-spark/2.2.0/libexec/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2017-11-20 11:32:36,046 [Thread-2] WARN NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-11-20 11:32:41,459 [Thread-15] WARN ObjectStore - Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/
Using Python version 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014 21:07:35)
SparkSession available as 'spark'.
>>> from pyspark import SparkContext
>>> sc = SparkContext("local", "pyaut", pyFiles=["src/python/RecordLoader.py"]
... )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/context.py", line 115, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
  File "/usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/context.py", line 299, in _ensure_initialized
    callsite.function, callsite.file, callsite.linenum))
ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=PySparkShell, master=local[*]) created by <module> at /usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/shell.py:45
>>>
>>> quit()
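The ValueError above is PySpark's one-context-per-process guard: the shell already created `sc` at startup, so constructing a second SparkContext raises. Spark 2.x exposes `SparkContext.getOrCreate()` to reuse the existing context instead. A toy, pyspark-free reimplementation of that pattern (class and names invented for the demo):

```python
# Minimal sketch of the guard behind "Cannot run multiple SparkContexts at
# once": the constructor refuses a second instance, while get_or_create()
# hands back the one the shell already made.
class OneAtATime(object):
    _active = None

    def __init__(self, name):
        if OneAtATime._active is not None:
            raise ValueError("Cannot run multiple contexts at once")
        self.name = name
        OneAtATime._active = self

    @classmethod
    def get_or_create(cls, name="default"):
        # Reuse the live instance instead of raising, like
        # SparkContext.getOrCreate() in Spark 2.x.
        if cls._active is not None:
            return cls._active
        return cls(name)

first = OneAtATime.get_or_create("pyaut")
second = OneAtATime.get_or_create("ignored")
assert first is second    # the existing context is reused, no ValueError
```

In the shell session above, the fix would be to keep using the provided `sc` (or call `SparkContext.getOrCreate()`) rather than calling the `SparkContext(...)` constructor again.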
v1020-wn-252-13:aut ryandeschamps$ pyspark --jars target/aut-0.10.1-SNAPSHOT-fatjar.jar --driver-class-path target/aut-0.10.1-SNAPSHOT-fatjar.jar --py-files pyaut.zip
Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/ryandeschamps/maple/aut/target/aut-0.10.1-SNAPSHOT-fatjar.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/apache-spark/2.2.0/libexec/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2017-11-20 11:37:50,404 [Thread-2] WARN NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-11-20 11:37:55,822 [Thread-2] WARN ObjectStore - Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/
Using Python version 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014 21:07:35)
SparkSession available as 'spark'.
>>> spark.pyFiles
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'SparkSession' object has no attribute 'pyFiles'
>>> dir(spark)
['Builder', '__class__', '__delattr__', '__dict__', '__doc__', '__enter__', '__exit__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_createFromLocal', '_createFromRDD', '_inferSchema', '_inferSchemaFromList', '_instantiatedSession', '_jsc', '_jsparkSession', '_jvm', '_jwrapped', '_repr_html_', '_sc', '_wrapped', 'builder', 'catalog', 'conf', 'createDataFrame', 'newSession', 'range', 'read', 'readStream', 'sparkContext', 'sql', 'stop', 'streams', 'table', 'udf', 'version']
>>> dir(spark.conf)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_checkType', '_jconf', 'get', 'set', 'unset']
>>> sc.addFile("RecordLoader.py")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/context.py", line 847, in addFile
    self._jsc.sc().addFile(path, recursive)
  File "/usr/local/Cellar/apache-spark/2.2.0/libexec/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/usr/local/Cellar/apache-spark/2.2.0/libexec/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/local/Cellar/apache-spark/2.2.0/libexec/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o33.addFile.
: java.io.FileNotFoundException: File file:/Users/ryandeschamps/maple/aut/RecordLoader.py does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1531)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
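The FileNotFoundException comes from handing `addFile` a bare relative name, which gets resolved against the shell's working directory (/Users/ryandeschamps/maple/aut), where no RecordLoader.py exists. A stdlib-only sketch of checking the resolved path before shipping it; the directory and stub file here are created just for the demo, and the `sc.addPyFile` call itself is only shown as a comment:

```python
# Verify the path on the Python side before it ever reaches the JVM.
import os
import tempfile

workdir = tempfile.mkdtemp()
module_path = os.path.join(workdir, "RecordLoader.py")

exists_before = os.path.exists(module_path)   # nothing written there yet

with open(module_path, "w") as f:
    f.write("# stub module\n")

exists_after = os.path.exists(module_path)    # now safe to ship

print(exists_before, exists_after)
# Only at this point would sc.addPyFile(module_path) succeed in the session
# above (pyspark call assumed, not executed here).
```

For Python modules specifically, `sc.addPyFile` is the call to reach for rather than `sc.addFile`, since it also places the file on the workers' import path.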
>>> dir(__pyfiles__)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name '__pyfiles__' is not defined
>>> execfile("./__pyfiles__/module.py")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 2] No such file or directory: './__pyfiles__/module.py'
>>>