A pitfall with pandas_udf in pyspark
While learning pyspark's pandas_udf, I ran the example code from the official pyspark documentation.
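The scalar pandas_udf example from the Spark 2.4 Arrow guide is enough to hit the problem; here is a minimal sketch of that example, assuming an existing SparkSession named spark (as in the pyspark shell):

import pandas as pd
from pyspark.sql.functions import col, pandas_udf
from pyspark.sql.types import LongType

# Plain Python function operating on pandas Series
def multiply_func(a, b):
    return a * b

# Wrap it as a vectorized (pandas) UDF
multiply = pandas_udf(multiply_func, returnType=LongType())

# 'spark' is an existing SparkSession
df = spark.createDataFrame(pd.DataFrame({"x": [1, 2, 3]}))

# show() triggers the Arrow transfer between the JVM and the Python
# worker, which is where the error below is thrown
df.select(multiply(col("x"), col("x"))).show()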
It failed with the following error:
19/11/14 15:59:36 ERROR TaskSetManager: Task 44 in stage 10.0 failed 1 times; aborting job
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/dataframe.py", line 380, in show
print(self._jdf.showString(n, 20, vertical))
File "/opt/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/opt/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/opt/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o64.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 10.0 failed 1 times, most recent failure: Lost task 44.0 in stage 10.0 (TID 133, localhost, executor driver): java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543)
at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58)
at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132)
at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181)
..........................
At first I thought the JDK version was too new, so I switched to the officially recommended JDK 8, but the error persisted.
@#$%^&...
At the very end I finally found the cause: the pyarrow version was too new. pyarrow 0.15.0 changed the Arrow IPC stream format, and the Arrow Java library bundled with Spark 2.4 cannot read the new format, which is exactly where the IllegalArgumentException in MessageSerializer.readMessage above comes from.
Reinstalling pyarrow 0.14.1 or any earlier version solves the problem.
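A quick way to sanity-check the installed version (a sketch; the only real point is that Spark 2.4.x needs pyarrow below 0.15.0, where the IPC stream format changed):

# downgrade with: pip install pyarrow==0.14.1
import pyarrow

# pyarrow 0.15.0 is where the Arrow IPC stream format changed, so for
# Spark 2.4.x the installed version needs to stay below that
major, minor = (int(p) for p in pyarrow.__version__.split(".")[:2])
if (major, minor) >= (0, 15):
    print("pyarrow %s is too new for Spark 2.4.x pandas_udf" % pyarrow.__version__)
else:
    print("pyarrow %s should be fine" % pyarrow.__version__)

As an alternative to downgrading, the Spark documentation also mentions setting the environment variable ARROW_PRE_0_15_IPC_FORMAT=1 on the driver and executors so that newer pyarrow writes the legacy format, but pinning pyarrow below 0.15 is the simpler route on Spark 2.4.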
jiangpeng59 posted at 2019-11-14 19:08:
Is OP really this fired up? Why post big-data threads here?

(OP's reply) My previous account got banned. I came to 52 meaning to hang around the reverse-engineering section, but lately I have to work on this kind of thing and have no time for that, and I don't want this account banned either, so I'm posting the pitfalls I've stepped in here. Who knows, maybe someone will come to 52 searching for material like this, right? (Honestly, it's mainly to keep this account alive ( •̀ ω •́ )✧)

(Another reply) Thanks OP, I happen to be about to start learning pyspark too.