Setting Up a PySpark 2.4.4 + PyCharm Development Environment on Win10: An Illustrated Tutorial (Personally Tested)
Resources to download
- hadoop3.0.0
- spark-2.4.4-bin-without-hadoop
- winutils (overwrite the local hadoop bin directory with the winutils bin directory for hadoop 3.0.1)
- jdk1.8 (assumed already installed and configured)
- conda/anaconda (assumed already installed)
Note: CDH 6.3.2 ships Spark 2.4.0, but local PySpark has a bug under 2.4.0 (see Common issues below), hence 2.4.4 here. Also, if a downloaded file does not produce a directory after the first extraction, rename its suffix to .zip and extract it again.
Python environment (use cmd rather than PowerShell)
Spark 2.4.x does not support Python versions newer than 3.7, so pin the environment to 3.7:
conda create -n pyspark2.4 python=3.7
activate pyspark2.4
pip install py4j
pip install psutil
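Optionally, a quick sanity check (an extra step of mine, not in the original instructions) that the new environment really is on Python 3.7 and that py4j imports cleanly:

python -c "import sys, py4j; print(sys.version)"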
Installing pyspark (method 1 recommended)
- Copy the %SPARK_HOME%\python\pyspark directory into %CONDA_HOME%\envs\pyspark2.4\Lib\site-packages (the site-packages of the new conda environment)
- pip install pyspark==2.4.4
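Whichever method you use, a quick way to confirm which pyspark version ends up on the import path (an optional check, not part of the original steps):

python -c "import pyspark; print(pyspark.__version__)"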
Configure environment variables (search online if you are unsure how)
The following is only an example; adjust it for your machine. Paths must not contain spaces; if a needed path does contain spaces, create a space-free junction to it with mklink /J <link> <target directory> (see the example after the variable list below).
Add these system variables:

HADOOP_HOME      E:\bigdata\ENV\hadoop-3.0.0
SPARK_HOME       E:\bigdata\ENV\spark-2.4.4-bin-without-hadoop
PYSPARK_PYTHON   C:\Users\zakza\anaconda3\envs\pyspark2.4\python.exe

Append to PATH:

%HADOOP_HOME%\bin
%SPARK_HOME%\bin
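For example, if Anaconda were installed under a path containing spaces, a junction could map a space-free path onto it (the paths here are illustrative only, not from the original setup):

mklink /J C:\ENV\anaconda3 "C:\Program Files\anaconda3"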
Edit the configuration files

Config 1: create a spark-env.cmd file under %SPARK_HOME%\conf with the following content:

FOR /F %%i IN ('hadoop classpath') DO @set SPARK_DIST_CLASSPATH=%%i

(This is needed because the "without-hadoop" Spark build ships no Hadoop jars of its own; SPARK_DIST_CLASSPATH tells Spark where to find them in the local Hadoop installation.)

Config 2: create a log4j.properties file under %SPARK_HOME%\conf with the following content:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR

Configure PyCharm
Note: reboot after setting the environment variables; otherwise PyCharm may fail to load the new system environment variables.
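One simple way to confirm that PyCharm actually sees the variables (an optional check of mine, not from the original article) is to run a small script from a PyCharm run configuration:

import os

# Print the variables this tutorial relies on; any None means PyCharm
# did not inherit the system environment and a reboot/restart is needed.
for name in ('HADOOP_HOME', 'SPARK_HOME', 'PYSPARK_PYTHON'):
    print(name, '=', os.environ.get(name))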
wc.txt (place it in the project's working directory, since the code below reads it by relative path):
hello hadoop hadoop spark python flink storm spark master slave first second thrid kafka scikit-learn flume hive spark-streaming hbase
WordCount test code:
from pyspark import SparkContext

if __name__ == '__main__':
    sc = SparkContext('local', 'WordCount')
    textFile = sc.textFile("wc.txt")
    wordCount = textFile.flatMap(lambda line: line.split(" ")) \
        .map(lambda word: (word, 1)) \
        .reduceByKey(lambda a, b: a + b)
    wordCount.foreach(print)

Expected output on a successful run:
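reduceByKey returns the pairs in no guaranteed order, so the exact listing varies from run to run, but with the wc.txt above it should contain counts like these (the remaining words each appear once):

('hadoop', 2)
('spark', 2)
('hello', 1)
('python', 1)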

Common issues:

spark-shell fails with Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
Fix: see Config 1 above.

PySpark fails with ModuleNotFoundError: No module named 'resource'
Fix: a bug in Spark 2.4.0 (the resource module is Unix-only, so the Python worker cannot start on Windows); use Spark 2.4.4 instead.

PySpark fails with org.apache.spark.SparkException: Python worker failed to connect back
Fix: the environment variables are not set correctly. Check for anything missing, and confirm the variables are visible in the Environment variables field of the PyCharm run configuration.
Other notes

The py4j-0.10.7-src.zip and pyspark.zip archives under %SPARK_HOME%\python\lib were not configured here and everything still ran, but you can also try adding them to the project, as sketched below.
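One way to try that (a sketch of my own, not from the original article) is to put both archives on sys.path before importing pyspark:

import os
import sys

# Make Spark's bundled Python sources importable even when pyspark
# is not installed into site-packages.
spark_home = os.environ['SPARK_HOME']
sys.path.insert(0, os.path.join(spark_home, 'python', 'lib', 'py4j-0.10.7-src.zip'))
sys.path.insert(0, os.path.join(spark_home, 'python', 'lib', 'pyspark.zip'))

import pyspark  # now resolves from pyspark.zip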

This concludes this tutorial on setting up a PySpark 2.4.4 + PyCharm development environment on Win10. For more on PySpark and PyCharm development environments, search 腳本之家's earlier articles, and we hope you will continue to support 腳本之家!