Fixing the "No suitable driver" error when connecting pyspark to MySQL
The following code connects to a MySQL database from pyspark:

from pyspark import SparkConf
from pyspark.sql import SparkSession

spark_conf = SparkConf().setAppName("MyApp").setMaster("local")
spark = SparkSession.builder.config(conf=spark_conf).getOrCreate()

url = "jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
table_name = "tab_tf"
properties = {
    "user": "root",
    "password": "root"
}

# Read the table from the MySQL database over JDBC
df = spark.read.jdbc(url=url, table=table_name, properties=properties)

# Display the data
df.show()

Running this raises the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o32.jdbc.
: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:104)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:242)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:230)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:186)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Some searching showed that the error is caused by Spark missing the JDBC driver for MySQL. The fix is to download the Connector/J jar that matches the version of your MySQL database from https://downloads.mysql.com/archives/c-j/
To check the MySQL version, run: mysql -V (this reports the client version; running SELECT VERSION(); inside a MySQL session reports the server version).

下載完成后,解壓,將mysql-connector-java-8.0.30.jar拷貝到spark安裝目錄的libs中

Re-running the program now succeeds, and df.show() prints the contents of tab_tf.
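Beyond df.show(), a couple of quick checks confirm the read really worked (a minimal sketch; the columns of tab_tf are whatever the table defines):

# Inspect the schema Spark derived from the MySQL table and count the rows read
df.printSchema()
print(df.count())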

Reference: py4j.protocol.Py4JJavaError: An error occurred while calling o32.jdbc.
到此這篇關(guān)于pyspark連接mysql數(shù)據(jù)庫報錯的解決的文章就介紹到這了,更多相關(guān)pyspark連接mysql內(nèi)容請搜索腳本之家以前的文章或繼續(xù)瀏覽下面的相關(guān)文章希望大家以后多多支持腳本之家!