Spark Study Notes (2): A Detailed, Illustrated Guide to Installing a Distributed Spark 2.3 HA Cluster
This article walks through the distributed installation of a Spark 2.3 HA cluster, shared here for your reference. The details are as follows.
I. Download the Spark installation package
1. From the official website
http://spark.apache.org/downloads.html
2. From the HUST (Huazhong University of Science and Technology) mirror site
http://mirrors.hust.edu.cn/apache/
3. From the Tsinghua mirror site
https://mirrors.tuna.tsinghua.edu.cn/apache/
II. Prerequisites
1. Java 8 installed successfully
2. ZooKeeper installed successfully
3. Hadoop 2.7.5 HA installed successfully
4. Scala installed successfully (optional: the Spark processes will start even without it)
III. Spark installation steps
1. Upload and extract the archive
[hadoop@hadoop1 ~]$ ls
apps     data      exam        inithive.conf  movie     spark-2.3.0-bin-hadoop2.7.tgz  udf.jar
cookies  data.txt  executions  json.txt       projects  student                        zookeeper.out
course   emp       hive.sql    log            sougou    temp
[hadoop@hadoop1 ~]$ tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz -C apps/
2. Create a symbolic link to the extracted directory
[hadoop@hadoop1 ~]$ cd apps/
[hadoop@hadoop1 apps]$ ls
hadoop-2.7.5  hbase-1.2.6  spark-2.3.0-bin-hadoop2.7  zookeeper-3.4.10  zookeeper.out
[hadoop@hadoop1 apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[hadoop@hadoop1 apps]$ ll
總用量 36
drwxr-xr-x. 10 hadoop hadoop  4096 3月  23 20:29 hadoop-2.7.5
drwxrwxr-x.  7 hadoop hadoop  4096 3月  29 13:15 hbase-1.2.6
lrwxrwxrwx.  1 hadoop hadoop    26 4月  20 13:48 spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x. 13 hadoop hadoop  4096 2月  23 03:42 spark-2.3.0-bin-hadoop2.7
drwxr-xr-x. 10 hadoop hadoop  4096 3月  23 2017 zookeeper-3.4.10
-rw-rw-r--.  1 hadoop hadoop 17559 3月  29 13:37 zookeeper.out
[hadoop@hadoop1 apps]$
3. Modify the configuration files under spark/conf
(1) Enter the directory that holds the configuration files
[hadoop@hadoop1 ~]$ cd apps/spark/conf/
[hadoop@hadoop1 conf]$ ll
總用量 36
-rw-r--r--. 1 hadoop hadoop  996 2月  23 03:42 docker.properties.template
-rw-r--r--. 1 hadoop hadoop 1105 2月  23 03:42 fairscheduler.xml.template
-rw-r--r--. 1 hadoop hadoop 2025 2月  23 03:42 log4j.properties.template
-rw-r--r--. 1 hadoop hadoop 7801 2月  23 03:42 metrics.properties.template
-rw-r--r--. 1 hadoop hadoop  865 2月  23 03:42 slaves.template
-rw-r--r--. 1 hadoop hadoop 1292 2月  23 03:42 spark-defaults.conf.template
-rwxr-xr-x. 1 hadoop hadoop 4221 2月  23 03:42 spark-env.sh.template
[hadoop@hadoop1 conf]$
(2) Copy spark-env.sh.template to spark-env.sh and append the following settings to the end of the file
[hadoop@hadoop1 conf]$ cp spark-env.sh.template spark-env.sh
[hadoop@hadoop1 conf]$ vi spark-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_73
#export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.5
export HADOOP_CONF_DIR=/home/hadoop/apps/hadoop-2.7.5/etc/hadoop
export SPARK_WORKER_MEMORY=500m
export SPARK_WORKER_CORES=1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 -Dspark.deploy.zookeeper.dir=/spark"
Notes:
The line #export SPARK_MASTER_IP=hadoop1 must stay commented out: with ZooKeeper-based HA there is no single fixed Master.
The Spark parameters you use when building your own cluster may differ from the ones above; they are deliberately small here to suit a personal machine. If the worker memory is set too high, the machine runs very slowly.
Explanation:
-Dspark.deploy.recoveryMode=ZOOKEEPER means the whole cluster's state is maintained, and recovered, through ZooKeeper; in other words, ZooKeeper provides Spark's HA. When the active Master dies, a standby Master must read the complete cluster state from ZooKeeper and restore the state of every Worker, every Driver, and every Application before it can become the new active Master.
-Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 lists the addresses of the ZooKeeper ensemble that the Masters connect to (I run ZooKeeper on four machines, so four entries are listed).
-Dspark.deploy.zookeeper.dir=/spark is the znode under which Spark stores its recovery metadata, i.e. the state of the running jobs. How does this dir differ from dataDir in ZooKeeper's zoo.cfg? dataDir is the local filesystem directory where the ZooKeeper server keeps its own data, whereas spark.deploy.zookeeper.dir is a path inside the ZooKeeper namespace. ZooKeeper holds all of the Spark cluster's state information — all the Workers, all the Applications, and all the Drivers — so that if the active Master fails, the standby Master can recover the cluster from it.
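As a quick sanity check, once the cluster is up you can look at that znode yourself. This is a minimal sketch, assuming ZooKeeper 3.4.10 is installed under ~/apps as shown above and spark.deploy.zookeeper.dir is left at /spark:
[hadoop@hadoop1 ~]$ ~/apps/zookeeper-3.4.10/bin/zkCli.sh -server hadoop1:2181
[zk: hadoop1:2181(CONNECTED) 0] ls /spark     # Spark's recovery metadata lives under this znode
[zk: hadoop1:2181(CONNECTED) 1] quit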
(3) Copy slaves.template to slaves
[hadoop@hadoop1 conf]$ cp slaves.template slaves
[hadoop@hadoop1 conf]$ vi slaves
Add the following content:
hadoop1
hadoop2
hadoop3
hadoop4
(4) Distribute the installation directory to the other nodes
[hadoop@hadoop1 ~]$ cd apps/
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop2:$PWD
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop3:$PWD
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop4:$PWD
Create the symbolic link on the other nodes as well
[hadoop@hadoop2 ~]$ cd apps/
[hadoop@hadoop2 apps]$ ls
hadoop-2.7.5  hbase-1.2.6  spark-2.3.0-bin-hadoop2.7  zookeeper-3.4.10
[hadoop@hadoop2 apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[hadoop@hadoop2 apps]$ ll
總用量 16
drwxr-xr-x 10 hadoop hadoop 4096 3月  23 20:29 hadoop-2.7.5
drwxrwxr-x  7 hadoop hadoop 4096 3月  29 13:15 hbase-1.2.6
lrwxrwxrwx  1 hadoop hadoop   26 4月  20 19:26 spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x 13 hadoop hadoop 4096 4月  20 19:24 spark-2.3.0-bin-hadoop2.7
drwxr-xr-x 10 hadoop hadoop 4096 3月  21 19:31 zookeeper-3.4.10
[hadoop@hadoop2 apps]$
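The same link is still missing on hadoop3 and hadoop4. A small sketch that creates it over SSH, assuming passwordless SSH between the nodes is already set up (it normally is for a Hadoop HA cluster):
[hadoop@hadoop1 ~]$ for host in hadoop3 hadoop4; do
>   ssh $host "cd ~/apps && ln -s spark-2.3.0-bin-hadoop2.7/ spark"   # create the spark symlink remotely
> done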
4. Configure environment variables
This must be done on every node.
[hadoop@hadoop1 spark]$ vi ~/.bashrc
#Spark
export SPARK_HOME=/home/hadoop/apps/spark
export PATH=$PATH:$SPARK_HOME/bin
Save the file and make it take effect immediately
[hadoop@hadoop1 spark]$ source ~/.bashrc
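Instead of editing ~/.bashrc by hand on every node, you can push the file from hadoop1, again assuming passwordless SSH; afterwards run source ~/.bashrc on each node (or simply open a new shell):
[hadoop@hadoop1 ~]$ for host in hadoop2 hadoop3 hadoop4; do
>   scp ~/.bashrc $host:~/    # distribute the updated environment file
> done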
IV. Startup
1. Start the ZooKeeper cluster first
Run this on every node.
[hadoop@hadoop1 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop1 ~]$
2. Then start the HDFS cluster
Running this on any one node is enough.
[hadoop@hadoop1 ~]$ start-dfs.sh
3. Then start the Spark cluster
Run this on one node.
[hadoop@hadoop1 ~]$ cd apps/spark/sbin/
[hadoop@hadoop1 sbin]$ start-all.sh
4. Check the processes
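The screenshots are omitted here; a quick way to check every node from one terminal is a jps loop like the sketch below (assuming passwordless SSH). On each node you should see QuorumPeerMain (ZooKeeper), the HDFS daemons, a Spark Worker, and — where one was started — a Spark Master:
[hadoop@hadoop1 ~]$ for host in hadoop1 hadoop2 hadoop3 hadoop4; do
>   echo "==== $host ===="
>   ssh $host jps     # list the Java daemons running on that node
> done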
5. Problem
Checking the processes shows that only hadoop1 started the Master process successfully; the other three nodes did not. Their Masters have to be started by hand: on each of the three remaining nodes, go to /home/hadoop/apps/spark/sbin and run the following command.
[hadoop@hadoop2 ~]$ cd ~/apps/spark/sbin/
[hadoop@hadoop2 sbin]$ start-master.sh
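If you prefer not to log in to each node, the same thing can be done from hadoop1 in one loop (a sketch, assuming passwordless SSH):
[hadoop@hadoop1 ~]$ for host in hadoop2 hadoop3 hadoop4; do
>   ssh $host '~/apps/spark/sbin/start-master.sh'   # start a standby Master on each remaining node
> done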
6. Check the processes again
Both the Master and the Worker processes have now started successfully.
V. Verification
1. Check the Master status in the web UI
hadoop1 is in the ALIVE state, while hadoop2, hadoop3, and hadoop4 are all in the STANDBY state.
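The standalone Master web UI listens on port 8080 by default, so the pages are at http://hadoop1:8080 through http://hadoop4:8080. A rough sketch for checking the status from the shell instead of a browser, assuming the default port and that curl is available:
[hadoop@hadoop1 ~]$ for host in hadoop1 hadoop2 hadoop3 hadoop4; do
>   echo -n "$host: "
>   curl -s http://$host:8080 | grep -o -m1 'ALIVE\|STANDBY'   # pull the Status value out of the UI page
> done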
hadoop1 node
hadoop2 node
hadoop3 node
hadoop4 node
2. Verify HA failover
Manually kill the Master process on hadoop1 and watch whether a standby automatically takes over.
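A sketch of one way to do this on hadoop1: find the Master's PID with jps and kill it to simulate a crash (stop-master.sh would be the clean way to stop it, but kill -9 is closer to a real failure):
[hadoop@hadoop1 ~]$ jps | grep Master     # note the PID printed in the first column
[hadoop@hadoop1 ~]$ kill -9 <Master-PID>  # replace <Master-PID> with the number reported by jps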
After killing the Master process on hadoop1, check the web UIs again.
hadoop1 node: since its Master process was killed, its web UI can no longer be reached.
hadoop2 node: after the Master on hadoop1 was killed, the Master on hadoop2 successfully took over and is now in the ALIVE state.
hadoop3 node
hadoop4 node
VI. Running Spark programs on standalone
1. Run the first Spark program
[hadoop@hadoop3 ~]$ /home/hadoop/apps/spark/bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://hadoop1:7077 \
> --executor-memory 500m \
> --total-executor-cores 1 \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
> 100
The spark://hadoop1:7077 above is the Master URL shown in the screenshot below.
Run result:
2. Start the Spark shell
[hadoop@hadoop1 ~]$ /home/hadoop/apps/spark/bin/spark-shell \
> --master spark://hadoop1:7077 \
> --executor-memory 500m \
> --total-executor-cores 1
Parameter explanation:
--master spark://hadoop1:7077 specifies the Master address
--executor-memory 500m: gives each executor 500 MB of memory
--total-executor-cores 1: limits the application to one CPU core across the whole cluster
Note:
If you start the Spark shell without specifying a master address, it still starts and can run programs; it is actually running Spark's local mode, which launches only a single process on the local machine and never connects to the cluster.
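For example, the explicit form below runs entirely on the local machine and is equivalent to what the shell falls back to (a sketch; local[2] simply means two local threads):
[hadoop@hadoop1 ~]$ /home/hadoop/apps/spark/bin/spark-shell --master local[2]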
The Spark shell initializes a SparkContext as the object sc by default; user code that needs it can use sc directly.
The Spark shell likewise initializes a SparkSession (the entry point to Spark SQL) as the object spark by default; user code that needs it can use spark directly.
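A minimal illustration of both objects inside the shell (nothing cluster-specific is assumed here):
scala> sc.parallelize(1 to 100).sum                      // via the SparkContext object sc
scala> spark.range(1, 101).selectExpr("sum(id)").show()  // the same sum via the SparkSession object spark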
3. Write a WordCount program in the Spark shell
(1) Create a hello.txt file and upload it to the /spark directory on HDFS
[hadoop@hadoop1 ~]$ vi hello.txt
[hadoop@hadoop1 ~]$ hadoop fs -mkdir -p /spark
[hadoop@hadoop1 ~]$ hadoop fs -put hello.txt /spark
The content of hello.txt is as follows:
you,jump
i,jump
you,jump
i,jump
jump
(2) Write the Spark program in Scala in the Spark shell
scala> sc.textFile("/spark/hello.txt").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).saveAsTextFile("/spark/out")
Explanation:
sc is the SparkContext object, the entry point for submitting Spark programs
textFile("/spark/hello.txt") reads the data from HDFS
flatMap(_.split(",")) maps each line to words and then flattens the result
map((_,1)) turns each word into a (word, 1) tuple
reduceByKey(_+_) reduces by key, summing the values
saveAsTextFile("/spark/out") writes the result back to HDFS
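If you just want to see the counts in the shell instead of writing them to HDFS, a small variant (note that saveAsTextFile fails if /spark/out already exists, so the output path must be new each time):
scala> sc.textFile("/spark/hello.txt").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).collect.foreach(println)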
(3) Check the result with an HDFS command
[hadoop@hadoop2 ~]$ hadoop fs -cat /spark/out/p*
(jump,5)
(you,2)
(i,2)
[hadoop@hadoop2 ~]$
VII. Running Spark programs on YARN
1. Prerequisites
The ZooKeeper cluster, the HDFS cluster, and the YARN cluster have all been started successfully.
2. Start Spark on YARN
[hadoop@hadoop1 bin]$ spark-shell --master yarn --deploy-mode client
The following error is reported:
Cause: the memory given to the containers is too small, so YARN kills the process outright, which surfaces as RPC connection failures, ClosedChannelException, and similar errors.
Solution:
First stop the YARN services, then add the following to yarn-site.xml:
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
Distribute the new yarn-site.xml to the corresponding directory on the other Hadoop nodes, then restart YARN.
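The distribution step can be scripted as below; this is a sketch that assumes the file was edited on hadoop1 and reuses the HADOOP_CONF_DIR path already set in spark-env.sh:
[hadoop@hadoop1 ~]$ for host in hadoop2 hadoop3 hadoop4; do
>   scp ~/apps/hadoop-2.7.5/etc/hadoop/yarn-site.xml $host:~/apps/hadoop-2.7.5/etc/hadoop/
> done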
Run the following command again to start Spark on YARN:
[hadoop@hadoop1 hadoop]$ spark-shell --master yarn --deploy-mode client
It starts successfully.
3. Open the YARN web UI
Open the YARN web page: http://hadoop4:8088
You can see the Spark shell application running.
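You can also confirm this from the command line (a quick check not shown in the original screenshots):
[hadoop@hadoop1 ~]$ yarn application -list    # the Spark shell application should appear with state RUNNING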
Click the application ID link to see the application's details.
Click the "ApplicationMaster" link.
4. Run a program
scala> val array = Array(1,2,3,4,5)
array: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.makeRDD(array)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:26

scala> rdd.count
res0: Long = 5

scala>
Check the YARN web UI again.
Check the executors.
5. Run Spark's bundled SparkPi example
[hadoop@hadoop1 ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
> --master yarn \
> --deploy-mode cluster \
> --driver-memory 500m \
> --executor-memory 500m \
> --executor-cores 1 \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
> 10
Execution log:
[hadoop@hadoop1 ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
> --master yarn \
> --deploy-mode cluster \
> --driver-memory 500m \
> --executor-memory 500m \
> --executor-cores 1 \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
> 10
2018-04-21 17:57:32 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-04-21 17:57:34 INFO ConfiguredRMFailoverProxyProvider:100 - Failing over to rm2
2018-04-21 17:57:34 INFO Client:54 - Requesting a new application from cluster with 4 NodeManagers
2018-04-21 17:57:34 INFO Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2018-04-21 17:57:34 INFO Client:54 - Will allocate AM container, with 884 MB memory including 384 MB overhead
2018-04-21 17:57:34 INFO Client:54 - Setting up container launch context for our AM
2018-04-21 17:57:34 INFO Client:54 - Setting up the launch environment for our AM container
2018-04-21 17:57:34 INFO Client:54 - Preparing resources for our AM container
2018-04-21 17:57:36 WARN Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2018-04-21 17:57:39 INFO Client:54 - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_libs__8262081479435245591.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_libs__8262081479435245591.zip
2018-04-21 17:57:44 INFO Client:54 - Uploading resource file:/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/spark-examples_2.11-2.3.0.jar
2018-04-21 17:57:44 INFO Client:54 - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_conf__2498510663663992254.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_conf__.zip
2018-04-21 17:57:44 INFO SecurityManager:54 - Changing view acls to: hadoop
2018-04-21 17:57:44 INFO SecurityManager:54 - Changing modify acls to: hadoop
2018-04-21 17:57:44 INFO SecurityManager:54 - Changing view acls groups to:
2018-04-21 17:57:44 INFO SecurityManager:54 - Changing modify acls groups to:
2018-04-21 17:57:44 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
2018-04-21 17:57:44 INFO Client:54 - Submitting application application_1524303370510_0005 to ResourceManager
2018-04-21 17:57:44 INFO YarnClientImpl:273 - Submitted application application_1524303370510_0005
2018-04-21 17:57:45 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:45 INFO Client:54 -
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1524304664749
     final status: UNDEFINED
     tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
     user: hadoop
2018-04-21 17:57:46 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:47 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:48 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:49 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:50 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:51 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:52 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:53 INFO Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:54 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:54 INFO Client:54 -
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.123.104
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1524304664749
     final status: UNDEFINED
     tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
     user: hadoop
2018-04-21 17:57:55 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:56 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:57 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:58 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:59 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:00 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:01 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:02 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:03 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:04 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:05 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:06 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:07 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:08 INFO Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:58:09 INFO Client:54 - Application report for application_1524303370510_0005 (state: FINISHED)
2018-04-21 17:58:09 INFO Client:54 -
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.123.104
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1524304664749
     final status: SUCCEEDED
     tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
     user: hadoop
2018-04-21 17:58:09 INFO Client:54 - Deleted staging directory hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005
2018-04-21 17:58:09 INFO ShutdownHookManager:54 - Shutdown hook called
2018-04-21 17:58:09 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720
2018-04-21 17:58:09 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-06de6905-8067-4f1e-a0a0-bc8a51daf535
[hadoop@hadoop1 ~]$
We hope this article is helpful to everyone programming with Spark.