Configuring sharding-jdbc horizontal table sharding in code with Spring Boot
Most projects have already been running for some time before sharding-jdbc is introduced.
This tutorial presents a simple approach to configuring sharding-jdbc with the fewest code changes and the least impact on existing functionality (for example, when vertical partitioning is already in place and only some sub-projects need horizontal table sharding).
About the dependencies
shardingsphere-jdbc-core-spring-boot-starter
The official documentation offers a Spring Boot Starter configuration:
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
    <version>${shardingsphere.version}</version>
</dependency>
For an existing project, however, adding the shardingsphere auto-configuration is painful. The official FAQ notes:
Why does startup fail when both a connection pool's spring-boot-starter (such as druid) and shardingsphere-jdbc-spring-boot-starter are configured?
Answer:
1. The connection pool's starter (such as druid) may load first and create a default data source, which conflicts with the data source ShardingSphere-JDBC tries to create.
2. The fix is to remove the connection pool's starter; sharding-jdbc creates its own connection pool.
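Concretely, one way to follow that advice is to swap the starter for the plain pool artifact, so no competing DataSource auto-configuration runs and the pool is configured in code instead. This is a sketch; the version property name is an assumption:

```xml
<!-- the plain Druid pool instead of druid-spring-boot-starter;
     the DruidDataSource bean is then declared manually in a @Configuration class -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>${druid.version}</version>
</dependency>
```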
A project typically already has its own DataSource, and using shardingsphere-jdbc's auto-configuration means giving it up.
sharding-jdbc-core
To keep the existing DataSource configuration, we introduce only the sharding-jdbc-core dependency:
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-core</artifactId>
    <version>4.1.1</version>
</dependency>
If you only shard tables horizontally and only need MySQL support, some unused dependencies can be excluded:
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-core</artifactId>
    <version>4.1.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>shardingsphere-sql-parser-postgresql</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>shardingsphere-sql-parser-oracle</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>shardingsphere-sql-parser-sqlserver</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>encrypt-core-rewrite</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>shadow-core-rewrite</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>encrypt-core-merge</artifactId>
        </exclusion>
        <exclusion>
            <!-- connection pool; an existing project usually already ships another one -->
            <groupId>com.zaxxer</groupId>
            <artifactId>HikariCP</artifactId>
        </exclusion>
        <exclusion>
            <!-- also a connection pool, same reasoning -->
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-dbcp2</artifactId>
        </exclusion>
        <exclusion>
            <!-- object pool; excluding it is optional -->
            <groupId>commons-pool</groupId>
            <artifactId>commons-pool</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </exclusion>
        <exclusion>
            <!-- MySQL driver; already present in the project, excluded to avoid changing its version -->
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.microsoft.sqlserver</groupId>
            <artifactId>mssql-jdbc</artifactId>
        </exclusion>
    </exclusions>
</dependency>
The DataSource
The original DataSource
Taking Druid as an example, the original configuration is:
package com.xxx.common.autoConfiguration;

import java.util.ArrayList;
import java.util.List;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.alibaba.druid.filter.Filter;
import com.alibaba.druid.filter.logging.Slf4jLogFilter;
import com.alibaba.druid.filter.stat.StatFilter;
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import com.alibaba.druid.wall.WallConfig;
import com.alibaba.druid.wall.WallFilter;

import lombok.extern.slf4j.Slf4j;

/**
 * @ClassName: DruidConfiguration
 * @Description: Druid connection pool configuration
 */
@Configuration
@Slf4j
public class DruidConfiguration {

    @Value("${spring.datasource.driver-class-name}")
    private String driver;
    @Value("${spring.datasource.url}")
    private String url;
    @Value("${spring.datasource.username}")
    private String username;
    @Value("${spring.datasource.password}")
    private String password;
    @Value("${datasource.druid.initialsize}")
    private Integer druid_initialsize = 0;
    @Value("${datasource.druid.maxactive}")
    private Integer druid_maxactive = 20;
    @Value("${datasource.druid.minidle}")
    private Integer druid_minidle = 0;
    @Value("${datasource.druid.maxwait}")
    private Integer druid_maxwait = 30000;

    @Bean
    public ServletRegistrationBean druidServlet() {
        ServletRegistrationBean reg = new ServletRegistrationBean();
        reg.setServlet(new StatViewServlet());
        reg.addUrlMappings("/druid/*");
        reg.addInitParameter("loginUsername", "root");
        reg.addInitParameter("loginPassword", "root!@#");
        //reg.addInitParameter("logSlowSql", "");
        return reg;
    }

    /**
     * @Title: druidDataSource
     * @Description: the data source bean
     * @return DataSource
     */
    @Bean
    public DataSource druidDataSource() {
        DruidDataSource druidDataSource = new DruidDataSource();
        druidDataSource.setDriverClassName(driver);        // JDBC driver
        druidDataSource.setUrl(url);                       // JDBC connection url
        druidDataSource.setUsername(username);             // database user
        druidDataSource.setPassword(password);             // database password
        druidDataSource.setInitialSize(druid_initialsize); // initial pool size
        druidDataSource.setMaxActive(druid_maxactive);     // maximum active connections
        druidDataSource.setMinIdle(druid_minidle);         // minimum idle connections
        druidDataSource.setMaxWait(druid_maxwait);         // maximum wait for a connection
        // PSCache switch and its per-connection size
        druidDataSource.setPoolPreparedStatements(false);
        druidDataSource.setMaxPoolPreparedStatementPerConnectionSize(33);
        //druidDataSource.setValidationQuery("SELECT 1"); // SQL used to validate connections
        druidDataSource.setTestOnBorrow(false);  // validate on borrow; costs performance
        druidDataSource.setTestOnReturn(false);  // validate on return; costs performance
        druidDataSource.setTestWhileIdle(false); // recommended true: no performance cost, validates on borrow only when idle time exceeds timeBetweenEvictionRunsMillis
        druidDataSource.setTimeBetweenLogStatsMillis(60000);    // interval between checks for idle connections to close, in ms
        druidDataSource.setMinEvictableIdleTimeMillis(1800000); // minimum time a connection stays in the pool, in ms
        // When buggy code forgets to close connections, they leak.
        // removeAbandoned has a performance cost; enable it only when a leak is suspected.
        // With the values below, a connection open for over 30 minutes would be reclaimed
        // and the borrowing call stack logged.
        druidDataSource.setRemoveAbandoned(false);       // removeAbandoned switch
        druidDataSource.setRemoveAbandonedTimeout(1800); // 1800 seconds, i.e. 30 minutes
        druidDataSource.setLogAbandoned(false);          // log an error when closing an abandoned connection
        // filters
        List<Filter> filters = new ArrayList<Filter>();
        filters.add(this.getStatFilter());      // monitoring
        //filters.add(this.getSlf4jLogFilter()); // logging
        filters.add(this.getWallFilter());      // firewall
        druidDataSource.setProxyFilters(filters);
        log.info("Connection pool configured: " + druidDataSource.getUrl());
        return druidDataSource;
    }

    @Bean
    public FilterRegistrationBean filterRegistrationBean() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        WebStatFilter webStatFilter = new WebStatFilter();
        filterRegistrationBean.setFilter(webStatFilter);
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }

    /**
     * @Title: getStatFilter
     * @Description: monitoring filter
     * @return StatFilter
     */
    public StatFilter getStatFilter() {
        StatFilter sFilter = new StatFilter();
        //sFilter.setSlowSqlMillis(2000); // slow-SQL threshold, in ms
        sFilter.setLogSlowSql(false); // slow-SQL logging
        sFilter.setMergeSql(true);    // SQL merge optimization
        return sFilter;
    }

    /**
     * @Title: getSlf4jLogFilter
     * @Description: logging filter
     * @return Slf4jLogFilter
     */
    public Slf4jLogFilter getSlf4jLogFilter() {
        Slf4jLogFilter slFilter = new Slf4jLogFilter();
        slFilter.setResultSetLogEnabled(false);
        slFilter.setStatementExecutableSqlLogEnable(false);
        return slFilter;
    }

    /**
     * @Title: getWallFilter
     * @Description: firewall filter
     * @return WallFilter
     */
    public WallFilter getWallFilter() {
        WallFilter wFilter = new WallFilter();
        wFilter.setDbType("mysql");
        wFilter.setConfig(this.getWallConfig());
        wFilter.setLogViolation(true);   // LOG.error for SQL judged to be an attack
        wFilter.setThrowException(true); // throw SQLException for SQL judged to be an attack
        return wFilter;
    }

    /**
     * @Title: getWallConfig
     * @Description: firewall configuration
     * @return WallConfig
     */
    public WallConfig getWallConfig() {
        WallConfig wConfig = new WallConfig();
        wConfig.setDir("META-INF/druid/wall/mysql"); // directory the config is loaded from
        // statement interception
        wConfig.setTruncateAllow(false);   // truncate is dangerous; allowed by default, disable if needed
        wConfig.setCreateTableAllow(true); // allow CREATE TABLE
        wConfig.setAlterTableAllow(false); // allow ALTER TABLE
        wConfig.setDropTableAllow(false);  // allow DROP TABLE
        // other interception settings
        wConfig.setStrictSyntaxCheck(true); // strict syntax check; Druid SQL Parser does not cover every SQL form, so if parsing fails set this to false temporarily and report the SQL to the Druid developers
        wConfig.setConditionOpBitwseAllow(true); // allow "&", "~", "|", "^" operators in query conditions
        wConfig.setMinusAllow(true);     // allow statements like SELECT * FROM A MINUS SELECT * FROM B
        wConfig.setIntersectAllow(true); // allow statements like SELECT * FROM A INTERSECT SELECT * FROM B
        //wConfig.setMetadataAllow(false); // allow Connection.getMetadata, which exposes table information
        return wConfig;
    }
}
It should now be clear how risky it would be to abandon all of this configuration for the auto-configured approach.
So how do we change it?
ShardingJdbcDataSource
Step one: create an interface through which custom sharding rules are loaded.
Each sub-project can then declare beans implementing this interface.
public interface ShardingRuleSupport {
    void configRule(ShardingRuleConfiguration shardingRuleConfig);
}
Step two: inject all ShardingRuleSupport beans into DruidConfiguration.
@Autowired(required = false)
private List<ShardingRuleSupport> shardingRuleSupport;
Step three: create the sharding-jdbc data source.
//wrap the Druid data source
Map<String, DataSource> dataSourceMap = new HashMap<>();
//register the original Druid data source under the custom name "ds0"; more data sources could be added,
//but since we shard tables without sharding databases, one data source is enough
dataSourceMap.put("ds0", druidDataSource);
//load the sharding rules
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
//apply every ShardingRuleSupport bean, hence the loop
for (ShardingRuleSupport support : shardingRuleSupport) {
    support.configRule(shardingRuleConfig);
}
//other properties
Properties properties = new Properties();
//set manually because the starter's auto-configuration is not used: whether to log the rewritten sharding SQL
properties.put("sql.show", sqlShow);
//return the data source wrapped as a ShardingDataSource
return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, properties);
The complete ShardingJdbcDataSource configuration:
package com.xxx.common.autoConfiguration;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import javax.sql.DataSource;

import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.alibaba.druid.filter.Filter;
import com.alibaba.druid.filter.logging.Slf4jLogFilter;
import com.alibaba.druid.filter.stat.StatFilter;
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import com.alibaba.druid.wall.WallConfig;
import com.alibaba.druid.wall.WallFilter;

import lombok.extern.slf4j.Slf4j;

/**
 * @ClassName: DruidConfiguration
 * @Description: Druid connection pool configuration
 */
@Configuration
@Slf4j
public class DruidConfiguration {

    @Value("${spring.datasource.driver-class-name}")
    private String driver;
    @Value("${spring.datasource.url}")
    private String url;
    @Value("${spring.datasource.username}")
    private String username;
    @Value("${spring.datasource.password}")
    private String password;
    @Value("${datasource.druid.initialsize}")
    private Integer druid_initialsize = 0;
    @Value("${datasource.druid.maxactive}")
    private Integer druid_maxactive = 20;
    @Value("${datasource.druid.minidle}")
    private Integer druid_minidle = 0;
    @Value("${datasource.druid.maxwait}")
    private Integer druid_maxwait = 30000;

    /**
     * sharding SQL is not logged by default
     */
    @Value("${spring.shardingsphere.props.sql.show:false}")
    private boolean sqlShow;

    @Autowired(required = false)
    private List<ShardingRuleSupport> shardingRuleSupport;

    @Bean
    public ServletRegistrationBean druidServlet() {
        ServletRegistrationBean reg = new ServletRegistrationBean();
        reg.setServlet(new StatViewServlet());
        reg.addUrlMappings("/druid/*");
        reg.addInitParameter("loginUsername", "root");
        reg.addInitParameter("loginPassword", "root!@#");
        //reg.addInitParameter("logSlowSql", "");
        return reg;
    }

    /**
     * @Title: druidDataSource
     * @Description: the data source bean
     * @return DataSource
     */
    @Bean
    public DataSource druidDataSource() {
        DruidDataSource druidDataSource = new DruidDataSource();
        druidDataSource.setDriverClassName(driver);        // JDBC driver
        druidDataSource.setUrl(url);                       // JDBC connection url
        druidDataSource.setUsername(username);             // database user
        druidDataSource.setPassword(password);             // database password
        druidDataSource.setInitialSize(druid_initialsize); // initial pool size
        druidDataSource.setMaxActive(druid_maxactive);     // maximum active connections
        druidDataSource.setMinIdle(druid_minidle);         // minimum idle connections
        druidDataSource.setMaxWait(druid_maxwait);         // maximum wait for a connection
        // PSCache switch and its per-connection size
        druidDataSource.setPoolPreparedStatements(false);
        druidDataSource.setMaxPoolPreparedStatementPerConnectionSize(33);
        //druidDataSource.setValidationQuery("SELECT 1"); // SQL used to validate connections
        druidDataSource.setTestOnBorrow(false);  // validate on borrow; costs performance
        druidDataSource.setTestOnReturn(false);  // validate on return; costs performance
        druidDataSource.setTestWhileIdle(false); // recommended true: no performance cost, validates on borrow only when idle time exceeds timeBetweenEvictionRunsMillis
        druidDataSource.setTimeBetweenLogStatsMillis(60000);    // interval between checks for idle connections to close, in ms
        druidDataSource.setMinEvictableIdleTimeMillis(1800000); // minimum time a connection stays in the pool, in ms
        // When buggy code forgets to close connections, they leak.
        // removeAbandoned has a performance cost; enable it only when a leak is suspected.
        druidDataSource.setRemoveAbandoned(false);       // removeAbandoned switch
        druidDataSource.setRemoveAbandonedTimeout(1800); // 1800 seconds, i.e. 30 minutes
        druidDataSource.setLogAbandoned(false);          // log an error when closing an abandoned connection
        // filters
        List<Filter> filters = new ArrayList<Filter>();
        filters.add(this.getStatFilter());      // monitoring
        //filters.add(this.getSlf4jLogFilter()); // logging
        filters.add(this.getWallFilter());      // firewall
        druidDataSource.setProxyFilters(filters);
        log.info("Connection pool configured: " + druidDataSource.getUrl());
        if (shardingRuleSupport == null || shardingRuleSupport.isEmpty()) {
            log.info("............no sharding rules found, using the plain data source............");
            return druidDataSource;
        }
        log.info("++++++++++++loading sharding-jdbc configuration++++++++++++");
        //wrap the Druid data source
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        //register the original Druid data source under the custom name "ds0"; more data sources could be added,
        //but since we shard tables without sharding databases, one data source is enough
        dataSourceMap.put("ds0", druidDataSource);
        //load the sharding rules
        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        //apply every ShardingRuleSupport bean, hence the loop
        for (ShardingRuleSupport support : shardingRuleSupport) {
            support.configRule(shardingRuleConfig);
        }
        //other properties
        Properties properties = new Properties();
        //set manually because the starter's auto-configuration is not used: whether to log the rewritten sharding SQL
        properties.put("sql.show", sqlShow);
        //return the data source wrapped as a ShardingDataSource
        return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, properties);
    }

    @Bean
    public FilterRegistrationBean filterRegistrationBean() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        WebStatFilter webStatFilter = new WebStatFilter();
        filterRegistrationBean.setFilter(webStatFilter);
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }

    /**
     * @Title: getStatFilter
     * @Description: monitoring filter
     * @return StatFilter
     */
    public StatFilter getStatFilter() {
        StatFilter sFilter = new StatFilter();
        //sFilter.setSlowSqlMillis(2000); // slow-SQL threshold, in ms
        sFilter.setLogSlowSql(false); // slow-SQL logging
        sFilter.setMergeSql(true);    // SQL merge optimization
        return sFilter;
    }

    /**
     * @Title: getSlf4jLogFilter
     * @Description: logging filter
     * @return Slf4jLogFilter
     */
    public Slf4jLogFilter getSlf4jLogFilter() {
        Slf4jLogFilter slFilter = new Slf4jLogFilter();
        slFilter.setResultSetLogEnabled(false);
        slFilter.setStatementExecutableSqlLogEnable(false);
        return slFilter;
    }

    /**
     * @Title: getWallFilter
     * @Description: firewall filter
     * @return WallFilter
     */
    public WallFilter getWallFilter() {
        WallFilter wFilter = new WallFilter();
        wFilter.setDbType("mysql");
        wFilter.setConfig(this.getWallConfig());
        wFilter.setLogViolation(true);   // LOG.error for SQL judged to be an attack
        wFilter.setThrowException(true); // throw SQLException for SQL judged to be an attack
        return wFilter;
    }

    /**
     * @Title: getWallConfig
     * @Description: firewall configuration
     * @return WallConfig
     */
    public WallConfig getWallConfig() {
        WallConfig wConfig = new WallConfig();
        wConfig.setDir("META-INF/druid/wall/mysql"); // directory the config is loaded from
        // statement interception
        wConfig.setTruncateAllow(false);   // truncate is dangerous; allowed by default, disable if needed
        wConfig.setCreateTableAllow(true); // allow CREATE TABLE
        wConfig.setAlterTableAllow(false); // allow ALTER TABLE
        wConfig.setDropTableAllow(false);  // allow DROP TABLE
        // other interception settings
        wConfig.setStrictSyntaxCheck(true); // strict syntax check; Druid SQL Parser does not cover every SQL form, so if parsing fails set this to false temporarily and report the SQL to the Druid developers
        wConfig.setConditionOpBitwseAllow(true); // allow "&", "~", "|", "^" operators in query conditions
        wConfig.setMinusAllow(true);     // allow statements like SELECT * FROM A MINUS SELECT * FROM B
        wConfig.setIntersectAllow(true); // allow statements like SELECT * FROM A INTERSECT SELECT * FROM B
        //wConfig.setMetadataAllow(false); // allow Connection.getMetadata, which exposes table information
        return wConfig;
    }
}
The sharding strategy
The main classes
Create a few beans implementing the ShardingRuleSupport interface:
@Component
public class DefaultShardingRuleAdapter implements ShardingRuleSupport {

    @Override
    public void configRule(ShardingRuleConfiguration shardingRuleConfiguration) {
        Collection<TableRuleConfiguration> tableRuleConfigs = shardingRuleConfiguration.getTableRuleConfigs();

        TableRuleConfiguration ruleConfig1 = new TableRuleConfiguration("table_one", "ds0.table_one_$->{0..9}");
        ComplexShardingStrategyConfiguration strategyConfig1 =
                new ComplexShardingStrategyConfiguration("column_id", new MyDefaultShardingAlgorithm());
        ruleConfig1.setTableShardingStrategyConfig(strategyConfig1);
        tableRuleConfigs.add(ruleConfig1);

        TableRuleConfiguration ruleConfig2 = new TableRuleConfiguration("table_two", "ds0.table_two_$->{0..9}");
        ComplexShardingStrategyConfiguration strategyConfig2 =
                new ComplexShardingStrategyConfiguration("column_id", new MyDefaultShardingAlgorithm());
        ruleConfig2.setTableShardingStrategyConfig(strategyConfig2);
        tableRuleConfigs.add(ruleConfig2);
    }
}
@Component
public class CustomShardingRuleAdapter implements ShardingRuleSupport {

    @Override
    public void configRule(ShardingRuleConfiguration shardingRuleConfiguration) {
        Collection<TableRuleConfiguration> tableRuleConfigs = shardingRuleConfiguration.getTableRuleConfigs();

        TableRuleConfiguration ruleConfig1 = new TableRuleConfiguration(
                MyCustomShardingUtil.LOGIC_TABLE_NAME, MyCustomShardingUtil.ACTUAL_DATA_NODES);
        ComplexShardingStrategyConfiguration strategyConfig1 = new ComplexShardingStrategyConfiguration(
                MyCustomShardingUtil.SHARDING_COLUMNS, new MyCustomShardingAlgorithm());
        ruleConfig1.setTableShardingStrategyConfig(strategyConfig1);
        tableRuleConfigs.add(ruleConfig1);
    }
}
The other sharding classes:
public class MyDefaultShardingAlgorithm implements ComplexKeysShardingAlgorithm<String> {

    public String getShardingKey() {
        return "column_id";
    }

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, ComplexKeysShardingValue<String> shardingValue) {
        Collection<String> col = new ArrayList<>();
        String logicTableName = shardingValue.getLogicTableName() + "_";
        // index the actual tables by their numeric suffix, e.g. "table_one_3" under key "3"
        Map<String, String> availableTargetNameMap = new HashMap<>();
        for (String targetName : availableTargetNames) {
            String endStr = StringUtils.substringAfter(targetName, logicTableName);
            availableTargetNameMap.put(endStr, targetName);
        }
        int size = availableTargetNames.size();
        // = and in
        Collection<String> shardingColumnValues = shardingValue.getColumnNameAndShardingValuesMap().get(this.getShardingKey());
        if (shardingColumnValues != null) {
            for (String shardingColumnValue : shardingColumnValues) {
                String modStr = Integer.toString(Math.abs(shardingColumnValue.hashCode()) % size);
                String actualTableName = availableTargetNameMap.get(modStr);
                if (StringUtils.isNotEmpty(actualTableName)) {
                    col.add(actualTableName);
                }
            }
        }
        // between and:
        //shardingValue.getColumnNameAndRangeValuesMap().get(this.getShardingKey()); ... ...
        // if the sharding column is not ordered, "between and" is meaningless and need not be implemented
        return col;
    }
}
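The hash-and-modulo core of the algorithm above can be exercised in isolation. This standalone sketch (table and value names are illustrative only) reproduces the routing logic without any sharding-jdbc or commons-lang types:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone sketch of the hash-mod routing: each sharding-column value maps to
// one of the actual tables via Math.abs(value.hashCode()) % tableCount.
public class HashModRoutingDemo {

    public static Collection<String> route(Collection<String> actualTables,
                                           String logicTable,
                                           Collection<String> shardingValues) {
        String prefix = logicTable + "_";
        // index the actual tables by their suffix, e.g. "table_one_3" under key "3"
        Map<String, String> bySuffix = new HashMap<>();
        for (String name : actualTables) {
            bySuffix.put(name.substring(prefix.length()), name);
        }
        Collection<String> result = new ArrayList<>();
        int size = actualTables.size();
        for (String value : shardingValues) {
            String suffix = Integer.toString(Math.abs(value.hashCode()) % size);
            String actual = bySuffix.get(suffix);
            if (actual != null) {
                result.add(actual);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> tables = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            tables.add("table_one_" + i);
        }
        System.out.println(route(tables, "table_one", Arrays.asList("12345")));
    }
}
```

With ten tables, the value "12345" hashes to suffix 5, so a query on column_id = '12345' would be routed to table_one_5 only.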
public class MyCustomShardingAlgorithm extends MyDefaultShardingAlgorithm implements ComplexKeysShardingAlgorithm<String> {

    @Override
    public String getShardingKey() {
        return MyCustomShardingUtil.SHARDING_COLUMNS;
    }

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, ComplexKeysShardingValue<String> shardingValue) {
        Collection<String> col = new ArrayList<>();
        String logicTableName = shardingValue.getLogicTableName() + "_";
        // index the actual tables by their suffix
        Map<String, String> availableTargetNameMap = new HashMap<>();
        for (String targetName : availableTargetNames) {
            String endStr = StringUtils.substringAfter(targetName, logicTableName);
            availableTargetNameMap.put(endStr, targetName);
        }
        Map<String, String> specialActualTableNameMap = MyCustomShardingUtil.getSpecialActualTableNameMap();
        int count = (int) specialActualTableNameMap.values().stream().distinct().count();
        // the hash-mod spreads values over the ordinary tables only, so exclude the special ones
        int size = availableTargetNames.size() - count;
        // = and in
        Collection<String> shardingColumnValues = shardingValue.getColumnNameAndShardingValuesMap().get(this.getShardingKey());
        if (shardingColumnValues != null) {
            for (String shardingColumnValue : shardingColumnValues) {
                // values pinned in the configuration go straight to their special table
                String specialActualTableName = specialActualTableNameMap.get(shardingColumnValue);
                if (StringUtils.isNotEmpty(specialActualTableName)) {
                    col.add(specialActualTableName);
                    continue;
                }
                String modStr = Integer.toString(Math.abs(shardingColumnValue.hashCode()) % size);
                String actualTableName = availableTargetNameMap.get(modStr);
                if (StringUtils.isNotEmpty(actualTableName)) {
                    col.add(actualTableName);
                }
            }
        }
        // between and:
        //shardingValue.getColumnNameAndRangeValuesMap().get(this.getShardingKey()); ... ...
        // if the sharding column is not ordered, "between and" is meaningless and need not be implemented
        return col;
    }
}
@Component
public class MyCustomShardingUtil {

    /**
     * logic table name
     */
    public static final String LOGIC_TABLE_NAME = "table_three";
    /**
     * sharding column
     */
    public static final String SHARDING_COLUMNS = "column_name";
    /**
     * suffixes of the special shard tables
     */
    private static final String[] SPECIAL_NODES = new String[] {"0sp", "1sp"};

    // ds0.table_three_$->{((0..9).collect{t -> t.toString()} << ['0sp','1sp']).flatten()}
    public static final String ACTUAL_DATA_NODES = "ds0." + LOGIC_TABLE_NAME + "_$->{((0..9).collect{t -> t.toString()} << "
            + "['" + SPECIAL_NODES[0] + "','" + SPECIAL_NODES[1] + "']" + ").flatten()}";

    private static final List<String> specialList0 = new ArrayList<>();

    @Value("${special.table_three.sp0.ids:null}")
    private void setSpecialList0(String ids) {
        if (StringUtils.isBlank(ids)) {
            return;
        }
        String[] idSplit = StringUtils.split(ids, ",");
        for (String id : idSplit) {
            String trimId = StringUtils.trim(id);
            if (StringUtils.isEmpty(trimId)) {
                continue;
            }
            specialList0.add(trimId);
        }
    }

    private static final List<String> specialList1 = new ArrayList<>();

    @Value("${special.table_three.sp1.ids:null}")
    private void setSpecialList1(String ids) {
        if (StringUtils.isBlank(ids)) {
            return;
        }
        String[] idSplit = StringUtils.split(ids, ",");
        for (String id : idSplit) {
            String trimId = StringUtils.trim(id);
            if (StringUtils.isEmpty(trimId)) {
                continue;
            }
            specialList1.add(trimId);
        }
    }

    private static class SpecialActualTableNameHolder {
        private static volatile Map<String, String> specialActualTableNameMap = new HashMap<>();
        static {
            for (String specialId : specialList0) {
                specialActualTableNameMap.put(specialId, LOGIC_TABLE_NAME + "_" + SPECIAL_NODES[0]);
            }
            for (String specialId : specialList1) {
                specialActualTableNameMap.put(specialId, LOGIC_TABLE_NAME + "_" + SPECIAL_NODES[1]);
            }
        }
    }

    /**
     * @return map from each pinned ID to its actual table name
     */
    public static Map<String, String> getSpecialActualTableNameMap() {
        return SpecialActualTableNameHolder.specialActualTableNameMap;
    }
}
Besides ComplexKeysShardingAlgorithm, the ShardingAlgorithm interface has the sub-interfaces HintShardingAlgorithm, PreciseShardingAlgorithm, and RangeShardingAlgorithm; this tutorial uses the more general ComplexKeysShardingAlgorithm.
When configuring the TableRuleConfiguration we used the two-argument constructor:
public TableRuleConfiguration(String logicTable, String actualDataNodes) {}
TableRuleConfiguration also has a one-argument constructor without actual data nodes, intended for broadcast tables:
public TableRuleConfiguration(String logicTable) {}
The groovy inline expression, explained
ds0.table_three_$->{((0..9).collect{t -> t.toString()} << ['0sp','1sp']).flatten()}
sharding-jdbc inline expressions support both $->{...} and ${...}; to avoid clashing with Spring's placeholders, the official docs recommend $->{...}.
(0..9) produces the collection of integers 0 through 9
(0..9).collect{t -> t.toString()} converts the numbers 0 to 9 into the strings '0' to '9'
(0..9).collect{t -> t.toString()} << ['0sp','1sp'] appends the list ['0sp','1sp'] as a single element, giving ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ['0sp','1sp']]
flatten() flattens the nested list, giving ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0sp', '1sp']
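The steps above can be mirrored in plain Java to see exactly which actual table names the expression expands to (a sketch; sharding-jdbc itself evaluates the groovy form):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java equivalent of
// ds0.table_three_$->{((0..9).collect{t -> t.toString()} << ['0sp','1sp']).flatten()}
public class InlineExpressionDemo {

    public static List<String> actualTables() {
        List<String> suffixes = new ArrayList<>();
        for (int i = 0; i <= 9; i++) {   // (0..9).collect{t -> t.toString()}
            suffixes.add(Integer.toString(i));
        }
        suffixes.add("0sp");             // << ['0sp','1sp'] followed by flatten()
        suffixes.add("1sp");
        List<String> tables = new ArrayList<>();
        for (String s : suffixes) {      // prepend the fixed "ds0.table_three_" part
            tables.add("ds0.table_three_" + s);
        }
        return tables;
    }

    public static void main(String[] args) {
        System.out.println(actualTables()); // twelve actual data nodes
    }
}
```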
The properties configuration
#whether to log the rewritten sharding SQL; defaults to false
spring.shardingsphere.props.sql.show=true
#pin specific column values to specific shard tables; separate multiple values with ","
#rows with column_name 9997, 9998 or 9999 are stored in table_three_0sp
#rows with column_name 1111, 2222, 3333, 4444 or 5555 are stored in table_three_1sp
#all other values are hashed and stored in the matching table_three_<mod> table
special.table_three.sp0.ids=9997,9998,9999
special.table_three.sp1.ids=1111,2222,3333,4444,5555
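Putting the properties and the algorithm together: a standalone sketch of how a value is routed once the id lists are parsed, with pinned ids taking precedence over the hash-mod fallback (class and method names are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of how the special.table_three.*.ids properties drive routing:
// listed ids go to their pinned special table, everything else falls back
// to hash-mod over the ten ordinary tables.
public class SpecialIdRoutingDemo {

    // parse the two comma-separated id lists into a value -> table map
    public static Map<String, String> parse(String sp0Ids, String sp1Ids) {
        Map<String, String> pinned = new HashMap<>();
        for (String id : sp0Ids.split(",")) {
            pinned.put(id.trim(), "table_three_0sp");
        }
        for (String id : sp1Ids.split(",")) {
            pinned.put(id.trim(), "table_three_1sp");
        }
        return pinned;
    }

    // pinned values win; others are spread over table_three_0 .. table_three_9
    public static String route(Map<String, String> pinned, String value) {
        String special = pinned.get(value);
        if (special != null) {
            return special;
        }
        return "table_three_" + (Math.abs(value.hashCode()) % 10);
    }

    public static void main(String[] args) {
        Map<String, String> pinned = parse("9997,9998,9999", "1111,2222,3333,4444,5555");
        System.out.println(route(pinned, "9998"));  // pinned to a special table
        System.out.println(route(pinned, "12345")); // hash-mod fallback
    }
}
```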
Sharding-jdbc pitfalls
Any SQL whose SELECT clause contains a dynamic parameter throws a class-cast exception.
Updating the sharding key is forbidden: if the SET clause of an UPDATE contains the sharding key, the SQL cannot be executed.
Conclusion
With that, a simple single-database table sharding strategy is fully configured.
Code is neither good nor bad in itself; whatever fits your project is best.
The above is personal experience; I hope it gives you a useful reference, and I hope you will continue to support 腳本之家.