The project needs to read schema, table, and column metadata from several source databases (Oracle, HANA, GP/PostgreSQL). Spring provides the AbstractRoutingDataSource class for implementing a dynamic data source, and the following is a brief walkthrough of how to use it.

First, the data source information table looks like this:

| ID | NAME | DATABASE_TYPE | CON_STR                            | ACCOUNT | PASSWORD | CREATOR | CREATE_TIME      | MODIFIER | MODIFY_TIME      | DEL_FLAG |
| -- | ---- | ------------- | ---------------------------------- | ------- | -------- | ------- | ---------------- | -------- | ---------------- | -------- |
| 2  | xxx2 | HANA          | jdbc:sap://1.1.1.1:30015           | xxx     | xxx      | xxx     | 2021/11/10 20:35 | xxx      | 2021/11/9 20:35  | 0        |
| 3  | xxx3 | oracle        | jdbc:oracle:thin:@1.1.1.1:1521:xxx | xxx     | xxx      | xxx     | 2021/11/10 20:35 | xxx      | 2021/11/10 20:35 | 0        |
| 4  | xxx4 | oracle        | jdbc:oracle:thin:@1.1.1.1:1521:xxx | xxx     | xxx      | xxx     | 2021/11/11 20:35 | xxx      | 2021/11/11 20:35 | 0        |
| 5  | xxx5 | HANA          | jdbc:sap://1.1.1.1:30015           | xxx     | xxx      | xxx     | 2021/11/12 20:35 | xxx      | 2021/11/12 20:35 | 0        |
| 6  | xxx6 | oracle        | jdbc:oracle:thin:@1.1.1.1:1521:xxx | xxx     | xxx      | xxx     | 2021/11/13 20:35 | xxx      | 2021/11/13 20:35 | 0        |
| 7  | xxx7 | HANA          | jdbc:sap://1.1.1.1:30015           | xxx     | xxx      | xxx     | 2021/11/14 20:35 | xxx      | 2021/11/14 20:35 | 0        |

The project reads the data source information from this table and switches data sources at runtime. The corresponding entity class is:

package com.xxx.entity;

import lombok.Data;
import lombok.ToString;

/**
 * @Author : lgq
 * @CreateTime : 2021/11/16
 * @Description :
 **/
@Data
@ToString
public class DataSource {
    String datasourceId;
    String url;
    String userName;
    String passWord;
    String dataSourceName;
    String databaseType;
}
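
The service layer later in the article reads these records through a DataSourceMapper, which is referenced but never listed. A minimal annotation-based sketch of what it could look like, assuming the records live in a table named DATA_SOURCE with the columns shown above (both the table name and the column aliases are assumptions, not part of the original project):

package com.xxx.mapper;

import java.util.List;

import com.xxx.entity.DataSource;

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Select;

/**
 * Hypothetical mapper sketch; adjust table and column names to the real schema.
 */
@Mapper
public interface DataSourceMapper {

    /** Load every active data source record (DEL_FLAG = 0). */
    @Select("SELECT ID AS datasourceId, NAME AS dataSourceName, DATABASE_TYPE AS databaseType, " +
            "CON_STR AS url, ACCOUNT AS userName, PASSWORD AS passWord " +
            "FROM DATA_SOURCE WHERE DEL_FLAG = 0")
    List<DataSource> getAllDataSources();
}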

DynamicDataSource extends AbstractRoutingDataSource and is responsible for creating and maintaining the data sources.

package com.xxx.datasource;

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;
import java.util.Set;

import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.stat.DruidDataSourceStatManager;
import com.xxx.common.errorcode.ErrorCode;
import com.xxx.common.exception.BusinessException;
import com.xxx.entity.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.util.StringUtils;


public class DynamicDataSource extends AbstractRoutingDataSource {
    private boolean debug = true;
    public static final Logger log = LoggerFactory.getLogger(DynamicDataSource.class);
    private Map<Object, Object> dynamicTargetDataSources;
    private Object dynamicDefaultTargetDataSource;


    @Override
    protected Object determineCurrentLookupKey() {
        String datasource = DBContextHolder.getDataSource();
        if (!StringUtils.isEmpty(datasource)) {
            Map<Object, Object> dynamicTargetDataSources2 = this.dynamicTargetDataSources;
            if (dynamicTargetDataSources2.containsKey(datasource)) {
                log.info("---当前数据源:" + datasource + "---");
            } else {
                log.info("不存在的数据源:");
                throw new BusinessException(ErrorCode.DATA_SOURCE_NOT_EXIST, "数据源:" + datasource + "不存在!");
            }
        } else {
            log.info("---当前数据源:默认数据源---");
        }

        return datasource;
    }

    @Override
    public void setTargetDataSources(Map<Object, Object> targetDataSources) {

        super.setTargetDataSources(targetDataSources);

        this.dynamicTargetDataSources = targetDataSources;

    }


    // Create a data source
    public boolean createDataSource(String key, String driveClass, String url, String username, String password, String databaseType) {
        try {
            /**
             * Rule out "cannot connect" errors up front
             */
            try {
                Class.forName(driveClass);
                // Open and immediately close a probe connection to verify the database is reachable
                try (Connection ignored = DriverManager.getConnection(url, username, password)) {
                    // connectivity OK
                }
            } catch (Exception e) {
                log.error("Data source " + key + " failed to connect to the database!", e);
                return false;
            }

            @SuppressWarnings("resource")
//            HikariDataSource druidDataSource = new HikariDataSource();
            DruidDataSource druidDataSource = new DruidDataSource();
            druidDataSource.setName(key);
            druidDataSource.setDriverClassName(driveClass);
            druidDataSource.setUrl(url);
            druidDataSource.setUsername(username);
            druidDataSource.setPassword(password);
            // Number of physical connections established at initialization, which happens on an explicit init() call or on the first getConnection()
            druidDataSource.setInitialSize(1);
            // Maximum pool size
            druidDataSource.setMaxActive(20);
            // Maximum wait time when borrowing a connection, in milliseconds. Once the pool is exhausted, further requests block until a
            // connection is returned; waits that run too long should be cut off, otherwise the application may cascade into an avalanche
            druidDataSource.setMaxWait(60000);
            // Minimum pool size
            druidDataSource.setMinIdle(5);
            // Default validation query
            String validationQuery = "select 1 from dual";
            if("mysql".equalsIgnoreCase(databaseType)) {
                validationQuery = "select 1";
            } else if("oracle".equalsIgnoreCase(databaseType)){
                // Whether to cache PreparedStatements (PSCache). PSCache gives a big performance boost on cursor-based databases such as Oracle; it is recommended to disable it on MySQL.
                druidDataSource.setPoolPreparedStatements(true);
                druidDataSource.setMaxPoolPreparedStatementPerConnectionSize(50);
                int sqlQueryTimeout = 6000;
                // Long-running queries are bounded by the JDBC ReadTimeout, in milliseconds
                druidDataSource.setConnectionProperties("oracle.net.CONNECT_TIMEOUT=6000;oracle.jdbc.ReadTimeout=" + sqlQueryTimeout);
            } else if("sqlserver2000".equalsIgnoreCase(databaseType)){
                validationQuery = "select 1";
            } else if("sqlserver".equalsIgnoreCase(databaseType)){
                validationQuery = "select 1";
            } else if("hana".equalsIgnoreCase(databaseType)){
                validationQuery = "select 1 from dummy";
            } else if("pg".equalsIgnoreCase(databaseType)){
                validationQuery = "select version()";
            }

            // Run validationQuery when a connection is borrowed to make sure it is still valid; recommended to set true so that dead connections are never handed out
            druidDataSource.setTestOnBorrow(true);
            // Recommended to set true; it does not hurt performance and improves reliability. On borrow, if the connection has been idle longer than timeBetweenEvictionRunsMillis, validationQuery is executed to check it.
            druidDataSource.setTestWhileIdle(true);
            // SQL used to check whether a connection is valid; must be a query. If validationQuery is null, testOnBorrow, testOnReturn and testWhileIdle have no effect.
            druidDataSource.setValidationQuery(validationQuery);
            // Comma-separated filter aliases for extension plugins; common ones are: stat (monitoring/statistics), log4j (logging), wall (SQL-injection defense)
            druidDataSource.setFilters("stat");
            // How often the eviction thread runs to detect idle connections that should be closed, in milliseconds
            druidDataSource.setTimeBetweenEvictionRunsMillis(60000);
            // Minimum time a connection may sit idle in the pool before it becomes evictable, in milliseconds (3 minutes here)
            druidDataSource.setMinEvictableIdleTimeMillis(180000);
            // With keepAlive enabled, idle connections within minIdle whose idle time exceeds minEvictableIdleTimeMillis get a keep-alive check,
            // i.e. the validationQuery is executed. As long as minEvictableIdleTimeMillis is smaller than the firewall's idle-cut time,
            // idle connections are kept alive automatically and will not be dropped by the firewall.
            druidDataSource.setKeepAlive(true);
            // Whether to reclaim leaked (abandoned) connections that exceed the time limit
            druidDataSource.setRemoveAbandoned(true);
            // How long a connection must be held before it is considered leaked (should exceed the longest transaction), in seconds; 1 hour here
            druidDataSource.setRemoveAbandonedTimeout(3600);
            // Whether to log when a leaked connection is removed
            druidDataSource.setLogAbandoned(true);
            druidDataSource.init();

            this.dynamicTargetDataSources.put(key, druidDataSource);
            // Hand the map to the parent class as its targetDataSources
            setTargetDataSources(this.dynamicTargetDataSources);
            // Rebuild resolvedDataSources from targetDataSources
            super.afterPropertiesSet();
            log.info("Data source " + key + " initialized successfully");
            //log.info("Data source " + key + " overview: " + druidDataSource.dump());
            return true;
        } catch (Exception e) {
            log.error(e.getMessage(), e);
            throw new BusinessException(ErrorCode.DATA_SOURCE_INIT_ERROR, "Data source " + key + " failed to initialize!");
        }
    }

    // Remove a data source
    public boolean delDataSource(String dataSourceId) {
        Map<Object, Object> dynamicTargetDataSources2 = this.dynamicTargetDataSources;
        if (dynamicTargetDataSources2.containsKey(dataSourceId)) {
            Set<DruidDataSource> druidDataSourceInstances = DruidDataSourceStatManager.getDruidDataSourceInstances();
            for (DruidDataSource ds : druidDataSourceInstances) {
                if (dataSourceId.equals(ds.getName())) {
                    dynamicTargetDataSources2.remove(dataSourceId);
                    DruidDataSourceStatManager.removeDataSource(ds);
                    // Hand the map to the parent class as its targetDataSources
                    setTargetDataSources(dynamicTargetDataSources2);
                    // Rebuild resolvedDataSources from targetDataSources
                    super.afterPropertiesSet();
                    return true;
                }
            }
        }
        return false;
    }

    // Test whether a data source's connection settings are valid
    public boolean testDatasource(String key, String driveClass, String url, String username, String password) {
        try {
            Class.forName(driveClass);
            // Close the probe connection right away so it is not leaked
            try (Connection ignored = DriverManager.getConnection(url, username, password)) {
                return true;
            }
        } catch (Exception e) {
            log.error("Connectivity test failed for data source " + key, e);
            return false;
        }
    }

    @Override
    public void setDefaultTargetDataSource(Object defaultTargetDataSource) {
        super.setDefaultTargetDataSource(defaultTargetDataSource);
        this.dynamicDefaultTargetDataSource = defaultTargetDataSource;
    }

    /**
     * @param debug the debug to set
     */
    public void setDebug(boolean debug) {
        this.debug = debug;
    }

    /**
     * @return the debug
     */
    public boolean isDebug() {
        return debug;
    }

    /**
     * @return the dynamicTargetDataSources
     */
    public Map<Object, Object> getDynamicTargetDataSources() {
        return dynamicTargetDataSources;
    }

    /**
     * @param dynamicTargetDataSources the dynamicTargetDataSources to set
     */
    public void setDynamicTargetDataSources(Map<Object, Object> dynamicTargetDataSources) {
        this.dynamicTargetDataSources = dynamicTargetDataSources;
    }

    /**
     * @return the dynamicDefaultTargetDataSource
     */
    public Object getDynamicDefaultTargetDataSource() {
        return dynamicDefaultTargetDataSource;
    }

    /**
     * @param dynamicDefaultTargetDataSource the dynamicDefaultTargetDataSource to set
     */
    public void setDynamicDefaultTargetDataSource(Object dynamicDefaultTargetDataSource) {
        this.dynamicDefaultTargetDataSource = dynamicDefaultTargetDataSource;
    }

    /**
     * Check the state of a data source and (re)create it if it has not been created yet or has gone stale
     * @param dataSource the data source definition
     * @throws Exception
     */
    public void createDataSourceWithCheck(DataSource dataSource) throws Exception {
        String datasourceName = dataSource.getDataSourceName();
        log.info("Checking data source: " + datasourceName);

        Map<Object, Object> dynamicTargetDataSources2 = this.dynamicTargetDataSources;
        if (dynamicTargetDataSources2.containsKey(datasourceName)) {
            log.info("数据源" + datasourceName + "之前已经创建,准备测试数据源是否正常...");
            DruidDataSource druidDataSource = (DruidDataSource) dynamicTargetDataSources2.get(datasourceName);
            boolean rightFlag = true;
            Connection connection = null;
            try {
                log.info(datasourceName + "数据源的概况->当前闲置连接数:" + druidDataSource.getPoolingCount());
                long activeCount = druidDataSource.getActiveCount();
                log.info(datasourceName + "数据源的概况->当前活动连接数:" + activeCount);
                if (activeCount > 0) {
                    log.info(datasourceName + "数据源的概况->活跃连接堆栈信息:" + druidDataSource.getActiveConnectionStackTrace());
                }
                log.info("准备获取数据库连接...");
                connection = druidDataSource.getConnection();
                log.info("数据源" + datasourceName + "正常");
            } catch (Exception e) {
                //把异常信息打印到日志文件
                log.error(e.getMessage(), e);
                rightFlag = false;
                log.info("缓存数据源" + datasourceName + "已失效,准备删除...");
                if (delDataSource(datasourceName)) {
                    log.info("缓存数据源删除成功");
                } else {
                    log.info("缓存数据源删除失败");
                }
            } finally {
                if (null != connection) {
                    connection.close();
                }
            }
            if (rightFlag) {
                log.info("No need to recreate the data source");
                return;
            } else {
                log.info("Recreating the data source...");
                createDataSource(dataSource);
                log.info("Data source recreated");
            }
        } else {
            createDataSource(dataSource);
        }

    }

    /**
     * Create a data source
     * @param dataSource the data source definition
     */
    private void createDataSource(DataSource dataSource) {
        String datasourceName = dataSource.getDataSourceName();
        log.info("Creating data source " + datasourceName);
        String databaseType = dataSource.getDatabaseType();
        String username = dataSource.getUserName();
        String password = dataSource.getPassWord();
        String url = dataSource.getUrl();

        // Default driver class (Oracle)
        String driveClass = "oracle.jdbc.driver.OracleDriver";
        if ("mysql".equalsIgnoreCase(databaseType)) {
            driveClass = "com.mysql.jdbc.Driver";
        } else if ("gp".equalsIgnoreCase(databaseType) || "pg".equalsIgnoreCase(databaseType)) {
            driveClass = "org.postgresql.Driver";
        } else if ("sqlserver2000".equalsIgnoreCase(databaseType)) {
            driveClass = "com.microsoft.jdbc.sqlserver.SQLServerDriver";
        } else if ("sqlserver".equalsIgnoreCase(databaseType)) {
            driveClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
        } else if ("hana".equalsIgnoreCase(databaseType)) {
            driveClass = "com.sap.db.jdbc.Driver";
        }

        if (testDatasource(datasourceName, driveClass, url, username, password)) {
            boolean result = this.createDataSource(datasourceName, driveClass, url, username, password, databaseType);
            if (!result) {
                log.error("数据源" + datasourceName + "配置正确,但是创建失败");
                throw new BusinessException(ErrorCode.DATA_SOURCE_CREATE_ERROR, "数据源 "+ datasourceName +" 配置正确,但是创建失败");
            }
        } else {
            log.error("数据源配置有错误");
            throw new BusinessException(ErrorCode.DATA_SOURCE_CONFIG_ERROR, "数据源 "+ datasourceName +" 配置错误");
        }
    }

}
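
The article registers data sources lazily, inside changeDb(), the first time a source is requested. Optionally they could also be pre-registered at startup; below is a minimal sketch (not part of the original article) that reuses createDataSourceWithCheck and a DataSourceMapper like the one sketched earlier:

package com.xxx.config;

import java.util.List;

import com.xxx.datasource.DynamicDataSource;
import com.xxx.entity.DataSource;
import com.xxx.mapper.DataSourceMapper;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

/**
 * Hypothetical startup hook: registers every configured source so the first
 * switch does not pay the creation cost. The original code creates them on demand.
 */
@Component
public class DataSourcePreloader implements ApplicationRunner {

    private final DataSourceMapper dataSourceMapper;
    private final DynamicDataSource dynamicDataSource;

    public DataSourcePreloader(DataSourceMapper dataSourceMapper, DynamicDataSource dynamicDataSource) {
        this.dataSourceMapper = dataSourceMapper;
        this.dynamicDataSource = dynamicDataSource;
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        List<DataSource> sources = dataSourceMapper.getAllDataSources();
        for (DataSource source : sources) {
            // createDataSourceWithCheck only builds a pool if none exists or the cached one is broken
            dynamicDataSource.createDataSourceWithCheck(source);
        }
    }
}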

The DruidDBConfig class registers the dynamic data source bean and creates the primary data source.

package com.xxx.config;

import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import com.xxx.datasource.DynamicDataSource;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;
import org.springframework.transaction.annotation.EnableTransactionManagement;

/**
 * @Author : lgq
 * @CreateTime : 2021/11/15
 * @Description :
 **/
@Configuration
@EnableTransactionManagement
public class DruidDBConfig {
    private final Logger log = LoggerFactory.getLogger(getClass());

    // Database connection settings (the keys must match application.properties)
    @Value("${spring.datasource.druid.url}")
    private String dbUrl;
    @Value("${spring.datasource.druid.username}")
    private String username;
    @Value("${spring.datasource.druid.password}")
    private String password;
    @Value("${spring.datasource.druid.driver-class-name}")
    private String driverClassName;
    // Connection pool settings
    @Value("${spring.datasource.druid.initial-size}")
    private int initialSize;
    @Value("${spring.datasource.druid.min-idle}")
    private int minIdle;
    @Value("${spring.datasource.druid.max-active}")
    private int maxActive;
    @Value("${spring.datasource.druid.max-wait}")
    private int maxWait;

    @Bean // Declare this as a Spring bean
    @Primary // Among beans of the same DataSource type, prefer this one
    @Qualifier("mainDataSource")
    public DataSource dataSource() throws SQLException {
        DruidDataSource datasource = new DruidDataSource();
        // Basic connection settings
        datasource.setUrl(this.dbUrl);
        datasource.setUsername(username);
        datasource.setPassword(password);
        datasource.setDriverClassName(driverClassName);
        // Pool settings
        datasource.setInitialSize(initialSize);
        datasource.setMinIdle(minIdle);
        datasource.setMaxActive(maxActive);
        datasource.setMaxWait(maxWait);
        // Whether to cache PreparedStatements (PSCache). PSCache gives a big performance boost on cursor-based databases such as Oracle; it is recommended to disable it on MySQL.
        datasource.setPoolPreparedStatements(true);
        datasource.setMaxPoolPreparedStatementPerConnectionSize(20);
        // Long-running queries are bounded by the JDBC ReadTimeout, in milliseconds
        //  datasource.setConnectionProperties("oracle.net.CONNECT_TIMEOUT=6000;oracle.jdbc.ReadTimeout=60000");
        // Druid statistics settings: merge similar SQL and mark statements slower than 5000 ms as slow SQL
        datasource.setConnectionProperties("druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000");
        // Run validationQuery when a connection is borrowed to make sure it is still valid; recommended to set true so that dead connections are never handed out
        datasource.setTestOnBorrow(true);
        // Recommended to set true; it does not hurt performance and improves reliability. On borrow, if the connection has been idle longer than timeBetweenEvictionRunsMillis, validationQuery is executed to check it.
        datasource.setTestWhileIdle(true);
        String validationQuery = "select 1 from dual";
        // SQL used to check whether a connection is valid; must be a query. If validationQuery is null, testOnBorrow, testOnReturn and testWhileIdle have no effect.
        datasource.setValidationQuery(validationQuery);
        // Comma-separated filter aliases for extension plugins; common ones are: stat (monitoring/statistics), log4j (logging), wall (SQL-injection defense)
        datasource.setFilters("stat,wall");
        // How often the eviction thread runs to detect idle connections that should be closed, in milliseconds
        datasource.setTimeBetweenEvictionRunsMillis(60000);
        // Minimum time a connection may sit idle in the pool before it becomes evictable, in milliseconds (3 minutes here)
        datasource.setMinEvictableIdleTimeMillis(180000);
        // With keepAlive enabled, idle connections within minIdle whose idle time exceeds minEvictableIdleTimeMillis get a keep-alive check,
        // i.e. the validationQuery is executed. As long as minEvictableIdleTimeMillis is smaller than the firewall's idle-cut time,
        // idle connections are kept alive automatically and will not be dropped by the firewall.
        datasource.setKeepAlive(true);
        // Whether to reclaim leaked (abandoned) connections that exceed the time limit
        datasource.setRemoveAbandoned(true);
        // How long a connection must be held before it is considered leaked (should exceed the longest transaction), in seconds; 1 hour here
        datasource.setRemoveAbandonedTimeout(3600);
        // Whether to log when a leaked connection is removed
        datasource.setLogAbandoned(true);
        return datasource;

    }


    /**
     * Register a StatViewServlet: Druid monitoring page, part 1 - login credentials
     *
     * @return servlet registration bean
     */
    @Bean
    public ServletRegistrationBean druidStatViewServlet() {
        ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(
                new StatViewServlet(), "/druid/*");
        servletRegistrationBean.addInitParameter("loginUsername", "admin");
        servletRegistrationBean.addInitParameter("loginPassword", "123456");
        servletRegistrationBean.addInitParameter("resetEnable", "false");
        return servletRegistrationBean;
    }

    /**
     * Register a WebStatFilter: Druid monitoring page, part 2 - let page resources load normally
     *
     * @return filter registration bean
     */
    @Bean
    public FilterRegistrationBean druidStatFilter() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean(
                new WebStatFilter());
        // Add the filter's URL pattern
        filterRegistrationBean.addUrlPatterns("/*");
        // Paths and resource types excluded from the stat filter
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }

    @Bean(name = "dynamicDataSource")
    @Qualifier("dynamicDataSource")
    public DynamicDataSource dynamicDataSource() throws SQLException {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        dynamicDataSource.setDebug(false);
        // Default (fallback) data source: DefaultTargetDataSource
        dynamicDataSource.setDefaultTargetDataSource(dataSource());
        Map<Object, Object> targetDataSources = new HashMap<Object, Object>();
        // Additional data sources: TargetDataSources
        targetDataSources.put("mainDataSource", dataSource());
        dynamicDataSource.setTargetDataSources(targetDataSources);
        return dynamicDataSource;
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory() throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dynamicDataSource());
        // Restore camel-case column-to-property mapping, which is lost once the SqlSessionFactory is built manually
        sqlSessionFactoryBean.setConfiguration(configuration());

        // Resolver used to locate the MyBatis mapper XML files
        ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
        // Type alias package
        sqlSessionFactoryBean.setTypeAliasesPackage("com.xxx.mapper");
        // Register the mapper.xml locations manually; not needed if only annotation-based mappers are used
        sqlSessionFactoryBean.setMapperLocations(resolver.getResources("classpath:mapper/*.xml"));
        return sqlSessionFactoryBean.getObject();
    }


    /**
     * Bind MyBatis settings (e.g. camel-case mapping) from the mybatis.configuration.* properties
     *
     * @return the MyBatis Configuration
     */
    @Bean
    @ConfigurationProperties(prefix = "mybatis.configuration")
    public org.apache.ibatis.session.Configuration configuration() {
        return new org.apache.ibatis.session.Configuration();
    }


}
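
The configuration enables @EnableTransactionManagement but does not show a transaction manager. If @Transactional should work against the routed source, a minimal sketch (an addition, not part of the original article) is to bind a DataSourceTransactionManager to the dynamic data source:

package com.xxx.config;

import com.xxx.datasource.DynamicDataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

/**
 * Hypothetical configuration: transactions obtain their connection from the
 * dynamic data source, so the routing key decides which database they hit.
 */
@Configuration
public class TransactionConfig {

    @Bean
    public DataSourceTransactionManager transactionManager(DynamicDataSource dynamicDataSource) {
        return new DataSourceTransactionManager(dynamicDataSource);
    }
}

Note that the routing key has to be set before the transaction begins: the physical connection, and therefore the target database, is fixed the moment the transaction acquires it.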

The application.properties configuration file looks like this:

server.port = xxxx
server.address = 0.0.0.0
mybatis.mapper-locations = classpath:mapper/*.xml,classpath:mapper/*/*.xml,classpath:mapper/*/*/*.xml

spring.datasource.druid.url = jdbc:oracle:thin:@1.1.1.1:1521:xxx
spring.datasource.druid.username = xxx
spring.datasource.druid.password = xxx
spring.datasource.druid.driver-class-name = oracle.jdbc.driver.OracleDriver
spring.datasource.druid.type = com.alibaba.druid.pool.DruidDataSource
spring.datasource.druid.initial-size = 20
spring.datasource.druid.max-active = 20
spring.datasource.druid.min-idle = 10
spring.datasource.druid.max-wait = 100

# Redis configuration
purist.redis.enable = true
purist.redis.database = 0
purist.redis.host = 127.0.0.1
purist.redis.port = 6379
purist.redis.password = 
purist.redis.timeout = 60000
purist.redis.pool.maxActive = 8
purist.redis.pool.maxIdle = 8
purist.redis.pool.maxWait = -1
purist.redis.pool.minIdle = 0

# Map underscores to camel case, which fixes MyBatis resultType results coming back null
mybatis.configuration.map-underscore-to-camel-case = true
DBContextHolder stores the data source key in a ThreadLocal (the current thread's ThreadLocalMap), so the data source bound to the current thread is picked up automatically when a database connection is obtained.

package com.xxx.datasource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * @Author : lgq
 * @CreateTime : 2021/11/16
 * @Description :
 **/
public class DBContextHolder {
    private final static Logger log = LoggerFactory.getLogger(DBContextHolder.class);
    // Per-thread state, therefore thread-safe
    private static final ThreadLocal<String> contextHolder = new ThreadLocal<String>();

    // Call this method to switch the data source
    public static void setDataSource(String dataSource) {
        contextHolder.set(dataSource);
        log.info("Switched to data source: {}", dataSource);
    }

    // Get the current data source
    public static String getDataSource() {
        return contextHolder.get();
    }

    // Clear the data source key (falls back to the default data source)
    public static void clearDataSource() {
        contextHolder.remove();
        log.info("Switched back to the primary data source");
    }

}
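
Because the key lives in a ThreadLocal, it stays attached to the thread; on pooled or reused threads a forgotten clearDataSource() would leak the previous choice into the next request. One defensive pattern is to funnel every switch through a small helper that always clears the key. The class below is an illustrative sketch, not part of the original article:

package com.xxx.datasource;

import java.util.concurrent.Callable;

/**
 * Illustrative helper: runs a task against a given data source and guarantees
 * the ThreadLocal key is cleared afterwards, even if the task throws.
 */
public final class DataSourceSwitcher {

    private DataSourceSwitcher() {
    }

    public static <T> T runWith(String dataSourceName, Callable<T> task) throws Exception {
        DBContextHolder.setDataSource(dataSourceName);
        try {
            return task.call();
        } finally {
            // Always fall back to the primary data source
            DBContextHolder.clearDataSource();
        }
    }
}

A caller would wrap its query as DataSourceSwitcher.runWith("xxx2", () -> someMapper.query()), where someMapper stands for whatever mapper runs against that source.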

DBChangeService exposes methods for switching the database manually.

package com.xxx.service.sourcesystem;

import java.util.List;

import com.xxx.entity.DataSource;

/**
 * @Author : lgq
 * @CreateTime : 2021/11/15
 * @Description :
 **/

public interface DBChangeService {
    List<DataSource> get();

    boolean changeDb(String datasourceName) throws Exception;

}


package com.xxx.service.sourcesystem.impl;

import java.util.List;

import javax.annotation.Resource;

import com.xxx.datasource.DBContextHolder;
import com.xxx.datasource.DynamicDataSource;
import com.xxx.entity.DataSource;
import com.xxx.mapper.DataSourceMapper;
import com.xxx.service.sourcesystem.DBChangeService;

import org.springframework.stereotype.Service;

/**
 * @Author : lgq
 * @CreateTime : 2021/11/16
 * @Description :
 **/
@Service
public class DBChangeServiceImpl implements DBChangeService {

    @Resource
    DataSourceMapper dataSourceMapper;
    @Resource
    private DynamicDataSource dynamicDataSource;
    @Override
    public List<DataSource> get() {
        return dataSourceMapper.getAllDataSources();
    }

    @Override
    public boolean changeDb(String datasourceName) throws Exception {

        // Fall back to the primary data source first so the full list of sources can be looked up
        DBContextHolder.clearDataSource();

        List<DataSource> dataSourcesList = dataSourceMapper.getAllDataSources();

        for (DataSource dataSource : dataSourcesList) {
            if (dataSource.getDataSourceName().equals(datasourceName)) {
                DynamicDataSource.log.info("需要使用的的数据源已经找到,datasourceName是:" + dataSource.getDataSourceName());
                //创建数据源连接&检查 若存在则不需重新创建
                dynamicDataSource.createDataSourceWithCheck(dataSource);
                //切换到该数据源
                DBContextHolder.setDataSource(dataSource.getDataSourceName());
                return true;
            }
        }
        return false;

    }

}

In a service implementation, a query against another source only needs the data source switched before it runs and switched back afterwards:

dbChangeService.changeDb(sourceSystemName);

/*
 * ...business logic and query operations...
 */

// Switch back to the primary data source
DBContextHolder.clearDataSource();

Finally, here are the mapper statements for querying schema and table information on HANA, GP, and Oracle.

<!-- gp (PostgreSQL/Greenplum system catalogs) -->

<!-- gp: query schemas -->
<select id="getSchemaNameList"  resultType="java.lang.String">
    SELECT schema_name FROM information_schema.schemata
       limit #{pageSize,jdbcType=DECIMAL} offset #{pageNo,jdbcType=DECIMAL}
</select>
<select id="getSchemaNameTotal"  resultType="java.lang.Long">
    SELECT count(schema_name) FROM information_schema.schemata
</select>

<!-- gp: query tables -->
<select id="getTableNameList"  resultType="java.lang.String">
    SELECT tablename FROM pg_tables WHERE schemaname = #{schemaName}
</select>
<select id="getTableNameTotal"  resultType="java.lang.Long">
    SELECT count(tablename) FROM pg_tables WHERE schemaname = #{schemaName}
</select>

<!-- oracle -->

<!-- oracle: query schemas -->
<select id="getSchemaNameList" resultType="java.lang.String">
     SELECT owner FROM (SELECT r.*, ROWNUM ROW_ID FROM (SELECT DISTINCT owner FROM ALL_TABLES) r)
     WHERE ROW_ID &lt;= #{pageSize,jdbcType=DECIMAL}
     AND ROW_ID >= #{pageNo,jdbcType=DECIMAL}
</select>
<select id="getSchemaNameTotal" resultType="java.lang.Long">
     SELECT count(DISTINCT owner)
     FROM ALL_TABLES
</select>

<!-- oracle: query tables -->
<select id="getTableNameList"  resultType="java.lang.String" >
     SELECT table_name FROM (SELECT r.*, ROWNUM ROW_ID FROM (SELECT table_name  FROM ALL_TABLES WHERE owner = #{schemaName, jdbcType=VARCHAR})
        r) WHERE ROW_ID &lt;= #{pageSize, jdbcType=DECIMAL} AND ROW_ID >= #{pageNo, jdbcType=DECIMAL}
</select>
<select id="getTableNameTotal"  resultType="java.lang.Long" >
    SELECT count(table_name)  FROM ALL_TABLES WHERE owner = #{schemaName, jdbcType=VARCHAR}
</select>

<!-- hana-->

<!-- hana: query schemas -->
<select id="getSchemaNameList"  resultType="java.lang.String">
    SELECT DISTINCT SCHEMA_NAME FROM "SYS"."M_TABLES"
    limit #{pageSize,jdbcType=DECIMAL} offset #{pageNo,jdbcType=DECIMAL}
</select>
<select id="getSchemaNameTotal"  resultType="java.lang.Long">
    SELECT count(DISTINCT SCHEMA_NAME) FROM "SYS"."M_TABLES"
</select>

<!-- hana: query tables -->
<select id="getTableNameList"  resultType="java.lang.String">
    SELECT TABLE_NAME FROM "SYS"."M_TABLES" WHERE SCHEMA_NAME = #{schemaName, jdbcType=VARCHAR}
    limit #{pageSize,jdbcType=DECIMAL} offset #{pageNo,jdbcType=DECIMAL}
</select>
<select id="getTableNameTotal"  resultType="java.lang.Long">
    SELECT count(TABLE_NAME) FROM "SYS"."M_TABLES" WHERE SCHEMA_NAME = #{schemaName}
</select>
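
The mapper interface behind these statements is not shown in the article. Judging from the statement ids and parameters above, it presumably looks roughly like the sketch below (the interface name and parameter types are assumptions; there would be one such mapper, with its own XML file, per database type):

package com.xxx.mapper;

import java.util.List;

import org.apache.ibatis.annotations.Param;

/**
 * Hypothetical interface matching the statement ids used in the XML above.
 */
public interface SourceMetaMapper {

    /** Page through the schema names of the currently routed data source. */
    List<String> getSchemaNameList(@Param("pageSize") long pageSize, @Param("pageNo") long pageNo);

    /** Total number of distinct schemas. */
    Long getSchemaNameTotal();

    /** Table names inside one schema (paged on HANA and Oracle). */
    List<String> getTableNameList(@Param("schemaName") String schemaName,
                                  @Param("pageSize") long pageSize,
                                  @Param("pageNo") long pageNo);

    /** Total number of tables inside one schema. */
    Long getTableNameTotal(@Param("schemaName") String schemaName);
}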
