Java Database Connections #03# HikariCP


Why use a database connection pool?

If we analyze the steps involved in a typical "connect to the database" operation, it becomes clear why:

  1. Open a connection to the database using the database driver
  2. Open a TCP socket for reading/writing data
  3. Read/write data over the socket
  4. Close the connection
  5. Close the socket

Clearly, "connecting to the database" is a fairly expensive operation, so we should try to reduce or avoid it wherever possible.

This is where database connection pooling comes in. By simply implementing a container of database connections that lets us reuse a number of existing connections, we avoid paying the cost of a large number of expensive connection setups, which improves the overall performance of a database-driven application.

↑ Translated, with some omissions and edits, from A Simple Guide to Connection Pooling in Java
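To make the idea concrete, here is a minimal hand-rolled sketch of such a "container of reusable connections". It only illustrates the concept, not how HikariCP works internally; NaivePool and all of its parameters are made up for this example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy pool: open N connections up front, hand them out, take them back.
public class NaivePool {

    private final BlockingQueue<Connection> idle;

    public NaivePool(String url, String user, String pass, int size) throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url, user, pass)); // pay the setup cost once
        }
    }

    public Connection borrow() throws InterruptedException {
        return idle.take();  // blocks until a connection is free
    }

    public void giveBack(Connection conn) {
        idle.offer(conn);    // reuse instead of closing
    }
}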

HikariCP quick start

HikariCP is a lightweight, high-performance JDBC connection pool. GitHub: https://github.com/brettwooldridge/HikariCP

1. Dependencies

  1. HikariCP
  2. slf4j (it runs even without a logging implementation)
  3. logback-core
  4. logback-classic

Items 1 and 2, plus the JDBC driver for your database, are required; any other logging implementation can be used instead of logback.

2. A simple scratch program

package org.sample.dao;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.sample.entity.Profile;
import org.sample.exception.DaoException;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Test {

    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;

    static {
        config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/profiles?characterEncoding=utf8");
        config.setUsername("root");
        config.setPassword("???????");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        ds = new HikariDataSource(config);
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }

    private Test() {
    }

    public static void main(String[] args) {
        Profile profile = new Profile();
        profile.setUsername("testname3");
        profile.setPassword("123");
        profile.setNickname("testnickname");
        int i = 0;
        // try-with-resources so the connection is returned to the pool
        try (Connection conn = Test.getConnection()) {
            // With IGNORE, a duplicate key does not throw; executeUpdate() simply returns 0
            String sql = "INSERT IGNORE INTO `profiles`.`profile` (`username`, `password`, `nickname`) " +
                    "VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, profile.getUsername());
                ps.setString(2, profile.getPassword());
                ps.setString(3, profile.getNickname());
                i = ps.executeUpdate();
            }
        } catch (SQLException e) {
            throw new DaoException(e);
        }
        System.out.println(i);
    }
}

3. Pool configuration parameters (only the common ones)

On a four-core machine, can you basically keep all the defaults? (A programmatic sketch of these settings follows the list below.)

autoCommit: controls the default auto-commit state of connections returned by the pool. Default: true.
connectionTimeout: the maximum time to wait when no connection is available; when it elapses, getConnection() throws a SQLException. The minimum allowed value is 250 ms; the default is 30000 (30 seconds).
maximumPoolSize: the maximum number of connections the pool may hold, counting both in-use and idle connections. In practice this determines the maximum number of actual connections between the application and the database, and the right value depends on your execution environment. When the pool is at maximumPoolSize and no connection is idle, getConnection() blocks, for at most connectionTimeout. Choosing a good value involves quite a few factors; see About Pool Sizing for details. A simple rule of thumb is connections = ((core_count * 2) + effective_spindle_count); for example, four cores and one disk gives (4 * 2) + 1 = 9. Default: 10.
minimumIdle: the minimum number of idle connections to maintain. When the number of idle connections drops below minimumIdle and the total number of connections is below maximumPoolSize, HikariCP makes a best effort to add new connections. For performance reasons it is recommended not to set this value and to let HikariCP run the pool at a fixed size; the default equals maximumPoolSize.
maxLifetime: the maximum lifetime of a connection in the pool. An in-use connection is never retired; it is only removed after it has been closed (returned). A minor negative attenuation is applied so that the pool's connections do not all expire at once. Setting this value is strongly recommended, and it should be several seconds shorter than any connection time limit imposed by the database. A value of 0 means an infinite lifetime, still subject to idleTimeout. Default: 1800000 (30 minutes). (I don't fully understand this; isn't MySQL's own limit 8 hours by default???)
idleTimeout: the maximum time a connection may sit idle in the pool. Once the number of idle connections exceeds minimumIdle, any connection that stays idle longer than idleTimeout is removed. This setting only takes effect when minimumIdle is less than maximumPoolSize. Default: 600000 (10 minutes).
poolName: a user-defined name for the pool, shown mainly in logging and JMX management consoles to identify the pool and its configuration. The default is auto-generated by HikariCP.
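For reference, a hedged sketch of setting these parameters programmatically on a HikariConfig; the JDBC URL, credentials, and pool name are placeholders, not values from this post.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSettingsExample {

    public static HikariDataSource buildDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/profiles?characterEncoding=utf8"); // placeholder
        config.setUsername("root");           // placeholder
        config.setPassword("secret");         // placeholder

        config.setAutoCommit(true);           // the default is already true
        config.setConnectionTimeout(30_000);  // ms; SQLException after 30 s with no free connection
        config.setMaximumPoolSize(10);        // e.g. (4 cores * 2) + 1 disk = 9, close to the default 10
        // config.setMinimumIdle(10);         // usually left unset so the pool stays fixed-size
        config.setMaxLifetime(1_800_000);     // 30 min; keep it shorter than the DB-side limit
        config.setIdleTimeout(600_000);       // 10 min; only relevant when minimumIdle < maximumPoolSize
        config.setPoolName("profiles-pool");  // placeholder name

        return new HikariDataSource(config);
    }
}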

4. MySQL configuration

See MySQL Configuration.

jdbcUrl=jdbc:mysql://127.0.0.1:3306/profiles?characterEncoding=utf8
username=root
password=test
dataSource.cachePrepStmts=true
dataSource.prepStmtCacheSize=250
dataSource.prepStmtCacheSqlLimit=2048
dataSource.useServerPrepStmts=true
dataSource.useLocalSessionState=true
dataSource.rewriteBatchedStatements=true
dataSource.cacheResultSetMetadata=true
dataSource.cacheServerConfiguration=true
dataSource.elideSetAutoCommits=true
dataSource.maintainTimeStats=false

5. Reworking the code from Java Database Connections #02

① HikariCPDataSource.java; hikari.properties is as shown above.

package org.sample.db;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class HikariCPDataSource {

    private static final String HIKARI_PROPERTIES_FILE_PATH = "/hikari.properties";
    private static HikariConfig config = new HikariConfig(HIKARI_PROPERTIES_FILE_PATH);
    private static HikariDataSource ds = new HikariDataSource(config);

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }
}

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

② ConnectionFactory.java

package org.sample.db;

import java.sql.Connection;
import java.sql.SQLException;

/**
 * Thread-local version (backed by the connection pool): each thread reuses a single Connection.
 */
public class ConnectionFactory {

    private ConnectionFactory() {
        // Exists to defeat instantiation
    }

    private static final ThreadLocal<Connection> LocalConnectionHolder = new ThreadLocal<>();

    public static Connection getConnection() throws SQLException {
        Connection conn = LocalConnectionHolder.get();
        if (conn == null || conn.isClosed()) {
            conn = HikariCPDataSource.getConnection();
            LocalConnectionHolder.set(conn);
        }
        return conn;
    }

    public static void removeLocalConnection() {
        LocalConnectionHolder.remove();
    }
}
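For illustration, a hedged sketch of roughly how a DAO method (like the saveProfile used in the test below) might use this factory. ProfileDaoSketch is a made-up stand-in for the real ProfileDAOImpl from #02. Note that the DAO deliberately does not close the connection: releasing it is the caller's job, via ConnectionProxy (shown next).

package org.sample.dao;

import org.sample.db.ConnectionFactory;
import org.sample.entity.Profile;
import org.sample.exception.DaoException;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical stand-in for ProfileDAOImpl from post #02.
public class ProfileDaoSketch {

    public int saveProfile(Profile profile) {
        String sql = "INSERT IGNORE INTO `profiles`.`profile` (`username`, `password`, `nickname`) VALUES (?, ?, ?)";
        try {
            // Borrow this thread's connection; NOT closed here --
            // the service layer releases it through ConnectionProxy.close().
            Connection conn = ConnectionFactory.getConnection();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, profile.getUsername());
                ps.setString(2, profile.getPassword());
                ps.setString(3, profile.getNickname());
                return ps.executeUpdate();
            }
        } catch (SQLException e) {
            throw new DaoException(e);
        }
    }
}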

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

③ ConnectionProxy.java (the code layering here is wrong!)

package org.sample.manager;

import org.sample.db.ConnectionFactory;
import org.sample.exception.DaoException;

import java.sql.Connection;

/**
 * Companion to the thread-local ConnectionFactory; makes transaction control
 * convenient in the Service layer.
 */
public class ConnectionProxy {

    public static void setAutoCommit(boolean autoCommit) {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.setAutoCommit(autoCommit);
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    public static void commit() {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.commit();
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    public static void rollback() {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.rollback();
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    public static void close() {
        try {
            Connection conn = ConnectionFactory.getConnection();
            conn.close();
            ConnectionFactory.removeLocalConnection();
        } catch (Exception e) {
            throw new DaoException(e);
        }
    }

    // TODO: set the transaction isolation level
}
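To show how the pieces are meant to fit together, here is a hedged sketch of a service-layer method controlling a transaction through ConnectionProxy. ProfileService, saveBoth, and the "save two profiles atomically" scenario are made up for illustration; ProfileDAO and ProfileDAOImpl are assumed to be the ones from #02 that the test below also uses, and DaoException is assumed to be unchecked.

package org.sample.service;

import org.sample.dao.ProfileDAO;
import org.sample.dao.impl.ProfileDAOImpl;
import org.sample.entity.Profile;
import org.sample.manager.ConnectionProxy;

// Hypothetical service class, only to illustrate the intended usage pattern.
public class ProfileService {

    private static final ProfileDAO PROFILE_DAO = ProfileDAOImpl.INSTANCE;

    public void saveBoth(Profile a, Profile b) {
        try {
            ConnectionProxy.setAutoCommit(false);  // begin a transaction on this thread's connection
            PROFILE_DAO.saveProfile(a);
            PROFILE_DAO.saveProfile(b);
            ConnectionProxy.commit();              // either both rows land...
        } catch (RuntimeException e) {
            ConnectionProxy.rollback();            // ...or neither does
            throw e;
        } finally {
            ConnectionProxy.close();               // return the connection to the pool, clear the ThreadLocal
        }
    }
}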

Everywhere else, just replace LocalConnectionFactory with ConnectionFactory and LocalConnectionProxy with ConnectionProxy. If we later switch to a different connection pool, only a small piece of code inside ConnectionFactory.java needs to change.

6. Tests

package org.sample.manager;

import org.junit.Test;
import org.sample.dao.ProfileDAO;
import org.sample.dao.impl.ProfileDAOImpl;
import org.sample.entity.Profile;
import org.sample.exception.DaoException;

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;

import static org.junit.Assert.assertTrue;

public class DaoTest {

    private static final Logger LOGGER = Logger.getLogger(DaoTest.class.getName());

    private static final String ORIGIN_STRING = "hello";

    private static String RandomString() {
        return Math.random() + ORIGIN_STRING + Math.random();
    }

    private static Profile RandomProfile() {
        Profile profile = new Profile(RandomString(), ORIGIN_STRING, RandomString());
        return profile;
    }

    private static final ProfileDAO PROFILE_DAO = ProfileDAOImpl.INSTANCE;

    private class Worker implements Runnable {

        private final Profile profile = RandomProfile();

        @Override
        public void run() {
            LOGGER.info(Thread.currentThread().getName() + " has started his work");
            try {
                // ConnectionProxy.setAutoCommit(false);
                PROFILE_DAO.saveProfile(profile);
                // ConnectionProxy.commit();
            } catch (DaoException e) {
                e.printStackTrace();
            } finally {
                try {
                    ConnectionProxy.close();
                } catch (DaoException e) {
                    e.printStackTrace();
                }
            }
            LOGGER.info(Thread.currentThread().getName() + " has finished his work");
        }
    }

    /**
     * NUM_TASKS is the number of concurrent threads.
     * -- Without a connection pool:
     *    numTasks <= 100 runs fine; 100 tasks take roughly 550 ms ~ 600 ms
     *    numTasks > 100 usually fails with "too many connections" (occasionally it doesn't);
     *    this limit comes from the MySQL server itself
     * -- With the connection pool:
     *    numTasks > 10000 still runs fine; 10000 tasks take roughly 26 s (pool size 10)
     */
    private static final int NUM_TASKS = 2000;

    @Test
    public void test() throws Exception {
        List<Runnable> workers = new LinkedList<>();
        for (int i = 0; i != NUM_TASKS; ++i) {
            workers.add(new Worker());
        }
        assertConcurrent("Dao test ", workers, Integer.MAX_VALUE);
    }

    public static void assertConcurrent(final String message, final List<? extends Runnable> runnables,
                                        final int maxTimeoutSeconds) throws InterruptedException {
        final int numThreads = runnables.size();
        final List<Throwable> exceptions = Collections.synchronizedList(new ArrayList<Throwable>());
        final ExecutorService threadPool = Executors.newFixedThreadPool(numThreads);
        try {
            final CountDownLatch allExecutorThreadsReady = new CountDownLatch(numThreads);
            final CountDownLatch afterInitBlocker = new CountDownLatch(1);
            final CountDownLatch allDone = new CountDownLatch(numThreads);
            for (final Runnable submittedTestRunnable : runnables) {
                threadPool.submit(new Runnable() {
                    public void run() {
                        allExecutorThreadsReady.countDown();
                        try {
                            afterInitBlocker.await();
                            submittedTestRunnable.run();
                        } catch (final Throwable e) {
                            exceptions.add(e);
                        } finally {
                            allDone.countDown();
                        }
                    }
                });
            }
            // wait until all threads are ready
            assertTrue("Timeout initializing threads! Perform long lasting initializations before passing runnables to assertConcurrent",
                    allExecutorThreadsReady.await(runnables.size() * 10, TimeUnit.MILLISECONDS));
            // start all test runners
            afterInitBlocker.countDown();
            assertTrue(message + " timeout! More than " + maxTimeoutSeconds + " seconds",
                    allDone.await(maxTimeoutSeconds, TimeUnit.SECONDS));
        } finally {
            threadPool.shutdownNow();
        }
        assertTrue(message + " failed with exception(s) " + exceptions, exceptions.isEmpty());
    }
}

I had planned to tweak the pool parameters and observe the effect on performance, but it turned out that even with the parameters unchanged, the run time fluctuates quite a bit. So I'm leaving it here for now; the exact cause remains to be investigated!