I have an MQTT client that receives requests by subscribing to topics and hands them off to a fixed-size thread pool of 50. I'm using HikariCP 2.4.2 for DB pooling against a MySQL database.
I'm currently on 2.4.2, and this is my setup:
HikariConfig config = new HikariConfig();
// Note: dataSourceClassName and jdbcUrl are alternative ways to point Hikari
// at the database; normally only one of the two is set.
config.setDataSourceClassName(CLASS_FOR_NAME);
config.setJdbcUrl(HOST);
config.setUsername(USER);
config.setPassword(PASS);
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
config.setLeakDetectionThreshold(TimeUnit.SECONDS.toMillis(30)); // 30 seconds
config.setValidationTimeout(TimeUnit.MINUTES.toMillis(1));       // 1 minute
config.setMaximumPoolSize(10);
config.setMinimumIdle(0);
config.setMaxLifetime(TimeUnit.MINUTES.toMillis(2));             // 2 minutes
config.setIdleTimeout(TimeUnit.MINUTES.toMillis(1));             // 1 minute
config.setConnectionTimeout(TimeUnit.MINUTES.toMillis(5));       // 5 minutes
config.setConnectionTestQuery("/* ping */ SELECT 1");
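For context, here is a minimal sketch of how a config like the one above is typically turned into a pool and used at a call site such as the `getConnection` in the stack trace below. The URL and credentials are placeholders (the snippet's `HOST`/`USER`/`PASS` constants are not shown), and the `SELECT 1` query is purely illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSketch {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details -- substitute your own.
        // Set either jdbcUrl (as here) OR dataSourceClassName, not both.
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
        config.setUsername("user");
        config.setPassword("pass");
        config.setMaximumPoolSize(10);

        // try-with-resources returns the connection to the pool even if the
        // query throws -- exactly the guarantee the leak detector watches for.
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // read rs.getInt(1) ...
            }
        }
    }
}
```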
Here's the full log message:

WARNLOG:
811439 [Hikari housekeeper (pool HikariPool-0)] WARN com.zaxxer.hikari.pool.ProxyLeakTask - Connection leak detection triggered for connection com.mysql.jdbc.JDBC4Connection@11d0896, stack trace follows
java.lang.Exception: Apparent connection leak detected
    at com.hcpdatabase.DataSource.getConnection(DataSource.java:69)
    at com.database.AccessDatabase.create_alert(AccessDatabase.java:3849)
    at com.runnable.StartTaskRunnable2.execute(StartTaskRunnable2.java:78)
Is this normal? Do I have to catch this?
2 Answers
#1
4
Having reviewed my code over and over again, I came to realize I was barking up the wrong tree; Hikari seems very reliable when it comes to connection leaks. The problem was that the Amazon AWS EC2 instance was stealing more of my CPU than I thought. Once CPU usage climbs to 99%, a connection leak is reported even though my code clearly closes the connection in a finally block. So the problem lies with the machine.
Thanks to everyone who took the time to answer.
#2
1
Walk through the code with the stack trace; it will lead you either to a connection that is never closed, or to one held longer than the leak detection threshold.