I'm developing a simple Java EE 5 "routing" application. Different messages from an MQ queue are first transformed and then, according to the value of a certain field, stored in different data sources (stored procedures in the different data sources need to be called).
For example valueX -> dataSource1, valueY -> dataSource2. All data sources are set up in the application server under different JNDI entries. Since the routing info usually won't change while the app is running, is it safe to cache the datasource lookups? For example, I would implement a singleton that holds a HashMap in which I store valueX -> DataSource1. When an entry is not yet in the map, I would do the resource lookup and store the result there. Do I gain any performance with the cache, or are these resource lookups fast enough?
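Roughly, what I have in mind is something like the following sketch (the class name and the choice of ConcurrentHashMap are just for illustration; the cache is keyed by JNDI name here):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public final class DataSourceCache {

        private static final DataSourceCache INSTANCE = new DataSourceCache();

        // JNDI name -> already-looked-up DataSource
        private final ConcurrentMap<String, DataSource> cache =
                new ConcurrentHashMap<String, DataSource>();

        private DataSourceCache() { }

        public static DataSourceCache getInstance() {
            return INSTANCE;
        }

        public DataSource get(String jndiName) throws NamingException {
            DataSource ds = cache.get(jndiName);
            if (ds == null) {
                // Not cached yet: do the JNDI lookup and remember the result.
                ds = (DataSource) new InitialContext().lookup(jndiName);
                DataSource previous = cache.putIfAbsent(jndiName, ds);
                if (previous != null) {
                    ds = previous; // another thread won the race; reuse its entry
                }
            }
            return ds;
        }
    }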
In general, what's the best way to build this kind of cache? I could use a cache for some other DB lookups too. For example, the mapping valueX -> resource name is defined in a simple table in a DB. Is it better to look up the values on demand and save the results in a map, to do a lookup every time, or even to read and cache all entries on startup? Do I need to synchronize access? Can I just create an "enum" singleton implementation?
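For the DB-defined mapping and the "enum" singleton question, I imagine something like this sketch, where all entries are read once on first use (the JNDI name, table and column names are made up):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public enum RoutingConfig {

        INSTANCE;

        // Populated once in the constructor; afterwards it is only read,
        // so no further synchronization is needed.
        private final Map<String, String> valueToJndiName = new HashMap<String, String>();

        RoutingConfig() {
            try {
                DataSource configDs = (DataSource)
                        new InitialContext().lookup("jdbc/configDS"); // hypothetical JNDI name
                Connection con = configDs.getConnection();
                try {
                    Statement st = con.createStatement();
                    ResultSet rs = st.executeQuery(
                            "SELECT MSG_VALUE, JNDI_NAME FROM ROUTING_CONFIG"); // hypothetical table
                    while (rs.next()) {
                        valueToJndiName.put(rs.getString(1), rs.getString(2));
                    }
                } finally {
                    con.close();
                }
            } catch (NamingException e) {
                throw new IllegalStateException("Routing config lookup failed", e);
            } catch (SQLException e) {
                throw new IllegalStateException("Routing config load failed", e);
            }
        }

        public String jndiNameFor(String routingValue) {
            return valueToJndiName.get(routingValue);
        }
    }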
1 solution
#1
It is safe from an operational/change-management point of view, but not from a programmer's point of view.
From a programmer's PoV, the DataSource configuration can be changed at runtime, and therefore one should always repeat the lookup.
But this is not how things happen in real life.
When a change to a DataSource is to be implemented, it is done via a change-management procedure. There is a change request (c/r) record, and that record states that the application will have downtime. In other words, the operations folks executing the c/r will bring the application down, make the change and bring it back up. Nobody makes changes like this on a live AS -- for safety reasons. As a result, you shouldn't take into account the possibility that a DS changes at runtime.
So any permanent, synchronized, shared cache is fine in this case.
Will you get a performance boost? That depends on the AS implementation. It is likely to have a cache of its own, but that cache may be more generic and therefore slower, and in fact you cannot count on its presence at all.
Do you need to build a cache? The answer usually comes from performance tests. If there is no problem, why waste time and introduce risks?
In summary: yes, build a simple cache and use it -- if it is justified by the performance increase.
The specifics of the implementation depend on your preferences. I usually have a cache that does lookups on demand and holds a synchronized map of jndi -> object inside. For a high-concurrency cache I'd use read/write locks instead of naive synchronized blocks -- i.e. many reads can go in parallel, while adding a new entry gets exclusive access. But those are details that depend heavily on the application; a sketch follows below.
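For illustration only, such an on-demand cache with read/write locks could look roughly like this (the class name is made up and exception handling is kept minimal):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public final class JndiCache {

        private final Map<String, DataSource> cache = new HashMap<String, DataSource>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public DataSource lookup(String jndiName) throws NamingException {
            // Fast path: many threads may read in parallel.
            lock.readLock().lock();
            try {
                DataSource ds = cache.get(jndiName);
                if (ds != null) {
                    return ds;
                }
            } finally {
                lock.readLock().unlock();
            }

            // Slow path: exclusive access while the new entry is added.
            lock.writeLock().lock();
            try {
                DataSource ds = cache.get(jndiName); // re-check, another thread may have added it
                if (ds == null) {
                    ds = (DataSource) new InitialContext().lookup(jndiName);
                    cache.put(jndiName, ds);
                }
                return ds;
            } finally {
                lock.writeLock().unlock();
            }
        }
    }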