Java Multithreading -- Performance Comparison of Map Containers
Single-threaded
In a single-threaded environment you can use HashMap or TreeMap. Iterating over a TreeMap returns entries sorted by key.
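A tiny example of that difference in iteration order (the class name and values are just illustrative): TreeMap always iterates keys in ascending order, while HashMap's order is unspecified.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class IterationOrderDemo {
    public static void main(String[] args) {
        Map<Integer, String> hashMap = new HashMap<>();
        Map<Integer, String> treeMap = new TreeMap<>();
        int[] keys = {42, 7, 19, 3};
        for (int k : keys) {
            hashMap.put(k, "v" + k);
            treeMap.put(k, "v" + k);
        }
        // HashMap: iteration order is unspecified and may vary between runs/JDKs
        System.out.println("HashMap keys: " + hashMap.keySet());
        // TreeMap: always ascending key order -> [3, 7, 19, 42]
        System.out.println("TreeMap keys: " + treeMap.keySet());
    }
}
```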
Test method
Record the time, in milliseconds, to write N entries into the map.
Record the time, in milliseconds, to read 100K entries from a map holding N entries.
N = 250K, 500K, 750K, 1M
Test results
| Write N entries (ms) | 250K | 500K | 750K | 1M |
| --- | --- | --- | --- | --- |
| HashMap | 28 | 49 | 72 | 92 |
| TreeMap | 131 | 321 | 527 | 748 |

| Read 100K entries from a map of N entries (ms) | 250K | 500K | 750K | 1M |
| --- | --- | --- | --- | --- |
| HashMap | 4 | 5 | 5 | 5 |
| TreeMap | 38 | 42 | 47 | 47 |
Analysis
TreeMap is implemented as a red-black tree, so a lookup is a binary-search-style descent with O(log n) time complexity even in the worst case; for N = 1M that is roughly log2(1,000,000) ≈ 20 comparisons per operation. HashMap lookups are O(1) on average (degrading only when many keys collide in the same bucket).
The test results bear this out: TreeMap is considerably slower than HashMap for both reads and writes.
So in a single-threaded environment, unless you need iteration to return results sorted by key, you should use HashMap.
Multi-threaded
In a multi-threaded environment, the following four map containers can be used (see the construction snippet after the list).
1) Collections.synchronizedMap(new HashMap())
2) ConcurrentHashMap
3) Collections.synchronizedSortedMap(new TreeMap())
4) ConcurrentSkipListMap
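For reference, a minimal sketch of how the four containers above are constructed and what interface each exposes (the Integer element type simply mirrors this article's test program):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class MapVariants {
    public static void main(String[] args) {
        // 1) A HashMap wrapped so every method synchronizes on the wrapper
        Map<Integer, Integer> syncHashMap =
                Collections.synchronizedMap(new HashMap<Integer, Integer>());

        // 2) A hash map designed for concurrent access
        Map<Integer, Integer> concurrentHashMap = new ConcurrentHashMap<>();

        // 3) A TreeMap wrapped with synchronized methods; still sorted by key
        SortedMap<Integer, Integer> syncTreeMap =
                Collections.synchronizedSortedMap(new TreeMap<Integer, Integer>());

        // 4) A concurrent skip list; sorted by key and built for concurrent use
        ConcurrentNavigableMap<Integer, Integer> skipListMap = new ConcurrentSkipListMap<>();

        syncHashMap.put(1, 1);
        concurrentHashMap.put(1, 1);
        syncTreeMap.put(1, 1);
        skipListMap.put(1, 1);
        System.out.println(syncHashMap + " " + concurrentHashMap + " "
                + syncTreeMap + " " + skipListMap);
    }
}
```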
Test method
First start N writer threads; each writer thread writes 1M/N entries into the map.
Then start N reader threads; each reader thread reads 100K entries from the map, which now holds 1M entries.
Record each thread's write/read time, in milliseconds.
The tests were run on an 8-core Intel CPU.
Test results
Per-thread write times (ms):

| Threads (N) | 1. synchronizedHashMap | 2. ConcurrentHashMap | 3. synchronizedTreeMap | 4. ConcurrentSkipListMap |
| --- | --- | --- | --- | --- |
| 1 | 104 | 135 | 760 | 1349 |
| 2 | 168, 169 | 84, 97 | 819, 873 | 676, 682 |
| 4 | 175, 187, 188, 189 | 50, 52, 54, 55 | 890, 917, 928, 944 | 365, 368, 385, 386 |
| 8 | 174, 174, 175, 178, 178, 179, 178, 178 | 55, 32, 56, 56, 57, 56, 56, 58 | 807, 821, 869, 904, 914, 933, 938, 941 | 193, 194, 201, 209, 217, 222, 250, 285 |

Per-thread read times (ms):

| Threads (N) | 1. synchronizedHashMap | 2. ConcurrentHashMap | 3. synchronizedTreeMap | 4. ConcurrentSkipListMap |
| --- | --- | --- | --- | --- |
| 1 | 9 | 10 | 50 | 156 |
| 2 | 20, 24 | 11, 11 | 125, 124 | 152, 153 |
| 4 | 49, 52, 53, 60 | 11, 12, 15, 15 | 271, 277, 280, 283 | 174, 178, 177, 179 |
| 8 | 112, 114, 116, 117, 118, 175, 176, 176 | 13, 13, 13, 14, 14, 15, 16, 14 | 565, 584, 594, 614, 615, 619, 679, 686 | 177, 177, 179, 180, 180, 186, 240, 256 |
Analysis
Collections.synchronizedMap and Collections.synchronizedSortedMap simply wrap the map passed in and synchronize every method on a single mutex.
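A minimal sketch of that wrapping idea (not the JDK's actual implementation): because every call funnels through one mutex, readers block each other as well as writers.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of what a synchronized wrapper does conceptually:
// every operation synchronizes on the same mutex object.
class SynchronizedMapSketch<K, V> {
    private final Map<K, V> delegate = new HashMap<>();
    private final Object mutex = new Object();

    public V put(K key, V value) {
        synchronized (mutex) {      // writers serialize here
            return delegate.put(key, value);
        }
    }

    public V get(K key) {
        synchronized (mutex) {      // reads take the same lock, so they serialize too
            return delegate.get(key);
        }
    }
}
```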
ConcurrentHashMap's implementation uses lock striping (segment locks in older JDKs; CAS plus per-bin locking since Java 8) and other techniques: reads require no locking under concurrent access, and writes are also heavily optimized.
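A toy illustration of the lock-striping idea (this is not ConcurrentHashMap's actual code; the stripe count of 16 is an arbitrary assumption): writes to keys that hash to different stripes can proceed in parallel instead of all contending on one mutex.

```java
import java.util.HashMap;
import java.util.Map;

// Toy lock-striped map: N independent locks guard N independent segments,
// so writers only contend when their keys land in the same stripe.
class StripedMapSketch<K, V> {
    private static final int STRIPES = 16;

    @SuppressWarnings("unchecked")
    private final Map<K, V>[] segments = new Map[STRIPES];
    private final Object[] locks = new Object[STRIPES];

    StripedMapSketch() {
        for (int i = 0; i < STRIPES; i++) {
            segments[i] = new HashMap<>();
            locks[i] = new Object();
        }
    }

    private int stripeOf(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public V put(K key, V value) {
        int s = stripeOf(key);
        synchronized (locks[s]) {   // only this stripe is locked
            return segments[s].put(key, value);
        }
    }

    public V get(K key) {
        int s = stripeOf(key);
        synchronized (locks[s]) {   // the real ConcurrentHashMap avoids even this lock for reads
            return segments[s].get(key);
        }
    }
}
```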
ConcurrentSkipListMap is built on the skip list data structure, which is naturally suited to concurrent operation; it likewise allows lock-free reads under multiple threads.
Like TreeMap, its lookups are O(log n) (a binary-search-style descent through the skip list's levels), and iteration returns results sorted by key.
In the single-threaded case (N=1), synchronizedHashMap unsurprisingly beats ConcurrentHashMap, and synchronizedTreeMap beats ConcurrentSkipListMap.
As the thread count grows, ConcurrentHashMap outperforms synchronizedHashMap on both reads and writes.
Because reads take no lock, the read times of ConcurrentHashMap and ConcurrentSkipListMap stay essentially flat as the thread count increases,
whereas synchronizedHashMap and synchronizedTreeMap, which must lock for reads as well, see read times grow with the thread count.
Notably, at 8 threads ConcurrentSkipListMap's read and write performance is already close to synchronizedHashMap's; with even more threads, ConcurrentSkipListMap would likely overtake it.
So in a multi-threaded environment:
if you do not need iteration to return results sorted by key, prefer ConcurrentHashMap;
if you do need iteration to return results sorted by key, prefer ConcurrentSkipListMap.
Of course, the Concurrent*Map classes return weakly consistent iterators; if that matters to you, you will have to fall back on the synchronized*Map wrappers (and synchronize on the wrapper manually while iterating).
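A small sketch of that difference (class and variable names are illustrative): iterating a ConcurrentHashMap while another thread mutates it never throws ConcurrentModificationException, though the iterator may or may not reflect the concurrent changes; iterating a synchronizedMap must be guarded by synchronizing on the wrapper, or it can throw ConcurrentModificationException.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentIteratorDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> chm = new ConcurrentHashMap<>();
        Map<Integer, Integer> syncMap = Collections.synchronizedMap(new HashMap<>());
        for (int i = 0; i < 1000; i++) {
            chm.put(i, i);
            syncMap.put(i, i);
        }

        // Mutate both maps concurrently while the main thread iterates.
        Thread writer = new Thread(() -> {
            for (int i = 1000; i < 2000; i++) {
                chm.put(i, i);
                syncMap.put(i, i);
            }
        });
        writer.start();

        // Weakly consistent: never throws, may or may not see the new entries.
        int seen = 0;
        for (Integer k : chm.keySet()) {
            seen++;
        }
        System.out.println("ConcurrentHashMap iteration saw " + seen + " keys");

        // synchronizedMap: iteration must be manually synchronized on the wrapper,
        // otherwise it can throw ConcurrentModificationException.
        synchronized (syncMap) {
            int count = 0;
            for (Integer k : syncMap.keySet()) {
                count++;
            }
            System.out.println("synchronizedMap guarded iteration saw " + count + " keys");
        }

        writer.join();
    }
}
```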
===========================================================================================
Test program
```java
package learning.multithread.collection;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

/**
 * Set -Xms1024M to avoid JVM heap size increase during the test
 */
public class ConcurrentMapTest {
    private static final int THREAD_NUM = 8;
    private static final int MAP_SIZE = 1000000;

    public static void main(String[] args) throws InterruptedException {
        List<Integer> list = new ArrayList<Integer>(MAP_SIZE);
        for (int i = 0; i < MAP_SIZE; i++) {
            list.add(Integer.valueOf(i));
        }
        Collections.shuffle(list);
        singleThreadMapTest(new HashMap<Integer, Integer>(), list);
        //singleThreadMapTest(new TreeMap<Integer, Integer>(), list);
        //concurrentMapTest(Collections.synchronizedMap(new HashMap<Integer, Integer>()), list);
        //concurrentMapTest(new ConcurrentHashMap<Integer, Integer>(), list);
        //concurrentMapTest(Collections.synchronizedSortedMap(new TreeMap<Integer, Integer>()), list);
        //concurrentMapTest(new ConcurrentSkipListMap<Integer, Integer>(), list);
    }

    private static void singleThreadMapTest(Map<Integer, Integer> map, List<Integer> l) {
        //first run to load all to memories and set map initial size
        mapReadWrite(map, l);
        mapReadWrite(map, l);
        map.clear();
        System.out.println("Now start test......");
        mapReadWrite(map, l.subList(0, (MAP_SIZE/4)));
        map.clear();
        mapReadWrite(map, l.subList(0, (MAP_SIZE/4)*2));
        map.clear();
        mapReadWrite(map, l.subList(0, (MAP_SIZE/4)*3));
        map.clear();
        mapReadWrite(map, l);
    }

    private static void mapReadWrite(Map<Integer, Integer> map, List<Integer> l) {
        MapWriter mwHash = new MapWriter(map, l);
        mwHash.run();
        MapReader mrHash = new MapReader(map, l.subList(0, (MAP_SIZE/10)));
        mrHash.run();
        System.out.println("Map Size = " + map.size());
    }

    private static void concurrentMapTest(Map<Integer, Integer> map, List<Integer> l) throws InterruptedException {
        //first run to load all to memories and set map initial size
        concurrentReadWrite(map, l);
        map.clear();
        concurrentReadWrite(map, l);
        map.clear();
        System.out.println("Now start test......");
        concurrentReadWrite(map, l);
        System.out.println("Map Size = " + map.size());
    }

    private static void concurrentReadWrite(Map<Integer, Integer> map, List<Integer> l)
            throws InterruptedException {
        Thread[] writerThreads = new Thread[THREAD_NUM];
        Thread[] readerThreads = new Thread[THREAD_NUM];
        int mapSize = MAP_SIZE/THREAD_NUM;
        for (int i = 0; i < THREAD_NUM; ++i) {
            writerThreads[i] = new Thread(new MapWriter(map, l.subList(mapSize*i, mapSize*(i+1))));
        }
        for (int i = 0; i < THREAD_NUM; ++i) {
            writerThreads[i].start();
        }
        for (Thread t : writerThreads) {
            t.join();
        }
        for (int i = 0; i < THREAD_NUM; ++i) {
            readerThreads[i] = new Thread(new MapReader(map, l.subList(mapSize*i, mapSize*i+MAP_SIZE/10)));
        }
        for (int i = 0; i < THREAD_NUM; ++i) {
            readerThreads[i].start();
        }
        for (Thread t : readerThreads) {
            t.join();
        }
    }

    private static class MapWriter implements Runnable {
        public MapWriter(Map<Integer, Integer> map, List<Integer> l) {
            this.map = map;
            this.l = l;
        }

        public void run() {
            long begin = System.currentTimeMillis();
            for (Integer i : l) {
                map.put(i, i);
            }
            long end = System.currentTimeMillis();
            System.out.println("Write time:" + (end - begin));
        }

        private final Map<Integer, Integer> map;
        private final List<Integer> l;
    }

    private static class MapReader implements Runnable {
        public MapReader(Map<Integer, Integer> map, List<Integer> l) {
            this.map = map;
            this.l = l;
        }

        public void run() {
            long begin = System.currentTimeMillis();
            for (Integer i : l) {
                map.get(i);
            }
            long end = System.currentTimeMillis();
            System.out.println("Read time:" + (end - begin));
        }

        private final Map<Integer, Integer> map;
        private final List<Integer> l;
    }
}
```