Concurrency Framework Classification
1. Executor-related classes
Interfaces. Executor is a simple standardized interface for defining custom thread-like subsystems, including thread pools, asynchronous I/O, and lightweight task frameworks. Depending on which concrete Executor class is being used, tasks may execute in a newly created thread, an existing task-execution thread, or the thread calling execute, and may execute sequentially or concurrently. ExecutorService provides a more complete asynchronous task execution framework: an ExecutorService manages queuing and scheduling of tasks, and allows controlled shutdown. The ScheduledExecutorService subinterface and associated interfaces add support for delayed and periodic task execution. ExecutorServices provide methods arranging asynchronous execution of any function expressed as a Callable, the result-bearing analog of Runnable. A Future returns the result of a function, allows determination of whether execution has completed, and provides a means to cancel execution. A RunnableFuture is a Future that possesses a run method that, upon execution, sets its results.
Implementations. Classes ThreadPoolExecutor and ScheduledThreadPoolExecutor provide tunable, flexible thread pools. The Executors class provides factory methods for the most common kinds and configurations of Executors, as well as a few utility methods for using them. Other utilities based on Executors include the concrete class FutureTask, which provides a common extensible implementation of Futures, and ExecutorCompletionService, which assists in coordinating the processing of groups of asynchronous tasks.
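As a quick illustration (the class name and pool size below are invented for this sketch), the usual pattern is to obtain an ExecutorService from one of the Executors factory methods, hand it tasks, and shut it down once no more work will be submitted:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ExecutorExample {
    public static void main(String[] args) throws InterruptedException {
        // A fixed-size pool; internally this is a preconfigured ThreadPoolExecutor.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.execute(() ->
                System.out.println("task " + id + " ran on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                             // no new tasks accepted
        pool.awaitTermination(1, TimeUnit.MINUTES);  // wait for submitted tasks to finish
    }
}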
Class ForkJoinPool provides an Executor primarily designed for processing instances of ForkJoinTask and its subclasses. These classes employ a work-stealing scheduler that attains high throughput for tasks conforming to restrictions that often hold in computation-intensive parallel processing.
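For illustration only (SumTask and its split threshold are invented for this sketch), a ForkJoinTask subclass such as RecursiveTask splits its work recursively so that idle pool threads can steal the forked halves:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a range of integers by splitting it in half until the range is small enough.
class SumTask extends RecursiveTask<Long> {
    private final long from, to;
    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {                // small enough: compute directly
            long sum = 0;
            for (long i = from; i < to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid, to);
        left.fork();                              // schedule the left half asynchronously
        return right.compute() + left.join();     // compute the right half, then join the left
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(0, 1_000_000));
        System.out.println(sum);
    }
}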
2. Future-related classes
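The Future-related types introduced above (Callable, Future, FutureTask) are typically used together with an ExecutorService. A minimal sketch (the computation itself is arbitrary):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class FutureExample {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Callable is the result-bearing analog of Runnable.
        Callable<Integer> work = () -> 6 * 7;
        Future<Integer> future = pool.submit(work);
        System.out.println(future.get());   // blocks until the result is available
        pool.shutdown();
    }
}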
3. Queue-related classes
The ConcurrentLinkedQueue class supplies an efficient scalable thread-safe non-blocking FIFO queue. The ConcurrentLinkedDeque class is similar, but additionally supports the Deque interface.
Five implementations in java.util.concurrent support the extended BlockingQueue interface, which defines blocking versions of put and take: LinkedBlockingQueue, ArrayBlockingQueue, SynchronousQueue, PriorityBlockingQueue, and DelayQueue. The different classes cover the most common usage contexts for producer-consumer, messaging, parallel tasking, and related concurrent designs.
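A minimal producer-consumer sketch (LinkedBlockingQueue, the capacity, and the element counts are chosen arbitrarily here): put blocks while the queue is full, and take blocks while it is empty.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10); // bounded capacity

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) queue.put(i);   // blocks when the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) System.out.println(queue.take()); // blocks when empty
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}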
Extended interface TransferQueue, and implementation LinkedTransferQueue, introduce a synchronous transfer method (along with related features) in which a producer may optionally block awaiting its consumer.
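A small sketch of that hand-off, assuming a single consumer thread: transfer does not return until the consumer has received the element.

import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

class TransferExample {
    public static void main(String[] args) throws InterruptedException {
        TransferQueue<String> queue = new LinkedTransferQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                System.out.println("received: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        queue.transfer("hello");   // blocks until the consumer has taken the element
        consumer.join();
    }
}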
The BlockingDeque interface extends BlockingQueue to support both FIFO and LIFO (stack-based) operations. Class LinkedBlockingDeque provides an implementation.
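A brief sketch of the combined FIFO/LIFO usage (the values are arbitrary):

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

class DequeExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingDeque<String> deque = new LinkedBlockingDeque<>(10);
        deque.putLast("a");                      // enqueue at the tail (FIFO use)
        deque.putLast("b");
        deque.putFirst("c");                     // push at the head (LIFO use)
        System.out.println(deque.takeFirst());   // "c"
        System.out.println(deque.takeFirst());   // "a"
        System.out.println(deque.takeLast());    // "b"
    }
}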
4. Atomic-related classes
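The java.util.concurrent.atomic package supplies classes such as AtomicInteger, AtomicLong, and AtomicReference that support lock-free, thread-safe operations on single variables. The example below uses AtomicReferenceFieldUpdater to perform compare-and-set operations directly on the volatile fields of a class, avoiding a separate AtomicReference object per field: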
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

class Node {
    private volatile Node left, right;

    // Updaters allow atomic compareAndSet on the volatile fields above
    // without wrapping each field in its own AtomicReference.
    private static final AtomicReferenceFieldUpdater<Node, Node> leftUpdater =
        AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "left");
    private static final AtomicReferenceFieldUpdater<Node, Node> rightUpdater =
        AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "right");

    Node getLeft() { return left; }

    boolean compareAndSetLeft(Node expect, Node update) {
        return leftUpdater.compareAndSet(this, expect, update);
    }
    // ... and so on
}
5. Lock-related classes
5.1 Lock
The java.util.concurrent.locks package provides interfaces and classes for locking and waiting for conditions, distinct from built-in synchronization and monitors.
Condition: A Condition factors out the Object monitor methods (wait, notify and notifyAll) into distinct objects to give the effect of having multiple wait-sets per object, by combining them with the use of arbitrary Lock implementations.
As an example, suppose we have a bounded buffer which supports put and take methods. If a take is attempted on an empty buffer, then the thread will block until an item becomes available; if a put is attempted on a full buffer, then the thread will block until a space becomes available. We would like to keep waiting put threads and take threads in separate wait-sets so that we can use the optimization of only notifying a single thread at a time when items or spaces become available in the buffer. This can be achieved using two Condition instances.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull = lock.newCondition();   // waited on by producers
    final Condition notEmpty = lock.newCondition();  // waited on by consumers

    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();                     // buffer full: wait for space
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();                       // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();                    // buffer empty: wait for an item
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();                        // wake one waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}
(The ArrayBlockingQueue class provides this functionality, so there is no reason to implement this sample usage class.)
Lock: Lock implementations provide more extensive locking operations than can be obtained using synchronized methods and statements.
Lock l = ...;
l.lock();
try {
    // access the resource protected by this lock
} finally {
    l.unlock();
}
ReadWriteLock: A ReadWriteLock maintains a pair of associated locks, one for read-only operations and one for writing.
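A minimal sketch using the ReentrantReadWriteLock implementation (the CachedValue class is hypothetical): any number of threads may hold the read lock at once, while the write lock is exclusive.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CachedValue {
    private final ReadWriteLock rwl = new ReentrantReadWriteLock();
    private int value;

    int get() {
        rwl.readLock().lock();     // many readers may hold this simultaneously
        try {
            return value;
        } finally {
            rwl.readLock().unlock();
        }
    }

    void set(int newValue) {
        rwl.writeLock().lock();    // exclusive: blocks readers and other writers
        try {
            value = newValue;
        } finally {
            rwl.writeLock().unlock();
        }
    }
}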
6. Timing
The TimeUnit class provides multiple granularities (including nanoseconds) for specifying and controlling time-out based operations. Most classes in the package contain operations based on time-outs in addition to indefinite waits. In all cases that time-outs are used, the time-out specifies the minimum time that the method should wait before indicating that it timed out. Implementations make a "best effort" to detect time-outs as soon as possible after they occur. However, an indefinite amount of time may elapse between a time-out being detected and a thread actually executing again after that time-out. All methods that accept timeout parameters treat values less than or equal to zero to mean not to wait at all. To wait "forever", you can use a value of Long.MAX_VALUE.
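A small sketch of a timed operation using TimeUnit (the queue and timeout are arbitrary): poll returns null if nothing arrives within the timeout.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class TimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        // Wait at most 500 milliseconds for an element; null means the poll timed out.
        String item = queue.poll(500, TimeUnit.MILLISECONDS);
        System.out.println(item == null ? "timed out" : item);
    }
}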
7. Synchronizers
Five classes aid common special-purpose synchronization idioms.
- Semaphore is a classic concurrency tool.
- CountDownLatch is a very simple yet very common utility for blocking until a given number of signals, events, or conditions hold (a sketch follows this list).
- A CyclicBarrier is a resettable multiway synchronization point useful in some styles of parallel programming.
- A Phaser provides a more flexible form of barrier that may be used to control phased computation among multiple threads.
- An Exchanger allows two threads to exchange objects at a rendezvous point, and is useful in several pipeline designs.
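The CountDownLatch idiom mentioned above, as a minimal sketch (the worker count is arbitrary): the main thread blocks in await until every worker has called countDown.

import java.util.concurrent.CountDownLatch;

class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                done.countDown();          // signal completion
            }).start();
        }

        done.await();                      // block until all workers have counted down
        System.out.println("all workers done");
    }
}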
8. Concurrent Collections
Besides Queues, this package supplies Collection implementations designed for use in multithreaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet. When many threads are expected to access a given collection, a ConcurrentHashMap is normally preferable to a synchronized HashMap, and a ConcurrentSkipListMap is normally preferable to a synchronized TreeMap. A CopyOnWriteArrayList is preferable to a synchronized ArrayList when the expected number of reads and traversals greatly outnumbers the number of updates to a list.
The "Concurrent" prefix used with some classes in this package is a shorthand indicating several differences from similar "synchronized" classes. For example, java.util.Hashtable and Collections.synchronizedMap(new HashMap()) are synchronized, but ConcurrentHashMap is "concurrent". A concurrent collection is thread-safe, but not governed by a single exclusion lock. In the particular case of ConcurrentHashMap, it safely permits any number of concurrent reads as well as a tunable number of concurrent writes. "Synchronized" classes can be useful when you need to prevent all access to a collection via a single lock, at the expense of poorer scalability. In other cases in which multiple threads are expected to access a common collection, "concurrent" versions are normally preferable. And unsynchronized collections are preferable when either collections are unshared, or are accessible only when holding other locks.
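A small sketch of the difference in practice (the word list is made up): several threads may update a ConcurrentHashMap at the same time without an external lock, and merge makes each individual update atomic.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class WordCount {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
        List<String> words = Arrays.asList("a", "b", "a", "c", "a", "b");
        // parallelStream() may update the map from several threads at once;
        // merge() performs each individual update atomically.
        words.parallelStream().forEach(w -> counts.merge(w, 1, Integer::sum));
        System.out.println(counts);   // {a=3, b=2, c=1} in some order
    }
}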
Most concurrent Collection implementations (including most Queues) also differ from the usual java.util conventions in that their Iterators and Spliterators provide weakly consistent rather than fast-fail traversal:
- they may proceed concurrently with other operations
- they will never throw ConcurrentModificationException
- they are guaranteed to traverse elements as they existed upon construction exactly once, and may (but are not guaranteed to) reflect any modifications subsequent to construction.
9. Memory Consistency Properties
Chapter 17 of the Java Language Specification defines the happens-before relation on memory operations such as reads and writes of shared variables. The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation. The synchronized and volatile constructs, as well as the Thread.start() and Thread.join() methods, can form happens-before relationships. In particular:
- Each action in a thread happens-before every action in that thread that comes later in the program's order.
- An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
- A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have similar memory consistency effects as entering and exiting monitors, but do not entail mutual exclusion locking (a small sketch follows this list).
- A call to start on a thread happens-before any action in the started thread.
- All actions in a thread happen-before any other thread successfully returns from a join on that thread.
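A sketch of the volatile rule above (the field names are invented): if the reader observes ready == true, it is also guaranteed to observe the earlier ordinary write to data.

class VolatileExample {
    private int data;                 // ordinary (non-volatile) field
    private volatile boolean ready;   // volatile flag

    void writer() {
        data = 42;       // happens-before the volatile write below
        ready = true;    // volatile write
    }

    void reader() {
        if (ready) {                       // volatile read; if it sees true...
            System.out.println(data);      // ...this is guaranteed to print 42
        }
    }
}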
The methods of all classes in java.util.concurrent and its subpackages extend these guarantees to higher-level synchronization. In particular:
- Actions in a thread prior to placing an object into any concurrent collection happen-before actions subsequent to the access or removal of that element from the collection in another thread.
- Actions in a thread prior to the submission of a Runnable to an Executor happen-before its execution begins. Similarly for Callables submitted to an ExecutorService (a sketch follows this list).
- Actions taken by the asynchronous computation represented by a Future happen-before actions subsequent to the retrieval of the result via Future.get() in another thread.
- Actions prior to "releasing" synchronizer methods such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown happen-before actions subsequent to a successful "acquiring" method such as Lock.lock, Semaphore.acquire, Condition.await, and CountDownLatch.await on the same synchronizer object in another thread.
- For each pair of threads that successfully exchange objects via an Exchanger, actions prior to the exchange() in each thread happen-before those subsequent to the corresponding exchange() in another thread.
- Actions prior to calling CyclicBarrier.await and Phaser.awaitAdvance (as well as its variants) happen-before actions performed by the barrier action, and actions performed by the barrier action happen-before actions subsequent to a successful return from the corresponding await in other threads.
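A sketch of the Runnable-submission rule (the names are illustrative): the ordinary write performed before execute is visible inside the task without any further synchronization.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SubmissionVisibility {
    static int config;   // ordinary, non-volatile field

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        config = 7;                                        // happens-before the task's execution
        pool.execute(() -> System.out.println(config));    // guaranteed to print 7
        pool.shutdown();
    }
}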
References:
[1] https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/package-summary.html