Internal Structure
final Segment<K,V>[] segments;
class HashEntry<K,V> {
    final K key;
    volatile V value;
    final int hashCode;
    final HashEntry<K,V> next;
}
class Segment<K,V> extends ReentrantLock implements Serializable {
    transient volatile int count; // number of elements in this segment
    transient int modCount;
    transient int threshold; // = loadFactor * table.length
    final float loadFactor;
    transient volatile ConcurrentHashMap.HashEntry<K,V>[] table;
}
keySet() - Returns a Set view backed by the Map: changes in the Map are visible in the Set, and removals from the Set remove the mapping from the Map.
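The "backed by" part is worth seeing concretely: removing a key through the view removes the mapping itself. A minimal demo (the class name KeySetViewDemo is just for illustration):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetViewDemo {
    static int demo() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Set<String> keys = map.keySet(); // a live view, not a snapshot
        keys.remove("a");                // write-through: removes the mapping too
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1
    }
}
```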
- The following operations take the Segment lock: put(), remove(), replace()
- The following operations lock only if a lock-free attempt fails first: get() | containsValue()
- The lock is on the Segment (which extends ReentrantLock), so threads writing to different segments do not block each other.
- concurrencyLevel decides the number of segments the CHM will create.
Constructor - Default
initialCapacity = 16
loadFactor = 0.75
concurrencyLevel = 16
- Decide the size of Segment[] by left-shifting 1 (<<) until it is >= concurrencyLevel, i.e. the smallest power of two at least concurrencyLevel. For a concurrency level of 16 this gives Segment[16]; a level of 17 would give Segment[32].
- if initialCapacity > MAXIMUM_CAPACITY then initialCapacity = MAXIMUM_CAPACITY
- Find the size of the HashEntry[] in each Segment: c = initialCapacity / segmentCount, rounded up.
- Calculate the per-segment capacity by left-shifting 1 (<<) until it is >= c.
- Create all Segment[] elements using the above capacity and loadFactor.
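The sizing steps above can be sketched as plain arithmetic. This mirrors the JDK 6/7 constructor logic, but ChmSizing and its method names are illustrative, not JDK identifiers:

```java
// Sketch of the segmented-CHM sizing arithmetic (not JDK source).
public class ChmSizing {
    // Smallest power of two >= concurrencyLevel.
    static int segmentCount(int concurrencyLevel) {
        int ssize = 1;
        while (ssize < concurrencyLevel) ssize <<= 1;
        return ssize;
    }

    // Per-segment table size: divide, round up, then round to a power of two.
    static int perSegmentCapacity(int initialCapacity, int segmentCount) {
        int c = initialCapacity / segmentCount;
        if (c * segmentCount < initialCapacity) ++c; // round up
        int cap = 1;
        while (cap < c) cap <<= 1;                   // smallest power of two >= c
        return cap;
    }

    public static void main(String[] args) {
        System.out.println(segmentCount(16));           // 16
        System.out.println(segmentCount(17));           // 32
        System.out.println(perSegmentCapacity(64, 16)); // 4
    }
}
```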
public V put( K, V )
- Compute the hash from key.hashCode()
- Find the Segment index using the hash
- Obtain the Segment's lock() [ Segment extends ReentrantLock ]
- rehash() if required ( count + 1 > threshold )
- Traverse the list at table[i] until we find a matching key or reach the end.
- if we reached the end, add the new HashEntry to the front of the list
- else replace the existing value.
- Return the existing value, or null if the mapping was not present before.
- unlock() the Segment in a finally block
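A minimal sketch of this locked put on a single bucket, assuming a Segment-like class that extends ReentrantLock. MiniSegment and Node are made-up names, not JDK source:

```java
import java.util.concurrent.locks.ReentrantLock;

// One bucket stands in for the whole table; the lock/traverse/link/unlock
// shape matches the put() steps above.
public class MiniSegment<K, V> extends ReentrantLock {
    static final class Node<K, V> {
        final K key;
        V value;
        final Node<K, V> next;
        Node(K key, V value, Node<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private Node<K, V> head;
    int count;

    V put(K key, V value) {
        lock(); // Segment extends ReentrantLock
        try {
            for (Node<K, V> e = head; e != null; e = e.next) {
                if (e.key.equals(key)) {     // key found: replace, return old value
                    V old = e.value;
                    e.value = value;
                    return old;
                }
            }
            head = new Node<>(key, value, head); // not found: link at the front
            count++;
            return null;                         // no previous mapping
        } finally {
            unlock();                            // always release in finally
        }
    }

    public static void main(String[] args) {
        MiniSegment<String, Integer> s = new MiniSegment<>();
        System.out.println(s.put("a", 1)); // null
        System.out.println(s.put("a", 2)); // 1
    }
}
```

The real Segment hashes into a table of buckets and rehashes on overflow; this sketch keeps only the locking and list-traversal shape.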
public V get( K )
- Find the Segment index using the hash.
- Check count > 0 // volatile read
- Traverse the list at table[i], comparing hashCode && key.equals( searchKey ).
- If the entry is found but its value reads as null (it may be mid-construction), re-read it under the Segment lock: lock(), then unlock() in finally.
- Return the associated value, or null if the key is not found.
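From the caller's side the contract is simply "value or null". A quick demo against the real API (GetDemo is just an illustrative name):

```java
import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    static String demo() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("present", 42);
        // get() returns the mapped value, or null when there is no mapping.
        return m.get("present") + "," + m.get("missing");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42,null
    }
}
```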
containsValue( V )
- Tries a non-locking search first ( up to 2 retries ); if that yields no consistent answer, locks all segments and searches again with the same algorithm below.
- Iterates each Segment and calls Segment.containsValue( V )
- Iterates each element of table[] and, in turn, its list, checking for the value.
- If the value of an entry reads as null (it could be mid-construction), lock() the segment, re-read the value, and unlock().
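The observable behavior is a plain membership test over values. A quick demo (ContainsValueDemo is just an illustrative name):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ContainsValueDemo {
    static String demo() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        // containsValue() scans every segment, unlike key lookups.
        return m.containsValue(1) + "," + m.containsValue(2);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true,false
    }
}
```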
replace( K, V )
- Find the Segment using the hash of the key.
- lock() the Segment
- Find the HashEntry using the key & its hash.
- If found, replace with the new value and return the old value.
- Return null if the key is not found (no new mapping is created).
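The key point of this contract is that replace() never creates a mapping for an absent key. A quick demo (ReplaceDemo is just an illustrative name):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceDemo {
    static String demo() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        Integer old  = m.replace("a", 2); // existing key: returns old value 1
        Integer none = m.replace("x", 9); // absent key: returns null, nothing inserted
        return old + "," + none + "," + m.containsKey("x") + "," + m.get("a");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1,null,false,2
    }
}
```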
replace( K, V old, V new )
- Replaces only if the current mapping has value == old
- Find the Segment using the hash of the key.
- lock() the Segment
- Find the HashEntry chain in the table array using the key & its hash.
- Traverse the HashEntry list until we find an entry whose key satisfies equals().
- Compare the existing value in the HashEntry with old.
- if equal, set HashEntry.value = new and return true.
- Return false if the key is not found or the current value does not equal old.
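This overload returns a boolean, which makes it a compare-and-set on the mapping. A quick demo (ConditionalReplaceDemo is just an illustrative name):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConditionalReplaceDemo {
    static String demo() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        boolean miss = m.replace("a", 5, 9); // current value is 1, not 5 -> false
        boolean hit  = m.replace("a", 1, 9); // current value is 1 -> replaced, true
        return miss + "," + hit + "," + m.get("a");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // false,true,9
    }
}
```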
iteration() - While iterating, the following state is maintained:
- nextSegmentIndex and nextTableIndex track the current position
- nextEntry holds the next entry to return
- hasNext() -> nextEntry != null
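Because the iterator walks this cursor state rather than checking modCount, CHM iterators are weakly consistent: modifying the map mid-iteration never throws ConcurrentModificationException. A quick demo (IterationDemo is just an illustrative name):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

public class IterationDemo {
    static boolean demo() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        Iterator<String> it = m.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            m.put("c", 3); // concurrent modification: no ConcurrentModificationException
        }
        return m.containsKey("c");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```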
public V putIfAbsent( K, V )
- If value == null, throw a NullPointerException.
- Compute the hash and get the corresponding Segment.
- Call Segment.put() ( which executes lock() ) with onlyIfAbsent = true.
- Return the value returned by put(): the existing value if one was present, else null.
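The return value tells you whether your mapping won: null means it was inserted, non-null means an existing mapping was kept. A quick demo (PutIfAbsentDemo is just an illustrative name):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    static String demo() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        Integer first  = m.putIfAbsent("a", 1); // no mapping -> inserts, returns null
        Integer second = m.putIfAbsent("a", 2); // mapping exists -> keeps 1, returns it
        return first + "," + second + "," + m.get("a");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // null,1,1
    }
}
```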
ConcurrentSkipListMap
- A scalable concurrent ConcurrentNavigableMap implementation.
- A cross between ConcurrentMap & TreeMap
- Keys are sorted in natural order (or by a Comparator supplied at construction).
- Expected log(n) time cost for containsKey, get, put and remove
- It can provide more scalable concurrency, even compared to ConcurrentHashMap.
- Uses CAS on node value and link fields, rather than locking, to achieve higher concurrency.
- This concurrency is paid for with non-constant ( log n ) access times.
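A quick demo of the sorted, navigable behavior (SkipListDemo is just an illustrative name):

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    static String demo() {
        ConcurrentSkipListMap<Integer, String> m = new ConcurrentSkipListMap<>();
        m.put(3, "c");
        m.put(1, "a");
        m.put(2, "b");
        // Keys come back in natural order regardless of insertion order.
        return m.firstKey() + "," + m.lastKey() + "," + m.keySet();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1,3,[1, 2, 3]
    }
}
```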
ConcurrentLinkedQueue