
Java review-basic4



1. HashMap vs Hashtable vs ConcurrentHashMap

1). Thread-Safe: ConcurrentHashMap is thread-safe, meaning it can be safely accessed by many threads at once, while HashMap is not thread-safe.

2). Synchronization Method: A HashMap can be synchronized by using the Collections.synchronizedMap(hashMap) method, which returns a map object equivalent to a Hashtable: every modification then locks the whole map object. ConcurrentHashMap instead synchronizes, or locks, only a certain portion of the map. To optimize performance, the map is divided into different partitions depending upon the concurrency level, so the whole map object never needs to be synchronized.

3). Null Keys/Values: ConcurrentHashMap allows neither null keys nor null values. In HashMap, by contrast, there can be one null key (and any number of null values).

4). Performance: In a multithreaded environment a plain HashMap is usually faster than ConcurrentHashMap, since ConcurrentHashMap admits only a single thread at a time into any given portion of the map, which costs some performance, while with HashMap any number of threads can access the map at the same time (with no safety guarantees).

My understanding: this question asks about the differences among HashMap, Hashtable, and ConcurrentHashMap on the following points:

1. Thread safety: Hashtable and ConcurrentHashMap are thread-safe; a Hashtable admits only one thread at a time, while HashMap is unsafe.

2. A HashMap can be made thread-safe by wrapping it with Collections.synchronizedMap; a ConcurrentHashMap is divided into partitions according to its concurrency level, so threads touching different partitions need not synchronize with each other, which improves efficiency.

3. ConcurrentHashMap does not allow any nulls; HashMap allows one null key.

4. HashMap is faster in a multithreaded environment because any number of threads can access it at the same time, with no locking and no safety. A short sketch contrasting the options follows.
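A minimal sketch contrasting the three choices (class and variable names here are illustrative, not from the question):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapSafetyDemo {
    public static void main(String[] args) {
        // Plain HashMap: fast, but not safe to mutate from several threads.
        Map<String, Integer> plain = new HashMap<>();
        plain.put(null, 0); // HashMap permits one null key

        // Synchronized wrapper: every call locks the whole map,
        // which is roughly equivalent to a Hashtable.
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());
        wrapped.put("a", 1);

        // ConcurrentHashMap: locks only a portion of the map internally,
        // so different threads can work on different partitions at once.
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();
        concurrent.put("b", 2);
        // concurrent.put(null, 3); // would throw NullPointerException
    }
}
```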

 

2. Synchronous vs asynchronous

[Figure: timelines contrasting synchronous and asynchronous task execution across one or more threads]

Synchronous means "connected" or "dependent" in some way. In other words, two synchronous tasks must be aware of one another, and one must execute in some way that is dependent on the other. In most cases that means one cannot start until the other has completed. Asynchronous means they are totally independent, and neither must consider the other in any way, either in initiation or in execution.

As an aside, I should mention that technically the concept of synchronous vs. asynchronous does not have anything to do with threads. Although it would in general be unusual to find asynchronous tasks running on the same thread, it is possible, and it is common to find two or more tasks executing synchronously on separate threads. No, the concept of synchronous/asynchronous has to do solely with whether a second or subsequent task can be initiated before the other task has completed, or whether it must wait. That is all. What thread (or threads), processes, CPUs, or indeed what hardware the tasks are executed on is not relevant. Indeed, to make this point the graphics have been edited to show it.

 

My understanding: for both multithreaded and single-threaded cases, the figure above explains synchronous vs. asynchronous clearly:

Synchronous means the tasks affect one another, and one must finish before the other can run; asynchronous means they are independent, each executing on its own without affecting the other.
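A small Java sketch of the distinction, using CompletableFuture for the asynchronous case (task names are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {
    public static void main(String[] args) {
        // Synchronous: sync-2 cannot start until sync-1 has completed.
        task("sync-1");
        task("sync-2");

        // Asynchronous: both tasks are initiated immediately and run
        // independently; neither waits for the other.
        CompletableFuture<Void> a = CompletableFuture.runAsync(() -> task("async-1"));
        CompletableFuture<Void> b = CompletableFuture.runAsync(() -> task("async-2"));
        CompletableFuture.allOf(a, b).join(); // only so the JVM doesn't exit early
    }

    static void task(String name) {
        System.out.println(name + " on " + Thread.currentThread().getName());
    }
}
```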

 

3. Thread contention

Essentially thread contention is a condition where one thread is waiting for a lock/object that is currently being held by another thread. Therefore, this waiting thread cannot use that object until the other thread has unlocked that particular object.

One thread waits for a resource that is locked by another thread.
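A minimal sketch of contention (names illustrative): whichever thread reaches the lock second must wait until the holder releases it:

```java
public class ContentionDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) {
        Runnable work = () -> {
            // The second thread to arrive blocks here: that wait is contention.
            synchronized (LOCK) {
                System.out.println(Thread.currentThread().getName() + " holds the lock");
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            }
        };
        new Thread(work, "t1").start();
        new Thread(work, "t2").start();
    }
}
```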

 

4. Race conditions and how to debug them

A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data. In order to prevent race conditions from occurring, you would typically put a lock around the shared data to ensure only one thread can access the data at a time.

Race conditions:

Two threads access the same resource at the same time and both try to modify it; the fix is to put a lock around the shared resource.
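A small sketch of a lost-update race and the fix the text suggests, a lock around the shared data (class and field names are made up):

```java
public class RaceDemo {
    static int unsafeCount = 0; // shared data with no lock around it
    static int safeCount = 0;   // shared data guarded by LOCK
    static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;        // read-modify-write: increments can be lost
                synchronized (LOCK) { // lock admits one thread at a time
                    safeCount++;
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // unsafeCount usually ends up below 200000; safeCount is always 200000.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount);
    }
}
```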

 

5. Deadlocks

A deadlock is when two or more threads are blocked waiting to obtain locks that some of the other threads in the deadlock are holding. Deadlock can occur when multiple threads need the same locks at the same time but obtain them in different order. For instance, if thread 1 locks A and tries to lock B, and thread 2 has already locked B and tries to lock A, a deadlock arises. Thread 1 can never get B, and thread 2 can never get A. In addition, neither of them will ever know: they will remain blocked on their respective objects, A and B, forever. This situation is a deadlock.

My understanding: deadlock is a very common problem. It arises when two or more threads each hold a resource while waiting for a resource held by the other; since neither will release what it holds, both wait forever.
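A minimal sketch reproducing exactly the scenario described (thread 1 locks A then wants B, thread 2 locks B then wants A); running it will normally hang forever:

```java
public class DeadlockDemo {
    private static final Object A = new Object();
    private static final Object B = new Object();

    public static void main(String[] args) {
        // Thread 1 locks A, then tries to lock B.
        new Thread(() -> {
            synchronized (A) {
                sleep(50); // give thread 2 time to grab B
                synchronized (B) { System.out.println("t1 got both"); }
            }
        }, "t1").start();

        // Thread 2 locks B, then tries to lock A -- the opposite order,
        // so each thread waits forever for the lock the other holds.
        new Thread(() -> {
            synchronized (B) {
                sleep(50);
                synchronized (A) { System.out.println("t2 got both"); }
            }
        }, "t2").start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```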

 

6. How to prevent deadlocks

1) Lock Ordering

Deadlock occurs when multiple threads need the same locks but obtain them in different order. If every thread always acquires the locks in the same fixed order, deadlock cannot occur.

2) Lock Timeout

Another deadlock prevention mechanism is to put a timeout on lock attempts, meaning a thread trying to obtain a lock will only try for so long before giving up. If a thread does not succeed in taking all necessary locks within the given timeout, it will back up, free all locks taken, wait for a random amount of time, and then retry. The random wait gives other threads trying to take the same locks a chance to take all of them, and thus lets the application continue running without deadlocking.

3) Deadlock Detection

Deadlock detection means tracking which locks each thread holds and requests, so that a deadlock can be discovered when it happens. Once detected, one option is to have every involved thread release its locks and back up; a better option is to determine or assign priorities so that only one (or a few) of the threads backs up, while the rest continue taking the locks they need as if no deadlock had occurred. If the priorities are fixed, the same threads will always be given higher priority; to avoid this you may assign the priority randomly whenever a deadlock is detected.

My understanding: the ways to handle deadlock are: 1. Impose a lock ordering so all threads take locks in the same sequence. 2. Set a timeout: when it expires, release all locks held and retry (a tryLock-based sketch follows). 3. Detect the deadlock and make one thread, chosen by priority, back off.
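A sketch of the lock-timeout strategy using ReentrantLock.tryLock (method and class names are my own):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockTimeoutDemo {
    // Try to take both locks; on failure, back up, release everything,
    // wait a random amount of time, and retry -- the timeout strategy above.
    static void withBothLocks(ReentrantLock first, ReentrantLock second, Runnable work)
            throws InterruptedException {
        while (true) {
            if (first.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (second.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            work.run(); // both locks held
                            return;
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock();
                }
            }
            // Random back-off so competing threads don't retry in lockstep.
            Thread.sleep(ThreadLocalRandom.current().nextInt(10, 100));
        }
    }
}
```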

 

7. Thread confinement

Thread confinement is the practice of ensuring that data is only accessible from one thread. Such data is called thread-local, as it is local, or specific, to a single thread. Thread-local data is thread-safe: only one thread can get at the data, which eliminates the risk of races. And because races are nonexistent, thread-local data doesn't need locking. Thus thread confinement makes your code safer (by eliminating a huge source of programming error) and more scalable (by eliminating locking). Most languages don't have mechanisms to enforce thread confinement; it is a higher-level programming pattern, not a language or OS feature. Functionality such as thread-local storage (TLS) makes thread confinement easier, but the programmer must still ensure that references to the data do not escape the owning thread.

My understanding: thread confinement means the data is held only by its local thread; there is no race condition and it is thread-safe, so it doesn't need a lock. A ThreadLocal sketch follows.
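A minimal ThreadLocal sketch (names illustrative): each thread works on its own copy of the counter, so no lock is needed:

```java
public class ThreadLocalDemo {
    // Every thread sees an independent copy of this value: the data is
    // confined to one thread, so races are impossible by construction.
    private static final ThreadLocal<Integer> COUNTER = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable work = () -> {
            for (int i = 0; i < 5; i++) {
                COUNTER.set(COUNTER.get() + 1); // no lock, yet no interference
            }
            System.out.println(Thread.currentThread().getName() + " sees " + COUNTER.get());
        };
        new Thread(work, "t1").start(); // prints "t1 sees 5"
        new Thread(work, "t2").start(); // prints "t2 sees 5"
    }
}
```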

 

8. Cache coherence

 When multiple processors with separate caches share a common memory, it is necessary to keep the caches in a state of coherence by ensuring that any shared operand that is changed in any cache is changed throughout the entire system.

This is done in either of two ways: through a directory-based or a snooping system.

In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which the processor must ask permission to load an entry from the primary memory to its cache. When an entry is changed the directory either updates or invalidates the other caches with that entry.

In a snooping system, all caches on the bus monitor (or snoop) the bus to determine if they have a copy of the block of data that is requested on the bus. Every cache keeps a copy of the sharing status of every block of physical memory it holds. Cache misses and memory traffic due to shared data blocks limit the performance of parallel computing in multiprocessor computers or systems. Cache coherence aims to solve the problems associated with sharing data.

My understanding:

Cache coherence means that when several caches sit over the same memory and the copy in one cache changes, the other copies throughout the system must be changed to match.

There are two approaches:

1. Directory-based: a modification to a cached entry goes through a common directory, which updates or invalidates that entry in the other caches.

2. Snooping: every cache watches the bus; when a shared block changes on the bus, the other caches holding that block update or invalidate their copies as well. (The volatile sketch below shows the visibility effect this machinery provides to Java code.)
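Java code cannot touch the coherence protocol directly, but volatile exposes the visibility guarantee that coherence hardware provides. A minimal sketch, assuming a typical HotSpot JIT (without volatile, the reader may spin on a stale copy of the flag forever):

```java
public class VisibilityDemo {
    // volatile forces the writer's update to become visible to the reader;
    // dropping the keyword can leave the reader looping on a stale value.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) { /* spin until the writer's update is visible */ }
            System.out.println("reader observed the update");
        });
        reader.start();
        Thread.sleep(100);
        running = false; // the write propagates to the reader's view
        reader.join();
    }
}
```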

 

9. False sharing

Memory is stored within the cache system in units known as cache lines. A cache line is a power-of-two number of contiguous bytes, typically 32 to 256; the most common cache-line size is 64 bytes. False sharing is the term for threads unwittingly impacting each other's performance while modifying independent variables that share the same cache line. Write contention on cache lines is the single most limiting factor on achieving scalability for parallel threads of execution in an SMP system. I've heard false sharing described as the silent performance killer, because it is far from obvious when looking at code.

To achieve linear scalability with number of threads, we must ensure no two threads write to the same variable or cache line. Two threads writing to the same variable can be tracked down at a code level. To be able to know if independent variables share the same cache line we need to know the memory layout, or we can get a tool to tell us. Intel VTune is such a profiling tool. In this article I’ll explain how memory is laid out for Java objects and how we can pad out our cache lines to avoid false sharing.

On a multiprocessor, multithreaded system, if two threads run on different CPUs and one of them modifies an element in a cache line, cache coherence invalidates the other thread's copy of that whole line. The other thread's next access then takes a cache-line miss even though it never touched the modified element. This is the false-sharing problem.
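A sketch of the padding idiom the quoted article describes: padding pushes two hot counters onto different 64-byte cache lines (class names and iteration counts are illustrative, and the actual field layout is ultimately up to the JVM; HotSpot also offers an @Contended annotation, enabled for application code with -XX:-RestrictContended):

```java
public class FalseSharingDemo {
    static final class PaddedLong {
        volatile long value;
        // Seven trailing longs (56 bytes) keep the next object's hot field
        // off this value's cache line, assuming 64-byte lines.
        long p1, p2, p3, p4, p5, p6, p7;
    }

    static final PaddedLong a = new PaddedLong();
    static final PaddedLong b = new PaddedLong();

    public static void main(String[] args) throws InterruptedException {
        // Each thread hammers its own counter; without padding the two
        // counters could share a line and every write would invalidate
        // the other core's copy, silently slowing both threads.
        Thread t1 = new Thread(() -> { for (long i = 0; i < 50_000_000L; i++) a.value++; });
        Thread t2 = new Thread(() -> { for (long i = 0; i < 50_000_000L; i++) b.value++; });
        long start = System.nanoTime();
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
    }
}
```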



Original post: http://www.cnblogs.com/whaochen205/p/5870369.html
