Web Application Performance Design Inspection Questions - Concurrency


- J.D. Meier, Srinath Vasireddy, Ashish Babbar, Rico Mariani, and Alex Mackman


Concurrency Issues

The following list summarizes common concurrency issues and their implications:

  • Blocking calls. Stalls the application, and reduces response time and throughput.
  • Nongranular locks. Stalls the application, and leads to queued requests and timeouts.
  • Misusing threads. Adds processor and memory overhead due to context switching and thread management.
  • Holding onto locks longer than necessary. Causes increased contention and reduced concurrency.
  • Inappropriate isolation levels. Results in contention, long wait times, timeouts, and deadlocks.

To assess concurrency issues, review the following questions:

  • Do you need to execute tasks concurrently?
  • Do you create threads on a per-request basis?
  • Do you design thread safe types by default?
  • Do you use fine-grained locks?
  • Do you acquire late and release early?
  • Do you use the appropriate synchronization primitive?
  • Do you use an appropriate transaction isolation level?
  • Does your design consider asynchronous execution?


Do You Need to Execute Tasks Concurrently?

Concurrent execution tends to be most suitable for tasks that are independent of each other. You do not benefit from an asynchronous implementation if the work is CPU-bound rather than I/O-bound, especially on single-processor servers. For CPU-bound work, an asynchronous implementation simply increases utilization and thread switching on an already busy processor, which is likely to hurt performance and throughput.

Consider using asynchronous invocation when the client can execute parallel tasks that are I/O-bound as part of the unit of work. For example, you can use an asynchronous call to a Web service to free up the executing thread to do some parallel work before blocking on the Web service call and waiting for the results.
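As a rough illustration, the following minimal sketch uses the .NET Framework's Begin/End pattern on a delegate to stand in for an asynchronous Web service proxy call; the FetchReport method and its Thread.Sleep delay are placeholder assumptions for real I/O.

    using System;
    using System.Threading;

    class AsyncCallSketch
    {
        // Placeholder for an I/O-bound call such as a Web service request.
        static string FetchReport(int id)
        {
            Thread.Sleep(2000);            // stands in for network latency
            return "report " + id;
        }

        delegate string FetchReportDelegate(int id);

        static void Main()
        {
            FetchReportDelegate fetch = FetchReport;

            // Start the I/O-bound call asynchronously (Begin/End pattern).
            IAsyncResult ar = fetch.BeginInvoke(42, null, null);

            // Do independent, parallel work on the calling thread while the call is in flight.
            Console.WriteLine("Doing local work while the remote call runs...");

            // Block only when the result is actually needed.
            Console.WriteLine(fetch.EndInvoke(ar));
        }
    }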


Do You Create Threads on a Per-Request Basis?

Review your design and ensure that you use the thread pool. Using the thread pool increases the probability that the processor finds a thread in a ready-to-run state, which results in increased parallelism among the threads.

Threads are shared resources and are expensive to initialize and manage. If you create threads on a per-request basis in a server-side application, this affects scalability by increasing the likelihood of thread starvation, and it affects performance due to the increased overhead of thread creation, processor context switching, and garbage collection.
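For example, here is a minimal sketch of dispatching per-request work to the shared thread pool instead of creating a new Thread for each request; the HandleRequest work item is a hypothetical placeholder.

    using System;
    using System.Threading;

    class ThreadPoolSketch
    {
        // Hypothetical per-request work item.
        static void HandleRequest(object state)
        {
            Console.WriteLine("Request {0} handled on pooled thread {1}",
                state, Thread.CurrentThread.ManagedThreadId);
        }

        static void Main()
        {
            for (int i = 0; i < 10; i++)
            {
                // Reuse pooled threads rather than "new Thread(...)" per request.
                ThreadPool.QueueUserWorkItem(HandleRequest, i);
            }
            Thread.Sleep(1000);   // crude wait so the demo completes before the process exits
        }
    }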


Do You Design Thread Safe Types by Default?

Avoid making types thread-safe by default. Thread safety adds an additional layer of complexity and overhead to your types, which is often unnecessary if synchronization issues are handled by a higher-level layer of software.
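As a minimal sketch of this guideline (the ScoreBoard and ScoreService names are hypothetical), the type itself stays unsynchronized and the higher-level layer that understands the threading model applies the lock:

    using System.Collections.Generic;

    // The type itself is simple and not thread-safe...
    class ScoreBoard
    {
        private readonly List<int> _scores = new List<int>();
        public void Add(int score) { _scores.Add(score); }
        public int Count { get { return _scores.Count; } }
    }

    // ...and the higher-level layer decides where synchronization is actually needed.
    class ScoreService
    {
        private readonly ScoreBoard _board = new ScoreBoard();
        private readonly object _boardLock = new object();

        public void Record(int score)
        {
            lock (_boardLock) { _board.Add(score); }
        }
    }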


Do You Use Fine-Grained Locks?

Evaluate the tradeoff between coarse-grained and fine-grained locks. Fine-grained locks ensure atomic execution of a small amount of code. When used properly, they provide greater concurrency by reducing lock contention. When used in the wrong places, fine-grained locks add complexity and can decrease performance and concurrency.
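For illustration, the sketch below (a hypothetical Counters class) uses two fine-grained locks so that threads updating one counter never contend with threads updating the other; a single coarse lock around both would serialize them unnecessarily. For counters this simple, Interlocked.Increment would be an even cheaper alternative.

    class Counters
    {
        private readonly object _readsLock = new object();
        private readonly object _writesLock = new object();
        private long _reads;
        private long _writes;

        public void RecordRead()
        {
            // Fine-grained: guards only the read counter.
            lock (_readsLock) { _reads++; }
        }

        public void RecordWrite()
        {
            // Independent lock, so updating this counter never blocks RecordRead callers.
            lock (_writesLock) { _writes++; }
        }
    }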


Do You Acquire Late and Release Early?

Acquiring shared resources late and releasing them early is the key to reducing contention. You lock a shared resource by locking all the code paths that access the resource. Make sure you minimize the duration that you hold locks on these code paths, because most resources tend to be shared and limited. The sooner you release a lock, the earlier the resource becomes available to other threads.

The correct approach is to determine the optimum granularity of locking for your scenario:

  • Method level synchronization. It is appropriate to synchronize at the method level when all that the method does is act on the resource that needs synchronized access.
  • Synchronizing access to the relevant piece of code. If a method needs to validate parameters and perform other operations beyond accessing a resource that requires serialized access, consider locking only the lines of code that access the resource, as in the sketch after this list. This helps to reduce contention and improve concurrency.
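A minimal sketch of the second option, assuming a hypothetical OrderCache type: validation stays outside the lock, and only the lines that touch the shared dictionary are synchronized.

    using System;
    using System.Collections.Generic;

    class OrderCache
    {
        private readonly object _syncRoot = new object();
        private readonly Dictionary<int, string> _cache = new Dictionary<int, string>();

        public void Add(int id, string order)
        {
            // Parameter validation does not touch shared state, so it stays outside the lock.
            if (order == null)
                throw new ArgumentNullException("order");

            // Acquire late: lock only the code that accesses the shared resource...
            lock (_syncRoot)
            {
                _cache[id] = order;
            }
            // ...and release early, so other threads can get to the cache sooner.
        }
    }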


Do You Use the Appropriate Synchronization Primitive?

Using the appropriate synchronization primitive helps reduce contention for resources. There may be scenarios where you need to signal other waiting threads, either manually or automatically, when an event occurs. Other scenarios vary by the frequency of read and write operations. The following guidelines help you choose the appropriate synchronization primitive for your scenario:

  • Use Mutex for interprocess communication.
  • Use AutoResetEvent and ManualResetEvent for event signaling.
  • Use System.Threading.Interlocked for synchronized increments and decrements on integers and longs.
  • Use ReaderWriterLock for multiple concurrent reads. A write operation is exclusive; all other read and write threads are queued while it runs (see the sketch after this list).
  • Use lock when you want to allow only one reader or writer to act on the object at a time.
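For example, here is a minimal ReaderWriterLock sketch (the SettingsStore type and its dictionary are hypothetical): many readers proceed concurrently, while a writer takes the lock exclusively.

    using System.Collections.Generic;
    using System.Threading;

    class SettingsStore
    {
        private readonly ReaderWriterLock _rwLock = new ReaderWriterLock();
        private readonly Dictionary<string, string> _settings = new Dictionary<string, string>();

        public string Get(string key)
        {
            _rwLock.AcquireReaderLock(Timeout.Infinite);   // many readers can hold this at once
            try
            {
                string value;
                _settings.TryGetValue(key, out value);
                return value;
            }
            finally
            {
                _rwLock.ReleaseReaderLock();
            }
        }

        public void Set(string key, string value)
        {
            _rwLock.AcquireWriterLock(Timeout.Infinite);   // exclusive: readers and writers wait
            try
            {
                _settings[key] = value;
            }
            finally
            {
                _rwLock.ReleaseWriterLock();
            }
        }
    }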


Do You Use an Appropriate Transaction Isolation Level?

When considering units of work (the size of your transactions), think about the isolation level you need, the locking required to provide that isolation level, and, therefore, your risk of deadlocks and deadlock-driven retries. You need to select appropriate isolation levels for your transactions to ensure that data integrity is preserved without unduly affecting application performance.

Selecting an isolation level higher than you need means that you lock objects in the database for longer periods of time and increase contention for those objects. Selecting an isolation level that is too low increases the probability of losing data integrity by causing dirty reads or writes.

If you are unsure of the appropriate isolation level for your database, you should use the default implementation, which is designed to work well in most scenarios.

Note: You can selectively lower the isolation level used in specific queries, rather than changing it for the entire database. For more information, see Chapter 14, "Improving SQL Server Performance."
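As an ADO.NET illustration, the sketch below requests a specific isolation level when starting a transaction; the connection string, the Orders table, and its column names are assumptions made for the example.

    using System.Data;
    using System.Data.SqlClient;

    class IsolationLevelSketch
    {
        static void ShipOrder(string connectionString, int orderId)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Request only the isolation this unit of work needs; a higher level
                // (for example, Serializable) holds locks longer and increases contention.
                SqlTransaction tx = conn.BeginTransaction(IsolationLevel.ReadCommitted);
                try
                {
                    SqlCommand cmd = new SqlCommand(
                        "UPDATE Orders SET Status = 'Shipped' WHERE OrderId = @id", conn, tx);
                    cmd.Parameters.Add("@id", SqlDbType.Int).Value = orderId;
                    cmd.ExecuteNonQuery();
                    tx.Commit();
                }
                catch
                {
                    tx.Rollback();
                    throw;
                }
            }
        }
    }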


Does Your Design Consider Asynchronous Execution?

Asynchronous execution of work allows the main processing thread to offload the work to other threads, so it can continue to do some additional processing before retrieving the results of the asynchronous call, if they are required.

Scenarios that require I/O-bound work, such as file operations and calls to Web services, are potentially long-running and may block on the I/O or worker threads, depending on the implementation logic used for completing the operation. When considering asynchronous execution, evaluate the following questions:


Are you designing a Windows Forms application?

Windows Forms applications executing an I/O call, such as a call to a Web service or a file I/O operation, should generally use asynchronous execution to keep the user interface responsive. The .NET Framework provides support for asynchronous operations in all the classes related to I/O activities, except in ADO.NET.
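A minimal sketch of the idea, assuming a hypothetical form that loads a file off the UI thread through a delegate's BeginInvoke and marshals the result back with Control.BeginInvoke:

    using System;
    using System.IO;
    using System.Windows.Forms;

    public class ReportForm : Form
    {
        private readonly Button _load = new Button();
        private readonly TextBox _output = new TextBox();
        private delegate string LoadDelegate(string path);

        public ReportForm()
        {
            _load.Text = "Load";
            _load.Dock = DockStyle.Top;
            _output.Multiline = true;
            _output.Dock = DockStyle.Fill;
            Controls.Add(_output);
            Controls.Add(_load);
            _load.Click += delegate
            {
                LoadDelegate loader = LoadReport;
                // The I/O runs on a thread-pool thread; the UI thread returns immediately.
                loader.BeginInvoke("report.txt", OnLoadCompleted, loader);
            };
        }

        private static string LoadReport(string path)
        {
            return File.Exists(path) ? File.ReadAllText(path) : "(no report found)";
        }

        private void OnLoadCompleted(IAsyncResult ar)
        {
            string text = ((LoadDelegate)ar.AsyncState).EndInvoke(ar);
            // Marshal back to the UI thread before touching any control.
            BeginInvoke((MethodInvoker)delegate { _output.Text = text; });
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new ReportForm());
        }
    }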


Are you designing a server application?

Server applications should use asynchronous execution whenever the work is I/O-bound, such as calling Web services, provided that the application is able to perform some useful work once the executing worker thread is freed.

You can free up the worker thread completely by submitting work and polling for results from the client at regular intervals. For more information about how to do this, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide.
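A very rough sketch of the submit-and-poll idea follows; the JobRegistry type, its ticket scheme, and the Hashtable result store are assumptions for illustration, not the design from the referenced How To.

    using System;
    using System.Collections;
    using System.Threading;

    class JobRegistry
    {
        // Synchronized store of completed results, keyed by ticket.
        private static readonly Hashtable Results = Hashtable.Synchronized(new Hashtable());

        // The client submits work and gets a ticket back immediately; the worker thread is freed.
        public static Guid Submit(WaitCallback work, object state)
        {
            Guid ticket = Guid.NewGuid();
            ThreadPool.QueueUserWorkItem(delegate(object s)
            {
                work(s);                    // long-running task runs on a pooled thread
                Results[ticket] = true;     // record completion for later polling
            }, state);
            return ticket;
        }

        // The client polls at regular intervals with the ticket it was given.
        public static bool IsComplete(Guid ticket)
        {
            return Results.Contains(ticket);
        }
    }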

Other approaches free up the worker thread partially, letting it do some useful work before blocking for the results. These approaches use synchronization types derived from WaitHandle, such as Mutex.
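For instance, a minimal sketch assuming the Begin/End pattern on a delegate: the thread does some work first and then blocks on the call's wait handle.

    using System;
    using System.Threading;

    class PartialBlockingSketch
    {
        private delegate int WorkDelegate();

        // Placeholder for an I/O-bound operation such as a Web service call.
        private static int CallRemoteService()
        {
            Thread.Sleep(1500);
            return 42;
        }

        static void Main()
        {
            WorkDelegate call = CallRemoteService;
            IAsyncResult ar = call.BeginInvoke(null, null);

            // Do some useful work on this thread first...
            Console.WriteLine("Doing partial work before blocking...");

            // ...then block on the wait handle until the remote call completes.
            ar.AsyncWaitHandle.WaitOne();

            Console.WriteLine("Result: {0}", call.EndInvoke(ar));
        }
    }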

For server applications, you should not call the database asynchronously, because ADO.NET does not support such operations; simulating them requires delegates that run on worker threads to complete the processing. You might as well block on the original thread rather than consume another worker thread to complete the operation.


Do you use the asynchronous design pattern?

The .NET Framework provides a design pattern for asynchronous communication. The advantage is that it is the caller that decides whether a particular call should be asynchronous. It is not necessary for the callee to expose plumbing for asynchronous invocation. Other advantages include type safety.

For more information, see "Asynchronous Design Pattern Overview" in the .NET Framework Developer's Guide on MSDN at: http://msdn.microsoft.com/library/en-us/cpguide/html/cpconasynchronousdesignpatternoverview.asp.


More Information

For more information about the questions and issues raised in this section, see Web Application Performance Design Guidelines - Concurrency.
