Performance Design Principles - Resource Management
- J.D. Meier, Srinath Vasireddy, Ashish Babbar, Rico Mariani, and Alex Mackman
Treat Threads As a Shared Resource
Avoid creating threads on a per-request basis. If threads are created indiscriminately, particularly for high-volume server applications, performance suffers because thread creation consumes resources (particularly on single-CPU servers) and introduces thread-switching overhead for the processor. A better approach is to use a shared pool of threads, such as the process thread pool. When using a shared pool, make sure you optimize the way that you use the threads:
- Optimize the number of threads in the shared pool. For example, specific thread pool tuning is required for a high-volume Web application making outbound calls to one or more Web services. For more information about tuning the thread pool in this situation, see Chapter 10, "Improving Web Services Performance".
- Minimize the length of jobs that are running on shared threads.
An efficient thread pool implementation offers a number of benefits and allows the optimization of system resources. For example, the .NET thread pool implementation dynamically tunes the number of threads in the pool based on current CPU utilization levels. This helps to ensure that the CPU is not overloaded. The thread pool also enforces a limit on the number of threads it allows to be active in a process simultaneously, based on the number of CPUs and other factors.
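The guidance above is platform-general. As an illustrative sketch (in Java rather than .NET, using `ExecutorService` as an analogue of the process thread pool), the following shows the recommended pattern: a shared pool sized to the machine instead of one thread per request, with short jobs submitted to it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A shared pool sized to the available CPUs, instead of creating
        // one thread per request.
        int poolSize = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            // Keep jobs running on shared threads short.
            pool.submit(() -> { completed.incrementAndGet(); });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(completed.get());  // 100
    }
}
```

Note that the pool is created once and reused for all requests; creating a new pool per request would reintroduce the per-request creation cost the pattern is meant to avoid.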
Pool Shared or Scarce Resources
Pool shared resources that are scarce or expensive to create, such as database or network connections. Use pooling to help reduce performance overhead and improve scalability by sharing a limited number of resources among a much higher number of clients. Common pools include the following:
- Thread pool. Use process-wide thread pools instead of creating threads on a per-request basis.
- Connection pool. To ensure that you use connection pooling most efficiently, use the trusted subsystem model to access downstream systems and databases. With this model, you use a single fixed identity to connect to downstream systems. This allows the connection to be efficiently pooled.
- Object pool. Objects that are expensive to initialize are ideal candidates for pooling. For example, you could use an object pool to retain a limited set of mainframe connections that take a long time to establish. Pooled objects can be shared by multiple clients as long as no client-specific state is maintained. You should also avoid any affinity to a particular resource; creating an affinity to a particular object defeats the purpose of pooling. Any object in the pool should be able to service any request and should not be blocked for one particular request.
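A minimal object pool can be sketched as follows (a Java illustration; the class and names here are hypothetical, not part of any library the document references). Any pooled object can serve any request, which is the no-affinity property described above.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal bounded object pool: objects hold no client-specific state,
// so any object can service any request (no affinity).
class SimplePool<T> {
    private final BlockingQueue<T> available;

    SimplePool(List<T> objects) {
        // Fair queue: clients acquire objects in FIFO order.
        this.available = new ArrayBlockingQueue<>(objects.size(), true, objects);
    }

    // Blocks until an object is free, bounding total resource usage.
    T acquire() throws InterruptedException { return available.take(); }

    // Return the object so the next client can reuse it.
    void release(T obj) { available.offer(obj); }
}

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Pretend these stand in for expensive-to-establish connections.
        SimplePool<String> pool = new SimplePool<>(List.of("conn-1", "conn-2"));

        String c = pool.acquire();
        System.out.println(c);               // conn-1
        pool.release(c);                     // back in the pool
        System.out.println(pool.acquire());  // conn-2
    }
}
```

The bounded queue is what enforces the "limited number of resources shared among many clients" property: when all objects are checked out, additional clients wait rather than creating new resources.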
For more information, see the following resources:
- For more information about the trusted subsystem model, see Chapter 14, "Building Secure Data Access," in Improving Web Application Security: Threats and Countermeasures on MSDN, at http://msdn.microsoft.com/library/en-us/dnnetsec/html/ThreatCounter.asp.
- For more information about COM+ object pooling, see "Object Pooling" in Chapter 8, "Improving Enterprise Services Performance," at http://msdn.microsoft.com/library/en-us/dnpag/html/ScaleNetChapt08.asp.
Acquire Late, Release Early
Acquire resources as late as possible, immediately before you need to use them, and release them immediately after you are finished with them. Use language constructs, such as finally blocks, to ensure that resources are released even in the event of an exception.
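The document names finally blocks as the construct for this; a short Java sketch of the acquire-late, release-early pattern (the writer here is just a stand-in for a scarce resource such as a file or connection):

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

public class AcquireLateDemo {
    public static void main(String[] args) throws IOException {
        // Acquire the resource immediately before it is needed...
        Writer w = new StringWriter();
        try {
            w.write("work done");
        } finally {
            // ...and release it immediately afterward, even if the
            // write throws an exception.
            w.close();
        }
        System.out.println("released");
    }
}
```

Languages with deterministic cleanup constructs (try-with-resources in Java, using in C#) express the same guarantee more concisely; the finally block is the underlying mechanism.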
Consider Efficient Object Creation and Destruction
Object creation should generally be deferred to the actual point of usage. This ensures that the objects do not consume system resources while waiting to be used. Release objects immediately after you are finished with them.
If objects require explicit cleanup code and need to release handles to system resources, such as files or network connections, make sure that you perform the cleanup explicitly to avoid any memory leaks and waste of resources.
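Deferring creation to the point of use can be made explicit with a lazy-initialization wrapper. The `Lazy` class below is a hypothetical sketch for illustration (not an API from the document): the expensive object is not constructed until the first call that actually needs it.

```java
import java.util.function.Supplier;

// Defer creation of an expensive object until its first actual use,
// so it consumes no resources while waiting to be used.
class Lazy<T> {
    private final Supplier<T> factory;
    private T value;

    Lazy(Supplier<T> factory) { this.factory = factory; }

    synchronized T get() {
        if (value == null) {
            value = factory.get();  // created only on demand
        }
        return value;
    }
}

public class LazyDemo {
    static int creations = 0;

    public static void main(String[] args) {
        Lazy<String> report = new Lazy<>(() -> {
            creations++;            // count expensive constructions
            return "expensive object";
        });

        System.out.println(creations);  // 0 - nothing created yet
        report.get();
        report.get();
        System.out.println(creations);  // 1 - created once, at first use
    }
}
```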
For more information about garbage collection, see Chapter 5, "Improving Managed Code Performance," at http://msdn.microsoft.com/library/en-us/dnpag/html/scalenetchapt05.asp.
Consider Resource Throttling
You can use resource throttling to prevent any single task from consuming a disproportionate percentage of resources from the total allocated for the application. Resource throttling prevents an application from overshooting its allocated budget of computer resources, including CPU, memory, disk I/O, and network I/O.
A server application attempting to consume large amounts of resources can result in increased contention. This causes increased response times and decreased throughput. Common examples of inefficient designs that cause this degradation include the following:
- A user query that returns a large result set from a database. This can increase resource consumption at the database, on the network, and on the Web server.
- An update that locks a large number of rows across frequently accessed tables. This causes significant increases in contention.
To help address these and similar issues, consider the following options for resource throttling:
- Paging through large result sets.
- Setting timeouts on long-running operations such that no single request continues to block on a shared resource beyond a permissible time limit.
- Setting the process and thread priorities appropriately. Avoid assigning priorities higher than normal unless the process or the thread is very critical and demands real-time attention from the processor.
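The timeout option above can be sketched in Java with a bounded `Future.get`: if a long-running job exceeds the permissible limit, it is cancelled so that it stops blocking a shared resource. (The 60-second job and 100 ms limit are illustrative values, not recommendations.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // A long-running job that would otherwise hold a shared
        // thread far beyond any reasonable limit.
        Future<String> job = pool.submit(() -> {
            Thread.sleep(60_000);
            return "done";
        });

        try {
            // Wait only up to the permissible time limit.
            job.get(100, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            job.cancel(true);  // interrupt the job so the resource is freed
            System.out.println("timed out");
        }

        pool.shutdownNow();
    }
}
```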
If a single request, or the application as a whole, legitimately needs to consume large amounts of resources, consider splitting the work across multiple servers or offloading it to off-peak hours, when resource utilization is generally low.