- Locate and review performance and scalability issues in managed code.
- Review ASP.NET code.
- Review the efficiency of interop code.
- Review serviced component code.
- Review Web services code.
- Review XML code.
- Review .NET remoting code.
- Review data access code.
- Select tools to help with the code review.
Overview
Code reviews should be a regular part of your development process. Performance and scalability code reviews focus on identifying coding techniques and design choices that could lead to performance and scalability issues. The review goal is to identify potential performance and scalability issues before the code is deployed. The cost and effort of fixing performance and scalability flaws at development time is far less than fixing them later in the product deployment cycle.

Avoid performance code reviews too early in the coding phase, because this can restrict your design options. Also, bear in mind that performance decisions often involve tradeoffs. For example, it is easy to reduce maintainability and flexibility while striving to optimize code.
This chapter begins by highlighting the most significant issues that time and again result in inefficient code and suboptimal performance. The chapter then presents the review questions you need to ask while reviewing managed code. These questions apply regardless of the type of managed application you are building. Subsequent sections focus on questions specific to ASP.NET, interoperability with unmanaged code, Enterprise Services, Web services, .NET remoting, and data access. The chapter concludes by identifying a set of tools that you can use to help perform your code reviews.
How to Use This Chapter
This chapter presents the questions that you need to ask to expose potential performance and scalability issues in your managed code. To get the most out of this chapter, do the following:
- Jump to topics or read from beginning to end. The main headings in this chapter help you locate the topics that interest you. Alternatively, you can read the chapter from beginning to end to gain a thorough appreciation of the areas to focus on while performing performance-related code reviews.
- Read Chapter 3, "Design Guidelines for Application Performance." Reading Chapter 3 helps ensure that you do not introduce bottlenecks at design time.
- Know your application architecture. Before you start to review code, make sure you fully understand your application's architecture and design goals. If your application does not adhere to best practices architecture and design principles for performance, it is unlikely to perform or scale satisfactorily, even with detailed code optimization. For more information, see Chapter 3, "Design Guidelines for Application Performance," and Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
- Scope your review. Identify the priority areas in your application where the review should focus. For example, if you have an online transaction processing (OLTP) database, data access is typically the area where the most optimizations are likely. Similarly, if your application contains complex business logic, focus initially on the business layer. While you should focus on high-impact areas, keep in mind the end-to-end flow at the application level.
- Read "Application Performance" chapters. Read the "Application Performance and Scalability" chapters found in Part III of this guide to discover technical solutions to problems raised during your code review.
- Update your coding standards. During successive code reviews, identify key characteristics that appear repeatedly and add those to your development department's coding standards. Over time, this raises developer awareness of the important issues, reduces common performance-related coding mistakes, and encourages best practices during development.
FxCop
A good way to start the review process is to run your compiled assemblies through the FxCop analysis tool. The tool analyzes binary assemblies (not source code) to ensure that they conform to the Microsoft® .NET Framework Design Guidelines, available on MSDN®. The tool comes with a predefined set of rules, although you can customize and extend them. For the list of performance rules that FxCop checks for, see "FxCop Performance Rules" on MSDN at http://msdn.microsoft.com/en-us/library/ms182260(VS.80).aspx.
More Information
For more information, see the following resources:
- To download the FxCop tool, see "About Developing Reusable Libraries" on MSDN at http://msdn.microsoft.com/en-us/library/bb429476(VS.80).aspx.
- For general information about FxCop, see the "FxCop Team Page" on GotDotNet at http://code.msdn.microsoft.com/GotDotNet.aspx.
- To get help and support for the tool, see the GotDotNet message board for discussions about the FxCop tool at http://www.gotdotnet.com/community/messageboard/MessageBoard.aspx?ID=234 [Content link no longer available, original URL:http://www.gotdotnet.com/community/messageboard/MessageBoard.aspx?ID=234].
- For the .NET Framework design guidelines, see "Design Guidelines for Class Library Developers" in the .NET Framework General Reference on MSDN at http://msdn.microsoft.com/en-us/library/czefa0ke(VS.71).aspx.
Common Performance Issues
During your code reviews, pay particular attention to the following areas:
- Frequent code paths. Prioritize your code review process by identifying code paths that are frequently executed and begin your review process in these areas.
- Frequent loops. Even the slightest inefficiency inside a loop is magnified many times over depending on the number of iterations. Specifically watch out for repetitive property access inside your loops, using foreach instead of for, performing expensive operations within your loops, and using recursion. Recursion incurs the overhead of having to repeatedly build new stack frames.
- Resource cleanup
- Exceptions
- String management
- Threading
- Boxing
Resource Cleanup
Failing to clean up resources is a common cause of performance and scalability bottlenecks. Review your code to make sure all resources are closed and released as soon as possible. This is particularly important for shared and limited resources such as connections. Make sure your code calls Dispose (or Close) on disposable resources. Make sure your code uses finally blocks or using statements to ensure resources are closed even in the event of an exception.

Exceptions
Structured exception handling is encouraged because it leads to more robust code that is less complex to maintain than code that uses method return codes to handle error conditions. However, exceptions can be expensive. Make sure you do not use exception handling to control regular application flow; use it only for exceptional conditions. Avoid exception handling inside loops. If it is required, surround the loop with a try/catch block instead. Also identify code that swallows exceptions, or inefficient code that catches, wraps, and rethrows exceptions for no valid reason.
String Management
Excessive string concatenation results in many unnecessary allocations, creating extra work for the garbage collector. Use StringBuilder for complex string manipulations and when you need to concatenate strings multiple times. If you know the number of appends and can concatenate strings in a single statement or operation, prefer the + operator. In ASP.NET applications, consider emitting HTML output by using multiple Response.Write calls instead of using a StringBuilder.

Threading
Server-side code should generally use the common language runtime (CLR) thread pool and should not create threads on a per-request basis. Review your code to ensure that appropriate use is made of the thread pool and that the appropriate synchronization primitives are used. Make sure your code does not lock whole classes or whole methods when locking a few lines of code might be appropriate. Also make sure your code does not terminate or pause threads by using Thread.Abort or Thread.Suspend.

Boxing
Boxing causes a heap allocation and a memory copy operation. Review your code to identify areas where implicit boxing occurs. Pay particular attention to code inside loops where the boxing overhead quickly adds up. Avoid passing value types in method parameters that expect a reference type. Sometimes this is unavoidable. In this case, to reduce the boxing overhead, box your variable once and keep an object reference to the boxed copy as long as needed, and then unbox it when you need a value type again.

Excessive boxing often occurs where you use collections that store System.Object types. Consider using an array or a custom-typed collection class instead.
To identify locations that might have boxing overhead, you can search your assembly's Microsoft intermediate language (MSIL) code for the box and unbox instructions, using the following command line.
Ildasm.exe yourcomponent.dll /text | findstr box
Ildasm.exe yourcomponent.dll /text | findstr unbox

To measure the overhead, use a profiler.
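The box-once guidance above can be sketched as follows. This is an illustrative fragment, not code from the guide; the variable names are invented:

```csharp
using System;
using System.Collections;

class BoxingDemo
{
    static void Main()
    {
        int counter = 42;

        // Implicit boxing: every Add call allocates a new heap object
        // because ArrayList.Add takes a System.Object parameter.
        ArrayList list = new ArrayList();
        for (int i = 0; i < 3; i++)
        {
            list.Add(counter);   // boxed on each iteration
        }

        // Box once, keep the object reference, and unbox only when
        // a value type is required again.
        object boxed = counter;       // single boxing operation
        Console.WriteLine(boxed);     // reuse the boxed copy
        int restored = (int)boxed;    // unbox when needed
        Console.WriteLine(restored);
    }
}
```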
Managed Code and CLR Performance
While the .NET CLR is designed for high performance, the way in which you write managed code can either take advantage of that high performance or hinder it. Use the review questions in this section to analyze your entire managed source code base. The review questions apply regardless of the type of assembly. This section helps you identify coding techniques that produce inefficient managed code, which in turn can lead to performance problems. For more information about the issues raised in this section, see Chapter 5, "Improving Managed Code Performance." This section describes the following:
- Memory management
- Looping and recursion
- String operations
- Exception handling
- Arrays
- Collections
- Locking and synchronization
- Threading
- Asynchronous processing
- Serialization
- Visual Basic considerations
- Reflection and late binding
- Code access security
- Ngen.exe
Memory Management
Use the following review questions to assess how efficiently your code uses memory:
- Do you manage memory efficiently?
- Do you call GC.Collect?
- Do you use finalizers?
- Do you use unmanaged resources across calls?
- Do you use buffers for I/O operations?
Do You Manage Memory Efficiently?
To identify how efficiently your code manages memory, review the following questions:
- Do you call Dispose or Close?
Check that your code calls Dispose or Close on all classes that support these methods. Common disposable resources include the following:
- Database-related classes: Connection, DataReader, and Transaction.
- File-based classes: FileStream and BinaryWriter.
- Stream-based classes: StreamReader, TextReader, TextWriter, and BinaryReader.
- Network-based classes: Socket, UdpClient, and TcpClient.
- Do you have complex object graphs?
Analyze your class and structure design and identify those that contain many references to other objects. These result in complex object graphs at runtime, which can be expensive to allocate and create additional work for the garbage collector. Identify opportunities to simplify these structures. Simpler graphs have superior heap locality and they are easier to maintain.
Another common problem to look out for is referencing short-lived objects from long-lived objects. Doing so increases the likelihood of short-lived objects being promoted from generation 0, which increases the burden on the garbage collector. This often happens when you allocate a new object and then assign it to a class-level object reference.
- Do you set member variables to null before long-running calls?
Identify potentially long-running method calls. Check that you set any class-level member variables that you do not require after the call to null before making the call. This enables those objects to be garbage collected while the call is executing.
Note There is no need to explicitly set local variables to null because the just-in-time (JIT) compiler can statically determine that the variable is no longer referenced.
- Do you cache data using WeakReference objects?
Look at where your code caches objects to see if there is an opportunity to use weak references. Weak references are suitable for medium- to large-sized objects stored in a collection. They are not appropriate for very small objects.
By using weak references, cached objects can be resurrected easily if needed, or they can be released by garbage collection when there is memory pressure.
Using weak references is just one way of implementing caching policy. For more information about caching, see "Caching" in Chapter 3, "Design Guidelines for Application Performance."
- Do you call ReleaseComObject?
If you create and destroy COM objects on a per-request basis under load, consider calling ReleaseComObject. Calling ReleaseComObject releases references to the underlying COM object more quickly than if you rely on finalization. For example, if you call COM components from ASP.NET, consider calling ReleaseComObject. If you call COM components hosted in COM+ from managed code, consider calling ReleaseComObject. If you are calling a serviced component that wraps a COM component, you should implement Dispose in your serviced component, and your Dispose method should call ReleaseComObject. The caller code should call your serviced component's Dispose method.
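The weak-reference caching approach described above can be sketched as follows. The cache shape and the CreateExpensiveObject helper are assumptions for illustration only:

```csharp
using System;
using System.Collections;

class WeakReferenceCache
{
    private readonly Hashtable cache = new Hashtable();

    public object Get(string key)
    {
        WeakReference weakRef = (WeakReference)cache[key];

        // Target is null if the object was never cached or has been
        // collected under memory pressure; recreate and re-cache it.
        object value = (weakRef != null) ? weakRef.Target : null;
        if (value == null)
        {
            value = CreateExpensiveObject(key);      // hypothetical factory
            cache[key] = new WeakReference(value);
        }
        return value;
    }

    private object CreateExpensiveObject(string key)
    {
        return "value for " + key;   // placeholder for real construction work
    }
}
```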
Do You Call GC.Collect?
Check that your code does not call GC.Collect explicitly. The garbage collector is self-tuning. By programmatically forcing a collection with this method, the chances are you hinder rather than improve performance. The garbage collector gains its efficiency by adopting a lazy approach to collection and delaying garbage collection until it is needed.
Do You Use Finalizers?
Finalization has an impact on performance. Objects that need finalization must necessarily survive at least one more garbage collection than they otherwise would; therefore, they tend to get promoted to older generations.

As a design consideration, you should wrap unmanaged resources in a separate class and implement a finalizer on this class. This class should not reference any managed object. For example, if you have a class that references managed and unmanaged resources, wrap the unmanaged resources in a separate class with a finalizer and make that class a member of the outer class. The outer class should not have a finalizer.
Identify which of your classes implement finalizers and consider the following questions:
- Does your class need a finalizer? Only implement a finalizer for objects that hold unmanaged resources across calls. Avoid implementing a finalizer on classes that do not require it because it adds load to the finalizer thread as well as the garbage collector.
- Does your class implement IDisposable? Check that any class that provides a finalizer also implements IDisposable, using the Dispose pattern described in Chapter 5, "Improving Managed Code Performance."
- Does your Dispose implementation suppress finalization? Check that your Dispose method calls GC.SuppressFinalize. GC.SuppressFinalize instructs the runtime to not call Finalize on your object because the cleanup has already been performed.
- Can your Dispose method be safely called multiple times? Check that clients can call Dispose multiple times without causing exceptions. Check that your code throws an ObjectDisposedException exception from methods (other than Dispose) if they are invoked after calling Dispose.
- Does your Dispose method call base class Dispose methods? If your class inherits from a disposable class, make sure that it calls the base class's Dispose.
- Does your Dispose method call Dispose on class members? If you have any member variables that are disposable objects, they too should be disposed.
- Is your finalizer code simple? Check that your finalizer code simply releases resources and does not perform more complex operations. Anything else adds overhead to the dedicated finalizer thread which can result in blocking.
- Is your cleanup code thread safe? For your thread safe types, make sure that your cleanup code is also thread safe. You need to do this to synchronize your cleanup code in the case where multiple client threads call Dispose at the same time.
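The questions above correspond to the Dispose pattern described in Chapter 5. A condensed sketch follows; the class name and resource field are placeholders, not code from the guide:

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private IDisposable managedResource;   // e.g., a stream (placeholder)
    private bool disposed = false;

    public void Dispose()
    {
        Dispose(true);
        // Cleanup is done, so the finalizer no longer needs to run.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed)
        {
            return;   // safe to call Dispose multiple times
        }
        if (disposing)
        {
            // Dispose managed members only on the explicit Dispose path.
            if (managedResource != null)
            {
                managedResource.Dispose();
            }
        }
        // Release unmanaged resources here (none in this sketch).
        disposed = true;
    }

    public void DoWork()
    {
        // Methods other than Dispose throw once the object is disposed.
        if (disposed)
        {
            throw new ObjectDisposedException("ResourceHolder");
        }
    }

    ~ResourceHolder()
    {
        Dispose(false);
    }
}
```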
Do You Use Unmanaged Resources Across Calls?
Check that any class that uses an unmanaged resource, such as a database connection across method calls, implements the IDisposable interface. If the semantics of the object are such that a Close method is more logical than a Dispose method, provide a Close method in addition to Dispose.

Do You Use Buffers for I/O Operations?
If your code performs I/O or long-running calls that require pinned memory, investigate where in your code the buffers are allocated. You can help reduce heap fragmentation by allocating them when your application starts. This increases the likelihood that they end up together in generation 2, where the cost of the pin is largely eliminated. You should also consider reusing and pooling the buffers for efficiency.

Looping and Recursion
Inefficient looping and recursion can create many bottlenecks. Even a slight inefficiency is magnified because the code is repeatedly executed. For this reason, you should take extra care to ensure the code within the loop or the recursive method is optimized. For more information about the questions and issues raised in this section, see "Iterating and Looping" in Chapter 5, "Improving Managed Code Performance." Use the following review questions to help identify performance issues in your loops:
- Do you repetitively access properties?
- Do you use recursion?
- Do you use foreach?
- Do you perform expensive operations within your loops?
Do You Repetitively Access Properties?
Repeated accessing of object properties can be expensive. Properties can appear to be simple, but might in fact involve expensive processing operations.

Do You Use Recursion?
If your code uses recursion, consider using a loop instead. A loop is preferable in some scenarios because each recursive call builds a new stack frame for the call. This results in consumption of memory, which can be expensive depending upon the number of recursions. A loop does not require any stack frame creation unless there is a method call inside the loop.

If you do use recursion, check that your code establishes a maximum number of times it can recurse, and ensure there is always a way out of the recursion and that there is no danger of running out of stack space.
Do You Use foreach?
Using foreach can result in extra overhead because of the way enumeration is implemented in .NET Framework collections. .NET Framework 1.1 collections provide an enumerator for the foreach statement to use by overriding the IEnumerable.GetEnumerator method. This approach is suboptimal because it introduces both managed heap and virtual function overhead associated with foreach on simple collection types. This can be a significant factor in performance-sensitive regions of your application. If you are developing a custom collection for your custom type, consider the following guidelines while implementing IEnumerable:
- If you implement IEnumerable.GetEnumerator, also implement a nonvirtual GetEnumerator method. Your class's IEnumerable.GetEnumerator method should call this nonvirtual method, which should return a nested public enumerator struct.
- Explicitly implement the IEnumerator.Current property on your enumerator struct.
Consider using a for loop instead of foreach to increase performance for iterating through .NET Framework collections that can be indexed with an integer.
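To illustrate the last point, the following fragment iterates an indexable collection with a simple for loop rather than foreach. The collection contents are illustrative:

```csharp
using System;
using System.Collections;

class IterationDemo
{
    static void Main()
    {
        ArrayList items = new ArrayList();
        items.Add(1);
        items.Add(2);
        items.Add(3);

        // Indexed access avoids the enumerator allocation and the
        // virtual MoveNext/Current calls that foreach incurs on this
        // collection type.
        int total = 0;
        for (int i = 0; i < items.Count; i++)
        {
            total += (int)items[i];
        }
        Console.WriteLine(total);   // 6
    }
}
```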
Do You Perform Expensive Operations Within Your Loops?
Examine the code in your loop and look for the following opportunities for optimization:
- Move any code that does not change within the loop out of the loop.
- Investigate the methods called inside the loop. If the called methods contain small amounts of code, consider inlining them or parts of them.
- If the code in the loop performs string concatenation, make sure that it uses StringBuilder.
- If you test for multiple exit conditions, begin the expression with the one most likely to allow you to exit.
- Do not use exceptions as a tool to exit one or more loops.
- Avoid calling properties within loops and if you can, check what the property accessor does. Calling a property can be a very expensive operation if the property is performing complex operations.
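The first and last points above can be sketched together as follows. The method and its argument are illustrative:

```csharp
using System.Collections;
using System.Text;

class LoopHoistingDemo
{
    static string BuildReport(ArrayList items)
    {
        StringBuilder sb = new StringBuilder();

        // Hoist the loop-invariant property access out of the loop:
        // items.Count does not change while we iterate, so read it once
        // instead of calling the property accessor on every iteration.
        int count = items.Count;
        for (int i = 0; i < count; i++)
        {
            sb.Append(items[i]);   // StringBuilder avoids repeated allocations
        }
        return sb.ToString();
    }
}
```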
String Operations
Review your code to see how it performs string manipulation. Intensive string manipulation can significantly degrade performance. Consider the following questions when reviewing your code's string manipulation:
- Do you concatenate strings?
- Do you use StringBuilder?
- Do you perform string comparisons?
Do You Concatenate Strings?
If you concatenate strings where the number of appends is known, you should use the + operator as follows.

String str = "abc" + "def" + "ghi";

If the number and size of appends is unknown, such as string concatenation in a loop, you should use the StringBuilder class as follows.
StringBuilder sb = new StringBuilder();
for (int i = 0; i < Results.Count; i++)
{
  sb.Append(Results[i]);
}
Do You Use StringBuilder?
StringBuilder is efficient for string concatenation where the number and size of appends is unknown. The following scenarios demonstrate efficient ways of using StringBuilder:
- String concatenation
// Prefer this
StringBuilder sb = new StringBuilder();
sb.Append(str1);
sb.Append(str2);
// over this
sb.Append(str1 + str2);
- Concatenating strings from various functions
// Prefer this: pass the StringBuilder to each function
void f1(StringBuilder sb, ...);
void f2(StringBuilder sb, ...);
void f3(StringBuilder sb, ...);
// over this: append each function's returned string
StringBuilder sb = new StringBuilder();
sb.Append(f1(...));
sb.Append(f2(...));
sb.Append(f3(...));
Do You Perform String Comparisons?
Check whether your code performs case-insensitive string comparisons. If it does, check that it uses the following overloaded Compare method.

String.Compare(string strA, string strB, bool ignoreCase);

Watch out for code that calls the ToLower method. Converting strings to lowercase and then comparing them involves temporary string allocations. This can be very expensive, especially when comparing strings inside a loop.
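The difference can be sketched as follows:

```csharp
using System;

class CompareDemo
{
    static void Main()
    {
        string a = "HELLO";
        string b = "hello";

        // Avoid: ToLower allocates two temporary strings per comparison.
        bool slow = (a.ToLower() == b.ToLower());

        // Prefer: this overload compares case-insensitively without
        // creating temporary lowercase copies.
        bool fast = (String.Compare(a, b, true) == 0);

        Console.WriteLine(slow && fast);   // True
    }
}
```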
More Information
For more information about the issues raised in this section, see "String Operations" in Chapter 5, "Improving Managed Code Performance."

Exception Handling
Managed code should use exception handling for robustness, security, and ease of maintenance. Used improperly, exception management can significantly affect performance. For more information about the questions and issues raised in this section, see "Exception Management" in Chapter 5, "Improving Managed Code Performance." Use the following review questions to help ensure that your code uses exception handling efficiently:
- Do you catch exceptions you cannot handle?
- Do you control application logic with exception handling?
- Do you use finally blocks to ensure resources are freed?
- Do you use exception handling inside loops?
- Do you re-throw exceptions?
Do You Catch Exceptions You Cannot Handle?
You should catch exceptions for very specific reasons, because catching generally involves rethrowing an exception to the code that calls you. Rethrowing an exception is as expensive as throwing a new exception.

Check that when your code catches an exception, it does so for a reason. For example, it might log exception details, attempt to retry a failed operation, or wrap the exception in a new exception and throw the outer exception back to the caller. This operation should be performed carefully and should not obscure error details.
Do You Control Application Logic with Exception Handling?
Check that your code does not use exception handling to control the flow of your normal application logic. Make sure that your code uses exceptions only for exceptional and unexpected conditions. If you throw an exception with the expectation that something other than a general purpose handler is going to do anything with it, you have probably done something wrong. Consider using bool return values if you need to specify the status (success or failure) of a particular activity.

For example, you can return false instead of throwing an exception if a user account was not found in the database. This is not a condition that warrants an exception. Failing to connect to the database, however, warrants an exception.
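The account-lookup example above can be sketched as follows. FindUserInDatabase is a hypothetical data access helper, not part of any real API:

```csharp
class UserStore
{
    // "Not found" is an expected outcome, so report it with a
    // bool return value rather than an exception.
    public bool TryGetUser(string name, out string user)
    {
        user = FindUserInDatabase(name);   // hypothetical lookup
        return user != null;
    }

    // Failing to connect to the database would be truly exceptional;
    // an exception thrown inside FindUserInDatabase would be the
    // appropriate way to report that condition.
    private string FindUserInDatabase(string name)
    {
        return (name == "alice") ? "alice" : null;   // stand-in for a query
    }
}
```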
Do You Use Finally Blocks to Ensure Resources Are Freed?
Make sure that resources are closed after use by using try/finally blocks. The finally block is always executed, even if an exception occurs, as shown in the following example.

SqlConnection conn = new SqlConnection(connString);
try
{
  conn.Open();  // Open the resource
}
finally
{
  if (null != conn)
    conn.Close();  // Always executed even if an exception occurs
}

Note C# provides the using construct that ensures an acquired resource is disposed at the end of the construct. The acquired resource must implement System.IDisposable or a type that can be implicitly converted to System.IDisposable, as shown in the following example.
Font MyFont3 = new Font("Arial", 10.0f);
using (MyFont3)
{
  // use MyFont3
}  // compiler will generate code to call Dispose on MyFont3
Do You Use Exception Handling Inside Loops?
Check if your code uses exceptions inside loops. This should be avoided. If you need to catch an exception, place the try/catch block outside the loop for better performance.

Do You Rethrow Exceptions?
Rethrowing exceptions is inefficient. Not only do you pay the cost for the original exception, but you also pay the cost for the exception that you rethrow.

Rethrowing exceptions also makes it harder to debug your code because you cannot see the original location of the thrown exception in the debugger. A common technique is to wrap the original exception as an inner exception. However, if you then rethrow, you need to decide whether the additional information from the inner exception is better than the superior debugging you would get if you had done nothing.
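The wrap-as-inner-exception technique mentioned above can be sketched as follows. The method names are illustrative:

```csharp
using System;

class RethrowDemo
{
    static void Process()
    {
        try
        {
            DoWork();
        }
        catch (Exception ex)
        {
            // Wrapping preserves the original error as InnerException,
            // at the cost of constructing and throwing a second exception.
            throw new ApplicationException("Processing failed", ex);
        }
    }

    static void DoWork()
    {
        throw new InvalidOperationException("original failure");
    }
}
```

When you rethrow the same exception from a catch block, prefer the bare `throw;` statement over `throw ex;` so that the original stack trace is preserved.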
Arrays
Arrays provide basic functionality for grouping types. To ensure that your use of arrays is efficient, review the following questions:
- Do you use strongly typed arrays?
- Do you use multidimensional arrays?
Do You Use Strongly Typed Arrays?
Identify places in your code where you use object arrays (arrays containing the Object type). If you use object arrays to store other types, such as integers or floats, the values are boxed when you add them to the array. Use a strongly typed array instead, to avoid the boxing. For example, use the following to store integers.

int[] arrIn = new int[10];

Use the preceding array instead of the following.
Object[] arrObj = new Object[10];
Do You Use Multidimensional Arrays?
If your code uses multidimensional arrays, see if you can replace the code with a jagged array (a single-dimensional array of arrays) to benefit from MSIL performance optimizations.

Note Jagged arrays are not CLS compliant and may not be used across languages. For more information, see "Use Jagged Arrays Instead of Multidimensional Arrays" in Chapter 5, "Improving Managed Code Performance."
Collections
To avoid creating bottlenecks and introducing inefficiencies, you need to use the appropriate collection type based on factors such as the amount of data you store, whether you need to frequently resize the collection, and the way in which you retrieve items from the collection.

For design considerations, see Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability." Chapter 4 addresses the following questions:
- Are you using the right collection type?
- Have you analyzed the requirements?
- Are you creating your own data structures unnecessarily?
- Are you implementing custom collections?

For more information, see "Collection Guidelines" in Chapter 5, "Improving Managed Code Performance." Chapter 5 asks the following questions:
- Do you need to sort your collection?
- Do you need to search your collection?
- Do you need to access each element by index?
- Do you need a custom collection?

Review the following questions if your code uses arrays or one of the .NET Framework collection classes:
- Have you considered arrays?
- Do you enumerate through collections?
- Do you initialize the collection to an approximate final size?
- Do you store value types in a collection?
- Have you considered strongly typed collections?
- Do you use ArrayList?
- Do you use Hashtable?
- Do you use SortedList?
Have You Considered Arrays?
Arrays avoid boxing and unboxing overhead for value types, as long as you use strongly typed arrays. You should consider using arrays for collections where possible, unless you need special features such as sorting or storing the values as key/value pairs.

Do You Enumerate Through Collections?
Enumerating through a collection using foreach is costly in comparison to iterating using a simple index. You should avoid foreach for iteration in performance-critical code paths, and use for loops instead.

Do You Initialize the Collection to an Approximate Final Size?
It is more efficient to initialize collections to a final approximate size even if the collection is capable of growing dynamically. For example, you can initialize an ArrayList using the following overloaded constructor.

ArrayList ar = new ArrayList(43);
Do You Store Value Types in a Collection?
Storing value types in a collection involves a boxing and unboxing overhead. The overhead can be significant when iterating through a large collection for inserting or retrieving the value types. Consider using arrays or developing a custom, strongly typed collection for this purpose.

Note At the time of this writing, the .NET Framework 2.0 (code-named "Whidbey") introduces generics to the C# language that avoid the boxing and unboxing overhead.
Have You Considered Strongly Typed Collections?
Does your code use an ArrayList for storing string types? You should prefer StringCollection over ArrayList when storing strings. This avoids casting overhead that occurs when you insert or retrieve items and also ensures that type checking occurs. You can develop a custom collection for your own data type. For example, you could create a Cart collection to store objects of type CartItem.

Do You Use ArrayList?
If your code uses ArrayList, review the following questions:
- Do you store strongly typed data in ArrayLists?
Use ArrayList to store custom object types, particularly when the data changes frequently and you perform frequent insert and delete operations. By doing so, you avoid the boxing overhead. The following code fragment demonstrates the boxing issue.
ArrayList al = new ArrayList();
al.Add(42.0F);           // Implicit boxing because the Add method takes an object
float f = (float)al[0];  // Item is unboxed here
- Do you use Contains to search ArrayLists? Store presorted data and use ArrayList.BinarySearch for efficient searches. Sorting and linear searches using Contains are inefficient. This is of particular significance for large lists. If you only have a few items in the list, the overhead is insignificant. If you need several lookups, then consider Hashtable instead of ArrayList.
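The Contains versus BinarySearch point can be sketched as follows. The list contents are illustrative:

```csharp
using System;
using System.Collections;

class SearchDemo
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        list.Add(30);
        list.Add(10);
        list.Add(20);

        // Contains performs a linear scan: O(n) per lookup.
        bool found = list.Contains(20);

        // Presort once, then each lookup is an O(log n) binary search.
        list.Sort();
        int index = list.BinarySearch(20);

        Console.WriteLine(found && index >= 0);   // True
    }
}
```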
Do You Use Hashtable?
If your code uses a Hashtable collection of key/value pairs, consider the following review questions:
- Do you store small amounts of data in a Hashtable? If you store small amounts of data (10 or fewer items), this is likely to be slower than using a ListDictionary. If you do not know the number of items to be stored, use a HybridDictionary.
- Do you store strings? Prefer StringDictionary instead of Hashtable for storing strings, because this preserves the string type and avoids the cost of up-casting and down-casting during storing and retrieval.
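The specialized dictionary choices above can be sketched as follows:

```csharp
using System;
using System.Collections.Specialized;

class DictionaryDemo
{
    static void Main()
    {
        // Few items with a known small count: ListDictionary.
        ListDictionary small = new ListDictionary();
        small.Add("one", 1);

        // Unknown item count: HybridDictionary starts as a ListDictionary
        // and switches to a Hashtable as the collection grows.
        HybridDictionary flexible = new HybridDictionary();
        flexible.Add("two", 2);

        // String keys and values: StringDictionary avoids the
        // up-casting and down-casting of a Hashtable.
        StringDictionary strings = new StringDictionary();
        strings.Add("greeting", "hello");

        Console.WriteLine(strings["greeting"]);   // hello
    }
}
```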
Do You Use SortedList?
You should use a SortedList to store key-and-value pairs that are sorted by the keys and accessed by key and by index. New items are inserted in sorted order, so the SortedList is well suited for retrieving stored ranges.

You should use SortedList if you need frequent re-sorting of data after small inserts or updates. If you need to perform a number of additions or updates and then re-sort the whole collection, an ArrayList performs better than the SortedList.
Evaluate both collection types by conducting small tests that measure the overall time taken for sorting, and choose the one that is right for your scenario.
Locking and Synchronization
To help assess the efficiency of your locking and synchronization code, use the following questions:
- Do you use Mutex objects?
- Do you use the Synchronized attribute?
- Do you lock "this"?
- Do you lock the type of an object?
- Do you use ReaderWriterLocks?
Do You Use Mutex Objects?
Review your code and make sure that Mutex objects are used only for cross-process synchronization and not for cross-thread synchronization within a single process. A Mutex is significantly more expensive to use than a critical section entered with the lock (C#) or SyncLock (Visual Basic .NET) statement.

Do You Use the Synchronized Attribute?
See which of your methods are annotated with the MethodImplOptions.Synchronized attribute. This attribute is coarse-grained: it serializes access to the entire method so that only one thread can execute the method at any given instant, with all other threads waiting in a queue. Unless you specifically need to synchronize an entire method, use an appropriate synchronization statement (such as a lock statement) to apply granular synchronization around the specific lines of code that need it. This helps to reduce contention and improve performance.

Do You Lock "this"?
Avoid locking "this" in your class, for correctness reasons rather than for any specific performance gain. Locking "this" exposes your synchronization object to any external code that holds a reference to your instance. To avoid this problem, provide a private object to lock on.

public class A
{
    public void Process()
    {
        lock (this) { /* ... */ }  // Avoid: external code can also lock on this instance
    }
}

// Change to the code below:

public class A
{
    private Object thisLock = new Object();

    public void Process()
    {
        lock (thisLock) { /* ... */ }
    }
}

Use this approach to safely synchronize only the relevant lines of code. If you require atomic updates to a member variable, use System.Threading.Interlocked.
Do You Lock the Type of an Object?
Avoid locking the type of an object, as shown in the following code sample.

lock (typeof(MyClass))
{
    // ...
}

If other threads within the same process lock on the type of the object, this might cause your code to hang until the thread holding the type lock finishes executing.
This also creates a potential for deadlocks. For example, there might be some other application in a different application domain in the same process that acquires a lock on the same type and never releases it.
Consider providing a static object in your class instead, and use that as a means of synchronization.
private static Object _lock = new Object();

lock (_lock)
{
    // ...
}

For more information, see "A Special Dr. GUI: Don't Lock Type Objects!" on MSDN at http://msdn.microsoft.com/en-us/library/aa302312.aspx.
Do You Use ReaderWriterLock?
Check whether your code uses ReaderWriterLock objects to synchronize multiple reads and occasional writes. Prefer ReaderWriterLock over other locking mechanisms, such as lock and Monitor, where you need to occasionally update data that is read frequently, such as a custom cache collection. The ReaderWriterLock allows multiple threads to read a resource concurrently but requires a thread to wait for an exclusive lock in order to write to the resource. For more information, see "ReaderWriterLock Class" in the .NET Framework Class Library on MSDN at http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlock(VS.71).aspx.
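The following is a minimal sketch of guarding a hypothetical cache collection with ReaderWriterLock; the CustomCache class and its members are illustrative, not part of the Framework:

```csharp
using System;
using System.Collections;
using System.Threading;

// Hypothetical cache: frequent reads, occasional writes.
public class CustomCache
{
    private static Hashtable _data = new Hashtable();
    private static ReaderWriterLock _rwLock = new ReaderWriterLock();

    public static object Get(string key)
    {
        _rwLock.AcquireReaderLock(Timeout.Infinite); // many readers may hold this at once
        try
        {
            return _data[key];
        }
        finally
        {
            _rwLock.ReleaseReaderLock();
        }
    }

    public static void Set(string key, object value)
    {
        _rwLock.AcquireWriterLock(Timeout.Infinite); // exclusive; waits for readers to drain
        try
        {
            _data[key] = value;
        }
        finally
        {
            _rwLock.ReleaseWriterLock();
        }
    }
}
```

Releasing the lock in a finally block ensures it is released even if the guarded code throws.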
More Information
For more information about the questions and issues raised in this section, see "Locking and Synchronization" and "Locking and Synchronization Guidelines" in Chapter 5, "Improving Managed Code Performance." To review your approach to locking and synchronization from a design perspective, see "Concurrency" in Chapter 4, "Architecture and Design Review for .NET Application Performance and Scalability."
Threading
If you misuse threads, you can easily reduce your application's performance rather than improve it. Review your code by using the following questions to help identify potential performance-related issues caused by misuse of threads or inappropriate threading techniques. For more information about the questions and issues raised in this section, see "Threading Guidelines" in Chapter 5, "Improving Managed Code Performance."
- Do you create additional threads?
- Do you call Thread.Suspend or Thread.Resume?
- Do you use volatile fields?
- Do you execute periodic tasks?
Do You Create Additional Threads?
You should generally avoid creating threads, particularly in server-side code; use the CLR thread pool instead. In addition to the cost of creating the underlying operating system thread, frequently creating new threads can also lead to excessive context switching, memory allocation, and additional cleanup when each thread dies. Recycling threads within the thread pool generally leads to superior results.

Do You Call Thread.Suspend or Thread.Resume?
Use synchronization objects, such as ManualResetEvent, if you need to synchronize threads. Calling Thread.Suspend and Thread.Resume to synchronize the activities of multiple threads can cause deadlocks. Generally, Suspend and Resume should be used only in the context of debugging or profiling, and not at all in typical applications.

Do You Use Volatile Fields?
Limit the use of the volatile keyword, because volatile fields restrict the way the compiler reads and writes the contents of those fields. Volatile fields are not meant for ensuring thread safety.

Do You Execute Periodic Tasks?
If you require a single thread for periodic tasks, it is cheaper to have just one thread explicitly executing the periodic tasks and then sleeping until it needs to perform the task again. However, if you require multiple threads to execute periodic tasks for each new request, you should use the thread pool. Use the System.Threading.Timer class to schedule periodic tasks; the Timer class uses the CLR thread pool to execute the code.
Note that a dedicated thread is more likely to get scheduled at the correct time than a pooled thread. This is because if all threads are busy, there could be a delay between the scheduled time of the background work and a worker thread becoming available. If there is a dedicated thread for the background work, a thread will be ready at the appointed time.
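A minimal sketch of scheduling periodic work with System.Threading.Timer; the one-second due time and five-second interval are arbitrary:

```csharp
using System;
using System.Threading;

class PeriodicTask
{
    static void Main()
    {
        // The callback runs on a CLR thread-pool thread: first run after
        // 1 second, then every 5 seconds thereafter.
        TimerCallback callback = new TimerCallback(DoWork);
        using (Timer timer = new Timer(callback, null, 1000, 5000))
        {
            Console.ReadLine(); // keep the process alive; Dispose stops the timer
        }
    }

    static void DoWork(object state)
    {
        Console.WriteLine("Periodic work at {0}", DateTime.Now);
    }
}
```

Keep a reference to the Timer for as long as you need it; if the Timer is collected, the callbacks stop.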
Asynchronous Processing
You can use asynchronous calls to help increase application concurrency. To ensure that you use asynchronous processing appropriately, review the following questions:
- Do you poll for asynchronous invocation results?
- Do you call EndInvoke after calling BeginInvoke?
Do You Poll for Asynchronous Invocation Results?
Avoid polling for asynchronous invocation results. Polling is inefficient and wastes processor cycles that could be used by other server threads. Use a blocking call instead. Obtain a WaitHandle from IAsyncResult.AsyncWaitHandle and use its blocking methods, such as WaitOne, WaitAll, and WaitAny.

Do You Call EndInvoke After Calling BeginInvoke?
Review your code to see where it calls BeginInvoke to use asynchronous delegates. For each call to BeginInvoke, make sure your code calls EndInvoke to avoid resource leaks.

More Information
For more information about the questions and issues raised in this section, see "Asynchronous Calls Explained" and "Asynchronous Calls Guidelines" in Chapter 5, "Improving Managed Code Performance." To review your design and how it uses asynchronous processing, see "Concurrency" in Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
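As a closing sketch for this section, the following fragment shows the BeginInvoke/EndInvoke pairing with a blocking wait instead of polling. The LongRunning delegate and DoWork method are hypothetical, and asynchronous delegate invocation is supported on the .NET Framework only:

```csharp
using System;

class AsyncDelegateExample
{
    // Hypothetical long-running operation.
    delegate int LongRunning(int input);

    static int DoWork(int input)
    {
        return input * 2;
    }

    static void Main()
    {
        LongRunning work = new LongRunning(DoWork);

        // Starts DoWork on a CLR thread-pool thread.
        IAsyncResult ar = work.BeginInvoke(21, null, null);

        // ... do other work on this thread ...

        // Blocking wait instead of polling ar.IsCompleted in a loop.
        ar.AsyncWaitHandle.WaitOne();

        // Always pair BeginInvoke with EndInvoke to collect the result
        // and release the resources held by the asynchronous operation.
        int result = work.EndInvoke(ar);
        Console.WriteLine(result);
    }
}
```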
Serialization
Inefficient serialization code is a common performance-related problem area. To review whether your code uses serialization, search for the "Serializable" string. Classes that support serialization should be decorated with the SerializableAttribute; they may also implement ISerializable. If your code does use serialization, review the following questions:
- Do you serialize too much data?
- Do you serialize DataSet objects?
- Do you implement ISerializable?
Do You Serialize Too Much Data?
Review which data members of an object your code serializes. Identify items that do not need to be serialized, such as items that can easily be recalculated when the object is deserialized. For example, there is no need to serialize an Age property in addition to a DateOfBirth property, because Age can easily be recalculated without requiring significant processor power. Such members can be marked with the NonSerialized attribute if you use the SoapFormatter or the BinaryFormatter, or with the XmlIgnore attribute if you use the XmlSerializer class, which Web services use. Also identify opportunities to use structures within your classes to encapsulate the data that needs to be serialized. Collecting the logical data in a single data structure can help reduce round trips and lessen the serialization impact.
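The Age/DateOfBirth example above can be sketched as follows; the Customer class is hypothetical, and the age calculation is deliberately simplified:

```csharp
using System;
using System.Xml.Serialization;

[Serializable]
public class Customer
{
    public DateTime DateOfBirth;

    // Skipped by BinaryFormatter and SoapFormatter; recalculated on demand.
    [NonSerialized]
    private int _age;

    // Skipped by XmlSerializer (used by Web services).
    [XmlIgnore]
    public int Age
    {
        get
        {
            if (_age == 0)
            {
                // Cheap to recompute after deserialization, so the value
                // never needs to travel on the wire. (Simplified calculation.)
                _age = DateTime.Now.Year - DateOfBirth.Year;
            }
            return _age;
        }
    }
}
```

Note that NonSerialized applies to fields, while XmlIgnore applies to the public members that XmlSerializer would otherwise emit.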
Do You Serialize DataSet Objects?
The DataSet object generates a large amount of serialization data and is expensive to serialize and deserialize. If your code serializes DataSet objects, conduct performance testing to analyze whether this creates a bottleneck in your application. If it does, consider alternatives such as custom classes.

Do You Implement ISerializable?
If your classes implement ISerializable to control the serialization process, be aware that you are responsible for maintaining your own serialization code. If you implement ISerializable simply to restrict specific fields from being serialized, consider using the Serializable and NonSerialized attributes instead. By using these attributes, you automatically gain the benefit of any serialization improvements in future versions of the .NET Framework.

More Information
For more information about improving serialization performance and DataSet serialization, see "How To: Improve Serialization Performance" in the "How To" section of this guide. For more information about the various options for passing data across the tiers of a distributed .NET application, see Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
Visual Basic Considerations
When optimized, Visual Basic .NET code can perform as well as C# code. If you have ported existing Visual Basic code to Visual Basic .NET, its performance is unlikely to be optimal, because ported code rarely uses the best .NET coding techniques. If you have Visual Basic .NET source code, review the following questions:
- Have you switched off int checking?
- Do you use on error goto?
- Do you turn on Option Strict and Explicit?
- Do you perform lots of string concatenation?
Have You Switched Off int Checking?
Int checking is beneficial during development, but you should consider turning it off to gain performance in production. Visual Basic .NET turns on int checking by default to make sure that overflow and divide-by-zero conditions generate exceptions.

Do You Use On Error Goto?
Review your code to see if it uses the on error goto construct. If it does, change your code to use .NET structured exception handling with Try/Catch/Finally blocks. The following code uses on error goto.

Sub AddOrderOld(ByVal connstring As String)
    On Error GoTo endFunc
    Dim dataclass As DAOrder = New DAOrder
    Dim conn As SqlConnection = New SqlConnection(connstring)
    dataclass.AddOrder(conn)
endFunc:
    If Not (conn Is Nothing) Then
        conn.Close()
    End If
End Sub

The following code shows how this should be rewritten using structured exception handling.

Sub AddOrder(ByVal connstring As String)
    Dim conn As SqlConnection
    Try
        Dim dataclass As DAOrder = New DAOrder
        conn = New SqlConnection(connstring)
        dataclass.AddOrder(conn)
    Catch ex As Exception
        ' Exception handling code
    Finally
        If Not (conn Is Nothing) Then
            conn.Close()
        End If
    End Try
End Sub
Do You Turn on Option Strict and Explicit?
Review your code and ensure that the Strict and Explicit options are turned on. Option Strict ensures that all narrowing type coercions are explicitly specified, which protects you from inadvertent late binding and enforces a higher level of coding discipline. Option Explicit forces you to declare a variable before using it, moving type checking from run time to compile time. Turn on Explicit and Strict as shown in the following code sample.

Option Explicit On
Option Strict On

If you compile from the command line using Vbc.exe, you can turn on Strict and Explicit as follows.

vbc mySource.vb /optionexplicit+ /optionstrict+
Do You Perform Lots of String Concatenation?
If your code performs lots of string concatenation, make sure that it uses the StringBuilder class for better performance.

Note: If you use ASP.NET to emit HTML output, use multiple Response.Write calls instead of a StringBuilder.
Reflection and Late Binding
If your code uses reflection or late binding, review the following questions:
- Do you use .NET Framework classes that use reflection?
- Do you use late binding?
- Do you use System.Object to access custom objects?
Do You Use .NET Framework Classes that Use Reflection?
Analyze where your code uses reflection. Avoid reflection on the critical path in an application, especially in loops and recursive methods. Reflection is used by many .NET Framework classes. Some common places where reflection is used are the following:
- The page framework in ASP.NET uses reflection to create the controls on the page and hook up event handlers. By reducing the number of controls, you enable faster page rendering.
- Framework APIs such as Object.ToString use reflection. Although ToString is a virtual method, the base Object implementation of ToString uses reflection to return the type name of the class. Override ToString on your custom types to avoid this.
- The .NET Framework remoting formatters, BinaryFormatter and SoapFormatter, use reflection. While they are reasonably fast for reference types, they can be slow for value types, which have to be boxed and unboxed to pass through the reflection API.
Do You Use Late Binding?
In Visual Basic .NET, a variable is late bound if it is declared as Object or without an explicit data type. When your code accesses members on late-bound variables, type checking and member lookup occur at run time. As a result, early-bound objects perform better than late-bound objects. The following example shows a data class being assigned to an Object variable.

Sub AddOrder()
    Dim dataclass As Object = New DAOrder
    ' Dim dataclass As DAOrder = New DAOrder will improve performance
    ' Do other processing
End Sub
Do You Use System.Object to Access Custom Objects?
Avoid using System.Object to access custom objects, because this incurs the performance overhead of reflection. Use this approach only in situations where you cannot determine the type of an object at design time.

More Information
For more information about the questions and issues raised in this section, see "Reflection and Late Binding" in Chapter 5, "Improving Managed Code Performance."

Code Access Security
Code access security supports the safe execution of semi-trusted code, protects users from malicious software, and prevents several kinds of attacks. It also supports controlled, code identity-based access to resources. Use the following review questions to review your use of code access security:
- Do you use declarative security?
- Do you call unmanaged code?
Do You Use Declarative Security?
Where possible, use declarative security instead of imperative security checks. Declarative demands provide better performance and are supported by the security tools being built to help with security audits. Note that if your security checks are conditional within a method, imperative security is your only option.
Do You Call Unmanaged Code?
When calling unmanaged code, you can remove the runtime security checks by using the SuppressUnmanagedCodeSecurity attribute. This converts the check to a LinkDemand check, which is much faster. However, you should only do so if you are absolutely certain that your code is not subject to luring attacks.

More Information
For more information about the questions and issues raised in this section, see "Code Access Security" in Chapter 5, "Improving Managed Code Performance." For more information about the danger of luring attacks and the potential risks introduced by using SuppressUnmanagedCodeSecurity and LinkDemand, see Chapter 8, "Code Access Security in Practice," in "Improving Web Application Security: Threats and Countermeasures" on MSDN at http://msdn.microsoft.com/en-us/library/aa302424.aspx.
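As a hedged sketch of the SuppressUnmanagedCodeSecurity technique described above (the NativeMethods wrapper class is illustrative; GetTickCount is a real Win32 API):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

// SuppressUnmanagedCodeSecurity replaces the per-call stack-walking demand
// with a faster LinkDemand. Apply it only when you are certain this class
// cannot be used by semi-trusted callers in a luring attack.
[SuppressUnmanagedCodeSecurity]
internal class NativeMethods
{
    [DllImport("kernel32.dll")]
    internal static extern uint GetTickCount();
}

class Program
{
    static void Main()
    {
        Console.WriteLine("Milliseconds since boot: {0}", NativeMethods.GetTickCount());
    }
}
```

Keeping the P/Invoke declarations in an internal class limits who can reach the unchecked entry points.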
Class Design Considerations
Review your class design using the following questions:
- Do you use properties?
- Do you define only the required variables as public?
- Do you seal your classes or methods?
Do You Use Properties?
You can expose class-level member variables by using public fields or public properties. Using properties represents good object-oriented programming practice because it allows you to encapsulate validation and security checks and to ensure that they execute when the property is accessed. Keep properties simple; they should not contain more code than is required for getting or setting the value and validating the parameters. Properties can look like inexpensive fields to clients of your class, but they may end up performing expensive operations.
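A minimal sketch of a property that encapsulates validation; the Order class and its Quantity rule are hypothetical:

```csharp
using System;

public class Order
{
    private int _quantity;

    // The property enforces a validation rule that a public field could not.
    public int Quantity
    {
        get { return _quantity; }
        set
        {
            if (value < 0)
            {
                throw new ArgumentOutOfRangeException("value", "Quantity cannot be negative.");
            }
            _quantity = value;
        }
    }
}
```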
Do You Define Only the Required Variables As Public?
You can scope member variables as either public or private. Think carefully about which members should be made public, because public members risk exposing sensitive data that can easily be manipulated. In addition to security concerns, avoiding unnecessary public members prevents additional serialization overhead when you use the XmlSerializer class, which serializes all public members by default.

Do You Seal Your Classes or Methods?
If you do not want anybody to extend your base classes, mark them with the sealed keyword. Also, if you derive from a base class that has virtual members and you do not want anybody to extend the functionality of your derived class, consider sealing the virtual members in the derived class. Sealing the virtual methods makes them candidates for inlining and other compiler optimizations.

Ngen.exe
The Native Image Generator utility (Ngen.exe) allows you to precompile your assemblies to avoid JIT compilation at run time. However, Ngen.exe does not guarantee improved performance, and you should carefully consider whether to use it. Native images cannot be used for assemblies that must be loaded as domain-neutral code shared across application domains; however, the precompiled code itself can be shared across processes, and this code sharing is one of the prime considerations for choosing Ngen.exe. When considering using Ngen.exe, review the following questions:
- Do you precompile Windows Forms applications?
- Do you create large shared libraries?
- Do you use application domains?
Do You Precompile Windows Forms Applications?
Windows Forms applications use a large number of shared libraries provided with the .NET Framework. As a result, the load and initialization time for Windows Forms applications can be much higher than for other kinds of applications. While not always the case, precompiling Windows Forms applications usually improves performance. Test your application with and without precompilation to be sure.

Do You Create Large Shared Libraries?
Precompiling your code using Ngen.exe generally helps if you create large shared libraries, because you pay the cost of loading them much more often. Microsoft precompiles the .NET Framework assemblies because they are shared across applications. Precompilation reduces the working set size and improves startup time.

ASP.NET
ASP.NET is often the foundation from which other technologies are used. Optimizing ASP.NET performance is critical to ensure optimum application performance. Review the following questions to help assess the efficiency of your ASP.NET applications:
- Do you use caching?
- Do you use session state?
- Do you use application state?
- Do you use threading and synchronization features?
- Do you manage resources efficiently?
- Do you manage strings efficiently?
- Do you manage exceptions efficiently?
- Have you optimized your Web pages?
- Do you use view state?
- Do you use server controls?
- Do you access data from your pages?
- Do you use data binding?
- Do you call unmanaged code from ASPX pages?
- Have you reviewed the settings in Machine.config?
Do You Use Caching?
Use the following review questions to assess your code's use of ASP.NET caching features:
- Do you have too many variations for output caching? Check your pages that use the output cache to ensure that the number of variations has a limit. Too many variations of an output-cached page can cause an increase in memory usage. You can identify pages that use the output cache by searching for the string "OutputCache."
- Could you use output caching? When reviewing your pages, start by asking yourself if the whole page can be cached. If the whole page cannot be cached, can portions of it be cached? Consider using the output cache even if the data is not static. If your content does not need to be delivered in near real-time, consider output caching. Using the output cache to cache either the entire page or portions of the page can significantly improve performance.
- Is there static data that would be better stored in the cache? Identify application-side data that is static or infrequently updated. This type of data is a great candidate for storing in the cache.
- Do you check for nulls before accessing cache items?
You can improve performance by checking for null before accessing the cached item as shown in the following code fragment.
Object item = Cache["myitem"];
if (item == null)
{
    // Repopulate the cache
}
This helps avoid the exceptions that are caused by accessing null objects. To find where in your code you access the cache, you can search for the string "Cache."
More Information
For more information about the questions and issues raised in this section, see "Caching Guidelines" in Chapter 6, "Improving ASP.NET Performance."

Do You Use Session State?
Use the following review questions to review your code's use of session state:
- Do you disable session state when not required?
Session state is on by default. If your application does not use session state, disable it in Web.config as follows.
<sessionState mode="Off" />
If parts of your application need session state, identify pages that do not use it and disable it for those pages by using the following page level attribute.
<%@ Page EnableSessionState="false" %>

Minimizing the use of session state increases the performance of your application.
- Do you have pages that do not write to a session?
Page requests using session state internally use a ReaderWriterLock to manage access to the session state. For pages that only read session data, consider setting EnableSessionState to ReadOnly.
<%@ Page EnableSessionState="ReadOnly" . . .%>
This is particularly useful when you use HTML frames. The default setting serializes page execution within a session because of the ReaderWriterLock; by setting EnableSessionState to ReadOnly, you prevent write-lock blocking and allow more parallelism.
- Do you check for nulls before accessing items in session state? You can improve performance by checking for null before accessing the item, as shown in the following code.
object item = Session["myitem"];
if (item == null)
{
    // Do something else
}

A common pitfall when retrieving data from session state is failing to check whether the data is null before accessing it, and then catching the resulting exception. Avoid this, because exceptions are expensive. To find where your code accesses session state, you can search for the string "Session."
- Do you store complex objects in session state? Avoid storing complex objects in session state, particularly if you use an out-of-process session state store. When using out-of-process session state, objects have to be serialized and deserialized for each request, which decreases performance.
- Do you store STA COM objects in session state?
Storing single-threaded apartment (STA) COM objects in session state
causes thread affinity because the sessions are bound to the original
thread on which the component is created. This severely affects both
performance and scalability.
Make sure that you use the following page level attribute on any page that stores STA COM objects in session state.
<%@ Page AspCompat="true" %>
This forces the page to run from the STA thread pool, avoiding any costly apartment switch from the default multithreaded apartment (MTA) thread pool for ASP.NET. Where possible, avoid the use of STA COM objects.
For more information, see Knowledge Base article 817005, "FIX: Severe Performance Issues When You Bind Session State to Threads in ASPCompat Model" at http://support.microsoft.com/default.aspx?scid=kb;en-us;817005.
More Information
For more information about the questions and issues raised in this section, see "Session State" in Chapter 6, "Improving ASP.NET Performance."

Do You Use Application State?
Use the following review questions to assess how efficiently your code uses application state:
- Do you store STA COM components in application state? Avoid storing STA COM components in application state where possible. Doing so effectively bottlenecks your application to a single thread of execution when accessing the component. Where possible, avoid using STA COM objects.
- Do you use the application state dictionary?
You should use the application state dictionary for storing read-only
values that can be set at application initialization time and do not
change afterward. There are several issues to be aware of when using
application state in your code, such as the following:
- Memory allocated to the storage of application variables is not released unless they are removed or replaced.
- Application state is not shared across a Web farm or a Web garden — variables stored in application state are global to the particular process in which the application is running. Each application process can have different values.
Consider using the following alternatives to application state:
- Create static properties for the application rather than using the
state dictionary. It is more efficient to look up a static property than
to access the state dictionary. For example, consider the following
code.
Application["name"] = "App Name";
It is more efficient to use the following code.
private static String _appName = "App Name";

public static String AppName
{
    get { return _appName; }
    set { _appName = value; }
}
- Use configuration files for storing application configuration information.
- Use the Cache object for data that is too volatile to store in application state but that needs periodic updates from a persistent medium.
- Use the session store for user-specific information.
More Information
For more information about the questions and issues raised in this section, see "Application State" in Chapter 6, "Improving ASP.NET Performance."

Do You Use Threading and Synchronization Features?
The .NET Framework exposes various threading and synchronization features, and the way your code uses multiple threads can have a significant impact on application performance and scalability. Use the following review questions to assess how efficiently your ASP.NET code uses threading:
- Do you create threads on a per-request basis? Avoid manually creating threads in ASP.NET applications. Creating threads is an expensive operation that requires initialization of both managed and unmanaged resources. If you do need additional threads to perform work, use the CLR thread pool. To find places in your code where you are creating threads, search for the string "ThreadStart."
- Do you perform long-running blocking operations?
Avoid blocking operations in your ASP.NET applications where
possible. If you have to execute a long-running task, consider using
asynchronous execution (if you can free the calling thread) or use the
asynchronous "fire and forget" model.
For more information, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide.
More Information
For more information about the questions and issues raised in this section, see "Threading Guidelines" in Chapter 6, "Improving ASP.NET Performance."

Do You Manage Resources Efficiently?
Use the following review questions to assess how efficiently your code uses resources:
- Do you explicitly close resources properly? Ensure that your code explicitly closes objects that implement IDisposable by calling the object's Dispose or Close method. Failure to close resources promptly can lead to increased memory consumption and poor performance. Failing to close database connections is a common problem. Use a finally block (or a using statement in C#) to release these resources and to ensure that the resource is closed even if an exception occurs.
- Do you pool shared resources? Check that you use pooling to increase performance when accessing shared resources. Ensure that shared resources, such as database connections and serviced components, that can be pooled are being pooled. Without pooling, your code incurs the overhead of initialization each time the shared resource is used.
- Do you obtain your resources late and release them early? Open shared resources just before you need them and release them as soon as you are finished. Holding onto resources for longer than you need them increases memory pressure and increases contention for these resources if they are shared.
- Do you transfer data in chunks over I/O calls? If you need to transfer data in chunks over I/O calls, allocate and pin buffers for sending and receiving the chunks. If you need to make concurrent I/O calls, create a pool of pinned buffers that is recycled among the various clients rather than creating a buffer on a per-request basis. This helps you avoid heap fragmentation and reduces buffer creation time.
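The pooled, pinned-buffer approach in the last question above can be sketched as follows. PinnedBufferPool is a hypothetical class; a production version would also need to free the GCHandles on shutdown and block or grow when the pool is empty:

```csharp
using System;
using System.Collections;
using System.Runtime.InteropServices;

// Illustrative sketch: a pool of pinned buffers shared across requests,
// instead of allocating (and pinning) a new buffer per request.
public class PinnedBufferPool
{
    private readonly Stack _free = new Stack();

    public PinnedBufferPool(int bufferCount, int bufferSize)
    {
        for (int i = 0; i < bufferCount; i++)
        {
            byte[] buffer = new byte[bufferSize];
            // Pinning up front keeps the buffers stationary during I/O and
            // avoids fragmenting the heap with many short-lived pins.
            GCHandle.Alloc(buffer, GCHandleType.Pinned);
            _free.Push(buffer);
        }
    }

    public byte[] Acquire()
    {
        lock (_free)
        {
            // Caller must handle null (wait, or fall back to a temporary buffer).
            return _free.Count > 0 ? (byte[])_free.Pop() : null;
        }
    }

    public void Release(byte[] buffer)
    {
        lock (_free)
        {
            _free.Push(buffer);
        }
    }
}
```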
More Information
For more information about the questions and issues raised in this section, see "Resource Management" in Chapter 6, "Improving ASP.NET Performance," and "Resource Management" in Chapter 3, "Design Guidelines for Application Performance."

Do You Manage Strings Efficiently?
Use the following review questions to assess how efficiently your ASP.NET code manipulates strings:
- Do you use Response.Write for formatting output? Identify areas in your code where you concatenate output, such as to create a table, and consider using Response.Write instead. Response.Write is the most efficient method for writing content to the client.
- Do you use StringBuilder to concatenate strings? If the number of appends is unknown and you cannot send the data to the client immediately by using a Response.Write, use the StringBuilder class to concatenate strings.
- Do you use += for concatenating strings? Identify places in your code where you perform string concatenation by using the += operator. If the number of appends is unknown, or you are appending an unknown size of data, consider using the StringBuilder class instead.
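The StringBuilder alternative to += concatenation described above can be sketched as follows; the BuildRow helper and its HTML are hypothetical:

```csharp
using System;
using System.Text;

public class ConcatExample
{
    public static string BuildRow(string[] cells)
    {
        // Each += on a string allocates a brand-new string; StringBuilder
        // appends into a reusable internal buffer instead.
        StringBuilder sb = new StringBuilder(256);
        sb.Append("<tr>");
        foreach (string cell in cells)
        {
            sb.Append("<td>").Append(cell).Append("</td>");
        }
        sb.Append("</tr>");
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(BuildRow(new string[] { "a", "b" }));
    }
}
```

Setting an initial capacity that roughly matches the expected output size avoids intermediate buffer growth.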
More Information
For more information about the questions and issues raised in this section, see "String Management" in Chapter 6, "Improving ASP.NET Performance."

Do You Manage Exceptions Efficiently?
Use the following review questions to assess how efficiently your code uses exceptions:
- Have you implemented an error handler in Global.asax? Although implementing an error handler in Global.asax does not necessarily increase performance, it helps you to identify unexpected exceptions that occur in your application. After you identify the exceptions that occur, take appropriate action to avoid them.
- Do you use try/finally on disposable resources? Ensure that disposable resources are released in a finally block to ensure they get cleaned up even in the event of an exception. Failing to dispose of resources is a common problem.
- Does your code avoid exceptions?
Your code should attempt to avoid exceptions to improve performance
because exceptions incur a significant overhead. Use the following
approaches:
- Check for null values.
- Do not use exceptions to control regular application logic.
- Do not catch exceptions you cannot handle and obscure useful diagnostic information.
- Use the overloaded Server.Transfer(String, bool) method instead of the single-parameter Server.Transfer, Response.Redirect, and Response.End, which internally raise exceptions.
More Information
For more information about the questions and issues raised in this section, see "Exception Management" in Chapter 6, "Improving ASP.NET Performance."
Have You Optimized Your Web Pages?
Use the following review questions to assess the efficiency of your .aspx pages:
- Have you taken steps to reduce your page size?
Try to keep the page size to a minimum. Large page sizes place
increased load on the CPU because of increased processing and a
significant increase in network bandwidth utilization, which may lead to
network congestion. Both of these factors lead to increased response
times for clients. Consider the following guidelines to help reduce page
size:
- Use script includes (script tags rather than interspersing code with HTML).
- Remove redundant white space characters from your HTML.
- Disable view state for server controls where it is not needed.
- Avoid long control names.
- Minimize the use of graphics, and use compressed images.
- Consider using cascading style sheets to avoid sending the same formatting directives to the client repeatedly.
- Is buffering disabled?
Ensure that you have buffering enabled. Buffering causes the server
to buffer the output and send it only after it has finished the
processing of the page. If buffering is disabled, the worker process
needs to continuously stream responses from all concurrent requests;
this can be a significant overhead on memory and the processor,
especially when you use the ASP.NET process model.
To find out if you have buffering disabled, you can search your code base for the following strings: "buffer" and "BufferOutput."
Make sure that the buffer attribute is set to true on the <pages> element in your application's Web.config file.
<pages buffer="True">
- Do you use Response.Redirect?
Search your code for "Response.Redirect" and consider replacing it with Server.Transfer. Server.Transfer does not incur the cost of a new request because it avoids the client-side redirection.
You cannot always simply replace Response.Redirect calls with Server.Transfer calls, because Server.Transfer uses a new handler during the handler phase of execution. If you need different authentication and authorization, caching, or other run-time behavior on the target, the two mechanisms are not equivalent. Response.Redirect causes an extra request to be sent to the server, but it also makes the new URL visible to the user, which may be required in scenarios where the user needs to bookmark the new location.
- Do you use Page.IsPostBack? Check that the logic in your page uses the Page.IsPostBack property to reduce redundant processing and avoid unnecessary initialization costs. Use Page.IsPostBack to conditionally execute code, depending on whether the page is generated in response to a server control event or is loaded for the first time.
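The Page.IsPostBack check can be sketched as follows (BindProductList is a hypothetical data-binding helper):

```csharp
using System;

public partial class ProductsPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            // Runs only on the first request for the page, not when a
            // server control event posts the page back to itself.
            BindProductList(); // hypothetical helper that binds the grid
        }
    }

    private void BindProductList()
    {
        // ... fetch and bind data ...
    }
}
```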
- Do you validate user input?
Check that you validate user input on the client to reduce round
trips to the server. This also provides better feedback to the user. For
security reasons, ensure that any client-side validation is
complemented by equivalent server-side validation.
For more information about validation design guidelines for building secure .NET Web applications, see "Input Validation" in Chapter 4, "Design Guidelines for Secure Web Applications," in "Improving Web Application Security: Threats and Countermeasures" on MSDN at http://msdn.microsoft.com/en-us/library/aa302424.aspx.
- Have you set Explicit and Strict to true?
Ensure you use Option Strict and Explicit to reduce inadvertent late binding when using Visual Basic .NET.
<%@ Page Language="VB" Explicit="true" Strict="true" %>
You can easily search for these page directives by using Findstr.exe with a regular expression.
C:\>findstr /i /s /r /c:"<%.*@.*page.*%>" *.aspx
pag\default.aspx:<%@ Page Language="VB" %>
pag\login.aspx:<%@ page Language="VB" %>
pag\main.aspx:<%@ Page Language="VB" Explicit="true" Strict="true" %>
...
- Have you disabled debugging?
Check your Web.config file and ensure debug is set to false in the <compilation> section and check your .aspx pages to ensure debug is set to false. If debugging is enabled, the compiler does not generate optimized code and pages are not batch compiled.
You can check your .aspx pages by using the Findstr.exe file with regular expressions.
C:\pag>findstr /i /r /c:"<%.*@.*page.*debug=.*true*.*%>" *.aspx
login.aspx:<%@ page Language="VB" Debug="True" %>
main.aspx:<%@ Page Language="c#" Debug="True" %>
- Have you disabled tracing?
Check your Web.config file to ensure trace is disabled in the <trace> section. Also check your .aspx pages to ensure trace is set to false.
You can check your .aspx pages by using the Findstr.exe file with regular expressions.
C:\pag>findstr /i /r /c:"<%.*@.*page.*trace=.*true*.*%>" *.aspx
login.aspx:<%@ page Language="VB" Trace="True" %>
main.aspx:<%@ Page Language="c#" Trace="True" %>
- Do you set aggressive timeouts?
Set timeouts aggressively and tune accordingly. Evaluate each page
and determine a reasonable timeout. The default page timeout is 90
seconds specified by the executionTimeout attribute in
Machine.config. Server resources are held up until the request is
processed completely or the execution times out, whichever is earlier.
In most scenarios, users do not wait for such a long period for the requests to complete. They either abandon the request totally or send a new request which further increases the load on the server.
For more information, see Chapter 17, "Tuning .NET Application Performance."
More Information
For more information about the questions and issues raised in this section, see "Pages" in Chapter 6, "Improving ASP.NET Performance."
Do You Use View State?
Use the following review questions to assess how efficiently your applications use view state:
- Do you disable view state when it is not required?
Evaluate each page to determine if you need view state enabled. View
state adds overhead to each request. The overhead includes increased
page sizes sent to the client as well as a serialization and
deserialization cost. You do not need view state under the following
conditions:
- The page does not post back to itself; the page is only used for output and does not rely on response processing.
- Your page's server controls do not handle events and you have no dynamic or data-bound property values (or they are set in code on every request).
- You ignore old data and repopulate the server control every time the page is refreshed.
- Have you taken steps to reduce the size of your view state? Evaluate your use of view state for each page. To determine a page's view state size, enable tracing and see how each control uses view state. Then disable view state on a control-by-control basis.
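The per-control approach above can be sketched as follows (the control name and page directive are illustrative fragments, not a complete page):

```aspx
<%-- Sketch: disable view state for a control that is repopulated
     with fresh data on every request. --%>
<asp:DataGrid id="ProductGrid" EnableViewState="false" runat="server" />

<%-- Alternatively, disable view state for an entire page. --%>
<%@ Page EnableViewState="false" %>
```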
More Information
For more information about the questions and issues raised in this section, see "View State" in Chapter 6, "Improving ASP.NET Performance."
Do You Use Server Controls?
Use the following review questions to review how efficiently your ASP.NET applications use server controls:
- Do you use server controls when you do not need to?
Evaluate your use of server controls to determine if you can replace
them with lightweight HTML controls or possibly static text. You might
be able to replace a server control under the following conditions:
- The data being displayed in the control is static, for example, a label.
- You do not need programmatic access to the control on the server side.
- The control is displaying read-only data.
- The control is not needed during post back processing.
- Do you have deep hierarchies of server controls? Deeply nested hierarchies of server controls compound the cost of building the control tree. Consider rendering the content yourself by using Response.Write or building a custom control which does the rendering. To determine the number of controls and to see the control hierarchy, enable tracing for the page.
More Information
For more information about the questions and issues raised in this section, see "Server Controls" in Chapter 6, "Improving ASP.NET Performance."
Do You Access Data From Your ASPX Pages?
Some form of data access is required by most ASP.NET applications. Data access is a common area where performance and scalability issues are found. Review the following questions to help improve your application's page-level data access:
- Do you page large result sets? Identify areas of your application where large result sets are displayed and consider paging the results. Displaying large result sets to users can have a significant impact on performance. For paging implementation details, see "How To: Page Records in .NET Applications" in the "How To" section of this guide.
- Do you use DataSets when you could be using DataReaders? If you do not need to cache data, exchange data between layers, or data bind to a control, and you need only forward-only, read-only access to data, use a DataReader instead.
Do You Use Data Binding?
Use the following review questions to review your code's use of data binding:
- Do you use Page.DataBind? Avoid calling Page.DataBind; bind each control individually to optimize your data binding. Calling Page.DataBind recursively calls DataBind on each control on the page.
- Do you use DataBinder.Eval?
DataBinder.Eval uses reflection, which affects performance. In most cases DataBinder.Eval is called many times from within a page, so implementing alternative methods provides a good opportunity to improve performance.
Avoid the following approach.
<ItemTemplate>
  <tr>
    <td><%# DataBinder.Eval(Container.DataItem,"field1") %></td>
    <td><%# DataBinder.Eval(Container.DataItem,"field2") %></td>
  </tr>
</ItemTemplate>
Use explicit casting. It offers better performance by avoiding the cost of reflection. Cast the Container.DataItem as a DataRowView if the data source is a DataSet.
<ItemTemplate>
  <tr>
    <td><%# ((DataRowView)Container.DataItem)["field1"] %></td>
    <td><%# ((DataRowView)Container.DataItem)["field2"] %></td>
  </tr>
</ItemTemplate>
Cast the Container.DataItem as a String if the data source is an Array or an ArrayList.
<ItemTemplate>
  <tr>
    <td><%# ((String)Container.DataItem) %></td>
  </tr>
</ItemTemplate>
More Information
For more information about the questions and issues raised in this section, see "Databinding" in Chapter 6, "Improving ASP.NET Performance."
Do You Call Unmanaged Code From ASPX Pages?
Use the following review questions to review your code's use of interoperability:
- Have you enabled AspCompat for calling STA COM components?
Make sure that any page that calls an STA COM component sets the AspCompat page level attribute.
<%@ Page AspCompat="true" %>
This instructs ASP.NET to execute the current page request using a thread from the STA thread pool. By default, ASP.NET uses the MTA thread pool to process a request to a page. If you are using STA components, the component is bound to the thread where it was created. This causes a costly thread switch from the thread pool thread to the thread on which the STA object is created.
- Do you create STA COM components in the page constructor?
Check your pages to ensure you are not creating STA COM components in the page constructor. Create STA components in the Page_Load, Page_Init or other events instead.
The page constructor always executes on an MTA thread. When an STA COM component is created from an MTA thread, the STA COM component is created on the host STA thread. The same thread (host STA) executes all instances of apartment-threaded components that are created from MTA threads. This means that even though all users have a reference to their own instance of the COM component, all of the calls into these components are serialized to this one thread, and only one call executes at a time. This effectively bottlenecks the page to a single thread and causes substantial performance degradation.
If you use the AspCompat attribute, these events run on a thread from the STA thread pool, which results in a smaller performance hit because the thread switch is avoided.
- Do you use Server.CreateObject?
Avoid using Server.CreateObject and early bind to your components at compile time wherever possible. Server.CreateObject uses late binding and is primarily provided for backwards compatibility.
Search your code base to see if you use this routine and as an alternative, create an interop assembly to take advantage of early binding.
More Information
For more information about the questions and issues raised in this section, see "COM Interop" in Chapter 6, "Improving ASP.NET Performance."
Have You Reviewed the Settings in Machine.config?
Use the following review questions to review your application's deployment plan:
- Is the thread pool tuned appropriately? Proper tuning of the CLR thread pool improves performance significantly. Before deploying your application, ensure that the thread pool has been tuned for your application.
- Is the memory limit configured appropriately? Configuring the ASP.NET memory limit ensures optimal ASP.NET cache performance and server stability. In IIS 5.0 or when you use the ASP.NET process model under IIS 6.0, configure the memory limit in Machine.config. With IIS 6.0, you configure the memory limit by using the IIS MMC snap-in.
- Have you removed unnecessary HttpModules? Including HttpModules that you do not need adds extra overhead to ASP.NET request processing. Check that you have removed or commented out unused HttpModules in Machine.config.
More Information
For more information about the issues raised in this section, see Chapter 6, "Improving ASP.NET Performance."
Interop
There is a cost associated with calling unmanaged code from managed code: a fixed cost for the transition across the boundary, and a variable cost for parameter and return value marshaling. The fixed cost for both COM interop and P/Invoke is small, typically less than 50 instructions. The cost of marshaling to and from managed types depends on how different the in-memory type representations are on either side of the boundary. Additionally, when you call across thread apartments, a thread switch is incurred, which adds to the total cost of the call.
To locate calls to unmanaged code, scan your source files for "System.Runtime.InteropServices," which is the namespace you use when you call unmanaged code.
If your code uses interop, use the following questions when you review your code:
- Do you explicitly name the method you call when using P/Invoke? Be explicit with the name of the function you want to call. When you use the DllImport attribute, you can set the ExactSpelling attribute to true to prevent the CLR from searching for a different function name.
- Do you use blittable types? When possible, use blittable types when calling unmanaged code. Blittable data types have the same representation in managed and unmanaged code and require no marshaling. The following types from the System namespace are blittable: Byte, SByte, Int16, UInt16, Int32, UInt32, Int64, UInt64, IntPtr, UIntPtr, Single, and Double.
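A minimal sketch of a P/Invoke declaration that uses only blittable parameters and sets ExactSpelling to avoid name probing (Beep is a real Win32 API; the wrapper class name is illustrative):

```csharp
using System.Runtime.InteropServices;

public class NativeMethods
{
    // Beep takes two DWORD (UInt32) parameters, which are blittable,
    // so no marshaling conversion is required for the call.
    // ExactSpelling=true prevents the CLR from probing for
    // alternate entry-point names (such as an "A" or "W" suffix).
    [DllImport("kernel32.dll", ExactSpelling = true)]
    public static extern bool Beep(uint frequency, uint duration);
}
```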
- Do you use In/Out attribute explicitly for parameters?
By default, parameters are marshaled into and out of each call. If
you know that a parameter is used only in a single direction, you can
use the In attribute or Out attribute
to control when marshaling occurs. Combining the two is particularly
useful when applied to arrays and formatted, non-blittable types.
Callers see the changes a callee makes to these types only when you
apply both attributes. Because these types require copying during
marshaling, you can use the In attribute and Out attribute to reduce unnecessary copies.
instance string  marshal(bstr) FormatNameByRef(
    [in][out] string&  marshal(bstr) first,
    [in][out] string&  marshal(bstr) middle,
    [in][out] string&  marshal(bstr) last)
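In C#, the equivalent declaration applies the In and Out attributes explicitly to the marshaled parameters; a sketch (the interface name is illustrative):

```csharp
using System.Runtime.InteropServices;

public interface INameFormatter // illustrative COM interop interface
{
    // [In, Out] tells the marshaler these BSTR-marshaled strings are
    // copied in both directions; without both attributes the callee's
    // changes to these non-blittable types are not visible to the caller.
    string FormatNameByRef(
        [In, Out, MarshalAs(UnmanagedType.BStr)] ref string first,
        [In, Out, MarshalAs(UnmanagedType.BStr)] ref string middle,
        [In, Out, MarshalAs(UnmanagedType.BStr)] ref string last);
}
```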
- Do you rely on the default interop marshaling? Sometimes it is faster to perform manual marshaling by using methods available on the Marshal class, rather than relying on default interop marshaling. For example, if large arrays of strings need to be passed across an interop boundary but the unmanaged code needs only a few of those elements, you can declare the array as IntPtr and manually access only those few elements that are required.
- Do you have Unicode to ANSI conversions? When you call functions in the Win32 API, you should call the Unicode version of the API; for example, GetModuleNameW instead of the ANSI version GetModuleNameA. All strings in the CLR are Unicode strings. If you call a Win32 API through P/Invoke that expects an ANSI character array, every character in the string has to be narrowed.
- Do you explicitly pin short-lived objects?
Pinning short-lived objects may cause fragmentation of the managed
heap. You can find places where you are explicitly pinning objects by
searching for "fixed" in your source code.
You should pin only long-lived objects, and only where you are sure of the buffer size; for example, buffers used to perform repeated I/O calls. You can reuse this type of buffer for I/O throughout the lifetime of your application. By allocating and initializing these buffers when your application starts, you help ensure that they are promoted quickly to generation 2, where the overhead of heap fragmentation is largely eliminated.
- How do you release COM objects?
Consider calling Marshal.ReleaseComObject in a finally
block to ensure that COM objects referenced through a runtime callable
wrapper (RCW) release properly even if an exception occurs.
When you reference a COM object from ASP.NET, you actually maintain a reference to an RCW. It is not enough to simply assign null to the reference that holds the RCW; instead, call Marshal.ReleaseComObject. This is most relevant to server applications because, under heavy load, garbage collection (and finalization) might not occur soon enough and performance might suffer due to a buildup of objects awaiting collection.
You do not need to call ReleaseComObject from Windows Forms applications that use a modest number of COM objects that are passed freely in managed code. The garbage collector can efficiently manage the garbage collection for these infrequent allocations.
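The finally-block release pattern can be sketched as follows ("MyLegacy.Component" is a hypothetical ProgID; the surrounding class is illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

public class LegacyCaller
{
    // Sketch: release an RCW deterministically, even if a call throws.
    public static void CallLegacyComponent()
    {
        object comObj = null;
        try
        {
            Type t = Type.GetTypeFromProgID("MyLegacy.Component");
            comObj = Activator.CreateInstance(t);
            // ... call methods on comObj ...
        }
        finally
        {
            if (comObj != null)
            {
                // Decrements the RCW reference count without waiting
                // for garbage collection and finalization.
                Marshal.ReleaseComObject(comObj);
            }
        }
    }
}
```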
A common pitfall when releasing COM objects is to set the object to null and call GC.Collect followed by GC.WaitForPendingFinalizers. You should not do this, because the finalization thread takes precedence over the application threads while the garbage collection runs. This can significantly reduce application performance.
- Do you use the /unsafe switch when creating interop assemblies?
By default, RCWs perform run-time security checks that cause the
stack to be walked to ensure that the calling code has the proper
permissions to execute the code. You can create run-time callable
wrappers that perform reduced run-time security checks by running the
Tlbimp.exe file with the /unsafe option. Use this option only after a careful code review of the wrapped APIs to ensure that they are not subject to luring attacks.
For more information see "Use SuppressUnmanagedCodeSecurity with Caution" in Chapter 8, "Code Access Security in Practice" of Improving Web Application Security: Threats and Countermeasures on MSDN at http://msdn.microsoft.com/en-us/library/aa302424.aspx.
More Information
For more information about the issues raised in this section, see Chapter 7, "Improving Interop Performance."
Enterprise Services
Use the following review questions to analyze the efficiency of your serviced components and the code that calls them:
- Do you use object pooling?
- Do you manage resources efficiently?
- Do you use Queued Components?
- Do you use Loosely Coupled Events?
- Do you use COM+ transactions?
- Do you use the Synchronization attribute?
Do You Use Object Pooling?
To ensure that you use object pooling most efficiently and effectively, review the following questions:
- Do you use objects with heavy initialization overhead? Consider enabling object pooling for objects that perform heavy initialization; otherwise, do not use object pooling. For example, object pooling is well suited to an object that opens a legacy database connection in its constructor.
- Do you need to control the number of objects activated? You can use object pooling to limit the number of objects. For example, you might want to restrict the number of open legacy database connections. Object pooling provides an effective connection pooling mechanism for the legacy database.
- Do you release objects properly back to pool?
If you use JIT activation, calling SetAbort or SetComplete or using the AutoComplete attribute ensures that an object returns to the pool. Client code should always call Dispose on any object that implements IDisposable, including serviced components.
Consider using JIT activation with object pooling if you call only one method on the pooled object.
- Do you use JIT activation when calling multiple methods? Do not use JIT activation if the client instantiates the class and calls multiple methods; JIT activation is more appropriate for single-call scenarios.
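The pooling and JIT activation guidance above can be sketched with the Enterprise Services attributes; the class name and pool sizes are illustrative values to tune for your application:

```csharp
using System.EnterpriseServices;

// Sketch: a pooled serviced component suited to single-call usage.
[ObjectPooling(Enabled = true, MinPoolSize = 2, MaxPoolSize = 10)]
[JustInTimeActivation] // appropriate only when clients make single calls
public class LegacyGateway : ServicedComponent
{
    // AutoComplete marks the context done when the method returns,
    // so the object is deactivated and returned to the pool;
    // it aborts instead if the method throws.
    [AutoComplete]
    public void Execute()
    {
        // ... work against the legacy resource ...
    }
}
```

Client code should still call Dispose on the component so unmanaged resources are released promptly.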
Do You Manage Resources Efficiently?
Review the following questions to ensure that you manage resources efficiently within your serviced component code:
- Do you call Dispose on serviced components? Make sure that client code always calls Dispose on serviced components. This helps to ensure the speedy release of unmanaged resources. Additionally, calling Dispose on pooled objects (which are not using JIT activation) returns them to the pool.
- Do you call ReleaseComObject on objects that involve Runtime Callable Wrappers?
Identify components that are accessed through RCWs and ensure that you call Marshal.ReleaseComObject appropriately. Do not call Marshal.ReleaseComObject
when you reference a regular (nonqueued, nonevent class) managed
serviced component in a library application. In this case, you do not
reference an RCW. You reference an RCW in the following situations:
- You reference an unmanaged COM+ component written in native code (for example, a Visual Basic 6.0 component) hosted in either a library or server application.
- You reference an unmanaged queued component in a library or server application.
- You reference an unmanaged Loosely Coupled Event (LCE) class in a library or server application.
- You reference an unmanaged COM component (not hosted in COM+).
- Do you call SetAbort as soon as possible?
Call SetAbort immediately on failure so that the transaction can be aborted and resources freed quickly. The SetAbort example shown in Table 13.1 performs faster than using AutoComplete. Note that when you have nested serviced component calls, you should call SetAbort
in lower level methods and each method should propagate an error value
upwards. Database exceptions should be caught in the class making the
call to the database. It should also call SetAbort and return an error message and error code.
Table 13.1: Examples of SetAbort and AutoComplete

SetAbort:

if( !DoSomeWork() )
{
  // Something goes wrong.
  ContextUtil.SetAbort();
}
else
{
  // All goes well.
  ContextUtil.SetComplete();
}

AutoComplete:

[AutoComplete]
public void Debit(int amount)
{
  // Your code
  // Commits if no error, otherwise aborts
}
Do You Use Queued Components?
When using queued components, you should avoid any time and order dependency in the client's algorithm. The natural programming model provided by queuing breaks down rapidly if you start to force ordering. Queued Components is a "fire and forget" model; time and order dependencies cause unexpected behavior and contention issues.
Do You Use Loosely Coupled Events?
The COM+ Loosely Coupled Event (LCE) service provides a distributed publisher-subscriber model. If you use LCE, you should not use it to broadcast messages to large numbers of subscribers. Review the following questions:
- Do you use LCE for a large number of subscribers?
Evaluate whether you have too many subscribers for an event because
LCE is not designed for large multicast scenarios. A good alternative is
to use broadcast. The sockets layer has broadcast packet support.
For more information about using the sockets layer, see "Using UDP Services" on MSDN at http://msdn.microsoft.com/en-us/library/tst0kwb1(VS.71).aspx.
- Do you block on the executing thread when publishing events? Using queued components lets you publish events asynchronously without blocking your main executing thread. This can be particularly useful in scenarios where you need to publish events from an ASP.NET application but do not want to block the worker thread processing the request.
- Do you fire events to one subscriber at a time?
When a publisher fires an event, the method call does not return
until all the subscriber components have been activated and contacted.
With large numbers of subscribers, this can severely affect performance.
You can use the Fire in parallel option to instruct the event system to use multiple threads to deliver events to subscribers by using the following attribute.
[EventClass(FireInParallel=true)]
Do You Use COM+ Transactions?
If you use COM+ transactions provided by Enterprise Services, review the following questions:
- Do you need distributed COM+ transactions? The declarative programming model supported by Enterprise Services makes it very easy to add transactional support to your programs. When you need to manage transactions that span multiple resource managers, the Microsoft Distributed Transaction Coordinator (DTC) makes it easy to manage your unit of work. For transactions against a single database, consider ADO.NET or manual T-SQL transactions in stored procedures. Regardless of the type (DTC, ADO.NET transactions, or SQL Server transactions), avoid using transactions where you do not need them. For example, fetching and displaying records (that are not updatable) in a transaction is an unnecessary overhead.
- Have you chosen the appropriate isolation level?
The default isolation level in COM+ is Serializable, although COM+ 1.5 enables you to change the isolation level. COM+ 1.5 ships with Windows Server 2003 and Windows XP, but not with Windows 2000. The Repeatable Read and Serializable isolation levels can cause database requests to be queued and response times to increase, but they provide higher data consistency.
Use ReadCommitted as the default unless you have different data consistency requirements. Lower isolation levels might be appropriate for certain read-only cases. When determining an appropriate level, carefully consider your business rules and the transaction's unit of work. You can configure a component's isolation level using the Transaction attribute as shown in the following code.
[Transaction(Isolation=TransactionIsolationLevel.ReadCommitted)]
Do You Use the Synchronization Attribute?
If you use the Synchronization attribute, access to the entire object is synchronized, ensuring that only a single thread executes a given object instance at a time. Consider more granular approaches, such as locks or Mutex objects, to improve concurrency.
More Information
For more information about the issues raised in this section, see Chapter 8, "Improving Enterprise Services Performance."
Web Services
Use the review questions in this section to assess the efficiency of your Web services, as well as the client code that calls them.
Web Methods
Review your Web method implementation by using the following questions:
- Do you use primitive types as parameters for Web methods? Regardless of the encoding style you use, you should prefer simple primitive types such as int, double, and string as parameters for Web services. These types require less serialization effort and are easily validated.
- Do you validate the input with a schema before processing it?
We strongly recommend having a schema and using it to assist in the
design and debug phases even if strong validation is inappropriate for
production. From a security standpoint, you should validate input.
Finding and rejecting invalid input early can also help avoid redundant
processing time and CPU utilization. However, validating XML input using
schemas introduces additional processing overhead; you need to balance
the benefits of validation against this additional cost for your
particular application to determine whether validation is appropriate.
If you do use validation, make sure you optimize schema validation performance, for example by compiling and caching the schema. You can validate incoming messages in a separate HTTP module, in a SOAP extension, or within the Web method itself. For more information, see "Validating XML" in Chapter 9, "Improving XML Performance."
- Do you perform I/O operations in your Web service?
If your code performs I/O bound operations such as file access,
consider using an asynchronous Web method. An asynchronous
implementation helps in cases where you want to free up the worker
thread instead of waiting for the results to return from a potentially
long-running task.
You should not implement asynchronous Web methods when making a long-running database call, because you end up using delegates that require a worker thread for asynchronous processing. This degrades performance rather than improving it.
- Does the client expect data back from the Web service?
If your client does not expect data from the Web service, check if your code uses the OneWay attribute on the Web method so that the client does not wait on any results.
public class BatchOperations : WebService
{
  [SoapDocumentMethod(OneWay=true),
   WebMethod(Description="Starts long running operation 1.")]
  public void ProcessLongRunningOp1()
  {
    // Start processing
  }
}
Web Service Clients
Use the following review questions to help review your Web service consumer code:
- Have you considered calling Web services asynchronously?
- Do you make long-running calls to Web services?
- Do you use XmlIgnore to reduce the amount of data sent over the wire?
- Are client timeouts greater than your Web service timeout?
- Do you abort connections when ASP.NET pages timeout?
- Do you use PreAuthentication with Basic authentication?
- Do you use UnsafeAuthenticatedConnectionString with Windows authentication?
- Have you configured your connections?
- Have you tuned the thread pool on the server and client?
Have You Considered Calling Web Services Asynchronously?
You can improve performance on the client by invoking Web services asynchronously. The proxy generated by Visual Studio .NET automatically provides two extra methods for asynchronous invocation of the Web service. For example, if you have a method named MyProcess, Visual Studio .NET automatically generates two additional methods named BeginMyProcess and EndMyProcess.
For Windows Forms applications, you should use asynchronous invocation to keep the user interface (UI) responsive to user actions. For server applications, you should invoke Web services asynchronously when you can free up the worker thread to do some useful work.
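The Begin/End pattern described above can be sketched as follows (MyService and MyProcess are hypothetical names standing in for a generated proxy class and its Web method):

```csharp
// Sketch: asynchronous invocation through a generated proxy.
MyService proxy = new MyService();

// Start the call; the worker thread is not blocked while it runs.
IAsyncResult result = proxy.BeginMyProcess(null, null);

// ... do other useful work on this thread while the call is in flight ...

// Collect the result; blocks only if the call has not yet completed.
proxy.EndMyProcess(result);
```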
Do You Make Long-Running Calls to Web Services?
If your Web service calls are long-running, you can free up the worker thread for useful work by invoking the Web services asynchronously.
For more information, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide.
Do You Use XmlIgnore To Reduce the Amount of Data Sent Over the Wire?
Use the XmlIgnore attribute to avoid sending unnecessary data over the wire. By default, XML serialization serializes all the public properties and fields of a class. If your class includes derived data that you do not want to return to the client, you can mark those members with the XmlIgnore attribute.
As a design consideration, you should consider passing custom classes to and from Web services. This is an efficient approach, and the class does not need to correspond one-to-one with the internal structures used by the clients or the Web service.
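The effect of XmlIgnore can be shown with a small, self-contained sketch; the OrderSummary class and its members are illustrative, not from this guide. The TotalWithTax field is derived data the client can recompute, so it is kept off the wire:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class OrderSummary
{
    public string OrderId;
    public decimal Total;

    // Derived data the client can recompute; keep it off the wire.
    [XmlIgnore]
    public decimal TotalWithTax;
}

public static class Program
{
    public static string Serialize(OrderSummary order)
    {
        var serializer = new XmlSerializer(typeof(OrderSummary));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, order);
            return writer.ToString();
        }
    }

    public static void Main()
    {
        // The serialized XML contains OrderId and Total, but no
        // TotalWithTax element.
        Console.WriteLine(Serialize(new OrderSummary {
            OrderId = "A100", Total = 50m, TotalWithTax = 54m }));
    }
}
```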
Are Client Timeouts Greater Than Your Web Service Timeout?
Ensure that client-side timeouts for calls to the Web service are greater than the Web service timeout. Consider the following guidelines:- When calling a Web service synchronously, ensure that the proxy timeout is set appropriately.
- Set the executionTimeout attribute of the httpRuntime element to a higher value than the proxy timeout for the Web service.
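As a sketch, if the proxy timeout is set in code to 100 seconds (proxy.Timeout = 100000, in milliseconds), the ASP.NET execution timeout might be configured slightly higher. The values here are illustrative only:

```xml
<!-- Web.config (illustrative values): executionTimeout is in seconds
     and should exceed the proxy timeout used for the Web service call. -->
<system.web>
  <httpRuntime executionTimeout="120" />
</system.web>
```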
Do You Abort Connections When ASP.NET Pages Timeout?
If you have an ASP.NET page that calls a Web service, it is possible for the page request to time out before the page receives a response back from the Web service. In this event, the connection to the Web service does not get aborted and the Web service request continues to execute, eventually returning despite the client page timing out.To address this issue, tune your time-outs and modify the automatically generated proxy code to abort the request if the ASP.NET page times out.
For more information about tuning time-outs for Web services, see "Web Services Tuning" in Chapter 17, "Tuning .NET Application Performance." For more information about how to abort Web service connections for timed-out Web pages, see "Timeouts" in Chapter 10, "Improving Web Services Performance."
Do You Use Pre-Authentication with Basic Authentication?
To save round trips between the client and server, use the PreAuthenticate property of the proxy when using Basic authentication. Pre-authentication applies only after the Web service successfully authenticates the first time; it has no impact on the first Web request. For more information, see "Connections" in Chapter 10, "Improving Web Services Performance."
Do You Use UnsafeAuthenticatedConnectionSharing with Windows Authentication?
If your ASP.NET application calls a Web service that uses Windows Integrated Authentication, consider enabling UnsafeAuthenticatedConnectionSharing. By default, when you connect using Windows authentication, connections are opened and closed per request. Enabling UnsafeAuthenticatedConnectionSharing keeps connections open so that they can be reused. Note that with this setting enabled, the same connection is reused for multiple requests from different users. This may not be desirable if you need to flow the identity of the user when making the calls.
For more information, see "Connections" in Chapter 10, "Improving Web Services Performance."
Have You Configured Your Connections?
If you are calling multiple Web services, you can prioritize and allocate connections by using the connectionManagement element in Machine.config. If you call a remote Web service from an ASP.NET application, ensure that you have configured the maxconnection setting in Machine.config. Consider increasing this to twelve times the number of CPUs if processor utilization is below the threshold limits.
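A minimal sketch of the connectionManagement element follows; the host name is hypothetical, and the value 24 assumes the 12-per-CPU guideline on a 2-CPU server:

```xml
<!-- Machine.config (illustrative): 12 x 2 CPUs = 24 connections
     to a heavily used remote Web service host. -->
<system.net>
  <connectionManagement>
    <add address="remoteWebServiceHost" maxconnection="24" />
    <add address="*" maxconnection="2" />
  </connectionManagement>
</system.net>
```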
Have You Tuned the Thread Pool on the Server and Client?
Before deploying your application, ensure that the thread pool has been tuned for your client (where appropriate) and your Web service. Appropriate tuning of the thread pool can improve performance drastically. The important attributes are maxWorkerThreads, maxIoThreads, minFreeThreads, and minLocalRequestFreeThreads.
Tuning the thread pool affects the number of requests that the server can process concurrently. This drives other decisions, such as the size of the connection pool to the database and the number of concurrent connections to a remote Web service (defined by maxconnection in Machine.config).
For more information, see "Threading" in Chapter 10, "Improving Web Services Performance."
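The four attributes named above can be sketched in configuration. The values below are illustrative starting points only, not recommendations; determine the right values under load testing:

```xml
<!-- Machine.config (illustrative starting values; tune under load).
     maxWorkerThreads/maxIoThreads live on processModel; the minimum
     free-thread settings live on httpRuntime. -->
<processModel maxWorkerThreads="100" maxIoThreads="100" />
<httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
```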
More Information
For more information about the issues raised in this section, see Chapter 10, "Improving Web Services Performance."
Remoting
Use the following review questions to analyze your use and choice of .NET remoting:- Do you use MarshalByRef and MarshalByValue appropriately?
- Do you use the HttpChannel?
- Do you need to transfer large amounts of data over the HttpChannel?
- Which formatter do you use to serialize data?
- Do you send all the data across the wire?
- Do you serialize ADO.NET objects using the BinaryFormatter?
- Have you considered calling remote components asynchronously?
Do You Use MarshalByRef and MarshalByValue Appropriately?
Identify places in your code where you use MarshalByRef and MarshalByValue, and ensure that the appropriate one is used in each case.
Use MarshalByRef in the following situations:
- The state of the object should stay in the host application domain.
- The size of the object is prohibitively large.
Use MarshalByValue in the following situations:
- You do not need to update the data on the server.
- You need to pass the complete state of the object.
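The distinction can be sketched in code; the class names and members below are illustrative, not from this guide:

```csharp
using System;

// Marshaled by reference: callers receive a proxy, and the object's
// state stays in the host application domain.
public class InventoryService : MarshalByRefObject
{
    // Illustrative service method; executes in the host application domain.
    public int GetStockLevel(string sku) { return 42; }
}

// Marshaled by value: the complete state of the object is serialized
// and copied to the caller, so no further server round trips are needed.
[Serializable]
public class StockSnapshot
{
    public string Sku;
    public int Quantity;
}
```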
Do You Use the HttpChannel?
If you use the HttpChannel for .NET remoting, you should prefer IIS as the host for the remote component, because the component is then loaded in the ASP.NET worker process. The ASP.NET worker process loads the server garbage collector, which is more efficient for garbage collection on multiprocessor machines. If you use a custom host, such as a Windows service, you can use only the workstation garbage collector. The HttpChannel also enables you to load balance components hosted in IIS.
Do You Need to Transfer Large Amounts of Data over the HttpChannel?
Consider reducing the amount of data being serialized. Mark any member that does not need to be serialized with the NonSerialized attribute. However, if you still pass large amounts of data, consider using HTTP 1.1 compression by hosting the objects in IIS. You need to develop a custom proxy for compressing and decompressing the data on the client side. This can add an extra layer of complexity as well as development time for your application.
Which Formatter Do You Use To Serialize Data?
If you need to use the SoapFormatter, consider using Web services instead of .NET remoting. SOAP-based communication in Web services outperforms remoting in most scenarios.
Prefer the BinaryFormatter for optimum performance when using .NET remoting. The BinaryFormatter creates a compact binary wire representation for the data passed across the boundary. This reduces the amount of data passed over the network.
Do You Send All The Data Across The Wire?
Sending an entire data structure across the wire can be expensive. Evaluate the data structures you send across the wire to determine whether you need to pass all the data associated with each structure. The internal representation of the data need not be the same as the representation transmitted across remoting boundaries.
Mark members that do not need to be serialized with the NonSerialized attribute.
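As a sketch (the class and its fields are illustrative), derived or recomputable members can be excluded from binary serialization with the NonSerialized attribute, and reflection can confirm the exclusion:

```csharp
using System;
using System.Reflection;

[Serializable]
public class CustomerRecord
{
    public string Name;
    public string Region;

    // Derived, recomputable data: marked so serialization skips it.
    [NonSerialized]
    public string CachedDisplayName;
}
```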
Do You Serialize ADO.NET Objects using BinaryFormatter?
Serializing ADO.NET objects by using the BinaryFormatter still causes them to be serialized as XML. As a result, the amount of data passed over the wire is large for ADO.NET objects. In most cases, you can optimize the serialization of ADO.NET objects by implementing your own serialization for these objects.
More Information
For more information, see the following resources:- "How To: Improve Serialization Performance" in the "How To" section of this guide.
- Knowledge Base article 829740, "Improving DataSet Serialization and Remoting Performance," at http://support.microsoft.com/default.aspx?scid=kb;en-us;829740.
- "Binary Serialization of ADO.NET Objects" in MSDN Magazine at http://msdn.microsoft.com/en-us/magazine/cc188907.aspx.
- If you serialize using DataSet, see "Do You Use DataSets?" in the "DataSets" section later in this chapter.
Have You Considered Asynchronous Calls to the Remote Component?
For server applications, you should consider asynchronous calls when you can free up the worker thread to do some other useful work. The worker thread can be completely freed to handle more incoming requests, or partially freed to do some useful work before blocking for the results.
More Information
For more information about the issues raised in this section, see Chapter 11, "Improving Remoting Performance."
Data Access
Use the following questions in this section to review the efficiency of your application's data access:- Do you use connections efficiently?
- Do you use commands efficiently?
- Do you use stored procedures efficiently?
- Do you use Transact-SQL?
- Do you use Parameters?
- Do you use DataReaders?
- Do you use DataSets?
- Do you use Transactions?
- Do you use Binary Large Objects (BLOBS)?
- Do you page through data?
Do You Use Connections Efficiently?
Use the following review questions to review your code's use of database connections:- Do you close your connections properly?
Keeping too many connections open is a common pitfall. Ensure that you close your connections properly to reduce resource pressure. Identify areas in your code where you use connections and ensure that the following guidelines are followed:
- Open and close the connection within the method.
- Explicitly close connections using a finally or using block.
- When using DataReaders, specify CommandBehavior.CloseConnection.
- If using Fill or Update with a DataSet, do not explicitly open the connection. The Fill and Update methods automatically open and close the connection.
- Do you pool your database connections?
Creating database connections is expensive. You can reduce the creation overhead by pooling your database connections.
You can pool connections by connecting to the database with a single identity, rather than flowing the identity of the original caller to the database. Flowing the caller's identity results in a separate connection pool for each user. Changing the connection string in any way, even by adding an empty space, creates a separate pool for that connection string. If you pool your database connections, make certain that you call Close or Dispose on the connection as soon as you are done with it. This ensures that the connection is promptly returned to the pool.
- Is the pool size set appropriately?
It is important to optimize the maximum and minimum pool sizes to maximize throughput for your application. If you set the maximum too high, you may create deadlocks and heavy resource utilization on the database. If you set it too low, you risk underutilizing the database and queuing up requests.
Determine appropriate maximum and minimum values for the pool during performance testing and tuning.
- What data provider do you use?
Make sure that your code uses the correct data provider. Each database-specific provider is optimized for a particular database:
- Use System.Data.SqlClient for SQL Server 7.0 and later.
- Use System.Data.OleDb for SQL Server 6.5 or OLE DB providers.
- Use System.Data.Odbc for ODBC data sources.
- Use System.Data.OracleClient for Oracle.
- Use SQLXML managed classes for XML data and SQL Server 2000.
- Do you check the State property of OleDbConnection? Using the State property causes an additional round trip to the database. If you need to check the status of the connection, consider handling the StateChange event.
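The close-connections guidance above can be sketched with a stand-in connection class, because a real SqlConnection needs a live database. The pattern is what matters: open late, close early, and dispose on every code path via a using block.

```csharp
using System;

// Stand-in for a database connection so the sketch runs without a server;
// with ADO.NET you would use SqlConnection in exactly the same way.
public class FakeConnection : IDisposable
{
    public bool IsOpen { get; private set; }
    public void Open()    { IsOpen = true; }
    public void Dispose() { IsOpen = false; }  // Close/Dispose returns it to the pool
}

public static class DataAccess
{
    // Exposed only so the sketch can verify the connection was closed.
    public static FakeConnection LastConnection;

    public static int GetCustomerCount()
    {
        // Open the connection inside the method; the using block
        // guarantees Dispose even if the query throws.
        using (var conn = new FakeConnection())
        {
            LastConnection = conn;
            conn.Open();
            return 0; // execute the command here
        }
    }
}
```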
More Information
For more information about the questions and issues raised in this section, see "Connections" in Chapter 12, "Improving ADO.NET Performance."
Do You Use Commands Efficiently?
Use the following review questions to help review how efficiently your code uses database commands:- Do you execute queries that do not return data? If you do not return values from your stored procedure, use ExecuteNonQuery for optimum performance.
- Do you execute queries that only return a single value? Identify queries that return only a single value. Consider changing the query to use return values and calling Command.ExecuteNonQuery, or, if you do not have control over the query, use Command.ExecuteScalar, which returns the value of the first column of the first row.
- Do you access very wide rows or rows with BLOBs? If you are accessing very wide rows or rows with BLOB data, use CommandBehavior.SequentialAccess in conjunction with GetBytes to access the BLOB data in chunks.
- Do you use CommandBuilder at runtime? CommandBuilder objects are useful for design time, prototyping, and code generation. However, you should avoid using them in production applications because the processing required to generate commands can affect performance. Ensure you are not using the CommandBuilder objects at run time.
More Information
For more information about the questions and issues raised in this section, see "Commands" in Chapter 12, "Improving ADO.NET Performance."
Do You Use Stored Procedures?
Use the following review questions to review your code's use of stored procedures:- Have you analyzed the stored procedure query plan? During your application's development stage, you should analyze your stored procedure query plan. Recompilation is not necessarily a bad thing; the optimizer recompiles when the initial plan is not optimal for subsequent calls. By monitoring and reducing frequent recompilation, you can avoid performance hits. You can monitor stored procedure recompilation by creating a trace in SQL Profiler and tracking the SP:Recompile event. Identify the cause of recompilation and take corrective action. For more information, see "Execution Plan Recompiles" in Chapter 14, "Improving SQL Server Performance."
- Do you have multiple statements within the stored procedure? Use SET NOCOUNT ON when you have multiple statements within your stored procedures. This prevents SQL Server from sending the DONE_IN_PROC message for each statement in the stored procedure and reduces the processing SQL Server performs as well as the size of the response sent across the network.
- Do you return a result set for small amounts of data? You should use output parameters and ExecuteNonQuery to return small amounts of data instead of returning a result set that contains a single row. This avoids the performance overhead associated with creating the result set on the server. If you need to return several output parameters, you can select them into variables and then emit a single row that contains all the values, so that only one result set is returned.
- Do you use CommandType.Text with OleDbCommand? If you use OleDbCommand, use CommandType.Text with explicit ODBC call syntax. If you use CommandType.StoredProcedure, the provider generates the ODBC call syntax anyway; by using explicit call syntax, you reduce the work the provider performs.
More Information
For more information about the questions and issues raised in this section, see "Stored Procedures" in Chapter 12, "Improving ADO.NET Performance."
Do You Use Transact-SQL?
If you use T-SQL, review the following questions:- Do you restrict the amount of data selected? Returning large amounts of data increases query time and the time it takes to transfer the data across the network. Similarly, updating large amounts of data increases the load on the database server. Avoid using SELECT * in your queries and check that you restrict the amount of data that you select, for example, by using an appropriate WHERE clause.
- Do you use SELECT TOP to limit rows?
Using TOP in your SELECT statements enables you to limit the number of rows returned by the query. If you implement client-side paging, it makes sense to use this feature. Query processing stops as soon as the specified number of rows has been retrieved.
For more information about paging data, see "How To: Page Records in .NET Applications" in the "How To" section of this guide.
- Do you select only the columns you need?
Select only the columns you need instead of using SELECT * queries. This reduces network traffic in addition to reducing the processing on the database server.
Reducing your columns to the minimum also makes it easier for SQL Server to use an index to cover your query. If all the columns you need are in a usable index that is smaller than the main table, less I/O is required because the index contains the full result. Indexes are often created exactly for this reason, or columns are added to existing indexes not because of sorting needs but to make the index better at "covering" the necessary queries. Creating "covering" indexes is vital, because if the index does not cover the query, the main table needs to be accessed (a so-called bookmark lookup from the index). From a performance perspective, these lookups are equivalent to using joins.
- Do you batch multiple queries to avoid round trips? Batching is the process of sending several SQL statements in one trip to the server. Batching can increase performance by reducing round trips to the database. Where possible, batch multiple SQL statements together and use the DataReader.NextResult method to improve performance. Another alternative is to batch multiple SQL statements within a stored procedure.
Do You Use Parameters?
Use the following review questions to review your code's use of parameters:- Do you use parameters for all your stored procedures and SQL statements? Using parameters when calling SQL statements as well as stored procedures can increase performance. Identify areas in your code where you call SQL statements or stored procedures and ensure that you are explicitly creating parameters and supplying the parameter type, size, precision, and scale.
- Do you explicitly specify the parameter types? Specifying the parameter types prevents unnecessary type conversions that are otherwise performed by the data provider. Use the enumeration type relevant to your data provider; for example, SqlDbType or OleDbType.
- Do you cache the parameters for a frequently called stored procedure? If you invoke stored procedures frequently, consider caching the stored procedure parameters to improve performance. If your ASP.NET pages call stored procedures, you can use the cache APIs. If your data access code is factored into a separate component, caching helps only if your components are stateful. A good approach is to cache parameter arrays in a Hashtable. Each parameter array contains the parameters required by a particular stored procedure used by a particular connection.
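One way to sketch the Hashtable-based parameter cache follows. The CachedParameter type stands in for SqlParameter (which requires a data provider to be useful); real code would cache and clone SqlParameter arrays in the same way, returning clones so that callers never mutate the cached originals:

```csharp
using System;
using System.Collections;

// Minimal stand-in for a provider parameter so the sketch runs
// without a database; with ADO.NET you would use SqlParameter.
public class CachedParameter : ICloneable
{
    public string Name;
    public object Value;

    // Clone copies the definition (name, type, size...) but not the value.
    public object Clone() { return new CachedParameter { Name = Name }; }
}

public static class ParameterCache
{
    static readonly Hashtable cache = Hashtable.Synchronized(new Hashtable());

    public static void Cache(string key, params CachedParameter[] parameters)
    {
        cache[key] = parameters;
    }

    // Return clones so callers can set values without corrupting the cache.
    public static CachedParameter[] Get(string key)
    {
        var cached = (CachedParameter[])cache[key];
        var copy = new CachedParameter[cached.Length];
        for (int i = 0; i < cached.Length; i++)
            copy[i] = (CachedParameter)cached[i].Clone();
        return copy;
    }
}
```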
More Information
For more information about the questions and issues raised in this section, see "Parameters" in Chapter 12, "Improving ADO.NET Performance."
Do You Use DataReaders?
If you use DataReaders, review the following questions:- Do you close your DataReaders? Scan your code to ensure you are closing your DataReaders as soon as you are finished with them. You should call Close or Dispose in a finally block. If you pass a DataReader back from a method, use CommandBehavior.CloseConnection to ensure the connection gets closed when the reader is closed.
- Do you use an index to read from a DataReader?
All output from a DataReader should be read using an index (for example, rdr.GetString(0)), which is faster, although for readability and maintainability you might prefer to use the string names of the columns. If you access the same columns multiple times (for example, when you retrieve a number of rows), use local variables that store the index numbers of the columns. You can use rdr.GetOrdinal() to retrieve the ordinal position of a column.
For more information, see "Use GetOrdinal when Using an Index-Based Lookup" in Chapter 12, "Improving ADO.NET Performance."
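The cached-ordinal pattern can be sketched with a DataTableReader standing in for a provider DataReader, so it runs without a database; the table and column names are illustrative:

```csharp
using System;
using System.Data;

public static class Program
{
    public static int SumQuantities(IDataReader rdr)
    {
        // Look up the ordinal once, then use index-based access per row.
        int qtyOrdinal = rdr.GetOrdinal("Quantity");
        int total = 0;
        while (rdr.Read())
            total += rdr.GetInt32(qtyOrdinal);
        return total;
    }

    public static IDataReader CreateSampleReader()
    {
        // DataTableReader stands in for SqlDataReader here so the
        // sketch runs without a database.
        var table = new DataTable("Orders");
        table.Columns.Add("OrderId", typeof(string));
        table.Columns.Add("Quantity", typeof(int));
        table.Rows.Add("A1", 2);
        table.Rows.Add("A2", 3);
        return table.CreateDataReader();
    }

    public static void Main()
    {
        Console.WriteLine(SumQuantities(CreateSampleReader()));
    }
}
```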
Do You Use DataSets?
Use the following review questions to review your code's use of DataSets:- Do you serialize DataSets?
Inefficient serialization of DataSets is a major performance issue for remote calls. Avoid sending DataSets (especially when using .NET remoting) and consider alternative means of sending the data over the wire, such as arrays or simple collections, where possible.
If you serialize DataSets, make sure you adhere to the following guidelines:
- Only return relevant data in the DataSet.
- Consider using column name aliases that are shorter than the actual column names. This helps reduce the size of the DataSet.
- Avoid multiple versions of the data. Call AcceptChanges before serializing a DataSet.
- When serializing a DataSet over a Remoting channel, use the DataSetSurrogate class.
- Do you search data that has a primary key column? If you need to search a DataSet by primary key, create the primary key on the DataTable. This creates an index that the Rows.Find method can use to quickly find the required records. Avoid using DataTable.Select, which does not use indexes.
- Do you search data that does not have a primary key? If you need to repeatedly search on non-primary key data, create a DataView with a sort order. This creates an index that can be used to improve search efficiency. This approach is best suited to repetitive searches, because there is some cost to creating the index.
- Do you use DataSets for XML data? If you do not pass the schema for the XML data, the DataSet tries to infer the schema at run time. Pass XmlReadMode.IgnoreSchema to the ReadXml method to ensure that the schema is not inferred.
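Both indexed-search techniques above can be sketched against an in-memory table; the table and column names are illustrative:

```csharp
using System;
using System.Data;

public static class Program
{
    public static DataTable BuildCustomers()
    {
        var customers = new DataTable("Customers");
        var id = customers.Columns.Add("CustomerId", typeof(int));
        customers.Columns.Add("Region", typeof(string));
        customers.PrimaryKey = new[] { id };  // creates the index Rows.Find uses
        customers.Rows.Add(1, "North");
        customers.Rows.Add(2, "South");
        customers.Rows.Add(3, "South");
        return customers;
    }

    public static void Main()
    {
        var customers = BuildCustomers();

        // Indexed primary key lookup instead of DataTable.Select.
        DataRow row = customers.Rows.Find(2);

        // Repetitive non-key searches: sort a DataView once, then FindRows.
        var byRegion = new DataView(customers) { Sort = "Region" };
        DataRowView[] south = byRegion.FindRows("South");

        Console.WriteLine("{0}, {1} matches", row["Region"], south.Length);
    }
}
```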
More Information
For more information about the questions and issues raised in this section, see "DataSets" in Chapter 12, "Improving ADO.NET Performance."
Do You Use Transactions?
Use the following review questions to review your code's use of transactions:- What isolation level do you use? Different isolation levels have different costs. Applications may have to operate at different transaction isolation levels, depending on their business needs. You need to choose the isolation level that is appropriate for the scenario. For example, scenarios that require a high degree of data integrity need a higher isolation level.
- Do you have long-running transactions? Having a long-running transaction with high isolation levels prevents other users from reading the data. Instead of locking resources for the duration of the transaction, consider accommodating various states within your schema (for example, ticket status PENDING, instead of locking the row). Another option is to use compensating transactions.
- Did you turn off automatic transaction enlistment if it's not needed?
If you use the .NET Framework Data Provider for SQL Server, you can turn off automatic transaction enlistment by setting Enlist to false in the connection string, as shown in the following code, when you are not dealing with an existing distributed transaction.
SqlConnection LondonSqlConnection = new SqlConnection(
    "Server=London;Integrated Security=true;Enlist=false;");
More Information
For more information about the questions and issues raised in this section, see "Transactions" in Chapter 12, "Improving ADO.NET Performance."
Do You Use Binary Large Objects (BLOBs)?
Use the following review questions to review your code's use of BLOB data:- Do you store BLOBs in the database?
Reading and writing BLOBs to and from a database is an expensive operation, not only from a database perspective but also from a code perspective, because there is a memory impact associated with accessing BLOB data. If you store files such as images or documents that are frequently accessed by a Web server, consider storing the files on the Web server's file system and maintaining a list of the objects in the database. This can increase performance by avoiding frequently moving BLOBs between the database and the Web server.
Note This approach adds a maintenance overhead of having to update the links if the file path changes.
If you have a store of images that is too large for a Web server, storing it in the SQL database as BLOBs is the right choice.
- Do you use a DataReader to read BLOBs? If you access BLOB data, check that you use CommandBehavior.SequentialAccess in conjunction with the GetBytes, GetChars, or GetString methods to read the BLOB data in chunks.
- Do you read or write BLOBs to SQL Server database?
Ensure that you use READTEXT and UPDATETEXT to read and write large BLOBs to a SQL Server database. Use READTEXT to read text, ntext, varchar, varbinary, or image values. This enables you to read the data in chunks to improve performance. Use UPDATETEXT to write data in chunks.
However, if the BLOB item is relatively small, you can consider reading it in a single statement or operation rather than in chunks. This depends on your network bandwidth and workload.
- Do you read or write BLOBs to an Oracle database? Ensure that you use the System.Data.OracleClient.OracleLob class to read and write BLOBs to an Oracle database. The Read and Write methods provide the flexibility of reading and writing the data in chunks.
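The chunked-read loop can be sketched against a DataTableReader (which also supports GetBytes), so it runs without a database; against SQL Server you would obtain the reader with ExecuteReader(CommandBehavior.SequentialAccess):

```csharp
using System;
using System.Data;
using System.IO;

public static class BlobReader
{
    // Reads a byte[] column in fixed-size chunks instead of one large
    // allocation. With SQL Server, pair this with
    // CommandBehavior.SequentialAccess on ExecuteReader.
    public static byte[] ReadBlob(IDataReader rdr, int ordinal, int chunkSize)
    {
        using (var output = new MemoryStream())
        {
            var buffer = new byte[chunkSize];
            long offset = 0;
            while (true)
            {
                long read = rdr.GetBytes(ordinal, offset, buffer, 0, chunkSize);
                output.Write(buffer, 0, (int)read);
                offset += read;
                if (read < chunkSize) break;  // last (partial) chunk read
            }
            return output.ToArray();
        }
    }
}
```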
More Information
For more information about the questions and issues raised in this section, see "Binary Large Objects" in Chapter 12, "Improving ADO.NET Performance."
Do You Page Through Data?
Use the following review questions to review your code's use of paging records:- Do you page data based on user query (such as results of a search query)? If you need to page through a large amount of data based on user queries, consider using SELECT TOP along with the table data type in your stored procedures. For more information, see "How To: Page Records in .NET Applications" in the "How To" section of this guide.
- Do you page through data that is mostly static over a period of time? If you need to page through large amounts of data that is the same for all users and mostly static, consider using SELECT TOP along with a global temporary table in your stored procedures. If you take this approach, ensure that you have a policy in place to manage factors such as refreshing the temporary table with current data. For more information, see "How To: Page Records in .NET Applications".
More Information
For more information about the issues raised in this section, see Chapter 12, "Improving ADO.NET Performance."
Summary
Performance and scalability code reviews are similar to regular code reviews or inspections, except that the focus is on the identification of coding flaws that can lead to reduced performance and scalability.
This chapter has shown how to review managed code for the top performance and scalability issues. It has also shown you how to identify other more subtle flaws that can lead to performance and scalability issues.
Performance and scalability code reviews are not a panacea. However, they can be very effective and should be a regular milestone in the development life cycle.