Class Design
· Declare all fields (data members) private.
· Provide a private default constructor if a class has only static methods and properties.
· Explicitly define a protected constructor on an abstract base class.
· Use selection statements (if-else and switch) when the control flow depends on an object's value; use dynamic binding when the control flow depends on the object's type.
· All variants of an overloaded method shall be used for the same purpose and have similar behavior.
· If you must provide the ability to override a method, make only the most complete overload virtual and define the other operations in terms of it.
· Use code to describe preconditions, postconditions, exceptions, and class invariants.
· It shall be possible to use a reference to an object of a derived class wherever a reference to that object's base class is used.
· Do not overload any "modifying" operators on a class type.
· Do not modify the value of any of the operands in the implementation of an overloaded operator.
· Use a struct when value semantics are desired.
· Implement the GetHashCode method whenever you implement the Equals method.
· Override the Equals method whenever you implement the == operator, and make them do the same thing.
· Override the Equals method any time you implement the IComparable interface.
· Consider implementing the Equals method on value types.
· Reference types should not override the equality operator (==).
· Consider implementing operator overloading for the equality (==), inequality (!=), less than (<), and greater than (>) operators when you implement IComparable.
· Consider overloading the equality operator (==) when you overload the addition (+) operator and/or subtraction (-) operator.
· Consider implementing all relational operators (<, <=, >, >=) if you implement any of them.
· Allow properties to be set in any order.
· Use a property rather than a method when the member is a logical data member.
· Use a method rather than a property when that is more appropriate.
· Do not create a constructor that does not yield a fully initialized object.
· Always check the result of an as operation.
· Are properties used instead of explicit getter and setter methods?
· Are readonly fields used in preference to properties without setters?
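One of the guidelines above — making only the most complete overload virtual — can be sketched like this (Logger and LogLevel are hypothetical names, not from the source):

```csharp
using System;

public enum LogLevel { Info, Warning, Error }

public class Logger
{
    // The simpler overload forwards to the most complete one,
    // so derived classes have a single point to override.
    public void Log(string message) => Log(message, LogLevel.Info);

    // Only the most complete overload is virtual.
    public virtual void Log(string message, LogLevel level)
    {
        Console.WriteLine($"[{level}] {message}");
    }
}
```

A subclass that overrides `Log(string, LogLevel)` automatically affects calls made through `Log(string)` as well.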
C# Basic: Checklist and Code Snippets
Knowing when to use StringBuilder
You may have heard that a StringBuilder object is much faster at appending strings than the normal string type. The catch is that StringBuilder pays off mostly with large or repeated concatenations: if a loop appends to a single string over many iterations, a StringBuilder is definitely much faster than a string.
However, if you only need to append to a string a single time, a StringBuilder is overkill. A plain string variable in this case uses fewer resources and keeps the C# source code more readable.
Simply by choosing correctly between StringBuilder objects and string types, you can optimize your code.
StringBuilder is used when the number of concatenations is unknown, such as in loops or multiple iterations. Concatenation with the + operator creates intermediate string allocations that are discarded after each iteration. The usage of StringBuilder is shown in the following code.
StringBuilder sb = new StringBuilder();
Array arrOfStrings = GetStrings();
for (int i = 0; i < 10; i++)
{
    sb.Append(arrOfStrings.GetValue(i));
}
Strings: Watch
out for code that calls the ToLower method. Converting strings to
lowercase and then comparing them involves temporary string allocations. This
can be very expensive, especially when comparing strings inside a loop. You
should give preference to using the String.Compare method. If you need to
perform case-insensitive string comparisons, you can use the overloaded String.Compare
method shown in the following code.
// The last argument signifies a case-sensitive or case-insensitive comparison
String.Compare(string strA, string strB, bool ignoreCase);
Comparing Non-Case-Sensitive Strings
In an application it is sometimes necessary to compare two string variables while ignoring case. The tempting, traditional approach is to convert both strings to all lower case or all upper case and then compare them, like so: str1.ToLower() == str2.ToLower(). However, repeatedly calling ToLower() is a performance bottleneck. By instead using the built-in string.Compare() function, you can increase the speed of your applications.
To check if two strings are equal ignoring case would look like this:
string.Compare(str1, str2, true) == 0 // Ignoring case
The C# string.Compare function returns an integer that is equal to 0 when the two strings are equal.
Use string.Empty
This is not so much a performance improvement as a readability improvement, but it still counts as code optimization. Try to replace lines like:
if (str == "")
with:
if (str == string.Empty)
This is simply better programming practice and has no negative impact on performance.
Note: there is a popular belief that checking a string's Length for 0 is faster than comparing it to an empty string. While that may once have been true, it is no longer a significant performance improvement. Instead, stick with string.Empty.
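A related option, not mentioned above but part of the standard library, is string.IsNullOrEmpty, which also guards against null:

```csharp
using System;

string str = "";

// string.IsNullOrEmpty covers both the null and the empty case,
// so it is often safer than comparing against string.Empty directly.
if (string.IsNullOrEmpty(str))
{
    Console.WriteLine("empty or null");
}
```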
Replace ArrayList with List<>
ArrayList is useful when storing multiple types of objects within the same list. However, if you are keeping the same type of variable in one ArrayList, you can gain a performance boost by using a List<> instead. Take the following ArrayList:
ArrayList intList = new ArrayList();
intList.Add(10);
return (int)intList[0] + 20;
Notice it only contains integers. Using the List<> class is a lot better. To convert it to a typed List, only the variable types need to be changed:
List<int> intList = new List<int>();
intList.Add(10);
return intList[0] + 20;
There is no need to cast types with List<>. The performance increase can be especially significant with primitive data types like integers.
Use && and || operators
When building if statements, make sure to use the double-and notation (&&) and the double-or notation (||); in Visual Basic these are AndAlso and OrElse. If statements that use & and | must evaluate every part of the statement and then apply the "and" or "or". In contrast, && and || go through the conditions one at a time and stop as soon as the result is known.
Executing less code is always a performance benefit, but short-circuiting can also avoid run-time errors. Consider the following C# code:
if (object1 != null && object1.runMethod())
If object1 is null, with the && operator, object1.runMethod() will not execute. If the && operator is replaced with &, object1.runMethod() will run even when object1 is already known to be null, causing an exception.
Smart Try-Catch
Try-catch statements are meant to catch exceptions that are beyond the programmer's control, such as connecting to the web or to a device. Using a try statement to keep code "simple", instead of using if statements to avoid error-prone calls, makes code considerably slower. Restructure your source code to require fewer try statements.
Replace Divisions
C# is relatively slow when it comes to division operations. One alternative is to replace divisions by a constant power of two with a shift operation.
Static Methods
In the C# language, non-inlined instance methods are always slower than non-inlined static methods. The reason is that to call an instance method, the instance reference must be resolved to determine which method to call. Static methods do not use an instance reference. If you look at the intermediate language, you will see that static methods can be invoked with fewer instructions (compare the callvirt and call instructions).
Avoid Parameters
When you call any method that was not inlined, the runtime physically copies the variables you pass as arguments into the formal parameter slots of the called method. This causes stack memory operations and incurs a performance hit. It is faster to minimize arguments, and even to use constants in the called methods instead of passing them as arguments.
Avoid Local Variables
When you call a method in your C# program, the runtime allocates a separate memory region to store all the local variable slots. This memory is allocated on the stack even if you do not access the variables in the function call. Therefore, you can call methods faster if they have fewer variables in them. One way to do this is to isolate rarely used parts of methods in separate methods. This makes the fast path in the called method more efficient, which can yield a significant performance gain.
Arithmetic Expression Optimization
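The division note above can be sketched as follows (valid only for non-negative values and power-of-two divisors; signed negative values round differently):

```csharp
using System;

int x = 100;

int quotient = x / 8;    // ordinary division
int shifted  = x >> 3;   // same result for non-negative x, since 8 == 2^3

Console.WriteLine(quotient == shifted);
```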
Constants
In the .NET Framework, constants are not assigned a memory region; they are instead considered values. Therefore, you can never assign to a constant, but loading a constant into memory is more efficient because it can be injected directly into the instruction stream. This eliminates memory accesses, improving locality of reference.
Static Fields
Static fields are faster than instance fields, for the same reason that static methods are faster than instance methods. When you load a static field into memory, the runtime does not need to resolve the instance expression. Loading an instance field requires the object instance to be resolved first. Even within an object instance, loading a static field is faster because no instance expression instruction is used.
Inline Methods
Unlike the C++ language, the C# language does not allow you to suggest that a method be inlined at its call sites. The .NET Framework is often conservative here and will not inline medium-sized or large methods. However, you can manually paste a method body into its call site. Typically this improves performance in micro-benchmarks and is easy to do, but it makes code harder to modify; it is only suggested for a few critical spots in programs.
Switch
The switch statement compiles differently than if-statements typically do. For example, if you switch on an int, you will often get jump statements, which are similar to a computed goto mechanism. Jump tables make switches much faster than some if-statement chains. Also, using a char switch on a string is fast.
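A sketch of a switch with dense int cases — the shape that typically compiles to a jump table:

```csharp
// Dense, consecutive int cases like these typically compile to a
// jump table rather than a chain of comparisons.
static string DayName(int day)
{
    switch (day)
    {
        case 0: return "Sunday";
        case 1: return "Monday";
        case 2: return "Tuesday";
        case 3: return "Wednesday";
        case 4: return "Thursday";
        case 5: return "Friday";
        case 6: return "Saturday";
        default: return "Invalid";
    }
}
```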
Flatten Arrays and Jagged Arrays
While flattened arrays are typically most efficient, they are sometimes impractical. In these cases, you can use jagged arrays to improve lookup performance. The .NET Framework enables faster access to jagged arrays than to 2D arrays. Please note: jagged arrays may cause slower garbage collections, because each jagged array element is treated separately by the garbage collector.
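A sketch contrasting a rectangular 2D array with a jagged array:

```csharp
// Rectangular 2D array: one block of memory, multidimensional access.
int[,] grid = new int[3, 4];
grid[1, 2] = 7;

// Jagged array: an array of arrays. Each row is a separate object,
// and element access compiles to plain single-dimensional array loads,
// which the runtime handles faster than 2D array access.
int[][] jagged = new int[3][];
for (int row = 0; row < jagged.Length; row++)
{
    jagged[row] = new int[4];
}
jagged[1][2] = 7;
```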
Code Snippets
1. Boxing and unboxing

static private void TestBoxingAndUnboxing()
{
    int i = 123;
    object o = i;      // Implicit boxing
    i = 456;           // Change the contents of i
    int j = (int)o;    // Unboxing (may throw an exception)
}

// This function is about 2*Log(N) faster
static private void TestNoBoxingAndUnboxing()
{
    int i = 123;
    i = 456;           // Change the contents of i
    int j = i;         // Compatible types
}
2. Collections
// Using the actual data size rather than the default size
// can increase performance by up to 50%
Hashtable ht = new Hashtable(count);   // Creates a Hashtable sized to count
LoadData(ht, count);

static private void LoadData(Hashtable ht, int Count)
{
    // Fill the employee collection with data
    // (Employee is a demo class defined elsewhere)
    for (int i = 0; i < Count; i++)
    {
        Employee employee = new Employee(i, "Employee" + i.ToString());
        ht.Add(i, employee);
    }
}
3. Exceptions
// Throwing and catching an exception is on the order of 1000x slower
// than checking for an error code
static void FunctionThatThrows()
{
    throw new Exception();
}

static int FunctionThatReturnsErrorCode()
{
    return -1;
}
6. LINQ – 'Where' with 'First' instead of FirstOrDefault
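The snippet body for this item is missing from the source table; a sketch of the point, with illustrative sample data:

```csharp
using System;
using System.Linq;

int[] numbers = { 1, 3, 5 };

// Anti-pattern: First() throws InvalidOperationException when nothing matches.
// int bad = numbers.Where(n => n % 2 == 0).First();

// Better: FirstOrDefault() returns default(int) (0 here) when nothing matches,
// and the predicate can be passed directly, without a separate Where().
int firstEven = numbers.FirstOrDefault(n => n % 2 == 0);
Console.WriteLine(firstEven);   // 0 — no even number found
```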
7. Casting by means of '(T)' instead of 'as (T)' when possibly not castable
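The snippet body for this item is missing from the source table; a sketch of the point:

```csharp
using System;

object value = "hello";

// A '(T)' cast throws InvalidCastException when the object is not castable:
// int broken = (int)value;

// 'as' returns null instead of throwing. It works for reference types and
// nullable value types, so the result must be checked for null.
string s = value as string;
if (s != null)
{
    Console.WriteLine(s.Length);
}
```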
8. Incorrect exceptions re-throwing
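The snippet body for this item is missing from the source table. The classic form of the pitfall is re-throwing with 'throw ex;' instead of a bare 'throw;'. A sketch (DoWork is a hypothetical method):

```csharp
using System;

static void DoWork() => throw new InvalidOperationException("boom");

try
{
    try
    {
        DoWork();
    }
    catch (Exception)
    {
        // Incorrect: 'throw ex;' would reset the stack trace to this line,
        // hiding where the exception really originated.
        // Correct: a bare 'throw;' re-throws the same exception object
        // and preserves the original stack trace.
        throw;
    }
}
catch (Exception ex)
{
    // The trace still points into DoWork because 'throw;' preserved it.
    Console.WriteLine(ex.StackTrace);
}
```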
9. The 'using' statement
// The code below:
using(SomeDisposableClass someDisposableObject = new SomeDisposableClass())
{
someDisposableObject.DoTheJob();
}
//does the same as:
SomeDisposableClass someDisposableObject = new SomeDisposableClass();
try
{
someDisposableObject.DoTheJob();
}
finally
{
someDisposableObject.Dispose();
}
C# Advanced: Checklist and code snippets
Threading
Do You Create Additional Threads?
You should generally avoid creating threads, particularly in
server-side code — use the CLR thread pool instead. In addition to the cost of
creating the underlying operating system thread, frequently creating new
threads can also lead to excessive context switching, memory allocation, and
additional cleanup when the thread dies. Recycling threads within the thread
pool generally leads to superior results.
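A minimal sketch of queuing work to the CLR thread pool instead of creating a thread:

```csharp
using System;
using System.Threading;

bool ran = false;

using (var done = new ManualResetEvent(false))
{
    // Reuses a pooled thread instead of paying the thread-creation cost.
    ThreadPool.QueueUserWorkItem(_ =>
    {
        ran = true;
        done.Set();
    });

    done.WaitOne();   // Block until the pooled work item signals completion
}

Console.WriteLine($"pool work ran: {ran}");
```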
Do You Call Thread.Suspend or Thread.Resume?
Calling Thread.Suspend and Thread.Resume to synchronize the activities of multiple threads can cause deadlocks. Generally, Suspend and Resume should be used only in the context of debugging or profiling, and not at all in typical applications. If you need to synchronize threads, use synchronization objects such as ManualResetEvent instead.
Do You Use Volatile Fields?
Limit the use of the volatile
keyword because volatile fields restrict the way the compiler reads and writes
the contents of the fields. Volatile fields are not meant for ensuring thread
safety.
Do You Execute Periodic Tasks?
If you require a single thread for periodic tasks, it is
cheaper to have just one thread explicitly executing the periodic tasks and
then sleeping until it needs to perform the task again. However, if you require
multiple threads to execute periodic tasks for each new request, you should use
the thread pool.
Use the Threading.Timer class to periodically schedule tasks. The Timer class uses the CLR thread pool to execute the code.
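The Threading.Timer usage mentioned above might look like this (the intervals are arbitrary):

```csharp
using System;
using System.Threading;

int ticks = 0;

// Fire the callback after 100 ms, then every 200 ms,
// on a CLR thread-pool thread.
using (var timer = new Timer(
    _ => Interlocked.Increment(ref ticks),
    null,
    dueTime: 100,
    period: 200))
{
    Thread.Sleep(700);   // Let the timer fire a few times
}

Console.WriteLine($"timer fired {ticks} times");
```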
Serialization
Do You Serialize Too Much Data?
Review which data members from an object your code
serializes. Identify items that do not need to be serialized, such as items
that can be easily recalculated when the object is deserialized. For example,
there is no need to serialize an Age property
in addition to a DateOfBirth
property because the Age can
easily be recalculated without requiring significant processor power. Such
members can be marked with the NonSerialized attribute if you use the SoapFormatter or the BinaryFormatter, or with the XmlIgnore attribute if you use the XmlSerializer class, which Web services use.
Also identify opportunities to use structures within your classes to encapsulate the data that needs to be serialized. Collecting the logical data in a data structure can help reduce round trips and lessen the serialization impact.
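The attribute combination described above might look like this (the Person class and its members are illustrative, following the Age/DateOfBirth example):

```csharp
using System;
using System.Xml.Serialization;

[Serializable]
public class Person
{
    public DateTime DateOfBirth;

    // Age can be recalculated from DateOfBirth, so exclude it:
    // [NonSerialized] affects the runtime serializers
    // (BinaryFormatter/SoapFormatter); [XmlIgnore] affects
    // XmlSerializer, which Web services use.
    [NonSerialized]
    [XmlIgnore]
    public int Age;
}
```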
Do You Serialize DataSet Objects?
The DataSet
object generates a large amount of serialization data and is expensive to
serialize and deserialize. If your code serializes DataSet objects, make sure to
conduct performance testing to analyze whether it is creating a bottleneck in
your application. If it is, consider alternatives such as using custom classes.
Do You Implement ISerializable?
If your classes implement ISerializable
to control the serialization process, be aware that you are responsible for
maintaining your own serialization code. If you implement ISerializable simply to
restrict specific fields from being serialized, consider using the Serializable and NonSerialized attributes
instead. By using these attributes, you will automatically gain the benefit of
any serialization improvements in future versions of the .NET Framework.
Garbage collection
The resources should be released as early as
possible. This is especially true in cases of shared resources, such as
database connections or file handles. As much as possible, the code should be
using the Dispose (or Close) pattern on disposable resources. For example, if
the application block is accessing an unmanaged resource, such as a network
connection or a database connection across client calls, it provides clients a
suitable mechanism for deterministic cleanup by implementing the Dispose
pattern. The following code sample shows the Dispose pattern.
public sealed class MyClass : IDisposable
{
    // Tracks whether Dispose has been called
    private bool disposed = false;

    // Implement the IDisposable.Dispose() method
    public void Dispose()
    {
        // Check whether Dispose has already been called
        if (!disposed)
        {
            // Call the shared Dispose method that contains common cleanup
            // code; pass true to indicate that it is called from Dispose
            Dispose(true);

            // Prevent subsequent finalization of this object. Finalization
            // is not needed because managed and unmanaged resources have
            // been explicitly released
            GC.SuppressFinalize(this);
        }
    }

    // Implement a finalizer by using destructor-style syntax
    ~MyClass()
    {
        // Pass false to indicate that it is not called from Dispose
        Dispose(false);
    }

    // The shared Dispose method that contains the common cleanup
    // functionality. (The class is sealed, so this is private; in an
    // unsealed class it would be declared protected virtual.)
    private void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Dispose-time code: release managed resources here
            // ...
        }
        // Finalize-time code: release unmanaged resources here
        // ...
        disposed = true;
    }
    // ...
}
Caching
Use the following review questions to assess your code's use
of ASP.NET caching features:
- Do you have too many variations for output caching? Check your pages that use the output cache to ensure that the number of variations has a limit. Too many variations of an output cached page can cause an increase in memory usage. You can identify pages that use the output cache by searching for the string "OutputCache."
- Could you use output caching? When reviewing your pages, start by asking yourself if the whole page can be cached. If the whole page cannot be cached, can portions of it be cached? Consider using the output cache even if the data is not static. If your content does not need to be delivered in near real-time, consider output caching. Using the output cache to cache either the entire page or portions of the page can significantly improve performance.
- Is there static data that would be better stored in the cache? Identify application-side data that is static or infrequently updated. This type of data is a great candidate for storing in the cache.
- Do you check for nulls before accessing cache items? You can improve performance by checking for null before accessing the cached item as shown in the following code fragment.
object item = Cache["myitem"];
if (item == null)
{
    // Repopulate the cache
}
This helps avoid any exceptions which are caused by null
objects. To find where in your code you access the cache, you can search for
the string "Cache."
More Information
For more information about the questions and issues raised
in this section, see "Caching Guidelines" in Chapter 6, "Improving ASP.NET
Performance."
Use Session State
Use the following review questions to review your code's use
of session state:
- Do you disable session state when not required? Session state is on by default. If your application does not use session state, disable it in Web.config as follows.
<sessionState mode="Off" />
If parts of your application need session state, identify
pages that do not use it and disable it for those pages by using the following
page level attribute.
<%@ Page EnableSessionState="false" %>
Minimizing the use of session state increases the
performance of your application.
- Do you have pages that do not write to a session? Page requests using session state internally use a ReaderWriterLock to manage access to the session state. For pages that only read session data, consider setting EnableSessionState to ReadOnly.
<%@ Page EnableSessionState="ReadOnly" ... %>
This is particularly useful when you use HTML frames. The
default setting (due to ReaderWriterLock)
serializes the page execution. By setting it to ReadOnly,
you prevent blocking and allow more parallelism.
- Do you check for nulls before accessing items in session state? You can improve performance by checking for null before accessing the item, as shown in the following code.
object item = Session["myitem"];
if(item==null)
{
// do something else
}
A common pitfall when retrieving data from session state is
to not check to see if the data is null before
accessing it and then catching the resulting exception. You should avoid this
because exceptions are expensive. To find where your code accesses session
state, you can search for the string "Session."
- Do you store complex objects in session state? Avoid storing complex objects in session state, particularly if you use an out-of-process session state store. When using out-of-process session state, objects have to be serialized and deserialized for each request, which decreases performance.
- Do you store STA COM objects in session
state? Storing single-threaded apartment (STA) COM objects
in session state causes thread affinity because the sessions are bound to
the original thread on which the component is created. This severely
affects both performance and scalability.
Make sure that you use the following page level attribute on any page that stores STA COM objects in session state.
<%@ Page AspCompat="true" %>
This forces the page to run from the STA thread pool, avoiding
any costly apartment switch from the default multithreaded apartment (MTA)
thread pool for ASP.NET. Where possible, avoid the use of STA COM objects.
For more information, see Knowledge Base article 817005, "FIX: Severe Performance Issues When You Bind Session State to Threads in ASPCompat Model" at http://support.microsoft.com/default.aspx?scid=kb;en-us;817005.
More Information
For more information about the questions and issues raised
in this section, see "Session State" in Chapter 6, "Improving ASP.NET
Performance."
Application State
Use the following review questions to assess how efficiently
your code uses application state:
- Do you store STA COM components in application state? Avoid storing STA COM components in application state where possible. Doing so effectively bottlenecks your application to a single thread of execution when accessing the component. Where possible, avoid using STA COM objects.
- Do you use the application state dictionary? You should use application state dictionary for storing read-only values that can be set at application initialization time and do not change afterward. There are several issues to be aware of when using application state in your code, such as the following:
- Memory allocated to the storage of application variables is not released unless they are removed or replaced.
- Application state is not shared across a Web farm or a Web garden — variables stored in application state are global to the particular process in which the application is running. Each application process can have different values.
For a complete list of the pros and cons of using application
state, see "Am I Losing My Memory?" on MSDN at http://msdn.microsoft.com/en-us/library/aa302320.aspx.
Consider using the following alternatives to application state:
- Create static properties for the application rather than using the state dictionary. It is more efficient to look up a static property than to access the state dictionary. For example, consider the following code.
Application["name"] = "App Name";
It is more efficient to use the following code.
private static String _appName = "App Name";
public static String AppName
{
    get { return _appName; }
    set { _appName = value; }
}
- Use configuration files for storing application configuration information.
- Consider caching data that is volatile enough that it cannot be stored in application state, but needs updates periodically from a persistent medium, in the Cache object.
- Use the session store for user-specific information.
You can identify places where your code uses application
state by searching for the string "Application."
More Information
For more information about the questions and issues raised
in this section, see "Application State" in Chapter 6, "Improving ASP.NET
Performance."
Threading and Synchronization Features
The .NET Framework exposes various threading and
synchronization features, and the way your code uses multiple threads can have
a significant impact on application performance and scalability. Use the
following review questions to assess how efficiently your ASP.NET code uses
threading:
- Do you create threads on a per-request basis? Avoid manually creating threads in ASP.NET applications. Creating threads is an expensive operation that requires initialization of both managed and unmanaged resources. If you do need additional threads to perform work, use the CLR thread pool. To find places in your code where you are creating threads, search for the string "ThreadStart."
- Do you perform long-running blocking
operations? Avoid blocking operations in your ASP.NET
applications where possible. If you have to execute a long-running task,
consider using asynchronous execution (if you can free the calling thread)
or use the asynchronous "fire and forget" model.
For more information, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide.
More Information
For more information about the questions and issues raised
in this section, see "Threading Guidelines" in Chapter 6, "Improving ASP.NET
Performance."
Manage Resources
Use the following review questions to assess how efficiently
your code uses resources:
- Do you explicitly close resources properly? Ensure that your code explicitly closes objects that implement IDisposable by calling the object's Dispose or Close method. Failure to close resources properly and speedily can lead to increased memory consumption and poor performance. Failing to close database connections is a common problem. Use a finally block (or a using block in C#) to release these resources and to ensure that the resource is closed even if an exception occurs.
- Do you pool shared resources? Check that you use pooling to increase performance when accessing shared resources. Ensure that shared resources, such as database connections and serviced components, that can be pooled are being pooled. Without pooling, your code incurs the overhead of initialization each time the shared resource is used.
- Do you obtain your resources late and release them early? Open shared resources just before you need them and release them as soon as you are finished. Holding onto resources for longer than you need them increases memory pressure and increases contention for these resources if they are shared.
- Do you transfer data in chunks over I/O calls? If you do need to transfer data over I/O calls in chunks, allocate and pin buffers for sending and receiving the chunks. If you need to make concurrent I/O calls, you should create a pool of pinned buffers that is recycled among various clients rather than creating a buffer on a per-request basis. This helps you avoid heap fragmentation and reduce buffer creation time.
View State
Use the following review questions to assess how efficiently
your applications use view state:
- Do you disable view state when it is not required? Evaluate each page to determine if you need view state enabled. View state adds overhead to each request. The overhead includes increased page sizes sent to the client as well as a serialization and deserialization cost. You do not need view state under the following conditions:
- The page does not post back to itself; the page is only used for output and does not rely on response processing.
- Your page's server controls do not handle events and you have no dynamic or data-bound property values (or they are set in code on every request).
- If you are ignoring old data and repopulating the server control every time the page is refreshed.
- Have you taken steps to reduce the size of your view state? Evaluate your use of view state for each page. To determine a page's view state size, you can enable tracing and see how much view state each control uses. Disable view state on a control-by-control basis.
More Information
For more information about the questions and issues raised
in this section, see "View State" in Chapter 6, "Improving ASP.NET
Performance."
Server Controls?
Use the following review questions to review how efficiently
your ASP.NET applications use server controls:
- Do you use server controls when you do not need to? Evaluate your use of server controls to determine if you can replace them with lightweight HTML controls or possibly static text. You might be able to replace a server control under the following conditions:
- The data being displayed in the control is static, for example, a label.
- You do not need programmatic access to the control on the server side.
- The control is displaying read-only data.
- The control is not needed during post back processing.
- Do you have deep hierarchies of server controls? Deeply nested hierarchies of server controls compound the cost of building the control tree. Consider rendering the content yourself by using Response.Write or building a custom control which does the rendering. To determine the number of controls and to see the control hierarchy, enable tracing for the page.
More Information
For more information about the questions and issues raised
in this section, see "Server Controls" in Chapter 6, "Improving ASP.NET
Performance."
DataReaders?
If you use DataReaders,
review the following questions:
- Do you close your DataReaders? Scan your code to ensure you are closing your DataReaders as soon as you are finished with them. You should call Close or Dispose in a finally block. If you pass a DataReader back from a method, use CommandBehavior.CloseConnection to ensure the connection gets closed when the reader is closed.
- Do you use an index to read from a DataReader?
All output from a DataReader should be read using an index (for example, rdr.GetString(0)), which is faster; but for readability and maintainability, you might prefer to use the string names of the columns. If you access the same columns multiple times (for example, when you retrieve a number of rows), use local variables that store the index numbers of the columns. You can use rdr.GetOrdinal() to retrieve the ordinal position of a column.
For more information, see "Use GetOrdinal when Using an Index-Based Lookup" in Chapter 12, "Improving ADO.NET Performance."
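A sketch of the GetOrdinal pattern above. Any DbDataReader works the same way; here a DataTableReader over illustrative data stands in for a SqlDataReader so the sketch is runnable without a database:

```csharp
using System;
using System.Data;

var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "Ann");
table.Rows.Add(2, "Bob");

using (var rdr = table.CreateDataReader())
{
    // Resolve the ordinals once, outside the loop, not on every row.
    int idOrdinal = rdr.GetOrdinal("Id");
    int nameOrdinal = rdr.GetOrdinal("Name");

    while (rdr.Read())
    {
        int id = rdr.GetInt32(idOrdinal);
        string name = rdr.GetString(nameOrdinal);
        Console.WriteLine($"{id}: {name}");
    }
}
```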
DataSets?
Use the following review questions to review your code's use
of DataSets:
- Do you serialize DataSets?
Inefficient serializing of DataSets
is a major performance issue for remote calls. You should avoid sending DataSets (especially when
using .NET remoting) and consider alternative means of sending data over
the wire, such as arrays or simple collections, where possible.
If you serialize DataSets, make sure you adhere to the following guidelines:
- Only return relevant data in the DataSet.
- Consider using alias column names to shorten the actual column names. This helps reduce the size of the DataSet.
- Avoid multiple versions of the data. Call AcceptChanges before serializing a DataSet.
- When serializing a DataSet over a Remoting channel, use the DataSetSurrogate class.
For more information, see "How To: Improve Serialization
Performance" in the "How To" section of this guide and Knowledge
Base article 829740, "Improving DataSet Serialization and Remoting
Performance," at http://support.microsoft.com/default.aspx?scid=kb;en-us;829740.
- Do you search data which has a primary key column? If you need to search a DataSet using a primary key, create the primary key on the DataTable. This creates an index that the Rows.Find method can use to quickly find the required records. Avoid using DataTable.Select, which does not use indices.
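A minimal sketch of the primary-key lookup described above; the DataSet, table, and column names are hypothetical:

```csharp
// Sketch: define a primary key so Rows.Find can use an index.
DataTable products = dataSet.Tables["Products"];
products.PrimaryKey = new[] { products.Columns["ProductID"] };

// Uses the index built for the primary key; avoids DataTable.Select.
DataRow row = products.Rows.Find(42);
```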
- Do you search data which does not have a primary key? If you need to repetitively search by nonprimary key data, create a DataView with a sort order. This creates an index that can be used to improve search efficiency. This is best suited to repetitive searches as there is some cost to creating the index.
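The sorted-DataView approach above might look like this; the table and column names are hypothetical:

```csharp
// Sketch: a DataView with a sort order builds an index that
// repeated searches on a non-primary-key column can reuse.
DataView view = new DataView(
    dataSet.Tables["Products"], null, "CategoryName",
    DataViewRowState.CurrentRows);

// FindRows uses the index created for the sort column.
DataRowView[] matches = view.FindRows("Beverages");
```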
- Do you use DataSets for XML data? If you do not pass the schema for the XML data, the DataSet tries to infer the schema at run time. Pass XmlReadMode.IgnoreSchema to the ReadXml method to ensure that schema is not inferred.
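Passing the read mode explicitly, as recommended above, is a one-line change; the file name is hypothetical:

```csharp
// Sketch: pass an explicit XmlReadMode so the DataSet does not
// try to infer a schema at run time.
DataSet ds = new DataSet();
ds.ReadXml("orders.xml", XmlReadMode.IgnoreSchema);
```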
More Information
For more information about the questions and issues raised
in this section, see "Connections" in Chapter 12, "Improving ADO.NET
Performance."
Page and Control Callbacks
Recommendation: Stop using page and control callbacks, and instead use any of the following: AJAX, UpdatePanel, MVC action methods, Web API, or SignalR.

In earlier versions of ASP.NET, Page and Control callback methods enabled you to update part of the web page without refreshing an entire page. You can now accomplish partial-page updates through AJAX, UpdatePanel, MVC, Web API or SignalR. You should stop using callback methods because they can cause issues with friendly URLs and routing. By default, controls do not enable callback methods, but if you enabled this feature in a control, you should disable it.
Browser Capability Detection
Recommendation: Stop using static browser capability detection, and instead use dynamic feature detection.

In earlier versions of ASP.NET, the supported features for each browser were stored in an XML file. Detecting feature support through a static lookup is not the best approach. Now, you can dynamically detect a browser’s supported features by using a feature detection framework, such as Modernizr. Feature detection determines support by attempting to use a method or property and then checking to see if the browser produced the desired result. By default, Modernizr is included in the Web application templates.
Security
Request Validation
Recommendation: Validate user input, and encode output from users.

Request validation is a feature of ASP.NET that inspects each request and stops the request if a perceived threat is found. Do not depend on request validation for securing your application against cross-site scripting attacks. Instead, validate all input from users and encode the output. In some limited cases, you can use regular expressions to validate the input, but in more complicated cases you should validate user input by using .NET classes that determine if the value matches allowed values.
The following example shows how to use a static method in the Uri class to determine whether the Uri provided by a user is valid.
var isValidUri = Uri.IsWellFormedUriString(passedUri, UriKind.Absolute);

However, to sufficiently verify the Uri, you should also check that it specifies http or https. The following example uses instance methods to verify that the Uri is valid.

var uriToVerify = new Uri(passedUri);
var isValidUri = uriToVerify.IsWellFormedOriginalString();
var isValidScheme = uriToVerify.Scheme == "http" || uriToVerify.Scheme == "https";

Before rendering user input as HTML or including user input in a SQL query, encode the values to ensure malicious code is not included.

You can HTML encode the value in markup with the <%: %> syntax, as shown below.

<span><%: userInput %></span>

Or, in Razor syntax, you can HTML encode with @, as shown below.

<span>@userInput</span>

The next example shows how to HTML encode a value in code-behind.

var encodedInput = Server.HtmlEncode(userInput);

To safely encode a value for SQL commands, use command parameters such as the SqlParameter.
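A minimal sketch of parameterizing a SQL command instead of concatenating user input; the connection string, table, and column names are hypothetical:

```csharp
// using System.Data.SqlClient;
// Sketch: the parameter value is sent separately from the SQL text,
// so user input cannot change the structure of the query.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT Name, Price FROM tblProduct WHERE Name = @name", conn))
{
    cmd.Parameters.Add(new SqlParameter("@name", userInput));
    conn.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        // ... read results ...
    }
}
```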
Cookieless Forms Authentication and Session
Recommendation: Require cookies.

Passing authentication information in the query string is not secure. Therefore, require cookies when your application includes authentication. If your cookie stores sensitive information, consider requiring SSL for the cookie.
The following example shows how to specify in the Web.config file that Forms Authentication requires a cookie that is transmitted over SSL.
<authentication mode="Forms">
<forms loginUrl="member_login.aspx"
cookieless="UseCookies"
requireSSL="true"
path="/MyApplication" />
</authentication>
EnableViewStateMac
Recommendation: Never set to false.

By default, EnableViewStateMac is set to true. Even if your application is not using view state, do not set EnableViewStateMac to false. Setting this value to false makes your application vulnerable to cross-site scripting.
The following example shows how to set EnableViewStateMac to true. You do not need to actually set this value to true because it is true by default. However, if you have set it to false on any page in your application, you must immediately correct this value.
<%@ Page language="C#" EnableViewStateMac="true" %>
Medium Trust
Recommendation: Do not depend on Medium Trust (or any other trust level) as a security boundary.

Partial trust does not adequately protect your application and should not be used. Instead, use Full Trust, and isolate untrusted applications in separate application pools. Also, run each application pool under a unique identity. For more information, see ASP.NET Partial Trust does not guarantee application isolation.
<appSettings>
Recommendation: Do not disable security settings in the <appSettings> element.

The appSettings element contains many values that are required for security updates. You should not change or disable these values. If you must disable them when deploying an update, re-enable them immediately after completing the deployment.
For details, see ASP.NET appSettings Element.
UrlPathEncode
Recommendation: Use UrlEncode instead.

The UrlPathEncode method was added to the .NET Framework to resolve a very specific browser compatibility problem. It does not adequately encode a URL, and does not protect your application from cross-site scripting. You should never use it in your application. Instead, use UrlEncode.
The following example shows how to pass an encoded URL as a query string parameter for a hyperlink control.
string destinationURL = "http://www.contoso.com/default.aspx?user=test";
NextPage.NavigateUrl = "~/Finish?url=" + Server.UrlEncode(destinationURL);
Reliability and Performance
PreSendRequestHeaders and PreSendRequestContext
Recommendation: Do not use these events with managed modules. Instead, write a native IIS module to perform the required task. See Creating Native-Code HTTP Modules.

You can use the PreSendRequestHeaders and PreSendRequestContext events with native IIS modules, but do not use them with managed modules that implement IHttpModule. Using these events in managed modules can cause issues with asynchronous requests.
Asynchronous Page Events with Web Forms
Recommendation: In Web Forms, avoid writing async void methods for Page lifecycle events, and instead use Page.RegisterAsyncTask for asynchronous code.

When you mark a page event with async and void, you cannot determine when the asynchronous code has finished. Instead, use Page.RegisterAsyncTask to run the asynchronous code in a way that enables you to track its completion.
The following example shows a button click handler that contains asynchronous code. This example includes reading a string value asynchronously, which is provided only as a simplified example of an asynchronous task and not as a recommended practice.
protected void StartAsync_Click(object sender, EventArgs e)
{
Page.RegisterAsyncTask(new PageAsyncTask(async() =>
{
string stringToRead = "Long text value";
using (StringReader reader = new StringReader(stringToRead))
{
string readText = await reader.ReadToEndAsync();
Result.Text = readText;
}
}));
}
If you are using asynchronous Tasks, set the HTTP runtime target framework to 4.5 in the Web.config file. Setting the target framework to 4.5 turns on the new synchronization context that was added in .NET 4.5. This value is set by default in new projects in Visual Studio 2012, but is not set if you are working with an existing project.

<system.web>
  <httpRuntime targetFramework="4.5" />
</system.web>
Fire-and-Forget Work
Recommendation: When handling a request within ASP.NET, avoid launching fire-and-forget work (such as calling the ThreadPool.QueueUserWorkItem method or creating a timer that repeatedly calls a delegate).

If your application has fire-and-forget work that runs within ASP.NET, your application can get out of sync. The app domain can be destroyed at any time, which means your ongoing process may no longer match the current state of the application.
You should move this type of work outside of ASP.NET. You can use a Windows Service or a Worker role in Windows Azure to perform ongoing work, and run that code from another process.
If you must perform this work within ASP.NET, you can add the NuGet package called WebBackgrounder to run the code.
Request Entity Body
Recommendation: Avoid reading Request.Form or Request.InputStream before the handler's execute event.

The earliest you should read from Request.Form or Request.InputStream is during the handler's execute event. In MVC, the Controller is the handler and the execute event is when the action method runs. In Web Forms, the Page is the handler and the execute event is when the Page.Init event fires. If you read the request entity body earlier than the execute event, you interfere with the processing of the request.
If you need to read the request entity body before the execute event, use either Request.GetBufferlessInputStream or Request.GetBufferedInputStream. When you use GetBufferlessInputStream, you get the raw stream from the request, and assume responsibility for processing the entire request. After calling GetBufferlessInputStream, Request.Form and Request.InputStream are not available because they have not been populated by ASP.NET. When you use GetBufferedInputStream, you get a copy of the stream from the request. Request.Form and Request.InputStream are still available later in the request because ASP.NET populates the other copy.
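The buffered variant described above might be used like this; this sketch assumes it runs in an IHttpModule's BeginRequest handler, and the logging helper is hypothetical:

```csharp
// using System.IO; using System.Web;
void OnBeginRequest(object sender, EventArgs e)
{
    var app = (HttpApplication)sender;

    // GetBufferedInputStream hands back a copy of the request stream,
    // so reading it here does not consume the body.
    Stream buffered = app.Request.GetBufferedInputStream();
    using (var reader = new StreamReader(buffered))
    {
        LogRequestBody(reader.ReadToEnd()); // hypothetical helper
    }
    // ASP.NET later populates Request.Form and Request.InputStream
    // from its own copy of the buffered stream.
}
```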
Response.Redirect and Response.End
Recommendation: Be aware of differences in how the thread is handled after calling Response.Redirect(String).

The Response.Redirect(String) method calls the Response.End method. In a synchronous process, calling Response.Redirect causes the current thread to immediately abort. However, in an asynchronous process, calling Response.Redirect does not abort the current thread, so code execution continues for the request. In an asynchronous process, you must return the Task from the method to stop the code execution.
In an MVC project, you should not call Response.Redirect. Instead, return a RedirectResult.
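A minimal sketch of returning a RedirectResult from an action method; the controller and target URL are hypothetical:

```csharp
// using System.Web.Mvc;
public class AccountController : Controller
{
    public ActionResult Finish()
    {
        // Returning a result lets MVC complete the request normally,
        // instead of aborting the thread via Response.Redirect.
        return Redirect("~/Home/Index");
    }
}
```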
EnableViewState and ViewStateMode
Recommendation: Use ViewStateMode, instead of EnableViewState, to provide granular control over which controls use view state.

When you set EnableViewState to false in the Page directive, view state is disabled for all controls within the page and cannot be enabled. If you want to enable view state for only certain controls in your page, set ViewStateMode to Disabled for the Page.
<%@ Page ViewStateMode="Disabled" . . . %>

Then, set ViewStateMode to Enabled on only the controls that actually need view state.

<asp:GridView ViewStateMode="Enabled" runat="server">

By enabling view state for only the controls that need it, you can shrink the size of the view state for your web pages.
SqlMembershipProvider
Recommendation: Use Universal Providers.

In the current project templates, SqlMembershipProvider has been replaced by ASP.NET Universal Providers, which is available as a NuGet package. If you are using SqlMembershipProvider in a project that was built with an earlier version of the templates, you should switch to Universal Providers. The Universal Providers work with all databases that are supported by Entity Framework.
For more information, see Introducing ASP.NET Universal Providers.
Long-running Requests (>110 seconds)
Recommendation: Use WebSockets or SignalR for connected clients, and use asynchronous I/O operations.

Long-running requests can cause unpredictable results and poor performance in your web application. The default timeout setting for a request is 110 seconds. If you are using session state with a long-running request, ASP.NET will release the lock on the Session object after 110 seconds. However, your application might be in the middle of an operation on the Session object when the lock is released, and the operation might not complete successfully. If a second request from the user is blocked while the first request is running, the second request might access the Session object in an inconsistent state.
If your application includes blocking (or synchronous) I/O operations, the application will be unresponsive.
To improve performance, use the asynchronous I/O operations in the .NET Framework. Also, use WebSockets or SignalR for connecting clients to the server. These features are designed to efficiently handle long-running requests.
ASP.NET Performance:
SQL SERVER
Choose Appropriate Data Type
Choose the appropriate SQL data type to store your data, since this helps improve query performance. For example, to store strings use varchar instead of the text data type, since varchar performs better than text. Use the text data type only when you need to store large text values (more than 8,000 characters); values of up to 8,000 characters can be stored in varchar.
Avoid nchar and nvarchar
Avoid the nchar and nvarchar data types where possible, since both take twice as much memory as char and varchar. Use nchar and nvarchar only when you need to store Unicode (16-bit character) data, such as Hindi or Chinese characters.
Avoid NULL in fixed-length field
Avoid inserting NULL values into fixed-length (char) fields, since a NULL takes the same space as a regular value in such a field. If NULLs are required, use a variable-length (varchar) field, which takes less space for NULL.
Avoid * in SELECT statement
Avoid * in SELECT statements, since SQL Server converts the * to column names before query execution. More importantly, instead of querying all columns with *, name only the columns you actually need.
-- Avoid
SELECT * FROM tblName
-- Best practice
SELECT col1, col2, col3 FROM tblName
Use EXISTS instead of IN
Use EXISTS to check for existence instead of IN, since EXISTS is usually faster than IN.
-- Avoid
SELECT Name, Price FROM tblProduct
WHERE ProductID IN (SELECT DISTINCT ProductID FROM tblOrder)
-- Best practice
SELECT Name, Price FROM tblProduct p
WHERE EXISTS (SELECT 1 FROM tblOrder o WHERE o.ProductID = p.ProductID)
Avoid Having Clause
Avoid the HAVING clause where possible, since it acts as a filter over already-selected rows. HAVING is required only when you need to filter the result of an aggregation; don't use it for any other purpose (use WHERE instead).
Create Clustered and Non-Clustered Indexes
Create clustered and nonclustered indexes, since indexes help you access data quickly. But be careful: more indexes on a table slow down INSERT, UPDATE, and DELETE operations, so try to keep the number of indexes on a table small.
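As a sketch, a small, targeted set of indexes might look like this; the table, column, and index names are hypothetical:

```sql
-- One clustered index (the table's physical order)...
CREATE CLUSTERED INDEX IX_tblProduct_ProductID ON tblProduct (ProductID);
-- ...and a nonclustered index only for a frequently searched column.
CREATE NONCLUSTERED INDEX IX_tblProduct_Name ON tblProduct (Name);
```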
Keep clustered index small
Keep the clustered index as small as possible, since the fields in the clustered index are also stored in every nonclustered index, and the table's data is stored in clustered-index order. A large clustered index on a table with many rows therefore increases storage size significantly. Please refer to the article Effective Clustered Indexes.
Avoid Cursors
Avoid cursors, since cursors perform very slowly. Always try to use a set-based alternative to a SQL Server cursor. Please refer to the article Cursor Alternatives.
Use Table Variables in Place of Temp Tables
Use table variables in place of temp tables, since temp tables reside in the tempdb database; using them therefore requires interaction with tempdb, which takes extra time.
Use UNION ALL in Place of UNION
Use UNION ALL in place of UNION where duplicates are acceptable, since UNION ALL is faster: it doesn't sort the result set to remove duplicate values.
Use Schema name before SQL objects name
Qualify SQL object names with the schema name followed by ".", since this helps SQL Server find the object in a specific schema directly, which gives the best performance.
-- Here dbo is the schema name
SELECT col1, col2 FROM dbo.tblName
-- Avoid
SELECT col1, col2 FROM tblName
Keep Transaction small
Keep transactions as small as possible, since a transaction locks the data in the tables it processes for its lifetime. Long transactions can sometimes result in deadlocks. Please refer to the article SQL Server Transactions Management.
SET NOCOUNT ON
Set NOCOUNT ON, since by default SQL Server returns the number of rows affected by each SELECT, INSERT, UPDATE, and DELETE statement. You can stop this by setting NOCOUNT ON, like this:
CREATE PROCEDURE dbo.MyTestProc
AS
SET NOCOUNT ON
BEGIN
...
END
Use TRY-CATCH
Use TRY-CATCH to handle errors in T-SQL statements. An error in a running transaction can cause a deadlock if you have not handled the error with TRY-CATCH. Please refer to the article Exception Handling by TRY…CATCH.
Use Stored Procedure for frequently used data and more complex queries
Create stored procedures for queries that access data frequently, and also for more complex tasks.
Avoid prefix "sp_" with user defined stored procedure name
Avoid the prefix "sp_" in user-defined stored procedure names, since system-defined stored procedure names start with "sp_". SQL Server therefore first searches for such a procedure in the master database and only afterwards in the current database. This is time consuming and may give unexpected results if a system-defined stored procedure has the same name as your procedure.
LINQ Optimization
LINQ vs. stored procedures
10 Tips to Improve your LINQ to SQL Application Performance
Hey there, back again. In my first post about LINQ I tried to provide a brief (okay, somewhat detailed) introduction for those who want to get involved with LINQ to SQL. In that post I promised to write about a basic integration of WCF and LINQ to SQL working together, but this is not that post.
Since LINQ to SQL is a code generator and an ORM that offers a lot, it is natural to be suspicious about its performance. Such suspicions are fair up to a point, as LINQ comes with its own penalties. But there are several benchmarks showing that DLINQ delivers up to 93% of the ADO.NET SqlDataReader performance if optimizations are done correctly.
Hence I summed up 10 important points that need to be considered while tuning your LINQ to SQL data retrieval and data modification:
1 – Turn off ObjectTrackingEnabled Property of Data Context If Not Necessary
If you are only retrieving data as read-only, and not modifying anything, you don't need object tracking, so turn it off as in the example below:
using (NorthwindDataContext context = new NorthwindDataContext())
{
context.ObjectTrackingEnabled = false;
}
This will allow you to turn off the unnecessary identity management of the objects – the DataContext will not have to store them, because it can be sure that there will be no change statements to generate.
2 – Do NOT Dump All Your DB Objects into One Single DataContext
DataContext represents a single unit of work, not your whole database. If you have several database objects that are not connected, or that are not used at all (log tables, objects used by batch processes, etc.), they just unnecessarily consume memory and increase the identity management and object tracking costs in the CUD engine of the DataContext.
Instead, think of separating your workspace into several DataContexts, where each one represents a single unit of work. You can still configure them to use the same connection via their constructors so as not to lose the benefit of connection pooling.
3 – Use CompiledQuery Wherever Needed
When creating and executing your query, there are several steps involved in generating the appropriate SQL from the expression; to name the important ones:
Create expression tree
Convert it to SQL
Run the query
Retrieve the data
Convert it to the objects
As you may notice, when you are using the same query over and over, the first and second steps just waste time. This is where a tiny class in the System.Data.Linq namespace achieves a lot. With CompiledQuery, you compile your query once and store it somewhere for later use. This is achieved by the static CompiledQuery.Compile method.
Below is a Code Snippet for an example usage:
Func<NorthwindDataContext, IEnumerable<Category>> func =
CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
((NorthwindDataContext context) => context.Categories.
Where<Category>(cat => cat.Products.Count > 5));
And now, "func" is my compiled query. It will only be compiled once, when it is first run. We can now store it in a static utility class as follows:
/// <summary>
/// Utility class to store compiled queries
/// </summary>
public static class QueriesUtility
{
/// <summary>
/// Gets the query that returns categories with more than five products.
/// </summary>
/// <value>The query containing categories with more than five products.</value>
public static Func<NorthwindDataContext, IEnumerable<Category>>
GetCategoriesWithMoreThanFiveProducts
{
get
{
Func<NorthwindDataContext, IEnumerable<Category>> func =
CompiledQuery.Compile<NorthwindDataContext, IEnumerable<Category>>
((NorthwindDataContext context) => context.Categories.
Where<Category>(cat => cat.Products.Count > 5));
return func;
}
}
}
And we can use this compiled query (since it is now nothing but a strongly typed function) very easily, as follows:
using (NorthwindDataContext context = new NorthwindDataContext())
{
QueriesUtility.GetCategoriesWithMoreThanFiveProducts(context);
}
Storing and using it this way also reduces the cost of the virtual call that is made each time you access the collection – it is reduced to one call. If you don't use the query, don't worry about compilation either, since the query is compiled only when it is first executed.
4 – Filter Data Down to What You Need Using DataLoadOptions.AssociateWith
When we retrieve data with Load or LoadWith, we are assuming that we want to retrieve all the associated data that is bound to the primary key (and object id). But in most cases we need additional filtering. This is where the DataLoadOptions.AssociateWith generic method comes in very handy. This method takes the filter criteria as a parameter and applies it to the query, so you get only the data that you need.
The following code retrieves categories with only their non-discontinued products:
using (NorthwindDataContext context = new NorthwindDataContext())
{
DataLoadOptions options = new DataLoadOptions();
options.AssociateWith<Category>(cat=> cat.Products.Where<Product>(prod => !prod.Discontinued));
context.LoadOptions = options;
}
5 – Turn Optimistic Concurrency Off Unless You Need It
LINQ to SQL comes with out-of-the-box optimistic concurrency support based on SQL timestamp columns, which are mapped to the Binary type. You can turn this feature on and off both in the mapping file and via attributes on the properties. If your application can afford to run on a "last update wins" basis, then doing an extra update check is just a waste.
UpdateCheck.Never is used to turn optimistic concurrency off in LINQ to SQL.
Here is an example of turning optimistic concurrency off implemented as attribute level mapping:
[Column(Storage="_Description", DbType="NText",
UpdateCheck=UpdateCheck.Never)]
public string Description
{
get
{
return this._Description;
}
set
{
if ((this._Description != value))
{
this.OnDescriptionChanging(value);
this.SendPropertyChanging();
this._Description = value;
this.SendPropertyChanged("Description");
this.OnDescriptionChanged();
}
}
}
6 – Constantly Monitor Queries Generated by the DataContext and Analyze the Data You Retrieve
As your query is generated on the fly, there is a possibility that you may not be aware of additional columns or extra data being retrieved behind the scenes. Use the DataContext's Log property to see what SQL is being run by the DataContext. An example is as follows:
using (NorthwindDataContext context = new NorthwindDataContext())
{
context.Log = Console.Out;
}
Using this snippet while debugging, you can see the generated SQL statements in the Output window in Visual Studio and spot performance leaks by analyzing them. Don't forget to comment that line out for production systems, as it may create a bit of overhead. (Wouldn't it be great if this were configurable in the config file?)
To see your DLINQ expressions as SQL statements, you can use the SQL Query Visualizer, which needs to be installed separately from Visual Studio 2008.
7 – Avoid Unnecessary Attaches to Tables in the Context
Object Tracking is a great mechanism, but nothing comes for free. When you Attach an object to your context, you are saying that this object was disconnected for a while and now you want to get it back in the game. The DataContext then marks it as an object that will potentially change – and this is just fine when you really intend to do that.
But there are circumstances that aren't very obvious and may lead you to attach objects that haven't changed. One such case is calling AttachAll on a collection without checking whether each object has changed. For better performance, attach ONLY the objects in the collection that have changed.
I will provide a sample code for this soon.
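In the meantime, here is a minimal sketch of the idea; the Product entity, its IsDirty flag, and the detached collection are hypothetical:

```csharp
// Sketch: attach only the entities that actually changed, instead
// of calling AttachAll on the whole collection.
using (NorthwindDataContext context = new NorthwindDataContext())
{
    // IsDirty is a hypothetical change flag on the entity.
    var changedProducts = detachedProducts.Where(p => p.IsDirty);

    // asModified: true marks the attached objects as updates.
    context.Products.AttachAll(changedProducts, true);
    context.SubmitChanges();
}
```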
8 – Be Careful of Entity Identity Management Overhead
While working with a context that is not read-only, the objects are still being tracked – so be aware of the non-intuitive scenarios this can cause as you proceed. Consider the following DLINQ code:
using (NorthwindDataContext context = new NorthwindDataContext())
{
var a = from c in context.Categories
select c;
}
Very plain, basic DLINQ, isn't it? True; there doesn't seem to be anything bad in the above code. Now let's look at the code below:
using (NorthwindDataContext context = new NorthwindDataContext())
{
var a = from c in context.Categories
select new Category
{
CategoryID = c.CategoryID,
CategoryName = c.CategoryName,
Description = c.Description
};
}
The intuition is to expect that the second query will be slower than the first one, which is WRONG. It is actually much faster.
The reason is that in the first query, an object must be stored (tracked) for each row, since there is a possibility that you might still change it. In the second, you throw the tracked object away and create a new one, which is more efficient.
9 – Retrieve Only the Number of Records You Need
When you are binding to a data grid and doing paging, consider the easy-to-use methods that LINQ to SQL provides, mainly the Skip and Take methods. The following snippet shows a method that retrieves enough products for a ListView with paging enabled:
/// <summary>
/// Gets the products page by page.
/// </summary>
/// <param name="startingPageIndex">Index of the starting page.</param>
/// <param name="pageSize">Size of the page.</param>
/// <returns>The list of products in the specified page</returns>
private IList<Product> GetProducts(int startingPageIndex, int pageSize)
{
using (NorthwindDataContext context = new NorthwindDataContext())
{
return context.Products
.Skip<Product>(startingPageIndex * pageSize)
.Take<Product>(pageSize)
.ToList<Product>();
}
}
10 – Don’t Misuse CompiledQuery
I can hear you saying “What? Are you kiddin’ me? How can such a class like this be misused?”
Well, as with all optimization, LINQ to SQL is no exception:
"Premature optimization is the root of all evil" – Donald Knuth
If you are using CompiledQuery, make sure that you use it more than once, as it is more costly than normal querying the first time. But why?
That's because the result of a CompiledQuery is an object holding the SQL statement and a delegate to apply it. It is not compiled the way regular expressions are compiled. And the delegate has the ability to replace the variables (or parameters) in the resulting query.
CACHING
- Taking content uniqueness and access frequency into account when deciding whether to cache in a particular tier
- Using IIS and ASP.NET to enable or disable browser caching
- Using ViewState to cache information that’s specific to a particular page
- Understanding the importance of minimizing the size of ViewState: you should disable it by default on a per-page basis and enable it only when you need it
- Creating a custom template in Visual Studio and using it to help simplify the process of establishing consistent per-page defaults
- Storing ViewState on the server when needed
- Using cookies to cache state information on the client
- Setting cookies and their properties and reading the resulting name/value pairs from ASP.NET, JavaScript, and Silverlight
- Setting the path property on cookies to limit how often the browser sends them to the server, since cookies consume bandwidth and add latency
- Encoding binary data in cookies
- Using a compact privacy policy to help make sure that your user’s browser accepts your cookies
- Using web storage as an alternative to cookies for caching on the client
- Using isolated storage to cache data in the user’s filesystem
- Using ASP.NET and IIS to make your content cacheable by proxies by setting the Cache-Control: public HTTP header
- Enabling the high-performance kernel cache on your web servers to cache your dynamic content
- Finding that caching a page using http.sys satisfied 84 percent more requests per second
- Using IIS to cache dynamic content that varies based on query strings or HTTP headers
- Configuring ASP.NET output caching for pages and page fragments using the OutputCache directive
- Using substitution caching
- Removing items from the output cache with RemoveOutputCacheItem() and AddCacheItemDependency()
- Using page-level database dependencies with the SqlDependency parameter in the OutputCache directive
- Using SqlDependency with LINQ to SQL
- Varying the output cache based on objects such as cookies
- Using a cache validation callback to decide whether the cached version of your content is still valid
- Using a custom OutputCache provider
- Caching objects using HttpApplicationState, HttpContext.Items, HttpContext.Cache, and WeakReferences
- Using file and database dependencies with cached objects
- Using locking around references to static fields and cached objects that might be accessible to more than one thread
- Using SQL Server as an extended cache
- Potential pitfalls to watch for if you’re considering distributed caching
- Managing cache expiration times, with suggested defaults of 1 year for static content and between 1 and 30 days for dynamic content
References:
- ASP.NET
- ADO.NET
- LINQ optimization
- C# benchmarking
- Async best practices

Books:
- Ultra-Fast ASP.NET 4.5 by Rick Kiessig