
Wednesday, 7 June 2017

Advanced Use Cases for the Repository Pattern in .NET


In our previous article, Implementation Strategies for the Repository Pattern with Entity Framework, Dapper, and Chain, we looked at the basic patterns needed to implement a repository. In many cases these patterns were such a thin layer around the underlying data access technology they were essentially unnecessary. However, once you have a repository in place, many new opportunities become available.
When designing a repository, you should be thinking in terms of “what must happen”. For example, let us say you have a rule that whenever a record is updated, its “LastModifiedBy” column must be set to the current user. Rather than trying to remember to update the LastModifiedBy in application code before every save, you can bake that functionality right into the repository.

Note: where appropriate, this article will include code samples for Entity Framework, Dapper, and/or Tortuga Chain. However, you will find most repository features can be implemented in an ORM-agnostic fashion.

By treating your data access layer as a standalone library that manages all of the “must happen” details, you can dramatically reduce implementation errors. At the same time, you can simplify the code that is built on top of the repository, as it no longer needs to be concerned about bookkeeping tasks.

Audit Columns

Most applications eventually find the need to track who made changes to the database and when. For simple databases this takes the form of audit columns. The names vary, but they usually fall into these four roles:
  • Created by User Key
  • Created Date/Time
  • Last Modified by User Key
  • Last Modified Date/Time
Depending on the security requirements of the application, additional audit columns may be considered such as:
  • Deleted by User Key
  • Deleted Date/Time
  • [Created | Last Modified | Deleted] by Application Key
  • [Created | Last Modified | Deleted] by IP Address
The date columns are easy enough to handle behind the scenes, but for the user keys you need to do something a bit more interesting. What you need is a “context aware repository”.
Normally repositories are context free, meaning they have no information other than what’s absolutely necessary to connect to the database. When correctly designed, the repository can be entirely stateless, allowing you to share one instance across the whole application.
Context aware repositories are a bit more complex. They cannot be constructed until you know the context, which at the very least includes the currently active user’s id or key. For some applications, this is enough. For others, you may need to pass in an entire user object and/or an object representing the running application.
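In code, a context aware repository might start out like the following minimal sketch; the User class and member names here are hypothetical placeholders:
public class User
{
    public int UserKey { get; set; }
    public bool IsAdmin { get; set; }
}

public class EmployeeRepository
{
    private readonly string m_ConnectionString;
    private readonly User m_User;   // the context: who is using the repository right now

    // The repository cannot be constructed until the current user is known.
    public EmployeeRepository(string connectionString, User user)
    {
        m_ConnectionString = connectionString;
        m_User = user;
    }
}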

Chain

Chain has built-in support for this through a concept known as audit rules. Audit rules allow you to specify overrides based on a column name. Out of the box, it includes date-based rules and rules that copy a property from a user object into a column. Here is an example:
dataSource = dataSource.WithRules(
    new UserDataRule("CreatedByKey", "UserKey", OperationType.Insert),
    new UserDataRule("UpdatedByKey", "UserKey", OperationType.InsertOrUpdate),
    new DateTimeRule("CreatedDate", DateTimeKind.Local, OperationType.Insert),
    new DateTimeRule("UpdatedDate", DateTimeKind.Local, OperationType.InsertOrUpdate)
    );
As mentioned above, you are going to need a context aware repository for this to work. In the constructor below you can see how the context is passed to an immutable data source, creating a new data source with the necessary information.
public EmployeeRepository(DataSource dataSource, User user)
{
    m_DataSource = dataSource.WithUser(user);
}
With this setup in place, you can leverage your DI framework of choice to automatically create and populate the repository on a per-request basis.
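For example, with Microsoft.Extensions.DependencyInjection the per-request wiring might be sketched as follows. This is only a sketch: ICurrentUserAccessor is a hypothetical service that exposes the authenticated user, and the registration assumes the underlying data source is stateless.
// Inside the application's composition root.
services.AddSingleton(dataSource);   // one shared, stateless data source

// Each request gets its own context aware repository.
services.AddScoped(provider => new EmployeeRepository(
    provider.GetRequiredService<DataSource>(),
    provider.GetRequiredService<ICurrentUserAccessor>().CurrentUser));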

Entity Framework

To globally apply audit columns in Entity Framework, you need to leverage the ObjectStateManager and create a specialized interface. The interface, or base class if you prefer, will look something like this:
public interface IAuditableEntity 
{
    DateTime CreatedDate {get; set;}
    DateTime UpdatedDate {get; set;}
    int CreatedByKey {get; set;}   // use the same type as your user key
    int UpdatedByKey {get; set;}
}
The interface (or base class) is then applied to every entity for which the database has matching audit columns.
Next you need to override the SaveChanges method of your DbContext class as follows.
public override int SaveChanges()
{
    // Get added entries
    IEnumerable<ObjectStateEntry> addedEntryCollection = ((IObjectContextAdapter)this)
        .ObjectContext
        .ObjectStateManager
        .GetObjectStateEntries(EntityState.Added)
        .Where(m => m != null && m.Entity != null);

    // Get modified entries
    IEnumerable<ObjectStateEntry> modifiedEntryCollection = ((IObjectContextAdapter)this)
        .ObjectContext
        .ObjectStateManager
        .GetObjectStateEntries(EntityState.Modified)
        .Where(m => m != null && m.Entity != null);

    // Set audit fields of added entries
    foreach (ObjectStateEntry entry in addedEntryCollection)
    {                
        var addedEntity = entry.Entity as IAuditableEntity;
        if (addedEntity != null)
        {
            addedEntity.CreatedDate = DateTime.Now;
            addedEntity.CreatedByKey = m_User.UserKey;
            addedEntity.UpdatedDate = DateTime.Now;
            addedEntity.UpdatedByKey = m_User.UserKey;
        }

    }

    // Set audit fields of modified entries
    foreach (ObjectStateEntry entry in modifiedEntryCollection)
    {
        var modifiedEntity = entry.Entity as IAuditableEntity;
        if (modifiedEntity != null)
        {
            modifiedEntity.UpdatedDate = DateTime.Now;
            modifiedEntity.UpdatedByKey = m_User.UserKey;
        }
    }
    return base.SaveChanges();
}
If you are going to be working a lot with EF, it really pays to become intimately familiar with the ObjectStateManager and its capabilities. This is where most of the useful metadata about transactions in progress can be found.
Finally, you need to modify the constructor of your data context (and possibly your repository) to accept a user object.
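A sketch of such a constructor might look like this; the context name and User type are placeholders for your own:
public class MyDataContext : DbContext
{
    private readonly User m_User;

    public MyDataContext(string connectionString, User user)
        : base(connectionString)
    {
        m_User = user;   // consumed by the SaveChanges override shown above
    }

    // DbSets, OnModelCreating, and the SaveChanges override go here.
}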
While this looks like a lot of code, it only needs to be done once per EF data context. And as with the previous example, the actual creation of the data context and repository can be performed by your DI framework.

History Table

Tracking changes to records is often required due to local laws and regulations. Other times it is desirable simply to make diagnostics easier.
Our general recommendation is to simply allow the database to do this. Some databases have this capability built-in, which is often referred to as a temporal table. Others can emulate it using triggers. In either case, the application is unaware of the additional logging, which makes the technique far less error prone.
If for some reason you cannot use a temporal table or trigger, then the repository needs to explicitly write to the history table.
Regardless of where you put the code that maintains the history table, there are two basic conventions that you can follow. Consistency is really important here, as it would be quite confusing to have one convention for some tables and the other convention for the rest.
Copy before Write: In this convention you copy the old record from the live table to the history table prior to performing the update or delete operation. This means the history table never contains the current record. As a result, you’ll need to join the live and history tables together to see a complete history of changes.
Write before Copy: Alternately you can update the live table first, then copy that row to the history table. This has the advantage of having a complete picture in the history table, eliminating the aforementioned join. The downside is it takes more space due to the duplicated data.
With either convention, you’ll want to use soft deletes in order to track who is actually deleting the row. If hard deletes are desired, then they should only be performed following a soft delete.
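If the repository does have to maintain the history table itself, a “copy before write” update might be sketched like this with Dapper; the table and column names are purely illustrative:
public void Update(Employee employee)
{
    using (var con = new SqlConnection(m_ConnectionString))
    {
        con.Open();
        using (var tx = con.BeginTransaction())
        {
            // Copy the current version of the row into the history table first.
            con.Execute(
                @"INSERT INTO HR.EmployeeHistory (EmployeeKey, FirstName, LastName, UpdatedByKey, UpdatedDate)
                  SELECT EmployeeKey, FirstName, LastName, UpdatedByKey, UpdatedDate
                  FROM HR.Employee WHERE EmployeeKey = @EmployeeKey",
                new { employee.EmployeeKey }, tx);

            // Then apply the update to the live table.
            con.Execute(
                @"UPDATE HR.Employee
                  SET FirstName = @FirstName, LastName = @LastName,
                      UpdatedByKey = @UpdatedByKey, UpdatedDate = @UpdatedDate
                  WHERE EmployeeKey = @EmployeeKey",
                employee, tx);

            tx.Commit();
        }
    }
}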

Soft Deletes

One of the advantages of using a repository is you can switch from hard deletes to soft deletes without the rest of the application realizing it. A soft delete removes the record as far as the application is concerned, but allows it to remain in the database for auditing purposes. Optionally, an application can even undelete records.
To avoid data loss, applications should not be granted DELETE privileges on tables that support soft deletes. If the application accidentally tries to perform a hard delete, the permission check will raise an error instead of silently deleting the row.

Chain

Chain offers implicit soft delete support via the audit rules infrastructure. When configuring a soft delete rule, it is customary to also configure the matching audit columns:
var dataSource = dataSource.WithRules(
    new SoftDeleteRule("DeletedFlag", true, OperationTypes.SelectOrDelete),
    new UserDataRule("DeletedByKey", "EmployeeKey", OperationTypes.Delete),
    new DateTimeRule("DeletedDate", DateTimeKind.Local, OperationTypes.Delete)
    );
Whenever a table is found with a soft delete column (DeletedFlag in this example), two things happen automatically:
  • All queries implicitly add “AND DeletedFlag = 0” to the WHERE clause.
  • All calls to DataSource.Delete become update statements to set the deleted flag.

Entity Framework

In Entity Framework, you can include an additional where clause on every query that reads from a table supporting soft deletes. You’ll also need to manually convert any delete operation into an update, which can be tricky when working with object graphs.
Another option takes quite a bit of work, but may be less error prone. It starts by explicitly listing every table that supports soft deletes in the DataContext.OnModelCreating override.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
   modelBuilder.Entity<Employee>().Map(m => m.Requires("IsDeleted").HasValue(false));
}
You then need to override the SaveChanges method to ensure deletes become updates. Colin on Stack Overflow offers this pattern.
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries()
                .Where(p => p.State == EntityState.Deleted
                    && p.Entity is ModelBase))
    {
        SoftDelete(entry);
    }

    return base.SaveChanges();
}

private void SoftDelete(DbEntityEntry entry)
{
    var e = (ModelBase)entry.Entity;
    string tableName = GetTableName(e.GetType());
    Database.ExecuteSqlCommand(
             String.Format("UPDATE {0} SET IsDeleted = 1 WHERE ID = @id", tableName)
             , new SqlParameter("id", e.ID));

    //Marking it Detached prevents the hard delete
    entry.State = EntityState.Detached;
}
You’ll want to read the rest of Colin’s answer, as there are a lot of edge cases to be addressed.

Access Logging

While audit columns, history tables, and soft deletes cover all of the write scenarios, there are times when you also need to log reads. An example of this is the US health care industry. Any doctor or nurse needs the ability to access any patient’s medical records in the event of an emergency. But in the normal course of business, they are only legally allowed to do so when they are actively treating that patient.
Since the records cannot be fully locked down, the next best thing is to track who is reading each record. This can be easily accomplished at the repository level by logging each query involving sensitive data. This is most easily done manually at the top of the relevant repository methods.
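For example, a repository method over sensitive data might begin with a logging call along these lines; the access-log writer and table names are hypothetical:
public Patient GetPatient(int patientKey)
{
    // Record who read which record before running the query.
    m_AccessLog.RecordRead(m_User.UserKey, "Patient", patientKey);

    using (var con = new SqlConnection(m_ConnectionString))
    {
        return con.QuerySingle<Patient>(
            "SELECT * FROM Patient WHERE PatientKey = @patientKey",
            new { patientKey });
    }
}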

Performance Logging

When user experience is a feature, it is important to know how much time is being spent on a per-query basis. Merely tracking per-page performance isn’t enough, as one page may involve multiple queries. This is especially true of Entity Framework, as lazy-loading can hide database calls.

Explicit Logging in the Repository

Though it is tedious and easy to miss a query, one can simply wrap every query in a disposable timer. The pattern is as follows:
public class OperationTimer : IDisposable
{
    readonly object m_Context;
    readonly Stopwatch m_Timer;

    public OperationTimer(object context)
    {
        m_Context = context;
        m_Timer = Stopwatch.StartNew();
    }
    public void Dispose()
    {
        //Write to log here using timer and context
    }
}
And the usage:
using(new OperationTimer("Load employees"))
{
    //execute query here
} 

Chain

Chain exposes a set of events at the data source level. The one needed in this case is DataSource.ExecutionFinished. Here is an example:
static void DefaultDispatcher_ExecutionFinished(object sender, ExecutionEventArgs e)
{
    Debug.WriteLine($"Execution finished: {e.ExecutionDetails.OperationName}. Duration: {e.Duration.Value.TotalSeconds.ToString("N3")} sec. Rows affected: {(e.RowsAffected != null ? e.RowsAffected.Value.ToString("N0") : "<NULL>")}.");
}
You can also attach a handler to DataSource.GlobalExecutionFinished, which listens to events from all data sources.
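Wiring up the handler is just an event subscription, something along these lines:
// Listen to a single data source...
dataSource.ExecutionFinished += DefaultDispatcher_ExecutionFinished;

// ...or to every data source in the application.
DataSource.GlobalExecutionFinished += DefaultDispatcher_ExecutionFinished;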

Entity Framework

The built-in logging capabilities of Entity Framework don’t make it possible to time individual queries. You can work around this limitation using a custom IDbCommandInterceptor.
public class EFLoggerForTesting : IDbCommandInterceptor
{
    static readonly ConcurrentDictionary<DbCommand, DateTime> m_StartTime = new ConcurrentDictionary<DbCommand, DateTime>();

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        Log(command, interceptionContext);
    }

    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
    {
        Log(command, interceptionContext);
    }

    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext)
    {
        Log(command, interceptionContext);
    }

    private static void Log<T>(DbCommand command, DbCommandInterceptionContext<T> interceptionContext)
    {
        DateTime startTime;
        TimeSpan duration;

        m_StartTime.TryRemove(command, out startTime);
        if (startTime != default(DateTime))
        {
            duration = DateTime.Now - startTime;
        }
        else
        {
            duration = TimeSpan.Zero;
        }

        string message;

        var parameters = new StringBuilder();
        foreach (DbParameter param in command.Parameters)
        {
            parameters.AppendLine(param.ParameterName + " " + param.DbType + " = " + param.Value);
        }

        if (interceptionContext.Exception == null)
        {
            message = string.Format("Database call took {0} sec.\r\nCommand:\r\n{1}",
                duration.TotalSeconds.ToString("N3"), parameters.ToString() + command.CommandText);
        }
        else
        {
            message = string.Format("EF Database call failed after {0} sec.\r\nCommand:\r\n{1}\r\nError:{2}",
                duration.TotalSeconds.ToString("N3"), parameters.ToString() + command.CommandText, interceptionContext.Exception);
        }

        Debug.WriteLine(message);
    }


    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
    {
        OnStart(command);
    }

    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        OnStart(command);
    }

    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext)
    {
        OnStart(command);
    }

    private static void OnStart(DbCommand command)
    {
        m_StartTime.TryAdd(command, DateTime.Now);
    }
}
This doesn’t offer a way to capture contextual data, but you can work around that limitation by shoving the context in ThreadLocal or AsyncLocal as appropriate.

Permission Checks – Table Level

While permission checks can be done at the application level, it is often advantageous to also enforce them in the repository. This eliminates the possibility that the permission check will be forgotten on newly created screens or pages.

Repository Enforcement

The simplest way to implement this is a role check at the beginning of each relevant function. For example,
public int Insert(Employee employee)
{
    if (!m_User.IsAdmin)
        throw new SecurityException("Only admins may add employees");

Database Enforced

A more sophisticated option would involve creating multiple connection strings. When the repository is created, a connection string is selected based on the user’s role. In this case, the connection string for non-admin users wouldn’t have INSERT privileges on the employee table.
Due to the complexity and maintenance headaches involved, this approach is not recommended except under very high security environments where multiple layers of defense are desired. Even then, it requires extensive automated testing to ensure every connection string has all the permissions it needs.
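If you do go down this road, the selection itself is straightforward; here is a sketch in which the connection string names are illustrative:
public EmployeeRepository(User user)
{
    m_User = user;

    // Pick the connection string, and therefore the database permissions, based on role.
    var name = user.IsAdmin ? "HR_Admin" : "HR_Standard";
    m_ConnectionString = ConfigurationManager.ConnectionStrings[name].ConnectionString;
}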

Permission Checks – Column Level

Permission checks may also be needed at the column level. For example, you may want to prevent users from giving themselves admin privileges. Or you may want to block non-managers from seeing employee salaries.

Chain

Chain leverages its audit rules capabilities to implement column level permission checks. An anonymous function is passed to the RestrictColumn constructor along with the column name and list of restricted operations. (A table name can be optionally specified as well.)
Func<object, bool> IsAdminCheck = user => ((User)user).IsAdmin;

dataSource = dataSource.WithRules(
    new RestrictColumn("Users", "IsAdmin", OperationTypes.Insert|OperationTypes.Update, IsAdminCheck));
To prevent reading of a restricted column, pass in the OperationTypes.Select flag.

Dapper

The easiest way to do this in Dapper is to simply have multiple SQL statements. If the user lacks a specific privilege, you simply select the SQL statement that omits those columns.
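A sketch of that approach; the column lists and role check are illustrative:
private const string SelectWithSalary =
    "SELECT EmployeeKey, FirstName, LastName, Salary FROM HR.Employee";
private const string SelectWithoutSalary =
    "SELECT EmployeeKey, FirstName, LastName FROM HR.Employee";

public IList<Employee> GetAll()
{
    // Non-managers get the statement that omits the restricted column.
    var sql = m_User.IsManager ? SelectWithSalary : SelectWithoutSalary;

    using (var con = new SqlConnection(m_ConnectionString))
    {
        return con.Query<Employee>(sql).ToList();
    }
}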

Entity Framework

For queries, there are a couple of options available.
  1. Manually create different projections (i.e., Select clauses) depending on the user’s role
  2. Perform the query normally. Then if the permission check fails, loop through the result set, setting the restricted properties to null/0.
For inserts, simply blank out the restricted properties as above.
Updates are trickier. When restricting writes to individual columns, you cannot attach entities. Rather, you need to re-fetch the original record, copy across the permitted values, and then save that object instead of the one passed in by the application code. (Essentially our “novice” pattern from the previous article.)
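Here is a sketch of that update pattern in EF 6; the context, entity, and role check are illustrative:
public void Update(Employee employee)
{
    using (var context = new HRContext())
    {
        // Re-fetch the original row rather than attaching the incoming object.
        var original = context.Employees.Single(e => e.EmployeeKey == employee.EmployeeKey);

        // Copy across only the columns the current user may modify.
        original.FirstName = employee.FirstName;
        original.LastName = employee.LastName;

        if (m_User.IsManager)
            original.Salary = employee.Salary;   // restricted column

        context.SaveChanges();
    }
}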

Mapping One Model to Multiple Tables

An important concept in data architecture is the idea that you don’t need a one-to-one mapping between tables and classes. In order to make the database work more efficiently or to address a particular business rule, you will often find it advantageous to map one class to multiple tables.
Say, for example, you were tracking baseball teams. You may have these tables:
Table            Primary Key
Team             TeamKey
TeamSeasonMap    TeamKey + SeasonKey

If your application only understands the concept of a team in the context of a season, you may have one Team object that covers both tables.

Chain

In Chain, there isn’t a strong relationship between classes and tables. This means for updates you would write code such as this:
dataSource.Update("Team", myTeam).Execute();
dataSource.Update("TeamSeasonMap", myTeam).Execute();
At runtime it will determine which properties are applicable to which tables and generate the SQL accordingly.
Under this model, you would fetch the Team object from a view that joined both tables. (Chain doesn’t support joins directly and assumes they will always occur via views.)

Entity Framework

Entity Framework expects that multiple tables mapping to a single entity all share exactly the same primary key, so it cannot support this scenario directly.
  • For reads, you can perform the join and projection using EF’s normal LINQ syntax.
  • For updates, you will need to copy the model into a separate entity for each table, as in the sketch below.
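A sketch of that copy step; the context, entity, and property names are illustrative:
public void Update(TeamModel team)
{
    using (var context = new BaseballContext())
    {
        // One tracked entity per underlying table.
        var teamRow = context.Teams.Single(t => t.TeamKey == team.TeamKey);
        teamRow.Name = team.Name;

        var seasonRow = context.TeamSeasonMaps
            .Single(m => m.TeamKey == team.TeamKey && m.SeasonKey == team.SeasonKey);
        seasonRow.Wins = team.Wins;
        seasonRow.Losses = team.Losses;

        context.SaveChanges();
    }
}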

Caching

Generally speaking, caching is a repository concern. Since the repository knows when data is being altered, it is the best equipped to handle cache invalidation.

Chain

Caching is supported by Chain, but it needs to be applied on a query-by-query basis using appenders. Appenders are attached to operations before they are executed. In this case there are four appenders we care about:
  • .Cache(...)
  • .CacheAllItems(...)
  • .InvalidateCache(...)
  • .ReadOrCache(...)
They are best explained by means of an example repository. Here you can see the interplay between caching individual records and caching collections using `CacheAllItems`.
public class EmployeeCachingRepository
{

    private const string TableName = "HR.Employee";
    private const string AllCacheKey = "HR.Employee ALL";

    public IClass1DataSource Source { get; private set; }
    public CachePolicy Policy { get; private set; }

    public EmployeeCachingRepository(IClass1DataSource source, CachePolicy policy = null)
    {
        Source = source;
        Policy = policy;
    }

    protected string CacheKey(int id)
    {
        return $"HR.Employee EmployeeKey={id}";
    }

    protected string CacheKey(Employee entity)
    {
        return CacheKey(entity.EmployeeKey.Value);
    }

    public Employee Get(int id)
    {
        return Source.GetByKey(TableName, id)
            .ToObject<Employee>()
            .ReadOrCache(CacheKey(id), policy: Policy)
            .Execute();
    }

    public IList<Employee> GetAll()
    {
        return Source.From(TableName)
            .ToCollection<Employee>()
            .CacheAllItems((Employee x) => CacheKey(x), policy: Policy)
            .ReadOrCache(AllCacheKey, policy: Policy)
            .Execute();
    }

    public Employee Insert(Employee entity)
    {
        return Source.Insert(TableName, entity)
            .ToObject<Employee>()
            .InvalidateCache(AllCacheKey)
            .Cache((Employee x) => CacheKey(x), policy: Policy)
            .Execute();
    }

    public Employee Update(Employee entity)
    {
        return Source.Update(TableName, entity)
            .ToObject<Employee>()
            .Cache(CacheKey(entity))
            .InvalidateCache(AllCacheKey)
            .Execute();
    }

    public void Delete(int id)
    {
        Source.DeleteByKey(TableName, id)
            .InvalidateCache(CacheKey(id))
            .InvalidateCache(AllCacheKey)
            .Execute();
    }
}
As you can see, Chain gives you a lot of control over your invalidation logic at the cost of having to carefully specify everything.

Entity Framework

Entity Framework has two levels of caching. The first level is limited to the data context and is primarily concerned with ensuring the object graph doesn’t contain duplicate entities representing the same physical database record. Since this cache is destroyed along with the data context, it is not relevant to most caching scenarios.
In EF terminology, what we’re looking for is known as a “second level cache”. Entity Framework 6 doesn’t ship with one out of the box, so you’ll need to turn to third-party libraries such as EntityFramework.Cache or EFSecondLevelCache. As you can see from these example libraries, there is no standard pattern for adding second level caching to EF.
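If you would rather not take on one of those libraries, a repository-level cache-aside approach using System.Runtime.Caching is another option. The sketch below is not a true second level cache, and the key format and expiration policy are illustrative:
public Employee Get(int employeeKey)
{
    var cacheKey = "HR.Employee " + employeeKey;

    // Return the cached copy if we have one.
    var cached = (Employee)MemoryCache.Default.Get(cacheKey);
    if (cached != null)
        return cached;

    using (var context = new HRContext())
    {
        var employee = context.Employees
            .AsNoTracking()
            .Single(e => e.EmployeeKey == employeeKey);

        MemoryCache.Default.Set(cacheKey, employee,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) });

        return employee;
    }
}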

Monday, 1 December 2014

Advanced Entity Framework Scenarios for an MVC Web Application

In the previous tutorial you implemented the repository and unit of work patterns. This tutorial covers the following topics:
  • Performing raw SQL queries.
  • Performing no-tracking queries.
  • Examining queries sent to the database.
  • Working with proxy classes.
  • Disabling automatic detection of changes.
  • Disabling validation when saving changes.
For most of these you will work with pages that you already created. To use raw SQL to do bulk updates you'll create a new page that updates the number of credits of all courses in the database:
[Screenshot: Update Course Credits initial page]
And to use a no-tracking query you'll add new validation logic to the Department Edit page:
[Screenshot: Department Edit page with duplicate administrator error message]

Performing Raw SQL Queries

The Entity Framework Code First API includes methods that enable you to pass SQL commands directly to the database. You have the following options:
  • Use the DbSet.SqlQuery method for queries that return entity types. The returned objects must be of the type expected by the DbSet object, and they are automatically tracked by the database context unless you turn tracking off. (See the following section about the AsNoTracking method.)
  • Use the Database.SqlQuery method for queries that return types that aren't entities. The returned data isn't tracked by the database context, even if you use this method to retrieve entity types.
  • Use the Database.ExecuteSqlCommand method for non-query commands.
One of the advantages of using the Entity Framework is that it avoids tying your code too closely to a particular method of storing data. It does this by generating SQL queries and commands for you, which also frees you from having to write them yourself. But there are exceptional scenarios when you need to run specific SQL queries that you have manually created, and these methods make it possible for you to handle such exceptions.
As is always true when you execute SQL commands in a web application, you must take precautions to protect your site against SQL injection attacks. One way to do that is to use parameterized queries to make sure that strings submitted by a web page can't be interpreted as SQL commands. In this tutorial you'll use parameterized queries when integrating user input into a query.

Calling a Query that Returns Entities

Suppose you want the GenericRepository class to provide additional filtering and sorting flexibility without requiring that you create a derived class with additional methods. One way to achieve that would be to add a method that accepts a SQL query. You could then specify any kind of filtering or sorting you want in the controller, such as a Where clause that depends on a join or subquery. In this section you'll see how to implement such a method.
Create the GetWithRawSql method by adding the following code to GenericRepository.cs:
public virtual IEnumerable<TEntity> GetWithRawSql(string query, params object[] parameters)
{
    return dbSet.SqlQuery(query, parameters).ToList();
}
In CourseController.cs, call the new method from the Details method, as shown in the following example:
public ActionResult Details(int id)
{
    var query = "SELECT * FROM Course WHERE CourseID = @p0";
    return View(unitOfWork.CourseRepository.GetWithRawSql(query, id).Single());
}
In this case you could have used the GetByID method, but you're using the GetWithRawSql method to verify that it works.
Run the Details page to verify that the select query works (select the Course tab and then Details for one course).
[Screenshot: Course Details page]

Calling a Query that Returns Other Types of Objects

Earlier you created a student statistics grid for the About page that showed the number of students for each enrollment date. The code that does this in HomeController.cs uses LINQ:
var data = from student in db.Students
           group student by student.EnrollmentDate into dateGroup
           select new EnrollmentDateGroup()
           {
               EnrollmentDate = dateGroup.Key,
               StudentCount = dateGroup.Count()
           };
Suppose you want to write the code that retrieves this data directly in SQL rather than using LINQ. To do that you need to run a query that returns something other than entity objects, which means you need to use the Database.SqlQuery method.
In HomeController.cs, replace the LINQ statement in the About method with the following code:
var query = "SELECT EnrollmentDate, COUNT(*) AS StudentCount "
    + "FROM Person "
    + "WHERE EnrollmentDate IS NOT NULL "
    + "GROUP BY EnrollmentDate";
var data = db.Database.SqlQuery<EnrollmentDateGroup>(query);
Run the About page. It displays the same data it did before.
[Screenshot: About page]

Calling an Update Query

Suppose Contoso University administrators want to be able to perform bulk changes in the database, such as changing the number of credits for every course. If the university has a large number of courses, it would be inefficient to retrieve them all as entities and change them individually. In this section you'll implement a web page that allows the user to specify a factor by which to change the number of credits for all courses, and you'll make the change by executing a SQL UPDATE statement. The web page will look like the following illustration:
[Screenshot: Update Course Credits initial page]
In the previous tutorial you used the generic repository to read and update Course entities in the Course controller. For this bulk update operation, you need to create a new repository method that isn't in the generic repository. To do that, you'll create a dedicated CourseRepository class that derives from the GenericRepository class.
In the DAL folder, create CourseRepository.cs and replace the existing code with the following code:
using System;
using ContosoUniversity.Models;
namespace ContosoUniversity.DAL
{
    public class CourseRepository : GenericRepository<Course>
    {
        public CourseRepository(SchoolContext context)
            : base(context)
        {
        }

        public int UpdateCourseCredits(int multiplier)
        {
            return context.Database.ExecuteSqlCommand("UPDATE Course SET Credits = Credits * {0}", multiplier);
        }

    }
}
In UnitOfWork.cs, change the Course repository type from GenericRepository<Course> to CourseRepository:
private CourseRepository courseRepository;
public CourseRepository CourseRepository
{
    get
    {

        if (this.courseRepository == null)
        {
            this.courseRepository = new CourseRepository(context);
        }
        return courseRepository;
    }
}
In CourseController.cs, add an UpdateCourseCredits method:
public ActionResult UpdateCourseCredits(int? multiplier)
{
    if (multiplier != null)
    {
        ViewBag.RowsAffected = unitOfWork.CourseRepository.UpdateCourseCredits(multiplier.Value);
    }
    return View();
}
This method will be used for both HttpGet and HttpPost. When the HttpGet UpdateCourseCredits method runs, the multiplier variable will be null and the view will display an empty text box and a submit button, as shown in the preceding illustration.
When the Update button is clicked and the HttpPost method runs, multiplier will have the value entered in the text box. The code then calls the repository UpdateCourseCredits method, which returns the number of affected rows, and that value is stored in the ViewBag object. When the view receives the number of affected rows in the ViewBag object, it displays that number instead of the text box and submit button, as shown in the following illustration:
[Screenshot: Update Course Credits rows affected page]
Create a view in the Views\Course folder for the Update Course Credits page:
[Screenshot: Add View dialog box for Update Course Credits]
In Views\Course\UpdateCourseCredits.cshtml, replace the existing code with the following code:
@model ContosoUniversity.Models.Course

@{
    ViewBag.Title = "UpdateCourseCredits";
}
<h2>Update Course Credits</h2>

@if (ViewBag.RowsAffected == null)
{
    using (Html.BeginForm())
    {
        <p>
            Enter a number to multiply every course's credits by: @Html.TextBox("multiplier")
        </p>
        <p>
            <input type="submit" value="Update" />
        </p>
    }
}
@if (ViewBag.RowsAffected != null)
{
    <p>
        Number of rows updated: @ViewBag.RowsAffected
    </p>
}
<div>
    @Html.ActionLink("Back to List", "Index")
</div>
Run the page by selecting the Courses tab, then adding "/UpdateCourseCredits" to the end of the URL in the browser's address bar (for example: http://localhost:50205/Course/UpdateCourseCredits). Enter a number in the text box:
[Screenshot: Update Course Credits page with 2 entered]
Click Update. You see the number of rows affected:
[Screenshot: Update Course Credits rows affected page]
Click Back to List to see the list of courses with the revised number of credits.
[Screenshot: Courses Index page showing revised credits]
For more information about raw SQL queries, see Raw SQL Queries on the Entity Framework team blog.

No-Tracking Queries

When a database context retrieves database rows and creates entity objects that represent them, by default it keeps track of whether the entities in memory are in sync with what's in the database. The data in memory acts as a cache and is used when you update an entity. This caching is often unnecessary in a web application because context instances are typically short-lived (a new one is created and disposed for each request) and the context that reads an entity is typically disposed before that entity is used again.
You can specify whether the context tracks entity objects for a query by using the AsNoTracking method. Typical scenarios in which you might want to do that include the following:
  • The query retrieves such a large volume of data that turning off tracking might noticeably enhance performance.
  • You want to attach an entity in order to update it, but you earlier retrieved the same entity for a different purpose. Because the entity is already being tracked by the database context, you can't attach the entity that you want to change. One way to prevent this from happening is to use the AsNoTracking option with the earlier query.
In this section you'll implement business logic that illustrates the second of these scenarios. Specifically, you'll enforce a business rule that says that an instructor can't be the administrator of more than one department.
In DepartmentController.cs, add a new method that you can call from the Edit and Create methods to make sure that no two departments have the same administrator:
private void ValidateOneAdministratorAssignmentPerInstructor(Department department)
{
    if (department.PersonID != null)
    {
        var duplicateDepartment = db.Departments
            .Include("Administrator")
            .Where(d => d.PersonID == department.PersonID)
            .FirstOrDefault();
        if (duplicateDepartment != null && duplicateDepartment.DepartmentID != department.DepartmentID)
        {
            var errorMessage = String.Format(
                "Instructor {0} {1} is already administrator of the {2} department.",
                duplicateDepartment.Administrator.FirstMidName,
                duplicateDepartment.Administrator.LastName,
                duplicateDepartment.Name);
            ModelState.AddModelError(string.Empty, errorMessage);
        }
    }
}
Add code in the try block of the HttpPost Edit method to call this new method if there are no validation errors. The try block now looks like the following example:
if (ModelState.IsValid)
{
    ValidateOneAdministratorAssignmentPerInstructor(department);
}
if (ModelState.IsValid)
{
    db.Entry(department).State = EntityState.Modified;
    db.SaveChanges();
    return RedirectToAction("Index");
}
Run the Department Edit page and try to change a department's administrator to an instructor who is already the administrator of a different department. You get the expected error message:
[Screenshot: Department Edit page with duplicate administrator error message]
Now run the Department Edit page again and this time change the Budget amount. When you click Save, you see an error page:
[Screenshot: Department Edit page with ObjectStateManager error message]
The exception error message is "An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key." This happened because of the following sequence of events:
  • The Edit method calls the ValidateOneAdministratorAssignmentPerInstructor method, which retrieves all departments that have Kim Abercrombie as their administrator. That causes the English department to be read. Because that's the department being edited, no error is reported. As a result of this read operation, however, the English department entity that was read from the database is now being tracked by the database context.
  • The Edit method tries to set the Modified flag on the English department entity created by the MVC model binder, but that fails because the context is already tracking an entity for the English department.
One solution to this problem is to keep the context from tracking in-memory department entities retrieved by the validation query. There's no disadvantage to doing this, because you won't be updating this entity or reading it again in a way that would benefit from it being cached in memory.
In DepartmentController.cs, in the ValidateOneAdministratorAssignmentPerInstructor method, specify no tracking, as shown in the following example:
var duplicateDepartment = db.Departments
   .Include("Administrator")
   .Where(d => d.PersonID == department.PersonID)
   .AsNoTracking()
   .FirstOrDefault();
Repeat your attempt to edit the Budget amount of a department. This time the operation is successful, and the site returns as expected to the Departments Index page, showing the revised budget value.

Examining Queries Sent to the Database

Sometimes it's helpful to be able to see the actual SQL queries that are sent to the database. To do this, you can examine a query variable in the debugger or call the query's ToString method. To try this out, you'll look at a simple query and then look at what happens to it as you add options such as eager loading, filtering, and sorting.
In Controllers/CourseController, replace the Index method with the following code:
public ViewResult Index()
{
    var courses = unitOfWork.CourseRepository.Get();
    return View(courses.ToList());
}
Now set a breakpoint in GenericRepository.cs on the return query.ToList(); and the return orderBy(query).ToList(); statements of the Get method. Run the project in debug mode and select the Course Index page. When the code reaches the breakpoint, examine the query variable. You see the query that's sent to SQL Server Compact. It's a simple Select statement:
{SELECT [Extent1].[CourseID] AS [CourseID], [Extent1].[Title] AS [Title], [Extent1].[Credits] AS [Credits], [Extent1].[DepartmentID] AS [DepartmentID]
FROM [Course] AS [Extent1]}
Queries can be too long to display in the debugging windows in Visual Studio. To see the entire query, you can copy the variable value and paste it into a text editor:
[Screenshot: Copy value of variable in debug mode]
Now you'll add a drop-down list to the Course Index page so that users can filter for a particular department. You'll sort the courses by title, and you'll specify eager loading for the Department navigation property. In CourseController.cs, replace the Index method with the following code:
public ActionResult Index(int? SelectedDepartment)
{
    var departments = unitOfWork.DepartmentRepository.Get(
        orderBy: q => q.OrderBy(d => d.Name));
    ViewBag.SelectedDepartment = new SelectList(departments, "DepartmentID", "Name", SelectedDepartment);

    int departmentID = SelectedDepartment.GetValueOrDefault(); 
    return View(unitOfWork.CourseRepository.Get(
        filter: d => !SelectedDepartment.HasValue || d.DepartmentID == departmentID,
        orderBy: q => q.OrderBy(d => d.CourseID),
        includeProperties: "Department"));
}
The method receives the selected value of the drop-down list in the SelectedDepartment parameter. If nothing is selected, this parameter will be null.
A SelectList collection containing all departments is passed to the view for the drop-down list. The parameters passed to the SelectList constructor specify the value field name, the text field name, and the selected item.
For the Get method of the Course repository, the code specifies a filter expression, a sort order, and eager loading for the Department navigation property. The filter expression always returns true if nothing is selected in the drop-down list (that is, SelectedDepartment is null).
In Views\Course\Index.cshtml, immediately before the opening table tag, add the following code to create the drop-down list and a submit button:
@using (Html.BeginForm())
{
    <p>Select Department: @Html.DropDownList("SelectedDepartment","All")   
    <input type="submit" value="Filter" /></p>
}
With the breakpoints still set in the GenericRepository class, run the Course Index page. Continue through the first two times that the code hits a breakpoint, so that the page is displayed in the browser. Select a department from the drop-down list and click Filter:
[Screenshot: Course Index page with department selected]
This time the first breakpoint will be for the departments query for the drop-down list. Skip that and view the query variable the next time the code reaches the breakpoint in order to see what the Course query now looks like. You'll see something like the following:
{SELECT [Extent1].[CourseID] AS [CourseID], [Extent1].[Title] AS [Title], [Extent1].[Credits] AS [Credits], [Extent1].[DepartmentID] AS [DepartmentID], [Extent2].[DepartmentID] AS [DepartmentID1], [Extent2].[Name] AS [Name], [Extent2].[Budget] AS [Budget], [Extent2].[StartDate] AS [StartDate], [Extent2].[PersonID] AS [PersonID], [Extent2].[Timestamp] AS [Timestamp]
FROM  [Course] AS [Extent1]
INNER JOIN [Department] AS [Extent2] ON [Extent1].[DepartmentID] = [Extent2].[DepartmentID]
WHERE (@p__linq__0 IS NULL) OR ([Extent1].[DepartmentID] = @p__linq__1)}
You can see that the query is now a JOIN query that loads Department data along with the Course data, and that it includes a WHERE clause.

Working with Proxy Classes

When the Entity Framework creates entity instances (for example, when you execute a query), it often creates them as instances of a dynamically generated derived type that acts as a proxy for the entity. This proxy overrides some virtual properties of the entity to insert hooks for performing actions automatically when the property is accessed. For example, this mechanism is used to support lazy loading of relationships.
Most of the time you don't need to be aware of this use of proxies, but there are exceptions:
  • In some scenarios you might want to prevent the Entity Framework from creating proxy instances. For example, serializing non-proxy instances might be more efficient than serializing proxy instances.
  • When you instantiate an entity class using the new operator, you don't get a proxy instance. This means you don't get functionality such as lazy loading and automatic change tracking. This is typically okay; you generally don't need lazy loading, because you're creating a new entity that isn't in the database, and you generally don't need change tracking if you're explicitly marking the entity as Added. However, if you do need lazy loading and you need change tracking, you can create new entity instances with proxies using the Create method of the DbSet class.
  • You might want to get an actual entity type from a proxy type. You can use the GetObjectType method of the ObjectContext class to get the actual entity type of a proxy type instance.
For more information, see Working with Proxies on the Entity Framework team blog.
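As a rough illustration of the last two points, using the SchoolContext from this tutorial:
using (var context = new SchoolContext())
{
    // DbSet.Create returns a proxy instance with lazy loading and change tracking hooks.
    var proxyStudent = context.Students.Create();

    // ObjectContext.GetObjectType maps the proxy type back to the real entity type.
    Type entityType = ObjectContext.GetObjectType(proxyStudent.GetType());
    Debug.WriteLine(entityType.Name);   // prints "Student" rather than the proxy type name
}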

Disabling Automatic Detection of Changes

The Entity Framework determines how an entity has changed (and therefore which updates need to be sent to the database) by comparing the current values of an entity with the original values. The original values are stored when the entity was queried or attached. Some of the methods that cause automatic change detection are the following:
  • DbSet.Find
  • DbSet.Local
  • DbSet.Remove
  • DbSet.Add
  • DbSet.Attach
  • DbContext.SaveChanges
  • DbContext.GetValidationErrors
  • DbContext.Entry
  • DbChangeTracker.Entries
If you're tracking a large number of entities and you call one of these methods many times in a loop, you might get significant performance improvements by temporarily turning off automatic change detection using the AutoDetectChangesEnabled property. For more information, see Automatically Detecting Changes on the Entity Framework team blog.
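For example, a bulk insert loop might temporarily disable change detection like this; the newStudents collection is assumed:
using (var context = new SchoolContext())
{
    try
    {
        // Skip the change scan that would otherwise run on every Add call.
        context.Configuration.AutoDetectChangesEnabled = false;

        foreach (var student in newStudents)
        {
            context.Students.Add(student);
        }
    }
    finally
    {
        context.Configuration.AutoDetectChangesEnabled = true;
    }

    context.SaveChanges();
}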

Disabling Validation When Saving Changes

When you call the SaveChanges method, by default the Entity Framework validates the data in all properties of all changed entities before updating the database. If you've updated a large number of entities and you've already validated the data, this work is unnecessary and you could make the process of saving the changes take less time by temporarily turning off validation. You can do that using the ValidateOnSaveEnabled property. For more information, see Validation on the Entity Framework team blog.
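A minimal sketch of that option:
using (var context = new SchoolContext())
{
    // The data has already been validated, so skip EF's per-entity validation on save.
    context.Configuration.ValidateOnSaveEnabled = false;

    // ... attach or modify a large number of already-validated entities here ...

    context.SaveChanges();
}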

Links to Entity Framework Resources

This completes this series of tutorials on using the Entity Framework in an ASP.NET MVC application. For more information about the Entity Framework, see the following resources:
The following posts on the Entity Framework Team Blog provide more information about some of the topics covered in these tutorials:
Many of the blog posts listed here are for the CTP5 version of Entity Framework Code First. Most of the information in them remains accurate, but there are some changes between CTP5 and the officially released version of Code First.
