Friday, 27 July 2018

C# Bot as an Enabler—Azure Web App Bot (Quick Start)

This article introduces the Live Assist for Microsoft Dynamics 365 Bot as an Enabler Bot C# SDK, running within the Azure Web App Bot Framework.
The examples contained here do not provide a detailed guide to developing bots within the Microsoft Azure framework. We assume that the developer has experience with C# and the Microsoft Azure Framework, and can use this quick start guide as a starting point to become familiar with the Live Assist SDK.
Note: The Microsoft Web App Bot Framework is currently in development. You may need to make some changes in the steps, or the code, for your development effort. We recommend that you develop your application using Microsoft Visual Studio Professional—do not use the App Service Editor provided in the Azure web interface.
This bot receives chats initiated with the Microsoft Web App Bot Service and echoes back any message a visitor sends. When the visitor sends a message containing 'help', the Live Assist Bot SDK is invoked to escalate the chat to a Live Assist Skill Group for processing.
The bot, by using the Live Assist SDK, proxies messages between the Azure Web Chat service and the Agent using Live Assist.
See also: BotBuilder samples repository  

Step 1—Creating a C# basic Web App bot

  1. In Azure, create a new C# Web App Bot.
  2. Set the appropriate Storage and Billing requirements for your application.
    The significant parts of this process are:
    • Selecting a name which is unique within Azure
    • Selecting the C# (Basic) Application
    Azure may take a few minutes to build the bot.

Step 2—Downloading the Project Code from Azure

  • When the Web App Bot has been created, you can download the Project code:
    Azure prepares the source code to download—you need to unzip the archive.

Step 3—Loading the code into Visual Studio, and importing the Live Assist Reference

  1. The unzipped archive contains the following Solution file:
    Microsoft.Bot.Sample.SimpleEchoBot.sln
    Open this file in Visual Studio Professional.
    Note: We used Visual Studio 2017 for this example. Other IDEs may appear differently.
  2. Import the CaféX Live Assist Reference file from NuGet.
    From the Solution Explorer, right-click References and click Manage NuGet Packages.
    You may see a warning from NuGet saying that packages are missing. Click Restore.
  3. Under Browse, search for Live Assist and install the LiveAssistBotSDK package.
  4. When the NuGet package is installed, you can begin development.

Step 4—Developing your application

Based on the Microsoft SimpleEchoBot sample bot, make the following changes to the EchoDialog.cs file:
  1. Managing Imports:
    The following packages are required:
    using Microsoft.Bot.Builder.ConnectorEx;
    using Newtonsoft.Json;
    using System.Timers;
    using Cafex.LiveAssist.Bot;
  2. Declare the static variables of the EchoDialog:
    private static Sdk sdk;
    private static ChatContext chatContext;
    private static string conversationRef;
    private static Timer timer;
  3. Load the SDK in the StartAsync method
    Make sure that you define your Account Number (see Find your Account Number).
                sdk = sdk ?? new Sdk(new SdkConfiguration()
                {
                    AccountNumber = "__CHANGE_ME__"
                });
  4. At the beginning of the if block, check whether the chat has already been escalated, so that the echo does not occur:
                if (chatContext != null)
                {
                    // As chatContext is not null, we already have an escalated chat.
                    // Post the incoming message line to the escalated chat.
                    await sdk.PostLine(activity.Text, chatContext);
                }
                else if (activity.Text == "reset")
          
  5. Add an extra trigger to handle 'help' in the MessageReceivedAsync method:
                else if (activity.Text.Contains("help"))
                {
                    // "help" within the message is our escalation trigger.
                    await context.PostAsync("Escalating to agent");
                    await Escalate(activity); // Implemented in next step.
                }
  6. Define the Escalate method
    Make sure that you specify an Agent Skill group to target.
            private async Task Escalate(Activity activity)
            {
                // This is our reference to the upstream conversation
                conversationRef = JsonConvert.SerializeObject(activity.ToConversationReference());
    
                var chatSpec = new ChatSpec()
                {
                    // Set Agent skill to target
                    Skill = "__CHANGE_ME__",
                    VisitorName = activity.From.Name
                };
    
                // Start timer to poll for Live Assist chat events
                if (timer == null)
                {
                    timer = new Timer(5000);
                    // OnTimedEvent is implemented in the next step
                    timer.Elapsed += (sender, e) => OnTimedEvent(sender, e);
                    timer.Start();
                }
    
                // Request a chat via the Sdk    
                chatContext = await sdk.RequestChat(chatSpec);
            }
  7. Define the Enabler Logic
    When OnTimedEvent fires, messages are passed between the Visitor and the Bot, and between the Agent and the Bot.
            async void OnTimedEvent(Object source, ElapsedEventArgs eea)
            {
                if (chatContext != null)
                {
                    // Create an upstream reply
                    var reply = JsonConvert.DeserializeObject<ConversationReference>(conversationRef)
                        .GetPostToBotMessage().CreateReply();
    
                    // Create upstream connection on which to send reply 
                    var client = new ConnectorClient(new Uri(reply.ServiceUrl));
    
                    // Poll Live Assist for events
                    var chatInfo = await sdk.Poll(chatContext);
    
                    if (chatInfo != null)
                    {
                        // ChatInfo.ChatEvents will contain events since last call to poll.
                        if (chatInfo.ChatEvents != null && chatInfo.ChatEvents.Count > 0)
                        {
                            foreach (ChatEvent e in chatInfo.ChatEvents)
                            {
                                switch (e.Type)
                                {
                                    // type is either "state" or "line".
                                    case "line":
                                        // Source is either: "system", "agent" or "visitor"
                                        if (e.Source.Equals("system"))
                                        {
                                            reply.From.Name = "system";
                                        }
                                        else if (e.Source.Equals("agent"))
                                        {
                                            reply.From.Name = chatInfo.AgentName;
    
                                        }
                                        else
                                        {
                                            break;
                                        }
    
                                        reply.Type = "message";
                                        reply.Text = e.Text;
                                        client.Conversations.ReplyToActivity(reply);
                                        break;
    
                                    case "state":
                                        // State changes
                                        // Valid values: "waiting", "chatting", "ended"
                                        if (chatInfo.State.Equals("ended"))
                                        {
                                            chatContext = null;
                                        }
                                        break;
                                }
                            }
                        }
                    }
                }
            }

    A completed example of the EchoDialog.cs is available from our GitHub repository.

Step 5—Publishing your bot

When your changes are complete, publish the solution back to your Azure Web App Bot, for example by right-clicking the project in Visual Studio and selecting Publish with the publish profile for your App Service.

Step 6—Testing your bot

  • When you have published your bot, you can quickly test the new Chat Bot from the Microsoft Azure > Test in Web Chat tab.
    Important: You need an Agent logged into Live Assist for Microsoft Dynamics 365, under the correct skill group, to receive the escalated chat.

Step 7—Debugging your bot

There are numerous ways to debug your bot application.
The following Microsoft documents may assist you with this.

Loading Shared CSX Files in Azure Functions

As I mentioned in a recent post, I have been spending some time getting to know Azure Functions lately. A friend and I are taking the opportunity to learn about Azure Functions and build something that will help us with activities related to the community conferences we organize. As always, this is more of a breadcrumb trail for me, but leave a comment if this helps you! I'd love to hear from you.

Why Do We Need Shared CSX Code?

To help us focus on a real world problem, we are using Azure Functions to automate the management of some social media tasks for some conferences we organize. When we accept a speaker session or confirm a sponsor, we want to schedule tweets to notify attendees to sessions and thank speakers and sponsors for supporting the event.
Our initial goal is to have a function that accepts an HttpRequest and queues the information for a Speaker, Sponsor or Session. We then want a QueueTrigger to process the speaker, sponsor or session data and schedule a tweet. The Speaker, Sponsor and Session objects will all be strongly typed POCO objects, but we don't want to reproduce the class definitions across functions in our Function App.

Loading Shared CSX Files in Azure Functions

For this post, we'll just focus on a simple example of the HTTP POST to receive a Speaker and then the QueueTrigger code to deserialize the Speaker object from the queue. The following code is an HttpTrigger in C# that simply takes an HttpRequestMessage and adds the content to an Azure Storage Queue.
Notice the #load "..\Shared\Speaker.csx" line at the top of the code. This line references the Speaker.csx file that contains our Speaker class definition that we want to serialize into the queue.
#load "..\Shared\Speaker.csx"
#r "Newtonsoft.Json"

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Diagnostics;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, IAsyncCollector<string> speakersOut, TraceWriter log)
{
    dynamic data = await req.Content.ReadAsAsync<object>();

    HttpResponseMessage res = null;
    string twitterHandle = data?.twitterHandle;
    if (!string.IsNullOrEmpty(twitterHandle))
    {
        var speaker = new Speaker(){FirstName = data?.firstName, LastName = data?.lastName, TwitterHandle = data?.twitterHandle };
        await speakersOut.AddAsync(JsonConvert.SerializeObject(speaker));
        res = new HttpResponseMessage(HttpStatusCode.OK);
    }
    else
    {
        res = new HttpResponseMessage(HttpStatusCode.BadRequest)
        {
            Content = new StringContent("Please pass a valid speaker.")
        };
    }
    return res;
}
Once the await speakersOut.AddAsync(JsonConvert.SerializeObject(speaker)); completes, the speaker message is on the queue. Time to deserialize. The ProcessSpeaker function receives the message containing the Speaker object and deserializes it to process the Speaker. The var speaker = JsonConvert.DeserializeObject<Speaker>(queuedSpeaker); line deserializes the speaker. In this sample I am simply using the object to call a method on the Speaker instance and write the Twitter handle out to the logs; the tweeting comes later.
#load "..\Shared\Speaker.csx"
#r "Newtonsoft.Json"

using System;
using Newtonsoft.Json;

public static void Run(string queuedSpeaker, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {queuedSpeaker}");
    var speaker = JsonConvert.DeserializeObject<Speaker>(queuedSpeaker);
    log.Info($"{speaker.FullName()} tweets from {speaker.TwitterHandle}");
}
Finally, the Speaker.csx file that has the properties and methods we want shared across our functions is very simple for this example, but there is real power here. For example, the class that is responsible for Tweets might be a shared class that we can consume across multiple functions. The Speaker class can also be edited directly in the Kudu console.
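The original screenshot of the file is not reproduced here, but a minimal Speaker.csx consistent with the properties and the FullName() method used above might look like this (a sketch; the original file may differ):

```csharp
// Speaker.csx - shared POCO loaded via #load "..\Shared\Speaker.csx"
public class Speaker
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string TwitterHandle { get; set; }

    // Convenience method used by the QueueTrigger logging above
    public string FullName()
    {
        return $"{FirstName} {LastName}";
    }
}
```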

Global Configuration for Azure Functions

What if some of your code changes? How will your functions reload and use the new changes? At the root of a Function App's directory structure there is a host.json file, which is mapped to the WebJobs SDK JobHostConfiguration settings class. If you are composing your functions within the Azure Portal, you can get to this file from the Kudu console for your Function App.
If you have configured continuous integration, you can access this file at the root of your project.
The key setting we are concerned with is the "watchDirectories": ["Shared"] line. Adding this property, with the "Shared" folder value, to our host.json file indicates that files in the listed folders should be watched for changes by functions within your Function App. If there is a change to your code, the function app is restarted and recompiled, and any errors are logged. For instance, while writing this post, I changed the TwitterHandle property on the Speaker class to be Twitterhandle and the UpsertFunction immediately failed with an error when attempting to create the Speaker instance:
 'Speaker' does not contain a definition for 'TwitterHandle'
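For reference, a minimal host.json with this setting (all other settings omitted) would look like:

```json
{
  "watchDirectories": [ "Shared" ]
}
```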

What's Next

Azure Functions continues to capture my interest. The integration with Flow, LogicApps, simple API creation, the development model, the benefits to business, and much more make Azure Functions a compelling tool for your tool belt. Check out the Azure Functions C# developer reference for more details.
The recent release of the Visual Studio Tools for Azure Functions and the Azure Functions CLI are starting to ease the learning curve of Azure Function development, but the development experience still has some rough edges. Specifically, the local development and debugging story is a bit murky right now but getting better. I think this may be the topic of the next post!
As I understand and learn more of the functionality (see what I did there...) of Azure Functions I'll keep posting.
HTH - As always, let me know if you have a comment or suggestions.

Thursday, 26 July 2018

Azure Storage REST API: Authenticate with C#

In one of my projects, where I've been refactoring a traditional .NET project into a .NET Core project, I used the Azure Storage NuGets. As of this posting, the current version of the NuGet package supports .NET Core, which is awesome, but its dependencies don't.
Why is this a problem? Well, because if you want to migrate this code to run on .NET Core and you rely on the Windows Azure Storage NuGet package, it will currently not be possible to run it on .NET Core.
That's why I chose to use the Azure Storage REST API instead for all my things - and I haven't regretted a single moment of it (except for trying to figure the auth part out).
The authentication/authorization bits were not really clear. Well, clear as mud perhaps - the documentation is there, but it's quite confusing and lacks any good samples. So with that, I decided to make a sample.
Enjoy this tip-of-the-day post, and feel free to drop me a comment or e-mail.

Authenticate Azure Storage REST requests in C#

Everything I've built is based on information from this page: Authentication for the Azure Storage Services.

Pre-requisites

In order to use this code, there are a few prerequisites that I'd like to note down:
  • You should have an Azure Storage account.
  • You should have your Storage Account Name.
  • You should have your Storage Account Key.
  • There is NO need for the Storage Connection string.
  • The Client object in my code is a normal new HttpClient();

Required Headers

As mentioned in the public documentation, there are a few headers that are required as of this posting:
  • Date
  • Authorization
The rest of the headers are optional, but depending on what operations you want to do, and which service you're targeting, they will differ. This is focused on Table Storage currently, but can be applied to others as well.

Creating the Date Header

This is a required header, and the easiest way to demonstrate how to build it is like this:
var RequestDateString = DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture);

if (Client.DefaultRequestHeaders.Contains("x-ms-date"))
    Client.DefaultRequestHeaders.Remove("x-ms-date");
Client.DefaultRequestHeaders.Add("x-ms-date", RequestDateString);
If there's already an x-ms-date header present, remove it and add it again with the proper value.

Creating the Authorization Header

This is where the tricky part came into play. Seeing it now in retrospect, it's fairly straightforward, but before figuring out in what order, and how to properly encode this header, it was a slight struggle.
var StorageAccountName = "YourStorageAccountName";
var StorageKey = "YourStorageAccountKey";
// The request URI must be absolute; for Table Storage it includes the account's table endpoint.
var requestUri = new Uri("https://YourStorageAccountName.table.core.windows.net/YourTableName(PartitionKey='ThePartitionKey',RowKey='TheRowKey')");

if (Client.DefaultRequestHeaders.Contains("Authorization"))
    Client.DefaultRequestHeaders.Remove("Authorization");

var canonicalizedStringToBuild = $"{RequestDateString}\n/{StorageAccountName}/{requestUri.AbsolutePath.TrimStart('/')}";
string signature;
using (var hmac = new HMACSHA256(Convert.FromBase64String(StorageKey)))
{
    byte[] dataToHmac = Encoding.UTF8.GetBytes(canonicalizedStringToBuild);
    signature = Convert.ToBase64String(hmac.ComputeHash(dataToHmac));
}

var authorizationHeader = $"{StorageAccountName}:{signature}";
Client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("SharedKeyLite", authorizationHeader);
As you can see, it's not entirely straightforward. These are the steps and things to consider:
  • First, decode the StorageKey from Base64.
  • Pass the decoded key into the constructor of the HMACSHA256 class.
  • Get the UTF8 bytes of the canonicalizedStringToBuild (the date plus the request information) as a byte[] array.
  • Base64-encode the result of hmac.ComputeHash().
  • The result, combined with the Storage Account Name, is your signature; it ends up as a string in this format: StorageAccountName:Signature.
  • Finally, set the Authorization header, as in the snippet above, using the SharedKeyLite scheme and your signature.
I'm going to be honest. This took some time to figure out - but once it was working, it's blazingly fast and I love it.

Accept Header

Since I want the response to be application/json, this is exactly what I need to tell the request:
Client.DefaultRequestHeaders.Accept.Clear();
Client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

Request version

I'm specifying which version to use, so that if new versions come out I am still targeting the one I know works throughout all of my unit tests and tenants using this code.
This is done using the x-ms-version header.
if (Client.DefaultRequestHeaders.Contains("x-ms-version"))
    Client.DefaultRequestHeaders.Remove("x-ms-version");

Client.DefaultRequestHeaders.Add("x-ms-version", "2015-12-11");

DataService Version Headers

Since I'm working with entities, I need to specify the DataServiceVersion headers as such:
if (Client.DefaultRequestHeaders.Contains("DataServiceVersion"))
    Client.DefaultRequestHeaders.Remove("DataServiceVersion");
Client.DefaultRequestHeaders.Add("DataServiceVersion", "3.0;NetFx");

if (Client.DefaultRequestHeaders.Contains("MaxDataServiceVersion"))
    Client.DefaultRequestHeaders.Remove("MaxDataServiceVersion");
Client.DefaultRequestHeaders.Add("MaxDataServiceVersion", "3.0;NetFx");

If-Match Header

In my specific case I'm sometimes doing PUT and DELETE operations, and when doing that, there's an additional required header: the If-Match header.
if (httpMethod == HttpMethod.Delete || httpMethod == HttpMethod.Put)
{
    if (Client.DefaultRequestHeaders.Contains("If-Match"))
        Client.DefaultRequestHeaders.Remove("If-Match");
    // Currently I'm not using optimistic concurrency :-(
    Client.DefaultRequestHeaders.Add("If-Match", "*");
}
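Putting the pieces together, here is a sketch of a complete GET request using these headers. This is not a production implementation; TableStorageSample, GetEntityAsync, and the Sign helper are illustrative names, not part of the original code:

```csharp
using System;
using System.Globalization;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

public static class TableStorageSample
{
    // Computes the SharedKeyLite signature: HMAC-SHA256 over the
    // canonicalized string, keyed with the Base64-decoded storage key.
    public static string Sign(string stringToSign, string base64StorageKey)
    {
        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64StorageKey)))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }
    }

    // Issues a GET against Table Storage using the headers described above.
    public static async Task<string> GetEntityAsync(string accountName, string storageKey, string tableQuery)
    {
        var client = new HttpClient();
        var requestUri = new Uri($"https://{accountName}.table.core.windows.net/{tableQuery}");

        // Date header
        var requestDateString = DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture);
        client.DefaultRequestHeaders.Add("x-ms-date", requestDateString);

        // Authorization header (SharedKeyLite): date + "\n" + canonicalized resource
        var stringToSign = $"{requestDateString}\n/{accountName}/{requestUri.AbsolutePath.TrimStart('/')}";
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("SharedKeyLite", $"{accountName}:{Sign(stringToSign, storageKey)}");

        // Accept, version, and DataService headers from the sections above
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        client.DefaultRequestHeaders.Add("x-ms-version", "2015-12-11");
        client.DefaultRequestHeaders.Add("DataServiceVersion", "3.0;NetFx");
        client.DefaultRequestHeaders.Add("MaxDataServiceVersion", "3.0;NetFx");

        var response = await client.GetAsync(requestUri);
        return await response.Content.ReadAsStringAsync();
    }
}
```

A call would then look like await TableStorageSample.GetEntityAsync("YourStorageAccountName", storageKey, "YourTableName(PartitionKey='pk',RowKey='rk')");.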

Known issues: Forbidden: Server failed to authenticate the request.

Before I hit the jackpot on how to format my Authorize header, it generated a lot of different errors. The most common one though, being this:
Status Code: Forbidden, Reason: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
There was no "easy fix" for this, as it simply meant the Authorization header was malformed, but the error doesn't state what is incorrect (which I suppose is good, for security). So after a lot of Fiddler4 magic and experimentation, I could resolve the issue, and the code you see in this post is the version that is currently (2016-11-01) working as expected throughout all of my projects.

Resources

The snippets here are part of a bigger project of mine, hence I can't easily share the entire source. However, should you be interested in a full working sample, please drop a comment, and if there's enough interest perhaps I'll create a new GitHub project for it.

Tuesday, 24 July 2018

Angular and ASP.NET Core


The Angular CLI provides a way to develop front-end applications using Angular that hides a lot of details. For example there’s no requirement to understand how Webpack or SystemJS work.
In fact, if you don’t know a little bit about Webpack, which is what is used to build the latest version of Angular applications, the CLI almost looks like magic. You just need to do an ng new and an ng serve --open and you have a working Angular application open in your web browser.
The fact that the CLI hides all the plumbing might lead to questions like: “How do I use Angular with ASP.NET Core?”.
Angular and ASP.NET Core logos
I hope that by the end of this blog post it will be clear to you how you can answer that question (and not only with ASP.NET Core, with whichever technology you want to use your Angular app with).
You see, an Angular app is an application in and of itself; it just needs to be “served” somehow by a web server.
When you compile an Angular application you are producing a set of JavaScript and CSS files, and one index.html file. That’s it.
The default folder where those “artifacts” get copied to is yourApplicationFolder/dist. You can check it out by going to your Angular application and doing an ng build.
Go on, I’ll wait.
When you do ng serve --open you are actually using a stand-alone web server (webpack-dev-server) to serve that index.html file in the dist folder.
The rest of this blog post will describe several approaches that you can take for using Angular with ASP.NET Core. The first is to have ASP.NET Core serve the Angular files.
The second approach is to have Angular and ASP.NET Core as different applications. There’s an example of how to achieve this using Nginx where both Angular and ASP.NET Core are served using port 80 and in IIS where each application is served from its own port.
The final part of the post describes a setup that I consider ideal where you can use Angular’s ng serve during development.
This post is quite long but the sections are fairly independent. If you are only interested in the last section and you are using Windows, I recommend also reading the section on how to configure Angular in IIS.

Using ASP.NET Core to serve the Angular application

It can be argued that serving an Angular application “within” ASP.NET Core is wasteful in terms of resources. In the end the Angular application is just a set of static files, there’s no need to have the request for those files go through the ASP.NET Core middleware pipeline.
There might be some good reasons for doing it though, also there’s no harm in knowing how to do it and since it seems to be a common approach, being familiar with it might be useful.
One important thing to know in order to understand how we can serve an ASP.NET Core and Angular application together is to understand how a request is processed in ASP.NET Core.
When you run an ASP.NET Core application your request goes through a “pipeline” of middlewares. Every time a request comes in it goes through the middlewares in the order they are defined, and then in reverse order.
Every middleware has an opportunity to change the request or response two times, once before the other middlewares have been executed, and then after the other middlewares have executed. This allows for a middleware at the top of the pipeline to handle for example, a 401 response set by a middleware further down in the pipeline.
An example of this are the authentication middlewares that change a 401 response to a 302 redirect to a login page.
You can find the definition of this pipeline on the Startup.cs file, in the Configure method. For example, here’s the pipeline that you get when you do a dotnet new mvc:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
Every time a request comes in to this ASP.NET Core application it can go through at most three middlewares. First the DeveloperExceptionPage/ExceptionHandler middleware, depending on whether the ASP.NET Core application is running in development mode or not. Then the StaticFiles middleware, and finally the Mvc middleware.
The middleware that is key here is StaticFiles. This is the middleware that serves files contained in the wwwroot folder, i.e. if a request comes in for index.html and there’s an index.html file at wwwroot/index.html then that file is sent to the client. The StaticFiles middleware won’t call the middlewares below it after this (in this case that would be Mvc).
You can probably already see how this could work with an Angular application. Just put it under wwwroot.
That’s absolutely correct, however there’s a detail about StaticFiles that is important to know. StaticFiles won’t make any guesses for you, i.e. if your request is for /, StaticFiles won’t look for /index.html. It will just assume that this request isn’t supposed to be handled by it and will call the next middleware in the pipeline, in this case Mvc.
For this approach to work you need another middleware named DefaultFiles which must come before StaticFiles in the pipeline:
//...
app.UseDefaultFiles();
app.UseStaticFiles();
//...
DefaultFiles will cause StaticFiles to look for index.html if the url ends with /.
Now the only thing left to do is to configure your Angular CLI to compile to your ASP.NET Core application’s wwwroot folder.
If you look in your Angular’s application folder you’ll find a .angular-cli.json file. In that file look for the outDir property:
...
"apps": [
{
    ...
    "outDir": "dist",
...
Change it from “dist” to the path of your ASP.NET Core’s wwwroot folder. Run ng build in your Angular application and now if you run your ASP.NET Core web application you should see your Angular application in the browser.
A nice development workflow is to run the Angular CLI build in watch mode: In a console window do ng build --watch or ng build -w if you want to save a few key strokes, and leave it running. Now every time you make a change in your Angular application you can just refresh the browser and see the change (you also need to have your ASP.NET Core application running).
There is one thing missing from this approach, though: deep-linking support. If your Angular application uses routing and you send a user a url with a valid Angular route (e.g. http://yourapplication.com/products/5), the receiving user won’t be able to open it. Trying to get to that route will result in a 404 Not Found response.
That’s because the request will go all the way through your ASP.NET Core application’s pipeline, and when it reaches the MVC middleware it won’t know what to do with it and will set the response’s status code to 404 Not Found.
What we can do is, at the top of the pipeline, look for a 404 response that is about to be sent and change its path to our Angular application’s index.html file (that way what gets served is the Angular application, which will know what to do with the url in terms of routing). After this we make the request go through the pipeline again:
//add this at the start of Configure
app.Use(async (HttpContext context, Func<Task> next) =>
{
    await next.Invoke();

    if (context.Response.StatusCode == 404)
    {
        context.Request.Path = new PathString("/index.html");
        await next.Invoke();
    }
});
That fixes deep links but introduces a new problem. What if your web api (that you’ve implemented in your ASP.NET Core application) needs to send a 404 response? That’s something more than reasonable to do. Instead of a 404, the service call will receive a 200 response containing index.html.
The solution here is to look at the url and decide if it’s intended for the web api or an Angular route. Usually a call to the web api will have /api in the url. That’s a simple test to perform and it will solve this problem. Here’s the revised version of a custom middleware that solves this problem:
//add this at the start of Configure
app.Use(async (HttpContext context, Func<Task> next) =>
{
    await next.Invoke();

    if (context.Response.StatusCode == 404 && !context.Request.Path.Value.Contains("/api"))
    {
        context.Request.Path = new PathString("/index.html");
        await next.Invoke();
    }
});
One last note about this approach. I’ve seen examples where the Angular application is in the same Visual Studio solution as the ASP.NET Core application. Visual Studio (not VS Code) will try to compile the TypeScript files. If you are using ng build -w you’ll want Visual Studio to leave your TypeScript files alone. To do that, open your project’s .csproj and add the following in any PropertyGroup:
<TypeScriptCompileBlocked>true</TypeScriptCompileBlocked>
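For context, the element (note the casing: TypeScriptCompileBlocked) sits inside a PropertyGroup in the .csproj; a sketch, where the TargetFramework value is illustrative:

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <TypeScriptCompileBlocked>true</TypeScriptCompileBlocked>
</PropertyGroup>
```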

Nginx

Nginx is a web server that can act as a reverse proxy for ASP.NET Core applications and which is also very good at serving static content.
The setup for having an Angular application work with ASP.NET Core is much simpler in Nginx. You just need a configuration similar to this:
server {
    listen 80;        

    location / {
        root /pathToYourAngularApplication/dist;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        proxy_pass http://localhost:5000;
    }
}
This is what a typical Nginx configuration file looks like. If you are not familiar with Nginx and ASP.NET Core, I recommend my blog post HTTPS in ASP.NET Core from Scratch. It has a section with instructions on how to install and set up websites using Nginx.
This configuration allows us to have both the Angular and ASP.NET Core application on port 80. Let’s look at the important parts in it.
The listen 80 statement establishes that Nginx will respond to requests coming in on port 80.
The location blocks are where we define how our two applications (Angular and ASP.NET Core) will be served. Each time a request comes in, Nginx looks at the URL and tries to find the location block that best matches it. In this case the location block URLs act as a "prefix match", i.e., the first block will match every URL (every URL starts with a /), while the second matches URLs that start with /api/.
Nginx picks the most "specific" location block, so even though a request for /api/users matches both location blocks, the second one (/api/) is more specific and will be the one used to handle the request.
In the first location block (/):
root /pathToYourAngularApplication/dist sets the path where static content will be looked for as the location where your compiled Angular application files are (dist is the CLI’s default output folder).
index index.html specifies which file should be served for URLs that end in /.
try_files $uri $uri/ /index.html can be read this way: check if there’s a file that matches the normalized URL (e.g. http://www.yourwebsite.com/assets/image.jpg -> /assets/image.jpg), if that file does not exist try the normalized URL plus a / (e.g. http://www.yourwebsite.com/documents -> /documents/ -> /documents/index.html because of the index rule). If all of that fails serve the file /index.html.
Serving /index.html when no match is found is what enables us to use deep linking. For example, a URL such as http://www.yourwebsite.com/documents with no match in the file system will be answered with the Angular application's index.html. index.html loads all the files the Angular application needs to run, notably the routing module. The routing module then looks at the URL and, according to the routes defined in the Angular app, decides which component to load.
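To make that last step concrete, here is a hedged sketch in TypeScript of what "look at the URL and decide which component to load" amounts to. This is not Angular's actual router implementation; the route table and component names are illustrative:

```typescript
// Illustrative route table, similar in shape to an Angular Routes array.
interface Route { path: string; component: string; }

const routes: Route[] = [
  { path: 'documents', component: 'DocumentsComponent' },
  { path: '', component: 'HomeComponent' },
];

// Once index.html has loaded the app, the router matches the URL path
// against the table and picks the component to render.
function matchRoute(urlPath: string): Route | undefined {
  const segment = urlPath.replace(/^\//, '');
  return routes.find(r => r.path === segment);
}
```

So a deep link to /documents, answered with index.html, still ends up rendering the component registered for that route.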
Finally, the last location block. It instructs Nginx to forward the requests that start with /api/ to a webserver that is listening on port 5000 on localhost. That will be your ASP.NET Core’s application.
One note about Nginx's syntax for proxy_pass. It matters a lot whether the URL for the application has a / at the end or not. The URL in proxy_pass is treated differently if it has what Nginx's documentation describes as an "optional URI" (not a great name, since in the end a URL is a URI).
An example of a URL with an optional URI is: http://localhost:5000/optionalURI/. If the location’s path is /api/, then a request for http://yourwebsite.com/api/users will be forwarded to your ASP.NET Core’s application as http://localhost:5000/optionalURI/users.
That’s why not adding the / at the end in proxy_pass is so important, because if you do (e.g.: proxy_pass http://localhost:5000/;) it falls into the “optional URI” category (it will be interpreted as an empty optional URI), and a request for http://yourwebsite.com/api/users will be seen in your ASP.NET Core’s application as a request for http://localhost:5000/users.
If you don’t add the / at the end (e.g.: proxy_pass http://localhost:5000;) then a request for http://yourwebsite.com/api/users will be seen in the ASP.NET Core application as a request for http://localhost:5000/api/users which is probably what you want.
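Side by side, the two forms look like this (ports and paths as in the example above; these are alternatives, so only one of them would go in a real server block):

```nginx
# Without a trailing slash: the full original path is preserved.
# /api/users  ->  http://localhost:5000/api/users
location /api/ {
    proxy_pass http://localhost:5000;
}

# With a trailing slash (an empty "optional URI"): the matched prefix is replaced.
# /api/users  ->  http://localhost:5000/users
location /api/ {
    proxy_pass http://localhost:5000/;
}
```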
If you need a more complete example that explains how you can make this work outside a development-time scenario (i.e. have your ASP.NET Core application auto start and remain online even if there’s an exception) check out HTTPS in ASP.NET Core from Scratch where there’s an example describing how you can use Supervisor to keep the ASP.NET application running even in the event of errors (by auto restarting it).

IIS

With IIS it is very cumbersome to achieve a configuration similar to what we had with Nginx, where both the Angular and ASP.NET Core applications are served on port 80.
To understand why, it helps to know the IIS concepts of Website and Application. When you create a website you define (among other settings) the port it will be served from (e.g. 80). A website can then have several applications "inside" it, all of which share the website's configuration (and are therefore served on the same port).
We could for example put our Angular application inside the “Default Web Site” and the ASP.NET Core one as an IIS Application under it, and call it for example “api”.
If the "Default Web Site" responds at http://localhost, then the ASP.NET Core application could be at http://localhost/api, which seems to be exactly what we want. However, requests for http://localhost/api would be seen in ASP.NET Core without the api in the URL.
As far as I know there’s no way to change this behavior.
This means your ASP.NET Core application will behave differently when running inside IIS vs when executed directly (either in Visual Studio or with dotnet run).
To make matters worse, an ASP.NET Core application needs to be published (dotnet publish) for it to work in IIS. It's not like a non-Core ASP.NET application, where you can just point an IIS Application at the folder that contains the application's files.
So when using IIS the reasonable options are to either have ASP.NET Core serve the Angular application, as described in the first section of this article, or to have two separate websites.
Let's walk through the process of creating two separate websites: first a website for the Angular project, and then one for ASP.NET Core.

Angular in IIS

We’ll be adding a Website named MyNgWebSite on port 80. That means that if you have a “Default Web Site”, which in all likelihood you’ll have, you need to stop it or change its bindings since the default for it is port 80.
But before we get there we need to create an application pool for our Angular application. Right click on Application Pools in IIS Manager:
Right click on application pools
The Application Pool for an Angular application does not require Managed Code (we only need to serve static files). We should choose “No Managed Code” in the .NET CLR Version:
No Managed Code application pool
We can now add a new IIS web site and set the new application pool we created as its application pool:
Add new web site
Configure website
The physical path should be set to where your Angular project is compiled to; usually this is the dist folder.
If you were to try to access http://localhost right now (and assuming that you stopped the "Default Web Site" or used a different port than 80) you would get a permissions error. That's because when you create an application pool, a "virtual" user is created. That user is a local user and must have permission to access the folder containing the files you are trying to serve.
That user’s name is IIS AppPool\ApplicationPoolName, in this example it’s IIS AppPool\ApplicationPoolForAngular.
Go to the folder that contains the compiled Angular project, right click on it and select properties, go to the security tab, click edit, then add and finally add the application pool user:
Configure permissions for the folder
You should now be able to access your Angular application at http://localhost.
We still need to do one more thing, though: enable deep-linking support.
If you have routes in your Angular application, they won't work if someone tries to access them from "outside" the Angular app. What this means is that if navigating to http://localhost/documents is valid inside the Angular application and you send that URL to someone else, when they click the link they will be greeted with a 404 page from IIS.
That's because there is no documents folder (nor an index file inside it) for IIS to serve. We need to tell IIS that it must serve index.html when someone tries to access a URL that does not exist.
We are going to use the same mechanism used for having a custom 404 page, but instead of a 404 page we’ll serve the Angular application.
To achieve this we need to create a web.config file and put it in the src folder of the Angular application with this inside:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404"/>
            <error statusCode="404" responseMode="ExecuteURL" path="/index.html"/>
        </httpErrors>
    </system.webServer>
</configuration>
A very quick explanation of what’s going on. We are using httpErrors with an errorMode="Custom" and existingResponse="Replace". This instructs IIS to replace the default error pages with the one we are about to specify.
remove statusCode="404" will remove any custom settings for 404 pages if they already exist.
error statusCode="404" responseMode="ExecuteURL" path="/index.html" will configure IIS to execute the /index.html url if there’s a 404 error. This will effectively serve your Angular application and won’t change the URL seen by the client.
Now we need to edit the .angular-cli.json file so that web.config gets copied to the output folder as an asset when the application is compiled. The assets section is under “app”, here’s an example:
{
  "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
  "project": {
    "name": "your-app"
  },
  "apps": [
    {
      "root": "src",
      "outDir": "dist",
      "assets": [
        "assets",
        "favicon.ico",
        "web.config"
      ],
      "index": "index.html",
...

ASP.NET Core in IIS

The process for configuring an ASP.NET Core application in IIS is similar, although we need to select a different port.
But before you start, make sure you have the ASP.NET Core Module for IIS installed. It might already be installed if you installed the .NET Core SDK; the best way to be sure is to open IIS Manager and check whether it appears in the module list:
Module list with AspNetCoreModule
If you don’t have it you can find more information about it here and a direct link to download it here.
This module takes care of starting and keeping an ASP.NET Core application running.
Before we create the website in IIS we need the published version of the ASP.NET Core application. You can do that on the command line with dotnet publish or, in full Visual Studio, by right-clicking the project, selecting Publish, and then clicking Publish to folder.
Create a new Website and point it to the ASP.NET Core project published folder, give it a different port number (for example 8080) and create an Application Pool for it.
An application pool for an ASP.NET Core application is also unmanaged (No Managed Code). Although this might seem odd, it’s because IIS is actually just acting as a reverse proxy.
Before we're able to run the ASP.NET Core project using IIS, we need to change the published folder's permissions so that the application pool user can access it. If you don't, you'll get this moderately unhelpful error message:
HTTP Error 500.19 – Internal Server Error
The requested page cannot be accessed because the related configuration data for the page is invalid.
IIS permissions error
If you look at the Config Error section you’ll see “Cannot read configuration file due to insufficient permissions”, which pretty much says it all.
Go to the published folder and add the application pool user to the list of users with permissions over that folder.
Your ASP.NET Core application should now be available on the port you selected when you created the website in IIS. However, if you try to call it from the Angular application you'll get the error "Failed to load … No 'Access-Control-Allow-Origin' header is present on the requested resource…". Here's how that looks in the developer tools console tab:
Failed CORS request
That's because even though both our Angular and ASP.NET Core applications are on the same domain, they are on different ports, and that's enough to qualify the request as a Cross-Origin Resource Sharing (CORS) request in all browsers except IE.
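A hedged sketch of the browser's same-origin rule, using Node's URL parser rather than actual browser internals, makes it clear why a port difference alone is enough:

```typescript
// Two URLs share an origin only if scheme, hostname, AND port all match.
function sameOrigin(a: string, b: string): boolean {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol
    && ua.hostname === ub.hostname
    && ua.port === ub.port;
}

// Same host, different ports: cross-origin, so CORS applies.
sameOrigin('http://localhost/', 'http://localhost:8080/'); // false
```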
We need to enable CORS in the ASP.NET Core application. To do that, add the package Microsoft.AspNetCore.Cors and, in the ConfigureServices(IServiceCollection services...) method in Startup.cs, add services.AddCors():
public void ConfigureServices(IServiceCollection services)
{
    //...
    services.AddCors();
    //...
}
And in the Configure method we need to create a “policy” that says that we are expecting requests from http://localhost. We should do that before the MVC middleware:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    //...
    app.UseCors(builder => builder.WithOrigins("http://localhost"));
    app.UseMvc();
}
You should be good to go: your Angular and ASP.NET Core applications should both be working now.

Platform Agnostic Development Setup

Both Angular and ASP.NET Core applications provide ways to detect whether they are running in development or production mode. That can be leveraged to create a setup that works on Linux, Windows, and Mac.
The easiest way to run an Angular application is with ng serve. That spins up a webpack development server that serves the Angular application on port 4200 by default.
This also has the advantage of hot module replacement, which means you can see changes to the Angular application as soon as you make them, without even having to refresh the browser.
So ideally we want to run the Angular application this way.
For the ASP.NET Core application, we want to run it without having to publish it, which you would have to do if it were being served by IIS.
This is the ideal development scenario, ng serve for Angular and dotnet run or running the ASP.NET Core from Visual Studio without having to publish it.
In this ideal scenario, when developing we could have the Angular application running on port 4200 (through ng serve) and the ASP.NET Core application running on port 5000. In production, the Angular application would typically be served from port 80 and the ASP.NET Core application from port 8080, for example (or from a different server on port 80).
On the ASP.NET Core side of things we’d have to configure CORS to accept requests from port 4200 when in development and from port 80 when in production. In Startup.cs that would look like this:
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors();
    //...        
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    //...
    if (env.IsDevelopment())
    {
        //...
        app.UseCors(builder => builder.WithOrigins("http://localhost:4200"));
    }else 
    {
        app.UseCors(builder => builder.WithOrigins("http://localhost"));
    }

    app.UseMvc();
}
That takes care of the ASP.NET Core application.
For Angular we need to leverage the environment.ts and environment.prod.ts files. You can find them under a folder named environments under the src folder of an Angular project.
What you put on environment.ts will be available when you run in development mode (the default) and the values in environment.prod.ts will be used when in production. To compile the Angular project with the environment set to production use the --env=prod flag (e.g. ng build --env=prod).
Here’s a simple example of how the environment files could be configured to support our hypothetical scenario, environment.ts:
export const environment = {
    production: false,
    apiBaseUrl: "http://localhost:5000/"
};
environment.prod.ts:
export const environment = {
    production: true,
    apiBaseUrl: "http://localhost/"
};
In your Angular services, to get the environment values you just need to import environment (always environment, never environment.prod):
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { environment } from '../environments/environment';

@Injectable()
export class MyServiceService {

    constructor(private httpClient: HttpClient) { }

    getStuff(){
        return this.httpClient.get(`${environment.apiBaseUrl}api/stuff`);
    }  
}
This approach works whether you host on Nginx or IIS, so it is probably the best option if you need or want to support developers working on different platforms, or if you just want to compare performance between them.
