Powershell importing data from SQL into Excel COM OBJECT interop and EMAILING

I needed to pull data from an ASE database (T-SQL), filter it, and then dump it into an Excel workbook. Finally, wrap all that up and attach it to an email via SMTP…

See below for inline code or jump over to my github repo
https://github.com/wlafrance/Powershell/blob/master/QueryExcelEmail.ps


Quick NODE Demo site exposed to Internet…

WARNING – YOU WILL EXPOSE YOUR COMPUTER TO THE WORLD…

Simplest option ever: local tunnel

Even if you have a dynamic IP, or you’re behind a NAT, you can deploy your app and serve requests right from your computer using a local tunnel. This option is suited to quick testing, demoing a product, or sharing an app with a very small group of people.

A very nice tool for this, available on all platforms, is ngrok.
Using it, you can just type ngrok PORT and the port you want is exposed to the internet. You will get an ngrok.io domain, but with a paid subscription you can get a custom URL as well as more security options (remember that you are opening your machine to the public Internet).

Another service you can use is localtunnel: https://github.com/localtunnel/localtunnel

‘ILoggerFactory’ does not contain a definition for ‘AddLog4Net’ and no accessible extension method

To fix this issue you need to run a NuGet command to install the Microsoft logging extension package.

If you are getting this error:

‘ILoggerFactory’ does not contain a definition for ‘AddLog4Net’ and no accessible extension method ‘AddLog4Net’ accepting a first argument of type ‘ILoggerFactory’ could be found (are you missing a using directive or an assembly reference?)

You will need to add this extension to your project: [ link ]

Run this NuGet command from the Package Manager Console in Visual Studio.

install-package Microsoft.Extensions.Logging.Log4Net.AspNetCore
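
Once the package is installed, the AddLog4Net extension method becomes available on ILoggerFactory. As a rough sketch (assuming an older-style ASP.NET Core Startup class and a log4net.config file in the project root, which is the package's default), wiring it up looks something like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;

public class Startup
{
    // Assumes a log4net.config file in the project root (the package's default location).
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        // AddLog4Net is the extension method provided by Microsoft.Extensions.Logging.Log4Net.AspNetCore.
        loggerFactory.AddLog4Net();

        ILogger logger = loggerFactory.CreateLogger<Startup>();
        logger.LogInformation("log4net wired into the ASP.NET Core logging pipeline.");
    }
}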

 

VUE.JS and Angular – Enable Windows Authentication In Web API And Angular

Original Article: https://www.c-sharpcorner.com/article/enable-windows-authentication-in-web-api-and-angular-app/

Re-posted in case site goes away

Other resource: https://www.npmjs.com/package/activedirectory

 

 

Introduction

In this article, we will learn how to use built-in Windows authentication in a Web API and Angular application for authentication and authorization purposes.

Description

We have a requirement for an in-house project developed as an Angular app using Web API. Because this application will only be used inside the office, Windows Authentication mode is the suggested choice.

Prerequisite

To access a Web API from Angular or any Ajax method, the Web API must have CORS (Cross-Origin Resource Sharing) enabled; otherwise the request is not executed.

You can achieve this by referring to the below links.

Step 1

Create a Web API project and, in Web.config, set the authentication mode to "Windows":

[Screenshot: Enable Windows Authentication in Web API and Angular App]

Web Config Code snippet

<system.web>
    <authentication mode="Windows"></authentication>
</system.web>

Or you can publish the Web API project to IIS and enable Windows Authentication from there.

[Screenshot: Enable Windows Authentication in Web API and Angular App]

So, based on the above screenshot, you can enable Windows authentication for the Web API project in IIS.

Step 2

Use the Authorize attribute on the controller, or on any action method, to secure it.

[Screenshot: Enable Windows Authentication in Web API and Angular App]

Code snippet for WebAPI Controller

[EnableCors(origins: "*", headers: "*", methods: "*", SupportsCredentials = true)]
public partial class WebAPIController : ApiController
{
    /// <summary>
    /// This method carries the Authorize attribute for authentication and authorization.
    /// </summary>
    /// <returns></returns>
    [HttpGet]
    [Authorize]
    [Route("api/AuthenticateUser")]
    public HttpResponseMessage AuthenticateUser()
    {
        if (User != null)
        {
            return Request.CreateResponse(HttpStatusCode.OK, new
            {
                status = (int)HttpStatusCode.OK,
                isAuthenticated = true,
                isLibraryAdmin = User.IsInRole(@"domain\AdminGroup"),
                username = User.Identity.Name.Substring(User.Identity.Name.LastIndexOf(@"\") + 1)
            });
        }
        else
        {
            // This branch never executes because the Authorize attribute is applied to the action method.
            return Request.CreateResponse(HttpStatusCode.OK, new
            {
                status = (int)HttpStatusCode.BadRequest,
                isAuthenticated = false,
                isLibraryAdmin = false,
                username = ""
            });
        }
    }
}

As per the code above, I have added the [Authorize] attribute to the AuthenticateUser action method. This attribute makes sure the action is only executed if the user supplied valid credentials; otherwise a 401 Unauthorized response is returned.

Here, I have added the [Authorize] attribute only to the action method. It can be applied at the controller level as well, and if the application has multiple roles it can be extended by passing a role name to the Authorize attribute (see the sketch below). You can explore more in the Web API authorization documentation.
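
As a hedged illustration (this is not from the original article, and the group name is just an example), controller-level and role-based authorization might look like this:

// Illustrative sketch: Authorize applied at the controller level, plus a role-restricted action.
[Authorize] // every action in this controller now requires an authenticated Windows user
public class ReportsController : ApiController
{
    [HttpGet]
    [Route("api/Reports/Admin")]
    [Authorize(Roles = @"domain\AdminGroup")] // only members of this example AD group may call this action
    public IHttpActionResult GetAdminReport()
    {
        return Ok("admin-only data");
    }
}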

While testing the Web API from localhost (with Visual Studio, by hitting F5 or Ctrl + F5) I always get this error: {"Message":"Authorization has been denied for this request."}. However, if I publish the same website to IIS, I get a valid response as below with no extra work required.

{"status":200,"isAuthenticated":true,"isLibraryAdmin":false,"username":"admin"}

Here, the username (admin, but it could be any user) is the user logged in to the Windows system (or virtual machine).

Stack Overflow Question – https://stackoverflow.com/questions/24194941/windows-authentication-not-working-in-iis-express-debugging-with-visual-studio

When we run or debug the application from Visual Studio it is not hosted in IIS; instead it is hosted in IIS Express, which ships with Visual Studio and stores only the minimum or default configuration required to run an application.

So, based on the solution provided in the Stack Overflow question above, I enabled Windows authentication (<windowsAuthentication enabled="true" />) in the "applicationhost.config" file, which resides under the project root in ".vs\config". This folder is hidden, so you must enable the "show hidden files and folders" option to see it.

[Screenshot: Enable Windows Authentication in Web API and Angular App]

By default, the Windows authentication value is false in "applicationhost.config".

Now, we have successfully enabled Windows authentication in WebAPI Project.

Step 3

As per the prerequisite, enable CORS at the controller level along with SupportsCredentials = true:

[Screenshot: Enable Windows Authentication in Web API and Angular App]

As per the screenshot, enable CORS with the provided configuration. Here, instead of "*" (allow from any origin), you can restrict it to a specific IP address or domain name.
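
The controller-level [EnableCors] attribute was already shown in the Step 2 snippet. If you prefer to enable CORS globally instead, a rough sketch (assuming the Microsoft.AspNet.WebApi.Cors NuGet package; not part of the original article) looks like this:

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Same policy as the [EnableCors] attribute on the controller, but applied globally.
        // SupportsCredentials = true is required for withCredentials requests from the browser.
        var cors = new EnableCorsAttribute(origins: "*", headers: "*", methods: "*")
        {
            SupportsCredentials = true
        };
        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}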

Step 4

CORS is now enabled on the server side. Next, while requesting the API, pass the flag withCredentials: true from the frontend.

For jQuery Ajax you must pass the request as below.

$.ajax({ url: apiURL, xhrFields: { withCredentials: true }, success: successHandler });

 

For Angular you must pass the request as below.

private options = new RequestOptions({ withCredentials: true });
this.http.get(this.baseUrl, this.options)

Code Snippet

<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
    $("button").click(function(){
        // IIS localhost but with host domain name, with support credential true in Enable CORS
        //var apiURL = "http://libraryapi.com/api/authenticateuser";
        // IIS localhost with support credential true in Enable CORS
        //var apiURL = "http://localhost:3005/api/AuthenticateUser";
        // IIS Express with support credential true in Enable CORS
        var apiURL = "http://localhost:51647/api/AuthenticateUser";
        $.ajax({
            url: apiURL,
            xhrFields: { withCredentials: true },
            success: function(result){
                console.log(JSON.stringify(result));
                document.write(JSON.stringify(result));
                $("#div1").html(JSON.stringify(result));
            }
        });
    });
});
</script>
</head>
<body>
<div id="div1">
    Let jQuery AJAX to Access Web API Response with Windows Authentication
</div>
<button>Click Me to Get Web API Response</button>
</body>
</html>

Now, we have successfully consumed the Windows-authenticated Web API.

If, when accessing the Web API, the login dialog keeps prompting even though the password is correct, you need to disable the loopback check by executing this PowerShell command:

New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name "DisableLoopbackCheck" -value "1" -PropertyType dword


Conclusion

In this article, we learned how to enable Windows Authentication in Web API and how to consume the secured Web API from jQuery Ajax and an Angular app.

IIS stopped working for .NET WEBFORMS app

On my local box, IIS just stopped working for sites mapped to full IIS rather than the dev IIS Express instance.

After a bit of searching I made the following change to my web.config. I have not had time to research what change on my box broke it, but for now I just need it to work.

 

The changes…

In appSettings I added these two keys for my old ASP.NET Web Forms application:

    <add key="aspnet:UseTaskFriendlySynchronizationContext" value="false" />
    <add key="ValidationSettings:UnobtrusiveValidationMode" value="None" />

Returning data from command handlers

Written by Steven van Deursen

Original: https://blogs.cuttingedge.it/steven/posts/2012/returning-data-from-command-handlers/

I re-posted it here because I have had sites vanish before, leaving me with broken links…

Returning data from command handlers

This article extends the architectural design of command handlers to allow command handlers to return data.

UPDATE: Although the article below might still be very entertaining, my opinion on the subject has changed. The problems described below go away completely when you stop using database-generated IDs! Instead, let the consumer of that command generate an ID (most likely a GUID). In this case, since the client creates the ID, it already has that value, and you don’t have to return anything. This, by the way, has other advantages; for instance, it allows commands to be executed asynchronously (or queued), without the need for the client to wait.
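
As an illustration of that updated advice (this sketch is mine, not code from the original article), the command simply carries a client-generated GUID:

public class CreateCustomerCommand
{
    // The caller generates this ID up front, so the command never needs to return it.
    public Guid CustomerId { get; set; }

    public string Name { get; set; }
    public Address Address { get; set; }
    public DateTime? DateOfBirth { get; set; }
}

public ActionResult CreateCustomer(CreateCustomerCommand command)
{
    command.CustomerId = Guid.NewGuid(); // client-generated key; no database round trip needed
    this.handler.Handle(command);
    return this.RedirectToAction("Index", new { id = command.CustomerId });
}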

A few months back I described the command/handler architecture that I (and many others) use to effectively model business operations in a system. Once in a while a question pops up in my mail or at Stackoverflow about returning data from a command.

It seems strange at first to return data from commands, since the whole idea of Command-Query Separation is that a function should either return a value or mutate state, but not both. So without any more context, I would respond to such a question with: separate the returning of the data from the operation that mutates the state. Execute that command and execute a query after the command has finished.

When we take a closer look at the question however, we will usually see that the data being returned is an Identifier of some sort, which is the result of the creation of some entity in the system. Take a look at the following command:

public class CreateCustomerCommand
{
    public string Name { get; set; }
    public Address Address { get; set; }
    public DateTime? DateOfBirth { get; set; }
}

Since the command will create a new customer, it’s not unlikely for the caller to need the id of the customer, for instance to redirect to another page:

public ActionResult CreateCustomer(CreateCustomerCommand command)
{
    this.handler.Handle(command);
    int customerId = [get the customer Id here];
    return this.RedirectToAction("Index", new { id = customerId });
}

Still, do we really want to return values from commands? A few things to note here. First of all, returning values from commands does mean that a command can never be executed asynchronously anymore, something that architectures such as CQRS promote. Besides this, the CreateCustomerCommand seems very CRUDy, and probably doesn’t really fit an architecture like CQRS. In a CQRS like architecture, you are likely to report to your user the message “your request is being processed” or might want to poll until the operation has executed asynchronously.

For the systems I’m working on, for my customers, my fellow developers, and even myself, CQRS is a bridge too far. The idea of having all commands (possibly) execute asynchronously (and CQRS itself) is a real mind shift that I’m currently not willing to make (yet), and I can’t expect other developers to make it either. With my current state of mind, it is simply too useful to have command handlers return data to the caller. So how do we do that?

The answer is actually very simple: Define an ‘output’ property on a command:

public class CreateCustomerCommand
{
    public string Name { get; set; }
    public Address Address { get; set; }
    public DateTime? DateOfBirth { get; set; }

    // output property
    public int CustomerId { get; internal set; }
}

When a command handler sets this property during the execution, the caller can use it as follows:

public ActionResult CreateCustomer(CreateCustomerCommand command)
{
    this.handler.Handle(command);
    int customerId = command.CustomerId;
    return this.RedirectToAction("Index", new { id = customerId });
}

We can set this id from within the command handler:

public class CreateCustomerCommandHandler
    : ICommandHandler<CreateCustomerCommand>
{
    private readonly NorthwindUnitOfWork unitOfWork;

    public CreateCustomerCommandHandler(NorthwindUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }
 
    public void Handle(CreateCustomerCommand command)
    {
        var customer = new Customer
        {
            Name = command.Name,
            Street = command.Address.Street,
            City = command.Address.City,
            DateOfBirth = command.DateOfBirth,
        };

        this.unitOfWork.Customers.InsertOnSubmit(customer);

        this.unitOfWork.Commit();

        // Set the output property.
        command.CustomerId = customer.Id;
    }
}

As you can see, the CustomerId property of the CreateCustomerCommand is set at the end of the Handle method of the handler. This sounds too good to be true, and well… it depends ;-).

When the Customer.Id is generated by the database, the Commit will ensure that the Customer is persisted and will retrieve the auto-generated key and it will become available immediately after the Commit. We can therefore simply set the command’s CustomerId property after calling Commit.

The previous command handler was in complete control over the unit of work. It created that unit of work, it committed that unit of work, and it disposed that unit of work. This is a simple model I effectively used in the past, and I know others are still using this today. Letting the command handler control the unit of work, however, has its shortcomings.

This design works great when commands are small and contain little logic. It starts to fall apart, however, when commands get more complex and start to depend on other abstractions that need to run in the same context / unit of work. When the unit of work is controlled by the command handler, it is the handler’s responsibility to pass it on to its dependencies, and since those dependencies are already created at the time the handler creates the unit of work, constructor injection is out of the picture. The only thing left is passing the unit of work through method arguments (method injection). Although it doesn’t seem that bad, I worked on a system where we actually did this, and the call stacks were deep; passing the unit of work around from method to method, from class to class, was just tedious. To make our lives easier we started creating a new unit of work for some operations, but this actually made things worse, since a single use case / request resulted in multiple units of work, which sometimes led to very strange behavior, or even deadlocks.

For this reason, I stepped away from this design, and instead I inject a unit of work instance into the classes that need it. Somebody somewhere in the system must still manage the unit of work, though. This can be solved by registering the unit of work with a Per Thread or Per Web Request lifetime and implementing a command handler decorator that ensures the unit of work is committed after the handler has completed successfully (a sketch of such a decorator follows below; note that committing the unit of work at the end of the web request is typically a bad idea, since there is no way to tell whether the unit of work should actually be committed at that point). You have to realize that, although this simplifies your application code, the complexity is moved into the composition root, and the size and complexity of your application must justify this. Although I must admit that once you are familiar with these types of constructions and configurations in your composition root, you will find it easy to apply in small systems as well.
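
As a rough illustration (my sketch of the idea, not the author's exact code from his previous post; the real decorator may also wrap the call in a database transaction), such a decorator could simply commit the injected unit of work after delegating to the wrapped handler:

// Illustrative sketch: a decorator that commits the unit of work after the
// decorated handler completes successfully. If the handler throws, Commit is never called.
public class TransactionCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ICommandHandler<T> decorated;
    private readonly NorthwindUnitOfWork unitOfWork;

    public TransactionCommandHandlerDecorator(
        ICommandHandler<T> decorated, NorthwindUnitOfWork unitOfWork)
    {
        this.decorated = decorated;
        this.unitOfWork = unitOfWork;
    }

    public void Handle(T command)
    {
        this.decorated.Handle(command);

        this.unitOfWork.Commit();
    }
}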

One note about database generated keys. CQRS models the business around aggregate roots (a DDD concept), and each aggregate root gets a unique key, usually generated as a Guid, which can be generated in .NET. This means that when using CQRS, you will never run into the problem of database generated keys, which is great of course.

Aggregates in DDD are a group of domain objects that belong together. The Aggregate Root is the thing that holds them together. An Order, for instance, may have order lines, and those order lines cannot live without that order. The order is therefore the aggregate root and has a unique (global) identifier. An order line does not need a (global) identifier (although it might have a local identifier, only known inside the aggregate), since it will never be referenced directly; other aggregates will only reference the order, never the lines. The lines may need an identifier in your relational database, but you would probably never return them from a command, since they are purely internal to the aggregate. If such an identifier is needed, returned, or referenced from other aggregates, the lines are probably not part of that aggregate and the system is incorrectly modeled, according to DDD.

This also means that the system will not have as many primary keys as a non-DDD system would have. In a normal relational database, each order line will usually get its own auto-number primary key. In that case, it is much more likely that you will run into performance problems when using Guids. A Guid is 16 bytes (12 bytes bigger than an Int32), and every database index of a certain table will contain the primary key of that table, making each index 12 bytes times the number of records in the table bigger. Disk space is cheap, but I/O isn’t. When doing complex queries over large amounts of data, lowering the amount of I/O is important. And don’t forget the clustered index fragmentation that random Guids cause.

Long story short, you might be in the situation where you don’t use DDD / CQRS, want to return a database-generated value from your command handlers, and have a design where command handlers don’t control the unit of work. How do we do this?

Since database-generated IDs only become available after the data is saved to the database, and committing happens after the handler has executed, we need a construct that allows the handlers to execute some code after the commit operation. We can introduce a new abstraction to the system, which command handlers can depend upon, and which allows them to register a post-commit operation:

public interface IPostCommitRegistrator
{
    event Action Committed;
}

This interface defines a single event, on which command handlers can depend to register their post-commit operation. The previously defined CreateCustomerCommandHandler will now look like this:

public class CreateCustomerCommandHandler
    : ICommandHandler<CreateCustomerCommand>
{
    private readonly UnitOfWork unitOfWork;
    private readonly IPostCommitRegistrator postCommit;

    public CreateCustomerCommandHandler(
        UnitOfWork unitOfWork, IPostCommitRegistrator postCommit)
    {
        this.unitOfWork = unitOfWork;
        this.postCommit = postCommit;
    }
 
    public void Handle(CreateCustomerCommand command)
    {
        var customer = new Customer
        {
            Name = command.Name,
            Street = command.Address.Street,
            City = command.Address.City,
            DateOfBirth = command.DateOfBirth,
        };
 
        this.unitOfWork.Customers.InsertOnSubmit(customer);
 
        // Register an event that will be called after commit.
        this.postCommit.Committed += () =>
        {
            // Set the output property.
            command.CustomerId = customer.Id;
        };
    }
}

This command handler registers a delegate to the IPostCommitRegistrator, which is injected through the constructor (note that you should only inject the IPostCommitRegistrator into a handler that actually needs it).

From the application design perspective, this really is all there is to it. However, there is some more work to do inside the composition root. For instance, we need an implementation of this IPostCommitRegistrator:

private sealed class PostCommitRegistratorImpl : IPostCommitRegistrator
{
    public event Action Committed = () => { };
 
    public void ExecuteActions()
    {
        this.Committed(); 
    }

    public void Reset()
    {
        // Clears the list of actions.
        this.Committed = () => { };    
    }
}

This implementation is very simple. It just implements the Committed event and defines an ExecuteActions method, which will be called from the code that manages the transactional behavior of the command handlers. In my previous post I defined a TransactionCommandHandlerDecorator<T>, which allowed executing the commands in a transactional manner. Although we could extend that class to add this post-commit behavior, I like my classes to be focused and have a single responsibility. Let’s define a PostCommitCommandHandlerDecorator<T> that has the sole responsibility of executing the registered post-commit delegates after a transaction has been committed successfully:

private sealed class PostCommitCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly ICommandHandler<T> decorated;
    private readonly PostCommitRegistratorImpl registrator;
 
    public PostCommitCommandHandlerDecorator(
        ICommandHandler<T> decorated, PostCommitRegistratorImpl registrator)
    {
        this.decorated = decorated;
        this.registrator = registrator;
    }
 
    public void Handle(T command)
    {
        try
        {
            this.decorated.Handle(command);
 
            this.registrator.ExecuteActions();
        }
        finally
        {
            this.registrator.Reset();
        }
    }
}

This decorator depends on the PostCommitRegistratorImpl directly and during the Handle method—after the transaction completes successfully—the ExecuteActions method of the PostCommitRegistratorImpl is called. Note that this decorator depends on the PostCommitRegistratorImpl implementation and not on the IPostCommitRegistrator interface. The interface does not implement the ExecuteActions method, and we don’t want it to, since we don’t want any command handler to call that method directly. We do however want this class to be able to execute the registered delegates, so we need it to access the implementation. Since both classes are part of the composition root, this is fine. The application code itself has no notion of the PostCommitRegistratorImpl nor the PostCommitCommandHandlerDecorator<T>.

Our last task is to wire up all the dependencies correctly. This isn’t really difficult, but does need a certain state of mind, since you need to carefully consider the lifestyle of PostCommitRegistratorImpl. Up until this point this article was container agnostic. Here is an example of how to configure this using Simple Injector:

container.Register(
    typeof(ICommandHandler<>), 
    typeof(ICommandHandler<>).Assembly);

container.RegisterDecorator(
    typeof(ICommandHandler<>), 
    typeof(TransactionCommandHandlerDecorator<>));

container.RegisterDecorator(
    typeof(ICommandHandler<>), 
    typeof(PostCommitCommandHandlerDecorator<>));
 
container.Register<PostCommitRegistratorImpl>(Lifestyle.Scoped);
container.Register<IPostCommitRegistrator, PostCommitRegistratorImpl>(
    Lifestyle.Scoped);

The previous registration does a few things:

  • First it registers all public ICommandHandler<T> implementations that live in the same assembly as the ICommandHandler<T> does.
  • Next it registers the TransactionCommandHandlerDecorator<T> to be wrapped around each command handler implementation.
  • Next it registers the PostCommitCommandHandlerDecorator<T> to be wrapped around each TransactionCommandHandlerDecorator<T>. It is important that the post-commit decorator is wrapped around the transaction decorator; if they were decorated the other way around, the registered delegates would be called before the transaction is committed, and the system would behave incorrectly.
  • The PostCommitRegistratorImpl is registered with the Scoped lifestyle. Since we want to inject the same instance into both the command handler and the post-commit decorator, we can’t use the transient lifestyle, since that would new up a new instance each time it is injected. Using a single instance for the whole application, however, is only possible when the application is single-threaded (which can be the case if you run the handlers in a Windows Forms application or a Windows Service); hence the scoped (per-request) lifestyle used above.
  • Since the application does not depend on PostCommitRegistratorImpl but on the IPostCommitRegistrator interface, we need to register this as well.

Conclusion

Again, one simple abstraction effectively solves the problem we had. What is nice about this design is that it keeps the code of the command handlers pretty clean, and although not shown, it is easy to write unit tests for this as well.