Espresso Fueled Agile Development : Web Client Software Factory News Feed 

Over the last few weeks (in my copious spare time between midnight and six in the morning), I have been updating the Data Access Guidance code base to VS2010 Beta 2 in preparation for a few talks I am giving on Data Access patterns.  As soon as I am happy with the code, I will post something on the project's CodePlex site at http://dataguidance.codeplex.com/. Unfortunately, there were enough changes between Beta 1 and Beta 2 that this has been a bit more work than I anticipated.  However, these changes will allow me to remove a fair bit of code that we had put in to deal with foreign keys in POCO scenarios, as that functionality is built into Beta 2.
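(For context, Beta 2's built-in foreign key support lets a POCO expose the FK value directly on the entity, so the extra fix-up code goes away. A rough, hypothetical sketch of such a model, not taken from the guidance code base:)

using System.Collections.Generic;

// Hypothetical POCO entities using EF4-style foreign key associations.
public class Customer
{
    public int CustomerId { get; set; }
    public ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int OrderId { get; set; }
    public int CustomerId { get; set; }      // the foreign key, exposed on the POCO
    public Customer Customer { get; set; }   // the navigation property
}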


Additionally, as soon as I have a chance (again in my spare time when I should be sleeping), I will continue the process and get the code working under VS2010 RC. 


Also, you should see Don's response to some questions on the Data Access project site (http://dataguidance.codeplex.com/Thread/View.aspx?ThreadId=79170) about what happened to the project and why these updates are happening opportunistically rather than as part of my day job. :-)

For the past several months, I have been working with Blaine and Francis on the Web Client Developer Guidance project.  This project (when we ship) will provide further guidance for web developers who are creating rich, responsive web applications.  We have spent considerable time focusing on things like JavaScript, MVC development, security, separated presentation, unit testing, and a number of other areas.  In addition to a solid reference implementation (read as "intentionally incomplete sample application"), we also have the source for a reusable library that helps create modular, composable web apps with DI in either ASP.NET WebForms or ASP.NET MVC.  For more information, check out:



We currently have our bi-weekly code drops available for download, and we are wrapping up the code and focusing on the written documentation. Please feel free to provide feedback via the discussion lists on CodePlex. If you do, please use the tag "Web Guidance v-Next (not WCSF)".


Over the next few weeks I hope to write up a few blog posts around some of the challenges we ran into and solved so far, as well as the design decisions we made.


 

For the last few months, I have been working with Don on developing guidance for developers who have to deal with data access. We have a CodePlex project up and running at http://dataguidance.codeplex.com/. We have been working for a while, but (unlike most p&p projects) we have been building on pre-beta technologies, which meant we could not ship our source code in a way that you could consume.  Recently, Visual Studio 2010 Beta 1 and ASP.NET MVC for VS2010 Beta 1 were made available publicly, meaning we could ship our code.  Two weeks ago, we finished an iteration and made our first public drop available (Drop 2009.06.17), using Visual Studio 2010 Beta 1, Entity Framework 4, ASP.NET MVC for Visual Studio 2010 Beta 1, and the Composite Application Library from Prism.  This week (after another two-week iteration), we released another drop (Drop 2009.07.02), building on what we had and improving it a bit.  Check out the project and the code (which is still evolving), and give us some feedback on how we are doing things and what you think of our implementations of the Repository pattern, the Specification pattern, our RESTful service implementation, and the rest of the code.  Keep in mind, we are focusing on the data access part of the project; the rest (the web app and WPF app) is there to show how to use the data access layer.
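As a rough sketch of the shape of those first two patterns (illustrative types only, not the project's actual code): a Specification captures a reusable query predicate, and a Repository exposes domain-centric data operations over it.

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// A Specification encapsulates a reusable, composable query predicate.
public class Specification<T>
{
    public Specification(Expression<Func<T, bool>> predicate)
    {
        Predicate = predicate;
    }

    public Expression<Func<T, bool>> Predicate { get; private set; }
}

// A Repository mediates between the domain model and the data access technology.
public interface IRepository<T>
{
    void Add(T item);
    IEnumerable<T> Find(Specification<T> specification);
}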

Also, look for a new drop every two weeks (or so) until we finish the project.

Enjoy…

Ajoy has posted that three of the sessions are now available online at pnp Summit videos online.  Since one of the three sessions is one that Grigori and I did about the Acceptance Testing Guide, Driving Development with Acceptance Testing, I figured I'd share the information.  The other two sessions are Ade talking about distributed agile development, and a discussion with Ajoy on the new SharePoint guidance from p&p that I have mentioned before (Guidance on SharePoint, Unit testing SharePoint).  If you missed the Summit and want to attend next year in Redmond (or elsewhere), keep an eye on http://www.pnpsummit.com/, which will eventually be updated with information on the next conference.


 


 

Blaine and I would like to ask the community to let us know "who is using WCSF?"  We know that WCSF has been fairly successful based on:

  • WCSF has been downloaded a few times (ok, it is really many thousands of times)
  • The discussion forums on CodePlex are fairly busy
  • Direct customer engagement with a number of companies

However, there are a lot of folks who are using WCSF that we do not know about.  Since we get requests all the time about who is using WCSF, the size of projects, the size of dev teams, etc., we would love to collect some more data and statistics.  So, if you have used or are using WCSF on a project, let us know via comments, email, or discussions on CodePlex.  If you have stopped using it, please let us know why, so we can possibly address your concerns.

To make things simple, I have created a thread on the WCSF CodePlex discussion forum so you can easily provide feedback: http://www.codeplex.com/websf/Thread/View.aspx?ThreadId=41405

Thanks.

Blaine & I have been getting a lot of emails, messages, comments, etc. about WCSF.  There are a lot of people interested in a public statement from patterns & practices about the roadmap for WCSF and future support of WCSF, in addition to the "normal" feature requests.

Well, we have some good news: Blaine has publicly released the WCSF roadmap in his blog as Roadmap for WCSF.

Basically, right now, we are asking for feedback from the community.  Let Blaine and/or me know what you want via our blogs, email, or the WCSF CodePlex site at www.codeplex.com/websf.  You can also create work items/issues on the WCSF CodePlex site and have people vote on them (http://www.codeplex.com/websf/WorkItem/List.aspx). 

Thanks in advance for the feedback...

[Edit: I created a specific thread on the CodePlex discussions board to facilitate feedback: http://www.codeplex.com/websf/Thread/View.aspx?ThreadId=41403. Enjoy]


You all might be amazed by the number of emails, messages, and calls I get asking for any of the following:

  • Guidance on SharePoint
  • Requests to add SharePoint Guidance to WCSF
  • Requests to add "Office applications" support to SCSF

Well, I can finally help out folks looking for guidance on how to build SharePoint applications by saying "go check out the CodePlex project for patterns & practices SharePoint Guidance (http://www.codeplex.com/spg)."  If you are hoping for SharePoint support in WCSF, you should probably check this out too.

Blaine and Francis are leading the effort on this project.  I understand that they are tackling a few big challenges that customers have ranked as the most important including (this list is from the project's Vision Scope slide deck):

  • Unit testing and debugging
  • Packaging and deployment
  • Setting up a team development environment
  • Unclear which SharePoint features/components to use and when
  • Solution maintenance/upgrade
  • How and when is SharePoint Designer applicable

If you do SharePoint development, you should check it out.

Chris Tavares and I were chatting yesterday morning about an idea Chris had: building a simple, reusable HttpModule that gives folks DI scoped to the Application, Session, and Request.  Yesterday afternoon, during the p&p Dev team's weekly "Code Kata," we threw together a spike/proof of concept in a couple of hours.  "Code Kata" is a three-hour block of time that the p&p Dev team uses for a number of things:

  • cross-project pollination of ideas, things that work, things that don't work across project teams
  • investigating new and emerging technologies that p&p may work with in the future
  • investigating new and emerging platforms to determine if p&p may want to provide guidance in the future
  • playing with new shiny bits :-)

There are no unit tests, just simple acceptance tests written as a really simple web site.  If the site works and shows the right text, the test passes.  We started with a list of requirements (scaled to just the application level) on the whiteboard that looked sort of like this (which I am creating from memory):

  • Create a DI container for the application
  • Create a way to get to the container (we chose an extension method on the Application class)
  • Allow a way to configure the container
  • Allow DI to work for pages
  • Allow DI to work for user controls
  • Allow DI to work for master pages
  • Allow DI to work for ASMX web services
  • Allow the above functionality in a simple and self-contained way

So, we created a simple Web Site Application and a DLL to hold the custom HttpModule.  We created a simple web page that needed a property injected into it.  We then wrote the code to make our "test" page work properly.  First, we wrote an extension method for HttpApplication; after a short bit, it changed to one for HttpApplicationState:

using System.Web;
using Microsoft.Practices.Unity;

namespace UnityForTheWebLib
{
    public static class HttpApplicationStateExtensions
    {
        private const string GlobalContainerKey = "Your global Unity container";

        public static IUnityContainer GetContainer(this HttpApplicationState application)
        {
            application.Lock();
            try
            {
                IUnityContainer container = application[GlobalContainerKey] as IUnityContainer;
                if (container == null)
                {
                    container = new UnityContainer();
                    application[GlobalContainerKey] = container;
                }
                return container;
            }
            finally
            {
                application.UnLock();
            }
        }
    }
}


Basically, we have lazy initialization of the container and a mechanism to stuff it into the application state.  And here is the initial HttpModule, which only does injection on the Page:


using System;
using System.Web;
using Microsoft.Practices.Unity;

namespace UnityForTheWebLib
{
    public class UnityHttpModule : IHttpModule
    {
        #region IHttpModule Members

        /// <summary>
        /// Initializes a module and prepares it to handle requests.
        /// </summary>
        /// <param name="context">An <see cref="T:System.Web.HttpApplication"></see> that provides access to the methods, properties, and events common to all application objects within an ASP.NET application.</param>
        public void Init(HttpApplication context)
        {
            context.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
        }

        /// <summary>
        /// Disposes of the resources (other than memory) used by the module that implements <see cref="T:System.Web.IHttpModule"></see>.
        /// </summary>
        public void Dispose()
        {
        }

        #endregion

        private void OnPreRequestHandlerExecute(object sender, EventArgs e)
        {
            IHttpHandler handler = HttpContext.Current.Handler;
            HttpContext.Current.Application.GetContainer().BuildUp(handler.GetType(), handler);
        }
    }
}

This is a very simple module that calls BuildUp on the page. 
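To show the consuming side, here is a hypothetical page (not part of the actual spike code) whose dependency gets populated by the module; IControlData and its GetText method match the test-site types used later in this post:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.Practices.Unity;

namespace UnityForTheWebTestSite
{
    public class InjectedPage : Page
    {
        // In the real test site this would be a control declared in the .aspx markup.
        protected Label ResultLabel = new Label();

        // The module's BuildUp call populates this property from the container.
        [Dependency]
        public IControlData Data { get; set; }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            // If the right text shows up on the page, our "acceptance test" passes.
            ResultLabel.Text = Data.GetText();
        }
    }
}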


At this point we added another test, and implemented it.  After a few iterations of this, we ended up with tests for injection into a User Control and a Master Page, plus an interface that needed to be configured on the container.  The final implementation of the HttpModule looks like this:



using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using Microsoft.Practices.Unity;

namespace UnityForTheWebLib
{
    public class UnityHttpModule : IHttpModule
    {
        #region IHttpModule Members

        /// <summary>
        /// Initializes a module and prepares it to handle requests.
        /// </summary>
        /// <param name="context">An <see cref="T:System.Web.HttpApplication"></see> that provides access to the methods, properties, and events common to all application objects within an ASP.NET application.</param>
        public void Init(HttpApplication context)
        {
            context.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
        }

        /// <summary>
        /// Disposes of the resources (other than memory) used by the module that implements <see cref="T:System.Web.IHttpModule"></see>.
        /// </summary>
        public void Dispose()
        {
        }

        #endregion

        private void OnPreRequestHandlerExecute(object sender, EventArgs e)
        {
            IHttpHandler handler = HttpContext.Current.Handler;
            HttpContext.Current.Application.GetContainer().BuildUp(handler.GetType(), handler);

            // User Controls are ready to be built up after the page initialization is complete
            Page page = HttpContext.Current.Handler as Page;
            if (page != null)
            {
                page.InitComplete += OnPageInitComplete;
            }
        }

        // Get the controls in the page's control tree, excluding the page itself
        private IEnumerable<Control> GetControlTree(Control root)
        {
            foreach (Control child in root.Controls)
            {
                yield return child;
                foreach (Control c in GetControlTree(child))
                {
                    yield return c;
                }
            }
        }

        // Build up each control in the page's control tree
        private void OnPageInitComplete(object sender, EventArgs e)
        {
            Page page = (Page) sender;
            IUnityContainer container = HttpContext.Current.Application.GetContainer();
            foreach (Control c in GetControlTree(page))
            {
                container.BuildUp(c.GetType(), c);
            }
        }
    }
}


Configuration of the container can happen in the Global Application_Start handler like this:


using System;
using Microsoft.Practices.Unity;

namespace UnityForTheWebTestSite
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            IUnityContainer c = Application.GetContainer();
            c.RegisterType<IControlData, ControlData1>();
        }
    }
}

So, here is how we fared against the initial requirements:

  • Create a DI container for the application: lazy initialization
  • Create a way to get to the container: an extension method on the Application class
  • Allow a way to configure the container: Application_Start
  • Allow DI to work for pages: the HttpModule and attributes
  • Allow DI to work for user controls: the HttpModule and attributes
  • Allow DI to work for master pages: the HttpModule and attributes
  • Allow DI to work for ASMX web services: see below
  • Allow the above functionality in a simple and self-contained way: it is all in a single module

We met the last requirement (simple and self-contained), since all we did to the web application was add a single line to the web.config.
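For reference, that one line is the module registration in web.config. Under the classic <httpModules> section it would look something like this (the assembly name UnityForTheWebLib is my assumption based on the namespace above, and the exact section depends on your IIS version and pipeline mode):

<configuration>
  <system.web>
    <httpModules>
      <!-- Hook the Unity build-up module into the request pipeline -->
      <add name="UnityHttpModule"
           type="UnityForTheWebLib.UnityHttpModule, UnityForTheWebLib" />
    </httpModules>
  </system.web>
</configuration>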

The ASMX requirement is one we met, but we are not happy with the solution.  Basically, we need to do a bit more research and see if there is a way to get into the pipeline for an ASMX request early enough to do injection without screwing things up for the normal use case or the Ajax extensions.  Until we figure this out, the workaround is to have your Web Service constructor ask the container to do injection.  This is an ugly hack, and looks something like this:


using System.ComponentModel;
using System.Web;
using System.Web.Services;
using Microsoft.Practices.Unity;

namespace UnityForTheWebTestSite
{
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ToolboxItem(false)]
    // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
    // [System.Web.Script.Services.ScriptService]
    public class MyWebService : System.Web.Services.WebService
    {
        public MyWebService()
        {
            HttpContext.Current.Application.GetContainer().BuildUp(this);
        }

        [Dependency]
        public IControlData Data { get; set; }

        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World " + Data.GetText();
        }
    }
}

Adding a container at the Session level and the Request level should be as simple as following the pattern of the extension method and the HttpModule above. (I will leave it as an exercise for the reader; a rough sketch follows.) You could use the same container for everything, or make the Session container a child of the Application container, and the Request container a child of the Session container.  When and if we do a real implementation (as opposed to a proof of concept), we will figure out the best approach for most cases.  When and if that happens, I will update my CWAB with Unity solution and dramatically simplify it. :-)
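Here is that rough sketch for the Session level (my assumption of how it could work, untested; sessions are per-user, so I skip the locking the application-level version needed):

using System.Web;
using System.Web.SessionState;
using Microsoft.Practices.Unity;

namespace UnityForTheWebLib
{
    public static class HttpSessionStateExtensions
    {
        private const string SessionContainerKey = "Your session Unity container";

        // Lazily create a session-scoped container as a child of the application container.
        public static IUnityContainer GetContainer(this HttpSessionState session)
        {
            IUnityContainer container = session[SessionContainerKey] as IUnityContainer;
            if (container == null)
            {
                container = HttpContext.Current.Application.GetContainer().CreateChildContainer();
                session[SessionContainerKey] = container;
            }
            return container;
        }
    }
}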

This is the sixth post in a series. The other posts include:

If you want background, go read the earlier posts.

Based upon feedback, I am making the source code available at CWAB and Unity.

For those who have been following along, you all know I am making this up as I go. So, I am asking that you be patient with me as we figure this out together. This article (in particular) may have a few false starts, and "ooops" moments.  We are replacing something that was not meant to be pluggable, and all of the refactoring work that has been done up to this point may not be enough.

Reminder:  This is a proof of concept.  The version of CWAB I am creating here is not a supported product or guidance from patterns & practices.  If you really want Unity support in WCSF, please go to the Issue Tracker on the WCSF Community Site and vote for the work item to get Unity support for WCSF.

At the end of the last post, I planned to actually use the UnityCompositionContainer, and then nuke the old CompositionContainer.

Let's get started by replacing any call that creates a CompositionContainer with code that creates a UnityCompositionContainer.  There is ONE spot, in WebClientApplication.  Simple. Change, recompile, re-run unit tests, and we are green.

If it really was that simple, we should be able to remove the CompositionContainer class (and its test fixture) from the solution, and not have any challenges.  Let's try that.

We fail to compile.  Our TestableRootCompositionContainer is part of the problem. MockContainer in ManagedObjectCollectionFixture is the rest of it.

If we derive TestableRootCompositionContainer from UnityContainer, and remove everything from the class (as a temporary measure), we go from two to 27 build errors. 

<timewarp duration="an hour or two" />

Not good.  But not too bad, considering what comes next.  After fixing these build errors, I got on a roll, and did the following:

  • Removed references to the old ObjectBuilder.dll in both unit test and system projects
  • Removed the BuilderStrategies folder in both unit test and system projects
  • Removed the Collections folder in both unit test and system projects
  • Removed the ObjectBuilder folder in both unit test and system projects
  • Removed the ProviderDependencyAttribute and its fixture
  • Removed StateDependencyAttribute and its fixture
  • Removed OptionalDependencyAttribute and its fixture
  • Removed ServiceDependencyAttribute and its fixture
  • Removed all references to the old ObjectBuilder namespace
  • Replaced any [CreateNew] attributes with the Unity [Dependency] attribute
  • Hacked on WebClientApplication
  • Removed the Services collection.  Replaced calls to container.Services.Add with container.RegisterType.  If needed, added a call to container.Resolve when the Services.Add call was supposed to return an object.

Ok, things are compiling again.  Finally.  But there are a few failing unit tests, about 20 or so...  Let's fix those.

<type... type.... swear.... type />

Getting those tests to pass was fairly simple.  All of them required a little bit of container setup that was different than before.  This was not a big deal.

There is one major change I did make in the process, and a related renaming.  First, the renaming: Page became InjectablePage, MasterPage became InjectableMasterPage, and guess what UserControl became? Exactly: InjectableUserControl.  These are better descriptions and will remove the confusion I have dealt with on the forums where people use the wrong Page base class.  The other change is that, previously, CompositionContainer had two Builders: one for things like services that stayed around, and another for Pages and transient items like presenters.  We no longer need two builders; containers and lifetime managers can handle the differences.  The approach I took, for simplicity, is to have each page create a container that is a child of the correct Module container, use it for doing injection on the page, and then nuke the container on Page.Dispose.  This is quick and dirty, and it works, but I may revisit it later.
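Sketched in code, that quick-and-dirty approach looks roughly like this (my illustration, not the actual CWAB source; the application-state GetContainer extension from the Code Kata post earlier in this feed stands in for locating the module's container):

using System;
using System.Web;
using System.Web.UI;
using Microsoft.Practices.Unity;

public class InjectablePageSketch : Page
{
    private IUnityContainer pageContainer;

    protected override void OnPreInit(EventArgs e)
    {
        base.OnPreInit(e);
        // Create a throw-away child container and use it to inject this page.
        // (The application container stands in for the module container here.)
        IUnityContainer parent = HttpContext.Current.Application.GetContainer();
        pageContainer = parent.CreateChildContainer();
        pageContainer.BuildUp(GetType(), this);
    }

    public override void Dispose()
    {
        // Nuke the per-page container on Page.Dispose.
        if (pageContainer != null)
        {
            pageContainer.Dispose();
            pageContainer = null;
        }
        base.Dispose();
    }
}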

Now, since Unity 1.1 just shipped, it is time to upgrade.  I replaced the binaries and...

Everything still passes.

I think that is enough for today.

As a quick check, I just looked at the solution statistics for the original version of CWAB and the current solution.  CWAB went from 26KLOC (pre-article one) to 14KLOC, both numbers including unit tests.  And I bet that there is more to be cut out and cleaned up.  Having a purpose built, re-usable component for DI helped a lot here.  For this edition, windiff will probably be the best bet for following all the types of changes I made.  This one took a while and impacted a lot of code.

Next, I am going to review the unit tests, run code coverage, and determine whether there are gaps in test coverage after all the hacking.  After that, I will port one of the quickstarts over to the new version of CWAB.  After that, the RI, at which point I will be sure that the updated version works well enough to be a good proof of concept.

This is the fifth post in a series. The other posts include:

If you want background, go read the earlier posts.

Based upon feedback, I am making the source code available at CWAB and Unity.

First off, I have moved this little side project to source control, so I can avoid the re-work I had to do before I could write this part of the series.

Note to self:

NEVER work without a safety net.  That means both unit tests and source control.

Ok.  Now that that is out of the way....

Let's go add Unity to CWAB, via EDD (formerly TDD).

How can we do this?  First, we need a test fixture.  So, I added UnityCompositionContainerFixture to the CompositeWeb.Tests project as an empty test fixture.  And then I added the following very simple test as a starting point:

[TestMethod]
public void UnityCompositionContainerIsICompositionContainer()
{
    UnityCompositionContainer container = new UnityCompositionContainer();
    Assert.IsInstanceOfType(container, typeof(ICompositionContainer));
}

This will allow me to write enough code to create the UnityCompositionContainer class and ensure that it implements the right interface.


Making the test compile forces me to create a new UnityCompositionContainer class (I added it to the fixture as a nested class for the moment), and the test fails.  No problem.  I add the interface, implement it with methods that throw, and... we are green again. 


Next step: I copied the unit test RootContainerParentIsNull over, changed its name to NewUnityContainerParentIsNull, updated it to create a new UnityCompositionContainer, and compiled and ran the test.  It passed, which is not ideal, but I will leave it and move on, knowing it will be important later.  For the next test, I copied ChildContainerHasParent over, updated it to create a new UnityCompositionContainer, and compiled and ran the test... Red. Just like we expected.  To make this work, I could do a bit of throw-away work, or I could add Unity.  I feel like I can skip the simplest thing possible, because I know where I want to go. 


However, first, I'm doing a check-in.  I'll comment out the failing test, and sync up.  Just a sec.


<click click>


OK, now to add Unity.  I grabbed the binaries from MSDN, dropped them in my tree in the Lib folder, and we are good to go.  I added references in the CWAB project to Unity.DLL and ObjectBuilder2.DLL.  This means we have two completely separate DI systems as part of our code, which is awful.  However, it won't last too long.


Now, we can make that test pass.  I added an IUnityContainer field to UnityCompositionContainer.  I then implemented a constructor to initialize it.  Then we needed to implement CreateChildContainer.  For this, the simplest thing was to create a new constructor for UnityCompositionContainer which takes a parent container as the only parameter.  After a little wire-up we end up with passing tests and this code:


private UnityCompositionContainer(UnityCompositionContainer parentContainer)
{
    Parent = parentContainer;
    wrappedContainer = parentContainer.wrappedContainer.CreateChildContainer();
}

public ICompositionContainer CreateChildContainer()
{
    UnityCompositionContainer child = new UnityCompositionContainer(this);
    return child;
}

Fairly simple.  I will continue moving unit tests over, adapting them, and making them pass.  If there are any interesting deviations from normal, or new tests added, I'll let you know.


<timewarp duration="a short while" />


That was actually fairly simple.  I copied the tests one at a time, replaced the container used with the UnityCompositionContainer, compiled, ran the tests, watched them fail due to NotImplementedExceptions, and fixed them.  Here's the test fixture:


[TestClass]
public class UnityCompositionContainerFixture
{
    [TestMethod]
    public void UnityCompositionContainerIsICompositionContainer()
    {
        UnityCompositionContainer container = new UnityCompositionContainer();
        Assert.IsInstanceOfType(container, typeof(ICompositionContainer));
    }

    [TestMethod]
    public void NewUnityContainerParentIsNull()
    {
        ICompositionContainer container = new UnityCompositionContainer();
        Assert.IsNull(container.Parent);
    }

    [TestMethod]
    public void ChildContainerHasParent()
    {
        ICompositionContainer parent = new UnityCompositionContainer();

        ICompositionContainer child = parent.CreateChildContainer();

        Assert.AreSame(parent, child.Parent);
    }

    [TestMethod]
    public void CanRegisterTypeMappingOnRootContainer()
    {
        UnityCompositionContainer root = new UnityCompositionContainer();

        root.RegisterType<IFoo, Foo>();

        IFoo resolvedIFoo = root.Resolve<IFoo>();

        Assert.AreEqual(typeof(Foo), resolvedIFoo.GetType());
    }

    [TestMethod]
    public void CanRegisterTypeMappingViaTypeObjects()
    {
        UnityCompositionContainer root = new UnityCompositionContainer();
        root.RegisterType(typeof(IFoo), typeof(Foo));

        IFoo resolvedIFoo = root.Resolve<IFoo>();

        Assert.AreEqual(typeof(Foo), resolvedIFoo.GetType());
    }

    [TestMethod]
    public void RequestingTypeMappingForUnmappedTypeReturnsRequestedType()
    {
        UnityCompositionContainer root = new UnityCompositionContainer();

        Foo resolvedFoo = root.Resolve<Foo>();

        Assert.AreEqual(typeof(Foo), resolvedFoo.GetType());
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentException))]
    public void TypeMappingsMustBeTypeCompatible()
    {
        UnityCompositionContainer root = new UnityCompositionContainer();
        root.RegisterType(typeof(IBar), typeof(Foo));
    }

    [TestMethod]
    public void CanRegisterMultipleTypeMappings()
    {
        UnityCompositionContainer root = new UnityCompositionContainer();

        root.RegisterType<IFoo, Foo>();
        root.RegisterType<IBar, Bar>();
        IBar b = root.Resolve<IBar>();
        IFoo f = root.Resolve<IFoo>();

        Assert.AreEqual(typeof(Bar), b.GetType());
        Assert.AreEqual(typeof(Foo), f.GetType());
    }

    [TestMethod]
    public void RequestingTypeMappingOnChildReadsFromParent()
    {
        UnityCompositionContainer parent = new UnityCompositionContainer();
        ICompositionContainer child = parent.CreateChildContainer();

        parent.RegisterType<IFoo, Foo>();

        Assert.AreEqual(typeof(Foo), child.Resolve<IFoo>().GetType());
    }

    [TestMethod]
    public void ChildContainersCanOverrideParentTypeMapping()
    {
        UnityCompositionContainer parent = new UnityCompositionContainer();
        ICompositionContainer child = parent.CreateChildContainer();

        parent.RegisterType<IFoo, Foo>();
        child.RegisterType<IFoo, Foo2>();

        Assert.AreEqual(typeof(Foo), parent.Resolve<IFoo>().GetType());
        Assert.AreEqual(typeof(Foo2), child.Resolve<IFoo>().GetType());
    }

    [TestMethod]
    public void CanCreateChildContainer()
    {
        ICompositionContainer container = new UnityCompositionContainer();

        ICompositionContainer child = container.CreateChildContainer();
        Assert.IsNotNull(child);
        Assert.AreNotSame(container, child);
    }

    [TestMethod]
    public void CanRegisterInstance()
    {
        UnityCompositionContainer root = new UnityCompositionContainer();
        root.RegisterInstance(typeof(string), "foo", "bar");
        string returned = (string)root.Resolve(typeof(string), "foo");
        Assert.AreEqual("bar", returned);
    }

    [TestMethod]
    public void CanRegisterInstanceViaGenericWithoutName()
    {
        ICompositionContainer root = new UnityCompositionContainer();
        Foo f1 = new Foo();
        root.RegisterInstance<Foo>(f1);
        Foo returnedFoo = (Foo)root.Resolve(typeof(Foo));
        Assert.AreEqual(f1, returnedFoo);
    }

    [TestMethod]
    public void CanRegisterInstanceViaGenericWithName()
    {
        ICompositionContainer root = new UnityCompositionContainer();
        Foo f1 = new Foo();
        root.RegisterInstance<Foo>("asdf", f1);
        Foo returnedFoo = (Foo)root.Resolve(typeof(Foo), "asdf");
        Assert.AreEqual(f1, returnedFoo);
    }

    [TestMethod]
    public void CanResolveViaGenericWithoutName()
    {
        ICompositionContainer root = new UnityCompositionContainer();
        Foo f1 = new Foo();
        root.RegisterInstance<Foo>(f1);
        Foo returnedFoo = root.Resolve<Foo>();
        Assert.AreEqual(f1, returnedFoo);
    }

    [TestMethod]
    public void CanResolveInstanceViaGenericWithName()
    {
        ICompositionContainer root = new UnityCompositionContainer();
        Foo f1 = new Foo();
        root.RegisterInstance<Foo>("asdf", f1);
        Foo returnedFoo = root.Resolve<Foo>("asdf");
        Assert.AreEqual(f1, returnedFoo);
    }

    [TestMethod]
    public void UnityCompositionContainerImplementsIDisposable()
    {
        UnityCompositionContainer disposableContainer = new UnityCompositionContainer();
        Assert.IsNotNull(disposableContainer as IDisposable);
    }

    // This test is commented out intentionally. The purpose of this test is to show
    // that if you use the generic version of the RegisterType method, and the
    // types aren't compatible, it will fail at compile time.
    //
    // This is exactly what happens. However, that also means this file won't compile.
    // The test is left in as comments; if you wish to verify this, remove the comments,
    // watch the compile fail, and then comment it out again.

    //[TestMethod]
    //public void GenericTypeMappingRegistrationEnforcesCompileTimeCompatibility()
    //{
    //    UnityCompositionContainer root = new UnityCompositionContainer();
    //    root.RegisterType<IBar, Foo>();
    //}
}

Notice, there are no tests for the Services collection.  This is intentional.  We will see if we can avoid implementing the methods at all.


And here is the code to make those tests pass:


public class UnityCompositionContainer : ICompositionContainer, IDisposable
{
    private ICompositionContainer parent = null;
    private IUnityContainer wrappedContainer;

    public UnityCompositionContainer()
    {
        wrappedContainer = new UnityContainer();
    }

    private UnityCompositionContainer(UnityCompositionContainer parentContainer)
    {
        Parent = parentContainer;
        wrappedContainer = parentContainer.wrappedContainer.CreateChildContainer();
    }

    public ICompositionContainer Parent
    {
        get { return parent; }
        set { parent = value; }
    }

    public IServiceCollection Services
    {
        get { throw new NotImplementedException(); } // return services;
    }

    public void RegisterInstance(Type t, object instance)
    {
        wrappedContainer.RegisterInstance(t, instance);
    }

    public void RegisterInstance(Type t, string name, object instance)
    {
        wrappedContainer.RegisterInstance(t, name, instance);
    }

    public void RegisterInstance<TInterface>(TInterface instance)
    {
        wrappedContainer.RegisterInstance<TInterface>(instance);
    }

    public void RegisterInstance<TInterface>(string name, TInterface instance)
    {
        wrappedContainer.RegisterInstance<TInterface>(name, instance);
    }

    public void RegisterType<TRequested, TReturned>() where TReturned : TRequested
    {
        wrappedContainer.RegisterType<TRequested, TReturned>();
    }

    public void RegisterType(Type requested, Type returned)
    {
        wrappedContainer.RegisterType(requested, returned);
    }

    public object Resolve(Type typeOfItem)
    {
        return wrappedContainer.Resolve(typeOfItem);
    }

    public object Resolve(Type typeOfItem, string name)
    {
        return wrappedContainer.Resolve(typeOfItem, name);
    }

    public T Resolve<T>()
    {
        return wrappedContainer.Resolve<T>();
    }

    public T Resolve<T>(string name)
    {
        return wrappedContainer.Resolve<T>(name);
    }

    public object BuildItem(IBuilder<WCSFBuilderStage> builder, object item)
    {
        throw new NotImplementedException();
    }

    public ICompositionContainer CreateChildContainer()
    {
        UnityCompositionContainer child = new UnityCompositionContainer(this);
        return child;
    }

    public void Dispose()
    {
        wrappedContainer.Dispose();
        wrappedContainer = null;
    }
}

We are now in a green state, time to check in.



Abusing Source Control


Agile teams usually check in very often.  Some teams check in every time they hit green.  I think that is a bit of overkill, unless you have a source control system with NO overhead.  I check in when I complete a feature or a story, and usually at a few good, green stopping points in the process, like I have been doing.  This was a good stopping point, and my next task is a bit risky, so this is a perfect time to check in.

This is a long enough post, even though I didn't do all that much, so I will call it quits for today.  Next post: actually use the UnityCompositionContainer, and then nuke the old CompositionContainer.

This is the fourth post in a series. The other posts include:

If you want background, go read the earlier posts.

Based upon feedback, I am making the source code available at CWAB and Unity.

In the last installment I wanted to remove the following from the ICompositionContainer:

  • static methods
  • Builder
  • Locator
  • Containers
  • Services

We completed the first four. In this installment, we will remove the Services collection, replacing it with RegisterInstance and Resolve. After that, we will compare our ICompositionContainer to IUnityContainer, see what else we need to do, and may even pull in Unity.

Adding a few new methods

Before we do this, I want to add a generic overload to Resolve and one to RegisterInstance.  I like the way the generic calls on Unity look, compared to all the typeof() and casting we have done so far. 

I added the following tests, one at a time, and made them pass:

[TestMethod]
public void CanRegisterInstanceViaGenericWithoutName()
{
    ICompositionContainer root = new TestableRootCompositionContainer();
    Foo f1 = new Foo();
    root.RegisterInstance<Foo>(f1);
    Foo returnedFoo = (Foo)root.Resolve(typeof(Foo));
    Assert.AreEqual(f1, returnedFoo);
}

[TestMethod]
public void CanRegisterInstanceViaGenericWithName()
{
    ICompositionContainer root = new TestableRootCompositionContainer();
    Foo f1 = new Foo();
    root.RegisterInstance<Foo>("asdf", f1);
    Foo returnedFoo = (Foo)root.Resolve(typeof(Foo), "asdf");
    Assert.AreEqual(f1, returnedFoo);
}

[TestMethod]
public void CanResolveViaGenericWithoutName()
{
    ICompositionContainer root = new TestableRootCompositionContainer();
    Foo f1 = new Foo();
    root.RegisterInstance<Foo>(f1);
    Foo returnedFoo = root.Resolve<Foo>();
    Assert.AreEqual(f1, returnedFoo);
}

[TestMethod]
public void CanResolveInstanceViaGenericWithName()
{
    ICompositionContainer root = new TestableRootCompositionContainer();
    Foo f1 = new Foo();
    root.RegisterInstance<Foo>("asdf", f1);
    Foo returnedFoo = root.Resolve<Foo>("asdf");
    Assert.AreEqual(f1, returnedFoo);
}


In making them pass, I added the following to the ICompositionContainer interface:


T Resolve<T>();
T Resolve<T>(string name);
void RegisterInstance<TInterface>(TInterface instance);
void RegisterInstance<TInterface>(string name, TInterface instance);

This will help a bit with the look, feel, and style of the code. Now for the real work.

Removing the Services Collection


Why do we want to remove the Services collection?  Since Unity handles this sort of functionality, we can offload it entirely to Unity.  This will cut a bit of code out of our implementation.

Let's do this the simple way (I like to call it hack-and-slash development) and just remove the Services collection from the ICompositionContainer interface.  This will result in a number of build errors (twenty or so, in fact).  However, we can fix each of them easily.  Any call to add a service becomes a RegisterInstance call, and any call to get a service becomes a Resolve call.  So, we will hack each problem until we are in a green state again.  The other option, for the less daring, is to search for all uses of Services, change them one at a time, and make sure we are green at each step.  Today, I am feeling brave (read: reckless, or overly caffeinated), so I took the hack-and-slash approach.  If I were pairing with someone, they would probably stop me. :-)

Once CWAB compiles, we still need to get the unit test library to compile.  <click click click> <swear> <click click click>.  The biggest challenge here is that Services.AddNew<T, IT> and RegisterType<IT, T> have the generic parameters in reversed order.  Aaargh.
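To illustrate the reversal with a hypothetical service (IGreetingService is mine, for illustration only, not a real CWAB type):

// Hypothetical service, for illustration only.
public interface IGreetingService
{
    string Greet();
}

public class GreetingService : IGreetingService
{
    public string Greet() { return "hello"; }
}

public static class MigrationExample
{
    public static void Register(ICompositionContainer container)
    {
        // Old CWAB style put the concrete type first and the interface second:
        //     container.Services.AddNew<GreetingService, IGreetingService>();
        // The Unity-style replacement puts the requested interface first:
        container.RegisterType<IGreetingService, GreetingService>();
    }
}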

Everything compiles, but I have a problem: 34 unit tests are failing.  This is not optimal.  Now, why are they failing?  After looking at the first few, it looks like I removed Services from the interface, and forgot to remove it from the CompositionContainer.  The result is that some tests are still using the ServicesCollection.  Oooops.

After removing Services from CompositionContainer, and getting everything to compile again, the test results are.....

Ouch!  40 failing unit tests.  That is not good.

However, after some investigation, I can remove a couple of classes from our solution, as they are no longer necessary, and are just causing problems.

  • ServiceDependencyParameterResolver
  • ProviderDependencyParameterResolver
  • ProviderDependencyAttribute
  • ServiceDependencyAttribute

After removing them, there is a bit of code cleanup required to remove references.  I replaced all of the ServiceDependencyAttributes with ObjectBuilder DependencyAttributes, and commented out the ProviderAttributes for the moment.


Wow. Progress. We are down to 37 failing tests. After looking at a few of them, where instances were not the same, I changed the SingletonPolicy in Resolve, and we are down to 24 failing tests.

Oh, gotta take a break.

<TimeWarp>Several Days Fly By</TimeWarp>

I got interrupted from getting this working the other day.  Now, I am starting with broken tests.  I hate that.  I guess I was a bit too reckless.  Well, I have a few options:


  1. I can continue and see this through
  2. I can timebox to an hour, and then roll back, and try again by taking smaller steps
  3. I can give up now and try again from the last known good state.

I am going to opt for number 1.5: timebox to two hours, and depending on where I am, make a decision.


<timewarp duration="1 hour" />

@#$%^%$#@#$!!!



<timewarp duration="a few days" reason="I needed to think about it" />

I have determined that I got a bit overzealous.  For the moment, the Services collection is necessary, and we can remove it once we have Unity in place to provide the necessary functionality.  It will stay as part of the interface, and will continue to provide functionality until we no longer need it.  However, it will eventually go away.  How did I arrive at this conclusion?  Well, I could not get the failing unit tests to pass in a reasonable amount of time.  Moving the functionality requires a lot more code than is really practical for throw-away code.  Oooops.  Everyone makes mistakes. 

As a result, I need to roll back a lot of changes.  Unfortunately, I did not back up everything (or do a check-in) before starting down the path of removing the Services collection.  Net result: I need to roll back everything, and then re-do the first half of this article. Double @#%$@%!


I'll revert and then rework the code up to the section Removing the Services Collection.  I'll get the code posted (with the other source code) before I start the next installment.


However, I wanted to post this article sooner rather than later, so folks don't think I have dropped the series.  Enjoy.

This is the third post in a series. The other posts include:

If you want background, go read the earlier posts.

Based upon feedback, I am making the source code available at CWAB and Unity.

In the last post, we started getting closer to my eventual goal. However, we still need to get rid of the static methods, Builder, Locator, Containers, and Services. 

Why?

  • Static methods will make the necessary refactorings a pain in the.... well, a pain.
  • Builder and Locator are handled by Unity internally, and we no longer need to deal with them directly. 
  • Containers will just be registered in the root container by module name, and we can use the WebClientApplication.FindModuleContainer method when we need to. 
  • Then there are CWAB Services.  These services will probably just go away.  Global Services are types that are Registered on the Root Container as singletons, and module services are types registered on the appropriate Container as singletons.  This will be really simple once we get Unity tied into the solution. 

Once we get rid of these items on the interface, we will want to add a few from the IUnityContainer, namely CreateChildContainer, and at least a few overloads (if not all of them) for RegisterType, RegisterInstance, and Resolve.

Then, we will add a new type of container to the project, a CWABUnityContainer, and maybe a CompositionContainer factory.  The CWABUnityContainer will be a facade that implements ICompositionContainer and hides a real UnityContainer under the hood.  After that, things should be simple, allowing us to delete the old container and its code, and then do a bit of re-organizing.

Note: Since this is a proof of concept, not what I would call "shipping code", I have turned off creating XML comments in the solution.  With the volume of changes, I was getting annoyed with all the warnings.

Removing Static Methods from CompositionContainer

There is only one static method on CompositionContainer, and it relies on the Locator property of the container object parameter to work.  In removing this, we can simplify the type, and remove the only dependency on the Locator property.  Let's change BuildItem to be a public, non-static method, and add it to the ICompositionContainer interface for a little while.

Everything compiles.  Tests green (except for the ones I had commented out near the end of the last article).

Removing ObjectBuilder-specific parts of the interface

Let's do these one at a time, starting with Builder.  Comment it out in the interface, and....

Two parts of the test fixture fail to compile:

  • CreateRootContainerInitializesContainer needs a line commented out, since we changed the semantics of the method
  • MockWebClientApplication needs a change to the ApplicationBuilder property to return null for the moment, with a comment to come back later.

Now we can compile and run the tests, and we are green. However, I did notice something that I want to take care of before I go any further: IWebClientApplication has an ApplicationBuilder and a PageBuilder, which can go away.  The Container will handle the responsibilities of these two properties.  If we delete these properties from the interface, MockWebClientApplication, and WebClientApplication, we get some compile errors in WebClientApplication, which lead us down an ugly path. Instead, we will leave these properties ONLY in WebClientApplication, and make a note to delete them later.

Compile -> Good. 

Tests -> Green.

Next Step, the Locator...

If we remove it from the interface, we have compilation problems in WebClientApplication.BuildItemWithCurrentContext, where we call BuildItem.  Let's roll back, change the signature of BuildItem to not take a locator (the implementation will use the current container's locator), and then re-do the change.  Again, we will leave the property only on the CompositionContainer, even though it is not part of the interface, and add a few comments.  After a quick change to the test CreateRootContainerInitializesContainer, everything compiles, and all tests pass. 

The Containers collection

If we remove the Containers property from the ICompositionContainer interface, we have problems in the ModuleLoaderService.  The ModuleLoaderService understands how the CompositionContainer works, and can add new containers (one for each module) to the collection.  We also have challenges in the DefaultModuleContainerLocatorService, where we need to find a container in the collection.  If we think about how Unity works, both of these problems go away if we find another way to get and create child containers.  Unity has the Resolve method, which can take a name parameter to get a named instance of an object (this handles getting a child container). We also have the CreateChildContainer method to help with creation, as well as RegisterInstance to give a created instance a name to use later.

Let's roll back removing the Containers property, add the RegisterInstance and Resolve methods (that use a name) to the interface, test them, then add the CreateChildContainer method to the interface, and test it.  Once we have those pieces, we can re-work ModuleLoaderService and DefaultModuleContainerLocatorService to use the new methods.  Once we do all that, we can remove Containers from the ICompositionContainer interface without problems.
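As a sketch of where this is heading (the helper class and method names are mine, for illustration; the real ModuleLoaderService and DefaultModuleContainerLocatorService have more responsibilities), the named-registration idea looks roughly like this:

public static class ModuleContainerExample
{
    // When a module loads, create its container as a named child of the root...
    public static void AddModuleContainer(ICompositionContainer root, string moduleName)
    {
        ICompositionContainer child = root.CreateChildContainer();
        root.RegisterInstance(typeof(ICompositionContainer), moduleName, child);
    }

    // ...and find it again later by the same name.
    public static ICompositionContainer FindModuleContainer(ICompositionContainer root, string moduleName)
    {
        return (ICompositionContainer)root.Resolve(typeof(ICompositionContainer), moduleName);
    }
}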

So, let's start with RegisterInstance.  Since I know whatever implementation I come up with is a throw-away implementation, and it only needs to work well enough NOT to break the existing code, I am going to keep this simple. 

First, a unit test added to CompositionContainerFixture:

[TestMethod]
public void CanRegisterInstance()
{
    TestableRootCompositionContainer root = new TestableRootCompositionContainer();
    root.RegisterInstance(typeof(string), "foo", "bar");
    string returned = (string)root.Resolve(typeof(string), "foo");
    Assert.AreEqual("bar", returned);
}

This tests for the behavior that Unity will have.  I will add the following to the ICompositionContainer to get it to compile:

void RegisterInstance(Type t, string name, object instance);

And then add the stupid implementation:


public void RegisterInstance(Type t, string name, object instance)
{
}

Everything compiles, and the test fails, horribly.


Now, to get the test to pass, RegisterInstance becomes:


private Dictionary<string, object> _registeredInstances = new Dictionary<string, object>();

public void RegisterInstance(Type t, string name, object instance)
{
    _registeredInstances.Add(String.Concat(t.FullName, name), instance);
}

Now, I know that is an ugly, bad, horrible monstrosity of an implementation.  It is also the simplest thing that will work. :-)  I also know I could use OB to do this, but again, I am keeping things simple, knowing I will throw away the implementation in a few hours.


Resolve changes a bit too.  I added an overloaded version, and then made the unit test above pass by making the code look like this:


public object Resolve(Type typeOfItem)
{
    string temporaryID = Guid.NewGuid().ToString();
    return Resolve(typeOfItem, temporaryID);
}

public object Resolve(Type typeOfItem, string name)
{
    string key = String.Concat(typeOfItem.FullName, name);
    if (_registeredInstances.ContainsKey(key))
    {
        return _registeredInstances[key];
    }

    PolicyList policies = new PolicyList();
    policies.Set<ISingletonPolicy>(new SingletonPolicy(false), typeOfItem, name);
    policies.Set<ICreationPolicy>(new DefaultCreationPolicy(), typeOfItem, name);
    policies.Set<IPropertySetterPolicy>(new PropertySetterPolicy(), typeOfItem, name);

    return _builder.BuildUp(
        _locator,
        typeOfItem,
        name,
        null,
        policies);
}

Compile.  Run Tests.  We are Green.


Now to add CreateChildContainer.  First, we need a unit test:


[TestMethod]
public void CanCreateChildContainer()
{
    ICompositionContainer container = new TestableRootCompositionContainer();

    ICompositionContainer child = container.CreateChildContainer();
    Assert.IsNotNull(child);
    Assert.AreNotSame(container, child);
}

To make that compile and then pass, I added CreateChildContainer to the interface, the class, and then implemented it.  Simple.  And we are Green again.


Next step: remove the Containers property from ICompositionContainer, compile, and it fails.  This requires minor tweaking to the ModuleLoaderService and DefaultModuleContainerLocatorService. Now we compile.  However, the Containers property is still on the CompositionContainer.  After we remove it from there, we have to fix a lot of unit tests so that they compile (and remove a few that no longer make sense semantically).  This is simple, but tedious.  <click> <click> <swearing> <click> <click> ...


Ok, we are now compiling and all unit tests pass.


I need a break, so we will cut this a little shorter than I had hoped.


In this installment I wanted to remove the following from the ICompositionContainer:


  • static methods
  • Builder
  • Locator
  • Containers
  • Services

We completed the first four. In the next installment, we will remove the Services collection, replacing it with RegisterInstance and Resolve. After that, we will compare our ICompositionContainer to IUnityContainer, see what else we need to do, and may even pull in Unity.

Some of you may remember that waaaay back in October (and earlier) the Web Client Software Factory team was shipping these things called "bundles".  A bundle was all a developer needed to get started and learn about a single concept, like the Model View Presenter pattern in a web application, or contextual auto-complete.  These bundles were released for VS2005 and .NET 2.0 on the WCSF community CodePlex site.

Now, the same bundles for VS2005 with .NET 2.0, and new bundles for VS2008 with .NET 3.5, are available on MSDN's Web Client Software Factory page.  However, due to factors I do not understand and that are beyond my control, we are now calling these "Guidance Assets".

The first question everyone is going to ask is:

"How are these different from the Web Client Software Factory - February 2008?"

The Guidance Assets for .NET 3.5 are identical to the same assets inside the WCSF February release. This packaging just allows developers to grab one or two pieces of the factory at a time, or to share a piece with a co-worker to get them up to speed on a concept.  For example, the Web Client Contextual AutoComplete Application Block for .NET Framework 3.5 is EXACTLY the same as the Autocomplete Quickstart in the factory, with everything you need to get started and stand-alone documentation.  The .NET 2.0 versions of the assets are identical to what we shipped in October on CodePlex (in fact, they were copied from CodePlex to the MSDN download center).

Why did we post the bundles on MSDN?

A number of customers complained that they wanted to use the bundles, but their corporate policy does not allow them to use anything from CodePlex or other open source sites.  However, MSDN is a trusted source in their companies. To help these customers, we now have two distribution channels for identical assets.  :-)

Blaine has already blogged about this: Web Client Bundles for .NET 2.0 and .NET 3.5 Available on Download Center

Here is a comprehensive list of clickable links for all the ZIP files:

.NET 3.5 Assets for Visual Studio 2008: (All these are included in the Feb release of the factory, Web Client Software Factory - February 2008)

.NET 2.0 Assets for Visual Studio 2005: (These are what we released in October-ish)

This is the second post in a series. The initial post is Converting the Composite Web Application Block to Unity - Intro. If you want background, go read the first post.

Let's get started with some coding. First, I backed up my WCSF and WCSF-AppBlock code. Then, I opened up Visual Studio 2008, opened the Composite Web Application Block solution (with VSTS Tests), compiled, and ran all the tests. All Green.

Now for the tough part: thinking about what we want to do.  Basically, we need to be able to replace the CompositionContainer with an IUnityContainer, and remove a lot of dead code.  How can we do this?  We need a simple interface that can work as a facade over both containers (and any others).  So, let's create an interface for this facade from what we have, and morph it into what we need.

I opened CompositionContainer.cs, and using a refactoring tool (which happens to be a VS plugin), I extracted a new public interface from the class. I made all the public methods (except IDisposable methods) part of the interface, and let the refactoring tool do a search/replace of CompositionContainer with ICompositionContainer where it thought appropriate.

Compile -> Failed.

Oops, it missed a few spots.  Global search, careful replace. It compiles, but seven tests fail. After updating ModuleThrowingException.TestModuleInitializer and TestModule.TestModuleInitializer to both use the new interface, the failing unit tests pass.

Re-run all the tests-> Success. We are green again. :-)

So, we have done a very minor refactoring, and arrived at a good state again. 

Before we do anything else, let's compare the two interfaces we eventually want to line up. Here is ICompositionContainer:

public interface ICompositionContainer
{
    [Dependency(NotPresentBehavior = NotPresentBehavior.ReturnNull)]
    ICompositionContainer Parent { get; set; }

    ICompositionContainer RootContainer { get; }
    IBuilder<WCSFBuilderStage> Builder { get; }
    IReadWriteLocator Locator { get; }
    IManagedObjectCollection<ICompositionContainer> Containers { get; }
    IServiceCollection Services { get; }
    event EventHandler<DataEventArgs<object>> ObjectAdded;
    event EventHandler<DataEventArgs<object>> ObjectRemoved;
    void RegisterTypeMapping<TRequested, TReturned>() where TReturned : TRequested;
    void RegisterTypeMapping(Type requested, Type returned);
    Type GetMappedType<TRequested>();
    Type GetMappedType(Type requested);
    object BuildNewItem(Type typeOfItem);
    void InitializeRootContainer(IBuilder<WCSFBuilderStage> builder);

    [InjectionMethod]
    void InitializeContainer();
}

And here is IUnityContainer:

public interface IUnityContainer : IDisposable
{
    IUnityContainer RegisterType<TFrom, TTo>() where TTo : TFrom;
    IUnityContainer RegisterType... // about 10 overloads

    IUnityContainer RegisterInstance<TInterface>(TInterface instance);
    IUnityContainer RegisterInstance... // about 10 overloads

    T Resolve<T>();
    T Resolve<T>(string name);
    object Resolve(Type t);
    object Resolve(Type t, string name);

    IEnumerable<T> ResolveAll<T>();
    IEnumerable<object> ResolveAll(Type t);

    T BuildUp<T>(T existing);
    T BuildUp<T>(T existing, string name);
    object BuildUp(Type t, object existing);
    object BuildUp(Type t, object existing, string name);
    void Teardown(object o);

    IUnityContainer AddExtension(UnityContainerExtension extension);
    IUnityContainer AddNewExtension<TExtension>() where TExtension : UnityContainerExtension, new();

    TConfigurator Configure<TConfigurator>() where TConfigurator : IUnityContainerExtensionConfigurator;
    object Configure(Type configurationInterface);

    IUnityContainer RemoveAllExtensions();
    IUnityContainer CreateChildContainer();
    IUnityContainer Parent { get; }
}

 


It looks like I have a little bit of work ahead.  I think I will start with the simple stuff. Parent stays around.  RegisterTypeMapping becomes RegisterType.  BuildNewItem becomes Resolve, based on the semantics.  Also, InitializeContainer and InitializeRootContainer need to go away, as do GetMappedType, the Containers collection, and the public events. I won't bore you with the details, but I will show you the new version of the ICompositionContainer interface when I am done and highlight any big difficulties.


...


<clicking of keyboard> 


...


<swearing> 


....


<clicking of keyboard>  


....


<sigh>


Here is the new version of the interface:

public interface ICompositionContainer
{
    /// <summary>
    /// Gets or sets the parent container for this container instance.
    /// </summary>
    [Dependency(NotPresentBehavior = NotPresentBehavior.ReturnNull)]
    ICompositionContainer Parent { get; set; }

    IBuilder<WCSFBuilderStage> Builder { get; }
    IReadWriteLocator Locator { get; }
    IManagedObjectCollection<ICompositionContainer> Containers { get; }
    IServiceCollection Services { get; }

    void RegisterType<TRequested, TReturned>() where TReturned : TRequested;
    void RegisterType(Type requested, Type returned);
    object Resolve(Type typeOfItem);
}
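To make the direction concrete, here is a minimal sketch (my assumption of where this is heading, not the shipped code) of what a Unity-backed implementation of this slimmed-down interface could look like. It only uses the IUnityContainer members shown above:

public class UnityCompositionContainer : ICompositionContainer
{
    private readonly IUnityContainer unity;

    public UnityCompositionContainer(IUnityContainer unity)
    {
        this.unity = unity;
    }

    [Dependency(NotPresentBehavior = NotPresentBehavior.ReturnNull)]
    public ICompositionContainer Parent { get; set; }

    // These members only survive until Unity takes over their duties (see
    // the end of this post), so they are stubbed out in this sketch.
    public IBuilder<WCSFBuilderStage> Builder { get { throw new NotSupportedException(); } }
    public IReadWriteLocator Locator { get { throw new NotSupportedException(); } }
    public IManagedObjectCollection<ICompositionContainer> Containers { get { throw new NotSupportedException(); } }
    public IServiceCollection Services { get { throw new NotSupportedException(); } }

    // RegisterTypeMapping becomes RegisterType: a straight delegation.
    public void RegisterType<TRequested, TReturned>() where TReturned : TRequested
    {
        unity.RegisterType<TRequested, TReturned>();
    }

    public void RegisterType(Type requested, Type returned)
    {
        unity.RegisterType(requested, returned);
    }

    // BuildNewItem becomes Resolve, matching Unity's semantics.
    public object Resolve(Type typeOfItem)
    {
        return unity.Resolve(typeOfItem);
    }
}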

In the process, I did remove code that I knew I would not need (since Unity will handle it), namely these files (and their test fixtures):


  • IContainerAwareTypeMappingPolicy.cs
  • ContainerAwareTypeMappingPolicy.cs
  • ContainerAwareTypeMappingStrategy.cs

I also commented out a few calls and marked them with TODOs (// TODO: Replace this call with a new one from the UnityContainer). I also commented out unit tests around features I know I removed (like type mapping support), with notes so I remember them later.


We are again at a good state, having done some preliminary refactoring to make room for a new feature (or at least a better replacement for an old one).


Now, this is getting closer to my eventual goal, but we still need to get rid of the Builder, Locator, Containers, and Services.  Builder and Locator are handled by Unity.  Containers will just be registered in the root container by module name.  Services will probably just go away: global services are types registered on the root container as singletons, and module services are types registered on the appropriate container as singletons.  This will be really simple once we get Unity in here.  In the meantime, things will probably get ugly.  We will tackle all of this in the next installment, though.
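To show what I mean by that services mapping, here is a minimal sketch (my assumption, not code from any drop; the service types are hypothetical markers):

// Hypothetical marker types, just for illustration.
public interface IGlobalService { }
public class GlobalService : IGlobalService { }
public interface IModuleService { }
public class ModuleService : IModuleService { }

public static IUnityContainer ConfigureContainers()
{
    IUnityContainer root = new UnityContainer();

    // A global service: a singleton instance registered on the root container.
    root.RegisterInstance<IGlobalService>(new GlobalService());

    // Each module gets its own child container; module services are
    // singleton instances registered there.
    IUnityContainer moduleContainer = root.CreateChildContainer();
    moduleContainer.RegisterInstance<IModuleService>(new ModuleService());

    // Resolving on the module container finds module services directly and
    // falls back to the root container for the globals.
    IGlobalService global = moduleContainer.Resolve<IGlobalService>();
    System.Diagnostics.Debug.Assert(global != null);

    return root;
}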

A few weeks ago, I decided I needed to play with the Unity Dependency Injection container. I wanted to see how Chris and the rest of the team had designed it and see how usable the container was. I had seen requests on the Web Client Software Factory project's CodePlex site asking for a Unity version of WCSF. Put the two together, and I have a side project, a proof of concept I can legitimately do at work. My eventual end goal is to have the Order Management RI working with a new, Unity-based version of CWAB.

So, I sat down a few Saturdays ago with CWAB and an early drop of Unity, and started working. After about eight hours I had the basics of CWAB working: there was a container, and a module loader, global services, and module level services. About three hours went into figuring out Unity, the differences between it and the container we created for CWAB (CompositionContainer), and how to get around them. The rest was work, finding bugs in Unity (I filed a few the next day), and quite a lot of fun.

Since then, I have added a few more features, made some forward progress, and lost ground after being overly optimistic about the impact of a few changes.

Now that I have learned a bit, I wanted to share the information with folks who might be interested in doing this themselves. Rather than trying to remember what I did the first time around, I am going to start over, take a different approach, document what I do as I go, and make a few posts out of this.


Please keep in mind, none of this is tested.  Use at your own risk.  No Support Available.  Danger, Will Robinson!  Danger!


This is a proof of concept that I am fairly certain will work, but I am not 100% certain. We will find out as we go. Oh, and I guarantee that there will be breaking changes.

My strategy will be to take the existing CWAB DLL and its unit tests, refactor the container behind an interface, modify the interface to nearly match Unity, and then drop Unity in replacing one implementation of the interface with another. After I get CWAB building and the unit tests passing with Unity, I will open up the code for the RI, replace references, and get it to work.

I will do all of this in an example-driven, test-driven manner. I am sure I will cheat on occasion, but I will try to be honest and point it out when I do, as well as the reason why.

I spent two weeks between full-time projects (still working on the two part-time projects I am on) looking back at the build logs from the past six months (or so), trying to figure out what metrics we have, what we should track going forward, and what goals we should have for these metrics.  I came up with some interesting findings.

First, overall unit test code coverage was OK, but not as high as I would like on my projects.  Of course, these numbers included generated code, view code, and other things that I usually filter out.  Also, coverage was fairly flat.  For the most part, as we added code, we added tests.  This, I think, is much better than one of the alternatives: having coverage drop over the course of the project.

Here is coverage for the Application Blocks in WCSF, without axis labels.  The Y axis starts at 76% (not zero).

image

You will see that there was a noticeable drop in coverage part way through the project. That was when we added a new library of AJAX controls.  Before the change was made, we knew this would happen and accepted the risk.  I also worked with the testers on my team to mitigate the risk via more acceptance tests and careful code reviews. And after the change, we trended up, slowly, over time.

In addition to watching coverage over time, I was able to come up with a few indices to watch going forward.  Thanks to NDepend, a little bit of work parsing build logs, and LINQ, I was able to create seven indices, three of which I will be watching closely on future projects:

  • Cyclomatic Complexity
  • Afferent Coupling
  • Efferent Coupling

If you want a definition for any of these, check out the NDepend site.  The index for each of these is the average of the top 10 worst values by type.
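The index calculation itself is simple. A sketch of roughly what I did (my reconstruction; the parsed-metrics type is an assumption, with the values coming from NDepend output in the build logs):

using System.Collections.Generic;
using System.Linq;

// A parsed-metrics record per type, filled in from the build logs.
public class TypeMetrics
{
    public string TypeName { get; set; }
    public int CyclomaticComplexity { get; set; }
}

public static class QualityIndices
{
    // The index is the average of the ten worst (highest) values by type.
    public static double CyclomaticComplexityIndex(IEnumerable<TypeMetrics> metrics)
    {
        return metrics
            .OrderByDescending(m => m.CyclomaticComplexity)
            .Take(10)
            .Average(m => m.CyclomaticComplexity);
    }
}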

Here is a graph of these indices over time for another, unnamed project (without the axis labels):

image

 

You can see that the first few builds did not have NDepend hooked up properly.  Then everything tracks along, relatively stable until about 40% through the project.  The Dev Lead on the project was able to tell me what happened on the day that the Cyclomatic Complexity dropped drastically.  It was a big change to simplify things.

Do these graphs, numbers, and indices (and all the others that I pulled from the data, but did not share yet) tell me the whole story as far as code quality?  Of course not.  However, these can all be used as indicators.  If complexity trends sharply up over time, there may be a problem we need to investigate.  If it stays low, there can still be other problems hidden.  However, a little information is better than none.

Now, on to the next project...

When we originally released the Web Client Software Factory February 2008, there was an issue for a number of folks.  If you had VS2005 installed with the October release of WCSF, you could not install the February release on VS2008 without uninstalling the October version.  After some feedback from customers who need to support both WCSF 1.1 apps and WCSF 2.0 apps, we have re-released the factory installer.  The install no longer forces an uninstall of the Oct release, allowing side-by-side support.  Go get the refreshed MSI from the download center.


Blaine recently blogged about this too.


 

I know that folks have been reading about the Web Client Software Factory on Blaine's blog and on Glenn's blog.  Blaine posted about what we planned to include, back in December with Next version of Web Client Software Factory (WCSF).  Glenn posted at the end of January: Web Client 2.0 closer than you think, and Web Client 2.0, what's the hold up? 

So, what is in the latest version of the Web Client Software Factory?

Here is a quick blurb from the front page of our docs, with my comments in Blue:

The February 2008 release of the Web Client Software Factory is an update to the June 2007 release. The following are the major changes:

  • Added user interface responsiveness guidance. The guidance includes documentation, Web controls, QuickStarts, and a new reference implementation that demonstrate how to incorporate Microsoft ASP.NET AJAX technologies in your Web applications to provide a richer user interface experience.
  • Added support for the Model-View-Presenter pattern in user controls and master pages. The Composite Web Application Block includes a new Dependency Injection mechanism that facilitates the implementation of the Model-View-Presenter pattern in Web controls and master pages. The guidance package also includes new recipes that help developers create master pages and user controls that implement the Model-View-Presenter pattern. By using the Model-View-Presenter, developers can extend the testability surface to user controls and master pages.

Dependency Injection also works in ASMX Web Services hosted in a WCSF web application project

  • User controls can be reused across modules. Developers can build Web pages made up of user controls from different modules.

There is an example of this in the Order Management Reference Implementation

  • Updated the Composite Web Application Block. The main changes include the following:
    • Improved performance
    • Support for services registration through configuration
    • Support for type mapping for dependency injection

Type mapping was a no-brainer to add.  We also needed it to show a few of the deeper concepts around MVP.

The performance improvements are part of how the new Unity project works deep under the hood.  Chris Tavares worked with the WCSF team on the performance enhancements.  Later, Chris went on to start the Unity project, which has evolved into a great tool from what I have seen.  I know the next question is "When will WCSF support Unity?"  My answer is: as soon as we can do a complete re-write.  Out of the box, Unity provides a lot of the functionality that CWAB provides.  To properly use Unity, we would need to throw out a lot of the CWAB code and re-do it. 

The ability to add a service via config was requested by the community, and is shown in the Order Management Reference Implementation.

  • Updates to the Add Business Module and Add Foundational Module recipes. These recipes now include a new option to create a separate project for the modules’ public interface.
  • Updated the patterns documentation topics. The main changes include two new pattern description topics, Inversion of Control and Module Interface Separation, and updates to the Model-View-Presenter topic.
  • Included additional guidance for several technical concepts. The technical concepts covered are views testability, modularity, autocomplete, validation, and search. The guidance consists of documentation, QuickStarts, Web controls, and How-to topics.

These concepts were shipped previously as "Bundles" on our CodePlex community.  We will be releasing new, updated VS2008 versions of these bundles on MSDN in the near future, but all the content is in the factory.  The bundles are so folks can get a small taste and look at a single feature or concept in isolation.

  • Added support for Visual Studio 2008 to the guidance package.

VS2008 and .NET 3.5 support was asked for very loudly by customers.

Testing Guidance

Something we do not talk about enough is the fact that we are releasing some guidance on how to test web applications.  First, since we do most development via EDD (Example Driven Development), we have quite a bit in the way of unit tests.  There are a few exceptions, where we intentionally do not include unit tests because we do not want to clutter a QuickStart example with another concept that may hinder understanding of what the QuickStart is about.

We are also shipping acceptance tests.  In a few projects, these acceptance tests are manual test scripts in the form of a Visual Studio Manual Test.  In others, we have completely automated the acceptance tests using the WatiN (Web Application Testing in .NET) framework.

<TheLawyersMadeMeSayIt>

Note: this is a link to an external Web site that provides software that can be used to write acceptance tests. Please note that Microsoft is not responsible for the content of external Internet sites.

</TheLawyersMadeMeSayIt>

To compile the test projects and run the tests, you will need VS2008 Professional or Team System, and to download the WatiN binaries and copy the DLLs to the Lib folder (our help describes this in more detail).  The upside is that anyone can see how we are doing functional/acceptance testing of our sample applications.  Of course, there is a lot more to testing than just acceptance testing, but it is a start.
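To give you a flavor, here is a minimal sketch of what a WatiN acceptance test looks like (my own toy example, not one of the shipped tests; the page, control names, and expected text are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using WatiN.Core;

[TestClass]
public class HomePageAcceptanceTests
{
    // WatiN drives a real IE instance over COM, so the test needs to run
    // on an STA thread (configurable in the test run configuration).
    [TestMethod]
    public void SearchingForWidgets_ShowsResults()
    {
        using (IE browser = new IE("http://localhost/OrderManagement"))
        {
            browser.TextField(Find.ByName("searchBox")).TypeText("widgets");
            browser.Button(Find.ByValue("Search")).Click();

            Assert.IsTrue(browser.ContainsText("Search results"));
        }
    }
}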

Other resources

You will also want to check out what Blaine and Glenn say about the release:

 Espresso Fueled Agile Development : Web Client Software Factory News Feed 

 Eugenio Pace - patterns & practices Client Architecture Guidance : Web Client News Feed 

The Symposium is now behind us and I’m very pleased at how it went. Overall feedback was very positive! Even the weather seemed to have joined us, with unusually sunny days. Thanks very much again to everyone who joined us in Redmond, speakers and attendees. We certainly hope you come again next year. Planning for...

I’m happy to announce the next p&p Symposium here in Redmond. Details below. The patterns & practices Symposium is the event for software developers and architects to have engaging and meaningful discussions with the people creating technologies and guidance at Microsoft. This year’s Symposium topics span the spectrum of...

Update: the same roadmap is now published on MSDN. I adjusted the slide below to make timelines clearer. No other changes. We wanted to share with you all the projects we are either working on or have identified as potential areas of investment in the current Fiscal Year (that started last July and ends...

Training content based on our guides has been as popular as the content itself. You can now download the “Release Candidate” for labs corresponding to the new guide. The labs are more than just a mirror of the guide. We took the opportunity of adding a few things that complement and extend what is explained...

This week, we completed a small PoC for brabant court, a customer that is building a Windows Azure application that integrates with Intuit’s Data Services (IDS). A couple of words on mabbled from brabant court. Mabbled is a Windows Azure app (ASP.NET MVC 3, EF Code First, SQL Azure, AppFabric ACS|Caching, jQuery) that provides complementary services...

In the previous post I covered the “semi-passive” way for authentication between a Windows Phone 7 client and a REST service. This post completes the information with the “active” way. There’s nothing unexpected here really: We call the Identity Provider using a RequestSecurityToken message (RST) We send the SAML token to ACS and get a...

In the last drop, we included a sample that demonstrates how to secure a REST web service with ACS, and a client calling that service running in a different security realm:

image

In this case, ACS is the bridge between the WS-Trust/SAML world (Litware in the diagram) and the REST/SWT side (Adatum’s a-Order app).

This is just a technical variation of the original sample we had in the book, that was purely based on SOAP web services (WS-Trust/SAML only):

image

 

But we have another example in preparation, which is a Windows Phone 7 client. Interacting with REST-based APIs is pretty popular with mobile devices. In fact, it is what we decided to use when building the sample for our Windows Phone 7 Developer Guide.

There’s no WIF for the phone yet, so implementing this on WP7 takes a little bit of extra work. And, as usual, there are many ways to solve it.

The “semi-active” way:

This is a very popular approach. In fact, it’s the way you’re likely to see this done with the phone in many samples. It essentially involves using an embedded browser (browser = IE) and delegating to it all token negotiation until it gets the token you want. This negotiation is nothing else than the classic “passive” token negotiation, based on HTTP redirects, that we have discussed ad infinitum, ad nauseam.

The trick is in the “until you get the token you want”. Because the browser is embedded in the host application (a Silverlight app on the phone), you can handle and react to all kinds of events raised by it. A particularly useful event to handle is Navigating. This signals that the browser is trying to initiate an HTTP request to a server. We know that the last interaction in the (passive) token negotiation is actually posting the token back to the relying party.  That’s the token we want!

 

image

So if we have a way of identifying the last POST attempt by the browser, then we have the token we need. There are many ways of doing this, but most look like this:

image

In this case we are using the “ReplyTo” address, which has been configured in ACS with a specific value (“break_here”), and then extracting the token with the browser control's SaveToString method. The Regex calls you see there simply extract the token from the entire web page.
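Since the code above is a screenshot, here is my sketch of roughly what that handler looks like. The "break_here" value follows the post; the regex, the field name, and the OnTokenReceived callback are illustrative assumptions:

using System.Text.RegularExpressions;
using Microsoft.Phone.Controls;

private void signInBrowser_Navigating(object sender, NavigatingEventArgs e)
{
    // ACS was configured with a ReplyTo address containing "break_here",
    // so the final POST of the passive negotiation is easy to recognize.
    if (!e.Uri.AbsoluteUri.Contains("break_here"))
        return;

    e.Cancel = true; // stop the browser; we have what we need

    // The posted page carries the token in a hidden form field; pull it
    // out of the raw HTML (a real implementation would also decode it).
    string page = signInBrowser.SaveToString();
    Match match = Regex.Match(page, "name=\"wresult\" value=\"(.*?)\"");
    if (match.Success)
    {
        OnTokenReceived(match.Groups[1].Value); // hypothetical callback into the app
    }
}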

Once you’ve got the token, then you use it in the web service call and voila!

With this approach your phone code is completely agnostic of how you actually get the final token. This works with any identity provider, and any protocol supported by the browser.

Here’re some screenshots of our sample:

image image image  

The first one is the home screen (SL). The second one shows the embedded browser with a login screen (adjusted for the size of the phone screen), and the last one shows the result of calling the service.

JavaScript in the browser control in the phone has to be explicitly enabled:

image

If you don’t do this, the automatic redirections will not happen and you will see this:

image

You will have to click on the (small) button for the process to continue. This is exactly the same behavior you get with a browser on a desktop (except that there, in most cases, scripting is enabled).

In the next post I’ll go into more detail on the other option: the “active” client. By the way, this sample will be posted to our CodePlex site soon.

Now that we presented the scenario & the requirements, let’s take a look at the solution.

What is the conceptual solution we propose?

Fabrikam Shipping in the pre-Claims era:

This diagram shows Fabrikam Shipping today if used by Adatum (no claims, no federation):

image

You will see the usual suspects for a typical .NET web application. Furthermore, Fabrikam is using standard providers for authentication, authorization and profile. In this configuration, everyone in Adatum must, of course, use a username & password. The username is the handle associated with a role in the roles database, which drives application behavior (what you can do).

In the example, John from Sales can only Order New Shipments, but Peter from Customer Service can Manage them.

 

Making Fabrikam Shipping Claims-Aware

What we want now is for Fabrikam Shipping to be claims-aware and trust claims issued by Adatum. Claims issued by Adatum will be used for authentication and authorization. We also want to map Adatum internal roles to Fabrikam’s for authorization purposes: who will be a “Shipment Creator”? Who will be an “Administrator”? 

image

Let’s see how this would work:

  1. When John attempts to use FS for the first time (e.g. https://adatum.fabrikamshipping.com), because there’s no session established yet (John is unauthenticated from FS's point of view), he will be redirected to Fabrikam’s Issuer (e.g. https://login.fabrikam.com). Fabrikam’s Issuer is trusted by the application.
  2. Again, John will be redirected to Adatum’s Issuer, because that is what Fabrikam’s Issuer trusts.
  3. If John uses a domain-joined desktop, he’d already be authenticated in his network and will have a valid Kerberos token. This token is used by Adatum’s Issuer to create Adatum’s claims: employee name, employee address, cost center, and the department John works for.  

The process then unwinds:

  1. Adatum’s claims are sent back to Fabrikam’s Issuer, where they are transformed:

- Name, address and cost center are simply copied (no transformation)

- Other rules are applied that result in “role” claims being issued (any of the valid roles FS understands)

More examples of mappings:

exists([issuer == "Adatum"]) => issue(type = "Role", value = "Shipment Creator");

Which can be interpreted as:

“Any employee from Adatum can create shipment orders”

 

c:[type == "http://schemas.xmlsoap.org/claims/Group", value == "Shipments"] => issue(type = "Role", value = "Shipment Manager");

that would implement the rule:

“Any employee from Adatum in “Shipments” (indicated by group membership) department can manage shipment orders”

  2. After these transformations happen, John is finally redirected back to the application with the transformed claims.

Adatum could issue Fabrikam-specific claims, but we don’t want to pollute Adatum’s Issuer with Fabrikam-specific concepts (like Fabrikam roles). Fabrikam will allow Adatum to issue any claims they want or can, and then will allow Adatum to configure the system to map these Adatum claims into Fabrikam claims.  

Fabrikam will do this for every new Customer using Fabrikam Shipping. Yet, their application will always understand the same set of claims: “Shipment Creator”, etc. FS stays decoupled.

 

Note 1:
This scenario is almost identical to IssueTracker. If you feel deja-vu, don’t be surprised. Only in IssueTracker, we used .NET Services ACS as the Service Provider (Fabrikam) Issuer.

Note 2:
This scenario is also similar (but not quite the same) to Adatum’s a-Order. Some key differences: Fabrikam is a multi-tenant system, probably with a provisioning experience, that a-Order lacked. This is because in our fictitious (but hopefully realistic) world, the Customer churn in Fabrikam Shipping is much higher than in a-Order. That is, we assume the frequency customers join and leave Fabrikam is higher. Thus, Fabrikam needs to automate this as much as possible. 

Note 3:
Yes, there will be another post with Adatum’s side of the story. But I’m sure by now you’ll guess what’s in there.

 

I’ll cover provisioning in the next post, as it has some interesting discussion points. But you can see some hints here.

Feedback very much welcome.

 

Post-post announcement:

We hope to have some running code and much more polished chapters soon. We’ll probably upload those to a CodePlex site. Stay tuned!


Once again, thanks to everybody who wrote us with reviews, feedback and suggestions! Please keep it coming! Also: we hope to have a CodePlex site soon where we can start sharing more. We are still working out some details.

As usual, the Disclaimer: this post and the next ones are early drafts to share with you the direction we are taking. They might (and I hope they will) change quite a bit in the actual Guide! We might end up not covering one of these scenarios in the book.

An additional disclaimer for this post: I wrote the whole scenario following the same template as the previous posts and it resulted in a very loooong article. So I divided it into two parts. This is Part I –> the scenario, the challenges and the requirements. Part II will be the solution.

An Ode to Simplification: there’s been quite some debate internal to this project about how to name things, especially “STS” vs. “Issuer” vs. “I-STS” vs. “R-STS” vs. “FP”, etc. Keith started this on his blog some time ago. We definitely want to keep things simple. As simple as possible, but not simpler. For now we have settled on the term “Issuer”, independently of the logical role it plays. In simpler words: what we used to call an “Identity Provider” is now an “Issuer”. What we called a “Federation Provider” is also an “Issuer”.

Keith is writing a whole section of our book on “Jargon” and meaning of the different terms.

Credits: this scenario is largely inspired by Vittorio’s PDC demo. See here.

image

The themes for the first “Service Provider” scenario are:

  1. Identity in a SaaS application
  2. Federation with multiple Customers

There is one variation in this scenario:

  1. Automating the on-boarding process

The Introduction

Fabrikam is a company that provides shipping services. As part of their offering, they have an application, Fabrikam Shipping (FS), that allows its customers to create new shipping orders, track them, etc. Fabrikam Shipping is delivered as a service and runs in Fabrikam’s datacenter. Fabrikam Customers use a browser to access it.

FS is a fairly standard .NET web application: the web site is based on ASP.NET 3.5, the backend is SQL Server, etc. In the current version, users are required to authenticate using (guess what): username and password!!

Fabrikam uses ASP.NET standard providers for authentication (Membership), authorization (Roles provider) and personalization (Profile).

Fabrikam Shipping is also a multi-tenant application: the same instance of the app is used by many customers.

One sunny day in Seattle, they sign a great deal with a marquee Customer: Adatum Corp. And Adatum doesn’t like usernames and passwords, because they are working hard to get rid of identity silos. They have three concerns:

  1. Usability for their employees. Lack of SSO, forgetting passwords, using sticky notes to remember them, etc.
  2. Maintenance costs:
    1. What happens if an employee forgets his or her corporate password? He will probably call Adatum’s IT help desk. What happens if they use FS and they forget its password? Who should they call? Consider this:
      1. If they instruct employees to call Fabrikam’s help desk, there would be a special procedure for the IT guys, it would probably require training, etc.
      2. If they instruct employees to call Fabrikam directly, they would impact #1
    2. When a new employee is hired, he is already provisioned in Adatum’s systems. They don’t want special processes for FS.
  3. Liability:
    1. Adatum has authentication policies that are there for a reason. They also want to retain control over who has access to what (regardless of where it is deployed), and FS is no exception.
    2. If an employee leaves the company, he should not have access to FS anymore, effective immediately. With usernames/passwords, former employees could potentially still access FS from other places, even though they are not Adatum employees anymore.

Back to FS:

Access Control to FS is based on Roles. There are 3 roles:

  1. “Shipment Creators”. Anyone in this role can create new orders.
  2. “Shipment Managers”. Can create and modify existing shipment orders.
  3. “Administrators”. Can configure the system (e.g. look and feel, shipping preferences, billing, etc.).

FS also keeps profile information for users, to avoid repeatedly entering common information and preferences. More concretely, FS allows its users to store:

  1. Package Sender information (sender address)
  2. Cost Center information for billing

Fabrikam can break out the bills to its Customers by Cost Center. With this, two employees from Adatum belonging to two different departments would get two different bills.

 

Key Requirements.

Adatum wants SSO for its employees.

Fabrikam wants to avoid storing configuration information about the shipment that can become stale later on (e.g. the package sender information).

Fabrikam wants to bill customers by Cost Center if they supply one.

Some assumptions:

  1. Adatum has an Issuer (see Scenario #1)
  2. Fabrikam can change anything in their application

We’ll look at the solution space in the next post.

Disclaimer: this post and the next ones are early drafts to share with you the direction we are taking. They might (and I hope they will) change quite a bit in the actual Guide! We might end up not covering one of these scenarios in the book. These posts represent my ideas and not those of my employer, my colleagues, friends, enemies, associates, pets. Read this at your own risk, got it? :-).

 

image

The themes for our first “enterprise” scenario are:

  1. Intranet and extranet Web SSO
  2. Using claims for user profile information
  3. RBAC with Claims
  4. Single Sign Off
  5. Single company
  6. No federation

Variations in the scenario:

  1. Hosting on Windows Azure

The introduction

Adatum is a medium-sized company that uses Active Directory to authenticate its employees.

John is a salesman at Adatum. He uses a-Order, Adatum’s order processing system, to enter, process, track and manage customer orders. John also uses a-Expense, an expense tracking and reimbursement system, to enter his business related expenses.

Both are built with ASP.NET 3.5 and deployed in Adatum’s datacenter.

a-Order uses Windows integrated authentication. When John uses a Windows domain-joined machine, a-Order recognizes the same credentials he used to log in to his PC. He’s never prompted for a username and password. a-Order user roles are stored in the company’s Active Directory.

a-Expense, on the other hand, uses custom authentication, authorization and user profile information. All this data is stored in custom tables in a SQL Server database. John is normally prompted for a username and password whenever he uses this application.

a-Expense AuthZ rules are tied to a username and deeply integrated into the application business logic at all levels (e.g. inside pages, code, stored procedures, etc.). They are also something that belongs to the app itself. That is, the roles and information stored in a-Expense don’t exist anywhere else in Adatum.

a-Expense was initially used by a handful of employees, and over the years, as Adatum grew, it was used by more and more employees. Maintaining the user database (e.g. new hires, retiring users, etc.) is a cumbersome process, and it is not well integrated into Adatum’s existing processes for managing employee accounts.

Some information about the employees using a-Expense does exist somewhere else and needs to be replicated. For example, the “Cost Center” an employee belongs to is already stored in the corporate AD. Updating Cost Center information in a-Expense is a difficult, error-prone process, as it is completely manual (each employee has to update his own Cost Center in his profile). Other user preferences (e.g. the reimbursement method: check, direct deposit, cash, etc.) are private to a-Expense.

John, like most sales people in Adatum, is mobile, often on the road visiting customers. Adatum wants to offer all sales staff the ability to use both applications from wherever they are, not just from its Corpnet (e.g. working from home, connecting through WiFi access points, etc.).

Adatum has many other departmental applications like a-Expense (e.g. a-Vacation, a-Facilities) that are not part of core corporate IT (e.g. they have their own user, role and profile databases). Some of these applications are Windows-based and others are not. Small applications like this appear all the time.

These apps might not be critical in isolation, but together they add up.

Adatum IT Dept is considering moving some of these applications to “the cloud” to decrease CAPEX and simplify management.

Adatum has also received requests from partners to access their systems. Especially a-Order. Adatum’s customers want to be able to track their orders on Adatum’s system.

 

image

What does Adatum want?

Adatum wants to avoid keeping a username/password just for a-Expense. They want their employees to use the same credentials they use to log on to their desktops.

Adatum also wants a standards based solution that works with multiple platforms and vendors.

Adatum also wants to eliminate, where possible, common user profile information that needs to be replicated and maintained between multiple repositories (e.g. the Cost Center).

Adatum wants secure internet access for its employees to their systems.

Adatum has big plans in the future. They want to lay the foundation for more advanced scenarios (like providing access to their Customers to other systems, like a-Order).

They also want an architecture that would allow them to easily move some applications to “the cloud” (Windows Azure in particular), and decrease management and CAPEX costs.

 

What is the conceptual solution we propose?

We’ll do some forward looking in this scenario, and we’ll make some decisions considering the things we want to achieve in the future.

We’ll solve these in multiple steps.

First we’ll introduce an Identity Provider (IP) in the organization. In general we want applications to trust this IP to authenticate its users. In Adatum, an “employee” is a “corporate asset”. No application should keep a custom employee database. It might not be realistic to move all applications initially to use an IP, but for now we just care about a-Expense and a-Order.

This IP will authenticate users and also return common, company-wide user information such as name, e-mail, cost center, office number, phone, etc.

This information is already in AD. We are not requiring (or even suggesting) changing the schema of AD for the needs of any specific application. We are just reusing what’s in there already.

Therefore, some applications will still keep other user profile information that will not be moved to the corporate IP. We want to avoid polluting AD with app-specific attributes, we want to keep management decentralized, etc.

We’ll modify a-Order to use IClaimsPrincipal as opposed to IPrincipal and simply return the AD groups as claims. This is in preparation for opening a-Order to external Partners of Adatum later.
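In code, the change in a-Order is roughly this kind of thing (a minimal sketch assuming the Geneva/"WIF" object model; the role name and claim type URI are illustrative, not from the guide):

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// Role checks keep the same shape, but the roles now arrive as claims
// issued by the IP instead of coming from direct AD group lookups.
IClaimsPrincipal principal = (IClaimsPrincipal)Thread.CurrentPrincipal;
bool canManageOrders = principal.IsInRole("Order Manager"); // illustrative role name

// Common profile data (e.g. the cost center) arrives as claims as well.
IClaimsIdentity identity = (IClaimsIdentity)principal.Identity;
string costCenter = identity.Claims
    .Where(c => c.ClaimType == "http://schemas.adatum.com/claims/costcenter") // illustrative claim type
    .Select(c => c.Value)
    .FirstOrDefault();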

a-Expense will continue to use its own application-specific roles database associated with the users, but will stop using the user database, as authentication moves to the IP.

Aha! I got you!

At this point, the smart reader could say: wait a minute! Why all this? Shouldn’t I just enable Windows Authentication in a-Expense? That would give me SSO, and it is much, much simpler than deploying an IP, issuing claims, configuring the application, etc.

The answer is yes! If that’s all you need, by all means do that. But consider these: 

- This is our first step in a longer journey. We are simplifying some things and slightly (we hope) over-complicating others in preparation for other requirements; an investment, so to speak (e.g. Customers using a-Order later on). 

- Even now, we want more than just SSO. We also want the ability to move a-Expense to Windows Azure, for instance. Windows Authentication is not an option there.

 

In this first stage then we propose:

 

 

image

Adatum wanted to enable John to work from home. One option is to simply publish the apps and the IP on the internet (probably with some firewall rules + proxy). Because there’s no Kerberos authentication happening, the IP will prompt John for a username & password (the same ones he uses to log in to Adatum's network), then it will issue the same token with John's claims, and finally he will be redirected to the app:

image

Adatum could choose to add other authentication factors when connecting from the internet (smart card, pin, etc). But whatever they do, the applications don’t care! They are still receiving the same set of claims. Here you see one advantage of factoring out authentication from the application.

The last stage is moving a-Expense to Windows Azure. Can we do that? Sure!

image

Isn’t this beautiful?

Feedback is very much welcome of course.

I will start this series by introducing the main characters of our scenario.

First, we have VeryBigCorp. VBC is a large corporation, with multiple branches and subsidiaries, thousands of employees, etc. VBC is the typical organization with a rather complex business environment: multiple business units, complex rules, regulations, etc.  

image

VBC's IT department is a reflection of this complexity: they have lots of legacy components, multiple networking stacks, and a rich myriad of technologies coexisting in its data centers. VBC develops custom applications for some of their business units, but they also buy packages from specialized vendors.

VBC IT has multiple processes in place to deal with all these challenges: there are architecture and development guidelines that everyone is supposed to follow, there are software development lifecycle processes, standards, naming conventions, etc. All these are there for good reasons, but sometimes they create a perception of lack of agility and excessive bureaucracy.

Most technology acquisitions in VBC are handled by the IT department following strict steps.

The second character in our story is SuperCloudySoftware, a service provider (a "cloud ISV" if you want).

SCS has embraced the web since its foundation. SCS innovates very quickly, pushes updates on its service regularly based on customer feedback, focuses on user experience, etc. They are the ultimate "agilists".

SCS focused initially on smaller businesses, even some consumers. Their flagship service is IssueTracker, a task tracking service.

IssueTracker is only available as a service. That means that you can't buy a license of it and deploy it in your own data center.

From the beginning SCS made the strategic decision of making IssueTracker available through "multiple heads":

  1. There is a Web Client that only requires a browser
  2. There's a Smart Client that provides a richer UX and enhanced connectivity options (e.g. working offline) and
  3. There's also a Web Services API covering all functions, which allows anybody to create their own clients or integrate with other client environments such as Microsoft Office.

IssueTracker itself relies on cloud building blocks. For example, the persistence of the application is based on SQL Data Services. This of course is completely opaque to their customers.

The next chapter will cover the IssueTracker acquisition process in VeryBigCorp.

When we implemented claims-based authorization in LitwareHR, we had to write a lot of code and play with non-trivial configurations (LitwareHR includes two STSes and all the supporting infrastructure for securing the web services and their callers).

Not being a security expert myself, I found the “theory” behind this amazingly simple and powerful, but the “practice” quite complex.

The good news is that all this just got much easier with the release of “Zermatt”:

“Zermatt” is a .NET developer framework and SDK that helps developers build claims-aware applications to address today’s application security requirements using a simplified model that is open and extensible, can improve security, and boosts productivity for developers.  Developers can build externalized authentication capabilities for “relying party” applications and build custom “identity providers”, often referred to as Security Token Services (STS).  With these components, developers can build applications that meet a variety of business needs more quickly.

Quoting my good friend Peter Provost: “I love deleting code!”. “Zermatt” will allow us to get rid of a ton of "plumbing" code in LitwareHR.

Resources:

Link to the beta:  http://go.microsoft.com/fwlink/?LinkId=122266

Download Keith Brown's Whitepaper http://go.microsoft.com/fwlink/?LinkId=122266

More info on MSDN:  http://msdn.microsoft.com/en-us/security/aa570351.aspx

Maestro Bertocci's blog: http://blogs.msdn.com/vbertocci

Kim Cameron blog: http://www.identityblog.com 

Keith Brown blog & article: http://www.pluralsight.com/community/blogs/keith/archive/2008/07/09/introducing-microsoft-code-name-zermatt.aspx 

Requirements:

“Zermatt” requires .NET 3.5 to be installed. It has been verified on Windows 2K3 SP2 with IIS 6.0, Windows Vista SP1, and Windows Server 2008 with IIS 7.0.

As a by-product of a project I'm working on, I've got a working version of BlogEngine.NET on SSDS. You can download the provider and other related components from LitwareHR's web site releases section here.

In the package you will find:

  1. The SSDS-based BlogProvider
  2. SSDS-based Membership & Role providers for the web site
  3. Unit tests for all (with about 93% coverage for the provider)
  4. A simple tool for pre-loading the SSDS container with information that BlogEngine needs to start

The underlying data access is inspired by LitwareHR's, although I've heavily refactored it. The key ideas remain the same: there's a Repository<T> type, it uses SOAP to access SSDS, etc., but the code is simpler, responsibilities are better balanced and distributed, and I eliminated superfluous code here and there.

The new SSDS BlogProvider also uses the patterns & practices Unity application block to wire up dependencies, so you write things like this:

public override Post SelectPost(Guid id)
{
    Post be_post;
    Entities.Post post;

    using (IRepository<Entities.Post> r_post = RepositoryFactory.Build<Entities.Post>())
    {
        post = r_post.GetById(id);

        if (post == null)
            return null;
    }

    // Map the SSDS entity to a BlogEngine.NET Post before returning it.
    be_post = SSDSPostToBEPost(post);

    return be_post;
}


RepositoryFactory.Build<T>() will use Unity's Resolve() method to find the appropriate Repository implementation with all its dependencies configured. The Build<T>() body looks approximately like this:




// A UnityContainer is created first (shown here for completeness).
IUnityContainer c = new UnityContainer();

c.RegisterType(typeof(ITenantManager), typeof(DefaultTenantManager));
c.RegisterType<SitkaProxyFactory, SitkaProxyFactory>();
c.RegisterType(typeof(EntityMapper<T>), typeof(GenericMapper<T>));
c.RegisterType(typeof(IRepository<T>), typeof(Repository<T>));

IRepository<T> r = c.Resolve<IRepository<T>>();
r.TenantId = Constants.BlogEngineContainer;
return r;

I'm still learning about Unity so there might be better ways of implementing what I did, but it works.



 



Things I've liked about BlogEngine.NET:



  • The provider model. I think it was a great design decision that allowed me to write this fairly quickly. I didn't want to look too much into other pieces of BlogEngine.NET, and this provided a clear separation. 
  • The code structure is quite neat. Navigating it was painless, and I used the same structure implemented in the XmlBlogProvider.
  • It's a nice app with a lot of functionality and a fun project to work with!

Things that I missed or would have liked to be different:



  • The one thing I missed the most is pre-built unit tests for the provider. 
  • There are some assumptions on the engine and core components about where it runs (particularly ASP.NET) that introduced some issues.
  • I only changed one line of code (which was superfluous anyway), and that is this line in the Category class in BlogEngine.Core:

internal static string _Folder = System.Web.HttpContext.Current.Server.MapPath(BlogSettings.Instance.StorageLocation);



This prevented me from unit testing my provider in complete isolation. It turns out that the variable wasn't needed anyway, so I commented it out.



  • There are other features in the app that are not abstracted in the provider (or in another replaceable component). For example, file attachments are handled in the Add_Entry.aspx web page. I didn't want to use the file system at all, but I also didn't want to change the code, so I left it as is. There are other parts of the app that have direct dependencies like this.
  • I wasn't sure about the app exception handling architecture, so I opted to do what the other providers seem to be doing: just throw and hope the upper layers handle it :-).

 



BlogEngine.NET expects some initial data: a user, an admin role and some initial settings. The unit tests will make sure these exist prior to running, and I also created a "Provisioning" console application that populates the container with the needed information. You need to configure all appropriate configuration files with your SSDS credentials.



Finally, because I'm using LitwareHR's data access, you can use the offline proxy to test the system independently of SSDS. So, even before you get an SSDS beta account, you can start playing with it.



Feedback is welcome, as usual.

The latest version of LitwareHR is now available for download here.

From our CodePlex site:

This new enhanced version of LitwareHR includes the following new features:

  • Upgraded codebase for SQL Server Data Services support
    • New "DataModel"
    • New infrastructure for data access to SSDS (including caching, cross-container search, offline development, etc.)
  • New UX
    • Better graphics and layout
    • Helper links for streamlining demos (demo "auto-complete")
  • Various bug fixes
  • New "Dependency Checker" tool to verify and install pre-requisites
  • New configuration tool to setup LitwareHR (SSDS config, etc)

The MSI contains all the tools, source code and some binary dependencies (e.g. EntLib) to build LitwareHR. You will need Visual Studio 2008 to build everything. This version of LitwareHR is meant to use SQL Server Data Services (SSDS) as the on-line storage provider. You can register for a beta account with this service http://www.microsoft.com/sql/dataservices/default.mspx.

As usual, any feedback is greatly welcome!

WCF Syndication APIs are great. In just a couple of hours I wrote a simple GeoRSS feed, a fairly high level library to encapsulate the details of common entities (lines, points, polygons), all unit tests and a mashup with VE to test it.

I only encountered a couple of incompatibilities and problems (mainly related to XML namespaces), but everything was straightforward and easy. VE doesn't allow the RSS feed to come from a different server than the one the map is hosted on, so you need a proxy to work around this limitation (very well explained in this blog entry; a must-read for anyone doing VE stuff. Thanks for the code!). 

Being fairly new to this space, I was amazed at the power of all these infrastructure components, and the apps that could be built with this stuff; and I know I have just barely scratched the surface.

My library allows me to easily author a GeoRSS feed:

image
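The screenshot above uses my library's types; underneath, the WCF Syndication mechanics look roughly like this (a sketch of mine, not the library code; the titles, URLs and coordinates are made up):

using System;
using System.ServiceModel.Syndication;
using System.Xml;

SyndicationFeed feed = new SyndicationFeed(
    "Sample GeoRSS feed", "Points of interest", new Uri("http://example.com/feed"));

SyndicationItem item = new SyndicationItem(
    "Space Needle", "A point in Seattle", new Uri("http://example.com/items/1"));

// GeoRSS-Simple: a <georss:point> extension element carrying "lat lon".
item.ElementExtensions.Add("point", "http://www.georss.org/georss", "47.6205 -122.3493");
feed.Items = new[] { item };

// Write the feed out as RSS 2.0.
using (XmlWriter writer = XmlWriter.Create("feed.xml"))
{
    new Rss20FeedFormatter(feed).WriteTo(writer);
}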

GeoRssData is a simple object model for geo information. GeoRSSFeedBuilder translates the domain model into an RSS feed with the proper extensions. This is all quite easy with the WCF Syndication classes. Finally, this is the result in IE:

image

Billing is obviously a very important concern; after all, Litware has to get paid! As described in Part I, Litware is looking at two models for Billing:

    • Subscription based: (For example: $X/tenant/month up to 5 seats/tenant + $Y/month for additional seats)

    • Usage based: $Z/"Submitted Resume"
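To make the two models concrete, here is a toy calculation (the method names, the 5-seat threshold handling, and the rates are placeholders of mine, not from the post):

public static class BillingModels
{
    // Subscription based: $x/tenant/month covers up to 5 seats;
    // each additional seat costs $y/month.
    public static decimal SubscriptionCharge(int seats, decimal x, decimal y)
    {
        int extraSeats = Math.Max(0, seats - 5);
        return x + extraSeats * y;
    }

    // Usage based: $z per "Submitted Resume".
    public static decimal UsageCharge(int submittedResumes, decimal z)
    {
        return submittedResumes * z;
    }
}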

In our imaginary scenario, we'll assume that Litware billing requirements are easily met by NWH billing capabilities. NWH Billing system supports these two modus operandi and offers rich customization, reporting, tax considerations, etc. So, for the purposes of this exercise, NWH Billing is state of the art.

But the question is: how does NWH Billing know what and when to bill? Or, in other terms, how to "glue" LitwareHR to the billing system? And, more importantly from Litware's perspective, how intrusive this is. Remember, Litware is willing to change the app, but only if the benefits they get outweigh the liabilities.

Everybody favors a non-intrusive mechanism, because it minimizes changes to the app. Not by chance, NWH knows that the fewer requirements and constraints they impose, the more ISVs they will attract.   

NWH Application Model actually makes this fairly easy, by using an interception mechanism:

 

NWH-BILLING

Figure 1 - Northwind Hosting Operations Interception 

 

Because the application is expected, by design, to expose all its functionality through WCF web services, NWH has the chance to intercept all traffic to it (whether from the Web Client or from any other client calling the app's web services). WCF provides a very powerful foundation for this kind of architecture (see behaviors and message inspectors).  
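As a hint of what that hook looks like in WCF, here is a bare-bones message inspector (my sketch, not NWH code; the class name and the elided lookup logic are assumptions):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class BillingMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        // The SOAP Action header identifies the Operation being invoked;
        // this is where a gatekeeper/billing hook can inspect the call.
        string action = request.Headers.Action;

        // ... look up the Operation, check that it is enabled, raise any
        // billable events, etc. ...

        return null; // correlation state (unused here)
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Nothing to do on the way out for this sketch.
    }
}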

What NWH does is very simple:

  1. When the application is on-boarded, NWH looks in the app manifest for any "Business Action" definitions (remember, Business Actions are just "managed" Operations) and stores that information in the Business Action database.
  2. One of the pieces of information is a "Billable Event": a name that identifies the billing event that is triggered each time the associated Operation is invoked.   
  3. At runtime, the interceptor will forward the event to the Billing System for processing.

 

The (unoptimized) interceptor is illustrated in pseudo-code in Figure 2. Message represents the incoming request, Operation is the Action being invoked, and CurrentContext represents the contextual information associated with this particular invocation (it will normally contain the tenant identifier, credentials, and any other out-of-band data).  

 

NWH-BILLING-PSEUDOCODE

Figure 2: Pseudo-code for an interceptor

 

Notice that the first thing the interceptor checks is whether the Operation is enabled. "Enabling/Disabling" an Operation allows NWH to put a gatekeeper on execution. This might be necessary when a request arrives from an overdue tenant, for example. The Billing system might flip the bit for those in debt automatically, as hinted in Figure 1.
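For readers who can't see Figure 2, my C# reconstruction of the pseudo-code goes roughly like this (Operation, CurrentContext and the billing call are the post's concepts; the exact shapes are my assumptions):

public Message InterceptRequest(Message message, Operation operation, CurrentContext context)
{
    // Gatekeeper: a disabled Operation is rejected outright (e.g. the
    // Billing system flipped the bit for an overdue tenant).
    if (!operation.IsEnabled)
    {
        throw new FaultException("This operation is currently disabled.");
    }

    // If this Operation is a managed "Business Action" with a billable
    // event, forward the event to the Billing System for processing.
    if (operation.BillableEvent != null)
    {
        billingSystem.RecordEvent(operation.BillableEvent, context.TenantId);
    }

    // Finally, let the request through to the application's web service.
    return ForwardToService(message);
}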

This mechanism perfectly serves Litware's requirement of "turning the application read-only" for non-paying customers. But there might be other good scenarios for this feature: isolating malfunctioning operations, maintenance, etc. The Control Panel allows manipulation of the metadata for users or NWH Operators.

Notice there's no change required in Litware if it complies with the Application Model.  

Because nothing of value is free in this world, this approach has a cost. If the interceptor is poorly written, it has the potential of causing a lot of trouble: degrading performance, bringing instability, etc. It is definitely a piece of code that needs some good developers assigned to it. A good implementation will use lots of smart strategies like caching, short-circuiting through appropriate configuration, asynchronous calls to decouple billing event processing from the main thread, etc.   

Also, consider some questions specific to the billing example: when should the event be generated, at the beginning or at the end of the execution? If whatever the Operation does fails and throws an exception, should the event be generated anyway? Or consider this: if the Operation runs in a transactional context, should the event be associated with the transaction? Real-world interception mechanisms have provisions for all of these.

Now, if you are wondering whether this approach is actually used in the real world, or if it's just Eugenio's design after drinking grappa, just take a look at the CRM Live services design as presented at the last MIX (red highlight is mine):

 

NWH-BILLING-MIX-CRM-LIVE

Figure 3: CRM Live Architecture

 

(Full deck & video here: http://sessions.visitmix.com/default.asp?event=1011&session=2012&pid=DEV09&disc=&id=1518&year=2007&search=DEV09)

On-Boarding is the process of bringing the application into operation: everything that needs to happen to hand off the app from the ISV to the hoster.

In the scenario described in the previous post, I mentioned that Northwind requires ISVs to package the application in a specific way. The intention, of course, is to automate the process on their side as much as possible, avoiding manual steps (no phone calls, no e-mails, nothing...).

NWH calls this a "Hosting Pack" and it will typically contain:

  • The collection of assets that need to be installed in the data center (code, files, images, db scripts, etc.): everything needed to install and run the app
  • A description of those assets, their relationships, and other important information we'll discuss below

Notably, some assets are forbidden in the Pack: config files, for example. Because config files contain deployment-specific information (server names, IP addresses, connection strings, etc.), NWH does not allow ISVs to author them directly. They are generated based on data-center-specific requirements.

The package is the input for the on-boarding system which is responsible for:

  • Verifying the package integrity and consistency
  • Verifying that the basic rules are followed (like the no-config-files rule, for example)
  • Installing the package in a test environment for ISV approval
  • Promotion of the package from the test environment to production

NWH doesn't really care much about the internal details of the application, as long as it follows the basic architecture described in their model. For example, direct communication between the web tier and the database is forbidden, and this is enforced at the network level.

NWH provides ISVs with a tool that allows them to author packages and has these rules built in, to minimize the chance of misconfiguration. Think of it as an "FxCop" for on-boarding.
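To give a flavor of the kind of rule such a tool might enforce, here is a hypothetical check for the no-config-files rule described above (the pack layout and all names are invented for illustration):

    using System.IO;
    using System.Linq;

    // Hypothetical on-boarding rule, in the spirit of an "FxCop" for packages:
    // config files are forbidden in the Hosting Pack because NWH generates them.
    public static class NoConfigFilesRule
    {
        public static string[] FindViolations(string packRoot) =>
            Directory.EnumerateFiles(packRoot, "*.config", SearchOption.AllDirectories)
                     .ToArray();
    }

    // Usage sketch:
    //   var offenders = NoConfigFilesRule.FindViolations(@"C:\packs\litwarehr");
    //   if (offenders.Length > 0) { /* reject the pack and report the offenders */ }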

 

Figure 1: On-Boarding in Northwind Hosting

 

As I'm sure you imagined, a fundamental component of the Hosting Pack is the application manifest. The assets are the ingredients, but the manifest is the recipe. Without the recipe, you might not know what to do with the ingredients (unless it is garlic, which goes well with anything :-)).

Figure 2 shows a simplified view of the NWH manifest.

 

Figure 2: Northwind Hosting Application Manifest

 

The manifest describes the entities that are significant to NWH for different reasons:

  • Deployment considerations (e.g. the files to be deployed in a Web Site)
  • Capacity planning & SLA management (expected response times for well-known entry points to the app like web pages and web services, db size, etc.)
  • Billing management (billable events associated with Operations)
  • Customer support (who to contact at the ISV)

Let's pick one notable entity in the manifest: the "Business Action".

A "Business Action" is simply a managed interaction between the web site and the web services. There's a 1:1 relationship between a Business Action and a WCF Operation, but not all Operations are Business Actions; just those that need to be "managed" by NWH, for example because there's a billable event associated with the Operation or an SLA attached to its execution.
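To make this concrete, here is a hypothetical sketch of how the Business Action portion of the manifest might be modeled for XML serialization. The element and attribute names are invented; the post does not show NWH's actual schema:

    using System.Xml.Serialization;

    // Hypothetical shape of the Business Action section of the NWH
    // application manifest; all names are illustrative only.
    [XmlRoot("ApplicationManifest")]
    public class ApplicationManifest
    {
        [XmlElement("BusinessAction")]
        public BusinessAction[] BusinessActions { get; set; }
    }

    public class BusinessAction
    {
        // The WCF Operation this Business Action maps to (1:1).
        [XmlAttribute]
        public string OperationName { get; set; }

        // Name of the billing event raised each time the Operation is invoked.
        [XmlAttribute]
        public string BillableEvent { get; set; }

        // Optional SLA attached to the Operation's execution
        // (e.g. expected response time in milliseconds).
        [XmlAttribute]
        public int ExpectedResponseTimeMs { get; set; }
    }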

In the next article, I'll cover more details of these "Business Actions" in the context of Billing and SLAs.

In its current version, LitwareHR is focused very much on the challenges an ISV faces while building a SaaS-delivered app: multi-tenancy, configurability, etc.

In the last few weeks I've been exploring the implications of hosting LitwareHR on a hypothetical hosting platform. The goal was twofold: identify the changes that would have to be made in the LitwareHR code base to support specific hosting scenarios, and explore the interaction points between hoster & ISV.

I've already published some of this work in a previous post focused on Provisioning. As described there, externalizing Provisioning required changes in LitwareHR, but it was pretty straightforward because the functionality was well encapsulated.

But of course Provisioning, as important as it is, is just one aspect. This series of articles will cover what I've found. As usual, any feedback is greatly appreciated.

Eventually we hope to publish this content and the collateral generated (code, demos, etc.), but realistically that will take some time, as we need to invest more in polishing, testing, etc.

So let's start with a first article on the scenario itself. Who are the actors? What motivates Litware to search for a hoster as opposed to self-hosting? What characteristics is Litware looking for in a hoster? What attributes are important to them?

 

The Scenario

The Actors in the exercise are:

  • Litware, a SaaS ISV
  • Northwind Hosting (NWH), a SaaS hoster

Litware is searching for a hoster to run and operate their flagship SaaS-delivered application (LitwareHR). Litware is a startup ISV that lacks the funding, skills, and experience to operate their solution. They want to enter a global market and are therefore searching for a hoster with presence in different geographies. Quality is important, so they are also looking at contracting high SLAs.

Other Litware requirements can be summarized as follows:

  • Litware wants a seamless on-boarding process. They want to be up and running in a matter of days rather than weeks. They want to be able to upload new versions and updates with the certainty that, if something goes wrong, they can immediately roll back to the previous well-known state.
  • Litware wants a fully automated provisioning system for tenants. They want new potential customers to have a seamless experience when trying and buying their app.
  • Litware wants to bill their customers in two ways: on a subscription basis (a fixed amount of $/tenant/month) and on a transaction basis (in their case, per Resume Submitted). They don't want to build a billing system on their own, because they want to focus resources on enhancing LitwareHR features.
  • Foreseeing dramatic growth, they want the hoster to be able to allocate resources dynamically and transparently to keep their SLAs at agreed levels. They also expect the hoster to help them troubleshoot and diagnose performance issues.

Litware is willing to make changes to the app to make this happen, increasing the app's level of compliance with the hoster's environment and capabilities.

Among the different hosters available, they have found that Northwind Hosting is a "SaaS" hoster that goes beyond "power, ping & pipe", has good coverage worldwide with datacenters in the principal markets Litware is interested in expanding into, and offers advanced services like:

  • Metering & Billing to tenants on behalf of the ISV
  • Tenant Provisioning engine
  • Identity & Role management
  • Automatic on-boarding with multiple staging environments
  • Application Management & Optimization services including:
    • Exception logging and escalation
    • Customer support
    • Profiling and tracing
    • Automatic capacity management to comply with SLAs

Northwind specializes in hosting applications built on the Microsoft distributed-application technology stack: .NET, ASP.NET, WCF, and SQL Server based transactional systems. They offer high-availability systems and good coverage at a good price, if the application is built according to certain rules.

Northwind has streamlined their operations for applications that are compliant with a specific model (Figure 1), in which:

  • Applications are made up of sets of Web Sites, Web Services, and Databases
  • Web Sites must be implemented in ASP.NET 2.0 and must interact with WCF-exposed web services
  • WCF services interact with SQL Server databases
  • WCF services interact with a user repository through Directory Services (themselves exposed as web services)

 

Figure 1: Northwind Hosting - Application Model

 

This might sound quite restrictive, but NWH is able to offer competitive pricing with high SLAs if the application is built according to these principles. Coincidentally, the LitwareHR design is very close to this. (By pure chance, of course :-))

However, NWH has some additional requirements and constraints, for example:

  • All .NET assemblies are private to the application (no components should be installed in the GAC)
  • All configuration should be stored in standard .NET files (web.config files)
  • Exception logging and handling should be done with the Microsoft Enterprise Library
  • Data access should be done with the Enterprise Library Data Access Application Block
  • Only standard extensions to Enterprise Library are allowed (meaning extensions must be made through Microsoft's published extensibility points)
  • Enterprise Library binaries will be supplied by NWH

Anything that differs from these rules can be negotiated, but it is guaranteed that, for the same SLA, the costs will be higher, as illustrated in Figure 2.

Figure 2: Cost increase for increased SLA with different levels of compliance

    

NWH, of course, publishes these models and restrictions, together with an SDK containing the APIs to be called for specific services. In its current implementation, NWH publishes an SDK for the following services:

  • Tenant resources provisioning
  • Identity Management (users & roles)

The ISV is required to call these APIs for these features.
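To make the shape of such an SDK concrete, here is a hypothetical usage sketch. Every identifier (NwhProvisioningClient, NwhIdentityClient, etc.) is invented for illustration; the actual SDK surface is not shown in this post:

    // Hypothetical NWH SDK surface; all names are invented for illustration.
    public class Tenant { public string Id; }
    public class User { public string Id; }

    public class NwhProvisioningClient
    {
        // Asks the hoster to allocate tenant resources (site, db schema, etc.).
        public Tenant CreateTenant(string name) => new Tenant { Id = name };
    }

    public class NwhIdentityClient
    {
        public User CreateUser(string tenantId, string email) => new User { Id = email };
        public void AssignRole(string tenantId, string userId, string role) { }
    }

    public class TenantSignup
    {
        // What an ISV's sign-up code might look like when it delegates tenant
        // provisioning and identity management to the hoster's SDK.
        public void Provision(string tenantName, string adminEmail)
        {
            var provisioning = new NwhProvisioningClient();
            var tenant = provisioning.CreateTenant(tenantName);

            var identity = new NwhIdentityClient();
            var user = identity.CreateUser(tenant.Id, adminEmail);
            identity.AssignRole(tenant.Id, user.Id, "TenantAdministrator");
        }
    }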

Finally, Northwind Hosting requires ISVs to package the application in a predefined way and to supply additional information for deployment and operations in an "Application Manifest". Northwind requires this so they can fully automate the on-boarding of a new app. We will call the "stuff" that the ISV hands over to the hoster the "Hosting Pack". Think of it as an "MSI for hosting".

In the next chapters we will drill down into these different pieces in the context of Northwind Hosting and Litware's LitwareHR. What is in the "Hosting Pack"? What does the "Manifest" describe? How is it authored and consumed? How intrusive are Northwind's APIs?

 

The upcoming workshop on the WCSF next March 12th is sold out. If you are still interested in participating, please contact me through this blog and I will put you on our waiting list.

 

We normally have some late cancellations, so you might be able to make it.

Thanks, and I look forward to seeing you!

