April 15, 2019

Offboarding for a Good User Experience

In this post I’ll be sharing a finding from Nobel Prize-winning psychologist Daniel Kahneman about how humans judge and remember experiences. I’ll then make the case that it may be as important, if not more important, to consider the tail end (offboarding) of our users’ engagement as the beginning (onboarding).


First Impressions and Onboarding

Much has been said over the ages about the importance of making a good first impression. My grandmother once told me “there’s nothing more important than a well-starched and ironed shirt”. Now, whether you believe family merits being ranked above or below a starched shirt is a debate for another time, but nonetheless, the wisdom passed on is to make your first impression count.

We can see the carryover of this wisdom in our industry through the emphasis we have placed on the onboarding process for applications. Apart from first impressions, a well-designed onboarding process is useful in many other ways. We know that any tool, even the most basic ones, come with a learning curve. A well-designed onboarding process is like the initial tour you might give a houseguest. You may initially show them to their room, and then take them to see the kitchen, the bathroom, perhaps explain how to work that pesky shower, and maybe even point out the raised step in the entryway that might have otherwise caused a stubbed toe. It’s an efficient means to demonstrate capabilities, describe where they are, how to navigate, along with mitigating potential errors and frustration.

But what if we thought of our applications not just as pure utility, but as a true experience? We do call it User Experience Design, don’t we? How do we as humans judge the quality of a given experience?

The Experiencing Self vs. the Remembering Self

According to Nobel Prize-winning psychologist Daniel Kahneman, we think about experiences in two distinct ways: one in real time, which he calls ‘the experiencing self’, and one in retrospect, ‘the remembering self’. What is notable about these two selves is that they do not always line up with each other logically, and we often end up ranking experiences quite differently in hindsight.

Let me describe an experiment he uses to illustrate this phenomenon. In one condition, a participant was asked to place his hand in cold water (14 degrees Celsius). After a set time (60 seconds), the researcher told him he could remove his hand. In the second condition, the participant was again asked to place his hand in cold water (14 degrees Celsius), but after 60 seconds, instead of ending the trial, the researcher raised the temperature slightly to 15 degrees Celsius. After a short while, the researcher then told him he could remove his hand. Finally, participants were given an exit survey and asked: if you had to redo one of the two experiences you just participated in, which would you choose? It turned out they strongly favored the second condition. Even though the second condition contained all the discomfort of the first and then some, because it ended better, it was remembered as better.

The ‘remembering self’ is the storywriter of our lives and when we make decisions, it is what we reference. It takes changes, significant moments, and most importantly, endings, and condenses them all down into what we recall as our experience.

Lasting Impressions and Offboarding

The fact that our memory places so much emphasis on how an experience ends may yield new opportunities for improving our applications. Perhaps to deliver a truly great experience we need to be as mindful of the tail end of our user flows, processes, and funnels as the beginning.

I challenge you to take some time and identify the most common user exit points from your application or website. Then think about the quality of these exits from the perspective of a user. Are users being left with a feeling of resolution and satisfaction? How can you improve a user’s last and experience-defining impression?

We have carefully considered our first impression, but now it’s time to think seriously about our lasting impression.

For more information on Kahneman’s findings on this topic consider buying his NYT bestselling book “Thinking, Fast and Slow”.

He also gave a TED Talk on the subject in 2010.

January 23, 2019

Forcing NuGet Package Usage in Visual Studio/MSBuild


In this post I’ll discuss a technical issue we came across during a client’s CI/CD implementation for their website’s deployment. The core issue turned out to be a version difference between the development environments and the machines building the deployment artifacts, which caused the build to fail remotely while succeeding wonderfully on our local machines. What follows is a recap of how we discovered the issue and the steps we took to resolve it.


NOTE: While this is written in the context of TeamCity, the scenario described can (and almost certainly does) happen outside that environment.

Recently, while working on a .NET project for a client, we ran into an interesting issue with an older version of TeamCity. The TeamCity version in use supported up to Visual Studio 2015, but our project was being developed in Visual Studio 2017. From a framework perspective this doesn’t matter at all; either environment would build and run the project equally well locally. The problem that surfaced during the TeamCity build looked something like this:


The imported project “<ApplicationInstallationRoot>\MSBuild\Microsoft\VisualStudio\v14.0\WebApplications\Microsoft.WebApplication.targets” was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

Essentially, the MSBuild agent couldn’t find the Microsoft.WebApplication.targets file under the “VisualStudio\v14.0” folder on the build server; the version of the targets our project needed ships with Visual Studio 2017, and Visual Studio 2017 was not present on the server.

A little bit of searching suggested that installing the MSBuild.Microsoft.VisualStudio.Web.targets package from NuGet (which supplies Microsoft.WebApplication.targets) would resolve the issue, so we installed that package.
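
For reference, installing it adds an entry to the project’s packages.config along these lines (the package id and version below are taken from the import path shown later in this post; this is a sketch, not the client’s exact file):

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- MSBuild.Microsoft.VisualStudio.Web.targets supplies Microsoft.WebApplication.targets -->
  <package id="MSBuild.Microsoft.VisualStudio.Web.targets" version="14.0.0" />
</packages>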

Eventually this does solve the problem, but a few things need to be in place for it to work:

  1. A NuGet restore step (exactly how it’s configured depends on your TeamCity version), so the package is restored to the build location before the build runs.
  2. The build step for the application itself.
  3. In our case, an OctopusDeploy: Create Release step.

Our final process in TeamCity consisted of exactly those three steps.

With everything configured and the appropriate NuGet package in the project, all should be well.

Except it wasn’t.

We re-ran the build and got the exact same error as before. This seemed strange, because we could see the package being pulled in during the package restore step, and the version we needed should have been available.

After struggling with a good number of suggestions online, we came across something regarding the .csproj file that looked like it could be a solution. Realizing that editing the .csproj file is not anyone’s idea of a good time, but in dire need of an actual solution, we rolled up our sleeves and dug in.

What we found was interesting.

In the imports section (per some suggestions from googling) we found this line:

<Import
Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets"
Condition="'$(VSToolsPath)' != ''" />

This is where the problem originates. The project’s $(VSToolsPath) property was resolving to the Visual Studio 2015 install location, which didn’t have the version of Microsoft.WebApplication.targets we needed; that would have been in the Visual Studio 2017 install directory.
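
For context, $(VSToolsPath) isn’t magic; it’s defined near the top of a web application .csproj by a couple of standard properties. Ours looked roughly like the stock template below (a sketch, not the client’s exact file):

<PropertyGroup>
  <!-- Falls back to 10.0 when the build doesn't supply a VisualStudioVersion -->
  <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion>
  <!-- With VisualStudioVersion set to 14.0 (Visual Studio 2015), this resolves to ...\MSBuild\Microsoft\VisualStudio\v14.0 -->
  <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
</PropertyGroup>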

Interestingly enough, there was another import underneath it, with an absolute path to where the NuGet package would live, but with a Condition="'false'" attribute that caused that import to be ignored.

By changing the condition to "true", we enabled the second import; but because a duplicate import gets ignored during the build, we also commented out the first one.

At this point, it should be noted that the package location in that import is where Visual Studio thought the package should live on the development machine, so that path wasn’t exactly right either. We were able to get an absolute path working through trial and error, but that approach is not ideal, because it tightly couples the project to the build environment and its file structure.
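
To make that concrete, here’s roughly what the imports section looked like at this intermediate stage (the absolute path below is a placeholder, not the real machine-specific path from the project):

<!-- Original import, commented out since a duplicate import gets ignored at build time -->
<!--
<Import
Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets"
Condition="'$(VSToolsPath)' != ''" />
-->
<!-- Second import with its condition flipped from 'false' to 'true'; the path is a placeholder -->
<Import
Project="C:\SomeDevMachinePath\packages\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0\tools\VSToolsPath\WebApplications\Microsoft.WebApplication.targets"
Condition="'true'" />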

In the end, we were able to build an import that both:

  • Forces the use of the Nuget package
  • Verifies the package is there before importing, so an absence of the package will kill the build very quickly

The final import looks like this:

<Import
Project="..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0\tools\VSToolsPath\WebApplications\Microsoft.WebApplication.targets"
Condition="Exists('..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0\tools\VSToolsPath\WebApplications\Microsoft.WebApplication.targets')" />

This import points at the relative path to the NuGet package, and its Exists() condition ensures the package is actually present in that folder before the import is allowed to happen.

With that in place, the TeamCity build goes off without a hitch, and OctopusDeploy gets its artifact.

It would be possible to provide fallback options for packages by inverting the condition, but in our case we wanted the build to fail outright if the package was not in the expected location.
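
As a sketch of that fallback idea (not what we shipped), the conditions could be arranged so the build prefers the NuGet copy when it exists and otherwise falls back to the machine-wide targets:

<!-- Hypothetical fallback variant: use the NuGet copy when present, otherwise the machine-wide $(VSToolsPath) copy -->
<Import
Project="..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0\tools\VSToolsPath\WebApplications\Microsoft.WebApplication.targets"
Condition="Exists('..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0\tools\VSToolsPath\WebApplications\Microsoft.WebApplication.targets')" />
<Import
Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets"
Condition="!Exists('..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.14.0.0\tools\VSToolsPath\WebApplications\Microsoft.WebApplication.targets') And '$(VSToolsPath)' != ''" />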

There are a couple lessons learned here:

  1. If it’s at all possible, try to ensure your build machines are running the same tool versions as your development environments.
  2. Failing that, it IS possible to force a build to use a packaged version of newer targets and DLLs than what is installed on your build machine… as long as you’re willing to hack around in the project files to make it happen.

Working in client environments can present unique challenges, but with the right tooling and a willingness to work “off the beaten path” a little, a workable and sustainable solution is (almost) always possible.