Install Latest Node.js and npm in a Docker Container

Install the latest versions of Node.js and npm into a Docker Ubuntu container, with or without root access, and easily update both applications to the latest versions.

Install and Confirm Node and npm

Ubuntu and Node

Recently, I was setting up a new development laptop with Ubuntu 14.10 (Utopic Unicorn). As part of the setup, I needed to install several development tools, including Node.js and npm. Researching the current recommendations for installing Node.js and npm on Ubuntu, I found that the traditional ‘apt-get’ command does not always install the latest versions of either application. Additionally, ‘apt-get’ makes updating those versions difficult.

After a lot of investigation, I created three different snippets of code to install the latest versions of Node.js and npm: version 1 using ‘apt-get install’, version 2 without using ‘apt-get’, and version 3 without using ‘apt-get’ or requiring ‘sudo’ for npm (not recommended). There is some debate about the use of sudo with some earlier versions of npm.
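
For illustration, here is a minimal, hedged sketch of the second approach: building Node.js from source, bypassing the outdated ‘apt-get’ packages (npm ships with Node.js). The version number is an assumption based on what was current at the time; the actual Gists are authoritative.

# Build prerequisites (v2 still uses apt-get for these, just not for Node.js itself)
sudo apt-get update
sudo apt-get install -y build-essential curl python

# Download, build, and install Node.js from source (hypothetical version number)
NODE_VERSION=v0.10.33
curl -O "http://nodejs.org/dist/${NODE_VERSION}/node-${NODE_VERSION}.tar.gz"
tar -xzf "node-${NODE_VERSION}.tar.gz"
cd "node-${NODE_VERSION}"
./configure
make
sudo make install

# Confirm the installed versions of both applications
node -v && npm -v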

Some of my code came from Isaac Z. Schlueter’s series of installation Gists, and a post on StackOverflow by Pascal Hartig. Joyent and others recommended Isaac’s Gists for installing earlier versions of Node.js and npm. Other code was found in posts by DigitalOcean.

Docker

Docker containers and virtual machines (VMs) are ideal platforms for developing and testing applications locally. I often create Docker containers or Oracle VirtualBox VMs to install and test new applications before deploying them to my development machines. To test this code, I created three separate Docker containers, based on the official Ubuntu 14.04 base image from Docker Hub. I executed each version of the code within its own container, to make sure it worked properly, before using version 2 (v2) on my laptop.

Displaying Docker Ubuntu Image and Containers

GitHub Gists

The three versions of install scripts basically do the following (a sketch of the initial container setup follows the list):

  • Creates an Ubuntu Docker container (first gist)
  • Updates the container’s version of Ubuntu
  • Installs the software required to install Node.js
  • Creates a new test user account within the container
  • Configures the system so Node.js and npm can be used by the test user, without ‘sudo’ (v3)
  • Installs Node.js and npm
  • Installs a few common full-stack JavaScript npm packages
  • Verifies installation locations and contents are correct
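
As referenced above, here is a hedged sketch of the initial container setup; the container and user names are my own placeholders, not those used in the actual Gists.

# Pull the official Ubuntu 14.04 base image and start an interactive container
docker pull ubuntu:14.04
docker run -it --name node_install_test ubuntu:14.04 /bin/bash

# Inside the container: update Ubuntu and create a non-root test user
apt-get update && apt-get -y upgrade
apt-get install -y curl
adduser testuser
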
Installing Node, npm, and New User Account

Installing and Verifying npm Packages


Software Delivery: Evaluating Risk within the Enterprise

As a software environment evolves from separate applications into an enterprise, how does increasing complexity raise the potential risk of delivering less-than-reliable software?

Cover Drawing

Introduction

There are many vendor whitepapers, industry publications, blog posts, podcasts, and e-books, extolling the best practices in software development and delivery. Best practices include industry-standard concepts, such as Agile, DevOps, test automation, continuous integration, and continuous delivery. Generally, these best practices all strive to improve the process of delivering software enhancements and bug fixes to customers.

Rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. – Wikipedia

Most of these learning resources present one of two idealized software environments. I term them the ‘applications as islands’ environment and the ‘utopian enterprise’ environment. I am also often guilty of tailoring my blog posts to one of these two idealized environments. Neither environment models the typical enterprise software environments in which many of us work.

Applications as Islands

The first idealized software environment is one of isolated application stacks. These environments have multiple application stacks, each of which could include web, mobile, and desktop components, services, data sources, utilities and scripts, messaging and reporting components, and so forth. Nonetheless, each application stack is completely isolated from the other application stacks, within the same environment.

The Utopian Enterprise

The second idealized software environment is the utopian-like enterprise. These environments have multiple application stacks with multiple shared components. However, they are built using consistent and modern architectural patterns and compatible technology stacks. They are designed from the ground up to be compartmentalized, scalable, and highly risk-tolerant to changes. They often avoid the challenges of monolithic legacy applications. The closest things in the real world are probably industry trendsetters, such as Facebook, Etsy, Amazon, and Twitter. We all probably wish we could evolve our own software environments into one of these Utopias.

Complexity and Risk

As an organization continues to evolve its software, it naturally increases the overall complexity, and thereby the challenge of effectively delivering reliable and performant software. In this post, I will explore the challenges of software delivery as a software environment grows in complexity. Specifically, I will focus on how to evaluate the level of risk based on software changes made to various components within the software environment.

Sensitivity and Impact

As we examine the level of risk introduced by software changes within the environment, two aspects of risk are inescapable: sensitivity and impact. Sensitivity will be defined as the potential degree to which one component, such as an application, service, or data source, is affected by changes to other components within the same software environment. How sensitive is ‘Application A’ to changes made to other components within the same software environment, on which ‘Application A’ is directly or indirectly dependent?

Impact will be defined as the potential effect a component’s changes have on other components within the software environment. Teams tend to only evaluate the impact of changes on the immediate component or application stack. They do not sufficiently consider how those changes impact those components that are directly and indirectly dependent on them. What level of impact do changes to ‘Service B’ have on all other components within the software environment that are directly and indirectly dependent on ‘Service B’?

Notice I use the word ‘potential’. Any change has the potential to introduce risk. The level of risk varies, based on the type and volume of changes. A few simple changes should have a low potential for impact, as opposed to a high number of changes, or more complex changes. For example, changing an internal error message logged by a particular service operation should present a very low risk, as opposed to rewriting that operation’s complex algorithm for calculating a customer’s creditworthiness. The potential impact of those two types of changes on dependent components varies greatly.

Measuring Risk

For both sensitivity to change and impact of change, I will use a color-coded scale to subjectively assign a level of potential risk to each component within a given software environment. The scale ranges from ‘Low’, to ‘Moderate’, to ‘High’, to ‘Very High’. Using the scale, it is possible to ‘heat map’ a software environment, based on the level of risk from changes.

Independent Aspects of Risk

Sensitivity and impact are two independent aspects of risk. Changes to one component may have a ‘Low’ potential impact on all other components within the environment, while at the same time, that same component may have a ‘High’ sensitivity to changes made to other components within the environment. Alternatively, a component may have a ‘Very High’ risk for potential impact on multiple components within the environment, and at the same time a ‘Low’ potential sensitivity to changes made to other components. Sensitivity and impact do not parallel each other.

Growing Complexity

Let’s look at how sensitivity and impact change as we increase the software environment’s complexity. In the first example, we will look at one of the two environments I described earlier, individual isolated applications. Applications may have their own web and mobile components, SOAP and RESTful services, data sources, utilities, scripts, scheduled tasks, and so forth. However, the applications do not depend on each other or components outside their own immediate application stack; the applications are self-contained.


When making changes in this type of environment, the real potential impact is to the overall stability, security, and performance of the individual applications themselves. As long as they are in isolation, the applications will have no impact on each other. Therefore, each application’s potential sensitivity to changes, and its impact on other applications, is ‘Low’.

Shared Components

A slightly more complex example is a software environment in which one or more applications have a dependency on a component outside their immediate application stack. For example, a healthcare provider develops a Windows-based application to track their employees’ work schedules (Application A). In addition, they develop a web application to track patient appointments (Application B). Lastly, they offer a client-facing mobile application for patients to track personal fitness and nutrition goals (Application C). Applications B and C share a common set of services and a database for managing patient data.

Software changes made to Applications A, B, and C should have no effect on other components within the software environment. However, Applications B and C are potentially impacted by changes made to either the Services Layer or the Data Layer. The Services Layer has a ‘High’ potential impact within the software environment. Lastly, the Data Layer should not be directly impacted by changes made to the Services Layer or the Applications. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B and C. Therefore, the Data Layer’s potential impact on other dependent components within the environment is ‘Very High’.

Multiple Shared Components

An even more complex example is a software environment in which multiple applications have one or more dependencies on multiple components outside their immediate application stack (many-to-many).

Take for example, a small financial institution. They have a ‘legacy’ COBOL-based application for managing their commercial mortgage business (Application A). They also have an older J2EE-based application, which they acquired through a business merger, for managing their commercial banking relationships (Application B). Next, they have a relatively new Java EE-based investment banking application to manage their retail customers (Application C). Lastly, they have a web-based, client-facing application for secure, online retail banking (Application D).

Since both Application A and B serve commercial clients, it is necessary to send financial data between the two application stacks. Since both applications are built on different, older technologies, the development team built a Custom Messaging Middleware component to connect the two applications. The Custom Messaging Middleware component receives, transforms, and delivers messages between the two applications.

Changes made to Applications C and D should have no impact on other components within the software environment. However, changes made to either Application A or B have the potential to indirectly affect the ability to successfully communicate with the other application, via the Custom Messaging Middleware. Changes to the Custom Messaging Middleware have the potential to affect both Applications A and B. The Custom Messaging Middleware has a ‘Moderate’ potential sensitivity to risk, versus ‘Low’, because one could argue that changes to either Application A or Application B’s messaging format could impact the Custom Messaging Middleware’s ability to properly process that application’s messages and successfully deliver them to the opposite application.

Applications B, C, and D have a direct dependency on the Services Layer, and indirectly on the Data Layer. Therefore, the potential impact of changes to the Services Layer on other components is arguably higher than in the last example. The Services Layer’s potential impact on other components is ‘Very High’.

Since Application B has a direct dependency on both the Custom Messaging Middleware and the Services Layer, it has a higher sensitivity to changes than the other three applications. Application B’s potential sensitivity to changes by other components is ‘Very High’.

Changes made to the Services Layer or the Applications will not affect the Data Layer. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B, C, and D. Therefore, the Data Layer’s potential impact on the software environment is ‘Very High’.

Small Enterprise

The last example of increasing complexity is an environment in which even more applications are dependent on even more components. Additionally, there may be different types of components, such as a common UI and third-party APIs, which only increase the complexity of the dependencies. Although this example is nowhere near as complex as many enterprise software environments, it does begin to reflect their intricate, interdependent structure.

Let’s use an example of a large web-based retailer. The retailer has a standalone ERM application for managing their wholesale purchasing and product distribution (Application A). Next, they have their primary client-facing storefront (Application B). They also have a separate application to handle customer accounts (Application C). Lastly, they have an application that manages their online media retail business and media storage (Application D).

In addition to the Common Services Layer, Common Data Layer, and Custom Messaging Middleware, as seen in earlier examples, the retailer has two other components in their environment, a Common Web User Interface (UI) and a Web API. The Web UI provides the customer with a seamless branded experience, no matter which application they use – Application B, C, or D. The customer enters the Common Web UI and has all three application’s features seamlessly available to them.

The retailer also exposes a RESTful Web API for its marketing affiliates. Third parties can develop a variety of applications that drive sales to the retailer, in return for a sales commission.

In the earlier examples, individual applications had separate points of entry. However, in this example, the Common Web UI provides a single point of entry for users of Applications B, C, and D. Having a single point of entry also introduces a single point of failure for all three applications. Thus, the potential risk to the retailer and their customers is much greater. The Common Web UI’s potential impact on other components is ‘Very High’.

A single point of entry also introduces a single point of failure.

The potential sensitivity of the Common Web UI to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. Additionally, one could argue, since the Common Web UI displays the three Applications, it is also sensitive to changes made by those applications. If one of those applications becomes impaired due to a bad change, that application could affect the Web UI’s functionality. The Common Web UI’s potential sensitivity to change is ‘High’.

The Web API is similar to the Common Web UI, in terms of potential sensitivity and impact. The potential impact of changes to the Web API is ‘Very High’, since a defect there could result in the potential impairment of the retailer’s affiliate applications. The potential sensitivity of the Web API to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. The Web API’s potential sensitivity to change is ‘High’. There is very little chance of potential impact to the Web API from the retailer’s affiliate applications.

Impact of Key Components

Lastly, as systems grow in complexity, certain components often become so key that they have the potential to impact the entire environment, a true single point of failure. Below, note the potential impact of changes to the Common Services Layer on all other components. As the software environment has grown in complexity, the Common Services Layer sits at the heart of the system. The Services Layer has multiple components directly dependent on it (i.e. Application C), as well as other components indirectly dependent on it (i.e. Third-Party Applications). It is also the only point of access to and from the Common Data Layer.

There are steps organizations can take to mitigate the potential risk caused by changes to key components, like the Services Layer. Areas organizations commonly focus on to reduce risk are higher code quality, increased test coverage, and improved performance, fault tolerance, system redundancy, and rollback capabilities. Additionally, management should more thoroughly scrutinize proposed software changes to key components, balancing new features with the need for stability, availability, and performance.

Management must balance the need for new features with the need for stability, availability, and performance.

Specific to services, organizations often look to decouple larger services, creating smaller, more focused services. Better separation of concerns increases the likelihood that potential impairments caused by code defects are isolated to a smaller subset of functionality.

Conclusion

In this brief post, we examined one aspect of potential risk to delivering reliable software: the sensitivity and impact of software changes. There are many other sources of risk involved with delivering reliable software. They include training, communication, planning, documentation, system infrastructure, and development and release management tooling. Once all sources of risk are identified and quantified, the overall level of risk to delivering reliable software can be assessed, and steps taken to reduce the potential impact.


Managing Windows Servers with Chef, Book Review

Harness the power of Chef to automate management of Windows-based systems using hands-on examples.

Managing Windows Servers with Chef

Recently, I had the opportunity to read ‘Managing Windows Servers with Chef’, authored by John Ewart, and published in May 2014 by Packt Publishing. At a svelte 110 pages in paperback form, ‘Managing Windows Servers with Chef’ is a quick read, packed with concise information, relevant examples, and excellent code samples. Available on Packt Publishing’s website for a mere $11.90 for the ebook, it is a worthwhile investment for anyone considering Chef Software’s Chef product for automating their Windows-based infrastructure.

As an IT professional, I use Chef for both Windows and Linux-based IT automation on a regular basis. In my experience, there is a plethora of information on the Internet about properly implementing and scaling Chef. There is seldom a topic I can’t find the answers to online. However, it has also been my experience that the information is often Linux-centric. That is one reason I really appreciated Ewart’s book, which concentrates almost exclusively on Windows-based implementations of Chef.

IT professionals just getting started with Chef, or migrating from Puppet, will find ‘Managing Windows Servers with Chef’ invaluable. Ewart does a good job building the user’s understanding of the Chef ecosystem before beginning to explain its application to a Windows-based environment. If you are considering Chef versus Puppet Labs’ Puppet for Windows-based IT automation, reading this book will give you a solid overview of Chef.

Seasoned users of Chef will also find ‘Managing Windows Servers with Chef’ useful. Professionals quickly master the Chef principles, and develop the means to automate their specific tasks with Chef. But inevitably, there comes the day when they must automate something new with Chef. That is where the book can serve as a handy reference.

Of all the book’s topics, I especially found value in Chapter 5 (Managing Cloud Services with Chef) and Chapter 6 (Going Beyond the Basics – Testing Recipes). Even large enterprise-scale corporations are moving infrastructure to cloud providers. Ewart demonstrates Chef’s Windows-based integration with Microsoft’s Azure, Amazon’s EC2, and Rackspace’s Cloud offerings. Also, Ewart’s section on testing is a reminder to all of us of the importance of unit testing. I admit I more often practice TAD (‘Testing After Development’) than TDD (Test Driven Development), LOL. Ewart introduces both RSpec and ChefSpec for testing Chef recipes.

I recommend ‘Managing Windows Servers with Chef’ for anyone considering Chef, or who is seeking a good introductory guide to getting started with Chef for Windows-based systems.

 


Data-Driven Forms with AngularJS’s Two-Way Data Binding and Custom Directives

Use the two-way data binding and custom directives features of AngularJS to develop data-driven, interactive forms.

Introduction

AngularJS has exploded onto the web-application development scene. Since being introduced in 2009, AngularJS’s use has grown exponentially. Its wide range of features and ease of use make it an ideal tool for rapidly developing modern web-applications. Combined with other modern JavaScript tools, such as Node, Express, Twitter Bootstrap, Yeoman, and NoSQL databases such as MongoDB, AngularJS developers can create robust, full-stack JavaScript applications.

A primary feature of AngularJS is two-way data binding. According to AngularJS’s website, ‘data-binding is the automatic synchronization of data between the model and view. The way that Angular implements data-binding lets you treat the model as the single-source-of-truth in your application. The view is a projection of the model at all times. When the model changes, the view reflects the change, and vice versa.‘ In the past, developers spent much of their coding time wiring up UI components to the application’s data model. AngularJS has greatly simplified this process.
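
As a minimal, hypothetical illustration of two-way binding, typing in the input field below immediately updates the ‘name’ model property, and the expression in the heading re-renders to match.

<div ng-app>
  <!-- ng-model binds the input to the 'name' model property -->
  <input type="text" ng-model="name" placeholder="Enter your name"/>
  <!-- the view is a projection of the model; it updates as the user types -->
  <h4>Hello, {{name}}!</h4>
</div>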

Another key feature of AngularJS is directives. At a high level, according to AngularJS’ site, ‘directives are markers on a DOM element (such as an attribute, element name, comment or CSS class) that tell AngularJS’s HTML compiler to attach a specified behavior to that DOM element or even transform the DOM element and its children.‘ AngularJS provides many built-in directives, including ngModel, ngBind, ngInclude, ngRepeat, and ngChange. These directives are the building blocks of an AngularJS application. We will use many of these built-in directives in this post.

In addition to built-in directives, AngularJS allows us to create custom directives. Custom directives are a powerful feature, allowing us to encapsulate our own reusable DOM manipulation functionality.

The Sample Project

There is an infinite variety of web-based forms (‘electronic forms’). We interact with web-based forms at work, at home, and at school. Forms serve the primary purpose of collecting data from the user. Web-based forms allow us to order products and services over the internet, file our taxes, manage our benefits at work, track our time, and take online classes.

Tests or quizzes are a perfect example of web-based forms to demonstrate AngularJS’s many strengths, including data-binding and custom directives. In this post, we will create a series of interactive quizzes on the theme of AngularJS – sort of a learning opportunity inside a learning opportunity. Quizzes often contain several common types of question/answer formats, including true-false, multiple-choice, multiple-correct, ordering, matching, short-answer, essay, and so forth. These question/answer formats take advantage of all the HTML form elements, including radio buttons, check-boxes, text fields, drop-down lists, list boxes, and text areas. We will build the quizzes from static JSON data files, using AngularJS’s services, controllers, routes, views, templates, directives, and custom directives.

In the first example, we will use AngularJS’s factory service, controller, partial templates, view, routing, and built-in directive features to read JSON data from a file, and display and validate a basic true-false quiz. In the second example, we will expand our true-false quiz to contain additional types of questions, including multiple-choice and multiple-correct. For the advanced quiz, we will make use of custom directives and partial view templates. These two new features will allow us to increase the quizzes’ complexity without substantially increasing the complexity of the code we need to write.

Installing and Configuring the Project

This post’s project is available on GitHub. The easiest way to obtain all the source code is to clone the project with Git. Once you have cloned the project, don’t forget to install the npm and bower packages. All commands are shown below. The minimum requirements for the project are to have Bower, Grunt, npm, and Git installed.

git clone https://github.com/garystafford/angular-quiz.git
cd angular-quiz
npm install
bower install

Alternately, if you are experienced building JavaScript applications with the scaffolding tool, yo, you can create a new project and recreate the code yourself. To use generator-angular’s code generators, you will need yo installed, in addition to Bower, Grunt, npm, and Git. Since this post’s project is based on Yeoman’s generator-angular, you can use npm to install Yeoman’s generator-angular. Afterwards, using generator-angular’s available code generators, you can easily reproduce the post’s basic project structure.

npm install -g generator-angular

# Use generator-angular code generators to create project components
# Instructions here: https://github.com/yeoman/generator-angular
mkdir quiz-app && cd $_
yo angular quiz
yo angular:route quizAdvanced
yo angular:factory quizAdvancedFactory
yo angular:directive quizTrueFalseDirective
Using yo with generator-angular to Set-up a New Application

Using yo with generator-angular to Create New Components

If you used the generator-angular code generator to create the project yourself, using the above instructions, your module will be called ‘quizApp’. The application name, found in the ‘package.json’ and ‘bower.json’ files, will be ‘quiz’. I changed my project’s module and app names to be more descriptive, along with the names of the routes, factories, directives, and other components. Your names will also vary slightly if you use the code generators.

Also, if you used the generator-angular code generator to create the project yourself, you may need to install a few additional npm and bower packages, not part of the generator-angular project, to reproduce this post’s project exactly.

Project Structure

The project structure follows the generator-angular format. Most core application files are kept in the ‘app’ folder. This post’s project adds the ‘app/data’ folder, which holds the quiz data, and the ‘app/scripts/partials’ folder, which holds the partial view templates for the custom directives (explained later).

Project View from WebStorm 8

Starting the Project

The project is started using the ‘grunt serve‘ command. Using the grunt server, the project will be hosted on ‘localhost’, port 9000, by default. This can be changed to a specific hostname or IP address by editing the ‘Gruntfile.js’ file’s ‘connect‘ task.
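
For reference, the relevant portion of a typical generator-angular ‘Gruntfile.js’ looks something like the excerpt below; the exact options in this project may differ.

// Excerpt of Gruntfile.js (hypothetical): the 'connect' task controls
// where 'grunt serve' hosts the application
connect: {
  options: {
    port: 9000,
    // change 'localhost' to '0.0.0.0' or a specific IP to allow outside access
    hostname: 'localhost',
    livereload: 35729
  }
}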

Testing the Project

There are some basic tests created using Karma, the test runner for JavaScript. These tests are run using the ‘grunt test‘ command. Tests are set to run on port 8092, using the PhantomJS web browser. PhantomJS, if you’re not familiar, is a headless WebKit scriptable with a JavaScript API. PhantomJS is ideal for use with Continuous Integration servers, such as Travis CI. If you do not have PhantomJS installed, and plan to run the tests, change the ‘browsers‘ property in the ‘karma.conf.js’ file, located in the project’s root directory. Chrome is a good alternative for local testing. Test results for this GitHub project can be reviewed on Travis CI.
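
A hedged sketch of the relevant ‘karma.conf.js’ settings is shown below; the generated file will contain additional options.

// Excerpt of karma.conf.js (hypothetical): swap PhantomJS for Chrome
// if PhantomJS is not installed locally
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    port: 8092,
    browsers: ['PhantomJS'] // change to ['Chrome'] for local testing
  });
};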

Creating a complete set of unit tests for the advanced quiz proved challenging, based on its nested, partial view templates, described in the Advanced Quiz section. I may add a more complete set of unit tests in the future.

Basic Quiz

The first quiz is a six-question, basic true-false format form. The user answers all six questions, and then pushes a button to display the results.

Basic Quiz Before User Input

Basic Quiz With User Input

The basic quiz uses a single controller (quizBasicController.js), a single factory service (quizBasicFactory.js), a single route (apps.js – ‘/quizBasic’), and a single partial view template (quiz-basic.html), in addition to the main layout (index.html). All these components are part of the ‘quizModule’ AngularJS module. I’ve attempted to illustrate these relationships in the diagram, below.
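
Although the route configuration is not shown elsewhere in this post, a minimal sketch of the ‘/quizBasic’ route, following the generator-angular routing convention of the period, might look like this (the template path is an assumption):

// Hypothetical sketch of the '/quizBasic' route in apps.js
angular.module('quizModule')
  .config(function ($routeProvider) {
    $routeProvider
      .when('/quizBasic', {
        templateUrl: 'views/quiz-basic.html',
        controller: 'QuizBasicController'
      })
      .otherwise({
        redirectTo: '/'
      });
  });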

The factory service (quizBasicFactory.js) uses $resource, a service in AngularJS’s ngResource module, to load the contents of a local JSON-format file (quiz-basic.json).

angular.module('quizModule')
  .factory('quizBasicFactory', function ($resource) {
    return $resource('./data/quiz-basic.json');
  });

{
  "name":      "Basic Quiz Example",
  "questions": [
    {
      "_id":      1,
      "question": "AngularJS is a declarative programming language.",
      "answer":   true
    },
    {
      "_id":      2,
      "question": "The acronym 'SPA' stands for Single-Page Application.",
      "answer":   true
    },
    {
      "_id":      3,
      "question": "AngularJS is written in C++.",
      "answer":   false
    }
    ...
  ]
}

The controller (quizBasicController.js) calls the factory service (quizBasicFactory.js), which returns the ‘data’ object.

angular.module('quizModule')
  .controller('QuizBasicController',
  function ($scope, quizBasicFactory) {
    var createResults;
    $scope.title = null; // quiz title
    $scope.quiz = {}; // quiz questions
    $scope.results = []; // user results

    quizBasicFactory.get(function (data) {
      $scope.title = data.name;
      $scope.quiz = data.questions;
      createResults();
    });

    // prepare array of result objects
    createResults = function () {
      var len = $scope.quiz.length;
      for (var i = 0; i < len; i++) {
        $scope.results.push({
          _id:        $scope.quiz[i]._id,
          answer:     $scope.quiz[i].answer,
          userChoice: null,
          correct:    null
        });
      }
    };

    // assign and check user's choice
    $scope.checkUserChoice = function (question, userChoice) {
      // assign the user's choice to userChoice
      $scope.results[question - 1].userChoice = userChoice;

      // check the user's choice against the answer
      if ($scope.results[question - 1].answer === userChoice) {
        $scope.results[question - 1].correct = 'Correct';
      } else {
        $scope.results[question - 1].correct = 'Incorrect';
      }
    };

    // only show results if all questions are answered
    $scope.checkQuizCompleted = function () {
      var len = $scope.results.length;
      for (var i = 0; i < len; i++) {
        if ($scope.results[i].userChoice === null) {
          return true;
        }
      }
      return false;
    };
  });
The ‘data’ Object Returned from Factory Service containing JSON Data

Contents of the ‘data’ object are used to populate the ‘$scope.quiz[]’, ‘$scope.title’, and ‘$scope.results[]’ properties. The $scope holds the quiz data ($scope.quiz[]), the quiz title ($scope.title), and the results ($scope.results[]). The ‘$scope.checkUserChoice()’ method stores the user’s choice in the ‘$scope.results[].userChoice’ property, and evaluates whether the answer is correct ($scope.results[].correct). The ‘$scope.checkQuizCompleted()’ method checks to make sure all questions have been answered before showing the results, when the user clicks the ‘Show Results’ button.

The $scope Containing Quiz, Title, and Results Properties

AngularJS bootstraps the application. Through AngularJS’s compiling and linking process, the partial view template (quiz-basic.html), shown below, the controller (quizBasicController.js), and the main layout (index.html) form the ‘/quizBasic’ view, which is presented to the user. Blogger Dag-Inge Aas does a nice job of explaining this process in his post, Understanding template compiling in AngularJS.

<h4 class="title">{{title}}</h4>
<br/>

<!--quiz section-->
<form name="quiz">
  <div ng-repeat="question in quiz">
    <strong>{{question._id}}. {{question.question}}</strong>

    <div class="radio">
      <input required
             name="_id{{question._id}}"
             type="radio"
             ng-value="true"
             ng-model="question.userChoice"
             ng-change="$parent.checkUserChoice(question._id, true)"/>
      <label for="_id{{question._id}}">True</label>
      <br/>
      <input required
             name="_id{{question._id}}"
             type="radio"
             ng-value="false"
             ng-model="question.userChoice"
             ng-change="$parent.checkUserChoice(question._id, false)"/>
      <label for="_id{{question._id}}">False</label>
    </div>
  </div>
</form>

<hr/>

<!--results section-->
<div ng-init="showAnswers=true">
  <button class="btn btn-default btn-sm"
          ng-click="showAnswers=checkQuizCompleted()">
    Show Results
  </button>
  <br/>
  <br/>

  <div ng-hide="showAnswers">
    <strong>Results</strong>

    <div ng-repeat="result in results">
      {{result._id}}. <span
        ng-class="result.correct == 'Correct' ? 'correct' : 'incorrect'">
        {{result.correct}}
      </span>
    </div>
  </div>
</div>

We load all the contents of the JSON data file into $scope and use the ‘ng-repeat‘ directive to iterate over the questions ($scope.quiz[]) and the results ($scope.results[]). Because of this, modifying existing questions and adding new ones is easy. This requires no additional coding, just a change to the JSON data file.

Advanced Quiz

Using all the same basic building blocks as the basic quiz, with the addition of custom directives, we can add complexity to our quiz without a lot of additional coding. This advanced quiz has nine questions: three true-false format, three multiple-choice format, and three multiple-correct format. As the user answers each question, they are presented with the results, either ‘Correct’ or ‘Incorrect’.

Advanced Quiz Before User Input

Advanced Quiz With User Input

Similar to the basic quiz, the advanced quiz uses a single controller (quizAdvancedController.js), factory service (quizAdvancedFactory.js), route (apps.js – ‘/quizAdvanced’), partial view template (quiz-advanced.html), and the main layout (index.html). Additionally, the advanced quiz uses a filter, three custom directives, and four partial view templates. The fourth partial view template, ‘quiz-choice-response.html’, is called by the first three partial view templates. It contains common DOM elements. Like the basic quiz, all these components are part of the ‘quizModule’ module. I’ve attempted to illustrate these relationships in the diagram, below.

Just like with the basic quiz, the factory service (quizAdvancedFactory.js) uses $resource to load the contents of a local JSON-format file (quiz-advanced.json). This time, however, the JSON file contains three types of questions, each with a slightly different schema. The three different question types are shown in the code snippet below. The true-false questions have a boolean value as the answer, the multiple-choice questions an integer, and the multiple-correct questions an array of integers.

angular.module('quizModule')
  .factory('quizAdvancedFactory', function ($resource) {
    return $resource('./data/quiz-advanced.json');
  });

{
  "name":      "Advanced Quiz Example",
  "questions": [
    {
      "_id":      1,
      "question": "AngularJS is written completely in JavaScript.",
      "type":     "True-false",
      "answer":   true
    },
    {
      "_id":      4,
      "question": "What does the acronym 'MVC' stand for?",
      "type":     "Multiple choice",
      "choices":  [
        {
          "_id":    1,
          "choice": "Method, Variable, Constant"
        },
        {
          "_id":    2,
          "choice": "Module, View, Constraint"
        },
        {
          "_id":    3,
          "choice": "Model, View, Controller"
        },
        {
          "_id":    4,
          "choice": "None of the above"
        }
      ],
      "answer":   3
    },
    {
      "_id":      7,
      "question": "Which of the following are associated with AngularJS?",
      "type":     "Multiple correct",
      "choices":  [
        {
          "_id":    1,
          "choice": "Controller"
        },
        {
          "_id":    2,
          "choice": "Interface"
        },
        {
          "_id":    3,
          "choice": "Route"
        },
        {
          "_id":    4,
          "choice": "View"
        },
        {
          "_id":    5,
          "choice": "Model"
        },
        {
          "_id":    6,
          "choice": "Generator"
        },
        {
          "_id":    7,
          "choice": "Service"
        },
        {
          "_id":    8,
          "choice": "Node"
        }
      ],
      "answer":   [1, 3, 4, 5, 7]
    }
    ...
  ]
}

The controller (quizAdvancedController.js), calls the factory service (quizAdvancedFactory.js), which returns the ‘data’ object, just like in the basic quiz example.

angular.module('quizModule')
  .controller('QuizAdvancedController',
  function ($scope, quizAdvancedFactory, filterFilter) {
    var createResults;
    $scope.title = null; // quiz title
    $scope.quiz = {}; // quiz questions
    $scope.results = []; // user results

    quizAdvancedFactory.get(function (data) {
      $scope.title = data.name;
      $scope.quiz = data.questions;
      createResults();
    });

    // prepare array of result objects
    createResults = function () {
      var len = $scope.quiz.length;
      for (var i = 0; i < len; i++) {
        $scope.results.push({
          _id:        $scope.quiz[i]._id,
          answer:     $scope.quiz[i].answer,
          userChoice: null,
          correct:    null
        });
      }
    };

    // used for multiple correct type questions
    $scope.checkUserMultiCorrectChoice = function (question, userChoice) {
      // create blank array
      if ($scope.results[question - 1].userChoice === null) {
        $scope.results[question - 1].userChoice = [];
      }

      // find the choice; if not present, add it; if present, remove it
      var pos = $scope.results[question - 1].userChoice.indexOf(userChoice);
      if (pos < 0) {
        $scope.results[question - 1].userChoice.push(userChoice);
      } else {
        $scope.results[question - 1].userChoice.splice(pos, 1);
      }

      // check the user's choice against the answer
      var answer = JSON.stringify($scope.quiz[question - 1].answer.sort());
      var choice = JSON.stringify($scope.results[question - 1].userChoice.sort());

      if (answer === choice) {
        $scope.results[question - 1].correct = true;
      } else {
        $scope.results[question - 1].correct = false;
      }
    };

    // used for multiple choice and true-false type questions
    $scope.checkUserChoice = function (question, userChoice) {
      // assign the user's choice to userChoice
      $scope.results[question - 1].userChoice = userChoice;

      // check the user's choice against the answer
      if ($scope.results[question - 1].answer === userChoice) {
        $scope.results[question - 1].correct = true;
      } else {
        $scope.results[question - 1].correct = false;
      }
    };

    // find a specific question
    $scope.filteredQuestion = function (questionId) {
      return filterFilter($scope.quiz, {_id: questionId});
    };
  });

For true-false and multiple-choice questions, the ‘$scope.checkUserChoice()’ method stores the user’s choice in the ‘$scope.results[].userChoice’ property. The method also evaluates whether the answer is correct, and stores that value in the ‘$scope.results[].correct’ property. The method takes two input parameters: the question id and the user’s choice.

For multiple-correct questions, the ‘$scope.checkUserMultiCorrectChoice()’ method does the same. The difference is that, for multiple-correct questions, the method stores both the multiple answers and the multiple user choices in a pair of arrays, the ‘$scope.results[].answer[]’ and ‘$scope.results[].userChoice[]’ object arrays. In addition to storing the user’s choices, the method removes choices the user deselects in the view.

Lastly, the ‘$scope.checkUserMultiCorrectChoice()’ method evaluates the user’s choices array against the correct answers array. In the example below, note the ‘$scope.results[6].answer[]’ array and the ‘$scope.results[6].userChoice[]’ array. They were determined to be equal by the ‘$scope.checkUserMultiCorrectChoice()’, and reflected in the ‘true’ value of the ‘$scope.results[6].correct’ property.

Advanced Quiz Results for Multiple-Correct Question

Filter

In the ‘quizAdvancedController.js’ controller, note the ‘filterFilter’ object injected into the controller’s main function. At the end of the controller, also note the ‘$scope.filteredQuestion(questionId)’ method.

angular.module('quizModule')
  .controller('QuizAdvancedController',
  function ($scope, quizAdvancedFactory, filterFilter) {
    ...
    // find a specific question
    $scope.filteredQuestion = function (questionId) {
      return filterFilter($scope.quiz, {_id: questionId});
    };
  });

The ‘$scope.filteredQuestion(questionId)’ method takes a question id as an input parameter, and returns that single question. The ‘$scope.filteredQuestion(questionId)’ method actually returns a call to Angular’s built-in ‘filter’ filter (injected as ‘filterFilter’). It takes two parameters: an array containing the entire set of questions (the ‘$scope.quiz’ array), and a ‘pattern object’ containing the specific id to filter on (‘{_id: questionId}’).

The filter method is called by the three question-type partial view templates, such as ‘quiz-multi-choice.html’. For example, the partial view template, ‘quiz-advanced.html’, uses the ‘quiz-multichoice’ element to call the custom directive, ‘quizMultiChoiceDirective.js’, passing it a request for question id 4.

<h4 class="title">{{title}}</h4>
<br/>
<form name="quiz">
  <!--true-false-->
  <quiz-truefalse filter-by="1"></quiz-truefalse>
  <quiz-truefalse filter-by="2"></quiz-truefalse>
  <quiz-truefalse filter-by="3"></quiz-truefalse>

  <!--multi-choice-->
  <quiz-multichoice filter-by="4"></quiz-multichoice>
  <quiz-multichoice filter-by="5"></quiz-multichoice>
  <quiz-multichoice filter-by="6"></quiz-multichoice>

  <!--multi-correct-->
  <quiz-multicorrect filter-by="7"></quiz-multicorrect>
  <quiz-multicorrect filter-by="8"></quiz-multicorrect>
  <quiz-multicorrect filter-by="9"></quiz-multicorrect>
</form>

The custom directive, ‘quizMultiChoiceDirective.js’, loads the partial view template, ‘quiz-multi-choice.html’, using the ‘templateUrl’ argument. The ‘templateUrl’ argument uses ajax to load the template. The template, ‘quiz-multi-choice.html’, uses the ‘ng-repeat‘ directive to populate its section of the advanced quiz with question id 4 (div ng-repeat="question in $parent.filteredQuestion(filterBy)"). It does so by calling filteredQuestion(4), in the ‘quizAdvancedController.js’ controller.
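
A minimal sketch of such a directive follows; the actual ‘quizMultiChoiceDirective.js’ lives in the GitHub repository, and the exact options here are assumptions.

// Hypothetical sketch of quizMultiChoiceDirective.js
angular.module('quizModule')
  .directive('quizMultichoice', function () {
    return {
      restrict: 'E',            // matched as the <quiz-multichoice> element
      scope: { filterBy: '@' }, // isolate scope; reads the 'filter-by' attribute
      templateUrl: '/scripts/partials/quiz-multi-choice.html'
    };
  });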

<div ng-repeat="question in $parent.filteredQuestion(filterBy)">
  <strong>{{question._id}}. {{question.question}}</strong>
  <div class="radio" ng-repeat="choice in question.choices">
    <input
        name="_id{{question._id}}"
        type="radio"
        value="{{choice._id}}"
        ng-model="question.userChoice"
        ng-change="$parent.$parent.$parent.checkUserChoice(question._id, choice._id)"/>
    <label for="_id{{question._id}}">{{choice.choice}}</label>
  </div>
  <div ng-include src="'/scripts/partials/quiz-choice-response.html'"></div>
</div>
<br/>

The ‘quiz-multi-choice.html’ template also loads the contents of the ‘quiz-choice-response.html’ template. This template contains DOM elements common to all three question-type templates.

<div ng-if="$parent.$parent.$parent.results[question._id - 1].correct"
     class="result correct">
  <span class="glyphicon glyphicon-thumbs-up"></span>
  Correct!
</div>
<!--specify 'false' because not true (!) would include null (blank)-->
<div ng-if="$parent.$parent.$parent.results[question._id - 1].correct === false"
     class="result incorrect">
  <span class="glyphicon glyphicon-thumbs-down"></span>
  Incorrect
</div>

I have attempted to illustrate the filter in the diagram, below. I intentionally left out a few non-essential components to simplify the diagram, such as the main layout, config, route, service, other custom directives, and the JSON data file.

Using these techniques, we can easily extend the quiz, adding new answer types, such as ordering, matching, short-answer, and so forth.

Managing Scope

If you are familiar with AngularJS, you should understand how scope works. You should know there is more than one scope, and that scope is normally inherited from the parent scope. Directives such as ng-repeat, ng-switch, ng-view, and ng-include all create their own child scopes. Said better by AngularJS’s team, ‘in AngularJS, a child scope normally prototypically inherits from its parent scope. One exception to this rule is a directive that uses scope: { … } — this creates an isolate scope that does not prototypically inherit.‘ We use a number of directives. We also use ‘scope:’ within our custom directives for the advanced quiz example, which breaks the chain of inheritance.

In some of the code examples in this post, you will notice the use of ‘$parent‘, ‘$parent.$parent‘, or even ‘$parent.$parent.$parent‘, instead of simply ‘$scope‘. Sometimes, it is necessary to reach outside the current scope, to a parent’s scope (‘$parent‘), or that parent’s parent’s scope (‘$parent.$parent‘). As a simple example of this, in the partial view template, ‘quiz-multi-choice.html’, we call ‘$parent.filteredQuestion(filterBy)‘. The ‘filteredQuestion(filterBy)’ method we need is in the parent of the template’s scope, so we call ‘$parent’ instead of ‘$scope’.

So how can you determine which scope contains the method or properties you are seeking? Batarang, the AngularJS WebInspector Extension for Chrome. Batarang adds an additional ‘AngularJS’ tab to Developer Tools in Chrome. Previously, we were using the example of question id 4 with AngularJS’s filter. Using Batarang, below, we can see question id 4 in the final view. Each question returned using the filter is contained within its own separate scope.

Question #4 in Batarang Models Tab

This example also shows how complex working with AngularJS’s scopes can be. Starting with a particular scope, using Batarang, you can visually move up (parent scope) or down (child scope) within the scope hierarchy. The contents of each scope, the Model, are displayed on the right. Batarang also offers several other features, seen below, including AngularJS application performance and dependency visualization.

Links

Quiz Question Types (presentation)

Understanding Service Types (article)

Understanding Scopes (article)

Build custom directives with AngularJS (article)

Google I/O 2012 – Better Web App Development Through Tooling (YouTube video)


Cloud-based Continuous Integration and Deployment for .NET Development

Create a cloud-based, continuous integration and deployment toolchain for distributed .NET development teams, using GitHub, AppVeyor, and Microsoft Azure.

Introduction

Whether you are part of a large enterprise development environment, or a member of a small start-up, you are likely working with remote team members. You may be remote, yourself. Developers, testers, web designers, and other team members, commonly work remotely on software projects. Distributed teams, comprised of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.

If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis-CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premise tooling, via VPN tunnels and Remote Desktop Connections.

There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools, and will scale to fit most development teams.

Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully-integrated Git support and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third party tools available to manage Git and GitHub repositories on Windows. These include Git Bash (my favorite), Git GUI, and GitHub for Windows.

GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
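
The day-to-day workflow described above amounts to only a few commands; the commit message below is a hypothetical example.

# Commit to the local Git repository, then sync with the hosted GitHub repository
git add .
git commit -m "Refactor Restaurant class library"
git pull origin master   # merge any changes pushed by remote team members
git push origin master   # GitHub prompts for credentials over HTTPS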

GitHub View of Solution

AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates the building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except AppVeyor is only provided as a SaaS. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor, including CloudBees (hosted Jenkins) and Travis-CI. While CloudBees and Travis-CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.

Identical to GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support.  AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.

AppVeyor View of Latest Build of Solution

Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as with other cloud-based hosting providers that also support the .NET stack, such as AWS or Rackspace.

New Microsoft Azure Portal View of VM

Sample Solution

The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities to communicate with a RESTful WCF service, cross-domains, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.

Revised Restaurant Menu Demo Viewed on Android Tablet

The updated VS Solution contains the following four Projects:

  1. Restaurant – C# Class Library
  2. RestaurantUnitTests – Unit Test Project
  3. RestaurantWcfService – C# WCF Service Application
  4. RestaurantDemoSite – Web Site (JS/HTML5)
VS 2013 View of Solution

The Visual Studio Solution Explorer tab, here, shows all projects contained in the Solution, and the primary files and directories they contain.

As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations: one that returns the menu (‘GetCurrentMenu’), and another that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.
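
For illustration, below is a hedged sketch of what the WCF service contract might look like; ‘MenuItem’ and ‘Order’ stand in for the repository’s actual type names and attributes.

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical sketch of the RESTful WCF service contract
[ServiceContract]
public interface IRestaurantService
{
    [OperationContract]
    [WebGet(UriTemplate = "GetCurrentMenu", ResponseFormat = WebMessageFormat.Json)]
    List<MenuItem> GetCurrentMenu();

    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "SendOrder", RequestFormat = WebMessageFormat.Json)]
    void SendOrder(Order order);
}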

Installing and Configuring the Solution

The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the scripts. For reference, the contents of several of the supplied scripts are shown below; you should use the supplied scripts rather than retyping them.

# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
  "{YOUR HOSTNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
  "{YOUR USERNME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
  "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

Cloud-Based Continuous Integration and Delivery

Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. In order for AppVeyor to work with GitHub, we use a webhook. Webhooks are widely used to communicate events between systems, over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.’ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, and building, testing, and deploying the Projects. Below is an example of an AppVeyor webhook, in GitHub.

GitHub’s AppVeyor Webhook Configuration
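AppVeyor registers this webhook automatically when you add the project, so no manual set-up is required. Still, the GitHub API makes the mechanics easy to see. The sketch below, which lists a repository’s webhooks and creates a new one, is illustrative only: the token, owner/repo, and payload URL are all placeholders.

# Illustrative only: inspect and create repository webhooks via the GitHub API.
# {YOUR TOKEN}, {OWNER}, {REPO}, and the payload URL are placeholders.
$headers = @{ Authorization = "token {YOUR TOKEN}" }
$hooksUri = "https://api.github.com/repos/{OWNER}/{REPO}/hooks"

# List the repository's existing webhooks
Invoke-RestMethod -Uri $hooksUri -Headers $headers

# Create a webhook that fires an HTTP POST on every push
$hook = @{
  name   = "web"
  active = $true
  events = @("push")
  config = @{ url = "https://ci.example.com/webhooks/{PROJECT ID}"; content_type = "json" }
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri $hooksUri -Headers $headers -Method Post -Body $hook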

Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is committed to GitHub, AppVeyor will clone and build the Solution, followed by running the set of unit tests shown below. The Project’s unit tests exercise the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:

Code Coverage Results for Restaurant Class Library

AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console runs the tests in the unit test Project’s assembly (‘restaurantunittests.dll’). As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.

AppVeyor Running Automated Unit Tests Using VSTest.Console

VSTest.Console has several command-line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which pre-defined test sets to run; test sets need to be defined in the Solution, in advance. A few of the more useful switches are shown below.
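The following is a sketch; the assembly path reflects the ‘AppVeyor’ build configuration used in this post, and the ‘Menu’ test category is hypothetical.

# Run all tests in the unit test assembly
vstest.console.exe RestaurantUnitTests\bin\AppVeyor\restaurantunittests.dll

# Run only tests in a given category, logging results to a .trx file
vstest.console.exe RestaurantUnitTests\bin\AppVeyor\restaurantunittests.dll `
  /TestCaseFilter:"TestCategory=Menu" /Logger:trx

# Run specific tests by (partial) name, in an isolated process
vstest.console.exe RestaurantUnitTests\bin\AppVeyor\restaurantunittests.dll `
  /Tests:GetCurrentMenu /InIsolation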

Configuring Azure VM
Before we publish the Solution from AppVeyor to Azure, we need to configure the VM. Again, we can use PowerShell to script most of the configuration; most of the scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy user account, and assigning it in IIS. For reference, the contents of several of the supplied scripts appear below; again, run the supplied scripts from the repository rather than copying them from this post.

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy

# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)

$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600

$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)

Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure. Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments; they are the values stored in the username and password environment variables we created earlier.

To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.
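For context, these profile settings ultimately translate into a Web Deploy (msdeploy.exe) synchronization. A rough, manual equivalent for the WCF service might look like the sketch below; treat the provider arguments as assumptions, since AppVeyor will actually drive the deployment through MSBuild, shown later in this post.

# A rough manual equivalent of what the 'AzureVM' profile does for the
# WCF service (a sketch -- verify arguments against your msdeploy version)
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"

$dest = "contentPath=MenuWcfRestService," +
        "computerName=https://$($env:AZURE_VM_HOSTNAME):8172/msdeploy.axd?site=MenuWcfRestService," +
        "userName=$($env:AZURE_VM_USERNAME),password=$($env:AZURE_VM_PASSWORD),authType=Basic"

& $msdeploy -verb:sync -source:contentPath=C:\MenuWcfRestService -dest:$dest -allowUntrusted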

The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.

Publish Web Profile Tab

Each profile in the above Profile drop-down represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The ‘.pubxml’ profile files can be found in the Solution Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).

As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note that, since we are deploying locally, the profile is configured to open the main page of the website in a browser after deployment; a helpful time-saver during development.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>FileSystem</WebPublishMethod>
        <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
        <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <publishUrl>C:\MenuWcfRestService</publishUrl>
        <DeleteExistingFiles>True</DeleteExistingFiles>
    </PropertyGroup>
</Project>

A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Web Publish wizard does not allow this change; we must edit the ‘AzureVM.pubxml’ files for both the website and the WCF service directly. We will replace the hostname of the server where we will deploy the projects with a variable (i.e. AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone with access to the Solution’s source code cannot obtain the sensitive information that would allow them to compromise the site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>MSDeploy</WebPublishMethod>
        <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish />
        <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
        <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
        <RemoteSitePhysicalPath />
        <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
        <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
        <EnableMSDeployBackup>True</EnableMSDeployBackup>
        <UserName>$(AZURE_VM_USERNAME)</UserName>
        <_SavePWD>False</_SavePWD>
        <_DestinationType>AzureVirtualMachine</_DestinationType>
    </PropertyGroup>
</Project>

The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer be able to deploy using them. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…

Publish Web Connection Tab – Failed Validation

We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command-line input and PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.

To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands AppVeyor uses to build our Solution, and then deploy it to Azure. The first two MSBuild commands build the WCF service and the website; the second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution; I found these to be the most succinct. I have split the build and deploy steps so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.

# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."

Below is the output from AppVeyor showing the WCF service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process; at this point, the Solution has already been built and the automated unit tests have completed successfully.

AppVeyor Output from Deployments to Azure.

Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.

Final View of IIS Sites Running on Azure VM


Single Page Web Applications, Book Review

A brief review of ‘Single Page Web Applications’, by authors Michael S. Mikowski and Josh C. Powell. Learn to build modern browser-based apps, using the latest full-stack JavaScript technologies.

Recently, I had the opportunity to review the eBook edition of ‘Single Page Web Applications‘, by authors Michael S. Mikowski and Josh C. Powell, published by Manning Publications. Most of us involved in software development are acutely aware of the recent explosion of interest in full-stack JavaScript applications, NoSQL databases, HTML5/CSS3, web-sockets, and single-page web applications (SPAs). Mikowski and Powell’s book hit the market at a perfect time (released last September), and with just the right mix of timely learning opportunities for the reader.

An interesting twist on many current books in this category is the authors’ lack of heavy reliance on one or more popular JavaScript libraries, such as AngularJS, Ember.js, and Backbone.js. Mikowski and Powell purposefully build a JavaScript-based SPA from the ground up, without simply plugging into a ready-made library or API. Although many readers may be heavily tied to a certain library or API, understanding how to build a SPA from the ground up is invaluable.

The first thing that struck me was the thoroughness of the book’s examples. A question many publishers ask: does the book have enough ‘real-world’ examples? Sadly, the answer is often no. Many books offer only incomplete, academic examples, which are difficult to scale to the complexity of modern software development. In this case, however, I felt Mikowski and Powell’s book hit a home run with its ‘real-world’ code samples. It is obvious both authors are working professionals, doing development in the ‘real world’. The book’s samples build upon one another throughout the book, effectively expanding the application’s scope and the reader’s knowledge.

The second attribute that stood out to me was the book’s documentation. In fact, that might have been one of the very few minor negatives I found with the book: too many comments. The authors go to great lengths to thoroughly comment and document the code samples, in some examples almost obscuring the code itself. Still, I found the comments both detailed and helpful.

The third attribute that stood out to me was the authors’ focus on testing. Testing the sample applications is highlighted throughout the book. Additionally, Appendix B, ‘Testing a SPA’, has more information on testing complex JavaScript applications than many other books I have read. Testing is often ignored in books and training materials; however, software testing is an integral part of the ‘real-world’ software development life-cycle, and critical to software’s success.

Lastly, I found a lot of value in Appendix A, ‘JavaScript coding standard‘. Read this part first! Anyone can follow along with the book, mimicking code samples, without really understanding JavaScript’s core concepts. Without that understanding, it is hard to apply the book’s lessons to your own applications. I felt the JavaScript overview in Appendix A of Mikowski and Powell’s book was one of the best I have read, and I will be referring back to the appendix’s coding style guide in the future.


Windows PowerShell 4.0 for .NET Developers, Book Review

A brief review of ‘Windows PowerShell 4.0 for .NET Developers’, a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.

Windows PowerShell 4.0 for .NET Developers

Introduction

Recently, I had the opportunity to review ‘Windows PowerShell 4.0 for .NET Developers‘, published by Packt Publishing. According to its author, Sherif Talaat, the book is ‘a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.‘ Working in a large and complex software development organization, I find technologies such as PowerShell, which enable increased speed and automation, essential to our success. Having used PowerShell on a regular basis as a .NET developer for the past few years, I was excited to see what Sherif’s newest book offered.

Requirements

The book recommends the following minimal software configuration to work through the code samples:

  • Windows Server 2012 R2 (includes PowerShell 4.0 and .NET 4.5)
  • SQL Server 2012
  • Visual Studio 2012/2013
  • Visual Studio Team Foundation Server (TFS) 2012/2013

To test the book’s samples, I provisioned a fresh VM and, using my MSDN subscription, installed the required Windows Server, SQL Server, and Team Foundation Server. I worked directly on the VM, as well as remotely from a Windows 7 Enterprise-based development machine with Visual Studio 2012 installed. The code samples worked fairly well; I found only a few minor problems. No errata had been published for the book as of the time of this review.

A key aspect many authors do not address is the complexity of using PowerShell in a corporate environment. Working individually or on a small network, developers don’t always experience the added burden of restrictive network security, LDAP, proxy servers, proxy authentication, XML gateways, firewalls, and centralized computer administration. Any code that requires access to remote servers and systems often requires additional work to function within a corporate environment. It can be frustrating to debug and extend simple examples to work successfully within an enterprise setting.
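For example, something as simple as a web request often needs explicit proxy handling inside a corporate network. A one-line sketch (the proxy address is a placeholder):

# Route a web request through an authenticated corporate proxy
# ('proxy.corp.example.com:8080' is a placeholder)
Invoke-WebRequest -Uri "https://www.github.com" `
  -Proxy "http://proxy.corp.example.com:8080" -ProxyUseDefaultCredentials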

Contents

Windows PowerShell 4.0 for .NET Developers, at 115 pages in length, is divided into five chapters:

  • Chapter 1: Getting Started with Windows PowerShell
  • Chapter 2: Unleashing Your Development Skills with PowerShell
  • Chapter 3: PowerShell for Your Daily Administration Tasks
  • Chapter 4: PowerShell and Web Technologies
  • Chapter 5: PowerShell and Team Foundation Server

Chapter 1 provides a brief introduction to PowerShell. At a scant 30 pages, I would not recommend this book as a way to learn PowerShell for the beginner. For learning PowerShell, I recommend Instant Windows PowerShell, by Vinith Menon, also published by Packt Publishing. Alternatively, I recommend a few books by Manning Publications, including Learn Windows PowerShell in a Month of Lunches, Second Edition.

Chapter 2 discusses PowerShell in relationship to several key Microsoft technologies, including Windows Management Instrumentation (WMI), Common Information Model (CIM), Component Object Model (COM), and Extensible Markup Language (XML). As a .NET developer, it’s almost impossible not to have worked with one, or all, of these technologies. The chapter also covers how PowerShell works with .NET objects and extends the .NET Framework, and includes an easy-to-follow example of creating, importing, and calling a PowerShell binary module (a compiled .NET class library), using Visual Studio.
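For readers unfamiliar with binary modules, the workflow amounts to something like the following sketch (the module and cmdlet names here are hypothetical, not the book’s):

# Import a compiled .NET class library as a PowerShell binary module
# ('MyCmdlets.dll' and 'Get-Greeting' are hypothetical names)
Import-Module .\MyCmdlets.dll

# Discover the cmdlets the assembly exposes, then call one
Get-Command -Module MyCmdlets
Get-Greeting -Name "PowerShell"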

Chapter 3 explores areas where a .NET developer can start leveraging PowerShell for daily administrative tasks. In particular, I found the sections on PowerShell Remoting and on administering IIS and SQL Server particularly useful. Being able to easily connect to remote web, application, and database servers from the command line (or, PowerShell prompt) and do basic system administration is a huge time saver in an agile development environment.
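As a taste of the kind of remote administration the chapter covers, a sketch like the following queries IIS and SQL Server on remote machines; the server names are placeholders, and the WebAdministration and SQLPS modules must be available on the respective targets:

# Query IIS websites on a remote web server via PowerShell Remoting
Invoke-Command -ComputerName WEBSERVER01 -ScriptBlock {
  Import-Module WebAdministration
  Get-Website | Select-Object Name, State, PhysicalPath
}

# Run a quick query against a remote SQL Server instance
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query "SELECT @@VERSION;"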

Chapter 4 focuses on how PowerShell interfaces with SOAP- and REST-based services, web requests, and JSON. Windows Communication Foundation (WCF) based service-oriented application development has been a trend for the last few years. Being able to manage, test, and monitor SOAP and RESTful services and HTTP requests/responses is important to .NET developers. PowerShell can often be quicker and easier than writing and compiling service utilities in Visual Studio, or using proprietary third-party applications.
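For instance, consuming a SOAP service or a REST endpoint each take only a line or two of PowerShell; the endpoints below are placeholders, and the ‘Add’ method belongs to a hypothetical calculator service:

# Consume a SOAP service via an auto-generated proxy (URL is a placeholder)
$proxy = New-WebServiceProxy -Uri "http://services.example.com/Calc.asmx?WSDL"
$proxy.Add(2, 3)

# Call a RESTful endpoint and work with the parsed JSON response
Invoke-RestMethod -Uri "http://api.example.com/menu"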

Chapter 5 is dedicated to Visual Studio Team Foundation Server (TFS), Microsoft’s end-to-end Application Lifecycle Management (ALM) solution. Chapter 5 details the installation and use of the TFS Power Tools and the TFS PowerShell snap-in. Having held the roles of lead developer and Scrum Master, I have personally found some of the best uses for PowerShell in automating various aspects of TFS. Managing TFS often involves repetitive tasks, an area where PowerShell excels. You will need to explore additional resources beyond the scope of this book to really start automating TFS with PowerShell.

Conclusion

Overall, I enjoyed the book and felt it was well worth the time to explore. I applaud Sherif for targeting a PowerShell book specifically to developers. Due to its short length, the book did leave me wanting more information on a few subjects that were barely skimmed. I also found myself expecting guidance on a few subjects the book did not touch upon, such as PowerShell for cloud-based development (Azure), test automation, and build and deployment automation. For more information on some of those subjects, I recommend Sherif’s other book, also published by Packt Publishing, PowerShell 3.0 Advanced Administration Handbook.

