Installing Puppet Master and Agents on Multiple VMs Using Vagrant and VirtualBox

 Automatically provision multiple VMs with Vagrant and VirtualBox. Automatically install, configure, and test Puppet Master and Puppet Agents on those VMs.


Introduction

Note this post and accompanying source code were updated on 12/16/2014 to v0.2.1. It contains several improvements that simplify the install process.

Puppet Labs’ Open Source Puppet Agent/Master architecture is an effective solution to manage infrastructure and system configuration. However, for the average System Engineer or Software Developer, installing and configuring Puppet Master and Puppet Agent can be challenging. If the installation doesn’t work properly, the engineer is stuck troubleshooting, or trying to remove and re-install Puppet.

A better solution is to automate the installation of Puppet Master and Puppet Agent on Virtual Machines (VMs). Automating the installation process guarantees accuracy and consistency. Installing Puppet on VMs means the VMs can be snapshotted, cloned, or simply destroyed and recreated, if needed.

In this post, we will use Vagrant and VirtualBox to create three VMs. The VMs will be built from an Ubuntu 14.04.1 LTS (Trusty Tahr) Vagrant Box, previously hosted on Vagrant Cloud, now on Atlas. We will use a single JSON-format configuration file to build all three VMs, automatically. As part of the Vagrant provisioning process, we will run a bootstrap shell script to install Puppet Master on the first VM (Puppet Master server) and Puppet Agent on the two remaining VMs (agent nodes).

Lastly, to test our Puppet installations, we will use Puppet to install some basic Puppet modules, including ntp and git on the server, and ntp, git, Docker, and Fig on the agent nodes.

All the source code for this project is on GitHub.

Vagrant

To begin the process, we will use the JSON-format configuration file to create the three VMs, using Vagrant and VirtualBox.

{
  "nodes": {
    "puppet.example.com": {
      ":ip": "192.168.32.5",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-master.sh"
    },
    "node01.example.com": {
      ":ip": "192.168.32.10",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    },
    "node02.example.com": {
      ":ip": "192.168.32.20",
      "ports": [],
      ":memory": 1024,
      ":bootstrap": "bootstrap-node.sh"
    }
  }
}

The Vagrantfile uses the JSON-format configuration file to provision the three VMs, using a single ‘vagrant up‘ command. That’s it, less than 30 lines of actual code in the Vagrantfile to create as many VMs as we need. For this post’s example, we will not need to add any port mappings, which can be done from the JSON configuration file (see the README.md for more directions). The Vagrant Box we are using already has the correct ports opened.

If you have not previously used the Ubuntu Vagrant Box, it will take a few minutes the first time for Vagrant to download it to the local Vagrant Box repository.

# vi: set ft=ruby :

# Builds Puppet Master and multiple Puppet Agent Nodes using JSON config file
# Author: Gary A. Stafford

require 'json'

# read node configurations from the JSON file
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node

    config.vm.define node_name do |config|
      # configures all forwarding ports in JSON array
      ports = node_values['ports']
      ports.each do |port|
        config.vm.network :forwarded_port,
          host:  port[':host'],
          guest: port[':guest'],
          id:    port[':id']
      end

      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values[':ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values[':memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end

      config.vm.provision :shell, :path => node_values[':bootstrap']
    end
  end
end

Once provisioned, the three VMs, also referred to as ‘Machines’ by Vagrant, should appear, as shown below, in Oracle VM VirtualBox Manager.

Vagrant Machines in VM VirtualBox Manager

The names of the VMs, referenced in Vagrant commands, are the parent node names in the JSON configuration file (node_name), for example, ‘vagrant ssh puppet.example.com‘.
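
For reference, a typical workflow from the project’s root directory looks similar to the following (the machine names come from the JSON configuration file):

vagrant up                      # provision all three VMs defined in nodes.json
vagrant status                  # confirm the state of each machine
vagrant ssh puppet.example.com  # open an SSH session on the Puppet Master VM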

Vagrant Machine Names

Bootstrapping Puppet Master Server

As part of the Vagrant provisioning process, a bootstrap script is executed on each of the VMs (script shown below). This script will do 98% of the required work for us. There is one script for the Puppet Master server VM, and another for the agent nodes.

#!/bin/sh

# Run on VM to bootstrap Puppet Master server

if ps aux | grep "puppet master" | grep -v grep 2> /dev/null
then
    echo "Puppet Master is already installed. Exiting..."
else
    # Install Puppet Master
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb && \
    sudo dpkg -i puppetlabs-release-trusty.deb && \
    sudo apt-get update -yq && sudo apt-get upgrade -yq && \
    sudo apt-get install -yq puppetmaster

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5    puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10   node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20   node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add optional alternate DNS names to /etc/puppet/puppet.conf
    sudo sed -i 's/.*\[main\].*/&\ndns_alt_names = puppet,puppet.example.com/' /etc/puppet/puppet.conf

    # Install some initial puppet modules on Puppet Master server
    sudo puppet module install puppetlabs-ntp
    sudo puppet module install garethr-docker
    sudo puppet module install puppetlabs-git
    sudo puppet module install puppetlabs-vcsrepo
    sudo puppet module install garystafford-fig

    # symlink manifest from Vagrant synced folder location
    ln -s /vagrant/site.pp /etc/puppet/manifests/site.pp
fi

There are a few last commands we need to run ourselves, from within the VMs. Once the provisioning process is complete, use ‘vagrant ssh puppet.example.com‘ to connect to the newly provisioned Puppet Master server. Below are the commands we need to run within the ‘puppet.example.com‘ VM.

sudo service puppetmaster status # test that puppet master was installed
sudo service puppetmaster stop
sudo puppet master --verbose --no-daemonize
# Ctrl+C to kill puppet master
sudo service puppetmaster start
sudo puppet cert list --all # check for 'puppet' cert

According to Puppet’s website, ‘these steps will create the CA certificate and the puppet master certificate, with the appropriate DNS names included.’

Bootstrapping Puppet Agent Nodes

Now that the Puppet Master server is running, open a second terminal tab (‘Shift+Ctrl+T‘). Use the command, ‘vagrant ssh node01.example.com‘, to ssh into the new Puppet Agent node. The agent node bootstrap script should have already executed as part of the Vagrant provisioning process.

#!/bin/sh

# Run on VM to bootstrap Puppet Agent nodes
# http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet/

if ps aux | grep "puppet agent" | grep -v grep 2> /dev/null
then
    echo "Puppet Agent is already installed. Moving on..."
else
    sudo apt-get install -yq puppet
fi

if cat /etc/crontab | grep puppet 2> /dev/null
then
    echo "Puppet Agent is already configured. Exiting..."
else
    sudo apt-get update -yq && sudo apt-get upgrade -yq

    sudo puppet resource cron puppet-agent ensure=present user=root minute=30 \
        command='/usr/bin/puppet agent --onetime --no-daemonize --splay'

    sudo puppet resource service puppet ensure=running enable=true

    # Configure /etc/hosts file
    echo "" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "# Host config for Puppet Master and Agent Nodes" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.5    puppet.example.com  puppet" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.10   node01.example.com  node01" | sudo tee --append /etc/hosts 2> /dev/null && \
    echo "192.168.32.20   node02.example.com  node02" | sudo tee --append /etc/hosts 2> /dev/null

    # Add agent section to /etc/puppet/puppet.conf
    echo "" && echo "[agent]\nserver=puppet" | sudo tee --append /etc/puppet/puppet.conf 2> /dev/null

    sudo puppet agent --enable
fi

Run the two commands below within both the ‘node01.example.com‘ and ‘node02.example.com‘ agent nodes.

sudo service puppet status # test that agent was installed
sudo puppet agent --test --waitforcert=60 # initiate certificate signing request (CSR)

The second command above will manually start Puppet’s Certificate Signing Request (CSR) process, to generate the certificates and security credentials (private and public keys) issued by Puppet’s built-in certificate authority (CA). Each Puppet Agent node must have its certificate signed by the Puppet Master, first. According to Puppet’s website, “Before puppet agent nodes can retrieve their configuration catalogs, they need a signed certificate from the local Puppet certificate authority (CA). When using Puppet’s built-in CA (that is, not using an external CA), agents will submit a certificate signing request (CSR) to the CA Puppet Master and will retrieve a signed certificate once one is available.”

Agent Node Starting Puppet’s Certificate Signing Request (CSR) Process

Back on the Puppet Master Server, run the following commands to sign the certificate(s) from the agent node(s). You may sign each node’s certificate individually, or wait and sign them all at once. Note the agent node(s) will wait for the Puppet Master to sign the certificate, before continuing with the Puppet Agent configuration run.

sudo puppet cert list # should see 'node01.example.com' cert waiting for signature
sudo puppet cert sign --all # sign the agent node certs
sudo puppet cert list --all # check for signed certs

Puppet Master Completing Puppet’s Certificate Signing Request (CSR) Process

Once the certificate signing process is complete, the Puppet Agent retrieves the client configuration from the Puppet Master and applies it to the local agent node. The Puppet Agent will execute all applicable steps in the site.pp manifest on the Puppet Master server, designated for that specific Puppet Agent node (i.e. ‘node node02.example.com {...}‘).

Configuration Run Completed on Puppet Agent Node

Below is the main site.pp manifest on the Puppet Master server, applied by Puppet Agent on the agent nodes.

node default {
# Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git
}

node 'node01.example.com', 'node02.example.com' {
# Test message
  notify { "Debug output on ${hostname} node.": }

  include ntp, git, docker, fig
}

That’s it! You should now have one server VM running Puppet Master, and two agent node VMs running Puppet Agent. Both agent nodes should have been successfully registered with Puppet Master, and configured themselves based on the Puppet Master’s main manifest. Agent node configuration includes installing ntp, git, Fig, and Docker.
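
As an optional sanity check, not part of the original process, you can confirm the module-managed packages on an agent node. The commands below assume each package installs its usual command-line binary:

# run on node01.example.com or node02.example.com after the configuration run completes
ntpq -p            # ntp should list its time sources
git --version      # git should be installed
docker --version   # Docker should be installed
fig --version      # Fig should be installed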

Helpful Links

All the source code for this project is on GitHub.

Puppet Glossary (of terms):
https://docs.puppetlabs.com/references/glossary.html

Puppet Labs Open Source Automation Tools:
http://puppetlabs.com/misc/download-options

Puppet Master Overview:
http://ci.openstack.org/puppet.html

Install Puppet on Ubuntu:
https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html

Installing Puppet Master:
http://andyhan.linuxdict.com/index.php/sys-adm/item/273-puppet-371-on-centos-65-quick-start-i

Regenerating Node Certificates:
https://docs.puppetlabs.com/puppet/latest/reference/ssl_regenerate_certificates.html

Automating Development Environments with Vagrant and Puppet:
http://blog.kloudless.com/2013/07/01/automating-development-environments-with-vagrant-and-puppet


Preventing Race Conditions Between Containers in ‘Dockerized’ MEAN Applications

Eliminate potential race conditions between the MongoDB data Docker container and the Node.js web-application container in a ‘Dockerized’ MEAN application.


Introduction

The MEAN stack has gained enormous popularity as a reliable and scalable full-stack JavaScript solution. MEAN web applications have four main components: MongoDB, Express, AngularJS, and Node.js. MEAN web applications often include other components, such as Mongoose, Passport, Twitter Bootstrap, Yeoman, Grunt or Gulp, and Bower. The two most popular ready-made MEAN application templates are MEAN.IO from Linnovate, and MEAN.JS. Both of these offer a ready-made application framework for building MEAN applications.

Docker has also gained enormous popularity. According to Docker, Docker is an open platform, which enables developers and sysadmins to quickly assemble applications from components. ‘Dockerized’ apps are completely portable and can run anywhere.

Docker is an ideal solution for MEAN applications. Being a full-stack JavaScript solution, MEAN applications are based on a multi-tier architecture. The MEAN application’s data tier contains the MongoDB noSQL database. The application tier (logic tier) contains Node.js and Express. The application tier can also contain other components, such as Mongoose, a Node.js Object Document Mapper (ODM) for MongoDB, and Passport, an authentication middleware for Node.js. Lastly, the presentation tier (front end) has client-side tools, such as AngularJS and Twitter Bootstrap.

Using Docker, we can ‘Dockerize’ or containerize each tier of a MEAN application, mirroring the physical architecture we would deploy a MEAN application to, in a Production environment. Just as we would always run a separate database server or servers for MongoDB, we can isolate MongoDB into a Docker container. Likewise, we can isolate the Node.js web server, along with the rest of the components (Mongoose, Express, Passport) on the application and presentation tiers, into a Docker container. We can easily add more containers, for more functionality, such as load-balancing and reverse-proxies (nginx), and caching (Redis and Memcached).

The MEAN.JS project has been very progressive in implementing Docker, to offer a more realistic environment for development and testing. An additional tool the MEAN.JS project has implemented, to automate the creation of multiple, linked Docker containers, is Fig.

Using Docker and Fig, a Developer can pull down ready-made base containers from Docker Hub, configure the containers as part of a multi-tier application environment, deploy our MEAN application components to the containers, and start the applications, all with a short list of commands.

Note, I said development and test, not production. To extend Docker and Fig to production, you can use tools such as Flocker. Flocker, by ClusterHQ, can scale the single-host Fig environment to multiple containers on multiple machines (hosts).

MEAN Dockerized

Race Conditions

Docker containers have a very fast start-up time, compared to other technologies, such as VMs (virtual machines). However, based on their contents, containers take varying amounts of time to fully start up. In most multi-tier applications, there is a required start-up sequence for components (tiers, servers, applications). For example, in a database-driven application, like a MEAN application, you should make sure the MongoDB database server is up and running, before starting the application. Although this is obvious, it becomes harder to guarantee the order in which components will start up, when you leverage an asynchronous, automated, continuous delivery solution like Docker with Fig.

When component dependencies are not met because another container has not fully started, we can refer to this as a race condition. I have found that with most multi-container MEAN applications, the slower-starting MongoDB data container prevents the quicker-starting Node.js web-application container from properly starting the MEAN application. In other words, the application crashes.

Fixing Race Conditions with MEAN.JS Applications

In order to eliminate race conditions, we need to script our start-up sequence to guarantee the order in which components will start, ensuring the overall application starts correctly. Specifically in this post, we will eliminate the potential race condition between the MongoDB data container (db_1) and the Node.js web-application container (web_1). At the same time, we will fix a small error in the existing MEAN.JS project that prevents proper start-up of the ‘Dockerized’ MEAN.JS application.

Race Condition with Docker

Download and Build MEAN.JS App

Clone the meanjs/mean repository, and install npm and bower packages.

git clone https://github.com/meanjs/mean.git
cd mean
npm install
bower install

Modify MEAN.JS App

  1. Add the wait_mongo_start.sh start-up script to the root of the mean project.
  2. Modify the Dockerfile, replacing CMD ["grunt"] with CMD /bin/sh /home/mean/wait_mongo_start.sh
  3. Optional, add the fig_start.sh clean-up/start-up script to the root of the mean project.

Fix Existing Issue with MEAN.JS App When Using Docker and Fig

The existing MEAN.JS application references localhost in the development configuration (config/env/development.js). The development configuration is the one used by the MEAN.JS application at start-up. The MongoDB data container (db_1) is not running on localhost; it is running on an IP address, assigned by Docker. To discover the IP address, we must reference an environment variable (DB_1_PORT_27017_TCP_ADDR), created by Docker, within the Node.js web-application container (web_1).

  1. Modify the config/env/development.js file, add var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';
  2. Modify the config/env/development.js file, change db: 'mongodb://localhost/mean-dev', to db: 'mongodb://' + DB_HOST + '/mean-dev',

Start the Application

Start the application using Fig commands or using the clean-up/start-up script (sh fig_start.sh).

  1. Run fig build && fig up
  2. Alternately, run sh fig_start.sh

The Details…

The CMD instruction is the last step in the Dockerfile. It sets the wait_mongo_start.sh script to execute in the Node.js web-application container (web_1) when the container starts. This script prevents the grunt command from running until nc (netcat) succeeds at connecting to the IP address and port of mongod, the primary daemon process for the MongoDB system, on the MongoDB data container (db_1). The script uses a 3-second polling interval, which can be modified if necessary.

#!/bin/sh

polling_interval=3

# optional, view db_1 container-related env vars
#env | grep DB_1 | sort

echo "wait for mongo to start first..."

# wait until mongo is running in db_1 container
until nc -z $DB_1_PORT_27017_TCP_ADDR $DB_1_PORT_27017_TCP_PORT
do
 echo "waiting for $polling_interval seconds..."
 sleep $polling_interval
done

# start node app
grunt

The environment variables referenced in the script are created in the Node.js web-application container (web_1), automatically, by Docker. They are shown in the screen grab, below. You can discover these variables by uncommenting the env | grep DB_1 | sort line, above.

Docker Environment Variables Relating to DB_1
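
If you cannot view the screen grab, the Docker-generated link variables follow this general pattern inside the web_1 container (the address shown is only an example; Docker assigns the actual value):

env | grep DB_1 | sort
# DB_1_PORT_27017_TCP=tcp://172.17.0.5:27017   (example value)
# DB_1_PORT_27017_TCP_ADDR=172.17.0.5          (example value)
# DB_1_PORT_27017_TCP_PORT=27017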

The Dockerfile modification is highlighted below.

FROM dockerfile/nodejs

MAINTAINER Matthias Luebken, matthias@catalyst-zero.com

WORKDIR /home/mean

# Install Mean.JS Prerequisites
RUN npm install -g grunt-cli
RUN npm install -g bower

# Install Mean.JS packages
ADD package.json /home/mean/package.json
RUN npm install

# Manually trigger bower. Why doesn't this work via npm install?
ADD .bowerrc /home/mean/.bowerrc
ADD bower.json /home/mean/bower.json
RUN bower install --config.interactive=false --allow-root

# Make everything available for start
ADD . /home/mean

# Currently only works for development
ENV NODE_ENV development

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729

CMD /bin/sh /home/mean/wait_mongo_start.sh

The config/env/development.js modifications are highlighted below (abridged code).

'use strict';

// used when building application using fig and Docker
var DB_HOST = process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost';

module.exports = {
	db: 'mongodb://' + DB_HOST + '/mean-dev',
	log: {
		// Can specify one of 'combined', 'common', 'dev', 'short', 'tiny'
		format: 'dev',
		// Stream defaults to process.stdout
		// Uncomment to enable logging to a log on the file system
		options: {
			//stream: 'access.log'
		}
	},
        ...

The fig_start.sh file is optional and not part of the solution for the race condition. Instead of repeating multiple commands, I prefer running a single script, which can execute the commands, consistently. Note, commands in this script remove ALL ‘Exited’ containers and untagged (<none>) images.

#!/bin/sh

# remove all exited containers
echo "Removing all 'Exited' containers..."
docker rm -f $(docker ps -a -q --filter 'status=exited') > /dev/null 2>&1

# remove all untagged (<none>) images
echo "Removing all untagged images..."
docker rmi $(docker images | grep "^<none>" | awk '{print $3}') > /dev/null 2>&1

# build and start containers with fig
fig build && fig up

MEAN Application Start-Up Screen Grabs

Below are screen grabs showing the MEAN.JS application starting up, both before and after the changes were implemented.

Start Script Cleaning Up Docker Images and Containers and Running Fig

MongoDB Cannot Connect on localhost

MEAN Application Waiting for MongoDB to Start, Currently at 70%…

Connected to MongoDB on Correct IP Address and Grunt Running

MEAN.JS Docker Containers Created

MEAN Application Successfully Running in Docker Containers

New Article Created with MEAN Application


Install Latest Node.js and npm in a Docker Container

Install the latest versions of Node.js and npm, into a Docker container, with or without the need for root access. Easily update both applications to the latest versions.

Install and Confirm Node and npm

Ubuntu and Node

Recently, I was setting up a new development laptop with Ubuntu 14.10 (Utopic Unicorn). As part of the setup, I needed to install several development tools, including Node.js and npm. Researching the current recommendations for installing Node.js and npm on Ubuntu, I found that the traditional ‘apt-get‘ command does not always install the latest versions of either application. Additionally, ‘apt-get’ makes updating those versions difficult.

After a lot of investigation, I created three different snippets of code to install the latest copies of Node.js and npm. Some of my code came from Isaac Z. Schlueter‘s series of installation Gists, and a post on StackOverflow by Pascal Hartig. Joyent and others recommended Isaac’s Gists for installing earlier versions of Node.js and npm. Other code was found in posts by DigitalOcean. Versions are as follows:

  • Version 1: using ‘apt-get install’
  • Version 2: using curl, make, and npmjs.org’s install script
  • Version 3: version 2 without requiring ‘sudo’ to use npm*

*There is some debate on the use of ‘sudo’ with some earlier versions of npm. It appears not to be recommended with the latest versions of npm.
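
As a rough illustration only, Version 2 follows the general pattern below. The exact steps, package versions, and URLs live in the Gists and may differ from this sketch:

# Version 2 (sketch): build Node.js from source, then run npmjs.org's install script
sudo apt-get update -yq && sudo apt-get install -yq curl make g++
curl -O http://nodejs.org/dist/v0.10.33/node-v0.10.33.tar.gz   # version number is illustrative
tar -xzf node-v0.10.33.tar.gz && cd node-v0.10.33
./configure && make && sudo make install
curl -L https://www.npmjs.org/install.sh | sudo sh             # install/update npm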

Docker

Docker containers and virtual machines (VMs) are ideal platforms for developing and testing applications, locally. I often create a Docker container or VirtualBox VM to install and test new scripts, before running them within our software environments. To test this code, I created three separate Docker containers, based on the official 14.04 Ubuntu base image, located on Docker Hub. I then executed each version of code within a container. After installation testing, I chose version 2 for my laptop.
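
A minimal sketch of creating one of those test containers (the container name is arbitrary):

docker pull ubuntu:14.04                                    # official Ubuntu 14.04 base image
docker run -it --name node_install_test ubuntu:14.04 bash   # start an interactive test container
# inside the container, run one of the three install script versions, then exit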

Docker Ubuntu Image and Containers

GitHub Gists

The three versions of install scripts on gist.github.com perform the following tasks:

  • Creates Docker container
  • Updates Ubuntu system packages within container
  • Creates a new ‘testuser’ account within the container
  • Installs required software to install Node.js, if necessary (curl, make, etc.)
  • Installs Node.js and npm
  • Installs some common full-stack JavaScript npm packages
  • Verifies installation locations and contents are correct

Running Code

Installing Node, npm, and New User Account

Installing and Verifying npm Packages


Software Delivery: Evaluating Risk within the Enterprise

As a software environment evolves from separate applications into an enterprise, how does increasing complexity raise the potential risk of delivering less-than-reliable software?


 Introduction

There are many vendor whitepapers, industry publications, blog posts, podcasts, and e-books, extolling the best practices in software development and delivery. Best practices include industry-standard concepts, such as Agile, DevOps, test automation, continuous integration, and continuous delivery. Generally, these best practices all strive to improve the process of delivering software enhancements and bug fixes to customers.

Rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. – Wikipedia

Most of these learning resources present one of two idealized software environments. I term them the ‘applications as islands’ environment and the ‘utopian enterprise’ environment. I am also often guilty of tailoring my blog posts to one of these two idealized environments. Neither of the environments best models the typical enterprise software environments in which many of us work.

Applications as Islands

The first idealized software environment is one of isolated application stacks. These environments have multiple application stacks, each of which could include web, mobile, and desktop components, services, data sources, utilities and scripts, messaging and reporting components, and so forth. Nonetheless, each application stack is completely isolated from the other application stacks, within the same environment.

The Utopian Enterprise

The second idealized software environment is the utopian-like enterprise. These environments have multiple application stacks with multiple shared components. However, they are built using consistent and modern architectural patterns and compatible technology stacks. They are designed from the ground up to be compartmentalized, scalable, and highly risk-tolerant to changes. They often avoid the challenges of monolithic legacy applications. The closest things in the real world are probably industry trendsetters, such as Facebook, Etsy, Amazon, and Twitter. We all probably wish we could evolve our own software environments into one of these Utopias.

Complexity and Risk

As an organization continues to evolve their software, they naturally increase the overall complexity, and thereby the challenge of effectively delivering reliable and performant software. In this post, I will explore the challenges of software delivery, as a software environment grows in complexity. Specifically, I will focus on how to evaluate the level of risk based on software changes made to various components within the software environment.

Sensitivity and Impact

As we examine the level of risk introduced by software changes within the environment, two aspects of risk are inescapable, sensitivity and impact. Sensitivity will be defined as the potential degree to which one component, such as an application, service, or data source, is affected by changes to other components within the same software environment. How sensitive is ‘Application A’ to changes made to other components within the same software environment, on which ‘Application A’ is directly or indirectly dependent?

Impact will be defined as the potential effect a component’s changes have on other components within the software environment. Teams tend to only evaluate the impact of changes on the immediate component or application stack. They do not sufficiently consider how those changes impact those components that are directly and indirectly dependent on them. What level of impact do changes to ‘Service B’ have on all other components within the software environment that are directly and indirectly dependent on ‘Service B’?

Notice I use the word ‘potential’. Any change has the potential to introduce risk. The level of risk varies, based on the type and volume of changes. A few simple changes should have a low potential for impact, as opposed to a high number of changes, or more complex changes. For example, changing an internal error message logged by a particular service operation should present a very low risk. This, as opposed to rewriting that operation’s complex algorithm for calculating a customer’s creditworthiness. The potential impact of those two types of changes to dependent components varies greatly.

Measuring Risk

For both sensitivity to change and impact of change, I will use a color-coded scale to subjectively assign a level of potential risk to each component within a given software environment. The scale ranges from ‘Low’, to ‘Moderate’, to ‘High’, to ‘Very High’. Using the scale, it is possible to ‘heat map’ a software environment, based on the level of risk from changes.

Independent Aspects of Risk

Sensitivity and impact are two independent aspects of risk. Changes to one component may have a ‘Low’ potential impact on all other components within the environment. While at the same time, that same component may have a ‘High’ sensitivity to changes made to other components within the environment. Alternatively, a component may have a ‘Very High’ risk for potential impact on multiple components within the environment. At the same time, that same component may have a ‘Low’ potential sensitivity to changes made to other components. Sensitivity and impact do not parallel each other.

Growing Complexity

Let’s look at how sensitivity and impact change as we increase the software environment’s complexity. In the first example, we will look at one of the two environments I described earlier, individual isolated applications. Applications may have their own web and mobile components, SOAP and RESTful services, data sources, utilities, scripts, scheduled tasks, and so forth. However, the applications do not depend on each other or components outside their own immediate application stack; the applications are self-contained.


When making changes in this type of environment, the real potential impact is to the overall stability, security, and performance of the individual applications, themselves. As long as they are in isolation, the applications will have no impact on each other. Therefore, the applications’ potential sensitivity to changes, and their impact on other applications, is ‘Low’.

Shared Components

A slightly more complex example is a software environment in which one or more applications have a dependency on a component outside their immediate application stack. For example, a healthcare provider develops a Windows-based application to track their employee’s work schedules (Application A). In addition, they develop a web application to track patient appointments (Application B). Lastly, they offer a client-facing mobile application for patients to track personal fitness and nutrition goals (Application C). Applications B and C share a common set of services and a database for managing patient data.

Software changes made to Applications A, B, and C should have no effect on other components within the software environment. However, Applications B and C are potentially impacted by changes made to either the Services Layer or Data Layer. The Services Layer has ‘High’ potential impact within the software environment. Lastly, the Data Layer should not be directly impacted by changes made to the Services Layer or Applications. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B and C. Therefore, the Data Layer’s potential impact on other dependent components within the environment is ‘Very High’.

Multiple Shared Components

An even more complex example is a software environment in which multiple applications have one or more dependencies on multiple components outside their immediate application stack (many-to-many).

Take for example, a small financial institution. They have a ‘legacy’ COBOL-based application for managing their commercial mortgage business (Application A). They also have an older J2EE-based application, they acquired through a business merger, for managing their commercial banking relationships (Application B). Next, they have a relatively new Java EE-based investment banking application to manage their retail customers (Application C). Lastly, they have a web-based, client-facing application for secure, online retail banking (Application D).

Since both Application A and B serve commercial clients, it is necessary to send financial data between the two application stacks. Since both applications are built on different, older technologies, the development team built a Custom Messaging Middleware component to connect the two applications. The Custom Messaging Middleware component receives, transforms, and delivers messages between the two applications.

Changes made to Applications C and D should have no impact on other components within the software environment. However, changes made to either Application A or B have the potential to indirectly affect the ability to successfully communicate with the other application, via the Custom Messaging Middleware. Changes to the Custom Messaging Middleware have the potential to affect both Applications A and B. The Custom Messaging Middleware has a ‘Moderate’ potential sensitivity to risk, versus ‘Low’, because one could argue that changes to either Application A or Application B’s messaging format could impact the Custom Messaging Middleware’s ability to properly process that application’s messages and successfully deliver them to the opposite application.

Applications B, C, and D have a direct dependency on the Services Layer, and indirectly on the Data Layer. Therefore, the potential impact of changes to the Services Layer on other components is arguably higher than in the last example. The Services Layer’s potential impact on other components is ‘Very High’.

Since Application B has a direct dependency on both the Messaging Middleware and the Services Layer, it has a higher sensitivity to changes than the other three applications. Application B’s potential sensitivity to changes by other components is ‘Very High’.

Changes made to the Services Layer or the Applications will not affect the Data Layer. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B, C, and D. Therefore, the Data Layer’s potential impact on the software environment is ‘Very High’.

Small Enterprise

The last example of increasing complexity is an environment in which even more applications are dependent on even more components. Additionally, there may be different types of components, such as a common UI and third-party APIs, which only increase the complexity of the dependencies. Although this example is nowhere near as complex as many enterprise software environments, it does begin to reflect their intricate, inner-dependent structure.

Let’s use an example of a large web-based retailer. The retailer has a standalone ERM application for managing their wholesale purchasing and product distribution (Application A). Next, they have their primary client-facing storefront (Application B). They also have a separate application to handle customer accounts (Application C). Lastly, they have an application that manages their online media retail business and media storage (Application D).

In addition to the Common Services Layer, Common Data Layer, and Custom Messaging Middleware, as seen in earlier examples, the retailer has two other components in their environment, a Common Web User Interface (UI) and a Web API. The Web UI provides the customer with a seamless branded experience, no matter which application they use – Application B, C, or D. The customer enters the Common Web UI and has all three application’s features seamlessly available to them.

The retailer also exposes a RESTful Web API for its marketing affiliates. Third parties can develop a variety of applications that drive sales to the retailer, in return for a sales commission.

In the earlier examples, individual applications had separate points of entry. However, in this example, the Common Web UI provides a single point of entry for users of Applications B, C, and D. Having a single point of entry also introduces a single point of failure for all three applications. Thus, the potential risk to the retailer and their customers is much greater. The Common Web UI’s potential impact on other components is ‘Very High’.

A single point of entry also introduces a single point of failure.

The potential sensitivity of the Common Web UI to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. Additionally, one could argue, since the Common Web UI displays the three Applications, it is also sensitive to changes made by those applications. If one of those applications becomes impaired due to a bad change, that application would seem to affect the Web UI’s functionality. The Common UI’s potential sensitivity to change is ‘High’.

The Web API is similar to the Common Web UI, in terms of potential sensitivity and impact. The potential impact of changes to the Web API is ‘Very High’, since a defect there could result in the potential impairment of the retailer’s affiliate applications. The potential sensitivity of the Web API to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. The Web API’s potential sensitivity to change is ‘High’. There is very little chance of potential impact to the Web API from the retailer’s affiliate applications.

Impact of Key Components

Lastly, as systems grow in complexity, certain components often become so key, they have the potential to impact the entire environment, a true single point of failure. Below, note the potential impact of changes to the Common Services Layer on all other components. As the software environment has grown in complexity, the Common Services Layer sits at the heart of the system. The Services Layer has multiple components directly dependent on it (e.g. Application C), as well as other components indirectly dependent on it (e.g. Third-Party Applications). It is also the only point of access to and from the Common Data Layer.

There are steps organizations can take to mitigate the potential risk caused by changes to key components, like the Services Layer. Areas organizations commonly focus on to reduce risk are higher code-quality, increased test coverage, and improved performance, fault tolerance, system redundancy, and rollback capabilities. Additionally, management should more thoroughly scrutinize proposed software changes to key components, balancing new features with need for stability, availability, and performance.

Management must balance the need for new features with need for stability, availability, and performance.

Specific to services, organizations often look to decouple larger services, creating smaller, more focused services. Better separation of concerns increases the likelihood that potential impairments caused by code defects are isolated to a smaller subset of functionality.

Conclusion

In this brief post, we examined one aspect of potential risk to delivering reliable software: the sensitivity and impact of software changes. There are many other sources of risk involved with delivering reliable software. They include training, communication, planning, documentation, system infrastructure, and development and release management tooling. Once all sources of risk are identified and quantified, the overall level of risk to delivering reliable software can be assessed, and steps taken to reduce the potential impact.


Managing Windows Servers with Chef, Book Review

Harness the power of Chef to automate management of Windows-based systems using hands-on examples.

Managing Windows Servers with Chef

Recently, I had the opportunity to read ‘Managing Windows Servers with Chef’, authored by John Ewart, and published in May 2014 by Packt Publishing. At a svelte 110 pages in paperback form, ‘Managing Windows Servers with Chef’ is a quick read, packed with concise information, relevant examples, and excellent code samples. Available on Packt Publishing’s website for a mere $11.90 for the ebook, it is a worthwhile investment for anyone considering Chef Software’s Chef product for automating their Windows-based infrastructure.

As an IT professional, I use Chef for both Windows and Linux-based IT automation, on a regular basis. In my experience, there is a plethora of information on the Internet about properly implementing and scaling Chef. There is seldom a topic I can’t find the answers to, online. However, it has also been my experience, information is often Linux-centric. That is one reason I really appreciated Ewart’s book, concentrating almost exclusively on Windows-based implementations of Chef.

IT professionals, just getting started with Chef, or migrating from Puppet, will find ‘Managing Windows Servers with Chef’ invaluable. Ewart does a good job building the user’s understanding of the Chef ecosystem, before beginning to explain its application to a Windows-based environment. If you are considering Chef versus Puppet Labs’ Puppet for Windows-based IT automation, reading this book will give you a solid overview of Chef.

Seasoned users of Chef will also find ‘Managing Windows Servers with Chef’ useful. Professionals quickly master the Chef principles, and develop the means to automate their specific tasks with Chef. But inevitably, there comes the day when they must automate something new with Chef. That is where the book can serve as a handy reference.

Of all the book’s topics, I especially found value in Chapter 5 (Managing Cloud Services with Chef) and Chapter 6 (Going Beyond the Basics – Testing Recipes). Even large enterprise-scale corporations are moving infrastructure to cloud providers. Ewart demonstrates Chef’s Windows-based integration with Microsoft’s Azure, Amazon’s EC2, and Rackspace’s Cloud offerings. Also, Ewart’s section on testing is a reminder to all of us of the importance of unit testing. I admit I more often practice TAD (‘Testing After Development’) than TDD (Test Driven Development), LOL. Ewart introduces both RSpec and ChefSpec for testing Chef recipes.

I recommend ‘Managing Windows Servers with Chef’ for anyone considering Chef, or who is seeking a good introductory guide to getting started with Chef for Windows-based systems.

 


Data-Driven Forms with AngularJS’s Two-Way Data Binding and Custom Directives

Use the two-way data binding and custom directives features of AngularJS to develop data-driven, interactive forms.

Introduction

AngularJS has exploded on to the web-application development scene. Since being introduced in 2009, AngularJS’s use has grown exponentially. Its wide range of features and ease of use make it an ideal tool for rapidly developing modern web-applications. Combined with other modern JavaScript tools, such as Node, Express, Twitter Bootstrap, Yeoman, and NoSQL databases such as MongoDB, AngularJS developers can create robust, full-stack JavaScript applications.

A primary feature of AngularJS is two-way data binding. According to AngularJS’s website, ‘data-binding is the automatic synchronization of data between the model and view. The way that Angular implements data-binding lets you treat the model as the single-source-of-truth in your application. The view is a projection of the model at all times. When the model changes, the view reflects the change, and vice versa.‘ In the past, developers spent much of their coding time wiring up UI components to the application’s data model. AngularJS has greatly simplified this process.

Another key feature of AngularJS is directives. At a high level, according to AngularJS’ site, ‘directives are markers on a DOM element (such as an attribute, element name, comment or CSS class) that tell AngularJS’s HTML compiler to attach a specified behavior to that DOM element or even transform the DOM element and its children.‘ AngularJS provides many built-in directives, including ngModel, ngBind, ngInclude, ngRepeat, and ngChange. These directives are the building blocks of an AngularJS application. We will use many of these built-in directives in this post.

In addition to built-in directives, AngularJS allows us to create custom directives. Custom directives are a powerful feature, allowing us to encapsulate our own reusable DOM manipulation functionality.

The Sample Project

There is an infinite variety of web-based forms (‘electronic forms’). We interact with web-based forms at work, at home, and at school. Forms serve the primary purpose of collecting data from the user. Web-based forms allow us to order products and services over the internet, file our taxes, manage our benefits at work, track our time, and take online classes.

Tests or quizzes are a perfect example of web-based forms to demonstrate AngularJS’s many strengths, including data-binding and custom directives. In this post, we will create a series of interactive quizzes on the theme of AngularJS – sort of a learning opportunity inside a learning opportunity. Quizzes often contain several common types of question/answer formats, including true-false, multiple-choice, multiple-correct, ordering, matching, short-answer, essay, and so forth. These question/answer formats take advantage of all the HTML form elements, including radio buttons, check-boxes, text fields, drop-down lists, list boxes, and text areas. We will build the quizzes from static JSON data files, using AngularJS’s services, controllers, routes, views, templates, directives, and custom directives.

In the first example, we will use AngularJS’s factory service, controller, partial templates, view, routing, and built-in directive features to read JSON data from a file, and display and validate a basic true-false quiz. In the second example, we will expand our true-false quiz to contain additional types of questions, including multiple-choice and multiple-correct. For the advanced quiz, we will make use of custom directives and partial view templates. These two new features will allow us to increase the quizzes’ complexity without substantially increasing the complexity of the code we need to write.

Installing and Configuring the Project

This post’s project is available on GitHub. The easiest way to obtain all the source code, is to clone the project with Git. Once you have cloned the project, don’t forget to install the npm and bower packages. All commands are shown below. The minimum requirements for the project, are to have Bower, Grunt, npm, and Git installed.

git clone https://github.com/garystafford/angular-quiz.git
cd angular-quiz
npm install
bower install

Alternately, if you are experienced building JavaScript applications with the scaffolding tool, yo, you can create a new project and recreate the code yourself. To use generator-angular’s code generators, you will need yo installed, in addition to Bower, Grunt, npm, and Git. Since this post’s project is based on Yeoman’s generator-angular, you can use npm to install generator-angular. Afterwards, using generator-angular’s available code generators, you can easily reproduce the post’s basic project structure.

npm install -g generator-angular

# Use generator-angular code generators to create project components
# Instructions here: https://github.com/yeoman/generator-angular
mkdir quiz-app && cd $_
yo angular quiz
yo angular:route quizAdvanced
yo angular:factory quizAdvancedFactory
yo angular:directive quizTrueFalseDirective

Using yo with generator-angular to Set-up a New Application

Using yo with generator-angular to Create New Components

If you used the generator-angular code generator to create the project yourself, using the above instructions, your module will be called ‘quizApp’. The application name, found in the ‘package.json’ and ‘bower.json’ files, will be ‘quiz’. I changed my project’s module and app names to be more descriptive, along with the names of the routes, factories, directives, and other components. They will also vary slightly using the code generators.

Also, if you used the generator-angular code generator to create the project yourself, you may need to install a few additional npm and bower packages, not part of generator-angular project, to reproduce this post’s project, exactly.

Project Structure

The project structure follows the generator-angular format. Most core application files are kept in the ‘app’ folder. This post’s project has added the ‘app/data’ folder, which holds the quiz data, and the ‘app/scripts/partials’ folder, which holds the partial view templates for the custom directives (explained later).

Project View from WebStorm 8

Starting the Project

The project is started using the ‘grunt serve‘ command. Using the grunt server, the project will be hosted on ‘localhost’, port 9000, by default. This can be changed to a specific hostname or IP address by editing the ‘Gruntfile.js’ file’s ‘connect‘ task.

Testing the Project

There are some basic tests created using Karma, the test runner for JavaScript. These tests are run using the ‘grunt test‘ command. Tests are set to run on port 8092, using the PhantomJS web browser. PhantomJS, if you’re not familiar, is a headless WebKit scriptable with a JavaScript API. PhantomJS is ideal for use with Continuous Integration Servers, such as TravisCI. If you do not have PhantomJS installed, and plan to run the tests, change the ‘browsers‘ property in the ‘karma.conf.js’ file, located in the project’s root directory. Chrome is a good alternative for local testing. Test results for this GitHub project can be reviewed on TravisCI.
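
In short, the two Grunt tasks described above are run from the project’s root directory:

grunt serve   # builds and hosts the application on localhost, port 9000, by default
grunt test    # runs the Karma unit tests on port 8092, using PhantomJS by default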

Creating a complete set of unit tests for the advanced quiz proved challenging based on its nested, partial view templates, described in the Advanced Quiz section. I may add a more complete set of unit test in the future.

Basic Quiz

The first quiz is a six-question, basic true-false format form. The user answers all six questions, and then pushes a button to display the results.

Basic Quiz Before User Input

Basic Quiz With User Input

The basic quiz uses a single controller (quizBasicController.js), single factory service (quizBasicFactory.js), single route (apps.js – ‘/quizBasic’), and a single partial view template (quiz-basic.html), in addition to the main layout (index.html). All these components are part of the ‘quizModule’ AngularJS module. I’ve attempted to illustrate these relationships in the diagram, below.

The factory service (quizBasicFactory.js) uses $resource, a service in AngularJS’s ngResource module, to load the contents of a local JSON-format file (quiz-basic.json).

angular.module('quizModule')
 .factory('quizBasicFactory', function ($resource) {
 return $resource('./data/quiz-basic.json');
 });
{
  "name":      "Basic Quiz Example",
  "questions": [
    {
      "_id":      1,
      "question": "AngularJS is a declarative programming language.",
      "answer":   true
    },
    {
      "_id":      2,
      "question": "The acronym 'SPA' stands for Single-Page Application.",
      "answer":   true
    },
    {
      "_id":      3,
      "question": "AngularJS is written in C++.",
      "answer":   false
    }
    ...
  ]
}

The controller (quizBasicController.js), calls the factory service (quizBasicFactory.js), which returns the ‘data’ object.

angular.module('quizModule')
  .controller('QuizBasicController',
  function ($scope, quizBasicFactory) {
    var createResults;
    $scope.title = null; // quiz title
    $scope.quiz = {}; // quiz questions
    $scope.results = []; // user results

    quizBasicFactory.get(function (data) {
      $scope.title = data.name;
      $scope.quiz = data.questions;
      createResults();
    });

    // prepare array of result objects
    createResults = function () {
      var len = $scope.quiz.length;
      for (var i = 0; i < len; i++) {
        $scope.results.push({
          _id:        $scope.quiz[i]._id,
          answer:     $scope.quiz[i].answer,
          userChoice: null,
          correct:    null
        });
      }
    };

    // assign and check user's choice
    $scope.checkUserChoice = function (question, userChoice) {
      // assign the user's choice to userChoice
      $scope.results[question - 1].userChoice = userChoice;

      // check the user's choice against the answer
      if ($scope.results[question - 1].answer === userChoice) {
        $scope.results[question - 1].correct = 'Correct';
      } else {
        $scope.results[question - 1].correct = 'Incorrect';
      }
    };

    // only show results if all questions are answered
    $scope.checkQuizCompleted = function () {
      var len = $scope.results.length;
      for (var i = 0; i < len; i++) {
        if ($scope.results[i].userChoice === null) {
          return true;
        }
      }
      return false;
    };
  });

The ‘data’ Object Returned from Factory Service containing JSON Data

Contents of the ‘data’ object are used to populate the ‘$scope.quiz[]’, ‘$scope.title’, and ‘$scope.results[]’ properties. The $scope holds the quiz data ($scope.quiz[]), the quiz title ($scope.title), and the results ($scope.results[]). The ‘$scope.checkUserChoice()’ method stores the user’s answer in the ‘$scope.results[].userChoice’ property, and evaluates whether the answer is correct ($scope.results[].correct). The ‘$scope.checkQuizCompleted()’ method checks to make sure all questions have been answered before showing the results, when the user clicks the ‘Show Results’ button.

The $scope Containing Quiz, Title, and Results Properties

AngularJS bootstraps the application. Through AngularJS’s compiling and linking process, the partial view template (quiz-basic.html), shown below, the controller (quizBasicController.js), and the main layout (index.html) form the ‘/quizBasic’ view, which is presented to the user. Blogger Dag-Inge Aas does a nice job of explaining this process in his post, Understanding template compiling in AngularJS.

<h4 class="title">{{title}}</h4>
<br/>

<!--quiz section-->
<form name="quiz">
  <div ng-repeat="question in quiz">
    <strong>{{question._id}}. {{question.question}}</strong>

    <div class="radio">
      <input required
             name="_id{{question._id}}"
             type="radio"
             value="true"
             ng-model="question.userChoice"
             ng-change="$parent.checkUserChoice(question._id, true)"/>
      <label for="_id{{question._id}}">True</label>
      <br/>
      <input required
             name="_id{{question._id}}"
             type="radio"
             ng-value="false"
             ng-model="question.userChoice"
             ng-change="$parent.checkUserChoice(question._id, false)"/>
      <label for="_id{{question._id}}">False</label>
    </div>
  </div>
</form>

<hr/>

<!--results section-->
<div ng-init="showAnswers=true">
  btn-sm"
          ng-click="showAnswers=checkQuizCompleted()">
    Show Results
  </button>
  <br/>
  <br/>

  <div ng-hide="showAnswers">
    <strong>Results</strong>

    <div ng-repeat="result in results">
      {{result._id}}. <span
        ng-class="result.correct == 'Correct' ? 'correct' : 'incorrect'">
        {{result.correct}}
      </span>
    </div>
  </div>
</div>

We load all the contents of the JSON data file into $scope and use the ‘ng-repeat‘ directive to iterate over the questions ($scope.quiz[]) and the results ($scope.results[]). Because of this, modifying existing questions or adding new ones requires no additional coding, just a change to the JSON data file.
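
For example, adding another true-false question is simply a matter of appending one more object to the ‘questions’ array, following the same schema (the ‘_id’ value and question text below are only illustrative):

{
  "_id":      4,
  "question": "AngularJS provides two-way data binding.",
  "answer":   true
}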

Advanced Quiz

Using the same basic building blocks as the basic quiz, plus custom directives, we can add complexity to our quiz without a lot of additional coding. This advanced quiz has nine questions: three true-false, three multiple-choice, and three multiple-correct. As the user answers each question, they are presented with the result, either ‘Correct’ or ‘Incorrect’.

Advanced Quiz Before User Input

Advanced Quiz With User Input

Similar to the basic quiz, the advanced quiz uses a single controller (quizAdvancedController.js), factory service (quizAdvancedFactory.js), route (apps.js – ‘/quizAdvanced’), partial view template (quiz-advanced.html), and the main layout (index.html). Additionally, the advanced quiz uses a filter, three custom directives, and four partial view templates. The fourth partial view template, ‘quiz-choice-response.html’, is called by the first three partial view templates and contains their common DOM elements. Like the basic quiz, all these components are part of the ‘quizModule’ module. I’ve attempted to illustrate these relationships in the diagram, below.
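
The route configuration itself (apps.js) is not listed in this post. A minimal sketch of what it might look like, assuming the ngRoute module and template paths that match the partial view file names (both assumptions):

angular.module('quizModule')
  .config(function ($routeProvider) {
    $routeProvider
      .when('/quizBasic', {
        templateUrl: '/scripts/partials/quiz-basic.html',    // path assumed
        controller:  'QuizBasicController'
      })
      .when('/quizAdvanced', {
        templateUrl: '/scripts/partials/quiz-advanced.html', // path assumed
        controller:  'QuizAdvancedController'
      })
      .otherwise({redirectTo: '/quizBasic'});                // default route assumed
  });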

Just like with the basic quiz, the factory service (quizAdvancedFactory.js) uses $resource to load the contents of a local JSON-format file (quiz-advanced.json). This time, however, the JSON file contains three types of questions, each with a slightly different schema. The three question types are shown in the code snippet below. The true-false questions have a boolean value as the answer, the multiple-choice questions an integer, and the multiple-correct questions an array of integers.

angular.module('quizModule')
  .factory('quizAdvancedFactory', function ($resource) {
    return $resource('./data/quiz-advanced.json');
  });
{
  "name":      "Advanced Quiz Example",
  "questions": [
    {
      "_id":      1,
      "question": "AngularJS is written completely in JavaScript.",
      "type":     "True-false",
      "answer":   true
    },
    {
      "_id":      4,
      "question": "What does the acronym 'MVC' stand for?",
      "type":     "Multiple choice",
      "choices":  [
        {
          "_id":    1,
          "choice": "Method, Variable, Constant"
        },
        {
          "_id":    2,
          "choice": "Module, View, Constraint"
        },
        {
          "_id":    3,
          "choice": "Model, View, Controller"
        },
        {
          "_id":    4,
          "choice": "None of the above"
        }
      ],
      "answer":   3
    },
    {
      "_id":      7,
      "question": "Which of the following are associated with AngularJS?",
      "type":     "Multiple correct",
      "choices":  [
        {
          "_id":    1,
          "choice": "Controller"
        },
        {
          "_id":    2,
          "choice": "Interface"
        },
        {
          "_id":    3,
          "choice": "Route"
        },
        {
          "_id":    4,
          "choice": "View"
        },
        {
          "_id":    5,
          "choice": "Model"
        },
        {
          "_id":    6,
          "choice": "Generator"
        },
        {
          "_id":    7,
          "choice": "Service"
        },
        {
          "_id":    8,
          "choice": "Node"
        }
      ],
      "answer":   [1, 3, 4, 5, 7]
    }
    ...
  ]
}

The controller (quizAdvancedController.js) calls the factory service (quizAdvancedFactory.js), which returns the ‘data’ object, just like in the basic quiz example.

angular.module('quizModule')
  .controller('QuizAdvancedController',
  function ($scope, quizAdvancedFactory, filterFilter) {
    var createResults;
    $scope.title = null; // quiz title
    $scope.quiz = {}; // quiz questions
    $scope.results = []; // user results

    quizAdvancedFactory.get(function (data) {
      $scope.title = data.name;
      $scope.quiz = data.questions;
      createResults();
    });

    // prepare array of result objects
    createResults = function () {
      var len = $scope.quiz.length;
      for (var i = 0; i < len; i++) {
        $scope.results.push({
          _id:        $scope.quiz[i]._id,
          answer:     $scope.quiz[i].answer,
          userChoice: null,
          correct:    null
        });
      }
    };

    // used for multiple correct type questions
    $scope.checkUserMultiCorrectChoice = function (question, userChoice) {
      // create blank array
      if ($scope.results[question - 1].userChoice === null) {
        $scope.results[question - 1].userChoice = [];
      }

      // find the choice; if it is not there, add it; if it is, remove it
      var pos = $scope.results[question - 1].userChoice.indexOf(userChoice);
      if (pos < 0) {
        $scope.results[question - 1].userChoice.push(userChoice);
      } else {
        $scope.results[question - 1].userChoice.splice(pos, 1);
      }

      // check the user's choice against the answer
      var answer = JSON.stringify($scope.quiz[question - 1].answer.sort());
      var choice = JSON.stringify($scope.results[question - 1].userChoice.sort());

      if (answer === choice) {
        $scope.results[question - 1].correct = true;
      } else {
        $scope.results[question - 1].correct = false;
      }
    };

    // used for multiple choice and true-false type questions
    $scope.checkUserChoice = function (question, userChoice) {
      // assign the user's choice to userChoice
      $scope.results[question - 1].userChoice = userChoice;

      // check the user's choice against the answer
      if ($scope.results[question - 1].answer === userChoice) {
        $scope.results[question - 1].correct = true;
      } else {
        $scope.results[question - 1].correct = false;
      }
    };

    // find a specific question
    $scope.filteredQuestion = function (questionId) {
      return filterFilter($scope.quiz, {_id: questionId});
    };
  });

For true-false and multiple-choice questions, the ‘$scope.checkUserChoice()’ method stores the user’s answer in the ‘$scope.results[].userChoice’ property. The method also evaluates whether the answer is correct, and stores that value in the ‘$scope.results[].correct’ property. The method takes two input parameters, the question id and the user’s choice.

For multiple-correct questions, the ‘$scope.checkUserMultiCorrectChoice()’ method does the same. The difference is that, for multiple-correct questions, the method stores the multiple answers and the multiple user choices in a pair of arrays, ‘$scope.results[].answer[]’ and ‘$scope.results[].userChoice[]’. In addition to storing the user’s choices, the method removes a choice if the user deselects it in the view.

Lastly, the ‘$scope.checkUserMultiCorrectChoice()’ method evaluates the user’s choices array against the correct answers array. In the example below, note the ‘$scope.results[6].answer[]’ array and the ‘$scope.results[6].userChoice[]’ array. They were determined to be equal by ‘$scope.checkUserMultiCorrectChoice()’, as reflected in the ‘true’ value of the ‘$scope.results[6].correct’ property.
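
To make this concrete, here is a small worked example based on question 7 from the JSON file above (variable names are illustrative):

// question 7: the correct answer is [1, 3, 4, 5, 7]
var answer = JSON.stringify([1, 3, 4, 5, 7].sort()); // "[1,3,4,5,7]"
var choice = JSON.stringify([7, 5, 4, 3, 1].sort()); // "[1,3,4,5,7]"
console.log(answer === choice);                      // true

Note that Array.prototype.sort() compares elements as strings by default, which works here because the choice ids are all single digits.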

Advanced Quiz Results for Multiple-Correct Question

Filter

In the ‘quizAdvancedController.js’ controller, note the ‘filterFilter’ service injected into the controller’s main function. At the end of the controller, also note the ‘$scope.filteredQuestion(questionId)’ method.

angular.module('quizModule')
  .controller('QuizAdvancedController',
  function ($scope, quizAdvancedFactory, filterFilter) {
    ...
    // find a specific question
    $scope.filteredQuestion = function (questionId) {
      return filterFilter($scope.quiz, {_id: questionId});
    };
  });

The ‘$scope.filteredQuestion(questionId)’ method takes a question id as an input parameter and returns that single question. The method actually returns a call to AngularJS’s filterFilter. The filter takes two parameters: an array containing the entire set of questions (the ‘$scope.quiz’ array), and a ‘pattern object’ containing the specific id to filter on (‘{_id: questionId}’).
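
For example, a call for question id 4 resolves to something like the following (output abbreviated):

// returns an array containing only the matching question object
filterFilter($scope.quiz, {_id: 4});
// [{ _id: 4, question: "What does the acronym 'MVC' stand for?", choices: [...], answer: 3 }]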

The filter method is called by each of the three question-type partial view templates, such as ‘quiz-multi-choice.html’. For example, the partial view template ‘quiz-advanced.html’, shown below, uses the ‘quiz-multichoice’ element to call the custom directive, ‘quizMultiChoiceDirective.js’, passing it a request for question id 4.

<h4 class="title">{{title}}</h4>
<br/>
<form name="quiz">
  <!--true-false-->
  <quiz-truefalse filter-by="1"></quiz-truefalse>
  <quiz-truefalse filter-by="2"></quiz-truefalse>
  <quiz-truefalse filter-by="3"></quiz-truefalse>

  <!--multi-choice-->
  <quiz-multichoice filter-by="4"></quiz-multichoice>
  <quiz-multichoice filter-by="5"></quiz-multichoice>
  <quiz-multichoice filter-by="6"></quiz-multichoice>

  <!--multi-correct-->
  <quiz-multicorrect filter-by="7"></quiz-multicorrect>
  <quiz-multicorrect filter-by="8"></quiz-multicorrect>
  <quiz-multicorrect filter-by="9"></quiz-multicorrect>
</form>

The custom directive, ‘quizMultiChoiceDirective.js’, loads the partial view template, ‘quiz-multi-choice.html’, using the ‘templateUrl’ argument, which fetches the template via Ajax. The template, ‘quiz-multi-choice.html’, uses the ‘ng-repeat‘ directive to populate its section of the advanced quiz with question id 4 (div ng-repeat="question in $parent.filteredQuestion(filterBy)"). It does so by calling filteredQuestion(4) in the ‘quizAdvancedController.js’ controller.

<div ng-repeat="question in $parent.filteredQuestion(filterBy)">
  <strong>{{question._id}}. {{question.question}}</strong>
  <div class="radio" ng-repeat="choice in question.choices">
    <input
        name="_id{{question._id}}"
        type="radio"
        value="{{choice._id}}"
        ng-model="question.userChoice"
        ng-change="$parent.$parent.$parent.checkUserChoice(question._id, choice._id)"/>
    <label for="_id{{question._id}}">{{choice.choice}}</label>
  </div>
  <div ng-include src="'/scripts/partials/quiz-choice-response.html'"></div>
</div>
<br/>
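
The custom directive itself (quizMultiChoiceDirective.js) is not listed in this post. A minimal sketch of what it might look like, assuming an element-restricted directive with an isolate scope that reads the ‘filter-by’ attribute (the templateUrl path is an assumption):

angular.module('quizModule')
  .directive('quizMultichoice', function () {
    return {
      restrict: 'E',                 // used as the <quiz-multichoice> element
      scope: {
        filterBy: '@'                // question id, read from the filter-by attribute
      },
      templateUrl: '/scripts/partials/quiz-multi-choice.html' // path assumed
    };
  });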

The ‘quiz-multi-choice.html’ template also loads the contents of the ‘quiz-choice-response.html’ template. This template contains the DOM elements common to all three question-type templates.

<div ng-if="$parent.$parent.$parent.results[question._id - 1].correct"
     class="result correct">
  <span class="glyphicon glyphicon-thumbs-up"></span>
  Correct!
</div>
<!--specify 'false' because not true (!) would include null (blank)-->
<div ng-if="$parent.$parent.$parent.results[question._id - 1].correct === false"
     class="result incorrect">
  <span class="glyphicon glyphicon-thumbs-down"></span>
  Incorrect
</div>

I have attempted to illustrate the filter in the diagram, below. I intentionally left out a few non-essential components to simplify the diagram, such as the main layout, config, route, service, other custom directives, and the JSON data file.

Using these techniques, we can easily extend the quiz, adding new answer types, such as ordering, matching, short-answer, and so forth.

Managing Scope

If you are familiar with AngularJS, you should understand how scope works: there is more than one scope, and a scope normally inherits from its parent scope. Directives such as ng-repeat, ng-switch, ng-view, and ng-include all create their own child scopes. As the AngularJS team explains, ‘in AngularJS, a child scope normally prototypically inherits from its parent scope. One exception to this rule is a directive that uses scope: { … } — this creates an isolate scope that does not prototypically inherit.‘ We use a number of these directives, and we also use ‘scope:’ within our custom directives for the advanced quiz example, which breaks the chain of inheritance.

In some of the code examples in this post, you will notice the use of ‘$parent‘, ‘$parent.$parent‘, or even ‘$parent.$parent.$parent‘, instead of simply ‘$scope‘. Sometimes it is necessary to reach outside the current scope, to a parent’s scope (‘$parent‘), or that parent’s parent’s scope (‘$parent.$parent‘). A simple example of this: in the partial view template, ‘quiz-multi-choice.html’, we call ‘$parent.filteredQuestion(filterBy)‘. The ‘filteredQuestion(filterBy)’ method we need is in the parent of the template’s scope, so we call ‘$parent’ instead of ‘$scope’.

So how can you determine which scope contains the method or properties you are seeking? Use Batarang, the AngularJS WebInspector Extension for Chrome. Batarang adds an ‘AngularJS’ tab to Chrome’s Developer Tools. Earlier, we used question id 4 as an example with AngularJS’s filter. Using Batarang, below, we can see question id 4 in the final View. Each question returned by the filter is contained within its own separate scope.

Question #4 in Batarang Models Tab

This example also shows how complex working with AngularJS’s scopes can be. Starting with a particular scope, using Batarang, you can visually move up (parent scope) or down (child scope) within the scope hierarchy. The contents of each scope, the Model, are displayed on the right. Batarang also offers several other features, seen below, including AngularJS application performance and dependency visualization.

Links

Quiz Question Types (presentation)

Understanding Service Types (article)

Understanding Scopes (article)

Build custom directives with AngularJS (article)

Google I/O 2012 – Better Web App Development Through Tooling (YouTube video)


Cloud-based Continuous Integration and Deployment for .NET Development

Create a cloud-based, continuous integration and deployment toolchain for distributed .NET development teams, using GitHub, AppVeyor, and Microsoft Azure.

Introduction

Whether you are part of a large enterprise development environment, or a member of a small start-up, you are likely working with remote team members. You may be remote, yourself. Developers, testers, web designers, and other team members, commonly work remotely on software projects. Distributed teams, comprised of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.

If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis-CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premise tooling, via VPN tunnels and Remote Desktop Connections.

There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools, and will scale to fit most development teams.

Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully-integrated Git support and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third party tools available to manage Git and GitHub repositories on Windows. These include Git Bash (my favorite), Git GUI, and GitHub for Windows.

GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
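
A typical day-to-day sequence against the hosted repository might look like the following (the remote, branch, and commit message are only examples):

git pull origin master            # bring down the latest changes from GitHub
git add .                         # stage local changes
git commit -m "Refactor menu service"
git push origin master            # push the commits up to GitHub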

GitHub View of Solution

AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except AppVeyor is only provided as a SaaS. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor, including CloudBees (hosted Jenkins) and Travis-CI. While CloudBees and Travis CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.

Like GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support. AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.

AppVeyor View of Latest Build of Solution

Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and the other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as other cloud-based hosting providers, such as AWS or Rackspace, which also support the .NET stack.
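
Much of this VM configuration can be scripted as well. For example, enabling remote connections to the IIS Web Management Service (WMSVC) typically looks something like the sketch below (assuming the Management Service feature is already installed on the VM):

# Allow remote connections to the IIS Web Management Service (WMSVC)
Set-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\WebManagement\Server `
  -Name EnableRemoteManagement -Value 1

# Start WMSVC and have it start automatically with the server
Set-Service -Name WMSVC -StartupType Automatic
Start-Service -Name WMSVC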

New Microsoft Azure Portal View of VM

Sample Solution

The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities to communicate with a RESTful WCF service, cross-domains, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.

Revised Restaurant Menu Demo Viewed on Android Tablet

The updated VS Solution contains the following four Projects:

  1. Restaurant – C# Class Library
  2. RestaurantUnitTests – Unit Test Project
  3. RestaurantWcfService – C# WCF Service Application
  4. RestaurantDemoSite – Web Site (JS/HTML5)
VS 2013 View of Solution

The Visual Studio Solution Explorer tab, here, shows all projects contained in the Solution, and the primary files and directories they contain.

As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations, one that returns the menu (‘GetCurrentMenu’), and the other that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.

Installing and Configuring the Solution

The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the scripts. For reference, the contents of several of the supplied scripts are shown below; you should run the versions supplied in the repository rather than copying the code from this post.

# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
  "{YOUR HOSTNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
  "{YOUR USERNME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
  "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website (MenuWcfRestService) in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website (RestaurantDemoSite) in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

Cloud-Based Continuous Integration and Delivery

Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. In order for AppVeyor to work with GitHub, we use a webhook. Webhooks are widely used to communicate events between systems, over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.‘ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the GitHub repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, and building, testing, and deploying the Projects. Below is an example of a webhook for AppVeyor, in GitHub.

GitHub’s AppVeyor Webhook Configuration

Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is pushed to GitHub, AppVeyor will clone and build the Solution, and then run the set of unit tests shown below. The Project’s unit tests exercise the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:

Code Coverage Results for Restaurant Class Library

AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console calls the unit test Project’s assembly (‘restaurantunittests.dll’).  As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.

AppVeyor Running Automated Unit Tests Using VSTest.Console

VSTest.Console has several command line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which pre-defined test sets to run. Test sets need to be set up in the Solution, in advance.
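
For reference, running the same tests locally from a Visual Studio command prompt looks something like the following (the output path and the test name filter are only examples):

# Run all tests in the unit test assembly
vstest.console.exe .\RestaurantUnitTests\bin\Release\restaurantunittests.dll /Logger:trx

# Run only the tests whose names contain a given string
vstest.console.exe .\RestaurantUnitTests\bin\Release\restaurantunittests.dll /Tests:GetCurrentMenu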

Configuring Azure VM
Before we publish the Solution from AppVeyor to Azure, we need to configure the VM. Again, we can use PowerShell to script most of the configuration. Most scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy user account, and assigning it in IIS. For reference, the contents of several of the supplied scripts are shown below; you should run the versions supplied in the repository.

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website (MenuWcfRestService) in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website (RestaurantDemoSite) in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy

# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)

$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600

$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)

Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure.  Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments. These are the credentials in the username and password environment variables we created.

To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.

The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.

Publish Web Profile Tab

Each profile in the above Profile drop-down represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The ‘.pubxml’ profile files can be found in the Solution Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).

As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note that since we are deploying locally, the profile is configured to open the main page of the website in a browser after deployment, a helpful time-saver during development.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>FileSystem</WebPublishMethod>
        <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
        <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <publishUrl>C:\MenuWcfRestService</publishUrl>
        <DeleteExistingFiles>True</DeleteExistingFiles>
    </PropertyGroup>
</Project>

A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Publish Web wizard does not allow this change. To do this, we must edit the ‘AzureVM.pubxml’ file for both the website and the WCF service. We will replace the hostname of the server where we will deploy the projects with a variable (e.g. AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone with access to the Solution’s source code cannot obtain any sensitive information that could be used to compromise the site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>MSDeploy</WebPublishMethod>
        <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish />
        <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
        <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
        <RemoteSitePhysicalPath />
        <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
        <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
        <EnableMSDeployBackup>True</EnableMSDeployBackup>
        <UserName>$(AZURE_VM_USERNAME)</UserName>
        <_SavePWD>False</_SavePWD>
        <_DestinationType>AzureVirtualMachine</_DestinationType>
    </PropertyGroup>
</Project>

The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer let us deploy using those profiles. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…

Publish Web Connection Tab – Failed Validation

We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command line input and PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.

To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands used by AppVeyor to build our Solution and then deploy it to Azure. The first two MSBuild commands build the WCF service and the website. The second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution; I found these to be the most succinct. I have split the build and deploy steps so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.

# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."

Below is the output from AppVeyor showing the WCF service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process. At this point, the Solution was already built and the automated unit tests had completed successfully.

AppVeyor Output from Deployments to Azure.

Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.

Final View of IIS Sites Running on Azure VM

