Cloud-based Continuous Integration and Deployment for .NET Development

Create a cloud-based, continuous integration and deployment toolchain for distributed .NET development teams, using GitHub, AppVeyor, and Microsoft Azure.

Introduction

Whether you are part of a large enterprise development environment or a member of a small start-up, you are likely working with remote team members. You may be remote yourself. Developers, testers, web designers, and other team members commonly work remotely on software projects. Distributed teams, composed of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.

If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis-CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premise tooling via VPN tunnels and Remote Desktop connections.

There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools and will scale to fit most development teams.

Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully integrated Git support, and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third-party tools available to manage Git and GitHub repositories on Windows, including Git Bash (my favorite), Git GUI, and GitHub for Windows.

GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
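For example, after committing locally, a developer might add the GitHub remote and push the post’s ‘rev2014’ branch over HTTPS. A minimal sketch, assuming a hypothetical repository URL:

# A sketch; the repository URL is hypothetical
git remote add origin https://github.com/{your-account}/Restaurant.git
git push -u origin rev2014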

GitHub View of Solution

AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates the building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except that AppVeyor is offered only as a SaaS. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor, including CloudBees (hosted Jenkins) and Travis-CI. While CloudBees and Travis-CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.

Like GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support. AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.

AppVeyor View of Latest Build of Solution

Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and the other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as with other cloud-based hosting providers that also support the .NET stack, such as AWS or Rackspace.

New Microsoft Azure Portal View of VM

Sample Solution

The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities to communicate with a RESTful WCF service, cross-domain, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.

Revised Restaurant Menu Demo Viewed on Android Tablet

The updated VS Solution contains the following four Projects:

  1. Restaurant – C# Class Library
  2. RestaurantUnitTests – Unit Test Project
  3. RestaurantWcfService – C# WCF Service Application
  4. RestaurantDemoSite – Web Site (JS/HTML5)

VS 2013 View of Solution

The Visual Studio Solution Explorer tab, shown here, lists all the projects contained in the Solution, along with the primary files and directories each contains.

As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations: one that returns the menu (‘GetCurrentMenu’) and one that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.
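Once the WCF service is hosted, you could smoke-test it from PowerShell. A minimal sketch, assuming the service is running locally and assuming the operation’s URI template (check the service’s help page for the actual URI):

# A sketch; the operation URI is an assumption
Invoke-RestMethod -Uri 'http://localhost:9250/RestaurantService.svc/GetCurrentMenu' -Method Get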

Installing and Configuring the Solution

The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the scripts. For reference, below are the contents of several of the supplied scripts; use the supplied scripts rather than copying these listings.

# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
  "{YOUR HOSTNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
  "{YOUR USERNME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
  "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
Import-Module WebAdministration  # required for the IIS: drive used below

$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

Cloud-Based Continuous Integration and Delivery

Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. For AppVeyor to work with GitHub, we use a webhook. Webhooks are widely used to communicate events between systems over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.’ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the GitHub repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, then building, testing, and deploying the Projects.
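Conceptually, the push event arrives as an HTTP POST with a JSON payload, identified by the ‘X-GitHub-Event’ header. Below is a heavily abridged sketch; the field values are illustrative only:

POST {AppVeyor-provided webhook URL} HTTP/1.1
X-GitHub-Event: push
Content-Type: application/json

{
  "ref": "refs/heads/rev2014",
  "repository": { "name": "...", "url": "..." },
  "commits": [ ... ]
}

Below is an example of a webhook for AppVeyor, in GitHub.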

GitHub’s AppVeyor Webhook Configuration

Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is committed to GitHub, AppVeyor will clone and build the Solution, and then run the set of unit tests shown below. The Project’s unit tests exercise the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:

Code Coverage Results for Restaurant Class Library

AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console calls the unit test Project’s assembly (‘restaurantunittests.dll’). As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.

AppVeyor Running Automated Unit Tests Using VSTest.Console

VSTest.Console has several command-line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which predefined test sets to run. Test sets need to be set up in the Solution, in advance.
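For reference, a typical VSTest.Console invocation resembles the following. This is a sketch, assuming the test assembly’s path, and not AppVeyor’s exact command:

# A sketch; the assembly path is an assumption
vstest.console.exe RestaurantUnitTests\bin\Release\restaurantunittests.dll `
  /InIsolation /Logger:trx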

Configuring Azure VM
Before we publish the Solution from AppVeyor to the Azure VM, we need to configure the VM. Again, we can use PowerShell to script most of the configuration. Most of the scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy user account, and assigning it in IIS. For reference, below are the contents of several of the supplied scripts; use the supplied scripts rather than copying these listings.

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
Import-Module WebAdministration  # required for the IIS: drive used below

$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy

# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)

$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600

$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)

Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure. Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments. These are the credentials in the username and password environment variables we created earlier.

To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.

The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.

Publish Web Profile Tab

Each profile in the above Profile drop-down represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The ‘.pubxml’ profile files can be found in the Solution Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).

As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note that since we are deploying locally, the profile is configured to open the main page of the website in a browser after deployment; a helpful time-saver during development.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>FileSystem</WebPublishMethod>
        <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
        <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <publishUrl>C:\MenuWcfRestService</publishUrl>
        <DeleteExistingFiles>True</DeleteExistingFiles>
    </PropertyGroup>
</Project>

A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Web Publish wizard does not allow this change; to make it, we must edit the ‘AzureVM.pubxml’ file for both the website and the WCF service. We will replace the hostname of the server where we will deploy the projects with a variable (i.e. AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone with access to the Solution’s source code cannot obtain credentials that would allow them to compromise your site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>MSDeploy</WebPublishMethod>
        <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish />
        <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
        <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
        <RemoteSitePhysicalPath />
        <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
        <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
        <EnableMSDeployBackup>True</EnableMSDeployBackup>
        <UserName>$(AZURE_VM_USERNAME)</UserName>
        <_SavePWD>False</_SavePWD>
        <_DestinationType>AzureVirtualMachine</_DestinationType>
    </PropertyGroup>
</Project>

The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer be able to deploy using them. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values will no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…

Publish Web Connection Tab – Failed Validation

We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command-line input and PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.

To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands used by AppVeyor to build our Solution and then deploy it to Azure. The first two MSBuild commands build the WCF service and the website. The second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution; I found these commands to be the most succinct. I have split the build and deploy functions so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.

# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."

Below is the output from AppVeyor showing the WCF service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process. At this point, the Solution has already been built and the automated unit tests have completed successfully.

AppVeyor Output from Deployments to Azure

Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.

Final View of IIS Sites Running on Azure VM


Single Page Web Applications, Book Review

A brief review of ‘Single Page Web Applications’, by authors Michael S. Mikowski and Josh C. Powell. Learn to build modern browser-based apps, using the latest full-stack JavaScript technologies.

Recently, I had the opportunity to review the eBook edition of ‘Single Page Web Applications’, by authors Michael S. Mikowski and Josh C. Powell, published by Manning Publications. Most of us involved in software development are acutely aware of the recent explosion of interest in full-stack JavaScript applications, NoSQL databases, HTML5/CSS3, web-sockets, and single-page web applications (SPAs). Mikowski and Powell’s book, Single Page Web Applications, hit the market at a perfect time (released last September), and with just the right mix of timely learning opportunities for the reader.

An interesting twist on many current books in this category is the authors’ lack of heavy reliance on one or more popular JavaScript libraries, such as AngularJS, Ember.js, and Backbone.js. Mikowski and Powell purposefully build a JavaScript-based SPA from the ground up, without simply plugging into a ready-made library or API. Although many readers may be heavily tied to a certain library or API, understanding how to build a SPA from the ground up is invaluable.

The first thing that struck me was the thoroughness of the book’s examples. A question many publishers ask is whether a book has enough ‘real-world examples’. Sadly, the answer is often no. Many books only offer incomplete, academic examples, which are difficult to scale to match the complexity of modern software development. In this case, however, I felt Mikowski and Powell’s book hit a home run with its ‘real-world’ code samples. It is obvious both authors are working professionals, doing development in the ‘real world’. The book’s samples build upon one another throughout the book, effectively expanding the application’s scope and the reader’s knowledge.

The second attribute that stood out to me was the book’s documentation. In fact, that might be one of the very few minor negatives I found with the book: too many comments. The authors go to great lengths to thoroughly comment and document the code samples, in some examples almost obscuring the code itself. Even so, I found the comments both detailed and helpful.

The third attribute that stood out to me was the authors’ focus on testing. Testing the sample applications is highlighted throughout the book. Additionally, Appendix B, ‘Testing a SPA’, has more information on testing complex JavaScript applications than many other books I have read. Testing is often ignored in books and training materials. However, software testing is an integral part of the ‘real-world’ software development life-cycle, and critical to software’s success.

Lastly, I found a lot of value in Appendix A, ‘JavaScript coding standard’. Read this part first! Anyone can follow along with the book, mimicking code samples, without really understanding JavaScript’s core concepts. Without that understanding, it is hard to apply the book’s lessons to your own applications. I felt the JavaScript overview in Appendix A of Mikowski and Powell’s book was one of the best I have read, and I will be referring back to the appendix’s coding style guide in the future.


Windows PowerShell 4.0 for .NET Developers, Book Review

A brief review of ‘Windows PowerShell 4.0 for .NET Developers’, a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.

Windows PowerShell 4.0 for .NET Developers

Introduction

Recently, I had the opportunity to review ‘Windows PowerShell 4.0 for .NET Developers‘, published by Packt Publishing. According to its author, Sherif Talaat, the book is ‘a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.‘ In a large and complex software development organization, technologies such as PowerShell, which enable increased speed and automation, are essential to success. Having used PowerShell on a regular basis as a .NET developer for the past few years, I was excited to see what Sherif’s newest book offered.

Requirements

The book recommends the following minimal software configuration to work through the code samples:

  • Windows Server 2012 R2 (includes PowerShell 4.0 and .NET 4.5)
  • SQL Server 2012
  • Visual Studio 2012/2013
  • Visual Studio Team Foundation Server (TFS) 2012/2013

To test the book’s samples, I provisioned a fresh VM and, using my MSDN subscription, installed the required Windows Server, SQL Server, and Team Foundation Server. I worked directly on the VM, as well as remotely from a Windows 7 Enterprise-based development machine with Visual Studio 2012 installed. The code samples worked fairly well; I found only a few minor problems. As of the time of this review, no errata had been published for the book.

A key aspect many authors do not address is the complexity of using PowerShell in a corporate environment. Working individually or on a small network, developers don’t always experience the added burden of restrictive network security, LDAP, proxy servers, proxy authentication, XML gateways, firewalls, and centralized computer administration. Any code that requires access to remote servers and systems often requires additional work to function within a corporate environment. It can be frustrating to debug and extend simple examples to work successfully within an enterprise setting.

Contents

Windows PowerShell 4.0 for .NET Developers, at 115 pages in length, is divided into five chapters:

  • Chapter 1: Getting Started with Windows PowerShell
  • Chapter 2: Unleashing Your Development Skills with PowerShell
  • Chapter 3: PowerShell for Your Daily Administration Tasks
  • Chapter 4: PowerShell and Web Technologies
  • Chapter 5: PowerShell and Team Foundation Server

Chapter 1 provides a brief introduction to PowerShell. At a scant 30 pages, I would not recommend this book as a way to learn PowerShell for the beginner. For learning PowerShell, I recommend Instant Windows PowerShell, by Vinith Menon, also published by Packt Publishing. Alternatively, I recommend a few books by Manning Publications, including Learn Windows PowerShell in a Month of Lunches, Second Edition.

Chapter 2 discusses PowerShell in relationship to several key Microsoft technologies, including Windows Management Instrumentation (WMI), Common Information Model (CIM), Component Object Model (COM), and Extensible Markup Language (XML). As a .NET developer, it’s almost impossible not to have worked with one, or all, of these technologies. Chapter 2 also discusses how PowerShell works with .NET objects and can extend the .NET Framework. The chapter includes an easy-to-follow example of creating, importing, and calling a PowerShell binary module (a compiled .NET class library), using Visual Studio.
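Once such a class library is compiled, the import-and-call pattern looks roughly like this. This is a sketch with a hypothetical module path and cmdlet name, not the book’s example:

# A sketch; the module path and cmdlet name are hypothetical
Import-Module 'C:\Modules\MyBinaryModule\MyBinaryModule.dll'
Get-Command -Module MyBinaryModule   # list the cmdlets the assembly exports
Invoke-MyCustomCmdlet -Value 42      # hypothetical cmdlet defined in the class library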

Chapter 3 explores areas where .NET developers can start leveraging PowerShell for daily administrative tasks. In particular, I found the sections on PowerShell Remoting and administering IIS and SQL Server particularly useful. Being able to easily connect to remote web, application, and database servers from the command line (or, PowerShell prompt) and do basic system administration is a huge time-saver in an agile development environment.
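As a taste of the remoting the chapter covers, a one-off query of a remote server’s IIS sites might look like this. A sketch, with a hypothetical server name:

# A sketch; the server name is hypothetical
Invoke-Command -ComputerName WEBSRV01 -ScriptBlock {
  Import-Module WebAdministration
  Get-Website | Select-Object Name, State  # list the IIS sites and their state
}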

Chapter 4 focuses on how PowerShell interfaces with SOAP- and REST-based services, web requests, and JSON. Windows Communication Foundation (WCF) based, service-oriented application development has been a trend for the last few years. Being able to manage, test, and monitor SOAP and RESTful services and HTTP requests/responses is important to .NET developers. PowerShell can often be quicker and easier than writing and compiling service utilities in Visual Studio, or using proprietary third-party applications.
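For instance, calling a RESTful service and working with its JSON response can be a one-liner. A sketch, with a hypothetical URI:

# A sketch; the URI is hypothetical
$response = Invoke-RestMethod -Uri 'http://localhost/api/menu' -Method Get
$response | Format-Table   # the JSON response is deserialized to objects automatically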

Chapter 5 is dedicated to Visual Studio Team Foundation Server (TFS), Microsoft’s end-to-end Application Lifecycle Management (ALM) solution. Chapter 5 details the installation and use of the TFS Power Tools and the TFS PowerShell snap-in. Having held the roles of lead developer and Scrum Master, I have personally found some of the best uses for PowerShell in automating various aspects of TFS. Managing TFS often requires repetitive tasks, exactly where PowerShell excels. You will need to explore additional resources beyond the scope of this book to really start automating TFS with PowerShell.

Conclusion

Overall, I enjoyed the book and felt it was well worth the time to explore. I applaud Sherif for targeting a PowerShell book specifically to developers. Due to its short length, the book did leave me wanting more information on a few subjects that were barely skimmed. I also found myself expecting guidance on a few subjects the book did not touch upon, such as PowerShell for cloud-based development (Azure), test automation, and build and deployment automation. For more information on some of those subjects, I recommend Sherif’s other book, also published by Packt Publishing, PowerShell 3.0 Advanced Administration Handbook.


Retrieving and Displaying Data with AngularJS and the MEAN Stack: Part II

Explore various methods of retrieving and displaying data using AngularJS and the MEAN Stack.

Mobile View on Android Smartphone

Introduction

In this two-part post, we are exploring methods of retrieving and displaying data using AngularJS and the MEAN Stack. The post’s corresponding GitHub project, ‘meanstack-data-samples‘, is based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. As a bonus, since both projects are based on ‘generator-angular’, all the code generators work. Generators can save a lot of time and aggravation when building AngularJS components.

In part one of this post, we installed and configured the ‘meanstack-data-samples’ project from GitHub. In part two, we will look at five examples of retrieving and displaying data using AngularJS:

  • Function within AngularJS Controller returns array of strings.
  • AngularJS Service returns an array of simple object literals to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller using a resource object
    (In GitHub project, but not discussed in this post).
  • AngularJS Factory returns a collection of documents from MongoDB Database to the controller.
  • AngularJS Factory returns results from Google’s RESTful Web Search API to the controller.

Project Structure

For brevity, I have tried to limit the number of files in the project. There are two main views, both driven by a single controller. The primary files, specific to data retrieval and display, are as follows:

  • Default site file (./)
    • index.html – loads all CSS and JavaScript files, and views
  • App and Routes (./app/scripts/)
    • app.js - instantiates app and defines routes (route/view/controller relationship)
  • Views (./app/views/)
    • data-bootstrap.html – uses Twitter Bootstrap
    • data-no-bootstrap.html – basically the same page, without Twitter Bootstrap
  • Controllers (./app/scripts/controllers/)
    • DataController.js (DataController) – single controller used by both views
  • Services and Factories (./app/scripts/services/)
    • meanService.js (meanService) – service returns array of object literals to DataController
    • jsonFactory.js (jsonFactory) – factory returns contents of JSON file
    • jsonFactoryResource.js (jsonFactoryResource) – factory returns contents of JSON file using resource object (new)
    • mongoFactory.js (mongoFactory) – factory returns MongoDB collection of documents
    • googleFactory.js (googleFactory) – factory call Google Web Search API
  • Models (./app/models/)
    • Components.js – mongoose constructor for the Component schema definition
  • Routes (./app/)
    • routes.js – mongoose RESTful routes
  • Data (./app/data/)
    • otherStuff.json – static JSON file loaded by jsonFactory
  • Environment Configuration (./config/environments/)
    • index.js – defines all environment configurations
    • test.js – Configuration specific to the current ‘test’ environment
  • Unit Tests (./test/spec/…)
    • Various files – all controller and services/factories unit test files are in here…

Project in JetBrains WebStorm 8.0

There are many more files critical to the project’s functionality, including app.js, Gruntfile.js, bower.json, package.json, server.js, karma.conf.js, and so forth. You should understand each of these files’ purposes.

Function Returns Array

In the first example, we have the yeomanStuff() method, a member of the $scope object, within the DataController. The yeomanStuff() method returns an array object containing three strings. In JavaScript, a method is a function associated with an object.

$scope.yeomanStuff = function () {
  return [
    'yo',
    'Grunt',
    'Bower'
  ];
};

‘yeomanStuff’ Method of the ‘$scope’ Object

The yeomanStuff() method is called from within the view by Angular’s ng-repeat directive. The directive, ng-repeat, allows us to loop through the array of strings and add them to an unordered list. We will use ng-repeat for all the examples in this post.

<ul class="list-group">
  <li class="list-group-item"
	  ng-repeat="stuff in yeomanStuff()">
	{{stuff}}
  </li>
<ul>

Method1

Although this first example is easy to implement, it is somewhat impractical. Generally, you would not embed static data in your code; doing so limits your ability to change the data independently of the application’s code. In addition, the function is tightly coupled to the controller, limiting its reuse.

Service Returns Array

In the second example, we also use data embedded in our code. However, this time we have improved the architecture slightly by moving the data to an Angular Service. The meanService contains the getMeanStuff() function, which returns an array containing four object literals. Using a service, we can call the getMeanStuff() function from anywhere in our project.

angular.module('generatorMeanstackApp')
  .service('meanService', function () {
    this.getMeanStuff = function () {
      return ([
        {
          component: 'MongoDB',
          url: 'http://www.mongodb.org'
        },
        {
          component: 'Express',
          url: 'http://expressjs.com'
        },
        {
          component: 'AngularJS',
          url: 'http://angularjs.org'
        },
        {
          component: 'Node.js',
          url: 'http://nodejs.org'
        }
      ])
    };
  });

Within the DataController, we assign the array object, returned from the meanService.getMeanStuff() function, to the meanStuff object property of the $scope object.

$scope.meanStuff = {};
try {
  $scope.meanStuff = meanService.getMeanStuff();
} catch (error) {
  console.error(error);
}

‘meanStuff’ Property of the ‘$scope’ Object

The meanStuff object property is accessed from within the view, using ng-repeat. Each object in the array contains two properties, component and url. We display the property values on the page using Angular’s double curly brace expression notation (i.e. ‘{{stuff.component}}‘).

<ul class="nav nav-pills nav-stacked">
  <li ng-repeat="stuff in meanStuff">
    url}}"
       target="_blank">{{stuff.component}}
  </li>
<ul>

Method2

Promises, Promises…

The remaining methods implement an asynchronous (non-blocking) programming model, using the $http and $q services of Angular’s ng module. These services implement the asynchronous Promise and Deferred APIs. According to Chris Webb, in his excellent two-part post, Promise & Deferred objects in JavaScript: Theory and Semantics, a promise represents a value that is not yet known, while a deferred represents work that is not yet finished. I strongly recommend reading Chris’ post before continuing. I also highly recommend watching RED Ape EDU’s YouTube video, Deferred and Promise objects in Angular js. This video really clarified the promise and deferred concepts for me.

Factory Loads JSON File

In the third example, we will read data from a JSON file (‘./app/data/otherStuff.json‘) using an AngularJS Factory. The differences between a service and a factory can be confusing, and are beyond the scope of this post. Here are two great links on the differences: one on Angular’s site and one on StackOverflow.

{
  "components": [
    {
      "component": "jQuery",
      "url": "http://jquery.com"
    },
    {
      "component": "Jade",
      "url": "http://jade-lang.com"
    },
    {
      "component": "JSHint",
      "url": "http://www.jshint.com"
    },
    {
      "component": "Karma",
      "url": "http://karma-runner.github.io"
    },
    ...
  ]
}

The jsonFactory contains the getOtherStuff() function. This function uses $http.get() to read the JSON file and returns a promise of the response object. According to Angular’s site, “since the returned value of calling the $http function is a promise, you can also use the then method to register callbacks, and these callbacks will receive a single argument – an object representing the response. A response status code between 200 and 299 is considered a success status and will result in the success callback being called.” As I mentioned, a complete explanation of deferreds and promises is too complex for this short post.

angular.module('generatorMeanstackApp')
  .factory('jsonFactory', function ($q, $http) {
    return {
      getOtherStuff: function () {
        var deferred = $q.defer(),
          httpPromise = $http.get('data/otherStuff.json');

        httpPromise.then(function (response) {
          deferred.resolve(response);
        }, function (error) {
          console.error(error);
        });

        return deferred.promise;
      }
    };
  });

The response object contains a data property. Angular defines the response object’s data property as a string or object, containing the response body transformed with the transform functions. One of the properties of the data property is the components array, containing seven objects. Within the DataController, if the promise is resolved successfully, the callback function assigns the contents of the components array to the otherStuff property of the $scope object.

$scope.otherStuff = {};
jsonFactory.getOtherStuff()
  .then(function (response) {
    $scope.otherStuff = response.data.components;
  }, function (error) {
    console.error(error);
  });

‘otherStuff’ Property of the ‘$scope’ Object

The otherStuff property is accessed from the view, using ng-repeat, which displays individual values, exactly like the previous methods.

<ul class="nav nav-pills nav-stacked">
  <li ng-repeat="stuff in otherStuff">
    <a href="{{stuff.url}}"
       target="_blank">{{stuff.component}}</a>
  </li>
</ul>

Method3

This method of reading a JSON file is often used for configuration files. Static configuration data is stored in a JSON file, external to the actual code. This way, the configuration can be modified without requiring the main code to be recompiled and deployed. It is a technique used by the components within this very project. Take, for example, the bower.json and package.json files. Both contain configuration data, stored as JSON, used by Bower and npm to perform package management.

Factory Retrieves Data from MongoDB

In the fourth example, we will read data from a MongoDB database. There are a few more moving parts in this example than in the previous ones. Below are the documents in the components collection of the meanstack-test MongoDB database, which we will retrieve and display with this method. The meanstack-test database is defined in the test.js environment file (discussed in part one).

‘meanstack-test’ Database’s ‘components’ Collection Documents

To connect to MongoDB, we will use Mongoose. According to their website, “Mongoose provides a straight-forward, schema-based solution to modeling your application data and includes built-in type casting, validation, query building, business logic hooks and more, out of the box.” But wait, isn’t MongoDB schemaless? It is. However, Mongoose provides a schema-based API for us to work within. Again, according to Mongoose’s website, “Everything in Mongoose starts with a Schema. Each schema maps to a MongoDB collection and defines the shape of the documents within that collection.”

In our example, we create the componentSchema schema, and pass it to the Component model (the ‘M’ in MVC). The componentSchema maps to the database’s components collection.

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var componentSchema = new Schema({
  component: String,
  url: String
});

module.exports = mongoose.model('Component', componentSchema);

The routes.js file associates routes (Request URIs) and HTTP methods with Mongoose actions. These actions are usually CRUD operations. In our simple example, we have a single route, ‘/api/components‘, associated with an HTTP GET method. When an HTTP GET request is made to the ‘/api/components‘ request URI, Mongoose calls the Model.find() function, ‘Component.find()‘, with a callback function parameter. The Component.find() function returns all documents in the components collection.

var Component = require('./models/component');

module.exports = function (app) {
  app.get('/api/components', function (req, res) {
    Component.find(function (err, components) {
      if (err)
        return res.send(err); // return to avoid also sending the JSON response below

      res.json(components);
    });
  });
};

You can test these routes directly. Below are the results of calling the ‘/api/components‘ route in Chrome.

Response from MongoDB Using Mongoose

The mongoFactory contains the getMongoStuff() function. This function uses $http.get() to call the ‘/api/components‘ route. The route is resolved by the routes.js file, which in turn executes the Component.find() command. A promise of an array of objects is returned by the getMongoStuff() function. Each object represents a document in the components collection.

angular.module('generatorMeanstackApp')
  .factory('mongoFactory', function ($q, $http) {
    return {
      getMongoStuff: function () {
        var deferred = $q.defer(),
          httpPromise = $http.get('/api/components');

        httpPromise.success(function (components) {
          deferred.resolve(components);
        })
          .error(function (error) {
            console.error('Error: ' + error);
          });

        return deferred.promise;
      }
    };
  });

Within the DataController, if the promise is resolved successfully, the callback function assigns the array of objects, representing the documents in the collection, to the mongoStuff property of the $scope object.

$scope.mongoStuff = {};
mongoFactory.getMongoStuff()
  .then(function (components) {
    $scope.mongoStuff = components;
  }, function (error) {
    console.error(error);
  });

‘mongoStuff’ Property of the ‘$scope’ Object

The mongoStuff property is accessed from the view, using ng-repeat, which displays individual values using Angular expressions, exactly like the previous methods.

<ul class="list-group">
  <li class="list-group-item" ng-repeat="stuff in mongoStuff">
    <b>{{stuff.component}}</b>
    <div class="text-muted">{{stuff.description}}</div>
  </li>
</ul>

Method4

Factory Calls Google Search

In the last example, we will call the Google Web Search API from an AngularJS Factory. The Google Web Search API exposes a simple RESTful interface. According to Google, “in all cases, the method supported is GET and the response format is a JSON encoded result set with embedded status codes.” Google describes this method of RESTful access to the API as being “for Flash developers, and those developers that have a need to access the Web Search API from other Non-JavaScript environment.” However, we will access it from our JavaScript-based MEAN stack application, due to the API’s ease of implementation.

Note that according to Google’s site, “the Google Web Search API has been officially deprecated…it will continue to work…but the number of requests…will be limited. Therefore, we encourage you to move to Custom Search, which provides an alternative solution.” Google Search, or more specifically the Custom Search JSON/Atom API, is a newer API, but the Web Search API is easier to demonstrate in this brief post than the Custom Search JSON/Atom API, which requires the use of an API key.

The googleFactory contains the getSearchResults() function. This function uses $http.jsonp() to call the Google Web Search API’s RESTful interface and return the promise of the JSONP-formatted (‘JSON with padding’) response. JSONP provides cross-domain access to a JSON payload by wrapping the payload in a JavaScript function call (callback).

angular.module('generatorMeanstackApp')
  .factory('googleFactory', function ($q, $http) {
    return {
      getSearchResults: function () {
        var deferred = $q.defer(),
          host = 'https://ajax.googleapis.com/ajax/services/search/web',
          args = {
            'version': '1.0',
            'searchTerm': 'mean%20stack',
            'results': '8',
            'callback': 'JSON_CALLBACK'
          },
          params = ('?v=' + args.version + '&q=' + args.searchTerm + '&rsz=' +
            args.results + '&callback=' + args.callback),
          httpPromise = $http.jsonp(host + params);

        httpPromise.then(function (response) {
          deferred.resolve(response);
        }, function (error) {
          console.error(error);
        });

        return deferred.promise;
      }
    };
  });

The getSearchResults() function uses the HTTP GET method to make an HTTP request to the following RESTful URI:
https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=mean%20stack&rsz=8&callback=angular.callbacks._0

Using Google Chrome’s Developer tools, we can preview the Google Web Search JSONP-format HTTP response (abridged). Note the callback function that wraps the JSON payload.
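In text form, the JSONP response body is simply the JSON payload wrapped in the callback that Angular registered. A simplified sketch, with the payload abridged:

// A simplified sketch; Angular replaces JSON_CALLBACK with a generated
// callback name, such as angular.callbacks._0
angular.callbacks._0({
  "responseData": { "results": [ /* ...eight search results... */ ] }
});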

Google Web Search Results in Chrome Browser

Within the DataController, if the promise is resolved successfully, our callback function receives the response object. The response object contains a lot of information. We are able to limit the amount of information sent to the view by assigning only the actual search results, an array of eight objects contained in the response object, to the googleStuff property of the $scope object.

$scope.googleStuff = {};
googleFactory.getSearchResults()
  .then(function (response) {
    $scope.googleStuff = response.data.responseData.results;
  }, function (error) {
    console.error(error);
  });

Below is the full response returned by the googleFactory. Note the path to the data we are interested in: ‘response.data.responseData.results‘.

Google Search Response Object

Below are the filtered results assigned to the googleStuff property:

‘googleStuff’ Property of the ‘$scope’ Object

The googleStuff property is accessed from the view, using ng-repeat, which displays individual values using Angular expressions, exactly like the previous methods.

<ul class="list-group">
  <li class="list-group-item"
      ng-repeat="stuff in googleStuff">
    <a href="{{unescapedUrl.url}}"
       target="_blank"><b>{{stuff.visibleUrl}}</b></a>

    <div class="text-muted">{{stuff.titleNoFormatting}}</div>
  </li>
</ul>

Method5


Retrieving and Displaying Data with AngularJS and the MEAN Stack: Part I

Explore various methods of retrieving and displaying data using AngularJS and the MEAN Stack.

Mobile View of Application on Android Smartphone

Introduction

In the following two-part post, we will explore several methods of retrieving and displaying data using AngularJS and the MEAN Stack. The post’s corresponding GitHub project, ‘meanstack-data-samples‘, is based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. As a bonus, since both projects are based on ‘generator-angular’, all the code generators work. Generators can save a lot of time and aggravation when building AngularJS components.

In part one of this post, we will install and configure the ‘meanstack-data-samples’ project from GitHub, which corresponds to this post. In part two, we will look at several methods for retrieving and displaying data using AngularJS:

  • Function within AngularJS Controller returns array of strings.
  • AngularJS Service returns an array of simple object literals to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller using a resource object
    (In GitHub project, but not discussed in this post).
  • AngularJS Factory returns a collection of documents from MongoDB Database to the controller.
  • AngularJS Factory returns results from Google’s RESTful Web Search API to the controller.

Preparation

If you need help setting up your development machine to work with the MEAN stack, refer to my last post, Installing and Configuring the MEAN Stack, Yeoman, and Associated Tooling on Windows. You will need to install all the MEAN and Yeoman components.

For this post, I am using JetBrains’ new WebStorm 8RC to build and demonstrate the project. There are several good IDEs for building modern web applications; WebStorm is one of the current favorites of developers.

Complexity of Modern Web Applications

Building modern web applications using the MEAN stack or comparable technologies is complex. The ‘meanstack-data-samples’ project, and the projects it is based on, ‘generator-meanstack’ and ‘generator-angular’, have dozens of moving parts. In this simple project, we have MongoDB, ExpressJS, AngularJS, Node.js, yo, Grunt, Bower, Git, jQuery, Twitter Bootstrap, Karma, JSHint, Mongoose, and hundreds of other components, all working together. There are almost fifty Node packages and hundreds of their dependencies loaded by npm, in addition to another dozen loaded by Bower.

Installing, configuring, and managing all the parts of a modern web application requires a basic working knowledge of these technologies. Understanding how Bower and npm install and manage packages, how Grunt builds, tests, and serves the application with ExpressJS, how Yo scaffolds applications, how Karma and Jasmine run unit tests, or how Mongoose and MongoDB work together, are all essential. This brief post will primarily focus on retrieving and displaying data, not necessarily how the components all work, or work together.

Installing and Configuring the Project

Environment Variables

To start, we need to create three environment variables. The NODE_ENV environment variable is used to determine the environment our application is operating within; it determines which configuration file in the project is read by the application when it starts. The configuration files contain variables specific to that environment. There are four configuration files included in the project: ‘development’, ‘test’, ‘production’, and ‘travis’ (travis-ci.org). The NODE_ENV variable is referenced extensively throughout the project. If the NODE_ENV variable is not set, the application will default to ‘development‘.

For this post, set the NODE_ENV variable to ‘test’. The value ‘test’ corresponds to the ‘test’ configuration file (‘meanstack-data-samples\config\environments\test.js‘), shown below.

// set up =====================================
var express          = require('express');
var bodyParser       = require('body-parser');
var errorHandler     = require('errorhandler');
var favicon          = require('serve-favicon');
var logger           = require('morgan');
var cookieParser     = require('cookie-parser');
var methodOverride   = require('method-override');
var session          = require('express-session');
var path             = require('path');
var env              = process.env.NODE_ENV || 'development'; // defaults to 'development'

module.exports = function (app) {
    if ('test' === env) {
        console.log('environment = test');
        app.use(function staticsPlaceholder(req, res, next) {
            return next();
        });
        // test database and default port
        app.set('db', 'mongodb://localhost/meanstack-test');
        app.set('port', process.env.PORT || 3000);
        // serve plain HTML views from the '/app' directory, rendered by EJS
        app.set('views', path.join(app.directory, '/app'));
        app.engine('html', require('ejs').renderFile);
        app.set('view engine', 'html');
        app.use(favicon('./app/favicon.ico'));
        app.use(logger('dev'));
        app.use(bodyParser());
        app.use(methodOverride());
        app.use(cookieParser('your secret here'));
        app.use(session());

        app.use(function middlewarePlaceholder(req, res, next) {
            return next();
        });

        app.use(errorHandler());
    }
};

The second environment variable is PORT. The application starts on the port indicated by the PORT variable, for example, 'localhost:3000'. If the PORT variable is not set, the application defaults to port 3000, as specified in each of the environment configuration files and in the 'Gruntfile.js' Grunt configuration file.

Lastly, the CHROME_BIN environment variable is used by Karma, the JavaScript test runner, to determine the correct path to the browser's binary file. This variable is discussed in detail on Karma's site. In my case, the value for CHROME_BIN is 'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'. This variable is only necessary if you will be configuring Karma to use Chrome to run the tests. Karma can be configured to use any browser, including PhantomJS. See the discussion at the end of this post regarding browser choice for Karma.

You can easily set all the environment variables on Windows from a command prompt, with the following commands. Remember to exit and re-open your interactive shell or command prompt window after adding the variables so they can be used.
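A sketch of those commands, using the built-in 'setx' utility (the CHROME_BIN path assumes the Chrome location shown above; adjust it to your machine):

setx NODE_ENV "test"
setx PORT "3000"
setx CHROME_BIN "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"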

Install and Configure the Project

To install and configure the project, we start by cloning the 'meanstack-data-samples' project from GitHub. We then use npm and Bower to install the project's dependencies. Once installed, we create and populate the Mongo database. We then use Grunt and Karma to unit test the project. Finally, we use Grunt to start the Express server and run the application. This is all accomplished with only a few individual commands. Please note, the 'npm install' command could take several minutes to complete, depending on your network speed; the project has many direct and indirect Node.js dependencies.
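A sketch of that sequence (the GitHub account name is a placeholder, and the database-seeding step is project-specific, so it is omitted here):

git clone https://github.com/[your-account]/meanstack-data-samples.git
cd meanstack-data-samples
npm install
bower install
grunt test
grunt server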

If everything was installed correctly, running the ‘grunt test’ command should result in output similar to below:

Results of Running 'grunt test' with Chrome

If everything was installed correctly, running the ‘grunt server’ command should result in output similar to below:

Results of Running 'grunt server' to Start Application

Running the ‘grunt server’ command should start the application and open your browser to the default view, as shown below:

Displaying the Application's Google Search Results on Desktop Browser

Karma’s Browser Choice for Unit Tests

The GitHub project is currently configured to use Chrome for running Karma’s unit tests in the ‘development’ and ‘test’ environments. For the ‘travis’ environment, it uses PhantomJS. If you do not have Chrome installed on your machine, the ‘grunt test’ task will fail during the ‘karma:unit’ task portion. To change Karma’s browser preference, simply change the ‘testBrowser’ variable in the ‘./karma.conf.js’ file, as shown below.
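A sketch of the relevant portion of 'karma.conf.js' (the 'testBrowser' variable is the project's own; the surrounding structure is standard Karma configuration, and the other settings are elided):

// karma.conf.js (excerpt)
// change to 'PhantomJS' to run the unit tests headlessly
var testBrowser = 'Chrome';

module.exports = function (config) {
    config.set({
        browsers: [testBrowser]
        // other settings: frameworks, files, reporters, etc.
    });
};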

I recommend installing and using the PhantomJS headless WebKit browser, locally. Since PhantomJS is headless, Karma runs the unit tests without having to open and close browser windows. To run this project on continuous integration servers, like Jenkins or Travis-CI, you must use PhantomJS. If you decide to use PhantomJS on Windows, don't forget to add the PhantomJS executable's directory path to your 'PATH' environment variable after downloading and installing the application.

Code Generator

As I mentioned at the start of this post, this project was based on William Lepinski's 'generator-meanstack', which in turn is based on Yeoman's 'generator-angular'. Optionally, to install the 'generator-meanstack' npm package globally on our system, use the following command. The 'generator-meanstack' code generator will allow us to generate additional AngularJS components automatically within the project, if we choose. The 'generator-meanstack' is not required for this post.

npm install -g generator-meanstack

 

Part II

In part two of this post, we will explore each of the methods for retrieving and displaying data using AngularJS, in detail.


Installing and Configuring the MEAN Stack, Yeoman, and Associated Tooling on Windows

Configure your Windows environment for developing modern web applications using the popular MEAN Stack and Yeoman suite of utilities.

MEAN Stack Using the MEAN.io Project Running on Windows

Introduction

It's an exciting time to be involved in web development. There are dozens of popular open-source JavaScript frameworks, libraries, code-generators, and associated tools, exploding onto the development scene. It is now possible to use a variety of popular technology mashups to provide a complete, full-stack JavaScript application platform.

MEAN Stack

One of the JavaScript mashups gaining a lot of traction recently is the MEAN Stack. If you’re reading this post, you probably already know MEAN is an acronym for four leading technologies: MongoDB, ExpressJS, AngularJS, and Node.js. The MEAN stack provides an end-to-end JavaScript application solution. MEAN provides a NoSQL document database (Mongo), a server-side solution for JavaScript (Node/Express), and a magic client-side MV* framework (Angular).

Depending on which MEAN stack generator or pre-built project you start with, in addition to the main four technologies, you will pull down several other smaller libraries and frameworks. These most commonly include jQuery, Twitter Bootstrap, Karma (test runner), Jade (template engine), JSHint, Underscore.js (utility-belt library), Mongoose (MongoDB object modeling tool), Passport (authentication), RequireJS (file and module loader), BreezeJS (data entity management), and so forth.

Common Tooling

If you are involved in these modern web development trends, then you are aware there is also a fairly common set of tools used by a majority of these developers, including source control, IDE, OS, and other helper-utilities. For SCM/VCS, Git is the clear winner. For an IDE, WebStorm, Sublime Text, and Kompozer are heavy favorites. The platform of choice most often appears to be either Mac or Linux. It's far less common to see demonstrations of these technologies, or tutorials, built on the Microsoft Windows platform.

Yeoman

Another area of commonality is helper-utilities, used to make the development, building, dependency management, and deployment of modern JavaScript applications easier. Two popular ones are Brunch and Yeoman. Yeoman is also an acronym for a set of popular tools: yo, Grunt, and Bower. The first, yo, is best described as a scaffolding tool. Grunt is the build tool. Bower is a tool for client-side package and dependency management. We will install Yeoman, along with the MEAN Stack, in this post.

Windows

There is no reason Windows cannot serve as your development and hosting platform for modern web development, without specifically using Microsoft’s .NET stack. In fact, with minimal set-up, you would barely know you were using Windows as opposed to Linux or Mac. In this post, I will demonstrate how to configure your Windows machine for developing these modern web applications using the MEAN Stack and Yeoman.

Here is a list of the components we will discuss:

  • Git
  • Ruby and RubyGems
  • MongoDB
  • Node.js and npm
  • Express
  • Yeoman (yo, Grunt, and Bower)

Installations

Git

The use of Git for source control is obvious. Git is the overwhelming choice of modern developers. Git has been integrated into most major IDEs and hosting platforms. There are hooks into Git available for most leading development tools. However, there are more benefits to using Git than just SCM. Being a Linux/Mac user, I prefer to use a Unix-like shell on Windows, versus the native Windows Command Prompt. For this reason, I use Git for Windows, available from msysGit. This package includes Git SCM, Git Bash, and Git GUI. I use the Git Bash interactive shell almost exclusively for my daily interactions requiring a command prompt, and will be using it throughout this post. OpenHatch has a great post and training materials available on using Git Bash with Windows.

Using Git Bash on Windows for a Unix-like Experience

Git for Windows provides a downloadable Windows executable file for installation. Follow the installation file’s instructions.

Git Installation Process

To test your installation of Git for Windows, call the Git binary with the '--version' flag. This flag can be used to test all the components we are installing in this post. If the command returns a value, then it's a good indication that the component is installed properly and can be called from the command prompt:

gstafford: ~/Documents/git_repos $ git --version
git version 1.9.0.msysgit.0

You can also verify Git using the ‘where’ and ‘which’ commands. The ‘where’ command will display the location of files that match the search pattern and are in the paths specified by the PATH environment variable. The ‘which’ command tells you which file gets executed when you run a command. These commands will work for most components we will install:

gstafford: ~/Documents/git_repos $ where git
C:\Program Files (x86)\Git\bin\git.exe
C:\Program Files (x86)\Git\cmd\git.cmd
C:\Program Files (x86)\Git\cmd\git.exe

gstafford: ~/Documents/git_repos $ which git
/bin/git

Ruby

The reasons for Git are obvious, but why Ruby? Yeoman, specifically yo, requires Ruby. Installing Ruby on Windows is easy; Ruby recommends using RubyInstaller for Windows, which provides a downloadable executable installation file. I am using Ruby 1.9.3. I had previously installed the latest 2.0.0, but had to roll back after some 64-bit compatibility issues with other applications.

Ruby Installation Process

To test the Ruby installation, use the '--version' flag again:

gstafford: ~/Documents/git_repos $ ruby --version
ruby 1.9.3p484 (2013-11-22) [i386-mingw32]

RubyGems

Optionally, you might also want to install RubyGems. RubyGems allows you to add functionality to the Ruby platform, in the form of 'Gems'. A common Gem used with the MEAN stack is Compass, the Sass-based framework for the creation and maintenance of CSS stylesheets. According to the RubyGems website, Ruby 1.9 and newer ships with RubyGems built-in, but you may need to upgrade for bug fixes or new features.

On Windows, installation of RubyGems is as simple as downloading the .zip file from the RubyGems download site. To install RubyGems, unzip the downloaded file. From the root of the unzipped directory, run the following Ruby command:

ruby setup.rb

To confirm your installation:

gstafford: ~/Documents/git_repos $ gem --version
2.2.2

If you already have RubyGems installed, it's recommended you update RubyGems before continuing. The first command, with the '--system' flag, updates RubyGems itself to the latest version. The second command, without the flag, updates each of your individually installed Ruby Gems:

gem update --system
gem update
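With RubyGems current, adding a Gem like Compass, mentioned above, is a single command:

gem install compass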

MongoDB

MongoDB provides a great set of installation and configuration instructions for Windows users. To install MongoDB, download the MongoDB package. Create a 'mongodb' folder; Mongo recommends placing it at the root of your system drive. Unzip the MongoDB package to the 'c:\mongodb' folder. That's really it; there is no installer file.

Next, make the default data directory location, using the following two commands:

mkdir c://data && mkdir c://data/db

Unlike most other components, to call Mongo from the command prompt, I had to manually add the path to the Mongo binaries to my PATH environment variable. You can access your Windows environment variables using the Windows and Pause keys. Add the path 'c:\mongodb\bin' to the end of the PATH environment variable value.
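Alternatively, if you prefer to stay in the shell, you could append the directory to the path from Git Bash for the current session only (standard Bash syntax; the change does not persist after the window is closed):

export PATH=$PATH:/c/mongodb/bin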

Adding Mongo to the PATH Environment Variable

To test the MongoDB installation, and that the PATH variable is set correctly, close any current interactive shells or command prompt windows. Open a new shell and use the same '--version' flag for Mongo's three core components:

gstafford: ~/Documents/git_repos
$ mongo --version; mongod --version; mongos --version
MongoDB shell version: 2.4.9

db version v2.4.9
Sun Mar 09 16:26:48.730 git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c

MongoS version 2.4.9 starting: pid=15436 port=27017 64-bit 
host=localhost (--help for usage)
git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c
build sys info: windows sys.getwindowsversion(major=6, minor=1, build=7601, 
platform=2, service_pack='Service Pack 1') 
BOOST_LIB_VERSION=1_49

To start MongoDB, use the 'mongod' or 'start mongod' commands. Adding 'start' opens a new command prompt window, versus tying up your current shell. If you are not using the default MongoDB data directory ('c://data/db') you created in the previous step, use the '--dbpath' flag, for example, 'start mongod --dbpath c://alternate/path'.

MongoDB Running on Windows

MongoDB Default Data Directory Containing New MEAN Database

Node.js

To install Node.js, download and run the Node’s .msi installer for Windows. Along with Node.js, you will get npm (Node Package Manager). You will use npm to install all your server-side components, such as Express, yo, Grunt, and Bower.

Node Installation Process

gstafford: ~/Documents/git_repos 
$ node --version && npm --version
v0.10.26
1.4.3

Express

To install Express, the web application framework for node, use npm:

npm install -g express

The '-g' flag (or, '--global' flag) should be used. According to Stack Overflow, 'if you're installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable':

gstafford: ~/Documents/git_repos $ express --version
3.5.0

Yeoman – yo, Grunt, and Bower

You will also use npm to install yo, Grunt, and Bower. Actually, we will use npm to install the Grunt Command Line Interface (CLI); the Grunt task runner itself will be installed locally in your MEAN stack project, later. Read these instructions on the Grunt website for a better explanation. Use the same basic command as with Express:

npm install -g yo grunt-cli bower

To test the installs, run the same command as before:

gstafford: ~/Documents/git_repos 
$ yo --version; grunt --version; bower --version
1.1.2
grunt-cli v0.1.13
1.2.8

If you already had Yeoman installed, confirm you have the latest versions with the ‘npm update’ command:

gstafford: ~/Documents/git_repos/gen-angular-sample 
$ npm update -g yo grunt-cli bower
npm http GET https://registry.npmjs.org/grunt-cli
npm http GET https://registry.npmjs.org/bower
npm http GET https://registry.npmjs.org/yo
npm http 304 https://registry.npmjs.org/bower
npm http 304 https://registry.npmjs.org/yo
npm http 200 https://registry.npmjs.org/grunt-cli

All of the npm installs, including Express, are installed and called from a common location on Windows:

gstafford: ~/Documents/git_repos/gen-angular-sample 
$ where express yo grunt bower
c:\Users\gstaffor\AppData\Roaming\npm\yo
c:\Users\gstaffor\AppData\Roaming\npm\yo.cmd
c:\Users\gstaffor\AppData\Roaming\npm\grunt
c:\Users\gstaffor\AppData\Roaming\npm\grunt.cmd
c:\Users\gstaffor\AppData\Roaming\npm\bower
c:\Users\gstaffor\AppData\Roaming\npm\bower.cmd

gstafford: ~/Documents/git_repos 
$ which express; which yo; which grunt; which bower
~/AppData/Roaming/npm/express
~/AppData/Roaming/npm/yo
~/AppData/Roaming/npm/grunt
~/AppData/Roaming/npm/bower

Use the command 'npm list --global | less' (or, 'npm ls -g | less') to view all npm packages installed globally, in a tree-view. After you have generated your project (see below), check the project-specific server-side packages with the 'npm ls' command from within the project's root directory. For the client-side packages, use the 'bower ls' command from within the project's root directory.

If you're in a hurry, or have more Windows boxes to configure, you can use one npm command to install all four components, above:

npm install -g express yo grunt-cli bower

MEAN Boilerplate Generators and Projects

That's it; you've installed most of the core components you need to get started with the MEAN stack on Windows. Next, you will want to download one of the many MEAN boilerplate projects, or use a MEAN code generator with npm and yo. I recommend trying one or all of the following projects. They are each slightly different architecturally, but fairly stable:

  • The MEAN.io boilerplate project
  • William Lepinski's 'generator-meanstack' Yeoman generator
  • James Cryer's 'generator-mean' Yeoman generator

Yeoman Running MEAN Generator

MEAN Stack Using James Cryer's 'generator-mean'


Create Multi-VM Environments Using Vagrant, Chef, and JSON

Create and manage ‘multi-machine’ environments with Vagrant, using JSON configuration files. Allow increased portability across hosts, environments, and organizations. 

Diagram of VM Architecture

Introduction

As their website says, Vagrant has made it very easy to ‘create and configure lightweight, reproducible, and portable development environments.’ Based on Ruby, the elegantly simple open-source programming language, Vagrant requires a minimal learning curve to get up and running.

In this post, we will create what Vagrant refers to as a ‘multi-machine’ environment. We will provision three virtual machines (VMs). The VMs will mirror a typical three-tier architected environment, with separate web, application, and database servers.

We will move all the VM-specific information from the Vagrantfile to a separate JSON-format configuration file. There are a few advantages to moving the configuration information to a separate file. First, we can configure any number of VMs, while keeping the Vagrantfile exactly the same. Secondly, and more importantly, we can re-use the same Vagrantfile to build different VMs on another host machine.

Although certainly not required, I am also using Chef in this example. More specifically, I am using Hosted Chef to further configure the VMs. Like the VM-specific information above, I have also moved the Chef-specific information to a separate JSON configuration file. We can now use the same Vagrantfile within another Chef Environment, or even within another Chef Organization, simply by using alternate configuration files. If you are not a Chef user, you can disregard that part of the configuration code. Alternately, you can replace the Chef configuration code with Puppet, if that is your configuration automation tool of choice.

The only items we will not remove from the Vagrantfile are the Vagrant Box and synced folder configurations. These items could also be moved to a separate configuration file, making the Vagrantfile even more generic and portable.

The Code

Below is the VM-specific JSON configuration file, containing all the individual configuration information necessary for Vagrant to build the three VMs: 'apps', 'dbs', and 'web'. Each child 'node' in the parent 'nodes' object contains key/value pairs for VM names, IP addresses, forwarding ports, host names, and memory settings. To add another VM, you would simply add another 'node' object.
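A minimal sketch of what such a 'nodes.json' file might contain; all the names, addresses, and ports below are hypothetical placeholders, not the post's actual values:

{
  "nodes": {
    "apps": {
      "vm_name": "apps",
      "hostname": "apps.example.com",
      "ip": "192.168.33.10",
      "memory": 1024,
      "ports": [
        { "guest": 8080, "host": 8080 }
      ]
    },
    "dbs": {
      "vm_name": "dbs",
      "hostname": "dbs.example.com",
      "ip": "192.168.33.11",
      "memory": 1024,
      "ports": [
        { "guest": 27017, "host": 27017 }
      ]
    },
    "web": {
      "vm_name": "web",
      "hostname": "web.example.com",
      "ip": "192.168.33.12",
      "memory": 512,
      "ports": [
        { "guest": 80, "host": 8000 }
      ]
    }
  }
}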

Next is the Chef-specific JSON configuration file, containing Chef configuration information common to all the VMs.
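Again, only a hypothetical sketch of the shape such a 'chef.json' file might take; the server URL, validator name, environment, and run list are placeholders for values from your own Hosted Chef account:

{
  "chef": {
    "chef_server_url": "https://api.opscode.com/organizations/your-org",
    "validation_client_name": "your-org-validator",
    "environment": "your-chef-environment",
    "run_list": [ "role[base]" ]
  }
}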

Lastly, we have the Vagrantfile, which loads both configuration files. The Vagrantfile instructs Vagrant to loop through all nodes in the nodes.json file, provisioning VMs for each node. Vagrant then uses the chef.json file to further configure the VMs.
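A minimal sketch of such a Vagrantfile, assuming the hypothetical 'nodes.json' structure above; the base box name is a placeholder, and the Chef provisioner block is omitted for brevity:

# -*- mode: ruby -*-
require 'json'

# load the VM-specific configuration
nodes = JSON.parse(File.read(File.join(File.dirname(__FILE__), 'nodes.json')))['nodes']

Vagrant.configure('2') do |config|
  config.vm.box = 'precise64' # hypothetical base box

  # provision one VM per node object
  nodes.each do |name, node|
    config.vm.define name do |machine|
      machine.vm.hostname = node['hostname']
      machine.vm.network :private_network, ip: node['ip']

      # forward each port configuration object found within the node
      node['ports'].each do |port|
        machine.vm.network :forwarded_port,
                           guest: port['guest'],
                           host:  port['host']
      end

      machine.vm.provider :virtualbox do |vb|
        vb.name   = node['vm_name']
        vb.memory = node['memory'] # older Vagrant versions use vb.customize instead
      end
    end
  end
end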

The environment and node configuration items in the chef.json reference an actual Chef Environment and Chef Nodes. They are both part of a Chef Organization, which is configured within a Hosted Chef account.

Each VM has a varying number of ports it needs to configure and forward. To accomplish this, the Vagrantfile not only loops through each node, it also loops through each port configuration object it finds within the node object. Shown below is the Database Server VM within VirtualBox, containing three forwarding ports.

VirtualBox Port Forwarding Rules

In addition to the configuration files and Vagrantfile described above, this repository on GitHub contains a complete copy of all the code used in the post.

The Results

Running the ‘vagrant up’ command will provision all three individually configured VMs. Once created and running in VirtualBox, Chef further configures the VMs with the necessary settings and applications specific to each server’s purposes. You can just as easily create 10, 100, or 1,000 VMs using this same process.
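Since this is a multi-machine environment, Vagrant also lets you target the VMs individually by name; these are all standard Vagrant commands:

vagrant up           # provision all three VMs
vagrant up web       # provision only the 'web' VM
vagrant ssh dbs      # open a shell on the 'dbs' VM
vagrant destroy -f   # tear down all the VMs without confirmation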

VirtualBox View of Multiple Virtual Machines

Virtual Media Manager View of VMs

Helpful Links

  • Dustin Collins’ ‘Multi-VM Vagrant the DRY way’ Blog Post (link)
  • Red Badger’s ‘Automating your Infrastructure with Vagrant & Chef – From Development to the Cloud’ Blog Post (link)
  • David Lutz’s Multi-Machine Vagrantfile GitHub Gist (link)
  • Kevin Jackson’s Multi-Machine Vagrantfile GitHub Gist (link)

