How to get vNext alpha 4 working on Ubuntu

Just a quick note…

First, follow these instructions, minus the last bit about linking libuv:

Then follow this post:


Getting NPM to prompt for username and password (POC/Hack)

This is an experimental POC hack that seems to be working. I don’t know if I will actually use this anywhere, but I wanted to try it. We use a corporate NTLM proxy at my current place of work and storing plaintext passwords is a non-starter. I am using “cntlm”, which I think is great, but might be hard for some groups in “enterprisey” environments to adopt.

If you want to completely undo this, you’ll probably have to reinstall node.js, which on Windows might require a reboot.

1. Install sync-prompt in the npm-registry-client folder

As we’ll be tinkering with npm, let’s get this out of the way first. “sync-prompt” is a way to synchronously get input at the command prompt from the user. We need this to ask for the user’s credentials, since we won’t be storing them.

Go to your nodejs folder and find the npm\node_modules\npm-registry-client folder. Run this command:

npm install sync-prompt

2. Modify npm-registry-client to include our “npmfix” module

The “index.js” for npm-registry-client that npm uses will be in your nodejs install folder under \npm\node_modules\npm-registry-client\index.js.

Add this at the beginning:

var ntlmFix = require('./npmfix.js');

Then at line ~55 or so (after the line “var alwaysAuth = this.conf.get('always-auth')”) add this line:

this.conf.get = ntlmFix.fix(this.conf.get);

3. Create the npmfix.js in the same folder as the index.js for the npm-registry-client

We need the script below with one change. Change the “proxyHostPort” variable to be the appropriate proxy hostname (ie: myservername) followed by a colon (:), followed by the appropriate port number (ie: 80). If this was made “production ready”, one change would be to use the actual proxy from NPM and add the credentials to it. 

Here’s what we need there:

var prompt = require('sync-prompt').prompt;
var proxyHostPort = "&lt;proxy host&gt;:&lt;proxy port&gt;";
var oldFunc;
var username;
var password;
var credentials;
var hidden = true;

function newFunc(arg) {
    if (arg === 'proxy') {
        if (!username) {
            username = prompt('Username: ');
            password = prompt('Password: ', hidden);
            credentials = "http://" + encodeURIComponent(username) + ":" + encodeURIComponent(password) + "@" + proxyHostPort + "/";
        }
        console.log('Overriding proxy ...');
        return credentials;
    } else {
        return oldFunc.apply(this, arguments);
    }
}

function fix(getFunc) {
    oldFunc = getFunc;
    return newFunc;
}

module.exports = {
    fix: fix
};
The objective of this code is to wrap the config “get” calls and if “proxy” is asked for, ask for credentials if it’s the first time this session, otherwise use the credentials currently in memory.
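
As an aside, if this were taken toward the “production ready” idea mentioned in step 3 (reusing npm’s actual configured proxy instead of a hard-coded host:port), the credential splicing might look something like the sketch below. The host, username and password are made-up examples:

```javascript
// Hypothetical sketch: take the proxy URL npm already has configured
// and splice the prompted credentials into it, rather than hard-coding
// proxyHostPort. Host and credentials below are made-up examples.
function addCredentials(proxyUrl, username, password) {
  var auth = encodeURIComponent(username) + ':' + encodeURIComponent(password);
  return proxyUrl.replace('://', '://' + auth + '@');
}

// NT-style usernames contain a backslash, which is why the
// encodeURIComponent calls matter:
console.log(addCredentials('http://myproxy:80/', 'MYDOMAIN\\jdoe', 'p@ssword'));
// http://MYDOMAIN%5Cjdoe:p%40ssword@myproxy:80/
```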

4. Set NPM to some junk proxy to make sure it’s working

I use the following command:

npm set proxy

Since I use cntlm, I also turn that off (NET STOP CNTLM).

If you’ve done everything correctly, you should see the same thing I do when you try something like “npm install uglify-js”. If I type my Windows NT credentials (<domain>\<username>) and then my password, it works.


Setting up node.js on your own VPS

Rather than go with one of the providers like Joyent or Heroku (which I’m sure are great, but limit you to the stack they are using), I decided to go with an inexpensive, Linux-based, lightweight VPS. These can be rented out quite cheaply these days, and node.js doesn’t require much in terms of memory or CPU.

With my VPS, I am using SSH to get to a terminal (no GUI here, and none needed, really) and I am on Windows 8. Therefore, the first thing to do is to obtain a good SSH client. I am using Cygwin and installed its SSH packages. My VPS is running Ubuntu, so these instructions apply specifically to Ubuntu. To upload and download files I used the Cygwin “scp” command.

Once you have connected successfully to the terminal, there are a few things to install…

What version of Linux am I running?

To make sure everything is going to run smoothly, it’s easy to get the current version of your Linux distro at the terminal:

lsb_release -a

I happen to be using 12.04 “Precise”.

Setup node.js

If I attempted to install node.js out of the box, “apt-get” would retrieve a very old version.
An easy way to get the latest version installed was by adding the PPA/repository described here, doing an apt-get update and then installing node. Since I was on a barebones Linux install, I first needed to set up the “add-apt-repository” command:

apt-get install software-properties-common python-software-properties

Then add the repository and install node:

add-apt-repository ppa:chris-lea/node.js
apt-get update
apt-get install nodejs

Now if you run “node -v” you should see the version of node installed on your system. In my case it replies “v0.10.26”.

Verify node.js is accessible externally

Most of this will have to do with your particular VPS setup. In my case, the major thing to watch out for was to tell node.js which IP address to listen on. The default “hello world” example will only listen for local connections (127.0.0.1 or localhost; there’s a stackoverflow question covering this). One option is to tell node.js to listen on any IP address, for example:

var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(80, '0.0.0.0');

Setup nginx

Once we have node working, the next item on my agenda is to get nginx up and running. Installing nginx was easy ( apt-get install nginx ). Once nginx is installed, expect any node.js app to error when listening to port 80, as nginx will automatically start up and begin listening to that port. This is a good thing, as it makes it easy to verify that nginx is running and installed.

I want to run multiple node apps on the same server with the same IP address and port and although I don’t use digitalocean, there’s a good tutorial on their community site on how to do this here:

The key parts you need to do are:

1) Add server_names_hash_bucket_size 64; to the http section of your /etc/nginx/nginx.conf file
2) Setup a configuration file for your domain/port like this example:

server {
    listen 80;
    server_name {YOUR_DOMAIN};

    location / {
        proxy_pass http://localhost:{YOUR_PORT};
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Don’t forget to start your node app and make sure it is listening on the port that nginx has been told to forward to ({YOUR_PORT} above).

I wanted to test this functionality right away from my Windows 8 laptop, even though I didn’t have a domain set up. No problem! Just edit your hosts file under %windir%\system32\drivers\etc and add a line mapping any domain you like to your external VPS IP address. Make sure a corresponding nginx config file is set up for this domain on the server end.

Once you’ve added your config files, if you want to reload nginx, this is how you do that:

nginx -s reload

Now you should be able to navigate to the domain you’ve set up and, if everything is configured properly, nginx should forward the request to your running node.js app.

Setup a persistent process or job to keep your node app running and start on boot

If you plan on using your VPS to host production apps, you’ll want to consider packages like “forever” to keep your node app running and manage background node processes. You’ll also want the node.js app to start on server reboot. These parts are also detailed in the digitalocean post, and their method basically involves creating a startup bash script that executes “forever” (npm package here) with arguments to start your particular node app immediately.

Forever can also start/stop any given forever-based node process whenever you like from any terminal session. I found the “forever -?” command (help) very useful.

Maybe sometime I’ll write a blog post on getting mongodb or the like setup.


Adventures with Yeoman, Angular and node

I think node.js is awesome and angularjs is one of the top SPA frameworks, for sure.

Prior to these attempts I had never really used yeoman, bower or even grunt. Sure, I’d played a little, but that’s all.

I saw this neat video on a yeoman workflow, and decided I wanted to try and set it up on my Windows 8.1 laptop. Here’s what that involved (I’m assuming you have node installed already):

1. Install Ruby, Git, and the “compass” gem
I had msysgit installed the conservative way, and that doesn’t work well when bower goes looking for it through cmd. The “RailsInstaller” can install git for you, already set up in your PATH, etc.

Once ruby and git are installed and available from your PATH, run the “Command Prompt with Ruby and Rails” from the start screen and install the compass gem:

gem install compass

This gem seems to be required for “grunt serve” to work in this fashion.

2. Install yo, generator-angular, jasmine-node, karma-jasmine
“Yo” is, of course, yeoman. “Generator-angular” provides yeoman with bits to scaffold angular stuff. Jasmine-node and karma-jasmine seem to be required for the grunt testing that the generator-angular expects.

npm install -g yo
npm install -g generator-angular
npm install -g jasmine-node
npm install -g karma-jasmine

3. Create a yeoman/angularjs project
Open up your terminal window and create an angular project using yeoman:

yo angular

This sets up most everything, and thanks to ruby/compass “grunt serve” should be working. “grunt test” was still broken for me (first saying something about there being no jasmine provider, next saying its bits to use chrome were missing). I resolved these issues by running these commands:

npm install karma-jasmine --save-dev
npm install karma-chrome-launcher --save-dev

Now “grunt test” should also be working. Enjoy the awesome that is modern web app workflow!🙂


Valid Reasons For Reformatting Code

I wanted to capture my personal philosophy and reasoning about when code should be reformatted, and when it should generally be left alone. This applies broadly to enterprise-level projects where teams may be distributed, work may be spread across multiple teams, and legacy code exists that must be maintained. In an ideal world, all teams would come to a reasonable agreement on style guidelines or maintain a level of flexibility agreeable to all parties involved.

Valid reasons for reformatting code:

1a. I’m working on this code

Invalid reasons for reformatting code:

1b. I’m working with this code
2b. It’s not readable (aka, it’s not readable to me)
3b. It exists in the same project I’m working on

Let me elaborate on this reasoning.

1a. I’m working on this code

Now, let me be perfectly clear: it’s absolutely valid to reformat code you are working on for readability. Your comprehension of the code you are currently working on is the highest “readability” priority. Future developers have the same carte blanche you do to reformat the code for the maximum readability possible for the change they must make.

1b. I’m working with this code

This is not a valid reason because we are doing object-oriented, modular programming. We use interfaces, services, repositories, unit tests and “black box” abstractions.

If the black box’s expected input/output doesn’t match, then you can start working on that code, updating the unit tests and making sure it works in isolation. You shouldn’t need to comprehend the inner workings of the code, only its publicly accessible API.

Changing the public API is not a formatting issue, it is a naming or “needs based” issue and most shared bits don’t have any formatting concerns.

2b. It’s not readable (aka it’s not readable to me)

You are spending company time and resources, as well as taking a risk, by changing code that works. Assuming a skilled, diligent team of developers, the original author of that code wrote it in a way that maximized readability for them as an individual. Their justification for formatting is as good as (or better than) yours.

If it…

  • Performs well
  • Passes unit tests
  • Meets all functional requirements
  • Maintains separation of concerns
  • Isn’t code you are currently working on

…then stop, take a deep breath and leave it alone.

3b. It exists in the same project I’m working on

This is especially true for shops where you generally keep a one-class-to-a-file standard, which is why a “consistency within a class file” rule works so well.
One class file will generally have its own responsibility, will be tested in isolation, and so on.
For everything else, see 2b.

Leave a comment

JavaScript – Degrees of Organization

These days I’ve been working on multiple enterprise-level SPA web applications. These applications involve lots of JavaScript, which is great, because it empowers us to deliver rich client-side functionality (and I love JavaScript). While working on one of my latest projects, we’ve been using require.js and durandal.js, but not all projects require this degree of modularity and organization.

The guidelines below are cumulative depending on how complex your application becomes. Sometimes applications grow over time, so starting from a better place in an organizational sense will help you adapt with virtually no short-term cost to you, reaping long-term benefits.

Simple Sites and CRUD Applications

For simple websites and basic CRUD apps, and by that I mean limited to less than a dozen pages or views altogether that generally only utilize “postbacks” to ferry data, I advocate following some basic guidelines:

  • Avoid global variables. Scripts can execute within a closure, inside which you can turn on “use strict” and write immaculate JavaScript code.
  • Only inject JavaScript directly into markup to communicate server variables. If you’re injecting anything more than raw JSON data in a variable assignment, there’s a good chance you’re making a difficult-to-maintain mess. This is doubly true if you’re building JavaScript with a StringBuilder or something similar. Why would you do this? Why not a multi-line string, a resource file or other embedded resource? You shouldn’t be doing this anyway, because the functionality can remain in the actual JavaScript files, which pick up the data from the view.
  • Only use global variables for sharing modules. Global variables should be naturally unusual in “web sites” and simple CRUD apps that have very little JavaScript code. I’m not convinced that the whole world needs to use requirejs (although if you work with a modern pipeline like grunt or bower or something, you probably should).
    For projects like these, I use a simple namespace library that allows multiple modules to share a root namespace regardless of which one is loaded first.
    You still have to order your dependencies, but it’s very clear what belongs where.
  • File structure should reflect function. If you have a user account page that has page-specific JavaScript, it might make sense to have a “users.account.js” file, whose globally accessible variable happens to be an “account” object sitting atop a “users” object that sits on the global scope. This keeps everything very easy to navigate without resorting to “find in all files”. Consider putting shared scripts in a “scripts/lib” folder (ie: “common.js”, “ajax.js”) and individual scripts in a “scripts/view” folder.
  • Hold your JavaScript to the same standards (but not style) as any other language. JavaScript code can be clean, DRY and testable just like any other language, although sometimes it requires adopting a new mode of thinking. That said, the other side of this point is to embrace JavaScript itself. Don’t try to force it to be Java or C#, because it isn’t.
  • Consider using integrated JSHint or similar. I suggest JSHint over JSLint because, if it’s integrated, you don’t want any noise from rules your team intends to ignore; you only want to check against a series of rules your team agrees make sense to hold. JSHint is much more configurable in this respect. It should be integrated in the same sense that unit tests and broken-build notifications shouldn’t require any one person to run them, and you should get feedback as soon as possible.
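
To make the namespace idea above concrete, here is a minimal sketch of that kind of helper. The ns function and the users/account names are illustrative, not from any particular library:

```javascript
// Minimal namespace helper: each segment of the path is created only
// if it doesn't already exist, so load order between files doesn't matter.
function ns(root, path) {
  return path.split('.').reduce(function (parent, part) {
    parent[part] = parent[part] || {};
    return parent[part];
  }, root);
}

var app = {}; // stands in for the shared global root object

// users.account.js
var account = ns(app, 'users.account');
account.greet = function (name) { return 'Hello, ' + name; };

// users.profile.js (loading this file first would work just as well)
ns(app, 'users').profile = { fields: ['name', 'email'] };

console.log(app.users.account.greet('Ada')); // Hello, Ada
```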

Simple Web Applications and Multi-Page Apps

So, you’re designing a web application, but you can’t or don’t want to turn it into an SPA. This sort of application has some rich client-side functionality but won’t become a full-on single page app, perhaps because it has to live side-by-side with legacy code. Certain pages may have very rich client-side functionality that allows editing data with some amount of business logic or query logic, etc.

So, all of the above, and…

  • Use require.js. Learn to love and embrace the future of JavaScript modularity. If you haven’t already, you should find out if your platform of choice supports requirejs integration. In my opinion, this means you can write your modular code, but then it gets combined on the server such that you don’t have any delay in loading all of the scripts that you need and it’s not any different than loading all of your scripts via <script> tags, except that it’s clean, organized and prerequisites don’t get mixed up. With require, if module A depends on B, and module B depends on C, even if A is loaded first, require will wait a configurable amount of time before executing A such that B and C can be loaded first.
  • Consider using an SPA framework. Durandal.js and other SPA tools can be used for certain pages or areas of the site where it makes sense to do so, leaving the rest of the site undisturbed. This might be overkill for your 3 tab, 6 button form, but if you have a lot of pages with client-side functionality, adopting a pattern will make development and maintenance easier. Adopting an established pattern with documentation, and possibly even paid support, is just so much better.
  • Consider investing in RESTful APIs. If you have client-side grids and filters and such, you should already be leveraging ASP.NET Web API or similar. If your RESTful API is rigid in setting up a query, find a balance between passing too many optional arguments and writing endpoints that return results for specific uses. Keep in mind that the latter is much easier to test as there is much lower cyclomatic complexity. If you have complex query situations, it may be time to invest in OData and/or something like breeze.js, much like graduating from ADO.NET to LINQ.
  • Consider establishing a strategy for homogeneous validation. This sort of thing allows you to leverage your server-side validation rules in your client-side code in a way that keeps it DRY. ASP.NET MVC supports this with Data Annotations and jQuery unobtrusive validation, but at this time doesn’t come with plumbing to do this in an SPA context. If you use knockout.js, you might write or find an adapter that plugs knockout validation into the viewmodels that your JavaScript uses. If you are using ASP.NET MVC and Razor is still generating your markup, you can write your own adapter for the data-* attributes that Razor generates for you with unobtrusive validation, or you can turn off unobtrusive validation and use the JSON that ASP.NET MVC can inject into your page. Either way, as we all know, server-side validation is a must-have, but any good web app will not force the user to wait for a POST operation just to find out a single field was missed.
  • Establish a method for integrated JavaScript unit tests. If you have to run these tests manually to get feedback, they won’t get run. Even Visual Studio and TFS users can leverage the Chutzpah plugin with QUnit and PhantomJS.
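
To illustrate the A/B/C ordering point from the require.js bullet above, here is a toy stand-in (not the real require.js internals) showing why a module’s factory only runs after its dependencies resolve:

```javascript
// Toy AMD-style loader, for illustration only: define() registers a
// module, and resolve() runs a factory only after resolving its deps.
var registry = {};

function define(name, deps, factory) {
  registry[name] = { deps: deps, factory: factory };
}

function resolve(name) {
  var mod = registry[name];
  if (!mod.exports) {
    mod.exports = mod.factory.apply(null, mod.deps.map(function (dep) {
      return resolve(dep);
    }));
  }
  return mod.exports;
}

// A is defined before B and C, yet resolution still works out:
define('A', ['B'], function (B) { return { result: B.double(21) }; });
define('B', ['C'], function (C) { return { double: function (n) { return n * C.factor; } }; });
define('C', [], function () { return { factor: 2 }; });

console.log(resolve('A').result); // 42
```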

Single Page Applications

Hooray for the modern web! If you’re developing an SPA, you’ve managed to catch up with the rest of the world. In my experience, sorting out some of the items below early will save you from scrambling for this information later while trying to fix a bug or meet a pressing deadline. By establishing a pattern early on, it will be easy for developers to collaborate on shared widgets and to maintain legacy code. I assume by this point you have long ago worked out how to load dependencies and share both instance (prototypal) and singleton (JS object) modules across your application. I also assume you’ve worked out how your jQuery plugins and static content resources (ie: JSON or markup) are loaded and cached on demand.

  • Establish a strategy for widgets. In this context, examples of a widget include…
    A disconnected “sub header” module that contains summary or shared information that might need to be referenced from multiple pages, panels or views.
    A dialog or a reusable component that might need to be injected into the current workflow.
  • Establish a strategy for “child routing”. For example, perhaps your site has a global menu that navigates between major sections. Each section might have a series of tabs or bootstrap pills. These tabs/pills might have tabs/pills of their own, and so on. A good strategy should include details like: how will deep linking work? How can a descendant view/route communicate with a grandparent router/view? How can data be communicated up and down the routing structure?
  • Establish support for your framework of choice. If it’s homegrown, that may be just what you need – but make sure to invest proper resources and time to resolve issues and add functionality as required. If it’s something like durandal, angular, ember, backbone, etc. train your team and/or give them time to dig deep into the framework. There’s going to come a time when you will need to comprehend or otherwise debug the code in that framework, even if the bug is in your own code. Decide whether you want to (or can) pay for support externally (from the author or consultant) or internally (from your developers in the hours they spend).


Upgrading my HP Envy m6 to SSD

When you upgrade to an SSD, likely you want to clone partitions.

At least in my case, don’t try to do this in Windows. I had my SSD connected via USB, and Windows kept disconnecting the USB disk when I tried it halfway through the process.

Download yourself a copy of Macrium Reflect Free, and then burn the recovery disc. Yes, the recovery disc can clone the drive. I burned it to a DVD and booted into it.

I can’t see my SSD! Don’t panic. This took a while (a few minutes, maybe?), so be patient. Eventually the SSD should show up, and/or you can use the “refresh” option until it shows.

Macrium says my partitions won’t fit! Don’t panic. Tell it “ok”, and you should see the partitions that DO fit. Assuming you made enough space available, “edit” the partition options for the partition that is too large and shrink the size until there is enough room for the recovery partition, etc. In my case, I left about 31 GB free. Then I dragged the recovery partition down to the SSD and moved on to the next step.

If you did this right and everything went perfectly, you should just be able to boot straight into the disk without changing any options.🙂


