Saturday, December 10, 2011

Most software development efforts involve multiple source code files, and Javascript-heavy web applications are no exception. However, if you want your application to load quickly, you need to combine and minify all of those Javascript files somehow. RequireJS is a library that helps with this process by letting you define Javascript modules and the dependencies between them. In "development mode", RequireJS downloads modules individually as they are needed. When all of your scripts are shiny and bug-free, you can run the optimization tool, which analyzes the module dependencies and creates a single, minified Javascript file.
The optimization tool is usually run as part of a build process before deploying the application, but if your application runs on Node.js you can eliminate this extra step. Since the optimization tool is itself a Node.js script, it can be invoked directly from the application's start-up code. Here is what this might look like with the fantastic Express web framework:
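What follows is a minimal sketch rather than a drop-in recipe: it assumes an Express 2.x-style createServer, modules living under public/js, a top-level module named main, and port 3000, so adjust those names to match your project.

var express = require('express');
var requirejs = require('requirejs');

var app = express.createServer();

if (process.env.NODE_ENV === 'production') {
    // Run the r.js optimizer at start-up: copy the public folder to
    // public_build, combining and minifying the Javascript modules.
    requirejs.optimize({
        appDir: 'public',             // source folder to optimize
        baseUrl: 'js',                // module root, relative to appDir
        dir: 'public_build',          // output folder for optimized files
        modules: [{ name: 'main' }]   // top-level module to trace
    }, function () {
        app.use(express.static(__dirname + '/public_build'));
        app.listen(3000);
    });
} else {
    // Development mode: serve the non-optimized files as-is.
    app.use(express.static(__dirname + '/public'));
    app.listen(3000);
}

Note that in production the app only starts listening once the optimizer's callback fires, so requests are never served from a half-built public_build folder.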
In development mode, the application will serve the non-optimized Javascript files from the public folder. In production mode, the RequireJS optimizer will combine and minify the Javascript files and serve the optimized files from the public_build folder. Any other files in the public folder (such as images and stylesheets) will also be copied to the public_build folder, so you don't need to clutter your source code repository with generated files.
Sunday, December 4, 2011
Commit early, commit often
Hidden deep inside TortoiseSVN is a reporting tool that can give you statistics about commits to your repository. I ran this report at my day job several times over the years, and the output has always been a quiet source of pride for me. Since I'm moving on to a new project, I decided to publish the report here. If nothing else, it serves as a reminder to myself of what I spent the last four years doing:
This is the commit history for the entire four years of the project up to the date when this post was published (names have been removed to protect the innocent). I account for about 18% of the total commits to the repository. One of my teammates (who incidentally owns the third-highest bar in the graph) claimed that I average 7.5 commits per hour assuming a 40-hour work week (ha). I think his math might be a little off, since I calculated a significantly less impressive 5 commits per day when I ran the numbers... although maybe his numbers are a little closer if you exclude the year I served in management.
Hold on a second...
This can't be fair, can it? I've been on the team longer than anyone else at this point, so of course I've amassed more commits. Well, let's look at the commit history for one month during which the team membership was relatively stable:
Me again, and by a wider margin, too.
A lot of my teammates were shocked as these graphs made their way around the team, but there are benefits to working at this pace:
- You are less likely to break something with a lot of small commits than you are with a few huge ones,
- Your teammates see your code sooner, so you get feedback sooner and reduce the risk of merge conflicts,
- Customers get features and bug fixes sooner (as long as other organizational factors don't stand in the way),
- And plenty of other reasons smarter people have already enumerated.
For me, writing software is all about momentum. I am more stressed out at the end of the day if I don't commit a whole bunch of code.
Happy committing!
Tuesday, November 15, 2011
Octophile: Social media widgets for GitHub
Last week I put one of those Twitter follow buttons on my blog. It was cool, but it looked a little lonely all by itself. It occurred to me that other developers can follow me on GitHub too, so I started looking for a similar button for GitHub. Much to my surprise, I was not able to find one. Luckily, GitHub has a pretty awesome API, so I decided to try to make a follow button for GitHub myself. I cobbled together a small Sinatra application, pushed it to Heroku, and octophile.com was born!
You can see the GitHub follow button in action off to the right (if you are reading this on my website and not in an RSS reader). If you find this button useful, or would like to see some other widgets like this, I'd love to hear your feedback.
Happy coding!
Update 11/19/2011:
I have since found that something like this does already exist: github anywhere.
Thursday, October 13, 2011
Easy vagrant setup on windows with chocolatey
Setting up vagrant on Windows is well-documented and, overall, pretty easy. However, it is a little more complicated than just:
gem install vagrant

Luckily, we Windows developers recently got chocolatey. I just put together a few packages that make vagrant setup on Windows a snap. To try it out, just follow the chocolatey installation instructions. Then open a command prompt and run:
cinst vagrant

This command will download and install JRuby, VirtualBox, PuTTY, and the vagrant gem. It will also automatically set the registry keys for PuTTY so that once you set up a box with vagrant, you can SSH into it with:
putty -load vagrant

At this point the package comes with the old "works on my machine" stamp.
Check out the code on github or report an issue.
Update 11/10/2011:
More recent versions of the package will not modify the PuTTY registry with the vagrant profile. Instead, I recommend installing vagrant-putty.
Thursday, September 22, 2011
Enabling Pound proxy support for the HTTP methods PUT and DELETE
I'm not sure how many people out there are using Pound as a reverse proxy, but if you have it sitting in front of a ReSTful web service, you may run into some issues with HTTP methods other than GET, POST, and HEAD. By default, Pound only supports those three methods and will return an HTTP status code of 501 (Not Implemented) if it encounters a different method, such as PUT or DELETE.
Resolving this issue is easy. Each ListenHTTP or ListenHTTPS section in your Pound configuration may contain an xHTTP setting. The default value is 0 (GET, POST, and HEAD only); setting it to 1 adds support for PUT and DELETE. Putting it all together:
ListenHTTP
    # ... some settings
    xHTTP 1    # Support GET, POST, HEAD, PUT, and DELETE
    # ... some more settings
End
Happy proxying!
Monday, August 29, 2011
HTML Basics: Labels and inputs
I stumbled across some code the other day that made me smile. Here is essentially what was going on:
<input id="checkbox" type="checkbox" />
<label id="label">My Label</label>
<script type="text/javascript">
    $('#label').click(function() {
        var checkbox = $('#checkbox');
        checkbox.attr('checked', !checkbox.is(':checked'));
    });
</script>
Naturally, this bit of javascript will toggle the checkbox's checked state whenever the label is clicked. There is nothing functionally wrong with this approach, except that it is wasted effort. Every browser I have ever worked with will do this automatically without any javascript. Simply associate a label with a checkbox (or radio button) and clicking the label will toggle the checkbox's checked state:
<input id="checkbox" type="checkbox" />
<label id="label" for="checkbox">My Label</label>
Sunday, March 27, 2011
Testing MySQL queries with NUnit
Even the most adamant unit-testing purist will admit that database queries need to be tested. If possible, these tests should run against the same database engine that will be used in production.
With a SQL Server database, testing queries is fairly easy, since most Visual Studio installations will include SQL Server Express. Just create the database in a SetupFixture, connect with Windows Authentication, and you're all set. MySQL presents a few more challenges, however. First, it needs to be installed. Second, the tests need the correct credentials to connect to the installed MySQL instance.
One simple way to handle this is to set up a central MySQL instance for testing only. However, this means team members must be on the network to run the tests. There is also the issue of multiple team members running the tests at the same time causing unexpected failures in each other's test runs, or worse, causing unexpected failures in an automated build.
A better alternative is to keep the MySQL binaries in version control with a known configuration that can be used in test runs. The binary distribution of MySQL is fairly large; however, with the right command line arguments, we only need two files: bin\mysqld-nt.exe and share\english\errmsg.sys (or whichever language you want to use). Include both of these as content files in a test project and set the build action to "Copy If Newer". In a SetupFixture, start MySQL with some code like this in the setup method:
var process = new Process();
var arguments = new[]
{
    "--standalone",
    "--console",
    "--basedir=.",
    "--language=.",
    "--datadir=.",
    "--skip-grant-tables",
    "--skip-networking",
    "--enable-named-pipe"
};
process.StartInfo.FileName = "mysqld-nt.exe";
process.StartInfo.Arguments = string.Join(" ", arguments);
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = true;
process.Start();
The first two arguments (--standalone and --console) tell MySQL to run as a standalone instance and to keep the console window open (i.e. do not run as a service). The next three arguments (--basedir=., --language=. and --datadir=.) tell MySQL to run from the current directory, load language files (errmsg.sys) from the current directory, and write data files to the current directory. The --skip-grant-tables argument disables security so that the tests do not need to worry about providing credentials when connecting. The final two arguments (--skip-networking and --enable-named-pipe) tell MySQL not to listen for TCP connections and instead allow named pipe connections. This prevents our standalone MySQL instance from interfering with any other MySQL installations on the machine.
Once the instance has started, we can connect with a connection string like this: Data Source=localhost;Protocol=pipe;. Finally, kill the MySQL process in the SetupFixture teardown method.
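Putting those pieces together, a bare-bones SetupFixture might look something like the sketch below (the class and method names are placeholders, and the attributes are the NUnit 2.x flavor):

using System.Diagnostics;
using NUnit.Framework;

[SetUpFixture]
public class MySqlTestInstance
{
    private Process process;

    [SetUp]
    public void StartMySql()
    {
        // Launch the standalone instance with the arguments described above.
        process = new Process();
        process.StartInfo.FileName = "mysqld-nt.exe";
        process.StartInfo.Arguments = string.Join(" ", new[]
        {
            "--standalone", "--console",
            "--basedir=.", "--language=.", "--datadir=.",
            "--skip-grant-tables", "--skip-networking", "--enable-named-pipe"
        });
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.CreateNoWindow = true;
        process.Start();
    }

    [TearDown]
    public void StopMySql()
    {
        // Kill the instance once every test in the namespace has run.
        if (process != null && !process.HasExited)
        {
            process.Kill();
            process.WaitForExit();
        }
    }
}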
On my team, we have rolled this functionality (and a few other goodies) into an NUnit addin, but that's a story for another blog post.
Happy testing!
Thursday, February 10, 2011
NServiceBus assembly scanning: avoiding unintended consequences
The default startup behavior for an NServiceBus endpoint is to scan all assemblies in the deployment directory for any types it might be interested in, for example, message modules and message handlers. This is really handy, but occasionally it can lead to unintended consequences. Fortunately, the With method has a few overloads, and one of them takes a list of assemblies to scan. If you want to prevent loading types from any stray assembly that might make it into your deployment directory, change your initialization code to look like this:
var assemblies = GetType().Assembly
    .GetReferencedAssemblies()
    .Select(n => Assembly.Load(n))
    .ToList();
assemblies.Add(GetType().Assembly);
NServiceBus.Configure.With(assemblies);
This restricts the assembly scanning process to only the assemblies that your endpoint explicitly references. We can extract this code to an extension method so we can use it in other endpoints:
public static IEnumerable<Assembly> ReferencedAssemblies(this Type type)
{
    var assemblies = type.Assembly
        .GetReferencedAssemblies()
        .Select(n => Assembly.Load(n))
        .ToList();
    assemblies.Add(type.Assembly);
    return assemblies;
}
Now our endpoint initialization looks like this:
NServiceBus.Configure.With(GetType().ReferencedAssemblies());