2013-12-30

Grunt your ASP.net builds

Visual Studio 2013 has made great strides towards being a great tool for web development. When combined with the Web Essentials package it is actually a very good tool, maybe the best tool out there. However it has one shortcoming in my mind and that is the poor support for third party tools. It is difficult to track the latest trends in web development when you’re working on a yearly release cycle, as Visual Studio does. This is somewhat ameliorated by the rapid release of plugins and extensions. Still I think that the real innovative development is happening in the nodejs community.

One of the things to come out of the nodejs community is the really amazing build tool grunt. If you haven’t seen grunt then it is worth watching my mentor, Dave Mosher’s video on automating front end workflows with grunt. In that video he takes us through a number of optimizations of CSS and JavaScript using grunt. Many of these optimizations are not things which exist yet in any Visual Studio tooling that I’ve seen. Even in cases where the tooling does exist I find that it is being incorrectly executed.

Ideally your development environment mirrors your production environment exactly. However time and financial constraints make that largely impossible. If production is a hundred servers there is just no reasonable way to get a hundred servers for each developer to work on all the time. Unless your company has pots of money.

In which case, give me a call, I would be happy to spend it on insane projects for you. Perhaps you need a website which runs on a full stack of poutine instead of a full stack of JavaScript…

Visual Studio falls down because when you F5 a project the environment into which you’re placed is not the same as the package which is generated by an upload to your hosting provider. It is close but there are some differences such as

  • files which are not included in the project but are on disk will be missing in the package
  • JavaScript combination and minification are typically turned off on dev
  • CSS files are similarly not minified or combined

These actions can be breaking changes which will not be apparent in the development environment. For instance changing the names of function arguments, as is common in minification, tends to break AngularJS’ injection.

Thus I actually suggest that you use Grunt instead of the built in minification in ASP.net. Let’s see how to do exactly that.

The first thing you’ll need is a copy of nodejs. This can simply be the copy which is installed on all your developer workstations and build server or it can be a version checked into source control (check in vs. install is a bit of a holy war and I won’t get into it in this post). If you’re going to check a copy of node in then you might want to look at minimizing the footprint of what you check in. It is very easy with node to install a lot of packages you don’t actually need. To isolate packages to your current build you can simply create a “node_modules” directory. npm, the package management system used by node, will recurse upwards to find a directory called node_modules and install into that directory.

Let’s assume, for the purposes of this post, that you have node installed in a tools directory at the root of your project, next to the solution file. In there create an empty node_modules directory.

Node and an empty node_modules directory

Now that node is in place you can install grunt and any specific grunt tasks you need. I have a separate install of node on this machine which includes npm so I don’t need to put a copy in node_modules. I would like to have grunt in the modules directory, though.

npm install grunt

This will install a copy of grunt into the node_modules directory. There are a huge number of grunt tasks which might be of use to us in our project. I’m going to stick to a small handful for minifying JavaScript and base64 encoding images into data-uris.

npm install grunt-contrib-uglify
npm install grunt-contrib-concat
npm install grunt-data-uri

Now we will need a gruntfile to actually run the tasks. I played around a bit with where to put the gruntfile and I actually found that the most reliable location was next to the node executable. Because grunt and really node in general are more focused around convention over configuration we sometimes need to do things which seem like hacks. The basic gruntfile I created looked like
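
A minimal sketch of such a gruntfile - the paths are placeholders and the exact options, particularly for grunt-data-uri, should be checked against each plugin’s README:

    module.exports = function (grunt) {
      grunt.initConfig({
        concat: {
          scripts: {
            src: ['../../MyProject/Scripts/*.js'],   // all of the project's JavaScript
            dest: '../../MyProject/Scripts/all.js'
          }
        },
        uglify: {
          scripts: {
            src: ['../../MyProject/Scripts/all.js'],
            dest: '../../MyProject/Scripts/all.min.js'
          }
        },
        dataUri: {
          dist: {
            src: ['../../MyProject/Content/*.css'],
            dest: '../../MyProject/Content/dist',
            options: {
              target: ['../../MyProject/Content/images/*.*'],
              fixDirLevel: true
            }
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.loadNpmTasks('grunt-data-uri');

      grunt.registerTask('default', ['concat', 'uglify', 'dataUri']);
    };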

This is pretty basic but it does take all the JavaScript files in the project, combine them into one and then minify that file. It will also insert data URIs into your CSS files for embedded images. A real project will need a more complete gruntfile.

Now we need to run this grunt process as part of the build. As I mentioned this is a bit convoluted because of convention and also because we’re choosing to use a locally installed grunt and node instead of the global versions.

First we create a runGrunt.bat file. This is what visual studio will call as part of the post build.

Then tie this into the msbuild process. This can be done by editing the .csproj/.vbproj file or by adding it using Visual Studio.

The final step is to create a wrapper for grunt as we don’t have the globally registered grunt

Now when the build runs it will call out to grunt via node. One final thing to keep in mind is that the output files from the grunt process are what need to be included in your HTML files instead of the source files. So you’ll want to include all.min.js instead of the variety of JavaScript files.

This opens up a world of advanced build tools which we wouldn’t have at our disposal otherwise. At publication there are 2,047 grunt plugins which do everything from zipping files, to checking JavaScript style, to running JavaScript unit tests… Writing your own tasks is also very easy (much easier than writing your own msbuild tasks). Setting up grunt in this fashion should work for automated builds as well.

Best of luck using node to improve your ASP.net builds!

2013-12-27

Specific Types and Generic Collections

Generics are pretty nifty tools in statically typed languages. They allow for one container to contain specific types, in effect allowing the container to become an infinite number of specifically typed collections. You still get the strong type checking when manipulating the contents of a collection but don’t have to bother creating specific collections for each type.

If you’re working in a strongly typed language then having access to generic collections can be a huge time saver. Before generics were introduced a lot of time was spent casting the contents of collections to their correct type.
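
Something along these lines (the Customer type here is invented for illustration):

    using System.Collections;
    using System.Collections.Generic;

    public class Customer
    {
        public string Name { get; set; }
    }

    public class CollectionExamples
    {
        public void BeforeGenerics()
        {
            var customers = new ArrayList();
            customers.Add(new Customer { Name = "Alice" });

            // every read needs a cast, and a wrong cast only blows up at runtime
            var first = (Customer)customers[0];
        }

        public void WithGenerics()
        {
            var customers = new List<Customer>();
            customers.Add(new Customer { Name = "Alice" });

            // no cast, and the compiler rejects anything that isn't a Customer
            Customer first = customers[0];
        }
    }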

Casting like this is error-prone and is the sort of thing which makes Hungarian notation look like a good idea. However generics are like most other concepts in programming: dangerous when overused.

C# provides a bunch of generic collections in System.Collections.Generic which are fantastic. However there are also some types which are ill-advised. Tuple is one of the worst offenders:

Does anyone have a valid use case for a Tuple<T1,T2> where an explicit strongly-typed value object (even if generic) would not be better?

– David Alpert (@davidalpert) December 19, 2013

There are actually 8 overloads of Tuple allowing for near endless combinations of types. The more arguments the worse the abuse of generics. The final overload of tuple hilariously has a final argument called “Rest” which allows for passing in an arbitrary number of values. As David Alpert suggested it is far better to build proper types in place of these generic tuples. Having proper classes better communicates the meaning of the code and gives you more maneuverability in the case of code changes. Tuples don’t support naming each attribute so you have to guess what they are or go look up where the tuple was created. This is not maintainable at all and is of no help to other programmers who will be maintaining the code.

The C# team actually implemented Tuples as a convenience to help support F#’s use of tuples, which is more reasonable. Eric Lippert suggests that you might want to group values together using a tuple when they are weakly related and have no business meaning. I think that’s garbage. The idea that OO programming objects should somehow all map to real world objects is faulty in my mind. Certainly some should map to real world items but if you’re building a system for a library then it is pedantry to have a librarian class. The librarian is a construct of the actual business process which is checking out and returning books.

So that brings me to the first part of my rule on generics: use specific types instead of generic objects, avoiding Tuple and similar generic classes. Building a class takes but a moment and even if it is just a container object it will at least have some context around it.
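
A quick sketch of the difference, using an invented product example:

    using System;

    // a small named type costs a few lines but carries its meaning with it
    public class CheapestProduct
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class Catalogue
    {
        // Tuple<string, decimal> forces callers to remember that Item1 is the
        // name and Item2 is the price
        public Tuple<string, decimal> CheapestAsTuple()
        {
            return Tuple.Create("Poutine", 7.50m);
        }

        public CheapestProduct Cheapest()
        {
            return new CheapestProduct { Name = "Poutine", Price = 7.50m };
        }
    }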

On the flip side, generics are ideal for collections. I frequently run into libraries which have defined their own strongly typed collections. These collections are difficult to work with as they rarely implement IEnumerable or IQueryable. If new features are added to these interfaces, such as with LINQ, there is no automatic support for them in the legacy collections. It is also difficult to build the collections initially. For collections of arbitrary length make use of the generic collections; for collections of fixed length use a custom class.

Generics are powerful but attention must be paid to proper development practices when using them.

2013-12-23

Speeding up page loading - Part 4

In the first three parts of this series we looked at JavaScript and CSS optimizations, image reduction and query reduction. In this part we’ll look at some options around optimizing the actual queries. I should warn you straight up that I am not a DBA. I’m only going to suggest the simplest of things; if you need to do some actual query optimization then please consult with somebody who really understands this stuff like Mike DeFehr or Markus Winand.

The number of different queries run on a site is probably smaller than you think. Certainly most pages on your site are, after the optimizations from part 3, going to be running only a couple of queries with any frequency. Glimpse will let you know what the queries are but, more importantly, it will show you how long each query takes to execute.

Without knowing an awful lot about the structure of your database and network it is hard to come up with a number for how long a query should take. Ideally you want queries which take well under 100ms as keeping page load times low is important. People hate waiting for stuff to load, which is, I guess, the whole point of this series.

Optimizing queries is a tricky business but one trick is to take the query and paste it into SQL Management Studio. Once you have it in there click on the actual execution plan button. This will show you the execution plan which is the strategy the database believes is the optimal route to run the query.

If you’re very lucky the query analyser will suggest a new index to add to your database. If not then you’ll have to drill down into the actual query plan. In there you want to see a lot of

Index seek

Index seeks and index scans are far more efficient than table seeks and scans. Frequently table scans can be avoided by adding an index to the table. I blogged a while back about how to optimize queries with indexes. That article has some suggestions about specific steps you can take. One which I don’t think I talk about in that article is to reduce the number of columns returned. Unfortunately EF doesn’t have support for selectively returning columns or lazily loading columns. If that level of query tuning appeals to you then you may wish to swap out EF for something like Dapper or Massive, which are much lower level.

If you happen to be fortunate enough to have a copy of Visual Studio ULTIMATE (or, as the local Ruby user group lead calls it: Ultra-Professional-Premium-Service-Pack-Two-Release-2013) then there is another option I forgot to mention in part 3. IntelliTrace will record the query text run by EF. I still think that Glimpse and EFProf are better options as they are more focused tools. However Glimpse does sometimes get a bit confused by single page applications and EFProf costs money so IntelliTrace will work.

2013-12-17

Learning Queue

There is so much stuff out there to learn it is crazy. It is said that the last person who knew everything was Johann Wolfgang von Goethe, or Immanuel Kant, or perhaps John Stuart Mill. These men all lived some time between 1724 and 1873. Personally I think the assertion that human knowledge was ever so small as to have been knowable by a single person is bunkum. Even a millennium ago there were certainly elders who spent their entire lives learning the patterns of flow of a single river or how to tell which plants to plant based on the weather. A person who knew everything is a romantic idea so I can certainly see the appeal of believing such a notion.

These days the breadth of knowledge is so expansive that knowing everything about even the smallest topic is impossible. I once heard a story (sorry, I don’t remember the source) of a famous wrist surgeon who operated only on right wrists. He was that specialized. The field of computing is a particularly difficult one to explore as the rate of new ideas is breakneck. 4 years ago there was no nodejs, no coffeescript, no LESS - web development was totally different. Chef hadn’t been released and nuget was a twinkle in Phil Haack’s eye.

Somebody was talking about how the pace of new development ideas in computing has accelerated in the past two or three years and that from the .com crash of 2000 until 2009ish innovation had really been slow. If that is true it means that for almost my entire technical life innovation has been slow. I’ve been hanging on by my fingernails and innovation has been slow? Oh boy.

In an e-mail the other day I mentioned to somebody that I had to add Erlang to my learning queue. The learning queue is just an abstract idea, I don’t have a queue. Well, I didn’t have a queue. That changes today. I have so many things to learn that I can’t even remember them.

So with the most pressing things at the top of my queue the list currently looks like

  1. Azure Scheduler
  2. AngularJS
  3. Grunt
  4. NodeJS
  5. Async in .net (still don’t fully understand this)
  6. E-mail server signatures
  7. Erlang
  8. GoLang
  9. Kinect gestures
  10. Raspberry Pi
  11. Some hardware sensors about which I know so little I can’t even put a name on this list

I’ll keep this list up to date as I learn things and as things move up and down in order. What is in your learning queue?

2013-12-16

Speeding up page loading - Part 3

Welcome to part 3 of my series on speeding up page loading. The first two parts were pretty generic in nature but this one is going to focus in a little bit more on ASP.net. I think the ideas are still communicable to other platforms but the tooling I’m going to use is ASP.net only.

Most of the applications you’re going to write will be backed by a relational database. As websites become more complicated the complexity of queries tends to increase but also the number of queries increases. In .net a lot of the complexity of queries is hidden by abstraction layers like Entity framework and NHibernate. Being able to visualize the queries behind a piece of code which operates on an abstract collection is very difficult. Lazy loading of objects and the nefarious n+1 problem can cause even simple looking pages to become query rich nightmares.

Specialized tooling is typically needed to examine the queries running behind the scenes. If you’re working with the default Microsoft stack (ASP.net + Entity Framework + SQL Server) there are a couple of tools at which you should look. The first is Hibernating Rhinos’ EFProf. This tool hooks into Entity Framework and provides profiling information about the queries being run. What’s better is that it is able to examine the patterns of queries and highlight anti-patterns.

The second tool is the absolutely stunning looking Glimpse. Glimpse is basically a server side version of the F12/FireBug/Developer Tools which exists in modern browsers. It profiles not just the SQL but also which methods on the server are taking a long time. It is an absolute revolution in web development as far as I’m concerned. Its SQL analysis is not as powerful as EFProf but you can still gain a great deal of insight from looking at the generated SQL.

We’ll be making use of Glimpse in this post and the next. There is already a fine installation guide available so I won’t go into details about how to do that. In the application I was profiling I found that the dashboard page, a rather complex page, was performing over 300 queries. It was a total disaster of query optimization and was riddled with lazy loading issues and repeated queries. Now a sensible developer would have had Glimpse installed right from the beginning and would have been watching it as they developed the site to ensure nothing was getting out of hand. Obviously I don’t fall into that category.

A lot of queries

So the first step is to reduce the number of queries which are run. I started by looking for places where there were a lot of similar queries returning a single record. This pattern is indicative of some property of items in a collection being lazily loaded. There were certainly some of those and they’re an easy fix. All that is needed is to add an include to the query.

Thus the query which looked like
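
(the entity and property names in this sketch are stand-ins for the real ones)

    // the users are loaded, but touching user.Permissions later fires one
    // extra query per user
    var users = context.Users
                       .Where(u => u.CompanyId == companyId)
                       .ToList();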

became
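
    // requires "using System.Data.Entity;" for the lambda form of Include
    var users = context.Users
                       .Include(u => u.Permissions)
                       .Where(u => u.CompanyId == companyId)
                       .ToList();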

In another part of the page load for this page the users collection was iterated over as was the permission collection. This caused a large number of follow on queries. After eliminating these queries I moved onto the next problem.

If you look at the screenshot above you’ll notice that the number of returned records is listed. In a couple of places I was pulling in a large collection of data just to get its count. It is much more efficient to have this work done by the SQL server. This saves on transferring a lot of unneeded data to the web serving tier. I rewrote the queries to do a better job of using the SQL server.
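
For example, asking the database for the count instead of materializing the whole collection (the names here are placeholders):

    // pulls every matching row back to the web server just to count them
    var slowCount = context.Users.Where(u => u.CompanyId == companyId).ToList().Count;

    // translated to a SELECT COUNT(*) and run on the SQL server
    var fastCount = context.Users.Count(u => u.CompanyId == companyId);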

These optimizations helped in removing a great number of the queries on the dashboard. The number fell from over 300 to about 30 and the queries which were run were far more efficient. Now this is still a pretty large number of queries so I took a closer look at the data being displayed on the dashboard.

Much of the data was very slow changing. For instance the name of the company and the project: these are unlikely to change after creation and if they do probably only once. Even some of the summary information, which was summarized by week, would not see significant change over the course of an hour. Slow changing data is a prime candidate for caching.

ASP.net MVC offers a very easy method of caching entire pages or just parts of them. I split the single large monolithic page into a number of components, each one of which was a partial view. For instance one section of the page calls out to three partial views.

As each graph changes at a different rate the caching properties of each graph are different. The action method behind each partial is annotated with an output cache attribute. In this case we cache the output for 3600 seconds, or an hour.
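
Something like this, assuming the partials are rendered as child actions (the action name and model are placeholders):

    using System.Web.Mvc;

    public class DashboardController : Controller
    {
        [ChildActionOnly]
        [OutputCache(Duration = 3600)] // cache the rendered partial for an hour
        public ActionResult WeeklySummaryGraph(int projectId)
        {
            // the real action queries the weekly summary data; elided here
            var model = new { ProjectId = projectId };
            return PartialView(model);
        }
    }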

Sometimes caching the entire output from a view is more than you want. For instance one of the queries which was commonly run was to get the name of the company from its Id. This is called in a number of places, not all of them in a view. Fortunately the caching mechanisms for .net are available outside of views.
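
A sketch using MemoryCache from System.Runtime.Caching (the class and method names are placeholders; any of the .net caches would do the job):

    using System;
    using System.Runtime.Caching;

    public class CompanyNameLookup
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public string GetCompanyName(int companyId)
        {
            var key = "company-name-" + companyId;

            var cached = Cache.Get(key) as string;
            if (cached != null)
            {
                return cached;
            }

            var name = LoadCompanyNameFromDatabase(companyId);
            Cache.Add(key, name, DateTimeOffset.Now.AddHours(1));
            return name;
        }

        private string LoadCompanyNameFromDatabase(int companyId)
        {
            // the real lookup is an EF query; elided here
            throw new NotImplementedException();
        }
    }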

One thing to remember is that the default caching mechanism in ASP.net is a local cache. So if you have more than one server running your site (and who doesn’t these days?) then the cached value will not be shared. If a shared cache is needed then you’ll have to look to an external service such as the Azure cache or ElastiCache, or perhaps a memcached or Redis server in your own data center.

On my website these optimizations were quite significant. They reduced page generation time from about 3 seconds to 200ms. An optimization I didn’t try, as it isn’t available in EF5, is to use asynchronous queries. When I get around to moving the application to EF6 I may give that a shot.

2013-12-09

Speeding up page loading - part 2

In part 1 of this series I talked about speeding up page loading by combining and minimizing CSS and JavaScript files. Amongst the other things loaded by a web page are images. In fact images can be one of the largest items to load. There are a number of strategies for dealing with large or numerous images. I’ll start with the most radical and move to the least radical.

The first approach is to get rid of some or all of your images. While it seems crazy, because you spent thousands on graphically designing your site, it might be that you can duplicate the look without images. Removing even a couple of images can speed up page loading significantly. Instead of images you have a couple of options: you can do without images at all or you can replace the images with lower bandwidth options such as an icon font or pure CSS.

Icon fonts are all the rage at the moment. Downloading custom fonts in the browser is nothing new but a couple of years ago somebody realized that instead of having letters make up the font glyphs you could just as easily have icons. Thus fonts like Font Awesome were born. Fonts are a fairly bandwidth efficient method of providing small images for your site. They are vector based and thus far smaller and more compressible than raster images. The problem is that an icon font might have hundreds of icons in it which you’re not using. This can be addressed by building a custom icon font. If you’re just interested in Font Awesome then icnfnt is the tool for doing that.

Alternately you can build your own images using pure CSS. The logo for one of the sites I run is pure CSS. At first it was kind of a geek thing to do to prove we could create a logo that way but having it in CSS actually allowed us to scale it infinitely and reduced the size of the page payload. It is a bit of an adventure in CSS to build anything complicated but worthwhile. If your image is too complicated for CSS then perhaps SVG is the tool for you. Scalable vector graphics can be directly included in the markup for your site so they require no additional requests and are highly compressible.

Some images aren’t good candidates for vector formats. For these there are fewer options. The first step is to play around with the image compression to see if you can trade some quality for image size. Try different formats like GIF and PNG then play with the quality settings. This is a bit of a rabbit hole; you can spend days on this. Eventually you end up kidnapping people off the street, strapping them into a chair and asking them “Which one is better, A or B? B or C? A or C?”. This is okay if you’re an optician but for regular people it usually results in jail.

Once the image is optimized you can actually embed the image into your CSS. This is a brand new thing for me which my mentor told me about. In your CSS you can put a base64 encoded version of your image. Manually this can be done at this cool website but more reasonably there is an awesome grunt task for processing CSS.

If you have a series of smaller images then you might consider using CSS sprites. With sprites you combine all your images into a larger image and then use CSS to display only portions of the larger image at a time.

Personally I think this is a huge amount of work with the same results achieved through embedding the images in the CSS. I wouldn’t bother, but you might want to give it a try as combining the images can result in a smaller image overall due to encoding efficiencies.

I was pretty impressed with how much image optimization helped our use of data. We saved about 100KiB which doesn’t sound like a lot but on slow connections it is a lifetime. It is also a lot of data in aggregate.

So far we’ve been concentrating on reducing the amount of data sent and received. In the next part we’ll look at some activities on the server side to reduce the time spent building the response.

2013-12-04

Speeding up page loading - part 1

I started to take a look at a bug in my software last week which was basically “stuff is slow”. Digging more I found that the issue was that pages, especially the dashboard, were loading very slowly. This is a difficult problem to definitively solve because page loading speed is somewhat subjective.

We don’t have any specifications about how quickly a page needs to load on the site. Less than 5 seconds? Less than 2 seconds? Such a thing is difficult to define because all too often we fail to define for whom the page loading should be quick. Loading is governed by any number of factors:

  • time taken to build the HTML for the view (excuse the MVC-style language - the same token replacement needs to be done on most frameworks)
  • speed of the server
  • speed of the connection from the server to the client
  • bandwidth between the client and the server
  • speed of the client to render the HTML
  • …

The list is pretty daunting so I thought I would write about what I did to improve the speed of my application.

The application is a pretty standard ASP.net MVC application with minimal front end scripting (at least compared with some applications). This means that the steps I take here are pretty much globally applicable to any ASP.net MVC website and many of them are applicable to any website. My strategy was to pick off the low hanging fruit first, fixing easy to fix problems and those which had a big impact on the speed. This would give some breathing room to get time to fix the harder problems.

This post became quite long so I’ve split it into a number of parts.

  1. Bundling CSS/JS
  2. Removing images
  3. Reducing Queries
  4. Speeding Queries

I’ll post the later parts as I finish writing them.

Loading Resources

A web page is made up of a number of components each of which has to be retrieved from a server and delivered to a client. You start with the HTML and as the client parses the HTML it issues additional requests to retrieve resources such as pictures, CSS and scripts. There is a whole lot of optimization which can be done at this stage and it is where I started on this website.

I started on the slowest loading page: the dashboard. We’re not live yet but it is embarrassing that on our rather limited testing data we’re seeing page load times on the order of 15 seconds. It should never have got this far out of control. Performance is a feature and we should have been checking performance as we built the page. Never mind, we’ll jump on this now.

My tool of choice for this is normally Google Chrome but I thought I might give IE11’s F12 tools a try. A lot of effort has been put in by Microsoft to improve Internet Explorer in the past few years and IE11 is really quite good. I have actually found myself in a position where I’m defending how good IE is now to other developers. I never imagined I would be in this position a couple of years ago, but I digress.

You can get access to the developer tools by hitting F12 and pressing the play button then reloading the page. This will result in something like this:

IE Profiling

This is actually the screen after some significant optimization. If you zoom in on this picture then you can see that this page is made up of 9 requests

  • 1 HTML
  • 3 CSS
  • 2 Scripts
  • 3 Fonts

Originally the page had several more script files and several images taking the total to something like 15. We want to attempt to minimize the number of files which make up a page as there is a pretty hefty overhead associated with setting up a new connection to the server. For each file type there is a strategy for reducing the number of requests.

HTML is pretty much mandatory. CSS files can be concatenated together to form a single file. Depending on how your CSS is constructed and how diligent you’ve been about avoiding reusing identifiers for different things across the site this step can be easy or painfully difficult. On this website I made use of the CSS bundling tools built into ASP.net. I believe that the templates for new ASP.net projects include bundling by default but if you’re working on an existing project it can be added by creating the bundles like so
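
Something along these lines, typically in a BundleConfig class (the CSS file names are placeholders):

    using System.Web.Optimization;

    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // individual files...
            bundles.Add(new StyleBundle("~/bundles/Style").Include(
                "~/Content/site.css",
                "~/Content/dashboard.css"));

            // ...or everything in a directory
            bundles.Add(new StyleBundle("~/bundles/StyleDirectory").IncludeDirectory(
                "~/Content", "*.css"));
        }
    }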

You’ll note that I’m registering the bundles twice; this is just to demonstrate that you can include either individual files or a whole directory. Then call out to this in the Global.asax.cs’s application start:
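
Which looks something like this, assuming the BundleConfig class from the sketch above:

    protected void Application_Start()
    {
        // route registration and the rest of the usual start up code lives here too
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }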

You can now replace all your inclusions of CSS with a single request to ~/bundles/Style (don’t worry about the ~/; Razor will correctly interpret that for you and point it at the site root). If you look at the CSS file hosted there you’ll see that it is a combined and whitespace-stripped file. This minification will save you some bandwidth and is an added benefit of bundling.

JavaScript files can be bundled in much the same way. If you’ve been smart and namespaced your JavaScript into modules then combining JavaScript should be a cinch. Otherwise you might want to look into how to structure your JavaScript. Bundling the script files is much the same as the CSS:
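
Again a sketch with placeholder file names:

    bundles.Add(new ScriptBundle("~/bundles/Scripts").Include(
        "~/Scripts/app/dashboard.js",
        "~/Scripts/app/charts.js"));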

The script bundle will concatenate all your script files together and also minify them. This saves not only on bandwidth but also on the number of connections which need to be opened.

Reducing the number of requests is a pretty small improvement but it is also pretty simple to do. In the next part we’ll look at removing images to speed up page loading.

2013-12-02

Content-Disposition comes to azure

The Azure team accepts requests for new features on their UserVoice page. I have spent an awful lot of votes on this request: “Allow Content-Disposition http header on blobs”. Well now it has been released!

Why am I so excited about this? Well when I put files up into storage, in order to avoid name conflicts I typically use a random file name such as a GUID and then store that GUID somewhere so I can easily look up the file and access it. The problem arises when I try to let people download files directly from blob storage: they get a file which is named as a random string of characters. That isn’t very user friendly. Directly accessing blob storage lets me offload the work from my web servers and onto any number of storage servers, so I don’t want to abandon that either. Content disposition lets me hijack the name of the file which is downloaded.

To make use of the new header is actually very simple. I updated the save to blob storage method in the project to take an optional file name which I push into the content disposition
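
A sketch of what that method might look like (the signature is mine, not the project’s actual code):

    using System.IO;
    using Microsoft.WindowsAzure.Storage.Blob;

    public void SaveToBlobStorage(CloudBlobContainer container, string blobName,
                                  Stream content, string friendlyFileName = null)
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        if (!string.IsNullOrEmpty(friendlyFileName))
        {
            // downloads will be saved as friendlyFileName rather than the GUID
            blob.Properties.ContentDisposition = "attachment; filename=" + friendlyFileName;
        }

        blob.UploadFromStream(content);
    }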

Now when I link people directly to the blob they get a sensibly named file. To do this you’ll need the latest Azure storage dll (3.0). One note of caution is that as of writing the storage emulator hasn’t been updated and will throw some odd errors if you attempt to use the new storage dll against it. Apparently it will all be fixed in the next release of the emulator.

Setting the content disposition header on the blob ensures that everybody who downloads it gets the renamed file. It is also possible to set the header using the shared access signature (SAS) so that you can modify the name of the document for each download. Although, I’ll be honest, I could not find a way of doing this from the managed storage library. I can only find examples using the REST API.

2013-11-23

What makes a senior developer

LinkedIn was kind enough to send me an email with some suggestions about people I might know. In that collection was a young fellow with whom I interact infrequently. He graduated from university in 2009 at which point he started working for the company he remains with to this day. About a year and a half after he started with the company he became a senior developer.

So this fellow 18 months out of school, who has worked with one company on one project in one language is a senior developer.

Oh. My.

If this fellow works for another 40 years I’m not sure what title he will end up with. Ultra-super Megatron Developer? 8th Degree Developer Black Belt? Or something truly silly like Architect?

The real issue, though, is that companies pay people by title. I would guess that this fellow deserved a raise and that to get that raise his manager had to bump his title. The whole system devalues the concept of experience which is very important.

As an industry we still haven’t quite figured out the career path for people who like to program. We shove them into management roles because that is what we have always done with other disciplines. There are countless blogs and articles about that problem. By moving experienced developers to manager roles we’re losing years of great experience and young developers have to relearn the lessons of the past.

We are never going to be able to change business titles; there is too much momentum behind job titles. We need to borrow an idea from “The Naming of Cats”, the T. S. Eliot poem. Each cat has 3 different names, one of which is the cat’s secret name, the name which no human will ever know. Equally developers need to have names that business doesn’t know. I’m reminded of those Geek Codes from the days of Slashdot. These were a way of identifying just what sort of a geek you were.

We should have a way of talking about our abilities and skills which is distinct from job titles. This is somewhat similar to the ideas of software craftsmanship which Uncle Bob uses. I think that the craftsmanship movement is a bit too narrow and focused on complying with one way of thinking. So I would suggest that a senior developer should have

  • Worked for a number of companies
  • Developed on a number of different platforms
  • Worked with several different programming paradigms (OO, procedural, functional, …)
  • Shipped new software
  • Supported existing software
  • Improved the culture at a company (introduced source control, introduced builds, moved a team to agile, …)
  • An understanding of scale, databases and caching

In addition a senior software developer should be able to have reasonable discussions about almost anything in computers. They should have strong opinions on most technology and they should be willing to change these ideas when faced with new and better ones. A senior developer should watch emerging trends, understand them and take advantage of them.

In short a senior developer should be awesome. I’ve only known a handful of people who are sufficiently awesome to be senior software developers. I wouldn’t count myself among them, but I’m working on it. You should too.

2013-10-10

2 Days with ScriptCS

A couple of days ago I found myself in need of doing some programming but without a copy of Visual Studio. There are, obviously, a million options for free programming tools and environments. I figured I would give ScriptCS a try.

If you haven’t heard of ScriptCS I wouldn’t be surprised. I don’t get the impression that it has a huge following. It is basically a project which makes use of the Roslyn C# compiler to build your shell-script-like C# into binaries and execute them. It provides a REPL environment for C#. On the surface that seems like a pretty swell thing. My experience was mixed.

The Good

Having the C# syntax around made my life much easier. I didn’t have to look up the syntax for loops or the like, which always seems to catch me when I script something in a more traditional scripting language. It was also fantastic to have access to the full CLR. I could do things quickly which would have been a huge pain to figure out in other scripting languages, like accessing a database.

You can also directly install nuget packages using scriptcs simply by running

scriptcs.exe -install DocX

It will download the package, create or update a packages.config and the scripts automatically find the packages without having to explicitly include the libraries. I was able to throw together a tool which manipulated Word documents with relative ease.

I had a lot of fun not having access to Intellisense. At first I was concerned that I really didn’t know how to programme at all and that everything I did was just leaning on a good IDE. After an hour or so I was only slightly less productive than I would have been with Visual Studio. I used Sublime as my editor and threw it into C# mode which gave me syntax highlighting. Later I discovered a Sublime plugin which provided C# completion! Woo, if not for CodeRush, Sublime could have replaced Visual Studio outright.

It was easy to define classes within my scripts, something I find cumbersome in some other scripting languages. The ability to properly encapsulate functionality was a joy.
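
For example, a hypothetical report.csx can mix a class definition with top-level script code:

    using System;

    public class Report
    {
        public string Title { get; set; }
        public DateTime Generated { get; set; }

        public void Print()
        {
            Console.WriteLine("{0} ({1:d})", Title, Generated);
        }
    }

    var report = new Report { Title = "Weekly status", Generated = DateTime.Now };
    report.Print();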

I didn’t try it but I bet you would have no problem integrating unit tests into your scripts, which puts you a huge step up on bash… is there a unit testing framework for bash? (Yep, there is: https://github.com/spbnick/epoxy)

The Not So Good

Of course nothing is perfect. I used ScriptCS as one would use bash: I would write a chunk of code then run the script, check the output and then add more code. The problem is that scriptcs is SLOW to start up. Like kicking off a fully fledged compiler slow. This wouldn’t have been too bad except that for some reason every time I ran a script it would lock the output file.

C:\temp\scriptcs> scriptcs.exe .\test1.csx
ERROR: The process cannot access the file 'C:\temp\scriptcs\bin\test1.dll' because it is being used by another process.
C:\temp\scriptcs> del bin\*
C:\temp\scriptcs> scriptcs.exe .\test1.csx
hi
C:\temp\scriptcs>

I wanted to stab the stupid thing in the face after 5 minutes. I opened up process explorer to see if I could see what was locking the file. As it turns out: nothing was using the file. I don’t know if this is a bug in windows or in scriptcs. In either case it is annoying. I discovered that you can pass the -inMemory flag which avoids the file locking issue by not writing out a file. I guess this will become the default in the future but it brings me to:

The documentation isn’t so hot either. I get that, it is a new project and nobody wants to be slowed down by having to write documentation. However I couldn’t even find documentation on what the flags are for scriptcs. When I went to find out how to use command line arguments I could only find an inconclusive series of discussions on a bug.

The Bad

There were a couple of things which were so serious they would stop me from running ScriptCS for anything important. The first was script pollution. If you have two scripts in a folder then running the second one will give you the output from the first script. Yikes! So let’s say you have

delete-everything.csx
send-reports-to-boss.csx

running

scriptcs send-reports-to-boss.csx

will run delete-everything.csx. Oops. (Already documented as https://github.com/scriptcs/scriptcs/issues/475)

I also ran into a show stopping issue with using generic collections which I further documented here: https://github.com/scriptcs/scriptcs/issues/483

The final show stopper for me making more use of ScriptCS is that command line argument passing hasn’t really been figured out yet. The issue is that passing arguments normally passes them to ScriptCS instead of the script:

scriptcs.exe .\test1.csx -increase-awesome=true

the solution seems to be that you have to add a “--” to tell scriptcs to use the arguments that follow for the script

scriptcs.exe .\test1.csx -- -increase-awesome=true

However some versions of powershell hate that. The issue is well documented in https://github.com/scriptcs/scriptcs/issues/474

Am I going to Keep Using It?

Well it doesn’t look like I’ll be getting a full environment any time soon in this job. As such I will likely keep up with ScriptCS. I hate not having a real environment because it means I can’t contribute back very well. Although my discovery of C# completion in Sublime might change my mind…

If scriptcs worked on mono (it might, I don’t know) and if there was a flag to generate executables from scripts I would be all over it. It is still early in the project and there is a lot of potential. I’ll be keeping an eye on the project even if I don’t continue to use it.