2013-03-22

Excel is XML - When EPPlus Doesn't Support You

If your application needs to write out Excel files then your best option is to make use of the near magical EPPlus. I've written about EPPlus before and today's post actually stems from that one. You see, the default databars which EPPlus puts in are a little limited. The specific issue the customer had was that when a value of 0 was entered the databar still showed a little bit. If you manually add databars through Excel the problem doesn't exist. I was curious about which options existed in the Excel-created file but were missing from the file created by EPPlus. Could I duplicate them in EPPlus?

Excel is a pretty old application, dating back to when I was just a wee lad. As you might expect, over time the file format became more and more complicated as new features were added and hacks put in (ask me about how dates work in Excel). In Office 2007 Microsoft threw out their old file format and moved to a new, XML-based, file format. They did it for all the major office applications: Word, Excel and PowerPoint. Access is still a disaster, but you should be using LightSwitch, right? It is a far better file format than the old one even though it is XML.

Excel files are actually a collection of XML files all zipped up with a standard zip algorithm. That means that you can just rename your .xlsx file to .zip and open it up with your fully registered version of WinZip. Inside you'll find a bunch of files but the one we're interested in is sheet1.xml.

Excel Contents

I put in some simple databars and opened up this file's XML. I won't include the full contents here as it is pretty long but you can look at it in a gist. Comparing this with what EPPlus generated, the difference became apparent: the whole extLst section was missing. My understanding is that these extLst and ext sections are used to provide extensions to the file format. So while the conditional formatting section existed in EPPlus' version it didn't have the extensions, and it was the extensions which provided the styling I wanted.

Fortunately there is a solution inside EPPlus. When defining the worksheet there is a property called WorksheetXml which contains the full XML document which will be exported when the Excel package is saved. I tied into this XML and injected the appropriate extLst information along with the line in the conditional formatting that links to it. It took me a little while to get everything set up the way it needed to be but that was mainly because of my mental block around XML namespaces.
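The exact code is lost to time, but here is a rough sketch of the approach rather than the code from the post. The extLst body is abbreviated; the real one is the x14 conditional formatting block captured from the hand-made file. Note that WorksheetXml hands back a plain XmlDocument, so this sketch uses the XmlDocument API.

using System.IO;
using System.Xml;
using OfficeOpenXml;

class DatabarFixer
{
    const string MainNs = "http://schemas.openxmlformats.org/spreadsheetml/2006/main";

    static void Main()
    {
        using (var package = new ExcelPackage())
        {
            var sheet = package.Workbook.Worksheets.Add("Sheet1");
            // ... fill in cells and add the EPPlus databar conditional formatting here ...

            // WorksheetXml is the raw sheet XML that EPPlus will serialize on save.
            XmlDocument xml = sheet.WorksheetXml;
            var nsm = new XmlNamespaceManager(xml.NameTable);
            nsm.AddNamespace("d", MainNs);

            // Append an <extLst> element to the <worksheet> root, which is
            // where Excel itself puts the x14 databar extensions.
            XmlElement extLst = xml.CreateElement("extLst", MainNs);
            extLst.InnerXml = "<!-- x14:conditionalFormattings copied from a hand-made file -->";
            xml.SelectSingleNode("/d:worksheet", nsm).AppendChild(extLst);

            package.SaveAs(new FileInfo("databars.xlsx"));
        }
    }
}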

It certainly isn't a pretty way to manipulate Excel documents but if you need some functionality which hasn't been bound by EPPlus yet then it's a pretty good solution. Being able to open an existing Excel file, make small changes and then compare it with the previous version gives an easy path to reverse engineering the file format.

2013-03-21

The Standard Language Fallacy

I had to write a small bit of throwaway code today to validate some database entries. I've been playing a bit with F# at night and this would have been an ideal application of what I've been learning. Unfortunately work has a strict C#-only policy. I'm sure this isn't uncommon. Policies like this are in place so that other developers can jump into an application quickly and help out or pick up where others left off. On the surface it seems to be a great idea and I've probably suggested putting policies like this in place in a previous life. Today I got to thinking about the policy and decided that it is actually terribly flawed.

The premise is that the hardest thing about picking up a new project is learning the language in which it is written. The interesting thing about that is that if you ask any developer how long it takes to pick up a new language they will say "an afternoon" or "a couple of days". Admittedly some of that may be developer hubris, but even if the estimate is out by a factor of two it still isn't very long. The vast majority of the learning curve for joining a new project is in understanding the architecture and the design as well as figuring out where to make changes. Knowing the language in which the application is written is going to be of minor help in that case. Good architecture can and does exist in any language.

On the other side of the equation, preventing developers from trying out new languages narrows their field of expertise. Even if a language is only used by a developer briefly for one project, they may still learn new practices or patterns which are unique to that language.

The principle of linguistic relativity holds that the structure of a language affects the ways in which its speakers conceptualize their world, i.e. their world view, or otherwise influences their cognitive processes.

- Wikipedia

I know that learning TypeScript and F# has opened my eyes to different ways of doing things in my regular C# code. There are also some tasks which are easier in one language or another. I find it very difficult to justify spending more money on the development of an application in C# if the same business needs can be satisfied for half the price in another language, say Python or Scala. Just as planting only a single crop is dangerous, so is supporting only a single language.

As an added bonus, experimenting with other languages is a great way to keep developers. We devs love learning new things, and if your company can't keep developers through high pay then cool technology is a good alternative.

Developing in other languages isn't something to be feared; it is something which should be cautiously explored.

2013-03-20

How to Write a Bug Report

I could not believe it when I checked and found that I don't have a post on how to write a bug report. Over the years I've seen a lot of bug reports. Of course most of them aren't real bugs as that would imply that the code I write isn't perfect. A ridiculous proposition. The bug reports are usually substandard in some way.

The purpose of a bug report is to explain to the developers or support group exactly what went wrong. To do this I like to see bugs which tell a story:

I was attempting to load the add user form from the main menu but when I clicked on “Add User” I was taken to the user list screen. I expected to be taken to a screen where I could add the user.

This story contains the key elements which a developer would need to reproduce the bug. Obviously this is a very simple scenario and one in which it is apparent what the issue is. As the scenarios get more complicated the information needed from the user increases. Fortunately much of this information can be gained from examining logs instead of pestering the user with questions like "What server were you on?" and "At what time did the failure occur?".

Make no mistake, writing bug reports is a pain, and for every issue you get a report of there are likely to be a dozen users who just give up. Frustrated users aren't what you want. Making reporting as simple as possible for users should be a key part of your user interaction strategy. To streamline this process I usually suggest using a tool like Jing. This allows users to quickly record their screen so they don't even need to type in a bug report.

This still requires that users let you know when they have problems. I just stumbled across MouseFlow, which is the actualization of an idea I had some time back for tracking user interactions in the browser. Traditional web analytics let you know what users are doing on your site: where they are clicking and what they're looking at. MouseFlow allows you to look at how users are acting on the page. As the web evolves, more of the user interactions are moving within a page using Ajax or even single page applications. These are more difficult to profile than traditional websites. Some interactions don't need to make server trips so the user's activities are lost. MouseFlow captures these interactions and gives you some great tools for analyzing them.

I haven't used MouseFlow but it looks amazing. When I do get around to trying it out I'll post back with my experiences.

2013-03-18

Nuget on TeamCity for AngelaSmith

Well I had quite the adventure setting up builds for AngelaSmith. CodeBetter are kind enough to host a TeamCity instance on which you can build open source projects. I got an account and started to set up the build. The first part went okay: I got builds running and unit tests going as well as code coverage. I mostly put in code coverage as a metric to annoy Donald Belcham, who doesn't believe in metrics.

Then I turned my attention to building nuget packages. I decided that I was going to do things right and build not just release packages but also symbol packages on the off chance that somebody would like to debug into our very simple library. I was also going to have the build upload packages directly to nuget.org.

The first thing I did was create a nuspec file.
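The actual nuspec isn't reproduced here, but because the package contents come from convention it was little more than metadata, roughly along these lines (the description and placeholder version are mine):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>AngelaSmith</id>
    <!-- Placeholder; the real version number is stamped in by TeamCity. -->
    <version>0.0.0</version>
    <authors>AngelaSmith contributors</authors>
    <description>Builds objects filled with realistic-looking data.</description>
  </metadata>
  <!-- No <files> section: nuget picks up the conventional lib/ and src/ directories. -->
</package>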

I didn't specify any explicit contents for it. Instead I use a convention based directory structure which allows me to add files to the package easily. In the build I copy the output of the build into a lib directory. I copy both the .dll and the .pdb file, which is needed for constructing a symbol package. I also create a src directory adjacent to my lib directory. Into that directory I copy the source code as this needs to be in the symbol package. To do this I just use a bat script. Sometimes the simple solution is still the best.
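The bat script itself wasn't shown; a sketch of what it plausibly did (the project paths are guesses) is:

rem Lay out the nuget convention directories: binaries (plus the pdb for
rem the symbol package) go into lib, source files go into src.
mkdir lib
mkdir src
copy AngelaSmith\bin\Release\AngelaSmith.dll lib\
copy AngelaSmith\bin\Release\AngelaSmith.pdb lib\
xcopy /S /I AngelaSmith\*.cs src\AngelaSmith\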

TeamCity has a great set of nuget package tasks. I started with the one for assembling the package.

TeamCity Nuget Package Settings

This is very simple: you just need to set the path to the nuspec file. I added a version number based on the build number and a "-beta" suffix. When I took this screenshot I was pretty sure that this followed semantic version numbering. As it turns out it doesn't. As these are builds not intended for release I should be numbering them as +build${buildNumber}. Unfortunately nuget does not allow for proper semantic versioning so we have to live with a "-" instead of a "+".

The next step was to publish the packages to nuget.org. To do this I set up a new account on nuget.org just for publishing AngelaSmith packages. The secret key is entered into TeamCity, which takes care of uploading both the standard package and the symbol package. I don't know how secure the key is, which is one of the reasons I used a standalone account. If it is compromised none of the other nuget packages I maintain will be vulnerable.

With this all set up the AngelaSmith builds now push directly to nuget.org.

2013-03-18

When to Unit Test

A few days back there was a big debate about when to do unit testing and how much should be tested. A bunch of the programmers I really admire like Greg Young, Jimmy Bogard and Uncle Bob Martin weighed in. The debate was about what you would expect: Uncle Bob was proselytizing his belief that testing is key and that the quest should be for 100% code coverage.

New Blog. The Start-Up Trap. http://t.co/341M4wNDh2

- Uncle Bob Martin (@unclebobmartin) March 5, 2013

On the other side of the debate the more pragmatic Greg Young and Jimmy Bogard were suggesting that blindly testing everything is a naive approach: one should pick and choose the code to test. How much code should be tested? Well, it depends. I've been in the same boat as Jimmy, having tested nothing, tested everything and tested something in-between. What's the right mixture? I have no idea.

To me it comes down to this: what is of value to your business? What's the consequence of failure?

If you're a bank then the consequence of failure is pretty bad. If you're a service like Twitter then it isn't too bad. For a long time Twitter was super unreliable, but they got through it and became highly successful.

Remember this guy?

Test the portions of your application which would have the greatest impact if they were broken. Are you reliant on processing invoices? Then invoice processing should be the focus of your testing. Are you a service which authenticates users? Then that should be the focus of your testing.

Another approach is to test the parts of your application which change the most frequently. The idea being that the parts which change the most frequently are unlikely to have the same level of exposure to user testing as the rest of the site. Changing stable code is necessary but likely to introduce problems.

I don't know what the solution is to how much testing needs to be done. As with all difficult problems, I feel like a single solution is oversimplifying the problem.

No testing? Test everything? Somewhere in the middle.

2013-03-15

Typescript - Cleaning up Warnings

When you're getting started with typescript you're probably going to run into a bunch of warnings and errors. Pay attention to the messages; I've found them to be almost entirely correct. However you will see errors of the form

The name $ does not exist in the current scope

Why, that's jQuery! Why doesn't typescript know about jQuery? Well, typescript doesn't know about anything other than what's in the current file. It is as dumb as a muffin. A tasty, tasty cranberry orange muffin. Maybe it has that streusel topping on it... sorry, I really like muffins. What can you do to solve this problem? That's where definition files come into play. Definition files are, in effect, C include files. They define the public interfaces of the libraries you're trying to use. Libraries like jQuery and d3.js. Libraries like your own libraries.

If you're referencing your own typescript file then you don't need to create a declaration file. Typescript can read its own typescript files so there is no need to generate declarations for them unless you're distributing the declarations to other developers to develop against. To instruct the transcompiler to make use of either an external typescript file or a declaration file you can include a reference in your typescript file.

/// <reference path="jquery.d.ts" />
/// <reference path="myLibrary.ts" />

You can see here that I pulled in both a declaration file for jQuery and a typescript file I created (substitute whatever your files are actually called).

If you're looking for a declaration file for a publicly distributed library then there are some great options. You can manually download it from Boris Yankov's github repo at https://github.com/borisyankov/DefinitelyTyped. Or you can grab a package off nuget. If you're using node then you can install the typescript definition tool from http://www.tsdpm.com/ and install packages using:

tsd install node

To generate your own definition files from a typescript file you can pass --declaration to the transcompiler. This will, unfortunately, only generate declarations from typescript files. If you want to generate declarations from pure javascript then you'll need to write them by hand.
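For example, tsc --declaration widgets.ts will emit a widgets.d.ts next to the compiled javascript. A hand-written declaration for a plain javascript library looks much the same; here is a sketch for a made-up greeter.js which exposes a global Greeter object:

// greeter.d.ts - hand-written declaration for a hypothetical greeter.js.
// Only the public surface is described; no implementation lives here.
declare module Greeter {
    function greet(name: string): string;
    var version: string;
}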

2013-03-14

Chip Away at It

Sometimes I go to a bootcamp style gym and the prescribed workout is something insane like 100 pullups, 200 pushups and 300 squats. On the surface this seems impossible. On a good day I can do perhaps 20 pullups in a row, which is only 20% of the number needed here. During the workout, which took me a little over an hour last time, the muscle rich coaches yell platitudes like something from a Simpsons episode. One of the favorite chants is "Chip away at it, chip away".

With ripped hands, and believing that I would never be able to inhale like a normal person again, I have very little interest in hearing the chants. (My favorite one of the last week was "burpees should be your rest, they're easy".) However, I find the chip away comment to be pretty applicable to software development. On any project of any size and of any age there is going to be a great deal of technical debt. Trying to pay down technical debt all at once is impossible. You'll never convince management that all new development needs to stop so that you can make invisible improvements to the code base. Instead you should make small improvements as you go about other development.

My rule is that if I've opened a file, even if I'm just reading code, then I need to make an improvement. It could be as simple as removing unused using statements or changing a string concatenation to a string.Format. These don't seem like they're going to do much to pay down your technical debt but their effects are cumulative. Eventually you run out of trivial changes to make in your files and you start making slightly larger changes. These slightly larger changes add up to larger changes. Before you know it your code base has been improved.
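As a trivial, made-up example of the kind of drive-by improvement I mean:

int orderId = 42;             // sample values for illustration
string customerName = "ACME";

// Before: the kind of line you stumble across while reading a file.
string before = "Order " + orderId + " failed for " + customerName + ".";

// After: the thirty-second improvement made in passing.
string after = string.Format("Order {0} failed for {1}.", orderId, customerName);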

I followed this idea at a job once. When I started it took us 2 months to do a release of our software. We identified the biggest pain point and automated it. Then we repeated this process until we actually ran out of things to automate. It took a couple of years but we got builds down so that they were running every night and producing full release packages. I automated myself out of a job. At least I would have if we didn't end up responsible for two dozen additional products. If we hadn't chipped away at it we would have been working 90 hour weeks just to stay on top of things.

Paying down technical debt is like paying a mortgage. If you just pay a little bit more each week then your debt will be paid down much sooner.

Chip away at it.

2013-03-13

Why you shouldn't be bothering with routes

In my mind one of the most abused bits of functionality in ASP.NET MVC is the routing engine. I don't think this is a problem unique to ASP.NET MVC, as other MVC frameworks like Rails also make heavy use of routing.

In a nutshell, routing allows you to map the URL (or URI if you want to be entirely correct) to a file or action on the server. For instance, look at the URL for this post:

http://blog.simontimms.com/2013/03/04/why-you-shouldnt-be-bothering-with-routes

You can see that there is some information built into it. I don't know the internals of wordpress but you can be pretty much assured that there isn't actually a file on disk called why-you-shouldnt-be-bothering-with-routes in the directory /2013/03/04/. Instead this URL is used by a routing engine to pass appropriate parameters to a script which looks up information in a database. The combination of the domain and the name of the blog post is probably sufficient to identify the record in the database. The date information in there is just a hint to readers so they know when the blog was written.

The only thing is: it's wrong. I'm writing this blog on the 9th of March and it is scheduled to be published on the 13th. The reason that date is there is that I created a draft on the 4th. That I created a draft on the 4th is immaterial. I have some drafts from 9 months ago. Heck, I have a half written rant about an architectural failure at Backblaze which is so out of date that it will likely never be published. The draft date is not important in the least. Fortunately nobody looks at the URL to gather this information.

URLs are not meant to convey useful information to people; they're there as instructions to computers. From time to time it is useful to have a friendly URL that people can remember, but this is typically only for allowing them to type it in from a piece of paper. URL shortening services are fantastic for that.

Embedding information in the URL might look nice but it has very limited utility. In the .net world routing is provided by System.Web.Routing and this is typically configured in the startup of an MVC application. I have seen a number of MVC applications which have dozens, even hundreds of routes defined. Even Phil Haack has contributed to the madness by providing a route debugger. Stop the madness! Use the default routes!

Having complex routes makes it very difficult to figure out which controller is causing you trouble. It is also a boatload of extra code you need to test and understand. The fact that a route debugger is necessary should be a big hint that your logic is too complicated. The default route is sufficient for 99% of the requests coming to your site.
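For reference, this is the default route registration that ships with the ASP.NET MVC project template; for most applications it is all you need:

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The one route nearly every request can be matched by.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}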

The only legitimate uses I can think of for routing requests are:

  1. You need to do some optimization for search engines. Apparently Google places some emphasis on the structure of a URL. It is difficult to tell because Google keeps pretty quiet about how they do page ranking. Optimizing for search engines is a pretty sleazy business anyway so you're likely better off not even trying.

  2. Mapping old URLs. This is a really good call. It sucks when there is a change to the underlying engine behind a website and suddenly none of your links work. By setting up a mapping, as sketched below, you can avoid a lot of 404s in your error logs.
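A hypothetical legacy mapping (the URL pattern and controller names here are made up), added to the RegisterRoutes method above:

// Keep links from the old engine alive by routing its URL shape
// to the new controller.
routes.MapRoute(
    name: "LegacyPost",
    url: "archive/{year}/{month}/{title}",
    defaults: new { controller = "Posts", action = "ByTitle" }
);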

If these two reasons don’t match your use case then don’t even bother adding custom routes. It will save you headaches in the future.

2013-03-12

Fizz Bizz with Shell Scripting

It seems like I'm writing a lot of fizzbizz examples at the moment. It is kind of fun experimenting with different languages. There are always different constructs for looping and recursing. I'm also super happy that there are a lot of languages to get through before I have to write it in prolog. I have pretty traumatic memories of prolog from university. Today's language is shell script.

Of course shell script isn't just one language; it depends on which shell you're using. When I worked a lot with unix derivatives I mostly worked with bash scripting, unless it was a particularly old or odd OS, in which case we would end up on plain sh. There are a bunch of other shells out there and I can remember a time when both csh and ksh were also popular. I can also remember when druids sacrificed goats. Is there a link between ksh's stupid arcane syntax and goat slaughter? I can't prove conclusively that there is, but there are no dead goats here and no ksh syntax either. Draw your own conclusions.

I thought I would try using zsh style scripting for fizz bizz. Zsh is a newer shell which has many of the features of bash and also borrows from other shells. Now I say “newer” but it still dates to 1990.
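The original gist has been lost, so here is a reconstruction along the lines described below. The remainder mod 6 trick only works if fizz and bizz hang off divisibility by 3 and 2, so that is what this sketch assumes (which word goes with which divisor is my guess):

#!/bin/zsh
for i in {1..100}; do
  case $(( i % 6 )) in
    0)   echo "fizz bizz" ;;  # divisible by both 3 and 2
    3)   echo "fizz" ;;       # divisible by 3 only
    2|4) echo "bizz" ;;       # divisible by 2 only
    *)   echo $i ;;           # divisible by neither
  esac
done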

The script starts with a sha-bang which instructs the program loader that to run the script it should execute /bin/zsh, which is where zsh lives on my machine. It might be better to replace it with #!/usr/bin/env zsh, which instructs the program loader to launch env, which in turn searches the path for zsh. There is an increased security risk in doing so, as an alternate zsh might be selected, but this risk is probably worth it for the increased portability.

I was hoping that it was possible to put mathematical expressions in the case statements but that's not possible. Instead I took advantage of case statements here and the fact that we can take the remainder modulo 6 to do most of the fizz bizz heavy lifting. On line 3 there is a bit of a syntactic oddity: zsh is not really designed for doing a lot of math so arithmetic operations need to live inside double parentheses.

Gosh, I can’t wait for the next interview where I’m asked about fizz bizz. I am going to kill that question.

2013-03-11

Adding TypeScript to an Existing Project

If you read this blog with any sort of frequency you'll know that I'm somewhere in the range of 15-18% about typescript. In my travels the other day I created a new web project on a machine without typescript installed (go on, ask me how many computers I have at home). When I went to add typescript to the project it didn't compile and I squinted my third hardest squint. Digging around on the web I found that one might need to add a target to the .csproj file. The examples I found all pointed to the typescript found in C:\program files.

I hate that.

Doing so means that you have to install typescript, in the default location, for the compile to work. That's too much friction for setting up the project on a new machine. Instead I copied the contents of that directory into a tools directory in my project and checked it in. Now when people compile it they won't even notice that it is building their typescript for them, and the friction for a new developer is zero.

The target? Well just paste this into your .csproj file right before the closing </Project> tag.
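The snippet itself didn't survive the blog migration; it was presumably the stock TypeScript targets import, repointed at the copy checked into source control, something like:

<!-- Import the TypeScript build targets from the copy committed to the
     repository instead of from C:\Program Files. The file name below is
     what the 2013-era TypeScript installer shipped. -->
<Import Project="$(SolutionDir)tools\typescript\Microsoft.TypeScript.targets" />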

I put typescript in a tools/typescript directory at the same level as my .sln file. All good to go.