2009-09-16

IE Caches JSON

I ran into an interesting problem today on everybody's favorite browser, Internet Explorer. At issue was a page which was partially populated using jQuery's getJSON function. As it turns out, even though I had caching set to no-cache on the server, IE was perfectly happy to cache the document because it was fetched using GET. Apparently this is OK to do. Obviously this ruins my site's functionality, so I instructed jQuery to override it by setting

$.ajaxSetup({ cache: false });

This setting works by adding a nonsense value to the end of each request. Looking at the actual jQuery source, we can see that the current time is appended to the request, which makes the browser believe it is a new URL.

if ( s.cache === false && type === "GET" ) {
    var ts = now();

    // try replacing _= if it is there
    var ret = s.url.replace(rts, "$1_=" + ts + "$2");

    // if nothing was replaced, add timestamp to the end
    s.url = ret + ((ret === s.url) ? (rquery.test(s.url) ? "&" : "?") + "_=" + ts : "");
}

Problem solved
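For context, here is roughly what I mean by no-cache on the server, sketched as an ASP.NET MVC action (an assumption on my part; the controller and action names are made up for illustration). These are the headers IE was cheerfully ignoring for the GET-fetched JSON, hence the cache setting above.

using System.Web;
using System.Web.Mvc;

public class ActivityController : Controller
{
    // Hypothetical action behind the getJSON call. The headers below tell
    // well-behaved caches not to store the response; IE cached the GET anyway.
    public JsonResult Totals()
    {
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetNoStore();

        return Json(new { success = true });
    }
}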

2009-08-28

Caching and Why You Shouldn't Listen to Blogs

A while ago I wrote an article entitled "HTML Helper for Including and Compressing Javascript". No, don't click on that link, because it is all wrong. The gist of the article was that, in order to save clients from downloading a bunch of javascript files (and opening a bunch of costly connections) each time they visited the site, a handler would include all those files in the HTML page and, as an added bonus, compress them. I forgot one key thing, and leddt was good enough to point it out: because every page you load on the site has the javascript inlined, there is no way to cache it.

So what do we do? I think the best solution is to stop compressing the javascript on request. Instead, combine and compress the javascript as part of the build process and then serve it up as a separate request. The disadvantage is that if you use a library like jquery-ui on only one page, you end up downloading it for every page the user visits. However, that price is a one-time cost, whereas with the terrible solution I suggested before you pay it again and again and again. In that way it is much like not taking out the trash: I have to hear about it every day, plus it smells. You wouldn't believe how hard it is to get the smell out of the fur of the 40 dogs in the puppy mill I run in my garage.
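To make that concrete, here is a rough sketch of the build-step idea, assuming a small console tool that runs during the build; the file names are illustrative, and a real build would also push the combined file through a minifier such as the YUI Compressor.

using System.IO;

// Build-step sketch: concatenate the site's scripts into one file which is
// served, and cached by the browser, as a single request. Paths are illustrative.
class CombineScripts
{
    static void Main()
    {
        string[] sources =
        {
            @"Scripts\jquery-1.3.2.js",
            @"Scripts\jquery-ui.js",
            @"Scripts\site.js"
        };

        using (StreamWriter combined = new StreamWriter(@"Scripts\site.combined.js"))
        {
            foreach (string source in sources)
            {
                combined.WriteLine("// " + source);
                combined.WriteLine(File.ReadAllText(source));
            }
        }
    }
}

The page then references site.combined.js with a single script tag, so the client pays the download cost once and caches the file from then on.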

How things are cached in a browser has always been a mystery to me, and there isn't a whole lot on the internet about the technicalities of what browsers do and do not cache. Basically it comes down to the Cache-Control header, which governs how devices retain content. The ever so verbose W3C HTTP 1.1 spec defines the grammar for Cache-Control as


Cache-Control = "Cache-Control" ":" 1#cache-directive  
 cache-directive = cache-request-directive  
 | cache-response-directive  
 cache-request-directive =  
 "no-cache" ; Section 14.9.1  
 | "no-store" ; Section 14.9.2  
 | "max-age" "=" delta-seconds ; Section 14.9.3, 14.9.4  
 | "max-stale" [ "=" delta-seconds ] ; Section 14.9.3  
 | "min-fresh" "=" delta-seconds ; Section 14.9.3  
 | "no-transform" ; Section 14.9.5  
 | "only-if-cached" ; Section 14.9.4  
 | cache-extension ; Section 14.9.6  
 cache-response-directive =  
 "public" ; Section 14.9.1  
 | "private" [ "=" 1#field-name ] ; Section 14.9.1  
 | "no-cache" [ "=" 1#field-name ]; Section 14.9.1  
 | "no-store" ; Section 14.9.2  
 | "no-transform" ; Section 14.9.5  
 | "must-revalidate" ; Section 14.9.4  
 | "proxy-revalidate" ; Section 14.9.4  
 | "max-age" "=" delta-seconds ; Section 14.9.3  
 | "s-maxage" "=" delta-seconds ; Section 14.9.3  
 | cache-extension ; Section 14.9.6  
 cache-extension = token [ "=" ( token | quoted-string ) ]

Pretty simple. The fields you have to watch out for are max-age and public/private. max-age is how long a cache is permitted to retain the document before it must re-request it, while public indicates the response may be cached and shared between users (private restricts caching to a single user). How actual browsers implement these is out of your hands, so all you can do is make sure your site obeys the standard.
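For what it's worth, here is a minimal sketch of emitting those directives from an ASP.NET MVC action; the controller, the path and the week-long max-age are made up for illustration, but this is the sort of thing that could serve the combined script from earlier.

using System;
using System.Web;
using System.Web.Mvc;

public class ScriptsController : Controller
{
    // Serve the combined javascript with explicit Cache-Control directives:
    // "public, max-age=604800" lets any cache hold on to the file for a week.
    public ActionResult Combined()
    {
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromDays(7));

        return File(Server.MapPath("~/Scripts/site.combined.js"), "text/javascript");
    }
}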

But don’t trust me, I’m just a blogger and I already lied to you once. In my next post I’ll talk a bit about caching in ASP.net and how to save database trips.

2009-08-19

Bookmarklet for MSDN

Today I was cruising the old MSDN, using their much better low bandwidth version, when I stumbled across a page on events in C#. What got my attention was the example code: all grey and boring, not to mention hard to follow. What this page needed was a little bit of SyntaxHighlighter, Alex Gorbatchev's glorious javascript library which adds syntax highlighting to source code. I use it right here on my blog, as does everybody else who passes the coolness test. The test, of course, being the use of SyntaxHighlighter. I hacked at the jQueryify bookmarklet and managed to get it to load the correct SyntaxHighlighter libraries and stylesheets. The next time you're on MSDN squinting at a piece of code, try hitting MSDN Style. It only works on Firefox at the moment, but I'll update it to work on IE as well. Simply drag this link to your bookmarks bar and you're good to go:

MSDN Style

Source

javascript:(function(){
    function getScript(url, success){
        var script = document.createElement('script');
        script.src = url;
        var head = document.getElementsByTagName('head')[0], done = false;
        script.onload = script.onreadystatechange = function(){
            if ( !done && (!this.readyState ||
                this.readyState == 'loaded' || this.readyState == 'complete') ) {
                done = true;
                success();
            }
        };
        head.appendChild(script);
    }
    getScript('http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shCore.js', function(){});
    getScript('http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushCSharp.js', function(){});
    getScript('http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js', function() {
        loadStyle('http://alexgorbatchev.com/pub/sh/2.0.320/styles/shCore.css');
        loadStyle('http://alexgorbatchev.com/pub/sh/2.0.320/styles/shThemeDefault.css');
        return completeLoad();
    });
    function loadStyle(url)
    {
        var style = document.createElement('link');
        style.href = url;
        style.rel = 'stylesheet';
        style.type = 'text/css';
        document.getElementsByTagName('head')[0].appendChild(style);
    }

    function completeLoad() {
        $('.libCScode:not(div)').addClass('brush: csharp');

        SyntaxHighlighter.highlight();
    }
})();

2009-08-07

A watershed moment

News in the twitterverse today is all about this article at InfoWorld or, more precisely, the guidelines produced by the American Law Institute. In a small section they suggest that software companies should be held liable for shipping software with known bugs. Damn right they should.

The software world has long survived its own mistakes by hiding behind the EULA. Let's look at a license agreement, perhaps the Windows XP license agreement, as a typical example. Here is an excerpt from section 2.15:

LIMITATION ON REMEDIES; NO CONSEQUENTIAL OR OTHER DAMAGES. Your exclusive remedy for any breach of this Limited Warranty is as set forth below. Except for any refund elected by Microsoft, YOU ARE NOT ENTITLED TO ANY DAMAGES, INCLUDING BUT NOT LIMITED TO CONSEQUENTIAL DAMAGES

So if Windows XP crashes and you lose an assignment, or if your battleship sinks, Microsoft is sorry, but they aren't going to stand up and take responsibility. At least not for more than the purchase price of Windows, and I'll bet you it is a trial to get them to cough up even that. A lot of people are upset about this guideline. By a lot of people I, of course, mean software vendors. I suppose I too would be upset if I shipped known bad software, but I don't, because my customers deserve more than that. I'm not saying my software is perfect, it isn't, but it has no known bugs. And that's the key right there: known bugs.

All software is going to have bugs in it; even with the most rigorous testing and test-driven development there are still going to be issues. Most of these bugs are not covered under the guidelines, because a concerted effort was made to find and fix them during the development process. This doesn't mean that you can't ship software with known issues, it just means that the risk of doing so is now spread more evenly between you and your client. Which is only right.

Drawing again on the tired car analogy, people would be outraged if Ford shipped a car which they knew imploded in the rain. Sure, the car industry isn't a perfect analogy, but it is pretty good. Lots of software is responsible for our lives in the same way cars are; heck, there is a huge amount of software in cars. Why should software vendors be any less liable?

What does this really mean for companies? In a word: transparency. Let's say that I ship some software with an issue, it hurts somebody, and they decide to sue. It is going to cost me money in lawyers even if I had no idea that the defect existed, because I've got to show that I didn't know about it at shipping time. That is going to require a bunch of e-discovery and searching of e-mails: costly. What can I do to avoid being sued in the first place? Easy: publish all the bugs in my software in a system the public can see. I mean internal bugs as well as external bugs, everything. Next, I fix bugs before I write new code, and I fix them promptly. If I can gain a reputation for being open and responsive, people are far more likely to write off my mistakes as just that.

Now somebody is going to argue that publishing every defect puts me at a PR disadvantage compared to the guys down the street who don't advertise their bugs. I don't believe that for a second. People who buy software are generally not dumb; they know that bugs exist and that they're probably going to find some in the software they buy. Would you rather deal with a company which won't admit it has bugs, and even threatens to sue people who bring them issues, or with one which has shown itself to be responsive to customer issues? I'll take responsive every time.

The time has passed when software was used only for esoteric purposes by men in white coats, and it is time the industry grew up and learned that if you get a paper route you can't dump the papers in the garbage without consequences. Stop dumping bad code on the world.

2009-08-07

A Wonderful Example of Comments Causing Problems

Current thinking is that comments in code should be helpful and should document only the why, not the how; the how is what the code is for. That wasn't always the case. For example:

$courseNick = $_POST['courseNick']; //Course nick
$phone = $_POST['phone']; //Course nick

See how useless that is?

2009-06-23

HTML Helper for Including and Compressing Javascript

This article is outdated and ill-advised

If you're like me, then you probably have a whole bunch of javascript includes on your site cluttering up the top of your pages. I tend to keep most of mine in Site.Master; it isn't so much that I use them all on every page, but I do make use of a large enough subset that I can't be bothered to load them in each view. For example, here is what I have at the top of a Site.Master page for my Activity Tracker project:

2009-06-22

Bar Graphs Using Flot and ASP.net MVC - Part II

I wasn't really planning a part II to this article, but I left the code in something of a mess and we can do much better than writing a bunch of javascript each time we need to make a graph. Let's use the HtmlHelper and extension methods. The HtmlHelper is a class in System.Web.Mvc which is often extended to supply shortcuts for writing out HTML. xVal extends it to add client-side validation, and we're going to do the same thing to write out the javascript for our graphs. We start by creating a new extension class to complement HtmlHelper:

using System;
using System.Collections.Generic;
using System.Web.Mvc;

public static class HTMLExtensions
{
    // Emits the javascript for a flot bar graph drawn into the div named divName.
    // The dictionary maps each bar's label to its value.
    public static string BarGraph(this HtmlHelper helper, string divName, IDictionary<string, int> dataPoints)
    {
        String dataPointsArrayName = divName + "Values";
        String dataPointLabelsArrayName = divName + "Labels";

        // Data points: [0.5, value0], [1.5, value1], ...
        String returnText = @"
var {0} = [";

        int counter = 0;
        foreach (KeyValuePair<string, int> kvp in dataPoints)
        {
            if (counter > 0)
                returnText += ",";
            returnText += "[" + counter + ".5, " + kvp.Value + "]";
            counter++;
        }
        returnText += "];\n";

        // Axis labels: [1, "label0"], [2, "label1"], ...
        returnText += @"var {1} = [";
        counter = 1;

        foreach (KeyValuePair<string, int> kvp in dataPoints)
        {
            if (counter > 1)
                returnText += ",";
            returnText += "[" + counter + ", \"" + kvp.Key + "\"]";
            counter++;
        }
        returnText += "];\n";

        // The flot call itself, run once the document is ready.
        returnText += @"$(document).ready(function()
{
    $.plot($(""#{2}""), [{0}], {
        series: {
            data: {0},
            color: ""rgb(182,188,194)""
        },
        bars: { show: true, barWidth: 1.0 },
        yaxis: { min: 0 },
        xaxis: { ticks: {1} }
    });
});";

        returnText = returnText.Replace("{0}", dataPointsArrayName).Replace("{1}", dataPointLabelsArrayName).Replace("{2}", divName);

        return returnText;
    }
}

This class is a bit of a mess of string appending, but hopefully changes to it will be infrequent. I know somebody is going to jump on the string appending, but before you do, take a read of Jeff Atwood's excellent examination of micro-optimization. The key takeaway is that all the messy javascript generation which used to live in the views is now centralized in one place.

So now we have an extension method we can call from our View; let's pop back over to that. The page has two graphs on it and it used to have a huge amount of hand-coded javascript. That can all be replaced with

<% Dictionary topTeamsDictionary = (ViewData["TopTeams"] as List).ToDictionary(n => n.team.teamName, n => n.total);
   Dictionary userActivityTotals = (ViewData["UserActivityTotals"] as List).ToDictionary(n => n.ActivityName, n => n.TotalPoints); %>

The data in ViewData was in a list, and that list is needed by other things on the page, so we quickly transform the List into a Dictionary using LINQ. That is pretty much it. As long as the div referenced in the call to BarGraph exists, you should get the same stylish graphs as yesterday. Obviously there are a lot of other options which can be passed through to the graph; these are left as an exercise for the reader. I've always wanted to say that.
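For completeness, the view then just needs a div for flot to draw into and a call to the helper inside a script tag; a sketch, assuming the HTMLExtensions namespace has been imported into the view:

<%-- The div's id must match the name passed to BarGraph. --%>
<div id="topTeams" style="width: 500px; height: 300px;"></div>

<script type="text/javascript">
    <%-- BarGraph emits the data arrays and the $.plot call for this div. --%>
    <%= Html.BarGraph("topTeams", topTeamsDictionary) %>
</script>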

2009-06-21

Bar Graphs Using Flot and ASP.net MVC

There are a bunch of posts out there about using flot to create nifty HTML graphs, but the Internet is all about reiteration and I need to start posting more frequently. Flot is a javascript library which uses the HTML 5 canvas element to draw simple vector graphics. It works natively on good browsers and, with a little bit of hacking, on IE8 too. At its simplest, one needs only include the jQuery and flot libraries. Technically the excanvas library only needs to be included for IE, but it doesn't seem to do any harm to include it all the time, and the packed version is just a few kb.

2009-04-29

Building Adobe Air Files

Assembling Adobe Air installers is pretty easy. In this article I'll show you how to put together an ant build.xml file which will do the build for you.

For the incurably impatient, here is the whole thing:

Let’s break it down a bit

Here we set up some properties to be used in the rest of the file. I’m sure I don’t have to quote the Pragmatic Programmer about repeating yourself.

Air applications have to be signed. As part of a release process you'll want to generate a proper key and keep it somewhere safe, like on a CD in the cat's litter box. However, we don't want developer builds getting out into the wild, so we'll just generate a key here and use it.

You'll notice on line 12 we sleep for a couple of seconds. Why is that? It turns out that the key generator returns before it is actually done generating, so we give it a few extra seconds; otherwise you'll get an error like

failed while unpackaging: [ErrorEvent type="error" bubbles=false cancelable=false eventPhase=2 text="invalid package signature" errorID=5022]
starting cleanup of temporary files
application installer exiting

in the Air installation log. (See http://kb.adobe.com/selfservice/viewContent.do?externalId=kb403123 to read how to enable logging of Air installations)

Finally we come to actually building the application

Here I'm replacing the version number in the application.xml with the SVN_REVISION. Our builds run inside the wonderful Hudson build management system, which is kind enough to pass through a token containing the revision number from SVN. Finally we execute adt, the Air packaging tool, giving it a list of the files we would like to have inside the package. We list them by hand as a double check against accidentally including extra files, like bankingInformation.xls.

That’s it! The only real trick is pausing for the key generation to complete.

2009-04-18

ASP.net MVC returning JSONP

I've been working on a piece of code which returns JSON in response to some request. It had all been going fine on localhost, but I've just deployed it to my new test server, started to access it using jQuery, and it stopped working. This is, of course, typical. This time, however, my problem was ignorance rather than a programming blunder. As it turns out, when returning JSON across domains you need to actually return JSONP in order to get callbacks working. What is JSONP? I'm glad you asked, because I had no idea either. Basically it is the same JSON you love, but wrapped with a bit of text which specifies the name of the function to call when the response comes back.

JSON:

{"userID":"00000000-0000-0000-0000-000000000000","success":"false"}

JSONP:

somefunction({"userID":"00000000-0000-0000-0000-000000000000","success":"false"})

Easy enough. In jQuery you just need to add another parameter to the JSON call in order to pass the name of the callback function to the server

$.getJSON(URL + "/json/Message/sendMessage?userName=" + $("#userName3").val() +
    "&messageText=" + $("#message").val() +
    "&userKey=" + key +
    "&jsoncallback=?",
    function(json)
    {
    }
);

jQuery will automatically replace the ? with the name of your callback function, in this case an anonymous function.

Having discovered all of this, I realized that I now had a ton of code returning JsonResults which needed to be changed. I figured the best way to do this was to create a JsonpResult based on JsonResult. So I did just that, basing it off of the now open-sourced ASP.net MVC JsonResult:

using System;
using System.Web;
using System.Web.Mvc;
using System.Web.Script.Serialization;

public class JsonpResult : System.Web.Mvc.JsonResult
{
    public override void ExecuteResult(ControllerContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }

        HttpResponseBase response = context.HttpContext.Response;

        if (!String.IsNullOrEmpty(ContentType))
        {
            response.ContentType = ContentType;
        }
        else
        {
            response.ContentType = "application/json";
        }
        if (ContentEncoding != null)
        {
            response.ContentEncoding = ContentEncoding;
        }
        if (Data != null)
        {
            // The JavaScriptSerializer type was marked as obsolete prior to .NET Framework 3.5 SP1
            #pragma warning disable 0618
            HttpRequestBase request = context.HttpContext.Request;

            JavaScriptSerializer serializer = new JavaScriptSerializer();

            // If the request names a callback, wrap the JSON in it: callbackName({...})
            if (null != request.Params["jsoncallback"])
                response.Write(request.Params["jsoncallback"] + "(" + serializer.Serialize(Data) + ")");
            else
                response.Write(serializer.Serialize(Data));
            #pragma warning restore 0618
        }
    }
}

I then extended Controller

using System.Text;
using System.Web.Mvc;

public class WopsleController : Controller
{
    protected internal JsonpResult Jsonp(object data)
    {
        return Jsonp(data, null /* contentType */);
    }

    protected internal JsonpResult Jsonp(object data, string contentType)
    {
        return Jsonp(data, contentType, null);
    }

    protected internal virtual JsonpResult Jsonp(object data, string contentType, Encoding contentEncoding)
    {
        return new JsonpResult
        {
            Data = data,
            ContentType = contentType,
            ContentEncoding = contentEncoding
        };
    }
}

and altered all my controllers to extend WopsleController rather than Controller. Seems to work pretty well.
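To round things out, here is a sketch of what one of those controller actions ends up looking like; the names mirror the sendMessage URL from the jQuery call above, and the body is obviously stubbed out.

using System;
using System.Web.Mvc;

// Inherits WopsleController so the Jsonp helpers are available.
public class MessageController : WopsleController
{
    // Called cross-domain as /json/Message/sendMessage?userName=...&jsoncallback=?
    // JsonpResult wraps the serialized object in the supplied callback name.
    public JsonpResult SendMessage(string userName, string messageText, string userKey)
    {
        // ... send the message here ...

        return Jsonp(new
        {
            userID = Guid.Empty.ToString(),
            success = "true"
        });
    }
}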