2015-07-30

Casting in Telerik Reports

Short post as I couldn’t find this documented anywhere. If you need to cast a value in the expression editor of a Telerik Report then you can use the conversion functions:

  • CBool
  • CDate
  • CDbl
  • CInt
  • CStr

I used it to cast the denominator here to get a percentage complete:

http://imgur.com/LE1hUUP.png

I also used the Format function to format it as a percentage. I believe the format string here is passed directly into .NET’s string formatting, so anything that works there will work here.
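
Roughly speaking, that means a standard .NET format string such as {0:P1} should behave the same as it does in plain C#; a quick illustration with made-up numbers:

double completed = 42;
double total = 100;
// "P1" is the standard .NET percent format with one decimal place
Console.WriteLine(string.Format("{0:P1}", completed / total)); // e.g. "42.0 %", exact output depends on culture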

2015-07-22

Unit Conversions Done (Mostly) Right

Thanks to a certain country, which for the purposes of this blog we’ll call Backwardlandia, that uses a different unit system, there is frequently a need to support two wildly different units for the same value. Temperature is a classic one: it could be represented in Centigrade, Fahrenheit, Kelvin or Rankine (that’s the absolute temperature scale, like Kelvin, but using Fahrenheit-sized degrees). Centigrade is a great, well devised unit that is based on the freezing and boiling points of water at one standard atmosphere. Fahrenheit is a temperature system based on the number of pigs’ heads you can fit in a copper kettle sold by some bloke on Fleet Street in 1832. Basically it is a disaster. Nonetheless Backwardlandia needs it, and they have so many people and so much money that we can’t ignore them.

I cannot count the number of terrible approaches there are to doing unit conversions. Even the real pros get it wrong from time to time. I spent a pretty good amount of time working with a system that put unit conversions between the database and the data layer, in the stored procedures. The issue with that was that it wasn’t easily testable, and it meant that directly querying a table could yield units in either metric or imperial. You needed to explore the stored procedures to have any idea what units were being used. It also meant that any other system that wanted to use this database had to be aware of the, possibly irregular, units used within.

Moving the logic a layer away from the database puts it in the data retrieval logic. There could be worse places for it, but it does mean that all of your functions need to have the unit system in which they are currently operating passed into them. Your nice clean database retrievals become polluted with knowledge of the units.

It would likely end up looking something like this:

public IEnumerable<Pipes> GetPipesForWell(int wellId, UnitSystem unitSystem)
{
    using(var connection = GetConnection()){
        var result = connection.Query<Pipes>("select id, boreDiameter from pipes where wellId=@wellId", new { wellId});
        return NormalizeForUnits(result, unitSystem);
    }
}

I’ve abstracted away some of the complexity with a magic function that accounts for the units and it is still a complex mess.

##A View Level Concern
I believe that unit conversion should be treated as a view level concern. This means that we delay doing unit conversions until the very last second. By doing this we don’t have to pass the current unit information down to some layer deep in our application. All the data is persisted in a known unit system (I recommend metric) and we never have any confusion about what the units are. This is the exact same approach I suggest for dealing with times and time zones: everything that touches my database or any persistent store is in a common time zone, specifically UTC.

If you want to feel extra confident then stop treating your numbers as primitives and treat them as a value and a unit. Just by having the name of the type contain the unit system you’ll make future developers, including yourself, think twice about what unit system they’re using.

public class TemperatureInCentigrade
{
    private readonly double _value;

    public TemperatureInCentigrade(double value)
    {
        _value = value;
    }

    public TemperatureInCentigrade Add(TemperatureInCentigrade toAdd)
    {
        return new TemperatureInCentigrade(_value + toAdd.AsNumeric());
    }

    // The backing value is only exposed as a plain number at the edges
    public double AsNumeric()
    {
        return _value;
    }
}

You’ll also notice in this class that I’ve made the value immutable. By doing so we save ourselves from a whole bunch of potential bugs. This is the same approach that functional programming languages take.

Having a complex type keep track of your units also protects you from taking illogical actions. For instance consider a unit that holds a distance in meters. The DistanceInMeters class would likely not contain a Multiply function or, if it did, the function would return an AreaInSquareMeters. The compiler would protect you from making a lot of mistakes and this sort of thing would likely eliminate a bunch of manual testing.
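
A minimal sketch of that idea, with the types invented for illustration:

public class DistanceInMeters
{
    private readonly double _value;
    public DistanceInMeters(double value) { _value = value; }
    public double AsNumeric() { return _value; }

    // Multiplying two lengths logically gives an area, so the result is a different unit type
    public AreaInSquareMeters Multiply(DistanceInMeters other)
    {
        return new AreaInSquareMeters(_value * other.AsNumeric());
    }
}

public class AreaInSquareMeters
{
    private readonly double _value;
    public AreaInSquareMeters(double value) { _value = value; }
    public double AsNumeric() { return _value; }
}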

The actual act of converting units is pretty simple and there are numerous libraries out there which can do a very effective job for us. I am personally a big fan of the js-quantities library. This lets you push your unit conversions all the way down to the browser. Of course math in JavaScript can, from time to time, be flaky, but for the vast majority of non-scientific applications the level of resolution that JavaScript’s native math supports is wholly sufficient. You generally don’t even need to worry about it.
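
For what it’s worth, basic js-quantities usage looks roughly like this (going from memory of its README, so treat the details as approximate):

var Qty = require('js-quantities');

// Store and compute in metric, convert only when rendering for Backwardlandia
var stored = Qty('10 m');
console.log(stored.to('ft').toString());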

If you’re not doing a lot of your rendering in JavaScript then there are libraries for .net which can handle unit conversions (disclaimer, I stole this list from the github page for QuantityType and haven’t tried them all).

Otherwise this might be a fine time to try out F#, which supports units of measure natively.

The long and short of it is that we’re trying to remove unit system confusion from our application and to do that we want to expose as little of the application to divergent units as possible. Catch the units as they are entered, normalize them and then pass them on to the rest of your code. You’ll save yourself a lot of headaches by taking this approach, trust a person who has done it wrong many times.

2015-06-09

Getting Lookup Data Into Your View: ASP.net MVC 6 Version

This is a super common problem I encounter when building ASP.net MVC applications. I have a form that has a drop down box. Not only do I need to select the correct item from the edit model to pick from the drop down but I need to populate the drop down with the possible values.

Over the years I’ve used two approaches to doing this. The first is to push a list of values into the ViewBag in the controller action. That looks like

public ActionResult Edit(int id){
    var model = repository.get(id);

    ViewBag.Provinces = provincesService.List();

    return View(model);
}

Then in the view you can retrieve this data and use it to populate the drop down. If you’re using the HTML helpers then this looks like

@Html.DropDownListFor(x=>x.province, (IEnumerable<SelectListItem>)ViewBag.Provinces)

This becomes somewhat messy when you have a lot of drop downs on a page. For instance consider something like

public ActionResult Edit(int id){
  var model = repository.get(id);

    ViewBag.Provinces = provincesService.List();
    ViewBag.States = statesService.List();
    ViewBag.StreetDirections = streetDirectionsService.List();
    ViewBag.Countries = countriesService.List();
    ViewBag.Counties = countiesService.List();

    return View(model);
}

The work of building up the data for the view becomes the primary focus of the action. We could extract it to a method but then we have to go hunting to find the different drop downs that are being populated. An approach I’ve taken in the past is to annotate the actions with action filters that populate the ViewBag for me. This makes the action look like

[ProvincesFilter]
[StatesFilter]
[StreetDirectionsFilter]
[CountriesFilter]
[CountiesFilter]
public ActionResult Edit(int id){
  var model = repository.get(id);
  return View(model);
}

One of the filters might look like

public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    var countries = new List<SelectListItem>();
    if ((countries = (filterContext.HttpContext.Cache.Get(GetType().FullName) as List<SelectListItem>)) == null)
    {
        countries = countriesService.List();
        filterContext.HttpContext.Cache.Insert(GetType().FullName, countries);
    }
    filterContext.Controller.ViewBag.Countries = countries;
    base.OnActionExecuting(filterContext);
}

This filter also adds a degree of caching to the request so that we don’t have to keep bugging the database.

Keeping a lot of data in the ViewBag presents a lot of opportunities for error. We don’t have any sort of IntelliSense with the dynamic ViewBag object and I frequently use mismatched names in the controller and the view by mistake. Finally, building the drop down box using the HTML helper requires some nasty looking casting. Any time I cast I feel uncomfortable.

@Html.DropDownListFor(x=>x.province, (IEnumerable<SelectListItem>)ViewBag.Provinces)

Now a lot of people prefer transferring the data as part of the model; this is the second approach. There is nothing special about it: you just put some collections of lookup values into the model.
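
A minimal sketch of what that looks like, using a made-up edit model (SelectListItem is MVC’s own type):

public class AddressEditModel
{
    // The values actually being edited
    public int Id { get; set; }
    public string Street { get; set; }
    public string Province { get; set; }

    // Lookup data that exists only to feed the drop down
    public IEnumerable<SelectListItem> Provinces { get; set; }
}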

I’ve always disliked this approach because it mixes the data needed for editing with the data for the drop downs which is really incidental. This data seems like a view level concern that really doesn’t belong in the view model. This is a bit of a point of contention and I’ve challenged more than one person to a fight to the death over this very thing.

So neither option is particularly palatable. What we need is a third option and the new dependency injection capabilities of ASP.net MVC open up just such an option: we can inject the data services directly into the view. This means that we can consume the data right where we retrieve it without having to hammer it into some bloated DTO. We also don’t have to worry about annotating our action or filling it with junk view specific code.

To start let’s create a really simple service to return states.

public interface IStateService
{
    IEnumerable<State> List();
}

public class StateService : IStateService
{
    public IEnumerable<State> List() {
        return new List<State>
        {
            new State { Abbreviation = "AK", Name = "Alaska" },
            new State { Abbreviation = "AL", Name = "Alabama" }
        };
    }
}

Umm, looks like we’re down to only two states, sorry Kentucky.

Now we can add this to our container. I took a singleton approach and just registered a single instance in the Startup.cs.

services.AddInstance(typeof(IStateService), new StateService());

This is easily added to the view by adding

@inject ViewInjection.Services.IStateService StateService

as the first line in the file. Then the final step is to actually make use of the service to populate a drop down box:

<div class="col-lg-12">
        @Html.DropDownList("States", StateService.List().Select(x => new SelectListItem { Text = x.Name, Value = x.Abbreviation }))
</div>

That’s it! Now we have a brand new way of getting the data we need to the view without having to clutter up our controller with anything that should be contained in the view.

What do you think? Is this a better approach? Have I brought fire down upon us all with this? Post a comment. Source is at https://github.com/stimms/ViewInjection

2015-06-07

Building a Simple Slack Bot

A couple of friends and I have a Slack channel we use to discuss deep and powerful questions like “should we make a distilled version of the ASP.net community standup that doesn’t waste everybody’s time?” or “could we create a startup whose business model was to create startups?”. We have so many terrible, earth-shatteringly brilliant ideas that we needed a place to keep them. Fortunately Trello provides just such list functionality. There is already a Trello integration for Slack but it can’t create cards; it only notifies about changes to existing ones.

Lame.

Thus began our quest to build a slackbot. We wanted to be able to use /commands for our bot so

/trellobot add Buy a cheese factory and replace the workers with robotic rats

The bot should then reply to us with a link to the card should we need to fill in more details like the robot rat to worker ratio.

We started by creating what Slack calls a slash integration. This means that it will respond to IRC style commands (/join, /leave, …). This can be done from the Slack webapp. Most of the fields were intuitive to fill out but then we got to the one for a URL. This is the address to which Slack sends an HTTP request when it sees a slash command matching yours.

This was a bit tricky as we were at a conference on wifi without the ability to route to our machines. We could have set up a server in the cloud but that would have slowed down our iterating. So we used http://localtunnel.me/ to tunnel requests to us. What a great service!

The service was going to be simple so we opted for nodejs. This let us get up and running without ceremony. You can build large and impressive applications with node too, but I always feel it excels at rapid iteration. In other words we just hacked something out, and you shouldn’t base your banking software on the terrible code here.

To start we needed an http server

var http = require('http');
var Uri = require('jsuri');

var port = 8080; // any free local port; localtunnel forwards requests here

http.createServer(function (req, res) {
  req.setEncoding('utf8');
  req.on('data', function (data) {

    startResponse(res); // not shown in this post

    var uri = new Uri();
    uri.setQuery(data);

    var text = uri.getQueryParamValue('text');

    var responseSettings = {
      channelId: uri.getQueryParamValue('channel_id'),
      userId: uri.getQueryParamValue('user_id')
    };

    if (text.split(' ')[0] === "add") {
      performAdd(text, res, responseSettings);
    }
  });
}).listen(port);

console.log('Server running at http://127.0.0.1:' + port);

The information passed to us by slack is URL encoded so we can just parse it out using the jsuri package. We’re looking for any message that starts with “add”. When we find it we run the performAdd function giving it the message, the response to write to and the response settings extracted from the request. We want to know the channel in which the message was sent and the user who sent it so we can reply properly.

If your bot doesn’t need to reply to the whole channel and instead needs to whisper back to the person sending the command, that can be done by just writing back to the response. The contents will be shown in Slack only to that person.
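
In this handler that just means writing the text to res before ending it; a rough sketch (ignoring the startResponse helper above):

// Anything written to the slash command's HTTP response is shown only to the sender
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Only you will see this reply');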

Now we need to create our Trello card. I can’t help but feel that coupling a bunch of APIs together is going to be a big thing in the future.

Trello uses OAuth for authentication. This is slightly problematic, as we need to have a user agree to allow our bot to interact with Trello on their behalf. This is done using a prompt on a website, which we don’t really have. If this was a fully-fledged bot we could find a way around it, but instead we’re going to take advantage of Trello permitting a token that never expires. This is kind of a security problem on their end but for our purposes it is great.

Visit https://trello.com/1/appKey/generate and generate a key pair for Trello integration. I didn’t find a need for the private one but I wrote it down anyway, might need it in the future.

With that key visit https://trello.com/1/authorize?key=PUBLIC_KEY_HERE&name=APPLICATION_NAME_HERE&expiration=never&response_type=token&scope=read,write in a browser logged in using the account you want to use to post to Trello. The resulting key will never expire and we can use it in our application.

We’ll use this key to find the list to which we want to post. I manually ran

curl "https://trello.com/1/members/my/boards?key=PUBLIC_KEY_HERE&token=TOKEN_GENERATED_ABOVE_HERE"

Which gave me back a list of all of the boards in my account. I searched through the content using the combination of less and my powerful reading eyes, finding, in short order, the ID of a board I had just created for this purpose. Using the ID of the board I wanted I ran

curl "https://api.trello.com/1/boards/BOARD_ID_HERE?lists=open&list_fields=name&fields=name,desc&key=PUBLIC_KEY_HERE&token=TOKEN_GENERATED_ABOVE_HERE"

Again using my reading eyes I found the ID of the list in the board I wanted. (It wasn’t very hard, there was only one). Now I could hard code that into the script along with all the other bits and pieces (I mentioned not writing your banking software like this, right?). I put everything into a config object, because that sounded at least a little like something a good programmer would do - hide the fact I’m using global variables by putting them in an object, stored globally.
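
For reference, the config object that the rest of the script leans on looks roughly like this (values faked, obviously):

var config = {
  trello: {
    key: 'PUBLIC_KEY_HERE',
    token: 'TOKEN_GENERATED_ABOVE_HERE',
    listId: 'LIST_ID_FOUND_WITH_CURL'
  },
  slack: {
    token: 'SLACK_API_TOKEN_HERE'
  }
};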

function performAdd(text, slackResponse, responseSettings){
  var pathParameters = "key=" + config.trello.key + "&token=" + config.trello.token + "&name=" + querystring.escape(text.split(' ').splice(1).join(" ")) + "&desc=&idList=" + config.trello.listId;
  var post_options = {
      host: 'api.trello.com',
      port: '443',
      path: '/1/cards?' + pathParameters,
      method: 'POST'
  };

  // Set up the request
  var post_req = https.request(post_options, function(res) {
      res.setEncoding('utf8');
      res.on('data', function (chunk) {
          var trelloResponse = JSON.parse(chunk);
          getUserProperties(responseSettings.userId, 
                              function(user){ 
                                responseSettings.user = user; 
                                postToSlack(["/" + text, "I have created a card at " + trelloResponse.shortUrl], responseSettings)});
      });
  });

  // post an empty body to trello, content is in url
  post_req.write("");
  post_req.end();
  //return url
}

Here we send a request to Trello to create the card. Weirdly, despite the request being a POST, we put all the data in the URL. I honestly don’t know why smart people like those at Trello design APIs like this…

Anyway the callback will send a message to Slack with the short URL extracted from the response from Trello. We want to make the response from the bot seem like it came from one of the people in the channel, specifically the one who sent the message. So we’ll pull the user information from Slack and set the bot’s name to be theirs, as well as matching the icon.


function postToSlack(messages, responseSettings){
console.dir(responseSettings);
  for(var i = 0; i < messages.length; i++)
  {

    var pathParameters = "username=" + responseSettings.user.name + "&icon_url=" + querystring.escape(responseSettings.user.image) + "&token=" + config.slack.token + "&channel=" + responseSettings.channelId + "&text=" + querystring.escape(messages[i]);
    var post_options = {
        host: 'slack.com',
        port: '443',
        path: '/api/chat.postMessage?' + pathParameters,
        method: 'POST'
    };

    // Set up the request
    var post_req = https.request(post_options, function(res) {
        res.setEncoding('utf8');
        res.on('data', function (chunk) {

          console.log(chunk);
        });
    });
    post_req.on('error', function(e) {
      console.log('problem with request: ' + e);
    });
    // post the data
    post_req.write("");
    post_req.end();
  }
}

function getUserProperties(userId, callback){
  var pathParameters = "user=" + userId + "&token=" + config.slack.token;
  var post_options = {
      host: 'slack.com',
      port: '443',
      path: '/api/users.info?' + pathParameters,
      method: 'GET'
  };
  var get_req = https.request(post_options, function(res) {
      res.setEncoding('utf8');
      res.on('data', function (chunk) {
        var json = JSON.parse(chunk);
        callback({name: json.user.name, image: json.user.profile.image_192});
      });
  });
  get_req.end();
}

This is all we need to make a Slack bot that can post a card to Trello. As it turns out this was all made rather more verbose by the use of callbacks and API writers’ inability to grasp what the body of a POST request is for. Roll on ES7 async/await, I say.

It should be simple to apply this same sort of logic to any number of other Slack bots.

Thanks to Canadian James for helping me build this bot.

2015-05-25

Free SSL For Azure Websites

Goodness, is it that time again when I have to install a stupid SSL certificate on Azure? There are likely words to describe how much I hate paying money for a large number which isn’t even my own number; however, the only person who could describe my hatred has been dead for quite some time.

There are some options for getting free SSL certificates. I’m very excited about the EFF’s Let’s Encrypt but it has yet to be released. This time I decided to try the free SSL certificate from Comodo.

It only lasts 90 days before you have to reissue and reinstall it, but that’s a small price to pay for not paying out a bunch of money. I guess my tolerance for paying for a large number is pretty low compared with my willingness to follow some steps on a website every 3 months.

Step one was to generate a new key and a new certificate signing request. I had my mac handy so OpenSSL was the tool of choice

openssl genrsa -out my.domain.com.key 2048

openssl req -new -sha256 -key my.domain.com.key -out my.domain.com.csr

openssl req -noout -text -in my.domain.com.csr

The first command creates the private key, the second prompts you for a variety of information such as your company, address and country, and the third just prints the resulting CSR so you can check it. The contents of the .csr file should be pasted into the box on the Comodo site. The generation software should be listed as OTHER and the hash algorithm SHA-2.

Screenshot of the information form

Eventually you’ll be e-mailed a zip file containing a cluster of certificates. Some of them are intermediate certificates, but Comodo is a pretty well known CA so you probably don’t need those. The one you want is the one called my.domain.com.crt.

This .crt is combined with the key file generated by the first OpenSSL command:

openssl pkcs12 -export -out my.domain.com.pfx -inkey my.domain.com.key -in my.domain.com.crt

Now that we have a .pfx file we can upload it to our Azure website under the Custom domains and SSL tab.

http://i.imgur.com/yX2aemt.jpg

Because of the beauty of SNI you can have multiple domains on a single instance, each using its own SSL certificate.

Now you have a nice free SSL certificate that you just need to remember to renew every 90 days.

2015-05-09

Do you really want "bank grade" security in your SSL? Canadian edition

In the past few days I’ve seen a few really interesting posts about having bank grade security. I was interested in them because I frequently tell my clients that the SSL certificates I’ve got them and the security I’ve set up for them are as good as what they use when they log into a bank.

As it turns out I’ve been wrong about that: the security I’ve configured is better than their bank’s. The crux of the matter is that simply picking an SSL cert and applying it is not sufficient to have good security. There are certain configuration steps that must be taken to avoid using old cyphers or weak signatures.

There are some great tools out there to test if your SSL is set up properly. I like SSL Labs’ test suite. Let’s try running those tools against Canada’s five big banks.

Bank              Grade  SSL 3  TLS 1.2  SHA1  RC4   Forward Secrecy  POODLE
Bank of Montreal  B      Pass   Pass     Pass  Fail  Fail             Pass
CIBC              B      Pass   Pass     Pass  Fail  Fail             Pass
Royal Bank        B      Pass   Pass     Pass  Fail  Fail             Pass
Scotia Bank       B      Pass   Pass     Pass  Fail  Fail             Pass
Toronto Dominion  B      Pass   Pass     Pass  Fail  Fail             Pass

So everybody is running with a grade of B, and everybody is restricted to B because they still accept the RC4 cypher. There are some attacks available on RC4 but they don’t currently appear to be practical. That’s not to say that they won’t become practical in short order. The banks should certainly be leading the charge against RC4, because when a practical exploit is found it may well be found by somebody who won’t be honest enough to report it.

Out of curiosity I tried the same test on some of Canada’s smaller banks such as ATB Financial (I have a friend who works there and I really wanted to stick it to him for having bad security).

Bank                   Grade  SSL 3  TLS 1.2  SHA1  RC4   Forward Secrecy  POODLE
ATB Financial          A      Pass   Pass     Pass  Pass  Pass             Pass
Banque Laurentienne    A      Pass   Pass     Pass  Pass  Pass             Pass
Canadian Western Bank  A      Pass   Pass     Pass  Pass  Pass             Pass

So all these little banks are doing a really good job; that’s nice to see. It is a shame they can’t get their big banking friends to fix their stuff.

##But Simon, we have to support old browsers

Remember that time that your doctor suggested that your fluids were out of balance and you needed to be bled? No? That’s because we’ve moved on. For most things I recommend looking at your user statistics to see what percentage of your users you’re risking alienating if you use a feature that isn’t in their browser. I cannot recommend the same approach when dealing with security. This is one area where requiring newer browsers is a good call - allowing your users to be under the false impression that their connection is secure is a great disservice.

2015-05-05

A way to customize bootstrap variables

I have been learning a bunch about building responsive websites this last week. I had originally been using a handful of media-queries but I was quickly warned off this. Apparently the “correct” way of building responsive websites is to lean heavily on the pre-defined classes in bootstrap.

This approach worked great right up until I got to the navbar, that thing that sits at the top of the screen on large screens and collapses on small screens. My issue was that my navbar had a hierarchy to it, so it was a little wider than the normal version. As a result the navbar looked cramped on medium screens. I wanted to change the point at which the break between the collapsed and full navbar fired.

Unfortunately the suggested approach for this is to customize the Bootstrap source code and rebuild it.

Bootstrap documentation

I really didn’t want to do this. The issue is that I was pulling in a nice clean bootstrap from bower. If I started to modify it then anybody who wanted to upgrade in the future would be trapped having to figure out what I did and apply the same changes to the updated bootstrap.

The solution was to go to the web brain trust that is James Chambers and David Paquette. After some discussion we came up with just patching the variables.less file in bootstrap.

#How does that look?

My project already used gulp but was built on top of Sass for CSS, so I had to start by adding a few new packages to my project:

npm install --save-dev gulp-less
npm install --save-dev gulp-minify-css
npm install --save-dev gulp-concat
npm install --save-dev gulp-rename

Then I dropped into my gulpfile. As it turns out I already had a target that moved about some vendor css files. All the settings for this task were defined in my config object. I added 4 lines to that object to list the new Bootstrap paths I would need.

vendorcss: {
      input: ["bower_components/leaflet/dist/*.css", "bower_components/bootstrap/dist/css/bootstrap.min.css"],
      output: "wwwroot/css",
      bootstrapvariables: "bower_components/bootstrap/less/variables.less",
      bootstrapvariablesoverrides: "style/bootstrap-overrides.less",
      bootstrapinput: "bower_components/bootstrap/less/bootstrap.less",
      bootstrapoutput: "bower_components/bootstrap/dist/css/"
    },

The bootstrapvariables entry gives the location of the variables.less file within Bootstrap. This is what we’ll be patching. The bootstrapvariablesoverrides entry gives the file in my style directory that houses the overrides. The bootstrapinput is the name of the master file that is passed to less to do the compilation. Finally the bootstrapoutput is the place where I’d like my files put.

To the vendorcss target I added

gulp.src([config.vendorcss.bootstrapvariables,config.vendorcss.bootstrapvariablesoverrides])
    .pipe(concat(config.vendorcss.bootstrapvariables))
    .pipe(gulp.dest('.'));

This takes an override file that I keep in my style directory and appends it to the end of the bootstrap variables. In it I can redefine any of the variables for bootstrap.
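
For the navbar problem that started all this, the override file only needs the variable that controls where the navbar collapses; in Bootstrap 3 I believe that is @grid-float-breakpoint, so the whole file can be as small as:

// style/bootstrap-overrides.less
// Keep the navbar collapsed up to the large breakpoint instead of the small one
@grid-float-breakpoint: @screen-lg-min;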

  gulp.src(config.vendorcss.bootstrapinput)
   .pipe(less())
   .pipe(minifyCSS())
   .pipe(rename(function (path) {
       path.basename = "bootstrap.min";
   }))
   .pipe(gulp.dest(config.vendorcss.bootstrapoutput));

This bit simply runs the Bootstrap build and produces an output file. Our patched variables.less is included in the newly rebuilt code. The output is passed along to the rest of the task, which I left unmodified.

The result of this is that I now have a modified Bootstrap without having to actually change Bootstrap itself. If another developer, or me, comes along to upgrade Bootstrap it should be apparent what was changed as it is all isolated in a single file.

2015-03-25

WebJobs and Deployment Slots (Azure)

I should start this post by apologizing for getting terminology wrong. Microsoft just renamed a bunch of stuff around Azure WebSites/Web Apps so I’ll likely mix up terms from the old ontology with the new ontology (check it, I used “ontology” in a sentence, twice!). I will try to favour the new terminology.

On my Web App I have a WebJob that does some background processing of some random tasks. I also use scheduler to drop messages onto a queue to do periodic tasks such as nightly reporting. Recently I added a deployment slot to the Web App to provide a more seamless experience to my users when I deploy to production, which I do a few times a day. The relationship between WebJobs and deployment slots is super confusing in my mind. I played with it for an hour today and I think I understand how it works. This post is an attempt to explain.

If you have a deployment slot with a webjob and a live site with a webjob, are both running?

Yes, both jobs will be running at the same time. When you deploy to the deployment slot the webjob there is updated and restarted to take advantage of any new functionality that might have been deployed.

My job uses a queue. Does this mean that there are competing consumers any time I have a webjob in a slot?

If you have used the typical way of getting messages from a queue in a webjob, that is to say using the QueueTrigger annotation on a parameter:

public static void ProcessQueueMessage([QueueTrigger("somequeue")] string messageText, TextWriter log)
{...}

then yes. Both of your webjobs will attempt to read this message. Which one gets it? Who knows!

Doesn’t that kind of break things if you’re deploying different functionality for the same queue, giving you a mix of old and new behaviour?

Yep! Messages might even be processed by both. That can happen in normal operation on multiple nodes anyway, which is why your jobs should be idempotent. You can either turn off the webjob in your slot or use differently named queues for production and your slot. The queue names can then be configured using the new slot-specific app settings. To do this you need to set up a QueueNameResolver; you can read about that here.
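
A sketch of what that might look like, assuming the WebJobs SDK’s INameResolver interface and the %name% syntax on the trigger:

public class SettingsQueueNameResolver : INameResolver
{
    // Resolves %somequeue% to whatever the current slot's app settings say,
    // so production and the staging slot can read from different queues.
    // ConfigurationManager comes from System.Configuration.
    public string Resolve(string name)
    {
        return ConfigurationManager.AppSettings[name];
    }
}

// Wired up when building the host, roughly:
// var config = new JobHostConfiguration { NameResolver = new SettingsQueueNameResolver() };
// new JobHost(config).RunAndBlock();

// And the trigger becomes:
// public static void ProcessQueueMessage([QueueTrigger("%somequeue%")] string messageText, TextWriter log)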

What about the webjobs dashboard, will that help me distinguish what was run?

Kind of. As far as I can tell the output part of this page shows output from the single instance of the webjob running on the current slot.

Imgur

However the functions invoked list shows all invocations across any instance. So the log messages might tell you one thing and the function list another. Be warned that when you swap a slot the output flips from one site to the other. So if I did a swap on this dashboard and then refreshed, the output would be different but the functions invoked list would be the same.

2015-03-18

How is Azure Support?

From time to time I stumble on an Azure issue I just can’t fix. I don’t like to rely too heavily on people I know in the Azure space because they shouldn’t be punished for knowing me too well (sorry, Tyler). I’ve never opened a support ticket before and I imagine most others haven’t either. This is how the whole thing unfolded:

This time the issue was with database backups. A week or so ago I migrated one of my databases to v12 so I could get some performance improvements. I tested the migration and the performance on a new server so I was confident. I did not, however, think about testing backups. Backups are basic and would have been well tested by Microsoft, right? Turns out that isn’t the case.

The first night my backup failed and, instead of a nice .bacpac file I was left with ten copies of my database.

http://i.imgur.com/qzZReCc.png

Of course each one of these databases is consuming the same S1 sized slot on the server and is being billed to me at S1 levels. Perhaps more damning was that the automatic backup task seemed to have deleted itself from the portal. I put the task back and waited for the next backup window to hit. I also deleted the extra databases and ran a manual backup.

When the next backup window hit, the same problem reoccurred. This was an issue too deep inside Azure for me to diagnose myself. I ponied up the $30/month for support and logged an issue. I feel like with my MSDN subscription I probably get some support incidents for free, but it was taking me so long to figure out how to use them that $30 was cheaper.

The timeline of the incident was something like

noon - log incident
3:42 - incident assigned to somebody from Teksystems
3:48 - scope of incident defined
3:52 - incident resolved

This Teksystems dude works fast! I hope all his incidents were as easy to solve as mine. The resolution: “Yeah, automatic backups are broken with v12. We’ll fix it at some point in the future. Do manual backups for now”

I actually think that is a pretty reasonable response. I’m not impressed that backups were broken in this way but things break and get fixed all the time. With point in time restore there was no real risk of losing data but it did throw off my usual workflow (download last night’s backup every day for development today).

What I’m upset about is that this whole 4 hour problem could have been prevented by putting this information on the Azure health page. Back in November there was a big Azure failure and one of the lessons Microsoft took away was to do a better job of updating the health dashboard. At least they claimed to have taken that lesson away. From what I can see we’re not there yet. If we, as an industry, are going to put our trust in Azure and other cloud providers then we desperately need to have transparency into the status of the system.

I was once told, in an exit interview, that I needed to do a better job of not volunteering information to customers. To this day I am totally aghast at the idea that we wouldn’t share technical details with paying customers. Customers might not care, but the willingness to be above board should always be there. The CEO of the company I left is being indicted for fraud, which wouldn’t happen if everybody were dedicated to the truth.

This post has diverged from the original topic of Azure support. My thoughts there are that it is really good. That $30 saved me from days of messing about with backups. If I had a lot of stuff running on Azure I would buy the higher support levels which, I suspect, provide an even better level of service.

2015-02-23

Book Review - Learn D3.js Mapping

I’m reviewing the book “Learn D3.js Mapping” by Thomas Newton and Oscar Villarreal.

Disclaimer: While I didn't receive any compensation for reviewing this book I did get a free digital review copy. So I guess if being paid in books is going to sway my opinion take that into account.

The book starts with an introduction to running a web server to play host for our visualizations. For this they have chosen to use node, which is an excellent choice for this sort of lightweight usage. The authors do fall into the trap of thinking that npm stands for something; honestly, it should stand for Node Package Manager.

The first chapter also introduces the development tools in the browser.

Chapter 2 is an introduction to the SVG format, including the graphical primitives and the coordinate system. The ability to style elements via CSS is also explored. One of the really nice things is that the format for drawing paths, which is always somewhat confusing, is covered. Curved lines are even explored. The complexity of curved lines is a great motivation for the mapping functionality in d3, which acts as an abstraction over top of all those wavy lines.

In chapter 3 we finally run into d3.js. The enter, exit and update functions, which are key to using d3, are introduced. The explanation is great! These are such important concepts and difficult to explain to first time users of d3. Finally the chapter talks about how to retrieve data for the visualization from a remote data source using ajax.

In chapter 4 we get down to business. The first thing we see is a couple of the different projections available within d3. I can’t read about Mercator projections without thinking about the map episode of The West Wing; that it isn’t referenced here is, I think, a serious flaw in the book. Once a basic map has been created we move on to creating bounding boxes, choropleths (that’s a map with colours representing some dimension of the data) and adding interaction through click handlers. No D3 visualization is complete without some nifty looking transitions and the penultimate section of this chapter satisfies that need. Finally we learn how to add points of interest.

Chapter 5 continues to highlight the transition capabilities of D3. This includes a great introduction to zooming and panning the map through the use of panning and zooming behaviours. The chapter then moves on to changing up projections to actually show a globe instead of a two dimensional map. The map even spins! A great example and nifty to see in action.

The GeoJSON and TopoJSON file formats are explained in chapter 6. In addition the chapter explores how to simplify map data. This is actually very important for getting any sort of reasonably sized map onto the internet. The issue is that today’s cartographers are really good and maps tend to have far more detail than we would ever need in a visualization.

The book finishes off with a discussion of how to go about testing visualizations and JavaScript in general.

This is an excellent coverage of a quite complex topic: mapping using D3. I would certainly recommend it: if you have some mapping to do using D3, purchasing this book might save you a whole lot of headaches.