Simon Online

2015-03-18

How is Azure Support?

From time to time I stumble on an Azure issue I just can’t fix. I don’t like to rely too heavily on people I know in the Azure space because they shouldn’t be punished just for knowing me (sorry, Tyler). I’ve never opened a support ticket before and I imagine most others haven’t either. This is how the whole thing unfolded:

This time the issue was with database backups. A week or so ago I migrated one of my databases to v12 so I could get some performance improvements. I tested the migration and the performance on a new server so I was confident. I did not, however, think about testing backups. Backups are basic and would have been well tested by Microsoft, right? Turns out that isn’t the case.

The first night my backup failed and, instead of a nice .bacpac file I was left with ten copies of my database.

http://i.imgur.com/qzZReCc.png

Of course each of these databases consumes an S1-sized slot on the server and is billed to me at S1 rates. Perhaps more damning, the automatic backup task seemed to have deleted itself from the portal. I put the task back and waited for the next backup window to hit. I also deleted the extra databases and ran a manual backup.

When the next backup window hit, the same problem recurred. This was an issue too deep inside Azure for me to diagnose myself. I ponied up the $30/month for support and logged an issue. I feel like my MSDN subscription probably includes some support incidents for free, but it was taking me so long to figure out how to use them that $30 was cheaper.

The timeline of the incident was something like

noon - log incident
3:42 - incident assigned to somebody from Teksystems
3:48 - scope of incident defined
3:52 - incident resolved

This Teksystems dude works fast! I hope all his incidents were as easy to solve as mine. The resolution: “Yeah, automatic backups are broken with v12. We’ll fix it at some point in the future. Do manual backups for now”

I actually think that is a pretty reasonable response. I’m not impressed that backups were broken in this way but things break and get fixed all the time. With point in time restore there was no real risk of losing data but it did throw off my usual workflow (download last night’s backup every day for development today).

What I’m upset about is that this whole four hour problem could have been prevented by putting this information on the Azure health page. Back in November there was a big Azure failure and one of the lessons Microsoft took away was to do a better job of updating the health dashboard. At least they claimed to have learned that lesson. From what I can see we’re not there yet. If we, as an industry, are going to put our trust in Azure and other cloud providers then we desperately need transparency into the status of the system.

I was once told, in an exit interview, that I needed to do a better job of not volunteering information to customers. To this day I am totally aghast at the idea that we wouldn’t share technical details with paying customers. Customers might not care but the willingness to be above board should always be there. The CEO of the company I left is being indicted for fraud, which is not something that happens when everybody is dedicated to the truth.

This post has diverged from the original topic of Azure support. My thoughts there are that it is really good. That $30 saved me from days of messing about with backups. If I had a lot of stuff running on Azure I would buy the higher support levels which, I suspect, provide an even better level of service.

2015-02-23

Using Two Factor Identity in ASP.net 5

First off a disclaimer:

ASP.net 5 is in beta form and I know for a fact that some of the identity related stuff is going to change next release. I know this because the identity code in [git](https://github.com/aspnet/Identity) is different from what's in the latest build of ASP.net that comes with Visual Studio 2015 CTP 5. So this tutorial will stop working pretty quickly.
Update: yep, 3 hours after I posted this the next release came out and broke everything. Check the bottom of the article for an update.

With that disclaimer in place let’s get started. This tutorial assumes you have some knowledge of how multi-factor authentication works. If not, Lifehacker has a decent introduction, or for a more exhaustive examination there is the Wikipedia page.

If we start with a new ASP.net 5 project in Visual Studio 2015 and select the starter template then we get some basic authentication functionality built in.

Starter project

Let’s start, appropriately, in the Startup.cs file. Here we’re going to enable two factor at a global level by adding the default token providers to the identity registration:

services.AddIdentity<ApplicationUser, IdentityRole>(Configuration)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

The default token providers are an SMS token provider to send messages to people’s phones and an E-mail token provider to send messages to people’s e-mail. If you only want one of these two mechanisms then you can register just one with

.AddTokenProvider(typeof(PhoneNumberTokenProvider<>).MakeGenericType(UserType))
.AddTokenProvider(typeof(EmailTokenProvider<>).MakeGenericType(UserType))

Next we need to enable two factor authentication on individual users. If you want this for all users then it can be enabled by setting the user’s TwoFactorEnabled flag during registration in the AccountController.

[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        var user = new ApplicationUser
        {
            UserName = model.UserName,
            Email = model.Email,
            CompanyName = model.CompanyName,
            TwoFactorEnabled = true,
            EmailConfirmed = true
        };
        var result = await UserManager.CreateAsync(user, model.Password);
        if (result.Succeeded)
        {
            await SignInManager.SignInAsync(user, isPersistent: false);
            return RedirectToAction("Index", "Home");
        }
        else
        {
            AddErrors(result);
        }
    }
    // If we got this far, something failed, redisplay form
    return View(model);
}

I also set EmailConfirmed here, although I really should make users confirm it via an e-mail. This is required to allow the EmailTokenProvider to generate tokens for a user. There is a similar field called PhoneNumberConfirmed for sending SMS messages.
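If you do want real confirmation the flow is roughly: generate a confirmation token for the user, e-mail them a link containing it, and flip EmailConfirmed when they click through. Here is a rough sketch of what that could look like; the method names follow the Identity APIs but the beta signatures were still shifting, and the helper method and view names are placeholders of my own:

// Sketch only: SendConfirmationLinkAsync and the view names are hypothetical,
// and the exact UserManager signatures varied between Identity betas.
private async Task SendConfirmationLinkAsync(ApplicationUser user)
{
    var token = await UserManager.GenerateEmailConfirmationTokenAsync(user);
    var callbackUrl = Url.Action("ConfirmEmail", "Account",
        new { userId = user.Id, token = token }, protocol: "https");
    // Deliver the link however you send mail; the SmtpClient-based message
    // provider later in this post would do fine.
}

[HttpGet]
[AllowAnonymous]
public async Task<IActionResult> ConfirmEmail(string userId, string token)
{
    var user = await UserManager.FindByIdAsync(userId);
    if (user == null)
    {
        return View("Error");
    }
    // Sets EmailConfirmed on the user if the token checks out
    var result = await UserManager.ConfirmEmailAsync(user, token);
    return View(result.Succeeded ? "ConfirmEmail" : "Error");
}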

Also in the account controller we’ll have to update the Login method to handle situations where the signin response is “RequiresVerification”

switch (signInStatus)
{
    case SignInStatus.Success:
        return RedirectToLocal(returnUrl);
    case SignInStatus.RequiresVerification:
        return RedirectToAction("SendCode", new { returnUrl = returnUrl });
    case SignInStatus.Failure:
    default:
        ModelState.AddModelError("", "Invalid username or password.");
        return View(model);
}

This implies that there are going to be a couple of new actions on our controller. We’ll need one to render a form for users to enter the code from their e-mail and another one to accept that back and finish the login process.

We can start with the SendCode action

[HttpGet]
[AllowAnonymous]
public async Task<IActionResult> SendCode(string returnUrl = null)
{
    var user = await SignInManager.GetTwoFactorAuthenticationUserAsync();
    if (user == null)
    {
        return RedirectToAction("Login", new { returnUrl = returnUrl });
    }
    var userFactors = await UserManager.GetValidTwoFactorProvidersAsync(user);
    if (userFactors.Contains(TOKEN_PROVIDER_NAME))
    {
        if (await SignInManager.SendTwoFactorCodeAsync(TOKEN_PROVIDER_NAME))
        {
            return RedirectToAction("VerifyCode", new { provider = TOKEN_PROVIDER_NAME, returnUrl = returnUrl });
        }
    }
    return RedirectToAction("Login", new { returnUrl = returnUrl });
}

I’ve taken a super rudimentary approach to dealing with errors here, just sending users back to the login page. A real solution would have to be more robust. I’ve also hard coded the name of the token provider (it is “Email”). I’m only allowing one token provider, but I’ve shown the code that selects one; you could instead render a view that lets users pick from the list of valid providers.
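For reference, TOKEN_PROVIDER_NAME is nothing special, just a constant on the controller that matches the name the e-mail provider is registered under:

// Matches the Name property of the e-mail token/message provider
private const string TOKEN_PROVIDER_NAME = "Email";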

The key observation here is the sending of the two factor code. That is what sends the e-mail to the user.

Next we render the form into which users can enter their code:

[HttpGet]
[AllowAnonymous]
public IActionResult VerifyCode(string provider, string returnUrl = null)
{
    return View(new VerifyCodeModel { Provider = provider, ReturnUrl = returnUrl });
}

The view here is a simple form with a text box into which users can paste their code

Entering a code
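If you’re wondering what’s behind that screenshot, the VerifyCode view can be as simple as a form that posts the code back along with the provider and return URL. A rough sketch using the classic HTML helpers (which still work in the beta):

@model VerifyCodeModel

@using (Html.BeginForm("VerifyCode", "Account"))
{
    @Html.HiddenFor(m => m.Provider)
    @Html.HiddenFor(m => m.ReturnUrl)
    <label>Code</label>
    @Html.TextBoxFor(m => m.Code)
    <input type="submit" value="Verify" />
}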

The final action we need to add is the one that receives the post back from this form

[HttpPost]
[AllowAnonymous]
public async Task<IActionResult> VerifyCode(VerifyCodeModel model)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }
    var result = await SignInManager.TwoFactorSignInAsync(model.Provider, model.Code, false, false);
    switch (result)
    {
        case SignInStatus.Success:
            return RedirectToLocal(model.ReturnUrl);
        default:
            ModelState.AddModelError("", "Invalid code");
            return View(model);
    }
}

Again, you should handle errors better than I have here, but it gives you the idea.

The final component is to hook up a class to send the e-mail. In my case this was as simple as using SmtpClient.

using System;
using System.Net;
using System.Net.Mail;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;
using Microsoft.Framework.ConfigurationModel;

namespace IdentityTest
{
    public class EMailMessageProvider : IIdentityMessageProvider
    {
        private readonly IConfiguration _configuration;

        public EMailMessageProvider(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public string Name
        {
            get
            {
                return "Email";
            }
        }

        public async Task SendAsync(IdentityMessage identityMessage, CancellationToken cancellationToken = default(CancellationToken))
        {
            var message = new MailMessage
            {
                From = new MailAddress(_configuration.Get("MailSettings:From")),
                Body = identityMessage.Body,
                Subject = "Portal Login Code"
            };
            message.To.Add(identityMessage.Destination);

            var client = new SmtpClient(_configuration.Get("MailSettings:Server"));
            client.Credentials = new NetworkCredential(_configuration.Get("MailSettings:UserName"), _configuration.Get("MailSettings:Password"));
            await client.SendMailAsync(message);
        }
    }
}

This provider will need to be registered in Startup.cs, so the full identity registration looks like:

services.AddIdentity<ApplicationUser, IdentityRole>(Configuration)
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddMessageProvider<EMailMessageProvider>();

You should now be able to log people in using multifactor authentication just like the big companies. If you’re interested in using SMS messages to verify people both Tropo and Twilio provide awesome phone system integration options.
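For completeness, an SMS provider has exactly the same shape as the e-mail one above. The sketch below leaves the actual gateway call as a comment because the details depend entirely on which provider you pick, and the class name and provider name are my own assumptions rather than anything the framework ships with:

public class SmsMessageProvider : IIdentityMessageProvider
{
    private readonly IConfiguration _configuration;

    public SmsMessageProvider(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    // Assumption: the phone number token provider is registered under "Phone";
    // whatever name you use, it has to match on both sides.
    public string Name
    {
        get { return "Phone"; }
    }

    public Task SendAsync(IdentityMessage identityMessage, CancellationToken cancellationToken = default(CancellationToken))
    {
        // identityMessage.Destination is the phone number and identityMessage.Body
        // contains the code. Hand those to your SMS gateway of choice
        // (Twilio, Tropo, ...) here, typically with a simple HTTP call.
        return Task.FromResult(0); // placeholder so the sketch compiles
    }
}

Register it alongside the e-mail provider with another AddMessageProvider call.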

Update

Sure enough, as I predicted in the disclaimer, 3 hours after I posted this my install of VS2015 CTP 6 finished and all my code was broken. The fixes weren’t too bad though:

  • The Authorize attribute moved and is now in Microsoft.AspNet.Security.
  • The return type of TwoFactorSignInAsync and PasswordSignInAsync has changed to a SignInResult. This changes the code for the Login and VerifyCode actions:
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Login(LoginViewModel model, string returnUrl = null)
{
    if (ModelState.IsValid)
    {
        var signInResult = await SignInManager.PasswordSignInAsync(model.UserName, model.Password, model.RememberMe, shouldLockout: false);
        if (signInResult.Succeeded)
            return Redirect(returnUrl);
        if (signInResult.RequiresTwoFactor)
            return RedirectToAction("SendCode", new { returnUrl = returnUrl });
    }
    ModelState.AddModelError("", "Invalid username or password.");
    return View(model);
}
[HttpPost]
[AllowAnonymous]
public async Task<IActionResult> VerifyCode(VerifyCodeModel model)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }
    var signInResult = await SignInManager.TwoFactorSignInAsync(model.Provider, model.Code, false, false);
    if (signInResult.Succeeded)
        return RedirectToLocal(model.ReturnUrl);
    ModelState.AddModelError("", "Invalid code");
    return View(model);
}
  • EF’s model builder syntax changed to no longer have Int() and String() extension methods. I think that’s a mistake but that’s not the point. It can be fixed by deleting and regenerating the migrations using:

    k ef migration add initial

You may need to specify the connection string in the Startup.cs as is explained here: http://stackoverflow.com/questions/27677834/a-relational-store-has-been-configured-without-specifying-either-the-dbconnectio
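For what it’s worth, the shape of that fix in the CTP-era templates was roughly the snippet below. The configuration key is whatever your config.json uses (the starter template used Data:DefaultConnection:ConnectionString) and the exact extension methods were still moving around between betas, so treat this as a sketch rather than gospel:

public void ConfigureServices(IServiceCollection services)
{
    // Register EF with a SQL Server connection pulled from configuration
    services.AddEntityFramework(Configuration)
        .AddSqlServer()
        .AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(Configuration.Get("Data:DefaultConnection:ConnectionString")));

    // ...identity registration and the rest of ConfigureServices continues as above
}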

2015-02-23

Book Review - Learn D3.js Mapping

I’m reviewing the book “Learn D3.js Mapping” by Thomas Newton and Oscar Villarreal.

Disclaimer: While I didn't receive any compensation for reviewing this book I did get a free digital review copy. So I guess if being paid in books is going to sway my opinion take that into account.

The book starts with an introduction to running a web server to play host for our visualizations. For this they have chosen to use node, which is an excellent choice for this sort of lightweight usage. The authors do fall into the trap of thinking that npm stands for something; honestly, it should stand for Node Package Manager.

The first chapter also introduces the development tools in the browser.

Chapter 2 is an introduction to the SVG format including the graphical primitives and the coordinate system. The ability to style elements via CSS is also explored. One of the really nice things is that the format for drawing paths, which is always somewhat confusing, is covered. Curved lines are even explored. The complexity of curved lines makes a great introduction to the mapping functionality in d3, which acts as an abstraction over the complexity of wavy lines.

In chapter 3 we finally run into d3.js. The enter, exit and update functions, which are key to using d3, are introduced. The explanation is great! These are such important things and so difficult to explain to first time users of d3. Finally the chapter talks about how to retrieve data for the visualization from a remote data source using ajax.

In chapter 4 we get down to business. The first thing we see is a couple of the different projections available within d3. I can’t read about Mercator projections without thinking about the map episode of The West Wing. That it isn’t referenced here is, I think, a serious flaw in the book. Once a basic map has been created we move on to creating bounding boxes, choropleths (that’s a map with colours representing some dimension of the data) and adding interaction through click handlers. No D3 visualization is complete without some nifty looking transitions and the penultimate section of this chapter satisfies that need. Finally we learn how to add points of interest.

Chapter 5 continues to highlight the transition capabilities of D3. This includes a great introduction to zooming and panning the map through the use of pan and zoom behaviours. The chapter then moves on to changing up projections to actually show a globe instead of a two dimensional map. The map even spins! A great example and nifty to see in action.

The GeoJSON and TopoJSON file formats are explained in chapter 6. In addition the chapter explores how to simplify map data. This is actually very important for getting any sort of reasonably sized map onto the internet. At issue is that today’s cartographers are really good and maps tend to have far more detail than we would ever need in a visualization.

The book finishes off with a discussion of how to go about testing visualizations and JavaScript in general.

This is excellent coverage of a quite complex topic: mapping using D3. If you have some mapping to do with D3, purchasing this book might save you a whole lot of headaches.

2015-02-20

Replace Grunt with Gulp in ASP.net 5

The upcoming version of ASP.net and Visual Studio includes first class support for both Grunt and Gulp. I’ve been using Gulp a fair bit as of late but when I created a new ASP.net 5 project I found that the template came with a gruntfile instead of a gulpfile. My tiny brain can only hold so many different tools so I figured I’d replace the default Grunt with Gulp.

Confused? Read about grunt and gulp. In short they are tools for building and working with websites. They are JavaScript equivalents of ant or gnumake, although obviously with a lot of capabilities specific to JavaScript.

The gruntfile.js is pretty basic

// This file is the main entry point for defining grunt tasks and using grunt plugins.
// Click here to learn more. http://go.microsoft.com/fwlink/?LinkID=513275&clcid=0x409
module.exports = function (grunt) {
    grunt.initConfig({
        bower: {
            install: {
                options: {
                    targetDir: "wwwroot/lib",
                    layout: "byComponent",
                    cleanTargetDir: false
                }
            }
        }
    });
    // This command registers the default task which will install bower packages into wwwroot/lib
    grunt.registerTask("default", ["bower:install"]);
    // The following line loads the grunt plugins.
    // This line needs to be at the end of this file.
    grunt.loadNpmTasks("grunt-bower-task");
};

It looks like all that is being done here is running bower. Bower is a JavaScript package manager and running it here will simply install the packages listed in bower.json.

So to start we need to create a new file at the root of our project and call it gulpfile.js

Next we can open up the package.json file that controls the packages installed via npm and add in a couple of new packages for gulp.

"version": "0.0.0",
"name": "IdentityTest",
"devDependencies": {
"grunt": "^0.4.5",
"grunt-bower-task": "^0.4.0",
"gulp": "3.8.11",
"gulp-bower": "0.0.10"
}
}

We have gulp here as well as a plugin that will run bower. These packages basically mirror the ones already in the file for grunt. Once we’re satisfied we’ve replicated the grunt behaviour properly we can come back and take out the two grunt entries. Once that’s done you can run

npm install

from the command line to add these two packages.

In the gulp file we’ll pull in the two required packages, gulp and gulp-bower. Then we’ll set up a default task and also one for running bower

var gulp = require('gulp');
var bower = require('gulp-bower');

gulp.task('default', ['bower:install'], function () {
    return;
});

gulp.task('bower:install', function () {
    return bower({ directory: "wwwroot/lib" });
});

We can test if it works by deleting the contents of wwwroot/lib and running gulp from the command line. (If you don’t already use gulp then you’ll need to install it globally using `npm install -g gulp`.) The contents of the directory are restored and we can be confident that gulp is working.

We can now set this up as the default by editing the project.json file. Right at the bottom is

"scripts": {
    "postrestore": [ "npm install" ],
    "prepare": [ "grunt bower:install" ]
}

We’ll change this from grunt to gulp

"scripts": {
    "postrestore": [ "npm install" ],
    "prepare": [ "gulp bower:install" ]
}

As a final step you may want to update the bindings between Visual Studio actions and the gulp build script. This can normally be done through the task runner explorer; however, at the time of writing this functionality is broken in the Visual Studio CTP. I’m assured that it will be fixed in the next release. In the meantime you can read more about gulp on David Paquette’s excellent blog.

2015-02-12

Apple Shouldn't be Asking for Your Password

My macbook decided that yesterday was a good day to become inhabited by the spirits of trackpads past and just fire random events. When I bought this machine I also bought apple care which, as it turns out, was a really, really good idea. It cost me something like $350 and has, so far, saved me:

  • Power adapter: $100
  • Trackpad: $459
  • Screen: $642
  • Total: $1201

In the process of handing my laptop over for the latest round of repairs the apple genius asked for my user name and password.

I blinked.

“You want what?”

The genius explained that to check that everything was working properly they would need to log in and check things like sound and network. This is, frankly, nonsense. There is no reason that the tech should need to log into the system to perform tests of this nature. In the unlikely case that the sophisticated diagnostic tools they have at their disposal can’t check the system, it should be standard procedure to boot into a temporary, in-memory version of OSX.

When I pushed back they said I could create a guest account on the machine. This is an okay solution but it still presents an opportunity to leverage local privilege escalation exploits, should they exist. It is certainly not unusual for computer techs to steal data from the computers they are servicing. Why give Apple that opportunity?

What I find more disturbing is that a large computer company that should know better is teaching people that it is okay to give out passwords. It isn’t. If there were 10 commandments of computer security then

Thou shalt not give out thy password

Would be very close to the top of the list. At risk is not just the integrity of that computer but also of all the passwords stored on that computer. How many people have chrome save their passwords for them? Or have active sessions that could be taken over by an attacker with their computer password? Or use the same password on many systems or sites? I don’t think a one of us could claim that none of these apply to them.

I don’t know why Apple would want to take on the liability of knowing people’s passwords. When people, even my wife, offer to give me their passwords I run from the room, fingers in ears screaming “LA LA LA LA LA” because I don’t want to know. If something goes wrong I want to be above suspicion. If there is some other way of performing the task without knowing the password then I’ll take it, even if it is slightly harder.

Apple, this is such an easy policy change, please stop telling people it is okay to give out their password. Use a live CD instead or, if it is a software problem, sit with the customer while you fix it. Don’t break the security commandments, nothing good can come of it.

2015-02-12

Visual Studio 2015 Not Launching

If you’re having a problem with Visual Studio 2015 not launching then, perhaps, you’re in the same boat as me. I just installed VS 2015 CTP and when I went to launch it the splash screen would blink up then disappear at once. Starting in Safe Mode didn’t help and there was nothing in any log I could find to explain it. In the end I found the solution was to open up regedit and delete the 14.0 keys under HKEY_CURRENT_USER\Software\Microsoft\VisualStudio. Any settings you had will disappear but it isn’t like you could get into Visual Studio to use those settings anyway.

Regedit
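If you would rather not click around in regedit, the same cleanup can be done from an elevated command prompt. The key names below are an assumption based on a default VS 2015 CTP install, so check what is actually under that path before deleting:

reg delete "HKCU\Software\Microsoft\VisualStudio\14.0" /f
reg delete "HKCU\Software\Microsoft\VisualStudio\14.0_Config" /f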
Hopefully this helps somebody.

2015-01-30

Sending Message to Azure Service Bus Using REST

Geeze, that is a long blog title.

In this post I’m going to explore how to send messages to Azure service bus using the HTTPS end point. HTTP has become pretty much a lingua franca when it comes to sending messages. While it is a good thing that almost every platform has a web client, HTTP is not necessarily a great protocol for this. There is quite a bit of overhead, some from HTTP itself and some from TCP; at least we’re not sending important messages over UDP.

In my case here I have an application running on a device in the field (that’s what we in the oil industry call anything that isn’t in our nice offices). The device is running a full version of Windows but the application from which we want to send messages is written in an odd sort of programming language that doesn’t have an Azure client built for it. No worries, it does have an HTTP client.

The first step is to set up a queue to which you want to send your messages. This has to be done from the old portal. I created a namespace and in that I created a single queue.

Portal

Next we add a new queue and in the configuration for this queue we add a couple of access policies:

Access policies

I like to add one policy for each combination of permissions. I don’t want a single role, other than the manager, to be able to both send and listen on a queue. It is just the principle of least privilege in action.

Now we would like to send a message to this queue using REST. There are a couple of ways of getting this done. The way I knew about was to generate a WRAP token by talking to the access control server. However, as of August 2014 the ACS namespace is not generated by default for new service bus namespaces. I was pointed to an article about it by my good buddy Alexandre Brisebois*. He also recommended using Shared Access Signatures instead.

A shared access signature is a token that is generated from the access key and an expiry date. I’ve seen these used to grant limited access to a blob in blob storage but didn’t realize that there was support for them in service bus. These tokens can be set to expire quite quickly, meaning that even if they fall into the hands of an evildoer they’re only valid for a short window which has likely already passed.

Generating one of these for service bus is really simple. The format looks like

SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}
  • 0 is the URL-encoded address of the queue
  • 1 is the signature: HMACSHA256, using the key, of [[queue address] + [new line] + [expiry date in seconds since epoch]]
  • 2 is the expiry, again in seconds since epoch
  • 3 is the key name, in our case Send

This will generate something that looks like

SharedAccessSignature sr=https%3a%2f%2fultraawesome.servicebus.windows.net%2fawesomequeue%2f&sig=WuIKwkBuB%2fjxMgK6x79o3Xrf4nKZtWX9defu7HLdzWg%3d&se=1422636195&skn=Send

With that in place I can now drop to CURL and attempt to send a message

curl -X POST https://ultraawesome.servicebus.windows.net/awesomequeue/messages -H "Authorization: SharedAccessSignature sr=https%3a%2f%2fultraawesome.servicebus.windows.net%2fawesomequeue%2f&sig=WuIKwkBuB%2fjxMgK6x79o3Xrf4nKZtWX9defu7HLdzWg%3d&se=1422636195&skn=Send" -H "Content-Type: application/json" -d "I am a message"

This works like a dream and in the portal we can see the message count tick up.

In our application we need only generate the shared access token and we can send the message. If the environment is lacking the ability to do HMACSHA256 then we could call out to another application or even pre-share a token with a long expiry, although that would invalidate the advantages of time limiting the tokens.
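For the record, here is roughly what that token generation looks like in C#. This is my own sketch rather than anything from an official SDK; the queue URI, key name and key are placeholders:

using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;
using System.Web; // for HttpUtility; requires a reference to System.Web

public static class SasTokenBuilder
{
    // resourceUri: e.g. "https://ultraawesome.servicebus.windows.net/awesomequeue"
    // keyName:     the policy name, e.g. "Send"
    // key:         the shared access key for that policy
    public static string Build(string resourceUri, string keyName, string key, TimeSpan timeToLive)
    {
        // Expiry is expressed in seconds since the Unix epoch
        var sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        var expiry = Convert.ToInt64(sinceEpoch.Add(timeToLive).TotalSeconds)
            .ToString(CultureInfo.InvariantCulture);

        // Sign the URL-encoded resource URI + a new line + the expiry
        var encodedUri = HttpUtility.UrlEncode(resourceUri);
        var stringToSign = encodedUri + "\n" + expiry;

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            var signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

            return string.Format(CultureInfo.InvariantCulture,
                "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                encodedUri, HttpUtility.UrlEncode(signature), expiry, keyName);
        }
    }
}

The resulting string goes into the Authorization header exactly as in the curl example above.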

*We actually only met once at the MVP summit but I feel like we’re brothers in arms.

2015-01-24

CSS Animated Confirmations

I’ve been playing a little bit with changing how my app acknowledges that actions are running and have completed. I use a system, from time to time, that doesn’t give good feedback when an action is running. It drives me up the wall and puts me back in my Comput 301 class with the teacher talking about the importance of feedback. Rapid feedback is what makes the difference between an application feeling snappy and it feeling slow. Even if the actions take the same time to complete, a system that indicates right away that it is working will make users feel far better.

So knowing that my users are more and more on modern browsers I thought I would give some animation a try.



This was my first attempt and I’m going to put it into the field and see what people think. It is implemented using the CSS animation property. I start with some keyframe definitions:

@keyframes save-sucessful-animation {
    0%   { background-color: white; }
    25%  { background-color: #DFF2BF; }
    75%  { background-color: #DFF2BF; }
    100% { background-color: white; }
}

I’ve listed the un-prefixed version here but you’ll need to prefix with -moz or -webkit or -ms for your needs. You could also use a CSS precompiler to do that for you (check out http://bourbon.io/). The animation here changes the colour to green, holds it for 50% of the animation, and then turns it back to white.

Next we need to apply the style to our element

.saveSuccessful {
    animation: save-sucessful-animation 3s forwards;
}

And finally hook up some JavaScript to trigger it on click

$(".trigger").click(function(event){
var target = $(event.target).siblings(".target");
if(target.hasClass("saveSuccessful"))
{
var replacement = target.clone(true);
target.before(replacement);
target.remove();
}
else{
target.addClass("saveSuccessful");
}
});

I remove the element if the class already exists as that is the easiest way to restart the animation. (See http://css-tricks.com/restart-css-animation/)

Now when users click on the button they get a nifty little animation while the save is going on. I’ve ignored failure conditions here but this is going to be a big win already for my users. Here is the full listing:

<style>
    @-moz-keyframes save-sucessful-animation {
        0%   { background-color: white; }
        25%  { background-color: #DFF2BF; }
        75%  { background-color: #DFF2BF; }
        100% { background-color: white; }
    }
    @-ms-keyframes save-sucessful-animation {
        0%   { background-color: white; }
        25%  { background-color: #DFF2BF; }
        75%  { background-color: #DFF2BF; }
        100% { background-color: white; }
    }
    @-webkit-keyframes save-sucessful-animation {
        0%   { background-color: white; }
        25%  { background-color: #DFF2BF; }
        75%  { background-color: #DFF2BF; }
        100% { background-color: white; }
    }
    @keyframes save-sucessful-animation {
        0%   { background-color: white; }
        25%  { background-color: #DFF2BF; }
        75%  { background-color: #DFF2BF; }
        100% { background-color: white; }
    }
    .saveSuccessful {
        -moz-animation: save-sucessful-animation 3s forwards;
        -webkit-animation: save-sucessful-animation 3s forwards;
        -o-animation: save-sucessful-animation 3s forwards;
        animation: save-sucessful-animation 3s forwards;
    }
</style>
<script>
    $(function () {
        $(".trigger").click(function (event) {
            var target = $(event.target).siblings(".target");
            if (target.hasClass("saveSuccessful")) {
                var replacement = target.clone(true);
                target.before(replacement);
                target.remove();
            }
            else {
                target.addClass("saveSuccessful");
            }
        });
    });
</script>
2015-01-22

Getting Bower Components in Gulp

I’m embarking on an adventure in which I update the way my work project handles JavaScript. Inspired by David Paquette’s blog I’m moving to using gulp and dropping most of the built in bundling and minimization stuff from ASP.net. This is really just a part of a big yak shaving effort to try out using react on the site. I didn’t expect it to turn into this huge effort but it is really going to result in a better solution in the end. It is also another step on the way to aligning how we in the .net community develop JavaScript with the way the greater web development community develops JavaScript. It will be the way that the next version of ASP.net handles these tasks.

One of the many yaks that need shaving is moving many of my JavaScript dependencies to using bower. Bower is to JavaScript as nuget is to .net, as CPAN is to perl, or as gem is to ruby: it is a package manager. There is often some confusion between bower and npm as both are package managers for JavaScript. I like to think of npm as the package manager for build infrastructure and things running on node, whereas bower handles packages that are sent over the wire to my clients.

So on a normal project you might have jquery, underscore/lodash and require.js all installed by bower.

We would like to bundle up all of these bower components into a single minified file along with all our other site specific JavaScript. We can use Gulp, a build tool for JavaScript, to do that. Unfortunately bower packages may contain far more than they actually need to and there doesn’t seem to be a good standard in place to describe which file should be loaded into your bundle. Some projects include a bower.json file that defines a main file to include. This is an excerpt from the bower.json file from the flux library:

{
    "name": "flux",
    "description": "An application architecture based on a unidirectional data flow",
    "version": "2.0.2",
    "main": "dist/Flux.js",
    ...

Notice the main file listed there. If we could read all the bower.json files from all our bower packages then we could figure out which files to include. There is, as with all things gulp, a plugin for that. You can install it by running

npm install --save-dev main-bower-files

Now you can reference this from your gulp file by doing

var mainBowerFiles = require('main-bower-files');

You can plug this task into gulp like so

gulp.task('3rdpartybundle', function () {
    gulp.src(mainBowerFiles())
        .pipe(uglify())
        .pipe(concat('all.min.js'))
        .pipe(gulp.dest('./Scripts/'));
});

From time to time you might find a package that fails to properly specify a main file. This is a bit annoying and certainly something you should consider fixing and submitting back to the author. To work around it you can specify an override in your own bower.json file.

"dependencies": {
"marty": "~0.8.3",
"react": "~0.12.2"
},
"overrides": {
"superagent":{
"main": "superagent.js"
}
}

Great! Okay now what if you have an ordering dependency? Perhaps you need to load requirejs as the last thing in your bundle. This can be done through the ordering plugin. Start with npm and install the plugin:

npm install --save-dev gulp-order

Again you’ll need to specify the package inside the gulp file

var order = require('gulp-order');

Now you can plug the ordering into the build pipeline

gulp.task('3rdpartybundle', function () {
    gulp.src(mainBowerFiles({ paths: { bowerJson: 'bower.json', bowerDirectory: 'bower_components' } }))
        .pipe(order(["*react*", "*requirejs*"]))
        .pipe(uglify())
        .pipe(concat(config.all3rdPartyFile))
        .pipe(gulp.dest('./Scripts/'));
});

The ordering plugin doesn’t, at the time of writing, support matching the last rule so making something appear last is harder than making it appear first. There is, however, a pull request out to fix that.

This is really just scratching the surface of the nifty stuff you can get up to with gulp. I’m also building typescript files, transcompiling es6 to es5 and linting my JavaScript. It’s a brave new world!

2015-01-15

Importing On-Premise SQL to SQL Azure

Microsoft have done a great job building Azure and SQL Azure. However, one of the places where I feel like they have fallen down is how to get your on premise data up into Azure. It isn’t that there isn’t a good way; it is that there are a million different ways to do it. If you look at the official MSDN entry there are 10 different approaches. How are we to know which one to use? I think we could realistically reduce the number to two methods:

  1. Export and import a data-tier application
  2. Synchronize with an on premise database

The first scenario is what you want in most instances. It will require downtime for your application as any data created between exporting your database and importing it will not be seen up in Azure SQL. It is a point in time migration.

If you have a zero downtime requirement or your database is so large that it will take an appreciable period of time to export and import then you can use the data sync. This will synchronize your database to the cloud and you then simply need to switch over to using your up to date database in the cloud.

This article is going to be about method #1.

The first step is to find the database you want to export in SQL Server Management Studio

Database selected

Now under tasks select export data-tier application. This will create a .bacpac file that is portable to pretty much any SQL server.

Select export data-tier application

Here you can export directly to a blob storage container on Azure. I’ve blanked out the storage account and container here for SECURITY. I just throw the backup in an existing container I have for moving data about. If you don’t have a container just create a new storage account and put a container in it.

Imgur

The export may take some time depending on the size of your database but the nice part is that the bacpac file is uploaded to azure as part of the process so you can just fire and forget the export.

Now jump over to the Azure management portal. I like to use the older portal for this as it provides some more fidelity when it comes to finding and selecting your bacpac file. Here you can create a new database from an export. Unfortunately there doesn’t seem to be a way to apply an import to an existing database. That’s probably not a very common use case anyway.

Create database

Here we can specify the various settings for the new database. One thing I would suggest is to use a more performant database tier than you might usually. It will speed up the import and you can always scale down after.

Import settings

Now you just need to wait for the import to finish and you’ll have a brand new database on Azure complete with all your data.

Import complete