Simon Online

2010-12-15

Selenium - UI Testing - Part 1

I had a bug to fix earlier this week which was really just a UI bug. The details aren’t all that important, but it was something like a field in a popup div not being properly updated. This felt like something which could use not just a test in the back end to ensure the correct data was being passed back, but also a test in the front end to make sure it was being displayed. Our front end is all web based, as most front ends tend to be these days. Years ago I did some work with Mercury QuickTest, as it was then called (I believe it is now owned by HP, which makes it pure evil, just like HP-UX). I looked at a couple of tools for automating the browser and decided on Selenium, partially because it appeared to be the most developed and partially because I couldn’t refuse a tool with an atomic mass of 78.96.

This is the part where I warn you that I really have no idea what I’m doing. I’m no Selenium expert and I’ll probably change my mind about how best to implement it. I’ll post updates as I change my mind.

I started by installing the Selenium IDE and, on another computer, the remote control. The IDE is just a plugin for Firefox while the RC is a Java based tool which happily runs on Windows. I opened up a port for it in the Windows firewall… oh, who am I kidding, I turned the damn thing off.

I recorded a number of actions using the IDE and added a number of verifications (you can add a check by right clicking on an element and selecting one of the verify options). I don’t consider these to be unit tests since they tend to cross over multiple pages. What I’m creating are workflow tests or behavioural tests. These could be used as part of BDD in conjunction with a tool such as Cucumber or SpecFlow. As such I don’t really care how long these tests take to run; I don’t envision them being run as part of our continuous integration process but rather as part of hourly builds. If things really get out of hand (and they won’t, our application isn’t that big) then the tests can be distributed over a number of machines and even run in the cloud on some Amazon instances.

In the Selenium IDE I exported the tests to C# so they could be run in our normal build process. Unfortunately the default export template exports NUnit tests, and our project makes use of the Visual Studio test framework. There is no technical reason for not having both testing frameworks, but I don’t want to burden the other developers with a variety of frameworks. So my job for tomorrow is to explore alternatives to the standard export. I am also not crazy about having to maintain the tests in C#; it requires that a developer be involved in testing changes when it should really be easy enough for a business person to specify the behaviour. I have some ideas about running the transformation from the Selenium file format (which is an HTML table) into C# using either T4 or the transformation code extracted from the IDE running in a server-side JavaScript framework.
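
To give a flavour of what this ends up looking like, here is a rough sketch of one of these tests rewritten for the Visual Studio test framework; the element locators and expected values are hypothetical placeholders, not our actual application.

using Selenium;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PopupWorkflowTests
{
    private ISelenium selenium;

    [TestInitialize]
    public void SetUp()
    {
        // host, port, browser string and base URL are placeholders
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
        selenium.Start();
    }

    [TestCleanup]
    public void TearDown()
    {
        selenium.Stop();
    }

    [TestMethod]
    public void FieldInPopupIsUpdated()
    {
        // walk through the workflow the IDE recorded, then verify the popup field
        selenium.Open("/");
        selenium.Click("id=openPopup");
        selenium.Type("id=nameField", "Simon");
        selenium.Click("id=save");
        Assert.AreEqual("Simon", selenium.GetValue("id=nameField"));
    }
}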

I’ll let you know what I come up with. Chances are it will be crazy, because sensible isn’t all that fun.

Useful Blogs

  • Design of Selenium tests for ASP.net – A blog with some specific suggestions about how to work with Selenium and ASP.net. I’m not in 100% agreement with his ideas but I might change my mind later.
  • Adam Goucher’s blog – Adam is a hell of a nice guy who has been helping me out with Selenium on the twitter.
2010-11-19

Web.config Tranformation Issues

As you all should be, I am using web.config transformations during the packaging of my ASP.net web applications. Today I was working on a project which didn’t have transformations defined, so I thought I would go ahead and add them. All went well until I built a deployment package and noticed that my transformations were not being applied. Looking at the build log I found warnings like these

C:\Users\stimms\Documents\Visual Studio 2010\Projects\SockPuppet\Web.Debug.config(5,2): Warning : No element in the source document matches '/configuration'  
Not executing SetAttributes (transform line 18, 12)  
Output File: obj\Debug\TransformWebConfig\transformed\Web.config

This started a fun bit of debugging during which I removed everything from the web.config and built it back up from the ground up. Eventually I traced the problem to a holdover from some previous version of .net: in the Web.config the configuration element was defined as

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">

Changing that to just

<configuration>

solved the problem. I imagine this is some sort of odd XML namespace issue; hopefully if you run into this you’ll find this post and not waste an hour like I did.
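
For context, a minimal transform file looks something like the following (the connection string here is a made-up example). With the stray xmlns on the Web.config root, the '/configuration' XPath in a file like this never matches, so nothing gets applied.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="Default"
         connectionString="Server=.;Database=SockPuppetDev;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>
</configuration>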

2010-10-05

MySQL in an NServiceBus Handler

I have an autonomous component in my NServiceBus implementation which needs to talk to a MySQL database. When I first implemented the handler I configured the endpoint to be transactional, mostly because I wasn’t too sure about the difference between configuring AsA_Service and AsA_Client, and transactions sounded like something I would like. What the transactional endpoint does is wrap the endpoint in a distributed transaction. A distributed transaction is a mechanism which allows you to share a transaction between a number of databases, so that if you’re writing to several databases and the commit to one of them fails, the transactions in the other databases roll back.
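
To make that concrete, here is a minimal sketch using System.Transactions, which is what the distributed transaction machinery looks like from .NET code (the connection strings are made up):

using System.Data.SqlClient;
using System.Transactions;

using (var scope = new TransactionScope())
{
    using (var first = new SqlConnection("Server=.;Database=Orders;Integrated Security=True"))
    {
        first.Open(); // enlists in the ambient transaction
        // ... write to the first database ...
    }
    using (var second = new SqlConnection("Server=other;Database=Audit;Integrated Security=True"))
    {
        second.Open(); // a second durable resource promotes the transaction to the DTC
        // ... write to the second database ...
    }
    // if Complete() is never reached, every enlisted database rolls back
    scope.Complete();
}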

However when I went to test the handler it failed with an error:

MySql /Net connector does not support distributed transactions

I solved it by configuring the endpoint AsA_Client. The problem with that configuration is that the handler wipes the queue on startup, which isn’t ideal. None of the built-in configurations are quite right for our situation, so we override the configuration as described here: http://www.nservicebus.com/GenericHost.aspx.
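
For reference, here is roughly what that override looks like, assuming the NServiceBus 2.x generic host and fluent configuration API; this is a sketch, not our exact code. The idea is to get AsA_Client’s non-transactional behaviour without the queue purge.

using NServiceBus;

public class EndpointConfig : IConfigureThisEndpoint, IWantCustomInitialization
{
    public void Init()
    {
        // non-transactional like AsA_Client, but keep messages across restarts
        Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
                .IsTransactional(false)
                .PurgeOnStartup(false)
            .UnicastBus()
                .LoadMessageHandlers();
    }
}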

2010-06-28

Fixing Table Identities

I make heavy use of Red Gate’s excellent SQL Compare tools. I know I’m a bit of a shill for them, but they are time savers when dealing with multiple environments (development, testing, production), which is a pretty common occurrence in any sort of agile development. One flaw is that they often leave the identity seeds out of sync in the destination database. Say you have a table Students with 15 records in development and 30 in production; performing a copy often brings along the identity seed even if you don’t select that table for syncing. This results in duplicate key errors whenever a new record is inserted.

For weeks I have been saying “I should write a script to check and fix that”. Well, I finally did it:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE dbo.FixTableIdentities
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @currentKeyValue int
    DECLARE @currentIdentityValue int
    DECLARE @toRun nvarchar(500)
    DECLARE @tableToCheck nvarchar(500)
    DECLARE @idColumnCount int

    -- loop over every user table in the database
    DECLARE db_cursor CURSOR FOR SELECT name FROM sysobjects WHERE type = 'U'

    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @tableToCheck

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- only consider tables which have a column named id
        SELECT @idColumnCount = COUNT(*) FROM syscolumns WHERE id = OBJECT_ID(@tableToCheck) AND name = 'id'
        IF (@idColumnCount = 1)
        BEGIN
            -- compare the current identity seed against the actual maximum id in the table
            SELECT @currentKeyValue = IDENT_CURRENT(@tableToCheck)
            SET @toRun = N'select @currentIdentityValue = max(id) from ' + @tableToCheck;
            EXEC sp_executesql @toRun, N'@currentIdentityValue int OUTPUT', @currentIdentityValue OUTPUT;
            IF (@currentIdentityValue <> @currentKeyValue)
            BEGIN
                -- reseed so the next insert gets max(id) + 1
                DBCC CHECKIDENT (@tableToCheck, RESEED, @currentIdentityValue)
            END
        END
        FETCH NEXT FROM db_cursor INTO @tableToCheck
    END
    CLOSE db_cursor
    DEALLOCATE db_cursor
END
GO

When run, this procedure will go through all your tables and ensure that the identity seed on the id column is in sync with the data. At the moment it only looks at columns called id.
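
Running it after a sync is then just the following; DBCC CHECKIDENT with NORESEED is also handy for eyeballing a single table’s seed without changing anything.

EXEC dbo.FixTableIdentities

-- report the current seed for one table without changing it
DBCC CHECKIDENT ('Students', NORESEED)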

2009-12-13

Measuring Language Productivity

I recently asked a question over at stackoverflow about the productivity gains in various languages.

Does anybody know of any research or benchmarks of how long it takes to develop the same
application in a variety of languages? Really I’m looking for Java vs. C++ but any
comparisons would be useful. I have the feeling there is a section in Code Complete
about this but my copy is at work.

I was really asking because I wanted to help justify my use of the Pellet semantic reasoner over the FaCT++ reasoner in a paper.

What emerged from the question was that there really isn’t much good research into language productivity, and what research had been done dated from around 2000. What makes research like this difficult is finding a large sample size and finding problems which don’t greatly favour one class of language over another. That got me thinking: what better source of programmers is there than stackoverflow? There are developers from all across the spectrum of languages and abilities; there is even a pretty good geographic distribution.

Let’s do this research ourselves! I propose a stackoverflow language programming contest. We’ll develop a suite of programming tasks which try as hard as possible not to focus on the advantages of one particular language, and gather metrics. I think we should gather:

  • Time taken to develop
  • Lines of code required
  • Runtime over the same input
  • Memory usage over the same input
  • Other things I haven’t thought of

I’ll set up a site to gather people’s solutions to the problems and collate statistics, but the problems should be proposed by the community. We’ll allow people to check out the problem set, time how long it takes them to complete it, and then submit the code for their answers. I’ll run the code and benchmark the results and, after say two weeks of having the contest open, publish my results as well as the dataset for anybody else to analyze.

2009-12-09

Abuse of Extension Methods

In the code base I’m working with we have a number of objects which augment existing objects. For instance, I have a User object generated by my ORM which looks like

 string userName;  
 string firstName;  
 string lastName;  
 int companyID;  
 int locationID;

In order to display user objects it is useful to have the name of the company and the location which are stored in another table. To limit the amount of stuff being passed around we defined an ExtendedUser which extends User and adds the fields

 string companyName;  
 string locationName;

Creating these extended classes requires passing in a base class and then pulling all the properties off of it and assigning them to the extended class. This is suboptimal because when a new property is added to the base class it also has to be added to the code which copies the properties into the extended class. To address this I created a method which iterates over the properties in the base class and assigns them to the extended class.

public static void CopyProperties(this object destination, object source)
{
    destination.CopyProperties(source, new List<string>());
}

public static void CopyProperties(this object destination, object source, List<string> ignoreList)
{
    var sourcePropertyNames = source.GetType().GetProperties().Select(p => p.Name);
    foreach (var property in destination.GetType().GetProperties())
    {
        if (sourcePropertyNames.Contains(property.Name) && !ignoreList.Contains(property.Name))
        {
            var sourceProperty = source.GetType().GetProperty(property.Name);
            // compare the property types themselves, not the PropertyInfo types, and skip nulls
            if (property.CanWrite && sourceProperty.PropertyType == property.PropertyType && sourceProperty.GetValue(source, null) != null)
                property.SetValue(destination, sourceProperty.GetValue(source, null), null);
        }
    }
}

If you have sharp eyes you’ll notice that I’ve defined this method as an extension method. This allows me to do insane things like

ExpandedClass expandedClass = new ExpandedClass();  
expandedClass.CopyProperties(cls);  
expandedClass.LocationName = GetLocationNameFromID(cls.LocationID);  
expandedClass.CourseName = GetCourseNameFromID(cls.CourseID);  
expandedClass.InstructorName = GetInstructorNameFromID(cls.InstructorID);  
expandedClass.CompanyName = GetCompanyNameFromID(cls.CompanyID);

I can also do this for any other two classes which share property names.

2009-11-14

Persisting in MVC

Rob Conery, who is a giant in the ASP.net MVC world (he wrote the ASP.net storefront and is also the author of a 200 page book on inversion of control), is calling for suggestions about an active record persistence engine. I wanted to present how I think it should be done, which is just a bit too long for a comment on the tail end of Rob’s blog. I’ve been reading a lot lately about areas in MVC2 and the portable areas project which is part of the MVC Contrib project. Now, portable areas aren’t yet finalized, but the idea is that these areas will be able to be dropped into a project and will provide some set of controllers and views delivering a swack of functionality.

The example I’ve seen bandied about is that of a forum. Everybody has written a forum or two in their time; now you can just drop in an area and get the functionality for free. I can see a lot of these components becoming available on CodePlex or GitHub. Component based software like this is “the future”, just like SOA was the future a few years ago. The problem with components like this is that it is difficult to keep things consistent across the various components. There is a spectrum of self containment: if each component is fully self contained then it has to provide for its own data persistence as well as any other services it consumes.

I have helpfully harnessed the power of MS Paint to create an illustration of the spectrum between a component being self contained and being reliant on services provided for it. If anybody is interested, my artistic skills are available for hire. The further to the left, the more portable the component; the further to the right, the more reliant components are on services being provided for them and the less portable they are. We want to be towards the left, because left is right.

If these components really are the future then we need to find a way to couple the components and provide communication between them. This is where MEF steps up to the plate. What I’m proposing is that rather than spending our time creating unified interfaces for storing data, we create a storage-method agnostic object store. Components would call out to MEF for a persistence engine and then pass in whatever it was they wanted to save. The engine should handle creating database tables on the fly, or files, or web service callouts to a cloud. That is what I believe should exist instead of a more concrete IActiveRecordEngine.
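
To sketch what I mean in code, assuming MEF’s attributed programming model (the IObjectStore contract and the class names here are hypothetical, mine alone):

using System.ComponentModel.Composition;

// the method agnostic contract a component asks MEF for; hypothetical
public interface IObjectStore
{
    void Save(object entity);
    T Load<T>(object id);
}

// one possible engine; swapping it out is a deployment decision, not a code change
[Export(typeof(IObjectStore))]
public class XmlFileObjectStore : IObjectStore
{
    public void Save(object entity) { /* serialize to a file on the fly */ }
    public T Load<T>(object id) { /* deserialize */ return default(T); }
}

// a portable component declares its need and stays ignorant of the implementation
public class ForumArea
{
    [Import]
    public IObjectStore Store { get; set; }
}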

What’s the advantage? We keep the standard interface for which Rob is striving, but we can now have that interface implemented by a pluggable component rather than having it hard coded into a web.config.

The second part of Rob’s post is about creating opinionated controllers. I have to say I’m dead against that. I agree with the goal of implementing the basic CRUD operations for people; in fact, I’m in love with it. What I don’t like is that it is implemented in a base class from which my controllers descend. If I’m reading the post correctly, the base controller implements actual actions. It is dangerous to implement actions willy-nilly; actions could be exposed without people even realizing they exist. Chances are very good that users will just leave the actions implemented rather than overriding them with no-op actions.

Another option is that I’m reading the post incorrectly and the methods in the base class are private and not actions. I like that a lot more, but even more I like generating template controllers. Subsonic 3 follows this methodology and it is really nice to be able to twiddle with bits of the implementation. What’s more, the generation doesn’t have to stop at the controller: if the implementation in the controller is known, then why not generate the views as well?

All in all I like the idea of improving the object store support in ASP.net MVC but I would like it to be as flexible as possible.

2009-10-28

Quick post on getting node information from Umbraco with IronPython

I was just working with the IronPython page type in Umbraco and needed to get a property from the page I was on. This can be done through the Node API found in umbraco.presentation.nodeFactory. In order to pull a value you will need to import that part of the API

import umbraco.presentation.nodeFactory
from umbraco.presentation.nodeFactory import *

Now you can get the current node and query its properties

print Node.GetCurrent().GetProperty("Address").Value

2009-10-07

xVal PostSharp 1.0 Demo Project

After my last post I thought I would look at the demo project for xVal 1.0 and see if I could get it working with PostSharp. It was set up a little differently from my projects, but I figured it could still be improved with PostSharp. My first issue was that the method I was intercepting was in the entity itself rather than in a repository. This meant that there were methods in the entity which I didn’t wish to intercept. There were two classes of those:

  1. Accessor methods – we don’t need to intercept getters
  2. Internal methods – ASP.net MVC uses reflection to examine the internals of the data classes in order to bind form results to them. We can avoid intercepting these by ignoring methods whose names start with “.”

Next, because the entity already contained all of the data it needed to persist, the persistence method didn’t have any arguments. In the previous post I assumed that this would always be the case. You know what they say about assuming: if you assume you make a jerk out of everybody in Venice. Pretty sure that is the saying.

This required an expansion of the current validator

public override void OnEntry(MethodExecutionEventArgs eventArgs)
{
    // skip getters and compiler-generated internals like constructors
    if (IsAccessor(eventArgs) || IsInternal(eventArgs))
        return;
    if (HasArguments(eventArgs))
    {
        CheckPassedInformation(eventArgs);
    }
    else
    {
        CheckSelfContainedEntity(eventArgs);
    }
    base.OnEntry(eventArgs);
}

private static bool HasArguments(MethodExecutionEventArgs eventArgs)
{
    return eventArgs.GetReadOnlyArgumentArray() != null && eventArgs.GetReadOnlyArgumentArray().Count() > 0;
}

private static void CheckPassedInformation(MethodExecutionEventArgs eventArgs)
{
    // validate the entity passed in as the first argument
    var toValidate = eventArgs.GetReadOnlyArgumentArray()[0];
    var errors = DataAnnotationsValidationRunner.GetErrors(toValidate);
    if (errors.Any())
        throw new RulesException(errors);
}

private static void CheckSelfContainedEntity(MethodExecutionEventArgs eventArgs)
{
    // validate the instance the method was invoked on
    var toValidate = eventArgs.Instance;
    var errors = DataAnnotationsValidationRunner.GetErrors(toValidate);
    if (errors.Any())
        throw new RulesException(errors);
}

public bool IsAccessor(MethodExecutionEventArgs eventArgs)
{
    return eventArgs.Method.Name.StartsWith("get_");
}

public bool IsInternal(MethodExecutionEventArgs eventArgs)
{
    return eventArgs.Method.Name.StartsWith(".");
}

You can see I did a little bit of clean code refactoring in there to extract some methods. Now two different methods of saving information are checked.

The lesson here seems to be that the way in which you construct your data persistence layer has an effect on the construction of the validator, such that there is no generic aspect which you can simply download and use.

You can download my modified xVal project here. In order to get it running you’ll need to have PostSharp installed but you’re going to want it for lots of other stuff so get going on installing it.

Oh, just one more note: when you’re trying it out, be sure to disable JavaScript so that the page actually posts back and doesn’t validate using the client-side JavaScript validation.

2009-09-16

Cleaning Up xVal Validation With PostSharp

Even though the new ASP.net MVC 2.0 framework comes with built-in validation, it is useful to look at some alternatives. After all, you don’t want Microsoft telling you how to do everything, do you? One of the better validation frameworks is xVal, which just went 1.0. This release adds support for a number of new features, probably the coolest of which is AJAX-based validation for complex input.

However, xVal does have one drawback: it is quite verbose to implement. In every method which alters data you will have to put something like

var errors = DataAnnotationsValidationRunner.GetErrors(this).ToList();

if (errors.Any())
    throw new RulesException(errors);

This is a bit repetitive and kind of tiresome. Sure, we could extract a method and just call that every time we edit data, but that doesn’t really solve the underlying problem of having code which is repeated.

Enter PostSharp.

PostSharp is an aspect oriented addon for .net languages. It actually does most of its weaving in a post-build step, modifying the MSIL the compiler generates. We can extract the validation into an aspect.

[Serializable]
public class ValidateAttribute : OnMethodBoundaryAspect
{
    public ValidateAttribute() { }

    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        if (eventArgs.GetReadOnlyArgumentArray() != null && eventArgs.GetReadOnlyArgumentArray().Count() > 0)
        {
            var toValidate = eventArgs.GetReadOnlyArgumentArray()[0];
            var errors = DataAnnotationsValidationRunner.GetErrors(toValidate);
            if (errors.Any())
                throw new RulesException(errors);
        }
    }
}

Here we create an OnEntry method which is called before the method we are intercepting. We skip any method with no arguments since it isn’t likely to be updating data. Then we extract the arguments and pass them into the validator for it to do its business.

Finally we give the PostSharp framework a bit of information about where to apply this aspect

AssemblyInfo.cs

[assembly: CertificateSearch.Aspects.Validate(AttributeTargetTypes = "CertificateSearch.Models.*Repository")]

I have applied it to all the methods in classes whose names end with Repository in the Models namespace. That covers all the data modification methods in the project. I can now add new methods without the cost of ensuring that I validate each one.
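
So, for example, a repository like this hypothetical one (the names are made up, not from the demo project) is now validated on every call without containing a single line of validation code:

namespace CertificateSearch.Models
{
    public class CertificateRepository
    {
        // matched by the AttributeTargetTypes pattern above, so the
        // Validate aspect's OnEntry runs before this method body does
        public void SaveCertificate(Certificate certificate)
        {
            // persistence only; a RulesException is thrown before we
            // get here if the certificate fails validation
        }
    }
}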