2014-07-21

Getting ASP.net vNext Running on OSX

I’m giving a talk this week at the local .net user group about ASP.net vNext. I thought I would try to get it running on my Mac because that is a pretty nifty demo.

  1. If you don’t have mono installed at all then go grab the latest binaries. You need to have a functioning mono installation to build mono. Weird, huh?

  2. Install mono from git. I set my prefix in the autogen step to be in the same directory as my current version of mono.

git clone https://github.com/mono/mono.git
cd mono
./autogen.sh --prefix=/Library/Frameworks/Mono.framework/Versions/3.6.1
make
sudo make install

Now when I first did this I had all sorts of weird compilation problems. I messed around with it for a while but without much success. Google was no help, so in a last-ditch effort I pulled the latest and everything started to work again. So I guess the moral is that the cutting edge sometimes fails to build. On the other hand it would be good if the mono team had a CI server which would spot this stuff before it hit dumb end users like me.

Update: Mono 3.6 has been released which should fix most of the issues people were having with vNext on OSX. You don’t need to build from source anymore. The updated packages should be in brew in the next little while.

  1. Install homebrew if you don’t already have it. You can find instructions at http://brew.sh/

  2. Use brew to install k. Why k? I don’t know, but it is a prefix which is used all over the vNext stuff.

brew tap aspnet/k
brew install kvm
source kvm.sh

This will set up kvm which is the version manager.

  3. Use kvm to install a runtime. The wiki for vNext suggests this runtime but it is really old.

kvm install 0.1-alpha-build-0446

I’ve been using the default latest and it seems to be more or less okay.
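kvm can also show you which runtimes are installed and switch between them. As a rough sketch (these subcommands existed in early versions of kvm, though the exact behaviour changed frequently):

kvm list
kvm use 0.1-alpha-build-0446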

  4. Pull down the Home repo from https://github.com/aspnet/Home/. This repo is the meeting point for the various aspnet projects and the wiki there is quite helpful.

  5. Jump into the ConsoleApp directory and run

k run

This will compile the code and execute it. It will be compiled with Roslyn which is cool enough to make me happy. There is very little printed by default but you can change that by setting an environment variable

export KRE_TRACE=1

I did run into an issue running the sample web application and sample MVC applications from the home repo.

System.TypeInitializationException: An exception was thrown by the type initializer for HttpApi ---> System.DllNotFoundException: httpapi.dll

I chatted with some folks in the Jabbr chatroom for aspnet vNext and it turns out that the current self hosted ASP.net doesn’t work fully yet on OSX. However there is an alternative in Kestrel, a libuv based HTTP server. I pulled that repo and tried the sample project, which worked great.

If you’re around Calgary on Thursday then why not come to my talk and watch me stumble around trying to explain all of this stuff? http://www.meetup.com/Calgary-net-User-Group/

2014-07-17

d3 Patterns

I’m a big fan of the d3 data visualization library, to the point where I wrote a book about it. Today I came across an interesting problem with a visualization I’d created. I had a bunch of rows which I’d colored using a 10 color scale.

rows

The users wanted to be able to click on a row and have it highlight. Typically I would have done this by changing the color of the row but I had kind of already used up my color space just building the rows. I needed some other way to highlight a row. I tried setting the border on the row but that looked ugly and became a tangled mess when adjacent rows were highlighted.

rows2

What I really wanted was to put some sort of a pattern on the row. As it turns out this is quite easy to do. SVG already provides a mechanism for applying patterns as fills. The one issue is that you can’t apply a pattern as an overlay to an existing fill; you have to replace the fill completely.

First I created the pattern in d3
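A minimal sketch of the idea (d3 v3 era APIs; svg, color and d.id are assumed names, not the post’s actual code):

```typescript
// Build a striped pattern in <defs>, one per row.
var pattern = svg.append("defs")
  .append("pattern")
    .attr("id", "pattern-" + d.id)            // a unique id per row (assumed)
    .attr("width", 8)
    .attr("height", 8)
    .attr("patternUnits", "userSpaceOnUse")
    .attr("patternTransform", "rotate(45)");  // rotate the whole pattern

pattern.append("rect")                        // the stripe itself
    .attr("width", 4)
    .attr("height", 8)
    .attr("fill", color(d.id));               // keep the row's original color
```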

Here I create a new pattern element and put a rectangle in it. I rotate the whole pattern by 45 degrees to get a more interesting pattern. You may notice that the code references the variable d. I’m actually creating and applying this pattern inside of a click handler for the row. This allows me to create a new pattern for each row and color it correctly. The full code looks like
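Again as a hedged sketch rather than the post’s exact code (rows, svg, color and d.id are assumptions):

```typescript
rows.on("click", function (d) {
    // build the pattern exactly as above...
    var pattern = svg.append("defs")
      .append("pattern")
        .attr("id", "pattern-" + d.id)
        .attr("width", 8)
        .attr("height", 8)
        .attr("patternUnits", "userSpaceOnUse")
        .attr("patternTransform", "rotate(45)");

    pattern.append("rect")
        .attr("width", 4)
        .attr("height", 8)
        .attr("fill", color(d.id));

    // ...then swap the row's solid fill for the pattern, since SVG won't
    // overlay a pattern on top of an existing fill
    d3.select(this).attr("fill", "url(#pattern-" + d.id + ")");
});
```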

The finished product looks like

rows3

You can change the pattern to come up with more interesting effects

rows4

rows5

2014-07-09

Parsing Command Line Arguments in C#

If you have the need to parse command line arguments in C# then might I recommend the excellent Command Line Parser. It can be installed from nuget by simply running

Install-Package CommandLineParser

Once you have it installed you start by setting up an options class which contains properties for all the options you would like your application to understand. Mine looks like
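A hedged sketch of the shape of such a class (CommandLineParser 1.9.x style; the property and option names are reconstructions based on the description below):

```csharp
using CommandLine;
using CommandLine.Text;

class Options
{
    [Option('u', "exportusers", Required = false, DefaultValue = false,
        HelpText = "Export the list of users.")]
    public bool ExportUsers { get; set; }

    [HelpOption]
    public string GetUsage()
    {
        // builds the help screen from the attributes above
        return HelpText.AutoBuild(this,
            (HelpText current) => HelpText.DefaultParsingErrorsHandler(this, current));
    }
}
```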

For each parameter you would like parsed you can decorate a property with an Option attribute. Within this I defined the short option names (u for exporting users and h for help) followed by the long option name. I also set the help text, required status and default value for each option.

Within the Main method of my application I called out to the option parser like so
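A hedged sketch of that wiring (again 1.9.x style, where ParseArguments returns a bool and the HelpOption above means failures print the usage text automatically):

```csharp
static void Main(string[] args)
{
    var options = new Options();
    if (!CommandLine.Parser.Default.ParseArguments(args, options))
    {
        return; // usage text has already been printed
    }

    if (options.ExportUsers)
    {
        // ... perform the export ...
    }
}
```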

The help text is particularly useful as you can now easily print out help, which is always a bit of a pain to maintain otherwise.

commandline

Another nifty feature of the library is the ability to define subcommands as part of your options. This allows building a command line interface similar to git in which the behaviour of the tool changes drastically based on the first undashed option:

git remote add http://some.url/project.git

This is quite well documented on the github wiki. I haven’t tried it yet as my needs are not that complex.

There is a version 2.0 of the library under development on github which changes the interface pretty drastically. There doesn’t seem to be a whole lot of development on it so I’ve opted for the more stable version for my projects.

Overall I would recommend this library over Mono.Options, which I’ve used with a high degree of success in the past.

2014-05-23

Background Tasks in ASP.net Redux

A short while ago I wrote about how bad it is running background tasks in ASP.net. It basically comes down to “you don’t know when the application will recycle and your task will be killed”. My solution was to farm out background tasks to another machine through the use of a messaging system the likes of Azure Service Bus or MSMQ. I still believe that this is a great solution for running in the cloud. The issue with the cloud is that not only may the app pool recycle but the whole machine might disappear and pop up on another physical machine somewhere else in the data center. There may, however, be scenarios in which you might like to perform an asynchronous action on the server itself without having to worry about app pool recycles.

Until now there has been no way to do this. With .net framework 4.5.2 that all changes.

First a warning: at the time of writing 4.5.2 is very fresh. It is not yet supported on Azure or probably most other hosts you might use. To develop against it you’ll need the Microsoft .NET Framework 4.5.2 Developer Pack which can be downloaded here. 4.5.2 will eventually be installed on Azure and should also be included in updates to Windows and Visual Studio. I’ve never been able to get .net framework version adoption numbers out of Microsoft so I have no idea how long it will be before you can reasonably expect 4.5.2 to show up on the majority of machines. It really doesn’t matter for this feature as you only need updates to your servers.

The first thing you’ll need is to acquire the new developer pack and install it. Next you’ll need to update your current application to target the .net 4.5.2 framework.

framework

Getting to be quite a few platforms in there now, huh? Let’s not look at the portable class library options.

A really good example of a background task is sending e-mail. This is an operation which can take quite a long time (relatively speaking) and is not, typically, something which needs status information fed back to the user. This, as it turns out, is a bit more complicated than we would like. There is a huge post over on StackOverflow about how to correctly send an asynchronous e-mail. Much of the confusion comes in around sending mail asynchronously. If you look at the SmtpClient class there are now two different flavours of sending async: SendAsync and SendMailAsync. The first is the old method for sending asynchronously, complete with a cancellation token nobody has ever used. The second is the newer method which uses the async/await format. The old method caused a lot of issues when there was a problem sending mail. The errors would frequently be swallowed or bubble up so high as to blow up the worker role. This was because it executed outside of the normal page context, so people forgot that their normal error catching code wouldn’t intercept issues. Whenever .net 5 is released I hope Microsoft makes removing the old method a breaking change. To do so they really should have marked the method as deprecated in this release - perhaps for .net 6 then.

In this example we’ll actually use the newer SendMailAsync method and we’ll do it in a background thread. Using async allows fewer threads to be used, so in situations where there is some load on the system we should get better performance.
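A minimal sketch of what that can look like, using the new QueueBackgroundWorkItem API from 4.5.2 (the surrounding class and the way the message is built are illustrative):

```csharp
using System.Net.Mail;
using System.Web.Hosting;

public class MailQueue
{
    public void Send(MailMessage message)
    {
        // the host tracks this work item and briefly delays shutdown for it
        HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
        {
            using (var client = new SmtpClient())
            {
                await client.SendMailAsync(message);
            }
        });
    }
}
```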

You will almost certainly wish to add some additional error checking in there. You should also hook into UnobservedTaskException, as the work item returns a task which nothing awaits.
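A hedged sketch of that hook, wired up once at application start:

```csharp
using System.Threading.Tasks;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        TaskScheduler.UnobservedTaskException += (sender, e) =>
        {
            // log e.Exception somewhere useful, then mark it observed so the
            // escalation policy doesn't tear anything down
            e.SetObserved();
        };
    }
}
```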

A Final Warning

A background task started in this fashion still only delays the recycling of an app pool by 30 seconds. Thus if your task takes longer than 30 seconds to execute it will still be killed. For long running tasks you are still far better off saving them to a persistent queue or some other storage.

References:

http://www.davidwhitney.co.uk/Blog/2014/05/09/exploring-the-queuebackgroundworkitem-in-asp-net-framework-4-5-2/

http://blogs.msdn.com/b/dotnet/archive/2014/05/05/announcing-the-net-framework-4-5-2-release.aspx

2014-05-09

Let's build a map!

If you spend any time working in oil and gas in this province then you’re going to run into a situation where you need to put some stuff on a map. If you’re like me that involves complaining about it on twitter

I don’t know how I get myself into working with GIS data. I seriously have no idea.

- Simon Timms (@stimms) April 28, 2014

The problem is that GIS stuff is way harder than it looks on the surface. Maybe not timezone hard but still really hard. Most of it comes from the fact that we live on some sort of roughly spherical thing. If we lived in flatland, mapping would be trivial. As it stands we need to use crazy projections to map a 3D world onto a 2D piece of paper.

https://www.youtube.com/watch?v=n8zBC2dvERM

There are literally hundreds of projections out there which stress different things. Add to that a variety of coordinate systems which can be layered on top. There is the latitude/longitude system with which we’re all familiar but there are also a bunch of others. In Western Canada the important one is the Dominion Land Survey (well, most of Western Canada, we’ll get to that). The Dominion Land Survey was actually a series of surveys starting as far back as the 1870s. Bands of bearded men (there may have been bearded women too, everybody back then had beards) traveled around Canada plunking down lines to divide the land into 1 square mile sections called, well, sections. Why miles? Because a couple of guys called J. S. Dennis and William McDougall figured that a lot of people would be coming up from the states and would better understand miles. Thanks for screwing us over, again, with your stupid outdated measurement system, United States.

Anyway, you can read a ton more about the system over at wikipedia. The important thing to know is that Western Canada is divided into 6 mile by 6 mile blocks known as townships, that these are divided into 36 sections, and that each section is divided into 4 quarter sections. Sections can also be divided into 16 legal subdivisions commonly known as LSDs. The LSDs are numbered in the stupidest way possible: starting in the bottom right corner and counting up by going left, then up, then pretending that we’re a snake and flipping back and forward:

13 | 14 | 15 | 16
12 | 11 | 10 |  9
 5 |  6 |  7 |  8
 4 |  3 |  2 |  1
You might as well have numbered these things completely randomly as far as I’m concerned:

17 |  2 |    7 | cranberry
12 |  6 | also |        69
 5 |  6 |    7 |      coke
 4 | 33 | null |         1
I’m told that this all makes sense if you have some background in cartography.

How I picture the average cartographer

The point of all of this is that LSDs are super important in the oil and gas industry because that is how you lease land. The result is that people want to see LSDs on their maps and, as seems to frequently happen, I got the task of building a map. This post is about how to get LSDs onto a map and show it to your clients who aren’t asses. Mostly.

The company for which I’m working has most of its interests in Saskatchewan, Alberta and BC. I started with Saskatchewan as I figured that I might as well get into thinking like a cartographer and work right to left straight off the bat (I hear that cartographers from Arabic countries work from left to right to maximize the confusion). The first step was to find some LSD data. Saskatchewan has an open data portal at http://opendatask.ca/data/ which contains a link for LSD data. What you want in particular is the SaskGrid2012. This file contains a lot of stuff but the four things you want are the various high level map structures: township, section, quarter section and legal subdivision. We probably don’t need quarter section as once we get to that level most people are interested in LSDs.

grids

Inside one of these zip files are a number of files which, as it turns out, are Esri shape files. Esri makes GIS software. It is expensive. However there is a free alternative which has all the functionality we need along with 9000 pieces of functionality we don’t: QGIS. If you download and install this software it will let you take a look at the shape files. You can add one by clicking on “Add Vector Layer” then pointing it at the .shp file.

add layer

If you load the township file it will get you something which looks very much like Saskatchewan. What’s more, if you click on the little identify feature tool and then on the map, it will tell you the name of that township. Awesome!

To give you an idea of how many of these features there are, here is what the townships look like:

sask

For each township there are 36 sections (6×6) and for each section there are 16 (4×4) LSDs. So for Saskatchewan there are something like 7,000 townships, 250,000 sections and a mind blowing 4 million LSDs. Quite a bit of data.

So now we have a map. But it only works inside of QGIS and I’m sure not going to go around supporting that. It would be really nice if this layer was available on a Google maps like thing.

Leaflet

Leaflet.js is a nifty library for manipulating maps. It can use any number of map backends but I used OpenStreetMap because it is awesome. Like really awesome. Start a new ASP.net project and then go and grab the latest leaflet from their site. Leaflet is in nuget but it is an older version which hasn’t been updated for 6 months or so. Add the files to the site bundles in BundleConfig.cs. I also included my site files in this bundle for convenience.
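A hedged sketch of that registration (the file names are assumptions):

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/leaflet.js",
            "~/Scripts/Home.js"));     // the compiled TypeScript from below

        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/leaflet.css",
            "~/Content/site.css"));
    }
}
```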

I created a typescript file for the home page based on the example on the leaflet home page.
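It looked roughly like this (a sketch based on the Leaflet quick start; the centre point is a rough guess at the Alberta/Saskatchewan border, and the tile URL is OSM’s public server, about which see the note below):

```typescript
var map = L.map("map").setView([52.3, -110.0], 7);

L.tileLayer("http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
    attribution: "&copy; OpenStreetMap contributors",
    maxZoom: 18
}).addTo(map);
```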

Once I’d added a div in my Index.cshtml under the home controller and constructed the map, I ended up with a map centered on, roughly, the border between Alberta and Saskatchewan.

map1

Note: We’re using the open street map tiles directly from their tile server in this example. This is frowned upon as it costs the project money. There are a number of proxy services you can use instead, or you can write your own. Be a good citizen and cache the tiles so that the project can spend money on something else.
Now to get our grid lines onto the map. To start I added a simple polygon
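Something along these lines (a hedged reconstruction; the coordinates are illustrative):

```typescript
// a simple rectangle over a township-sized patch of prairie
L.polygon([
    [52.0, -110.0],
    [52.1, -110.0],
    [52.1, -109.85],
    [52.0, -109.85]
], { color: "red" }).addTo(map);
```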

This builds a map which includes a nifty red box.

![map2](http://stimms.files.wordpress.com/2014/05/map2.png)

This basically proves that we can draw out LSDs as needed. So the next thing is combining the shape file data we had above with the map we have from OpenStreetMap.

Exporting KML Files

I went down a few blind alleys with this one before coming up with what I think is the best option. I decided to exploit the power of the geometry types in SQL Server to find any LSDs inside a bounding box. Displaying all the LSDs, or even all the townships, at a low zoom level is messy: it covers up the map and is too detailed for that level. As such, only showing a few at a time is a good idea. To figure out which ones to show requires a filter which returns only the LSDs within a bounding box, the bounding box being the one described by the map at a high zoom level.

Getting the data into SQL Server is a two step process: the first is to create KML files and the second is to load them into SQL Server. For each of township, section and LSD I loaded the layer into QGIS, then right clicked on it and hit Save as. In that dialog I selected KML as the format and a file into which to save the layer.

![save](http://stimms.files.wordpress.com/2014/05/save.png)

This generates a pretty sizable export file, well over 4GB for the LSDs. Don’t worry though, because these crummy things are XML so most of that will disappear when loaded into SQL Server. If you want you can attempt to simplify the geometry in QGIS, which will reduce the export size at the cost of fidelity. The file size does, however, pose a bit of a problem for our import tools as they use a DOM parser for reading KML instead of a SAX parser. If I were going to make a living at manipulating maps this is a place where I would expend some effort.

Importing KML Files into SQL Server

I hunted around and found a tool called [KML2SQL](https://github.com/Pharylon/KML2SQL) to import KML files into SQL Server. It had a few issues so I [forked it](https://github.com/Pacesetter/KML2SQL) and made some updates. If you’re going to make use of the tool then you’re better off using my fork at the current time. However if my pull requests are merged then the master repo may be more healthy.

As I mentioned, the KML files are too large to be consumed by the import tool. To solve this I wrote a quick KML splitter application which splits the KML files into 500 feature blocks. I’ve included the source on [github](https://github.com/stimms/LSDMap/tree/master/KMLSplitter). It isn’t pretty but it gets the job done.

![kml2sql](http://stimms.files.wordpress.com/2014/05/kml2sql.png)

The KML2SQL tool dynamically builds tables with the correct columns, so that’s great. All I did was plop in my SQL Server credentials, point the tool at the directory containing the output files from the splitter and leave it to churn for a few minutes. A few hours for the LSDs. I did see some import errors related to open polygons which I generally ignored. It is something I’ll have to come back to in a while, but they were perhaps 1% of the imports. SQL Server wants polygons to have the same start coordinate as end coordinate, which is only reasonable. The data from the government doesn’t quite have that, but for free data you can’t complain too much.
Querying the Data

Now that we have the data in the database the next step is to get it out onto the map. To start we need to have the map ask for some data when it is resized or panned. I hooked into the zoomend and dragend events in Leaflet. I found that it made sense to display the townships starting at zoom level 10 and at higher zoom levels show more detailed data such as sections or LSDs. I threw together a Web API controller to do the work of querying the database. Doing geospatial queries in SQL Server is a bit more complicated than I would like, but complex types in relational databases always are. I don’t know, relational databases, man. I’m a big fan of the lightweight ORM Dapper so I installed that and stole a bit of code for doing spatial queries from a posting by Sam Saffron on StackOverflow. I had to modify it a bit and ended up with:
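A hedged, simplified stand-in for that code (the original wrapped SqlGeometry for Dapper; here the geometry travels as WKT text instead, and the Geom column name and SRID are assumptions):

```csharp
using System.Collections.Generic;
using System.Data;
using Dapper;

public static class SpatialQueries
{
    public static IEnumerable<T> QueryIntersecting<T>(
        this IDbConnection connection, string table, string searchAreaWkt)
    {
        var sql = string.Format(
            @"SELECT * FROM {0}
              WHERE Geom.STIntersects(geometry::STGeomFromText(@wkt, 4326)) = 1",
            table);
        return connection.Query<T>(sql, new { wkt = searchAreaWkt });
    }
}
```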

This can be used to select the intersecting polygons by doing
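For example (Township being an assumed POCO matching the imported table):

```csharp
var townships = connection.QueryIntersecting<Township>("Townships", searchAreaWkt);
```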

STIntersects checks whether the given polygon intersects the ones in the database. If there is an intersection then we have a match and we add that row to the result set. Actually building the search area can be a pain as you have to construct the geometry by hand, like so
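A sketch of building that search area as WKT from the bounds the client sends up (WKT wants X, longitude, before Y, latitude, and the ring has to close, so the first point is repeated at the end):

```csharp
var searchAreaWkt = string.Format(
    "POLYGON(({0} {1}, {2} {1}, {2} {3}, {0} {3}, {0} {1}))",
    west, south, east, north);
```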

This provides a set of data to return to the client.

Plotting the Data

We’re almost there, folks, thanks for staying with it. The last step is to get the returned data plotted on the map. This is simply done by adding the polygons to a multipolygon layer.
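A hedged sketch of that last step (the endpoint, the response shape and the use of jQuery are all assumptions; L.multiPolygon was the API in the Leaflet 0.7 era):

```typescript
map.on("zoomend dragend", () => {
    var bounds = map.getBounds().toBBoxString(); // "west,south,east,north"
    $.getJSON("/api/grid?bounds=" + bounds, data => {
        // data is assumed to be an array of polygons, each an array of [lat, lng] pairs
        L.multiPolygon(data, { color: "red", weight: 1 }).addTo(map);
    });
});
```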

That’s it folks! You now have a map which looks like

map3

The labels are a bit jaggedy right now as I’m just using the envelope center to calculate the position of the label. It doesn’t take a whole lot to screw that up. Putting them in the top left corner helps with that a lot.

map4

P.S. I promised I would get back to BC’s system. They don’t use the DLS; they use the National Topographic System.

2014-04-22

The permission model in android is totally broken

When installing a new application on a cell phone I typically agree to whatever the stupid app wants. My approach is “just do it and stop asking me questions”. There have been numerous reports about how apps are stealing data. I had to rebuild my phone this week after getting a replacement from Google due to some rather nasty screen issues. I thought I would be a bit more circumspect in installing applications this time. I took a close look at the permissions applications were requesting as I installed them.

It is absolutely amazing the permissions applications are requesting. Of the 10 or 11 clock applications I looked at, every last one of them wanted some permission which I deemed unnecessary. Reading caller ids, access to the network, access to contacts, the ability to send e-mails without me knowing… Outrageous! I’m sure an argument could be made for many of these but I cannot imagine how the argument for being able to read my text messages or read my contacts would go. “If you’re not paying for something then you’re the product” has never been more true.

Asking to read my text messages? That’s a paddlin’

What’s the solution?

I think it is actually a pretty easy solution: grant permissions in the same way as HTML5 or OpenID. HTML5 will request permission when a page performs some activity such as capturing images from your web camera. If the script isn’t granted permission to access the camera then it should degrade or cancel based on this. Equally when you’re logging into an OpenID site and it requests additional fields from the login provider then you can click cancel and the application should accept this and compensate.

Sorry, you need what permissions?

As it stands I either accept that my alarm clock needs to read my text messages or I don’t install it. Usually I just don’t install it. If I were able to pick and choose the permissions the application could have then it could degrade and still give me some functionality. Developers would have a much harder time sneaking malware onto phones if this could be done. As an added bonus I would like to see developers have to enter a reason why each permission was needed and have that show up during the install.

The correct set of permissions for an alarm clock

I can’t believe that Google is just letting this stuff go. Say what you will for Apple but they’re pretty willing to crack down on stuff like this.

2014-04-21

Glimpse for raw ADO

If you haven’t used Glimpse and you’re developing a web application in .net then you’re missing out on a great deal of fun. Glimpse is sort of like the F12 developer tools but running on your server and offering insight into how the server side of the page is doing. Installing Glimpse is also very easy and can be done entirely from nuget.

install-package Glimpse.AspNet

Because there are all sorts of different configurations for ASP.net applications Glimpse is divided up into a main assembly and then a bunch of helper assemblies which can be put together like Lego. For instance if you’re using a brand new ASP.net MVC site with Entity Framework then you can install Glimpse and then the modules for that specific configuration

install-package Glimpse.EF6
install-package Glimpse.MVC5

This is a very well supported configuration. If you’re less fortunate and you need to hook Glimpse into an application which makes use of a lot of legacy data layer code then EF profiling isn’t going to be available for you. You can, however, make use of the ADO Glimpse package. To do this you’ll first need to install the package

install-package Glimpse.ADO

If your application makes use of DbProviderFactory then you’re done. If not then you’ll need to transition the site to make use of this method of building connections. DbProviderFactory is a method of abstracting out the connection provider so that you can easily swap different database strategies into place. If you were originally setting up connections like so
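Something like this (a hedged sketch; connectionString is assumed to come from configuration):

```csharp
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... create commands and run queries ...
}
```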

Then all you need to do is convert it to look like
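A hedged sketch of the converted version, with the connection resolved through DbProviderFactories (System.Data.Common) so that Glimpse’s wrapping provider can intercept it:

```csharp
var factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
using (var connection = factory.CreateConnection())
{
    connection.ConnectionString = connectionString;
    connection.Open();
    // ... create commands and run queries exactly as before ...
}
```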

The issue is that the ADO tooling for .net provides very few extension points. Glimpse.ADO works by hooking itself in as a DbProvider which wraps the SqlProvider and intercepts all the calls. You don’t need to modify your entire site at once, but if you don’t you’ll get a funny mixture of pages which work and pages which don’t. A few well crafted regular expressions got me 90% of the way there on a medium sized application and I did the full transition in about an hour, so it isn’t a huge time investment.

With the site using DbProviderFactory and Glimpse up and running, I was able to get some really good hints about why pages were somewhat slow.

glimpse

Having this sort of information exposed to developers makes it much easier to debug and solve performance issues before the site hits users.

2014-04-18

Limitations of WebForms

I’m spending a lot of time working with WebForms at the moment. I haven’t written WebForms since… 2003, maybe 2004. When WebForms was created it was done as a way to transition developers from the drag and drop world of Windows development to the exciting world of the Internet. Of course the Internet is not a Windows form. The result is that WebForms is a leaky abstraction, and the abstraction has been getting leakier and leakier as web technology has progressed.

One of the key features of WebForms is that it keeps track of transient data in ViewState. By using ViewState, WebForms is able to provide a web experience which is more similar to a desktop application: it is expected that pressing a button on a Windows form doesn’t wipe out the rest of the form, and that is exactly what would happen without ViewState storing the form state. Depending on how you configure your application the ViewState is either kept in a hidden field which is sent to the client or in some sort of server side storage mechanism. You can hook ViewState persistence up to a database like SQL Server or to a distributed cache like memcache or Azure cache.

However the vast majority of sites keep ViewState in that hidden field. As your ViewState grows, so does each page load. There is very little room to optimize ViewState because, instead of being sensibly stored in a key value fashion, it is persisted as a single blob. The entire thing is persisted and reloaded each time. The result is that most interactions with the server need to include this ViewState, which makes lightweight AJAX calls difficult. When AJAX started to become popular, update panels were introduced. These were chunks of the page which could be refreshed independently.

Again these were just plugging up a leaky abstraction.

As web applications became more JavaScript based it became apparent that the HTML produced by WebForms was brittle. Controls were named with a near indecipherable id which changed based on the rest of the page. Later versions of ASP.net brought more predictable control names but, again, this was just patching the abstraction. If you’re interested in building a modern web application it should not be done using WebForms. There are just too many places where the abstraction leaks and makes your job much more difficult.

Modern web applications make much greater use of client side frameworks the likes of Angular, Ember and Backbone. With this class of application the server side framework starts to matter less and less. Eventually it is reduced to a tool for sending views to the client and providing data endpoints. I won’t miss WebForms on new applications, but we’re not all lucky enough to work on new applications. For legacy applications written using WebForms there are upgrade paths available to you.

I’m going to start blogging a bit over the next few weeks about how to start taking steps towards more modern WebForms applications without jeopardizing existing functionality. Stay tuned!

2014-04-14

A Quick tip on adding dependency injection

I ran into the need today to move quite a number of classes into dependency injection. This can be a bit of a pain as you have to go through a ton of places to find where the class is used and get it out of the DI container instead of simply newing it up. The constructor remains valid, you see, so you can still create an instance using just var b = new blah();

One trick I used which really helped speed up finding the places where the class was being manually created was to, temporarily, make the constructor private. This causes all the places where the class is being instantiated to be highlighted as compiler errors. Once you’re done fixing them you can return the constructor to its previous state and go about your business. This is really just an application of the Lean on the Compiler pattern I first learned about in Michael Feathers’ excellent book Working Effectively with Legacy Code. Well worth a read if you have any untested code to maintain.
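An illustration of the trick (Notifier is a made-up class):

```csharp
public class Notifier
{
    // temporarily private: every `new Notifier()` in the codebase is now a
    // compiler error, giving you a to-do list of call sites to move over to
    // the container. Revert once you're done.
    private Notifier() { }
}
```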

2014-04-04

Roslyn Changes Everything

Yesterday at Microsoft’s build conference there was a huge announcement: Microsoft were open sourcing their new C#/VB.net compiler. On the surface this seems like a pretty minor thing. I mean who looks at how compilers work? “This is probably going to be interesting to academics who study compilers and nobody else.”

Well I disagree. I think it is going to be a huge turning point in how programmers work with code.

There are other open source compilers: GCC and LLVM both come to mind as great examples. The differences between these and Roslyn are huge. First, Roslyn is a much more modern compiler than almost anything else out there. I still think of clang, which is based on LLVM, as the new kid on the block; however LLVM was started in 2000: 14 years ago. Roslyn was written from the ground up over the last 4 years. I haven’t looked but I would bet that it makes much better use of things like parallel processing than other compilers. There is a pretty vague post on the C# blog about how they’re treating performance of the compiler as a feature. I don’t know what progress they’ve made on that front but we’ll certainly be seeing some benchmarks come out in the next few weeks as people dig into Roslyn.

Next, Roslyn is written in a much more accessible language: C#. It is going to be far easier for the average developer to jump into modifying the compiler than it would be to add some functionality to LLVM. Roslyn was designed to be an extensible compiler. It has a well defined API and some phenomenal extension points into which people can plug. I think that we’re going to see a huge number of pluggable modules which mutate the language.

The build pipeline for Roslyn, taken from the overview on codeplex: http://roslyn.codeplex.com/wikipage?title=Overview&referringTitle=Home

Finally I’m excited that Roslyn will enable smaller, more incremental changes to the languages it compiles. Already we’re seeing some hints as to this. In the Tour of Roslyn post there was an example of inline declarations:

```csharp
public static void Main(string[] args)
{
    if (int.TryParse(args[0], out var n1))
    {
        Console.WriteLine(n1);
    }
}
```

There is support for these in Roslyn but not in the classic C# compiler. Little things like this are going to add up and make the language much better. If shipping Roslyn can be decoupled from Visual Studio, a given for open source projects, then we can see awesome new features enabled rapidly instead of waiting for full releases of Visual Studio.

What can we do with the compiler?

Here are some quick ideas I had about what we could plug into Roslyn. Some of them are mad dreams but some of them are almost certain to get made.

Aspect Weaver

There is already an AOP weaver available for the .net platform in Aspect Sharp. It has a bit of a reputation for being slow. It works by rewriting the IL instructions, which is kind of hacky and presents some problems. With Roslyn there should be no need to hook into the build that late. I think you could manipulate the syntax tree to inject calls to the aspects whenever needed.

syntax tree

AOP should be vastly easier and may even be more powerful with this syntax tree rewriting.

Custom Compiler Errors

Is there some practice you’re trying to avoid on your team? Perhaps long methods are a huge deal for you and you want to fail the build when some mouth-breather writes a method which is over 50 lines long. No problem! Just plug into the syntax tree API and fail the compile when long methods are detected, as sketched below. Perhaps you want to check for and fail on concatenating strings and then running them against a database (SQL injection). Again this could be plugged in without a great deal of trouble.
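As a sketch of the long-method check, here is roughly what it looks like with the analyzer API Roslyn eventually shipped (the preview API at the time this was written was a little different, so treat this as illustrative):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class LongMethodAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        "LM0001", "Method too long", "Method '{0}' is over 50 lines",
        "Style", DiagnosticSeverity.Error, isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
    {
        get { return ImmutableArray.Create(Rule); }
    }

    public override void Initialize(AnalysisContext context)
    {
        context.RegisterSyntaxNodeAction(CheckMethodLength, SyntaxKind.MethodDeclaration);
    }

    private static void CheckMethodLength(SyntaxNodeAnalysisContext context)
    {
        var method = (MethodDeclarationSyntax)context.Node;
        var span = method.GetLocation().GetLineSpan();
        var lineCount = span.EndLinePosition.Line - span.StartLinePosition.Line + 1;
        if (lineCount > 50)
        {
            // DiagnosticSeverity.Error above is what actually fails the compile
            context.ReportDiagnostic(Diagnostic.Create(
                Rule, method.Identifier.GetLocation(), method.Identifier.Text));
        }
    }
}
```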

Domain Specific Languages

There are plenty of nifty places where it would be fun to be able to define a custom syntax for certain projects. Perhaps you’re writing a message based system and you want to make it easier to write message handlers. With some Roslyn work a new syntax could be added so that instead of writing
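An illustrative guess at the kind of boilerplate being targeted (IHandle, CreateUser and bus are hypothetical names, not from the post):

```csharp
public class CreateUserHandler : IHandle<CreateUser>
{
    public void Handle(CreateUser message)
    {
        // ... react to the message ...
    }
}

// somewhere at startup
bus.Subscribe<CreateUser>(new CreateUserHandler());
```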

You could just write
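Purely as an illustration of the idea, an invented syntax (this is not real C#; it is the sort of thing a compiler plugin could expand into the class and subscription above):

```csharp
// invented syntax for illustration only
handler CreateUser(message)
{
    // ... react to the message ...
}
```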

and all the wireup would be dealt with by the compiler.

Random other Syntax Improvements

You know what syntax I really like? Post if statements. I think they’re nifty and read more like human language.
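An illustrative guess at what a post-if statement might look like (invented syntax, not real C#):

```csharp
// invented syntax for illustration only
Console.WriteLine("Disk almost full") if (freeSpace < threshold);
```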

This is the sort of thing which can just be added by rewriting the syntax tree. Oh, or how about cleaning up the accessors for collections?
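Again an illustrative guess at an invented accessor syntax (not real C#):

```csharp
// invented syntax for illustration only
var admins = users where u => u.IsAdmin;
```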

That’s probably a terrible syntax now I think about it… whatever, it is still possible.

It is going to be awesome!

I envision a future where any project of appreciable size will include a collection of syntax and compiler modules. These will be compiled first, plugged into Roslyn and then used to build the rest of the project. Coding standards will be easier to enforce and compilations will be more powerful. There is a risk that language proliferation will get out of hand, but I’m betting it will settle down after 2 or 3 years and we’ll get a handful of new dialects out of this. There will need to be new tooling developed to make changing compilers in VS easier. Package managers like nuget will need to be updated to support compiler modules but that seems trivial.

It is an exciting time to be a .net developer. I’m so glad that when I had the option I decided to go down the .net path and not the Java path. Those suckers just got lambdas and we’re working with the most modern, flexible, extensible compiler in the world? No contest.