How I fixed OneDrive like Mark Russinovich

Fellow Monster David Paquette sent me a link to a shared OneDrive folder today with some stuff in it. Clicking on the link I was able to add it to my OneDrive. The dialog told me files would appear on my machine soon. So I waited.

After an outrageously long time, 37 seconds, the files weren't there and I went hunting to find out why. As it turns out OneDrive wasn't even running. That's supposed to be a near impossibility in Windows 10, so I hopped on the Interwebernets to find out why. Multiple sources suggested solutions like clearing my credentials and running OneDrive.exe /reset. Of course none of them worked.

Something was busted.

Running the OneDrive executable didn't bring up the UI, and it didn't do any of the things the Internet told me it should. My mind went back to when I was setting up my account on this computer and how I fat-fingered stimm instead of stimms as my user name. Could it be that OneDrive was trying to access some files that didn't exist?

Channeling my inner Mark Russinovich, I opened up Process Monitor, a fantastic tool which monitors file system and registry access. You can grab your own copy for free from https://technet.microsoft.com/en-us/sysinternals/bb896645.aspx.

In the UI I added filters for any process with the word "drive" in it and then filtered out "google". I did this because I wasn't sure if the rename from SkyDrive to OneDrive had missed anything. Then I ran the command line to start up OneDrive again.

Process Monitor found about 300 results before the process exited. Sure enough, as I went through the file accesses I found the problem:

http://i.imgur.com/soAh4PR.png

OneDrive was trying to create files inside a directory which doesn't exist. Scrolling further up I was able to find some references to values in the registry under HKCU\SOFTWARE\Microsoft\OneDrive which, when I opened them up, contained the wrong paths. I corrected them:

http://i.imgur.com/arhWYgt.png

With that in place I was able to start up OneDrive successfully again and sync down the pictures of cats that David had sent me.
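If you'd rather script the fix than click around in regedit, a PowerShell sketch along these lines works. The value name UserFolder and the corrected path are assumptions for illustration; use whatever Process Monitor shows is pointing at the wrong location on your machine.

# Inspect the OneDrive registry values that hold paths
Get-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\OneDrive"

# Fix a value that still points at a folder which doesn't exist
# (value name and path are examples, not gospel)
Set-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\OneDrive" -Name "UserFolder" -Value "C:\Users\stimm\OneDrive"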

The story here is that it is possible, and even easy, to figure out why a compiled application on your machine isn't working. By examining the file and registry accesses it is making, you might be able to suss out what's going wrong and fix it.

CI with F# SQL Type Providers

My experimentation with F# continues. My latest challenge has been figuring out how to get SQL type providers to work with continuous integration. The way that SQL type providers work (and I'm speaking broadly here because there are about 4 of them) is that they examine a live database and generate types from it. On your local machine this is a perfect setup because you have the database locally to do development against. However, on the build server, having a database available is much less likely.

In my particular case I'm using Visual Studio Online or TFS Online or whatever the squid it is called these days. Visual Studio Team Services, that's what it's called.

Screenshot of VSTS

I'm using a hosted build agent which may or may not have a database server on it - at least not one that I really want to rely on. I was tweeting about the issue and Dmitry Morozov (who wrote the type provider I'm using - the F# community on Twitter is amazing) suggested that I just make the database part of my version control. Of course I'm already doing that, but in this project I was using EF migrations. The issue with that is that I needed to have the database in place to build the project, and I needed to build the project to run the migrations... For those who are big into graph theory, you will have recognized that there is a cycle in the dependency graph, and that ain't good.

Graph cycles

EF migrations are kind of a pain, or at least that was my impression. I checked with Canada's Julie Lerman, David Paquette, to see if maybe I was just using them wrong.

Discussion with Dave Paquette

So I migrated to RoundhousE, which is a story for another post. With that in place I set up a super cheap database in Azure and hooked up the build process to update that database on every build. This is really nice because it catches database migration issues before the deployment step. I've been burned before on this project by migrations which locked the database, and now I can catch them against a low-impact database.

One of the first steps in my build process is to deploy the database.
Build process
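That step boils down to running RoundhousE against the cheap Azure database before anything gets compiled. A sketch of what that looks like is below; the switch names are from memory and may not be exact, so check rh.exe /? before copying this, and the .\db folder is wherever your versioned migration scripts live.

# Run the checked-in migration scripts against the build database so the
# type providers have a schema to reflect over (verify the exact switches)
.\tools\rh.exe /cs="Server=tcp:builddependencies.database.windows.net,1433;Database=BuildDependencies;User ID=build@builddependencies;Password=goodtryhackers;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;" /f=".\db" /silent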

In my F# code I have a settings module which holds all the settings, and it includes

module Settings =  
    [<Literal>]
    let buildTimeConnectionString = "Server=tcp:builddependencies.database.windows.net,1433;Database=BuildDependencies;User ID=build@builddependencies;Password=goodtryhackers;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"

And this string is used throughout my code when I create the SQL based types

type Completions = SqlProgrammabilityProvider<Settings.buildTimeConnectionString>  

and

let mergeCommand = new SqlCommandProvider<"""  
        merge systems as target
        ...""", Settings.buildTimeConnectionString>(ConnectionStringProvider.GetConnection)

In that second example you might notice that the build-time connection string is different from the run-time connection string, which is specified as a parameter.

How I wish it worked

For the most part, having a database built as part of your build process isn't a huge deal. You need it for integration tests anyway, but it is a barrier to adoption. It would be cool if you could check in a serialized version of the schema and, during CI builds, point the type provider at that serialized version. The serialized version could be generated on developer workstations and then checked in. I don't think it is an ideal solution, though, and now that I've done the footwork to get the build database in place I don't think I would use it.

Running your app on Windows Server Core Containers

Most of the day I work on an app which makes use of NServiceBus. If you've ever talked to me about messaging, then you know that I'm all over messaging like a ferret in a sock.
Sock Ferret

So I'm, understandably, a pretty big fan of NServiceBus - for the most part. The thing with architecting your solution to use SOA or microservices or whatever we're calling it these days is that you end up with a lot of small applications. Figuring out how to deploy these can be a bit of a pain as your system grows. One solution I like is to make use of the exciting, upcoming world of containers. I've deployed a few ASP.NET Core applications to containers, but NServiceBus doesn't run on .NET Core, so I need to use a Windows container here.

First up is to download the ISO for Windows Server Core 2016 from Microsoft. You can do that for free here. I provisioned a VirtualBox VM and installed Windows using the downloaded ISO. I chose to use Windows Server Core as opposed to the version of Windows which includes a full UI. The command line was good enough for Space Quest II and by gum it is good enough for me.

Starting up this VM gets you this screen:
Imgur

Okay, let's do it. Docker isn't installed by default, but there is a great article on how to install it onto an existing machine here. In short, I ran

powershell.exe

Which started up PowerShell for me (weird that PowerShell isn't the default shell). Then

wget -uri https://aka.ms/tp4/Install-ContainerHost -OutFile C:\Install-ContainerHost.ps1  
& C:\Install-ContainerHost.ps1

I didn't specify the -HyperV flag as in the linked article because I wanted the lighter-weight containers. There are two flavours of containers on Windows at the moment: Hyper-V containers, which are heavier weight and more isolated, and Windows Server containers, which are lighter. I was pretty confident I could get away with the lighter ones so I started with that. The installer took a long, long time. It had to download a bunch of stuff and for some reason it decided to use the background downloader, which is super slow.

Slowwwww

By default, the Docker daemon only listens on 127.0.0.1, which means that you can only connect to it from inside the virtual machine. That's not all that useful, as all my stuff is outside of the virtual machine. I needed to do a couple of things to get that working.

The first was to tell Docker to listen on all interfaces. Ideally you shouldn't allow Docker to bind to external interfaces without TLS certificates installed. That was kind of a lot of work, so I ignored the warning it writes to the logs:

/!\\ DONT' BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\

Yeah, that's fine. To do this, open up the Docker start script and tell the daemon to listen on the 0.0.0.0 interface.

notepad c:\programdata\docker\runDockerDaemon.cmd  

Now edit the line

docker daemon -D -b "Virtual Switch"  

to read

docker daemon -D -b "Virtual Switch" -H 0.0.0.0:2376  

Now we need to relax the firewall rules or, in my case, turn off the firewall completely.

Set-NetFirewallProfile -name * -Enabled "false"  
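If turning the firewall off entirely is more than you're comfortable with, a narrower rule that only opens the Docker port should also do the trick (the port matches the -H setting above; the display name is just a label I picked):

# Allow inbound connections to the Docker daemon only
New-NetFirewallRule -DisplayName "Docker daemon" -Direction Inbound -Protocol TCP -LocalPort 2376 -Action Allow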

Now restart docker

net stop docker  
net start docker  

We should now be able to access Docker from the host operating system. And indeed I can, by specifying the host to connect to when using the Docker tools. In my case that's port 2376 on 192.168.0.13:

docker -H tcp://192.168.0.13:2376 ps -a  
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS   PORTS               NAMES  

Finally, we can actually start using docker.

I hammered together a quick Dockerfile which sucked in the output of my NSB handler's build directory.

FROM windowsservercore

ADD bin/Debug /funnel

WORKDIR /funnel

ENTRYPOINT NServiceBus.Host.exe  

This Dockerfile is based on the windowsservercore image which was loaded onto the virtual machine by the setup script. You can check which images are available using the docker images command. To get the Dockerfile running, I first build the image and then ask for it to be run:

docker -H  tcp://192.168.0.13:2376 build -t funnel1 -f .\Dockerfile .  
docker -H  tcp://192.168.0.13:2376 run -t -d funnel1  

The final command spits out a big bunch of letters and numbers which is the ID of the container. I can use that to get access to the command line output from that container:

docker -H  tcp://192.168.0.13:2376 logs fb0d6f23050c9039e65a106bea62a9049d9f79ce6070234472c112fed516634e  

Which gets me
Output

With that, I'm most of the way there. I still need to figure out some networking stuff so NSB can find my database, put our license file in there, check that NSB is actually able to talk to MSMQ, and maybe find a better way to get at the logs... okay, there is actually a lot still to do, but this is the first step.

I squash my pull requests and you should too

A couple of weeks ago I made a change to my life. It was one of those big, earth-shattering changes which ripple through everything: I started to squash the commits in my pull requests.

It was a drastic change, but I think an overdue one. The reasons for my change are pretty simple: it makes looking at commit histories and maintaining long-lived branches easier. Before, my pull requests would contain a lot of clutter: I'd check in small bits of work as I got them working, and whoever was reviewing the pull request would have to look at a bunch of commits, some of which would later be reversed, to get an idea of what was going on. By squashing the commits down to a single commit I can focus on the key parts of the pull request without people having to see the mess I generated in between.

If you have long-lived branches (I know, I know) then having a smaller number of commits during rebasing is a real help. There are fewer things that need merging, so you don't end up fixing the same change over and over again.

Finally, the smaller number of commits in mainline gives a clearer view of what has changed in the destination branch. Your individual commits might just have messages like "fixing logging", but when squashed into a PR the commit becomes "Adding new functionality to lay out roads automatically". Looking back, that "fixing logging" commit isn't at all helpful once you're no longer in the thick of the feature.

What has it done for me?

I've already talked about some of the great benefits to the code base, but for me, individually, there were some niceties too. First off is that I don't have to worry about screwing up so much. If I go down some crazy path in the code which doesn't work out, then I can bail out of it easily without worrying that other developers on my team (Hi, Nick!) will think less of me.

I find myself checking in code a lot more frequently. I have a git alias to just do

git wip  

and that checks in whatever I have lying around. It is a very powerful undo feature. I do a git wip whenever I find myself at the completion of some logical step, be it writing a unit test, finishing some function, or changing some style.
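The alias itself is nothing built into git; it's just a shortcut in my config. A minimal version looks something like the line below - the commit message is whatever you like, and the quoting assumes a bash-style shell.

git config --global alias.wip '!git add -A && git commit -m "wip"'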

How do you do it?

It is easy. Just work as you normally would, but with the added freedoms I mentioned. When you're ready to create a pull request, you can issue

git log  

and find the first commit in your branch. There are some suggested ways to do this automatically, but they never seem to work for me, so I just do it manually.
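If you want to try the automatic route anyway, the usual trick (assuming the parent branch is master) is to ask git for the merge base and feed it straight into the rebase:

git rebase -i $(git merge-base HEAD master)

The $() substitution needs a bash-style shell; in PowerShell the equivalent would be git rebase -i (git merge-base HEAD master).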

Now you can rebase the commits

git rebase -i 0ec9df23  

where 0ec9df23 is the SHA of the last commit on the parent branch. This will open up an editor showing all the commits in chronological order. On the left you'll see the word pick.

pick 78fc6bc added PrDC session to speaking data  
pick 9741725 Podcast with Yves Goeleven

# Rebase 9d792c2..9741725 onto 9d792c2

Starting at the bottom, just change all but the first of these to squash or simply s. Save the file and exit. Git will now chug a bit and merge all the changes into one. With this complete, you can push to the central repo and issue the pull request. You can add additional changes to this PR to address comments and, just before you do the merge, do another squash. You may need to push with -f, but that's okay, this time.
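For that forced push there's a slightly safer variant worth knowing about: --force-with-lease refuses to overwrite commits somebody else has pushed since you last fetched. The remote and branch names here are just placeholders.

git push --force-with-lease origin my-feature-branch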

I'm a big fan of this approach and I hope you will be too. It is better for your sanity, for the team’s sanity and for the git history's sanity.

SQL Server Alias

Ever run into that problem where everybody on your team is using a different database instance name and every time you check out you have to update the config file with your instance name?

Boy have I seen some complicated solutions around this, involving reading settings from environment variables and having private, unversioned configuration files. One of the developers on my team recommended using SQL Server aliases to solve the problem. I fumbled around with these for a bit because I couldn't get them to work. Eventually, with some help, I got there.

Let's say that you have an instance on your machine called sqlexpress but that your project needs an instance called sqlexpress2012. The first thing is to open up the SQL Server Configuration Manager. The easiest way to do this is to run

SQLServerManager13.msc

where 13 is the version number of SQL Server, so SQL Server 2014 is 12 and SQL Server 2016 is 13. That will give you

SQL Server Configuration Manager

The first thing to check is that your existing instance is talking over TCP/IP.

Enable TCP/IP

Then click on the properties for TCP/IP and in the IP Addresses tab check for the TCP Dynamic Ports setting

Dynamic ports

Make note of that number because now we're going to jump to the alias section.

Aliases

Right click in there and add a new alias.

In here we are going to set the alias name to the new name we want to use. The port number is what we found above, the protocol is TCP/IP, and the server is the existing server name. You then have to repeat this for the 64-bit client configuration and then restart your SQL Server. You should now be able to use the new name, localhost\sqlexpress2012, to access the server.
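With the alias in place, everybody's config file can point at the shared name instead of their personal instance name. A typical connection string would then look something like this (the database name is just a placeholder):

Server=localhost\sqlexpress2012;Database=MyProject;Integrated Security=True;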