
A Geek Christmas Quiz–The Answers!


Thanks to everyone who had a go at my Geek Christmas Quiz. The response was fantastic, with both Iain Holder and Rob Pickering sending me emails of their answers. I’m pretty sure neither of them Googled any of the questions, since their scores weren’t spectacular :)

So, now the post you’ve all been waiting for with such anticipation… the answers!

Computers

  1. G.N.U stands for ‘GNU’s Not Unix’. A recursive acronym – how geeky is that?
  2. The A in ARM originally stood for ‘Acorn’, as in Acorn RISC Machine. Yes, I know it stands for ‘Advanced’ now, but the question said ‘originally’.
  3. TCP stands for Transmission Control Protocol.
  4. Paul Allen founded Microsoft with Bill Gates. I’ve just finished reading his memoir ‘Idea Man’. Hard work!
  5. F2 (hex) is 15(F) * 16 + 2 = 242.  1111 0010 (binary)
  6. Windows ME was based on the Ballmer Peak theory of software development.
  7. The first programmer was Ada Lovelace. Yes yes, I know that’s contentious, but I made up this quiz, so there!
  8. UNIX time started in 1970 (1st January to be exact). I know this because I just had to write a System.DateTime to UNIX time converter.
  9. SGI, the mostly defunct computer maker. You get a mark for Silicon Graphics International (or Inc).
  10. Here’s monadic ‘Bind’ in C#: M<B> Bind<A,B>(M<A> a, Func<A, M<B>> func)
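
To make that signature a little more concrete, here’s a minimal sketch (not part of the quiz) using a toy Maybe&lt;T&gt; type; the type and member names are purely illustrative:

using System;

// A toy 'Maybe' container: either holds a value or holds nothing.
public class Maybe<T>
{
    public readonly bool HasValue;
    public readonly T Value;
    public Maybe(T value) { HasValue = true; Value = value; }
    public Maybe() { HasValue = false; }
}

public static class MaybeMonad
{
    // The quiz signature, with M = Maybe.
    public static Maybe<B> Bind<A, B>(Maybe<A> a, Func<A, Maybe<B>> func)
    {
        // If there is no value, short-circuit; otherwise apply func to the wrapped value.
        return a.HasValue ? func(a.Value) : new Maybe<B>();
    }
}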

Name That Geek!

[Image: ‘Name that geek’ photo round]

  1. Bill Gates – Co-founder of Microsoft with Paul Allen.
  2. Tim Berners-Lee – Creator of the World Wide Web.
  3. Larry Ellison – Founder of Oracle. Lives in a Samurai House (how geeky is that?)
  4. Linus Torvalds – Creator of Linux.
  5. Alan Turing – Mathematician and computer scientist. Described the Turing Machine. Helped save the free world from the Nazis.
  6. Steve Jobs – Founded Apple, NeXT and Pixar.
  7. Natalie Portman – Actress and self confessed geek.

Science

  1. The four ‘letters’ of DNA are C, G, T and A. If you know the actual names of the nucleotides (guanine, adenine, thymine, and cytosine), give yourself a bonus point – you really are a DNA geek!
  2. The ‘c’ in E = mc2 is a constant, the speed of light.
  3. The next number in the Fibonacci sequence 1 1 2 3 5 8 is 13 (5 + 8).
  4. C8H10N4O2 is the chemical formula for caffeine.
  5. According to Wikipedia, Australopithecus, the early hominid, became extinct around 2 million years ago.
  6. You would not find an electron in an atomic nucleus.
  7. Nitrogen is the most common gas in the Earth’s atmosphere.
  8. The formula for Ohm’s Law is I = V/R (current = voltage / resistance).
  9. A piece of paper that, when folded in half, maintains the ratio between the length of its sides, has sides with a length ratio of 1.618, ‘the golden ratio’. Did you know that the ratio between successive Fibonacci sequence numbers also tends to the golden ratio? Maths is awesome!
  10. The closest living land mammal to the cetaceans (including whales) is the Hippopotamus.

Space

  1. The second stage of the Apollo Saturn V moon rocket was powered by five J-2 rocket engines (the third stage used a single J-2).
  2. Saturn’s largest moon is Titan. Also the only moon in the solar system (other than our own) that a spaceship has landed on.
  3. You would experience 1/6th of the Earth’s gravity on the moon. Or thereabouts.
  4. This question proved most contentious. The answer is false, there is nowhere in space that has no gravity. Astronauts are weightless because they are in free-fall. Gravity itself is a property of space.
  5. A geosynchronous satellite has an orbital period of 24 hours, so it appears to be stationary to a ground observer.
  6. The furthest planet from the sun is Neptune. Far fewer people know this than know that Pluto used to be the furthest planet from the sun. Actually, Pluto was only the furthest for part of its irregular orbit.
  7. There are currently 6 people aboard the International Space Station.
  8. According to Google (yes, I know) there are 13,000 earth satellites.
  9. Prospero was the only satellite built and launched by the UK. It was launched by stealth after the programme had been cancelled, that’s the way we do things in the UK.
  10. The second man on the moon was Buzz Aldrin. He’s never forgiven NASA.

Name That Spaceship!

[Image: ‘Name that spaceship’ photo round]

In this round, give yourself a point if you can name the film or TV series the fictional spacecraft appeared in.

  1. Red Dwarf. Sorry, you probably have to be British to get this one.
  2. Space: 1999. Sorry, you really have to be British and over 40 to get this one … or a major TV space geek.
  3. Voyager. Difficult, interplanetary probes all look similar.
  4. Apollo Lunar Excursion Module (LEM). You can have a point for ‘Lunar Module’, but no, you don’t get a point for ‘Apollo’. Call yourself a geek?
  5. Skylab. The first US space station, made out of old Apollo parts. Not many people get this one. I read a whole book about it; that’s how much of a space geek I am.
  6. Darth Vader’s TIE fighter. You can have a point for ‘TIE Fighter’. You can’t have a point for ‘Star Wars’. Yes yes, I know I’m contradicting myself, but, come on, every geek should know this.
  7. Curiosity. No, no points for ‘Mars Rover’.
  8. 2001: A Space Odyssey. Even I don’t know what the ship is called.
  9. Soyuz. It’s been used by the Russians to travel into space since 1966. 46 years! It’s almost as old as me. Odd, when space travel is so synonymous with high-technology, that much of the hardware is actually ancient.

Geek Culture

  1. ‘Spooky’ Mulder was the agent in the X-Files, played by actor David Duchovny.
  2. Kiki is the trainee witch in ‘Kiki’s Delivery Service’, one of my favourite anime movies by the outstanding Studio Ghibli.
  3. The actual quote: “Humans are a disease, a cancer of this planet.” by Agent Smith. You can have a point for Virus or Cancer too. Thanks Chris for the link and clarification.
  4. Spiderman of course!
  5. “It’s a Banana” Kryten, of Red Dwarf, learns to lie.
  6. My wife, who is Japanese, translates ‘Otaku’ as ‘geek’. Literally it means ‘you’ and is used to describe someone with obsessive interests. An appropriate question for a geek quiz I think.
  7. The name R2D2  apparently came about when Lucas heard someone ask for Reel 2 Dialog Track 2 in the abbreviated form ‘R-2-D-2’. Later it was said to stand for Second Generation Robotic Droid Series 2, you can have a point for either.
  8. Clarke’s 3rd law states: “Any sufficiently advanced technology is indistinguishable from magic.”
  9. African or European? From Monty Python and the Holy Grail.
  10. “Open the pod bay doors please, HAL.” 2001: A Space Odyssey. Or on acid here.

So there you are. I hope you enjoyed it, and maybe even learnt a little. I certainly did. I might even do it again next year.

A very Merry Christmas to you all!


Converting Between Unix Time And DateTime


Unix time is the time value used in Unix-based operating systems and is often exposed by Unix-based APIs. To convert it to, or from, a .NET System.DateTime, simply calculate the number of seconds since the Unix epoch: midnight on the 1st January 1970. I’ve created a little class you can use to do just that. Note that the Unix epoch is UTC, so you should always convert your local time to UTC before doing the calculation.

public class UnixTime
{
    private static readonly DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public static long FromDateTime(DateTime dateTime)
    {
        if (dateTime.Kind == DateTimeKind.Utc)
        {
            return (long)dateTime.Subtract(epoch).TotalSeconds;
        }
        throw new ArgumentException("Input DateTime must be UTC");
    }

    public static DateTime ToDateTime(long unixTime)
    {
        return epoch.AddSeconds(unixTime);
    }

    public static long Now
    {
        get
        {
            return FromDateTime(DateTime.UtcNow);
        }
    }
}

You can convert from Unix time to a UTC DateTime like this:

var calculatedCurrentTime = UnixTime.ToDateTime(currentUnixTime);

Convert to Unix time from a UTC DateTime like this:

var calculatedUnixTime = UnixTime.FromDateTime(myDateTimeValue);

And get the current time as a UTC time value like this:

Console.Out.WriteLine(UnixTime.Now);
 
As an interesting aside, the 32 bit signed integer used in older Unix systems will overflow at 14 minutes past 3 o’clock and 7 seconds on the morning of 19th January 2038 and interpret the date as 1901. I shall come out of retirement and spend happy well paid hours as a ‘unix time consultant’. 64 bit systems will overflow in the year 292,277,026,596.
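
As a quick sanity check of that date, using the UnixTime class above (the exact formatting of the output depends on your locale):

// The largest value a signed 32 bit Unix time can hold is int.MaxValue
// (2,147,483,647) seconds past the epoch.
var lastRepresentableSecond = UnixTime.ToDateTime(int.MaxValue);
Console.Out.WriteLine(lastRepresentableSecond);
// 19 January 2038, 03:14:07 (UTC)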

Visual Studio: Multiple Startup Projects


I’ve been using Visual Studio for over 10 years, but still keep on learning new tricks. That’s because I learn things very slowly, dear reader! Today’s ‘wow, I didn’t know you could do that’ moment was finding out that it’s possible to launch multiple startup projects when one hits F5 – or Control-F5 in this case.

My scenario is that I’m writing a little spike for a distributed application. Each of my services is implemented as a console app. During development, I want to run integration tests where the services all talk to each other, so before running the tests I want to run all the services. Now I’m used to right-clicking on a project and choosing ‘Set as startup project’, but you can’t select multiple projects this way and it’s very tedious to launch multiple projects by going to each one in turn, setting it as the startup project, and then hitting ctrl-F5. What I didn’t know is that you can right click on the Solution node, select ‘Set Startup Projects’ and you get this dialogue:

[Image: the ‘Set Startup Projects’ dialogue]

You can then select multiple startup projects and choose any number of them to ‘Start without debugging’.

Now I can hit ctrl-F5 and all my little services start up. Wonderful.

How to Write Scalable Services



I’ve spent the last five years implementing and thinking about service oriented architectures. One of the core benefits of a service oriented approach is the promise of greatly enhanced scalability and redundancy. But to realise these benefits we have to write our services to be ‘scalable’. What does this mean?

There are two fundamental ways we can scale software: 'Vertically' or 'horizontally'.

  • Vertical Scaling addresses the scalability of a single instance of the service. A simple way to scale most software is simply to run it on a more powerful machine; one with a faster processor or more memory. We can also look for performance improvements in the way we write the code itself. An excellent example of a company using this approach is LMAX. However, there are many drawbacks to the vertical scaling approach. Firstly, the costs are rarely linear; ever more powerful hardware tends to be exponentially more expensive, and the costs (and constraints) of building sophisticated performance-optimised software are also considerable. Indeed, premature performance optimisation often leads to overly complex software that's hard to reason about and therefore more prone to defects and high maintenance costs. Most importantly, vertical scaling does not address redundancy; vertically scaling an application just turns a small single point of failure into a large single point of failure.

  • Horizontal Scaling. Here we run multiple instances of the application rather than focussing on the performance of a single instance. This has the advantage of being linearly scalable; rather than buying a bigger, more expensive box, we just buy more copies of the same cheap box. With the right architectural design, this approach can scale massively. Indeed, it's the approach taken by almost all of the largest internet-scale companies: Facebook, Google, Twitter etc. Horizontal scaling also introduces redundancy; the loss of a single node need not impact the system as a whole. For these reasons, horizontal scaling is the preferred approach to building scalable, redundant systems.

So, the fundamental approach to building scalable systems is to compose them of horizontally scaled services. In order to do this we need to follow a few basic principles:

  • Stateless. Any service that stores state across an interaction with another service is hard to scale. For example, a web service that stores in-memory session state between requests requires a sophisticated session-aware load balancer. A stateless service, by contrast, only requires simple round-robin load balancing. For a web application (or service) you should avoid using session state or any static or application-level variables.

  • Coarse Grained API. To be stateless, a service should expose an API that exposes operations as a single interaction. A chatty API, where one sets up some data, asks for some transition, and then reads off some results, implies statefulness by its design. The service would need to identify a session and then maintain information about that session between successive calls. Instead a single call, or message, to the service should encapsulate all the information that the service requires to complete the operation.

  • Idempotent. Much scalable infrastructure is a trade-off between competing constraints. Delivery guarantees are one of these. For various reasons it is far simpler to guarantee 'at least once' delivery than 'exactly once'. If you can make your software tolerant of multiple deliveries of the same message it will be easier to scale (see the sketch after this list).

  • Embrace Failure. Arrays of services are redundant if the system as a whole can survive the loss of a single node. You should design your services and infrastructure to expect and survive failure. Consider implementing a Chaos Monkey that randomly kills processes. If you start by expecting your services to fail, you'll be prepared when they inevitably do.

  • Avoid instance-specific configuration. A scalable service should be designed in such a way that it doesn't need to know about other instances of itself, or have to identify itself as a specific instance. I shouldn't have to configure one instance any differently from another. Avoid communication mechanisms that require messages to be addressed to a specific instance of the service, or any non-convention-based way in which the service is required to identify itself. Instead we should rely on infrastructure (load balancers, pub-sub messaging etc.) to manage the communication between arrays of services.

  • Simple automated deployment. Having a service that can scale is no advantage if we can't deploy new instances when we are close to capacity. A scalable system must have automated processes to deploy new instances of services as the need arises.

  • Monitoring. We need to know when services are close to capacity so that we can add additional service instances. Monitoring is usually an infrastructure concern; we should be monitoring CPU, network, and memory usage and have alerts in place to warn us when these pass certain trigger points. Sometimes it's worth introducing application specific alerts when some internal trigger is reached, such as the number of items in an in-memory queue, for example.

  • KISS - Keep It Small and Simple. This is good advice for any software project, but it is especially pertinent to building scalable, resilient systems. Large monolithic codebases are hard to reason about, hard to monitor, and hard to scale. Building your system out of many small pieces makes it easy to address those pieces independently. Design your system so that each service has only one purpose and is decoupled from the operations of other services. Have your services communicate using non-proprietary open standards to avoid vendor lock-in and allow for a heterogeneous platform. JSON over HTTP, for example, is an excellent choice for inter-service communication. Every platform has HTTP and JSON libraries and there is abundant off-the-shelf infrastructure (proxies, load balancers, caches) that can be used to help your system scale.
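
As a concrete illustration of the idempotency point above, here's a minimal sketch of a message handler that tolerates duplicate deliveries by recording the IDs of messages it has already processed. The message type, the de-duplication store, and all of the names are hypothetical; a production system would persist the processed IDs in a durable store rather than in memory.

using System;
using System.Collections.Concurrent;

public class PaymentMessage
{
    public Guid MessageId { get; set; }   // assigned once by the publisher
    public decimal Amount { get; set; }
}

public class IdempotentPaymentHandler
{
    // In-memory record of messages we've already handled.
    private readonly ConcurrentDictionary<Guid, bool> processed =
        new ConcurrentDictionary<Guid, bool>();

    public void Handle(PaymentMessage message)
    {
        // TryAdd returns false if this MessageId has been seen before,
        // so an 'at least once' redelivery becomes a harmless no-op.
        if (!processed.TryAdd(message.MessageId, true))
        {
            return;
        }

        TakePayment(message.Amount);
    }

    private void TakePayment(decimal amount)
    {
        Console.WriteLine("Taking payment of {0}", amount);
    }
}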

This post just gives a few pointers to building scalable systems; for far more detailed examples and case studies I can't recommend the High Scalability Blog enough. The Dodgy Coder blog has a very nice summary of some of the High Scalability case studies here.

EasyNetQ in Africa


Anthony Moloney got in touch with me recently. He’s using EasyNetQ, my simple .NET API for RabbitMQ, with his team in Kenya. Here’s an email he sent me:

Hi Mike,

Further to the brief twitter exchange today about using EasyNetQ on our Kenyan project. We started using EasyNetQ back in early November and I kept meaning to drop you a line to thank you for all your good work.

Virtual City are based in Nairobi and supply mobile solutions to the supply chain and agribusiness industry in Africa. African solutions for African problems. I got involved with them about 2 years ago to help them improve the quality of their products and I have been working on and off with them since then. Its been a bit of a journey we are getting there.

We have a number of client applications including android and wpf working in an online/offline mode over mobile networks. We need to process large amounts of incoming commands from these applications. These commands are also routed via the server to other client apps.

The application had originally used MVC and SQL server to synchronously process and store the commands but we were running into severe performance problems. We looked at various MQ solutions and decided to use RabbitMQ, WebApi & Mongo to improve processing throughput. While researching a .Net API for RabbitMQ I noticed that you had created the EasyNetQ API.

EasyNetQ greatly simplifies interacting with RabbitMQ and providing your needs are not too complicated you really don't need to know too much about the guts of RabbitMQ. We replaced the existing server setup in about a week. The use of RabbitMQ has greatly increased the scalability of the product and allows us to either scale up or scale out.

We are also using the EasyNetQ management API for monitoring queue activity on our customer services dashboard.

Kind Regards

Anthony Moloney

One of the great rewards of running an open source project is hearing about the fascinating ways that it’s used around the world. I really like that it’s an ‘African solution for African problems’ and built by a Kenyan development team. It’s also interesting that they’ve used OSS projects like RabbitMQ and Mongo alongside .NET. It reminds me of the Stack Overflow architecture, a .NET core surrounded by OSS infrastructure.

EasyNetQ on .NET Rocks!


dotnetrocks

Last week I had the pleasure of being interviewed by Carl Franklin and Richard Campbell for a .NET Rocks episode on RabbitMQ and EasyNetQ. It was terrific fun and a real honour to be invited on the show. I’ve been listening to .NET Rocks since it started in 2002, so you can imagine how excited I was. Carl and Richard are seasoned pros when it comes to podcasting, and the awesome ninja editing skills they possess turned my rather hesitant and rambling answers into something that almost sounded coherent.

You can listen to the show on the link below:

http://www.dotnetrocks.com/default.aspx?ShowNum=848

Now Richard, about that Tesla …

Fractured Product Syndrome


How do you sell your software product? Do you fork the source code of your previous customer, modify it a little and deploy it as its own instance with its own database? Maybe each customer even has their own server? Their own set of servers?

You are suffering from the Fractured Product Syndrome.

Let me tell you a story.

Colin and Alex have a little web development consultancy called Coaltech (get it?). They are pretty handy with ASP.NET and SQL Server and already had a respectable list of satisfied clients when a local car hire company, Easy Wheels, asked them if they could build a car booking system. It would keep a list of Easy Wheels’ cars and allow customers to view them and book them online.

Colin and Alex signed a contract with Easy Wheels and got busy. Their “just do it” attitude to getting features done meant that the code wasn’t particularly beautiful, but it did the job, and after a couple of months’ work the system went live. Easy Wheels were very happy with their new system, so happy in fact that they started to tell other independent car rental companies about it. A few months later Colin and Alex got a call from Hire My Ride. They loved what Coaltech had done for Easy Wheels and wanted a similar system. Of course it would have to be redesigned to fit in with Hire My Ride’s branding, but the basic functionality was pretty much identical. Colin and Alex did the obvious thing: they took a copy of the Easy Wheels code and tweaked it for Hire My Ride. Of course they charged the same bespoke software price for Hire My Ride’s system.

Soon a third and fourth car hire company had asked Coaltech for their ‘system’. Each time, Colin and Alex took a copy of the last customer’s code – it made sense to take the most recent code because they inevitably added a few refinements with each customer – and then altered it to match the new customer’s requirements. Before long they had arranged a deal with another web agency to take over all their non-car-hire work and decided to concentrate full time on the ‘system’. It needed a name too; they couldn’t carry on calling it ‘Easy Wheels’ and they certainly couldn’t market it like that. Soon it became ‘Coaltech Cars’, with a new marketing website. They also found that they couldn’t keep up with demand with just the two of them doing customer implementations. Each new customer took around six weeks of development work, with the inevitable to-and-fro of slightly different requirements and design changes. To help meet demand they started to hire developers. First Jez, then Dave, then Neville. They all modified the software in slightly different ways, but it didn’t seem to matter at the time.

Fast forward five years. Coaltech now has 50 employees and around 50 customers. It’s fair to say that they are the leading vendor of independent car rental management systems. The majority of their staff are developers, although they also have a small sales, HR and customer relationship management team. You would have thought that Colin and Alex would be happy to have turned their little web development company into such a success, but instead life just seemed to be one long headache. Although the company had grown, it always seemed hard to turn a profit. As it was, customers often balked at the price of the system. The same bug would turn up again and again for different customers. There never seemed to be enough time to fix them all. Even though they might have delivered a new feature for one client, it always seemed to take just as long to implement it for another; sometimes longer, depending on what code they’d been forked from. The small team that looked after the servers were in constant fire-fighting mode and got very upset when anyone suggested ‘upgrading’ any of the clients – it always meant bugs, downtime and screaming customers. And then the government changed the Value Added Tax rate. Colin had to cancel his holiday and they lost two of their best developers after several weeks of late nights and no weekends while they updated and redeployed 50 separate web applications.

The end for Coaltech cars came slowly and painfully. Alex had the first hint of it when he was made aware of a little company called RentBoy. They had a little software-as-a-service product for small car hire companies. To use it you entered your credit card number, a logo, a few other details, and you were good to go. They weren’t any immediate competition for Coaltech Cars, having only a small subset of the features, but they soon captured the low end of the market, the one or two man band car companies that had never been able to afford Coaltech’s sign up or licensing fees.

But then the features started coming thick and fast and soon Coaltech found they were losing out to RentBoy when bidding for clients. Colin found an article on the High Scalability blog about RentBoy’s architecture. They had a single scalable application that served all their customers, one place to fix bugs, one point of scalability and maintenance. They practiced continuous deployment, allowing them to quickly build and release new features for all their customers. The company had four employees and already more customers than Coaltech. They charged about a tenth of Coaltech’s fees.

Coaltech’s new client revenues soon dried up. They’d always made a certain amount of money from sign-up fees. Too late they realised that they had to start shedding staff, always a painful thing for a small closely knit company. The debts mounted until the bank would no longer support them, and before long they had to declare bankruptcy. Luckily Colin and Alex managed to get jobs as project managers in enterprise development teams.

The moral of the story? Try to avoid Fractured Product Syndrome if you possibly can. Although simply forking the source code for each new customer appears by far the easiest thing to do at the start, it simply doesn’t scale. Start thinking about how to build multi-tenanted software-as-a-service long before you get to where Coaltech got to. Learn to say ‘no’ to customers if you have to. It’s far better to have a high number of low-value customers than a smaller number of higher-value ones on a differentiated platform. It’s much easier for a low-value volume software provider to move into the high-value space than for a high-value provider to move down.

Learn to recognise Fractured Product Syndrome and address it before it gets serious!

Coders, Musicians and Cooks


In a light-hearted Twitter exchange yesterday I asked why so many coders also played guitar. Mark Seemann suggested that there was also a high correlation with cooking.

[Image: the Twitter exchange]

How about a survey? Mark and I have over 4000 twitter followers between us so I was sure we could get a few of them to click a few radio buttons, with no promise of reward, but with the warm feeling that they were helping to progress the anthropological understanding of the coding tribe. So I created a survey and waited for the results to stream in. Of course, as Rob Pickering pointed out, this kind of self selecting survey is very unscientific, but fun nonetheless.

What have we learnt?

  1. It is ridiculously easy to create an online survey using Google Drive. Just click ‘new form’, type in a few questions, and Bob’s yer uncle. It automatically builds a nice summary graphic on the fly as the survey progresses (see below).
  2. Most of Mark’s and my followers are C# coders: 81%. That’s not really surprising.
  3. More coders cook than play musical instruments: 85% versus 53%. That’s not really surprising either. I expect if you took a survey of the general population you’d find quite a high percentage who cook versus those who play.
  4. Guitar is by far the most popular instrument among coders. 30% play guitar. This confirms my hunch, but doesn’t answer my question – why is it such a high percentage?

Here is the Google Drive graphic of the results.

[Image: the Google Drive summary of the survey results]


Lua as a Distributed Workflow Scripting Language


I’ve been spending a lot of time recently thinking about ways of orchestrating long running workflows in a service oriented architecture. I was talking this over at last Tuesday’s Brighton ALT NET when Jay Kannan, who’s a game developer amongst other things, mentioned that Lua is a popular choice for scripting game platforms. Maybe I should check it out. So I did. And it turns out to be very interesting.

If you haven’t heard of Lua, it’s a “powerful, fast, lightweight, embeddable scripting language” originally conceived by a team at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio) in Brazil. It’s the leading scripting language for game platforms and also pops up in other interesting locations, including Photoshop and Wikipedia. It’s got a straightforward C API that makes it relatively simple to P/Invoke from .NET, and indeed there’s a LuaInterface library that provides a managed API.

I got the source code from the Google code svn repository and built it in VS 2012, but there are NuGet packages available as well.

It turned out to be very simple to use Lua to script a distributed workflow. Lua has first-class coroutines, which means that you can pause and continue a Lua script at will. The LuaInterface library allows you to inject C# functions and call them as Lua functions, so it’s simply a case of calling an asynchronous C# ‘begin’ function, suspending the script by yielding the coroutine, waiting for the asynchronous function to return, setting the return value, and starting the script up again.

Let me show you how.

First here’s a little Lua script:

a = 5
b = 6

print('doing remote add ...')

r1 = remoteAdd(a, b)

print('doing remote multiply ...')

r2 = remoteMultiply(r1, 4)

print('doing remote divide ...')

r3 = remoteDivide(r2, 2)

print(r3)

The three functions ‘remoteAdd’, ‘remoteMultiply’ and ‘remoteDivide’ are all asynchronous. Behind the scenes a message is sent via RabbitMQ to a remote OperationServer where the calculation is carried out and a message is returned.

The script runs in my LuaRuntime class. This creates and sets up the Lua environment that the script runs in:

public class LuaRuntime : IDisposable
{
    private readonly Lua lua = new Lua();
    private readonly Functions functions = new Functions();

    public LuaRuntime()
    {
        lua.RegisterFunction("print", functions, typeof(Functions).GetMethod("Print"));
        lua.RegisterFunction("startOperation", this, GetType().GetMethod("StartOperation"));

        lua.DoString(
            @"
            function remoteAdd(a, b) return remoteOperation(a, b, '+'); end
            function remoteMultiply(a, b) return remoteOperation(a, b, '*'); end
            function remoteDivide(a, b) return remoteOperation(a, b, '/'); end

            function remoteOperation(a, b, op)
                startOperation(a, b, op)
                local cor = coroutine.running()
                coroutine.yield(cor)

                return LUA_RUNTIME_OPERATION_RESULT
            end
            ");
    }

    public void StartOperation(int a, int b, string operation)
    {
        functions.RunOperation(a, b, operation, result =>
        {
            lua["LUA_RUNTIME_OPERATION_RESULT"] = result;
            lua.DoString("coroutine.resume(co)");
        });
    }

    public void Execute(string script)
    {
        const string coroutineWrapper =
            @"co = coroutine.create(function()
            {0}
            end)";
        lua.DoString(string.Format(coroutineWrapper, script));
        lua.DoString("coroutine.resume(co)");
    }

    public void Dispose()
    {
        lua.Dispose();
        functions.Dispose();
    }
}

When this class is instantiated it creates a new LuaInterface environment (the Lua class) and a new instance of a Functions class that I’ll explain below.

The constructor is where most of the interesting setup happens. First we register two C# functions that we want to call from inside Lua: ‘print’ which simply prints from the console, and ‘startOperation’ which starts an asynchronous math operation.

Next we define our three functions ‘remoteAdd’, ‘remoteMultiply’ and ‘remoteDivide’, which all in turn invoke a common function, ‘remoteOperation’. RemoteOperation calls the registered C# function ‘startOperation’ then yields the currently running coroutine. Effectively the script will stop here until it’s started again. After it resumes, the result of the asynchronous operation is read from the LUA_RUNTIME_OPERATION_RESULT variable and returned to the caller.

The C# function StartOperation calls RunOperation on our Functions class which has an asynchronous callback. In the callback we set the result value in the Lua environment and execute ‘coroutine.resume’ which restarts the Lua script at the point where it yielded.

The Execute function actually runs the script. First it embeds it in a ‘coroutine.create’ call so that the entire script is created as a coroutine, then it simply starts the coroutine by calling ‘coroutine.resume’.

The Functions class is just a wrapper around a function that maintains an EasyNetQ connection to RabbitMQ and makes an EasyNetQ request to a remote server somewhere else on the network.

public class Functions : IDisposable
{
    private readonly IBus bus;

    public Functions()
    {
        bus = RabbitHutch.CreateBus("host=localhost");
    }

    public void Dispose()
    {
        bus.Dispose();
    }

    public void RunOperation(int a, int b, string operation, Action<int> resultCallback)
    {
        using (var channel = bus.OpenPublishChannel())
        {
            var request = new OperationRequest()
            {
                A = a,
                B = b,
                Operation = operation
            };
            channel.Request<OperationRequest, OperationResponse>(request, response =>
            {
                Console.WriteLine("Got response {0}", response.Result);
                resultCallback(response.Result);
            });
        }
    }

    public void Print(string msg)
    {
        Console.WriteLine("LUA> {0}", msg);
    }
}
 
Here’s a sample run of the script:
 
DEBUG: Trying to connect
DEBUG: OnConnected event fired
INFO: Connected to RabbitMQ. Broker: 'localhost', Port: 5672, VHost: '/'
LUA> doing remote add ...
DEBUG: Declared Consumer. queue='easynetq.response.143441ff-3635-4d5d-8e42-6b379b3f8356', prefetchcount=50
DEBUG: Published to exchange: 'easy_net_q_rpc', routing key: 'Mike_DistributedLua_Messages_OperationRequest:Mike_DistributedLua_Messages', correlationId: '50560dd9-2be1-49a1-96f6-9c62641080ae'
DEBUG: Recieved
RoutingKey: 'easynetq.response.143441ff-3635-4d5d-8e42-6b379b3f8356'
CorrelationId: '50560dd9-2be1-49a1-96f6-9c62641080ae'
ConsumerTag: '101343d9-9497-4893-88e6-b89cc1de29a4'
Got response 11
LUA> doing remote multiply ...
DEBUG: Declared Consumer. queue='easynetq.response.f571f6d7-b963-4a88-bf62-f05785009e39', prefetchcount=50
DEBUG: Published to exchange: 'easy_net_q_rpc', routing key: 'Mike_DistributedLua_Messages_OperationRequest:Mike_DistributedLua_Messages', correlationId: '0ea7e1c3-6f12-4cb9-a861-2f5de8f2600d'
DEBUG: Model Shutdown for queue: 'easynetq.response.143441ff-3635-4d5d-8e42-6b379b3f8356'
DEBUG: Recieved
RoutingKey: 'easynetq.response.f571f6d7-b963-4a88-bf62-f05785009e39'
CorrelationId: '0ea7e1c3-6f12-4cb9-a861-2f5de8f2600d'
ConsumerTag: '2c35f24e-7745-4475-885a-d214a1446a70'
Got response 44
LUA> doing remote divide ...
DEBUG: Declared Consumer. queue='easynetq.response.060f7882-685c-4b00-a930-aa4f20f7c057', prefetchcount=50
DEBUG: Published to exchange: 'easy_net_q_rpc', routing key: 'Mike_DistributedLua_Messages_OperationRequest:Mike_DistributedLua_Messages', correlationId: 'ea9a90cc-cd7d-4f05-b171-c6849026ac4a'
DEBUG: Model Shutdown for queue: 'easynetq.response.f571f6d7-b963-4a88-bf62-f05785009e39'
DEBUG: Recieved
RoutingKey: 'easynetq.response.060f7882-685c-4b00-a930-aa4f20f7c057'
CorrelationId: 'ea9a90cc-cd7d-4f05-b171-c6849026ac4a'
ConsumerTag: '90e6b024-c5c4-440a-abdf-cb9a000c131c'
Got response 22
LUA> 22
DEBUG: Model Shutdown for queue: 'easynetq.response.060f7882-685c-4b00-a930-aa4f20f7c057'
Completed
DEBUG: Connection disposed

You can see the Lua print statements interleaved with EasyNetQ DEBUG statements showing the messages being published and consumed.
 
So there you go, a distributed workflow scripting engine in under 100 lines of code. All I’ve got to do now is serialize the Lua environment at each yield and then restart it again from its serialized state. This is possible according to a bit of googling yesterday afternoon. Watch this space.
 
You can find the code for all this on GitHub here:
 

Serializing Lua Coroutines With Pluto


In my last post I showed how it’s possible to use Lua as a distributed workflow scripting language because of its built-in support for coroutines. But in order to create viable long-running workflows we have to have some way of saving the script’s state at the point that it pauses; we need a way to serialize the coroutine.

Enter Pluto:

Pluto is a heavy duty persistence library for Lua. Pluto is a library which allows users to write arbitrarily large portions of the "Lua universe" into a flat file, and later read them back into the same or a different Lua universe.

I downloaded the Win32 build of Tamed Pluto from here and placed it in a directory alongside my Pluto.dll. You have to tell Lua where to find C libraries by setting the Lua package.cpath:

using (var lua = new Lua())
{
    lua["package.cpath"] = @"D:\Source\Mike.DistributedLua\Lua\?.dll";

    ...
}

Lua can now find the Pluto library and we can import it into our script with:

require('pluto')

Here’s a script which defines a function foo which calls a function bar. Foo is used to create a coroutine. Inside bar the coroutine yields. The script uses Pluto to serialize the yielded coroutine and saves it to a file.

-- import pluto, print out the version number 
-- and set non-human binary serialization scheme.
require('pluto')
print('pluto version '..pluto.version())
pluto.human(false)

-- perms are items to be substituted at serialization
perms = { [coroutine.yield] = 1 }

-- the functions that we want to execute as a coroutine
function foo()
    local someMessage = 'And hello from a long dead variable!'
    local i = 4
    bar(someMessage)
    print(i)
end

function bar(msg)
    print('entered bar')
    -- bar runs to here then yields
    coroutine.yield()
    print(msg)
end

-- create and start the coroutine
co = coroutine.create(foo)
coroutine.resume(co)

-- the coroutine has now stopped at yield. so we can
-- persist its state with pluto
buf = pluto.persist(perms, co)

-- save the serialized state to a file
outfile = io.open(persistCRPath, 'wb')
outfile:write(buf)
outfile:close()

This next script loads the serialized coroutine from disk, deserializes it, and runs it from the point that it yielded:

-- import pluto, print out the version number 
-- and set non-human binary serialization scheme.
require('pluto')
print('pluto version '..pluto.version())
pluto.human(false)

-- perms are items to be substituted at serialization
-- (reverse the key/value pair that you used to serialize)
perms = { [1] = coroutine.yield }

-- get the serialized coroutine from disk
infile, err = io.open(persistCRPath, 'rb')
if infile == nil then
    error('While opening: ' .. (err or 'no error'))
end

buf, err = infile:read('*a')
if buf == nil then
    error('While reading: ' .. (err or 'no error'))
end

infile:close()

-- deserialize it
co = pluto.unpersist(perms, buf)

-- and run it
coroutine.resume(co)

When we run the scripts, the first prints out ‘entered bar’ and then yields:

LUA> pluto version Tamed Pluto 1.0
LUA> entered bar

The second script loads the paused ‘foo’ and ‘bar’ and continues from the yield:

LUA> pluto version Tamed Pluto 1.0
LUA> And hello from a long dead variable!
LUA> 4

Having the ability to run a script to the point at which it starts a long running call, serialize its state and store it somewhere, then pick up that serialized state and resume it at another place and time is a very powerful capability. It means we can write simple procedural scripts for our workflows, but have them execute over a distributed service oriented architecture.

The complete code for this experiment is on GitHub here.

Stop Your Console App The Nice Way


When you write a console application, do you simply put

Console.WriteLine("Hit <enter> to end");
Console.ReadLine();

at the end of Main and block on Console.ReadLine()?

It’s much nicer to make your console application work with Windows command line conventions and exit when the user types Control-C.

The Console class has a static event CancelKeyPress that fires when Ctrl-C is pressed. You can create an AutoResetEvent that blocks until you call Set in the CancelKeyPress handler.

It’s also nice to stop the application from being arbitrarily killed at the point where Ctrl-C is pressed by setting the ConsoleCancelEventArgs.Cancel property to true. This gives you a chance to complete what you are doing and exit the application cleanly.

Here’s an example. My little console application kicks off a worker thread and then blocks waiting for Ctrl-C as described above. When Ctrl-C is pressed I send a signal to the worker thread telling it to finish what it’s doing and exit.

using System;
using System.Threading;

namespace Mike.Spikes.ConsoleShutdown
{
    class Program
    {
        private static bool cancel = false;

        static void Main(string[] args)
        {
            Console.WriteLine("Application has started. Ctrl-C to end");

            // do some cool stuff here
            var myThread = new Thread(Worker);
            myThread.Start();

            var autoResetEvent = new AutoResetEvent(false);
            Console.CancelKeyPress += (sender, eventArgs) =>
            {
                // cancel the cancellation to allow the program to shutdown cleanly
                eventArgs.Cancel = true;
                autoResetEvent.Set();
            };

            // main blocks here waiting for ctrl-C
            autoResetEvent.WaitOne();
            cancel = true;
            Console.WriteLine("Now shutting down");
        }

        private static void Worker()
        {
            while (!cancel)
            {
                Console.WriteLine("Worker is working");
                Thread.Sleep(1000);
            }
            Console.WriteLine("Worker thread ending");
        }
    }
}

Much nicer I think.

The Benefits of a Reverse Proxy


A typical ASP.NET public website hosted on IIS is usually configured in such a way that the server that IIS is installed on is visible to the public internet. HTTP requests from a browser or web service client are routed directly to IIS which also hosts the ASP.NET worker process.  All the functionality needed to produce the web site is embodied in a single server. This includes caching, SSL termination, authentication, serving static files and compression. This approach is simple and straightforward for small sites, but is hard to scale, both in terms of performance, and in terms of managing the complexity of a large complex application. This is especially true if you have a distributed service oriented architecture with multiple HTTP endpoints that appear and disappear frequently.

A reverse proxy is a server component that sits between the internet and your web servers. It accepts HTTP requests, provides various services, and forwards the requests to one or many servers.

[Image: a reverse proxy sitting in front of the back-end web servers]

Having a point at which you can inspect, transform and route HTTP requests before they reach your web servers provides a whole host of benefits. Here are some:

Load Balancing

This is the reverse proxy function that people are most familiar with. Here the proxy routes incoming HTTP requests to a number of identical web servers. This can work on a simple round-robin basis, or, if you have stateful web servers (it’s better not to), there are session-aware load balancers available. It’s such a common function that load balancing reverse proxies are usually just referred to as ‘load balancers’. There are specialized load balancing products available, but many general purpose reverse proxies also provide load balancing functionality.

Security

A reverse proxy can hide the topology and characteristics of your back-end servers by removing the need for direct internet access to them. You can place your reverse proxy in an internet facing DMZ, but hide your web servers inside a non-public subnet.

Authentication

You can use your reverse proxy to provide a single point of authentication for all HTTP requests.

SSL Termination

Here the reverse proxy handles incoming HTTPS connections, decrypting the requests and passing unencrypted requests on to the web servers. This has several benefits:

  • Removes the need to install certificates on many back end web servers.
  • Provides a single point of configuration and management for SSL/TLS
  • Takes the processing load of encrypting/decrypting HTTPS traffic away from web servers.
  • Makes testing and intercepting HTTP requests to individual web servers easier.

Serving Static Content

Not strictly speaking ‘reverse proxying’ as such. Some reverse proxy servers can also act as web servers serving static content. The average web page can often consist of megabytes of static content such as images, CSS files and JavaScript files. By serving these separately you can take considerable load from back end web servers, leaving them free to render dynamic content.

Caching

The reverse proxy can also act as a cache. You can either have a dumb cache that simply expires after a set period, or better still a cache that respects Cache-Control and Expires headers. This can considerably reduce the load on the back-end servers.
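
For the proxy to cache safely, the back-end application has to emit sensible caching headers. Here's a sketch of what that might look like in an ASP.NET MVC controller; the controller, action and cache duration are purely illustrative:

using System;
using System.Web;
using System.Web.Mvc;

public class ProductController : Controller
{
    // Hypothetical action: product pages change rarely, so tell any
    // intermediate cache (including our reverse proxy) that it may
    // hold the response for five minutes.
    public ActionResult Details(int id)
    {
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(5));

        return View(LoadProduct(id));
    }

    private object LoadProduct(int id)
    {
        // Stand-in for a real data access call.
        return new { Id = id };
    }
}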

Compression

In order to reduce the bandwidth needed for individual requests, the reverse proxy can compress outgoing responses and decompress incoming compressed requests. This reduces the load on the back-end servers that would otherwise have to do the compression, and makes debugging requests to, and responses from, the back-end servers easier.

Centralised Logging and Auditing

Because all HTTP requests are routed through the reverse proxy, it makes an excellent point for logging and auditing.

URL Rewriting

Sometimes the URL scheme that a legacy application presents is not ideal for discovery or search engine optimisation. A reverse proxy can rewrite URLs before passing them on to your back-end servers. For example, a legacy ASP.NET application might have a URL for a product that looks like this:

http://www.myexampleshop.com/products.aspx?productid=1234

You can use a reverse proxy to present a search engine optimised URL instead:

http://www.myexampleshop.com/products/1234/lunar-module

Aggregating Multiple Websites Into the Same URL Space

In a distributed architecture it’s desirable to have different pieces of functionality served by isolated components. A reverse proxy can route different branches of a single URL address space to different internal web servers.

For example, say I’ve got three internal web servers:

http://products.internal.net/
http://orders.internal.net/
http://stock-control.internal.net/

I can route these from a single external domain using my reverse proxy:

http://www.example.com/products/    -> http://products.internal.net/
http://www.example.com/orders/ -> http://orders.internal.net/
http://www.example.com/stock/ -> http://stock-control.internal.net/

To an external customer it appears that they are simply navigating a single website, but internally the organisation is maintaining three entirely separate sites. This approach can work extremely well for web service APIs where the reverse proxy provides a consistent single public facade to an internal distributed component oriented architecture.

So …

So, a reverse proxy can offload much of the infrastructure concerns of a high-volume distributed web application.

We’re currently looking at Nginx for this role. Expect some practical Nginx related posts about how to do some of this stuff in the very near future.

Happy proxying!

SSH.NET


I’ve recently had the need to automate configuration of Nginx on an Ubuntu server. Of course, in UNIX land we like to use SSH (Secure Shell) to log into our servers and manage them remotely. Wouldn’t it be nice, I thought, if there was a managed SSH library somewhere so that I could automate logging onto my Ubuntu server, running various commands and transferring files. A short Google turned up SSH.NET by the somewhat mysterious Olegkap (at least I couldn’t find out anything else about them), which turned out to be just what I wanted. Here’s the blurb on the CodePlex site:

“This project was inspired by Sharp.SSH library which was ported from java and it seems like was not supported for quite some time. This library is complete rewrite using .NET 4.0, without any third party dependencies and to utilize the parallelism as much as possible to allow best performance I can get.”

It does exactly what it says on the tin. It’s on NuGet, so you can grab it with:

PM> Install-Package SSH.NET

Here’s how you run a remote command. First you need to build a ConnectionInfo object:

public ConnectionInfo CreateConnectionInfo()
{
    const string privateKeyFilePath = @"C:\some\private\key.pem";
    ConnectionInfo connectionInfo;
    using (var stream = new FileStream(privateKeyFilePath, FileMode.Open, FileAccess.Read))
    {
        var privateKeyFile = new PrivateKeyFile(stream);
        AuthenticationMethod authenticationMethod =
            new PrivateKeyAuthenticationMethod("ubuntu", privateKeyFile);

        connectionInfo = new ConnectionInfo(
            "my.server.com",
            "ubuntu",
            authenticationMethod);
    }

    return connectionInfo;
}

Then you simply create an SshClient instance and run commands:

public void Connect()
{
    using (var ssh = new SshClient(CreateConnectionInfo()))
    {
        ssh.Connect();
        var command = ssh.CreateCommand("uptime");
        var result = command.Execute();
        Console.Out.WriteLine(result);
        ssh.Disconnect();
    }
}

Here I’m running the ‘uptime’ command which output this when I ran it just now:

14:37:46 up 22 days,  3:59,  0 users,  load average: 0.08, 0.03, 0.05

To transfer a file, just use the ScpClient:

public void GetConfigurationFiles()
{
    using (var scp = new ScpClient(CreateNginxServerConnectionInfo()))
    {
        scp.Connect();

        scp.Download("/etc/nginx/", new DirectoryInfo(@"D:\Temp\ScpDownloadTest"));

        scp.Disconnect();
    }
}

Which grabs all my Nginx configuration and transfers it to a directory tree on my windows machine.
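
Uploading works much the same way. Here's a quick sketch (the paths are made up, and the remote user needs write permission on the target directory) that pushes a single file back to the server:

public void UploadConfigurationFile()
{
    using (var scp = new ScpClient(CreateNginxServerConnectionInfo()))
    {
        scp.Connect();

        // upload one local file to the given remote path
        scp.Upload(new FileInfo(@"D:\Temp\ScpDownloadTest\nginx.conf"), "/etc/nginx/nginx.conf");

        scp.Disconnect();
    }
}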

All in all a very nice little library that’s been working well for me so far. Give it a try if you need to interact with a UNIX-like machine from .NET code.

NuGet Install Is Broken With F#


There’s a very nasty bug when you try to use NuGet to add a package reference to an F# project. It manifests itself when either a different version of the assembly being installed is already in the GAC, or a different version already exists in the output directory.

First let’s reproduce the problem when a version of the assembly already exists in the GAC.

Create a new solution with an F# project.

Choose an assembly that you want to install from NuGet that also exists in the GAC on your machine. For ironic purposes I’m going to choose NuGet.Core for this example.

It’s in my GAC:

D:\>gacutil -l | find "NuGet.Core"
NuGet.Core, Version=1.0.11220.104, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
NuGet.Core, Version=1.6.30117.9648, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL

You can see that the highest version in the GAC is version 1.6.30117.9648

Now let’s install NuGet.Core version 2.5.0 from the official NuGet source:

PM> Install-Package NuGet.Core -Version 2.5.0
Installing 'Nuget.Core 2.5.0'.
Successfully installed 'Nuget.Core 2.5.0'.
Adding 'Nuget.Core 2.5.0' to Mike.NuGetExperiments.FsProject.
Successfully added 'Nuget.Core 2.5.0' to Mike.NuGetExperiments.FsProject.

It correctly creates a packages directory, downloads the NuGet.Core package and creates a packages.config file:

D:\Source\Mike.NuGetExperiments\src>tree /F
D:.
│ Mike.NuGetExperiments.sln

├───Mike.NuGetExperiments.FsProject
│ │ Mike.NuGetExperiments.FsProject.fsproj
│ │ packages.config
│ │ Spike.fs
│ │
│ ├───bin
│ │ └───Debug
│ │
│ └───obj
│ └───Debug

└───packages
│ repositories.config

└───Nuget.Core.2.5.0
│ Nuget.Core.2.5.0.nupkg
│ Nuget.Core.2.5.0.nuspec

└───lib
└───net40-Client
NuGet.Core.dll

But when I look at my fsproj file I see that it has incorrectly referenced the NuGet.Core version (1.6.30117.9648) from the GAC and there is no hint path pointing to the downloaded package.

<Reference Include="NuGet.Core, Version=1.6.30117.9648, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
<Private>True</Private>
</Reference>

Next let’s reproduce the problem when a version of an assembly already exists in the output directory.

This time I’m going to use EasyNetQ as my example DLL. First I’m going to take a recent version of EasyNetQ.dll, 0.10.1.92, and drop it into the project’s output directory (bin\Debug).

Next use NuGet to install an earlier version of the assembly:

Install-Package EasyNetQ -Version 0.9.2.76
Attempting to resolve dependency 'RabbitMQ.Client (= 3.0.2.0)'.
Attempting to resolve dependency 'Newtonsoft.Json (≥ 4.5)'.
Installing 'RabbitMQ.Client 3.0.2'.
Successfully installed 'RabbitMQ.Client 3.0.2'.
Installing 'Newtonsoft.Json 4.5.11'.
Successfully installed 'Newtonsoft.Json 4.5.11'.
Installing 'EasyNetQ 0.9.2.76'.
Successfully installed 'EasyNetQ 0.9.2.76'.
Adding 'RabbitMQ.Client 3.0.2' to Mike.NuGetExperiments.FsProject.
Successfully added 'RabbitMQ.Client 3.0.2' to Mike.NuGetExperiments.FsProject.
Adding 'Newtonsoft.Json 4.5.11' to Mike.NuGetExperiments.FsProject.
Successfully added 'Newtonsoft.Json 4.5.11' to Mike.NuGetExperiments.FsProject.
Adding 'EasyNetQ 0.9.2.76' to Mike.NuGetExperiments.FsProject.
Successfully added 'EasyNetQ 0.9.2.76' to Mike.NuGetExperiments.FsProject.

NuGet reports that everything went according to plan and that EasyNetQ 0.9.2.76 has been successfully added to my project.

Once again the packages directory was successfully created and the correct version of EasyNetQ has been downloaded. The packages.config file also has the correct version of EasyNetQ. I won’t show you the output from ‘tree’ again, it’s much the same as before.

Again, when I look at my fsproj file the version of EasyNetQ is incorrect, it’s 0.10.1.92, and again there’s no hint path:

<Reference Include="EasyNetQ, Version=0.10.1.92, Culture=neutral, PublicKeyToken=null">
<Private>True</Private>
</Reference>

Yup, NuGet install is most definitely broken with F#.

This bug makes using NuGet and F# together an exercise in frustration. Our team has wasted days attempting to get to the bottom of this.

It seems that it’s a well-known problem. Just take a look at this work item, reported over a year ago:

http://nuget.codeplex.com/workitem/2149

After much cursing of NuGet, the problem actually appears to be with the F# project system rather than with NuGet itself:

“F# knows about this behavior and they will release the fix”

Hmm, it hasn’t been fixed yet.

We had a dig around the NuGet code. The interesting piece is this file snippet (from NuGet.VisualStudio.VsProjectSystem):

   1: public virtual void AddReference(string referencePath, Stream stream)
   2: {
   3:     string name = Path.GetFileNameWithoutExtension(referencePath);
   4:     try
   5:     {
   6:         // Get the full path to the reference
   7:         string fullPath = PathUtility.GetAbsolutePath(Root, referencePath);
   8:         string assemblyPath = fullPath;
   9: 
  10:         ...
  11: 
  12:         // Add a reference to the project
  13:         dynamic reference = Project.Object.References.Add(assemblyPath);
  14: 
  15:         ...
  16: 
  17:         TrySetCopyLocal(reference);
  18: 
  19:         // This happens if the assembly appears in any of the search
  20:         // paths that VS uses to locate assembly references. Most commonly, 
  21:         // it happens if this assembly is in the GAC or in the output path.
  22:         if (!reference.Path.Equals(fullPath, StringComparison.OrdinalIgnoreCase))
  23:         {
  24:             // Get the msbuild project for this project
  25:             MsBuildProject buildProject = Project.AsMSBuildProject();
  26: 
  27:             if (buildProject != null)
  28:             {
  29:                 // Get the assembly name of the reference we are trying to add
  30:                 AssemblyName assemblyName = AssemblyName.GetAssemblyName(fullPath);
  31: 
  32:                 // Try to find the item for the assembly name
  33:                 MsBuildProjectItem item = 
  34:                     (from assemblyReferenceNode in buildProject.GetAssemblyReferences()
  35:                      where AssemblyNamesMatch(assemblyName, assemblyReferenceNode.Item2)
  36:                      select assemblyReferenceNode.Item1).FirstOrDefault();
  37: 
  38:                 if (item != null)
  39:                 {
  40:                     // Add the <HintPath> metadata item as a relative path
  41:                     item.SetMetadataValue("HintPath", referencePath);
  42: 
  43:                     // Save the project after we've modified it.
  44:                     Project.Save();
  45:                 }
  46:             }
  47:         }
  48:     }
  49:     catch (Exception e)
  50:     {
  51:         ...
  52:     }
  53: }

On line 13 NuGet calls out to the F# project system and asks it to add a reference to the assembly at the given path. We assume that the F# project system then does the wrong thing by searching for the assembly name anywhere in the GAC or the output directory rather than referencing the explicit assembly NuGet is asking it to reference.

Interestingly, it looks as if the NuGet team have attempted to code a workaround for this bug from line 22 onwards. Could this be why C# projects don’t exhibit this behaviour? Unfortunately the workaround doesn’t work in the F# case. We think it’s because F# doesn’t respect assembly versions and will happily replace any requested assembly with another one so long as it’s got the same simple name. At line 33, no assemblies are found in the fsproj file because the ‘AssemblyNamesMatch’ function does an exact match using all four elements of the full assembly name (simple name, version, culture, and public key token), and of course the assembly that the F# project system has found and added has a different version.

So, come on F# team, pull your finger out and fix the Visual Studio F# project system. In the meantime, in my next post I’ll talk about some of things our team, and especially the excellent Michael Newton (@mavnn) has been doing to try and work around these problems.

Automating Nginx Reverse Proxy Configuration


It’s really nice if you can decouple your external API from the details of application segregation and deployment.

In a previous post I explained some of the benefits of using a reverse proxy. On my current project we’re building a distributed, service oriented architecture that also exposes an HTTP API, and we’re using a reverse proxy to route requests addressed to our API to individual components. We have chosen the excellent Nginx web server to serve as our reverse proxy; it’s fast, reliable and easy to configure. We use it to aggregate multiple services exposing HTTP APIs into a single URL space. So, for example, when you type:

http://api.example.com/product/pinstripe_suit

It gets routed to:

http://10.0.1.101:8001/product/pinstripe_suit

But when you go to:

http://api.example.com/customer/103474783

It gets routed to:

http://10.0.1.104:8003/customer/103474783

To the consumer of the API it appears that they are exploring a single URL space (http://api.example.com/blah/blah), but behind the scenes the different top level segments of the URL route to different back end servers. /product/… routes to 10.0.1.101:8001, but /customer/… routes to 10.0.1.104:8003.

We also want this to be self-configuring. So, say I want to create a new component of the system that records stock levels. Rather than extending an existing component, I want to be able to write a stand-alone executable or service that exposes an HTTP endpoint, have it be automatically deployed to one of the hosts in my cloud infrastructure, and have Nginx automatically route requests addressed to http://api.example.com/stock/whatever to my new component.

We also want to load balance these back end services. We might want to deploy several instances of our new stock API and have Nginx automatically round robin between them.

We call each top level segment ( /stock, /product, /customer ) a claim. A component publishes an ‘AddApiClaim’ message over RabbitMQ when it comes online. This message has three fields: ‘Claim’, ‘ipAddress’, and ‘PortNumber’. We have a special component, ProxyAutomation, that subscribes to these messages and rewrites the Nginx configuration as required. It uses SSH and SCP to log into the Nginx server, transfer the various configuration files, and instruct Nginx to reload its configuration. We use the excellent SSH.NET library to automate this.
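As a sketch of how such a subscriber might look (names and details are illustrative, not our actual code; I’m assuming EasyNetQ for the RabbitMQ subscription and SSH.NET for pushing the regenerated file):

// A sketch only: message and handler shapes are illustrative.
using System.IO;
using System.Text;
using EasyNetQ;
using Renci.SshNet;

public class AddApiClaim
{
    public string Claim { get; set; }       // e.g. "stock"
    public string IpAddress { get; set; }   // e.g. "10.0.0.23"
    public int PortNumber { get; set; }     // e.g. 8001
}

public class ProxyAutomation
{
    public static void Run()
    {
        var bus = RabbitHutch.CreateBus("host=localhost");

        bus.Subscribe<AddApiClaim>("proxy_automation", claim =>
        {
            // Regenerate the upstream file for this claim (simplified here).
            var upstreamConf = string.Format("upstream {0} {{\n    server {1}:{2};\n}}\n",
                claim.Claim, claim.IpAddress, claim.PortNumber);

            using (var scp = new ScpClient("nginx.local", "deploy", "password"))
            {
                scp.Connect();
                scp.Upload(
                    new MemoryStream(Encoding.UTF8.GetBytes(upstreamConf)),
                    string.Format("/etc/nginx/conf.d/api.example.com.conf.d/upstream.{0}.conf", claim.Claim));
            }

            using (var ssh = new SshClient("nginx.local", "deploy", "password"))
            {
                ssh.Connect();
                // Ask Nginx to reload its configuration.
                ssh.RunCommand("sudo nginx -s reload");
            }
        });
    }
}

The real component obviously does more (removals, regenerating the location.*.conf files, and so on), but the shape is the same.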

A really nice thing about Nginx configuration is wildcard includes. Take a look at our top level configuration file:

   1: ...
   2:  
   3: http {
   4:     include       /etc/nginx/mime.types;
   5:     default_type  application/octet-stream;
   6:  
   7:     log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
   8:                       '$status $body_bytes_sent "$http_referer" '
   9:                       '"$http_user_agent" "$http_x_forwarded_for"';
  10:  
  11:     access_log  /var/log/nginx/access.log  main;
  12:  
  13:     sendfile        on;
  14:     keepalive_timeout  65;
  15:  
  16:     include /etc/nginx/conf.d/*.conf;
  17: }

Line 16 says, take any *.conf file in the conf.d directory and add it here.

Inside conf.d is a single file for all api.example.com requests:

   1: include     /etc/nginx/conf.d/api.example.com.conf.d/upstream.*.conf;
   2:  
   3: server {
   4:     listen          80;
   5:     server_name     api.example.com;
   6:  
   7:     include         /etc/nginx/conf.d/api.example.com.conf.d/location.*.conf;
   8:  
   9:     location / {
  10:         root    /usr/share/nginx/api.example.com;
  11:         index   index.html index.htm;
  12:     }
  13: }

This is basically saying listen on port 80 for any requests with a host header ‘api.example.com’.

This has two includes. I’ll come back to the first one, at line 1, later. Line 7 says ‘take any file named location.*.conf in the subdirectory api.example.com.conf.d and add it to the configuration’. Our proxy automation component adds new components (AKA API claims) by dropping new location.*.conf files into this directory. For example, for our stock component it might create a file, ‘location.stock.conf’, like this:

location /stock/ {
    proxy_pass http://stock;
}

This simply tells Nginx to proxy all requests addressed to api.example.com/stock/… to the upstream servers defined at ‘stock’. This is where the other include mentioned above comes in, ‘upstream.*.conf’. The proxy automation component also drops in a file named upstream.stock.conf that looks something like this:

upstream stock {
    server 10.0.0.23:8001;
    server 10.0.0.23:8002;
}

This tells Nginx to round-robin all requests to api.example.com/stock/ to the given sockets. In this example it’s two components on the same machine (10.0.0.23), one on port 8001 and the other on port 8002.

As instances of the stock component get deployed, new entries are added to upstream.stock.conf. Similarly, when components get uninstalled, the entry is removed. When the last entry is removed, the whole file is also deleted.
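A rough sketch of that add/remove logic (illustrative only; in reality the file is pushed to the Nginx box over SCP as described above, rather than written locally):

// A sketch: rebuild upstream.<claim>.conf from the currently registered
// endpoints, and delete the file when the last endpoint goes away.
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;

public static class UpstreamConfigWriter
{
    public static void Write(string claim, IEnumerable<string> endpoints, string confDirectory)
    {
        var path = Path.Combine(confDirectory, string.Format("upstream.{0}.conf", claim));
        var endpointList = endpoints.ToList();

        if (!endpointList.Any())
        {
            // Last instance uninstalled: remove the whole file.
            if (File.Exists(path)) File.Delete(path);
            return;
        }

        var builder = new StringBuilder();
        builder.AppendFormat("upstream {0} {{\n", claim);
        foreach (var endpoint in endpointList) // e.g. "10.0.0.23:8001"
        {
            builder.AppendFormat("    server {0};\n", endpoint);
        }
        builder.Append("}\n");

        File.WriteAllText(path, builder.ToString());
    }
}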

This infrastructure allows us to decouple infrastructure configuration from component deployment. We can scale the application up and down by simply adding new component instances as required. As a component developer, I don’t need to do any proxy configuration, just make sure my component publishes add and remove API claim messages and I’m good to go.


Redis: Very Useful As a Distributed Lock


In a Service Oriented Architecture you sometimes need a distributed lock; an application lock across many servers to serialize access to some constrained resource. I’ve been looking at using Redis, via the excellent ServiceStack.Redis client library, for this.

It really is super simple. Here’s a little F# sample to show it in action:

   1: module Zorrillo.Runtime.ProxyAutomation.Tests.RedisSpike
   2:  
   3: open System
   4: open ServiceStack.Redis
   5:  
   6: let iTakeALock n = 
   7:     async {
   8:         use redis = new RedisClient("redis.local")
   9:         let lock = redis.AcquireLock("integration_test_lock", TimeSpan.FromSeconds(10.0))
  10:         printfn "Acquired lock for %i" n 
  11:         Threading.Thread.Sleep(100)
  12:         printfn "Disposing of lock for %i" n
  13:         lock.Dispose()
  14:     }
  15:  
  16: let ``should be able to save and retrieve from redis`` () =
  17:  
  18:     [for i in [0..9] -> iTakeALock i]
  19:         |> Async.Parallel
  20:         |> Async.RunSynchronously

The iTakeALock function creates an async task that uses the ServiceStack.Redis AcquireLock function. It then pretends to do some work (Thread.Sleep(100)), and then releases the lock (lock.Dispose()).

Running 10 iTakeALocks in parallel (line 16 onwards) gives the following result:

Acquired lock for 2
Disposing of lock for 2
Acquired lock for 6
Disposing of lock for 6
Acquired lock for 0
Disposing of lock for 0
Acquired lock for 7
Disposing of lock for 7
Acquired lock for 9
Disposing of lock for 9
Acquired lock for 5
Disposing of lock for 5
Acquired lock for 3
Disposing of lock for 3
Acquired lock for 8
Disposing of lock for 8
Acquired lock for 1
Disposing of lock for 1
Acquired lock for 4
Disposing of lock for 4

Beautifully serialized access from parallel processes. Very nice.
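For comparison, here’s roughly the same pattern in C# (a sketch; it assumes ServiceStack.Redis’s AcquireLock, which returns an IDisposable that releases the lock when disposed):

// A sketch only: acquire the lock, pretend to do some work, release it.
using System;
using System.Threading;
using System.Threading.Tasks;
using ServiceStack.Redis;

class RedisLockSketch
{
    static void TakeALock(int n)
    {
        using (var redis = new RedisClient("redis.local"))
        using (redis.AcquireLock("integration_test_lock", TimeSpan.FromSeconds(10)))
        {
            Console.WriteLine("Acquired lock for {0}", n);
            Thread.Sleep(100); // pretend to do some work
            Console.WriteLine("Disposing of lock for {0}", n);
        }
    }

    static void Main()
    {
        // Run ten competing workers in parallel; access to the critical
        // section is serialized by the Redis lock.
        var tasks = new Task[10];
        for (var i = 0; i < 10; i++)
        {
            var n = i;
            tasks[i] = Task.Factory.StartNew(() => TakeALock(n));
        }
        Task.WaitAll(tasks);
    }
}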

Guest Post: Working Around F#/NuGet Problems


A first for me. A guest post by my excellent colleague Michael Newton.

Michael normally blogs at http://blog.mavnn.co.uk and works at 15below. He’s the build manager at 15below and has developed various work-arounds for the F# NuGet issues we’ve been experiencing. I’ll hand over and let him explain …

In his first post Mike did an excellent job explaining the bugs we found when trying to add or update NuGet references in fsproj files.

Unfortunately, the confusion doesn’t stop there. It turns out that if you examine the NuGet code, the logic for updating project files is not in NuGet.Core (the shared dll that drives core functionality, like how to unpack a nupkg file) but is re-implemented in each client. This means that you get different results if you are running commands from the command line client than if you are using the Visual Studio plugin or the hosted PowerShell console. The reason for this starts to become obvious once you realise that the command line client has no parameters for specifying which project and/or solution you are working against, whilst that information is either already available to the Visual Studio plugin or supplied via little dropdown boxes in the hosted PowerShell console.

So, between everyday usage, preparing to move some of our project references to NuGet and the needs of our continuous integration we now had the following requirements:

  1. Reliable installation of NuGet references to C#, VB.net and F# projects. Preferably with an option of doing it via the command line to help scripting the project reference to NuGet reference moves.
  2. Reliable upgrades of NuGet references in all project types. Again, a command line option would be useful.
  3. Reliable downgrades of NuGet references in all project types. It’s painful to try a new release/pre-release of a NuGet package across a solution and then discover that you have to manually downgrade all of the projects separately if you decide not to take the new version.
  4. Reliable removal of NuGet references turns out to be a requirement of reliable downgrades.
  5. Sane solution wide management of references. Due to the way project references work, we need an easy way to ensure that all of the projects in a solution use the same version of any particular NuGet reference, and to check that this will not cause any version conflicts. So ideally, upgrade and downgrade commands will run against a solution.

Looking at our requirements in terms of what is already handled by NuGet and what is affected by the bugs that Mike discussed last time, we get:

  1. Very buggy for F# projects; the fix relies on a fix to the underlying F# project type, which is unlikely to arrive before the next version of Visual Studio. Also, command line installation that adds project references is not supported in nuget.exe by design.
  2. Again, broken for F# projects. Otherwise works.
  3. Not supported by design in any of the NuGet clients.
  4. Appears to work.
  5. Not supported by NuGet.

As we looked at the list, it became apparent that we were unlikely to see the F# bug fixed any time soon, and even if we did there would still be several areas of functionality that we would be missing but would help us greatly. The number of options that we needed that the mainline NuGet project does not support by design swung the balance for us from a work-around or bug patch in NuGet’s Visual Studio project handling to a full blown wrapper library.

So, the NuGetPlus project was born. As always, the name is a misnomer. Because naming things is hard. But the idea is to build a NuGet wrapper that provides the functionality above and, as a bonus extra for both us and the F# community, does not exhibit the annoying F# bugs from the previous post. Because the command line exe in NuGetPlus is only a very thin wrapper around the dll, it also allows you to easily call into the process from code and achieve the same results as running the command line program without having to write hundreds of lines of supporting boilerplate. For those of you who have tried to use NuGet.Core directly, you’ll know that actually mimicking the full behaviour of any of the clients is a bit of an exercise in frustration.

It is very much still a work in progress. For example, it respects nuget.config files, but at the moment only makes use of the repository path and source list config options – we haven’t checked if we need to be supporting more. But it covers scenarios 1-4 above nicely, and we’re hoping to add 5 (solution level upgrade, downgrade and checking) fairly shortly. Although it has been developed on work time as functionality we desperately need in house, it is also a fully open source MIT licensed project that we are more than happy to receive pull requests for if there is functionality the community needs.

So whether you’re an F# NuGet user, or you just see the value of the additional functionality above, take it for a spin and let us know what you think.

Brighton ALT NET @ Brighton Digital Festival


 

[BDF_logo_date_flame_cmyk]

 

TL;DR We need you! Present your project at our BDF special event in September.

Brighton Digital Festival is a month long celebration of all things digital throughout September in the lovely city of Brighton. There’s absolutely loads going on, and one would need to take the whole month off work to see everything. Last year was a blast, and this year looks to be even better, with full time festival organisers and funding from the arts council.

Brighton ALT NET is a monthly meet-up for .NET developers. We’ve been going for several years now. Each month we meet up, drink beer, and geek out about the .NET framework and software development in general. For our September event we thought it would be excellent fun to join in with the Digital Festival and do something special. We want to showcase some of the brilliant things developers like you are doing in the local area; a show-and-tell of awesome, if you will. If you’ve got a project you’d like to show off, something you’ve done at work, or in your spare time, get in touch with me. It can be anything from a Netduino mouse-trap to an online community of traffic cone spotters.

Email me at mike-at-suteki.co.uk.

EasyNetQ: Big Breaking Changes in the Advanced Bus


[logo_design_240]

EasyNetQ is my little, easy to use, client API for RabbitMQ. It’s been doing really well recently. As I write this it has 24,653 downloads on NuGet making it by far the most popular high-level RabbitMQ API.

The goal of EasyNetQ is to make working with RabbitMQ as easy as possible. I wanted junior developers to be able to use basic messaging patterns out-of-the-box with just a few lines of code and have EasyNetQ do all the heavy lifting: exchange-binding-queue configuration, error management, connection management, serialization, thread handling; all the things that make working against the low level AMQP C# API, provided by RabbitMQ, such a steep learning curve.

To meet this goal, EasyNetQ has to be a very opinionated library. It has a set way of configuring exchanges, bindings and queues based on the .NET type of your messages. However, right from the first release, many users said that they liked the connection management, thread handling, and error management, but wanted to be able to set up their own broker topology. To support this we introduced the advanced API, an idea stolen shamelessly from Ayende’s RavenDb client.

You access the advanced bus (IAdvancedBus) via the Advanced property on IBus:

var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

Sometimes something can seem like a good idea at the time, and then later you think, “WTF! Why on earth did I do that?” It happens to me all the time. I thought it would be cool if one created the exchange-binding-queue topology and then passed it to the publish and subscribe methods, which would then internally declare the exchanges and queues and do the binding. I implemented a tasty little visitor pattern in my ITopologyVisitor. I optimised for (my) programming fun rather than a simple, obvious, easy to understand API.

I realised a while ago that a more straightforward set of declares on IAdvancedBus would be a far more obvious and intentional design. To this end, I’ve refactored the advanced bus to separate declares from publishing and consuming. I just pushed the changes to NuGet and have also updated the Advanced Bus documentation. Note these are breaking changes, so please be careful if you are upgrading to version 0.12 or later.

Here are some tasters of how it works:

Declare a queue, exchange and binding, and consume raw message bytes:

var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

var queue = advancedBus.QueueDeclare("my_queue");
var exchange = advancedBus.ExchangeDeclare("my_exchange", ExchangeType.Direct);
advancedBus.Bind(exchange, queue, "routing_key");

advancedBus.Consume(queue, (body, properties, info) => Task.Factory.StartNew(() =>
{
    var message = Encoding.UTF8.GetString(body);
    Console.Out.WriteLine("Got message: '{0}'", message);
}));

Note I’ve renamed ‘Subscribe’ to ‘Consume’ to better reflect the underlying AMQP method.

Declare an exchange and publish a message:

var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

var exchange = advancedBus.ExchangeDeclare("my_exchange", ExchangeType.Direct);

using (var channel = advancedBus.OpenPublishChannel())
{
    var body = Encoding.UTF8.GetBytes("Hello World!");
    channel.Publish(exchange, "routing_key", new MessageProperties(), body);
}

You can also delete exchanges, queues and bindings:

var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

// declare some objects
var queue = advancedBus.QueueDeclare("my_queue");
var exchange = advancedBus.ExchangeDeclare("my_exchange", ExchangeType.Direct);
var binding = advancedBus.Bind(exchange, queue, "routing_key");

// and then delete them
advancedBus.BindingDelete(binding);
advancedBus.ExchangeDelete(exchange);
advancedBus.QueueDelete(queue);

advancedBus.Dispose();

I think these changes make for a much better advanced API. Have a look at the documentation for the details.

EasyNetQ: Topic Confusion!

$
0
0

This is a quick post to highlight a common cause of confusion when people play with topics in EasyNetQ.

You can subscribe to a message type with a topic like this:

bus.Subscribe<MyMessage>("id1", myHandler, x => x.WithTopic("X.*"));

Topics are dot separated strings that are matched against the routing key attached to a message at publication. In the code above I’ve said: give me any message of type MyMessage whose topic matches “X.*”. The ‘*’ character is a wildcard, so “X.A” would match, as would “X.Z”, but “Y.A” wouldn’t.

You publish with a topic like this:

using (var publishChannel = bus.OpenPublishChannel())
{
    publishChannel.Publish(myMessage, x => x.WithTopic("X.A"));
}

The confusion occurs when the topic in the subscribe call changes. Maybe you are experimenting with topics by changing the string in the WithTopic( … ) method, or perhaps you are hoping to dynamically change the topic at runtime? Maybe you’ve done several subscribes, each with a different handler and a different topic, but the same subscription Id. Either way, you’ll probably be surprised to find that you still get all the messages matched by previously set topics as well as those matched by the current topic.

In order to explain why this happens, let’s look at how EasyNetQ creates exchanges, queues and bindings when you call these API methods.

If I call the subscribe method above, EasyNetQ will declare a topic exchange named after the message type, a queue named after the type and the subscription id, and bind them with the given topic:

[queue_binding_with_topic]
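In terms of EasyNetQ’s advanced API, that first Subscribe ends up doing something roughly like this (a sketch; the exchange and queue names EasyNetQ actually generates are derived from the message type and subscription id, so the literals here are only illustrative):

var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

// Names are illustrative; EasyNetQ derives the real ones from the
// message type and the subscription id ("id1").
var exchange = advancedBus.ExchangeDeclare("MyMessage", ExchangeType.Topic);
var queue = advancedBus.QueueDeclare("MyMessage_id1");

// The topic string is used as the binding's routing key.
advancedBus.Bind(exchange, queue, "X.*");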

If I change the topic and run the code again like this:

bus.Subscribe<MyMessage>("id1", myHandler, x => x.WithTopic("Y.*"));

EasyNetQ will declare the same queue and exchange. The word ‘declare’ is the key here, it means, “if the object doesn’t exist, create it, otherwise do nothing.” The queue and exchange already exist, so declaring them has no effect. EasyNetQ then binds them with the given routing key. The original binding still exists – we haven’t done anything to remove it – so we now have two bindings with two different topics between the exchange and the queue:

[queue_binding_with_topic2]

This means that any message matching ‘X.*’ or ‘Y.*’ will be routed to the queue, and thus to our consumer.

So, beware when playing with topics!

As an aside, the ability to create multiple bindings is a very powerful feature. It allows you to implement ‘OR’ semantics for routing messages. If this is what you want, you should concatenate multiple WithTopic methods rather than make multiple calls to Subscribe. For example, say we wanted to implement ‘*.B OR Y.*’:

bus.Subscribe<MyMessage>("id4", myHandler, x => x.WithTopic("Y.*").WithTopic("*.B"));

Which would give us the desired result:

[queue_binding_with_topic3]

Happy routing!
