Channel: Code rant

RabbitMQ: AMQP Channel Best Practices


I’ve long been confused about best practices for AMQP channel handling. This post sets out my current thinking, partly in an attempt to elicit feedback.

The conversation between a message broker and a client is two-way. Both the client and the server can initiate ‘communication events’; the client can invoke a method on the server: ‘publish’ or ‘declare’, for example; and the server can invoke a method on the client such as ‘deliver’ or ‘reject’. Because the client needs to receive methods from the server, it needs to keep a connection open for its lifetime. This means that broker connections may last long periods: hours, days, or weeks. Maintaining these connections is expensive, both for the client and the server. In order to have many logical connections without the overhead of many physical TCP/IP connections, AMQP has the concept of a ‘channel’ (confusingly represented by the IModel interface in the .NET client). You can create multiple channels on a single connection and it’s relatively cheap to create and dispose of them.
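
As a minimal sketch of what this looks like with the RabbitMQ .NET client (the host and queue names here are purely illustrative), one physical connection can carry several cheap logical channels:

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())      // one physical TCP/IP connection
using (var consumeChannel = connection.CreateModel())    // cheap logical channel #1
using (var publishChannel = connection.CreateModel())    // cheap logical channel #2
{
    // one channel dedicated to topology/consuming, one dedicated to publishing
    consumeChannel.QueueDeclare("my.queue", true, false, false, null);
    publishChannel.BasicPublish("", "my.queue", null, Encoding.UTF8.GetBytes("hello"));
}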

But what are the recommended rules for handling these channels? Should I create a new one for every operation? Or should I just keep a single one around and execute every method on it?

First some hard and fast rules (note I’m using the RabbitMQ .NET client for reference here, other clients might have different behaviour):

Channels are not thread safe. You should create and use a channel on a single thread. From the .NET client documentation (section 2.10):

“In general, IModel instances should not be used by more than one thread simultaneously: application code should maintain a clear notion of thread ownership for IModel instances.”

Apparently this is not a hard and fast rule with the Java client, just good advice; the Java client serializes all calls to a channel.

An ACK should be invoked on the same channel on which the delivery was received. The delivery tag is scoped to the channel, and an ACK sent on a different channel from the one on which the delivery was received will cause a channel error. This somewhat contradicts the ‘channels are not thread safe’ directive above, since you will probably ACK on a different thread from the one where you invoked basic.consume, but this seems to be acceptable.
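
To make that concrete, here’s a minimal sketch with the raw RabbitMQ .NET client (the connection is assumed to already exist and the queue name is made up); the ACK goes back to the same channel that delivered the message:

// keep this channel open for as long as the consumer is active
var channel = connection.CreateModel();
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, args) =>
{
    // handle the delivery here (possibly on a different thread), then ACK on the
    // same channel the delivery arrived on; the delivery tag is only meaningful there
    channel.BasicAck(args.DeliveryTag, false);
};
channel.BasicConsume("my.queue", false, consumer); // false = manual acknowledgement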

Now some softer suggestions:

It seems neater to create a channel for each consumer. I like thinking of the consumer and channel as a unit: when I create a consumer, I also create a channel for it to consume from. When the consumer goes away, so does the channel and vice-versa.

Do not mix publishing and consuming on the same channel. If you follow the suggestion above, it implies that you have dedicated channels for each consumer. It follows that you should create separate channels for server and client originated methods.

Maintain a long running publish channel. My current implementation of EasyNetQ makes creating channels for publishing the responsibility of the user. I now think this is a mistake. It encourages the pattern of: create a channel, publish, dispose the channel. This pattern doesn’t work with publisher confirms, where you need to keep the channel open at least until you receive an ACK or NACK. And although creating channels is relatively lightweight, there is still some overhead. I now favour the approach of maintaining a single publish channel on a dedicated thread and marshalling publish (and declare) calls to it. This is potentially a bottleneck, but the impressive performance of non-transactional publishing means that I’m not overly concerned about it.
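
The marshalling idea is roughly this (a sketch of the pattern, not EasyNetQ’s actual code; the type and member names are mine):

// A sketch: all AMQP calls funnelled through one channel on one dedicated thread.
public class PublishChannel : IDisposable
{
    private readonly IModel channel;
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();
    private readonly Thread thread;

    public PublishChannel(IConnection connection)
    {
        channel = connection.CreateModel();
        thread = new Thread(() =>
        {
            // the only thread that ever touches the channel
            foreach (var action in queue.GetConsumingEnumerable())
            {
                action();
            }
        }) { Name = "publish thread", IsBackground = true };
        thread.Start();
    }

    public void Publish(string exchange, string routingKey, byte[] body)
    {
        // callable from any thread; the actual publish is marshalled to the publish thread
        queue.Add(() => channel.BasicPublish(exchange, routingKey, null, body));
    }

    public void Dispose()
    {
        queue.CompleteAdding();
        thread.Join();
        channel.Dispose();
    }
}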


EasyNetQ: IConsumerDispatcher



EasyNetQ has always had a single dispatcher thread that runs user message handlers. This means that a slow handler on one queue can cause handlers on other queues to wait. The intention is that one shouldn’t write long running consumers. If you are doing long running IO you should use the SubscribeAsync method and return a task that completes when the long running IO completes.
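
For example, a long running IO handler might look something like this (a sketch: MyMessage and CallSlowWebServiceAsync are made-up names):

bus.SubscribeAsync<MyMessage>("my_subscription_id", message =>
    CallSlowWebServiceAsync(message)       // returns a Task; no dispatcher thread is blocked
        .ContinueWith(task =>
        {
            // runs when the long running IO completes
            Console.WriteLine("Finished processing: {0}", message.Text);
        }));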

A recent change to EasyNetQ has been to make the dispatcher a separate abstraction. This means you can replace it with your own implementation if desired.

IConsumerDispatcher

Inside EasyNetQ the dispatcher receives deliveries from the RabbitMQ C# Client library and places the delivery information on an internal concurrent queue. By default, all consumers share a single internal queue. A single dispatcher thread pulls deliveries from the queue and then asks the consumer to invoke the user message handler.

The consumer dispatcher implementation is straightforward: it maintains a queue of Actions and a thread that takes those actions off the queue and runs them. You could use the same pattern whenever you need to marshal an action onto a single thread. I wrote about this in more general terms here.

public class ConsumerDispatcher : IConsumerDispatcher
{
    private readonly Thread dispatchThread;
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();
    private bool disposed;

    public ConsumerDispatcher(IEasyNetQLogger logger)
    {
        Preconditions.CheckNotNull(logger, "logger");

        dispatchThread = new Thread(_ =>
        {
            try
            {
                while (true)
                {
                    if (disposed) break;

                    queue.Take()();
                }
            }
            catch (InvalidOperationException)
            {
                // InvalidOperationException is thrown when Take is called after
                // queue.CompleteAdding(); this signals that this class is being
                // disposed, so we allow the thread to complete.
            }
            catch (Exception exception)
            {
                logger.ErrorWrite(exception);
            }
        }) { Name = "EasyNetQ consumer dispatch thread" };
        dispatchThread.Start();
    }

    public void QueueAction(Action action)
    {
        Preconditions.CheckNotNull(action, "action");
        queue.Add(action);
    }

    public void Dispose()
    {
        queue.CompleteAdding();
        disposed = true;
    }
}

An implementation of IConsumerDispatcherFactory maintains a single instance of IConsumerDispatcher:

public class ConsumerDispatcherFactory : IConsumerDispatcherFactory
{
    private readonly Lazy<IConsumerDispatcher> dispatcher;

    public ConsumerDispatcherFactory(IEasyNetQLogger logger)
    {
        Preconditions.CheckNotNull(logger, "logger");

        dispatcher = new Lazy<IConsumerDispatcher>(() => new ConsumerDispatcher(logger));
    }

    public IConsumerDispatcher GetConsumerDispatcher()
    {
        return dispatcher.Value;
    }

    public void Dispose()
    {
        if (dispatcher.IsValueCreated)
        {
            dispatcher.Value.Dispose();
        }
    }
}

If you want to use an alternative dispatch strategy, for example you (quite reasonably) want a new dispatch thread for each consumer, you simply implement an alternative IConsumerDispatcherFactory and register it with EasyNetQ like this:

var bus = RabbitHutch.CreateBus("host=localhost", 
x => x.Register<IConsumerDispatcherFactory>(_ => new MyConsumerDispatcherFactory()));
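
For example, a factory that hands out a new dispatcher, and therefore a new dispatch thread, on every call might look something like this (a sketch reusing EasyNetQ’s own ConsumerDispatcher; because it takes a logger, you would resolve IEasyNetQLogger in the registration lambda rather than using a parameterless constructor as above):

public class PerConsumerDispatcherFactory : IConsumerDispatcherFactory
{
    private readonly IEasyNetQLogger logger;
    private readonly ConcurrentBag<IConsumerDispatcher> dispatchers =
        new ConcurrentBag<IConsumerDispatcher>();

    public PerConsumerDispatcherFactory(IEasyNetQLogger logger)
    {
        this.logger = logger;
    }

    public IConsumerDispatcher GetConsumerDispatcher()
    {
        // a new dispatcher (and dispatch thread) per call
        var dispatcher = new ConsumerDispatcher(logger);
        dispatchers.Add(dispatcher);
        return dispatcher;
    }

    public void Dispose()
    {
        foreach (var dispatcher in dispatchers)
        {
            dispatcher.Dispose();
        }
    }
}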

Happy messaging!

Brighton Fuse



Last night I attended the launch of The Brighton Fuse report. This is the result of a two year research project into Brighton’s Creative, Digital and IT (CDIT) sector. It’s one of the most detailed investigations into a tech cluster ever carried out in the UK and makes fascinating reading. I was involved as one of the interviewees, so I’ve got a personal interest in the findings.

The research found 1,485 firms which fit the CDIT label operating in the city. Of these they sampled 485. For me the most surprising results are the size and rate of growth of the CDIT sector. As someone who’s lived and worked in Brighton over the last 18 years, I’ve certainly been aware of it, but had no idea of the scale. The numbers from the survey are very impressive. Brighton is of course famous as a resort town, ‘London on sea’, but the CDIT sector is now equivalent to the tourist industry, with both generating around £700 million in sales. Around 9,000 people work in the sector; that’s 10% of Brighton’s workforce. The rate of growth is phenomenal: 14% annually, and that’s in the context of recession conditions in the UK as a whole. It accounts for all the job growth in Brighton, and makes the city one of the fastest growing in the UK.

Why Brighton? Why should such a dynamic cluster of companies spring up here, rather than in any of the tens of other medium sized cities in the UK? According to the report, it’s not because of any government initiative, or the presence of any large leading companies in the sector, or even because entrepreneurs made a conscious decision to start companies here. No, it simply seems to be that it’s a very attractive place to live. People come here (85% of founders are from outside Brighton) because they want to live here, not for economic opportunity. The kinds of people who are attracted to Brighton also seem to be the kinds of people who have a tendency to start creative digital businesses. It even seems that some of Brighton’s economic disadvantages, such as the lack of large established employers, leave many with little alternative but to become entrepreneurs.

So what makes Brighton so attractive? There are the obvious geographic advantages: London, Europe’s, if not the world’s, capital city, is only an hour’s train ride away; Gatwick, one of London’s three main airports, is just up the A23. The area is truly beautiful: Sussex has a world class landscape, with the South Downs national park rolling down to the white cliffs; and of course there’s the sea. If you are at all interested in history, it’s a constant delight; I can take a 20 minute walk from my house in Lewes, past a Norman castle, the 15th century half-timbered home of a Tudor queen, and an Elizabethan mansion. But it’s the cultural side of Brighton that is probably its main attraction. It has a strong identity as the bohemian capital of the UK, with the largest gay population outside London and the biggest arts festival in England. Almost every weekend there’s some event or festival going on. We have the UK’s only Green MP and the only Green controlled council. It’s also the least religious place in the UK, but conversely has the highest number of Jedi Knights (2.6% of the population, indeed).

To sum up the findings of the report: (warning: personal tongue-in-cheek, not-to-be-taken-seriously interpretation follows)

In order to create a successful CDIT cluster, do the following:

  1. Find somewhere pretty with good transport links and access to London.
  2. Throw in a couple of universities.
  3. Add an arts festival.
  4. Avoid big government or commercial institutions.
  5. Discourage conservatives and Christians.
  6. Make it a playground for bohemians, gays and artists.
  7. Leave to simmer.

My personal anecdotal experience of living in Brighton over the last 18 years chimes nicely with the report’s findings. I arrived in Brighton in 1995, after two years living in Japan, to attend an Information Systems MSc at Brighton University. I immediately fell in love with the place, but after graduating I moved to London for work since there were few programming jobs available in the city. But I couldn’t stay away and came back and bought a flat here in 1997. I commuted for the first year or so, but then got a job as a junior programmer at American Express, one of Brighton’s few large corporate employers. By then the dot-com boom was raging and I left after six months to double my income as a freelancer. I haven’t looked back. Up until around 2007 I rarely found clients in Brighton, but would travel to work at client sites throughout the South East. Since 2007, with the growth of the CDIT sector, the opposite has been the case; I haven’t worked more than half a mile from Brighton station.

Three of my local clients have been exactly the kinds of businesses covered in the report: Cubeworks, a digital agency; Madgex, a product company selling SaaS jobsites; and 15below, my current clients, who build airline integration systems. All started around 10 years ago as small 2, 3, or 4 man operations, and all have grown rapidly since. 15below, for example, now has close to 70 employees and a multi-million pound annual turnover. Cubeworks grew rapidly into a successful agency brand and has since been acquired, and Madgex has grown to around 40 employees. All show huge growth rates and not only provide employment, but also clients for a large pool of freelancers like me.

Despite the recession in the rest of the UK, it’s now an incredibly exciting time to be working as a freelancer in Brighton. There’s a real buzz about the place, and you can’t turn a corner without bumping into somebody who’s about to start a new indie-game company or launch their Kickstarter. Of course there’s a certain amount of pretension and froth that inevitably goes with it, but there’s enough genuine success to feel like we’re in the midst of something rather wonderful.

Download and read the report; I think you’ll be impressed too.

EasyNetQ: Big Breaking Changes in the Publish API


From version 0.15 the way that publish works in EasyNetQ has dramatically changed. Previously the client application was responsible for creating and disposing the AMQP channel for the publication, something like this:

using (var channel = bus.OpenPublishChannel())
{
    channel.Publish(message);
}

There are several problems with this approach.

The practical ones are that it encourages developers to use far more channels than they need to. The codebases that I’ve looked at often have the pattern exactly as it’s given above, even if publish is invoked in a loop. Channel creation is relatively cheap, but it’s not free and frequent channel creation imposes a cost on the broker. However, if a developer tries to keep a channel open for a number of publish calls, they then have to deal with the complex scenario of recovering from a connection loss.

The more conceptual, design oriented problem, is that it fails in terms of EasyNetQ’s mission statement, which is to make building .NET applications with RabbitMQ as easy as possible. With the core (IBus) API, the developer shouldn’t have to be concerned about AMQP specifics like channel handling, the library should do all that for them.

From version 0.15, you don’t need to open a publish channel, simply call the new Publish method directly on the IBus interface:

bus.Publish(message);

Internally EasyNetQ maintains a single channel for all outgoing AMQP calls and marshals all client invocations onto a single internally maintained thread. So while EasyNetQ is thread-safe, all internal calls to the RabbitMQ.Client library are serialised. Consumers haven’t changed and are invoked from a separate thread.

The Request call has also been moved to the IBus API:

bus.Request<TestRequestMessage, TestResponseMessage>(new TestRequestMessage {Text = "Hello World!"},
    response => Console.WriteLine("Got response: " + response.Text));

The change also means that EasyNetQ can take full responsibility for channel reconnection in the event of connection failure and leads to a much nicer publisher confirms implementation which I’ll be blogging about soon.

Happy messaging!

AsmSpy Coloured Output


AsmSpy is a little tool I put together a while back to view assembly reference conflicts. Even though it took just an hour or so to knock together, it’s proved to be one of my more successful open source offerings. Since I initially put it up on GitHub there has been a trickle of nice pull requests. Just today I got an excellent coloured output submission from Mahmoud Samy Abuzied which makes it much easier to see the version conflicts.

Here’s the output from EasyNetQ showing that it’s awkwardly using two different versions of the .NET runtime libraries.

[Screenshot: AsmSpy coloured output for EasyNetQ, showing two versions of the .NET runtime libraries in use]

I’ve also uploaded a build of AsmSpy.exe to Amazon S3 so you now don’t have to clone the repository and build it yourself. Grab it from http://static.mikehadlow.com/AsmSpy.zip

Happy conflicting!

EasyNetQ: Publisher Confirms


Publisher confirms are a RabbitMQ addition to AMQP to guarantee message delivery. You can read all about them here and here. In short they provide an asynchronous confirmation that a publish has successfully reached all the queues that it was routed to.

To turn on publisher confirms with EasyNetQ set the publisherConfirms connection string parameter like this:

var bus = RabbitHutch.CreateBus("host=localhost;publisherConfirms=true");

When you set this flag, EasyNetQ will wait for the confirmation, or a timeout, before returning from the Publish method:

bus.Publish(new MyMessage
    {
        Text = "Hello World!"
    });
// here the publish has been confirmed.

Nice and easy.

There’s a problem though. If I run the above code in a while loop without publisher confirms, I can publish around 4000 messages per second, but with publisher confirms switched on that drops to around 140 per second. Not so good.

With EasyNetQ 0.15 we introduced a new PublishAsync method that returns a Task. The Task completes when the publish is confirmed:

bus.PublishAsync(message).ContinueWith(task =>
    {
        if (task.IsCompleted)
        {
            Console.WriteLine("Publish completed fine.");
        }
        if (task.IsFaulted)
        {
            Console.WriteLine(task.Exception);
        }
    });

Using this code in a while loop gets us back to 4000 messages per second with publisher confirms on.

Happy confirms!

RabbitMQ Request-Response Pattern


If you are programming against a web service, the natural pattern is request-response. It’s always initiated by the client, which then waits for a response from the server. It’s great if the client wants to send some information to a server, or request some information based on some criteria. It’s not so useful if the server wants to initiate the send of some information to the client. There we have to rely on somewhat extended HTTP tricks like long-polling or web-hooks.

With messaging systems, the natural pattern is send-receive. A producer node publishes a message which is then passed to a consuming node. There is no real concept of client or server; a node can be a producer, a consumer, or both. This works very well when one node wants to send some information to another or vice-versa, but isn’t so useful if one node wants to request information from another based on some criteria.

All is not lost though. We can model request-response by having the client node create a reply queue for the response to a query message it sends to the server. The client can set the request message properties’ reply_to field with the reply queue name. The server inspects the reply_to field and publishes the reply to the reply queue via the default exchange, which is then consumed by the client.

[Diagram: the request-response pattern implemented with a reply queue]
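
On the client side, the wiring with the raw RabbitMQ .NET client looks roughly like this (a sketch; the queue names are illustrative and channel and requestBody are assumed to exist already):

// declare a server-named, exclusive reply queue and stamp the request with it
var replyQueueName = channel.QueueDeclare().QueueName;

var properties = channel.CreateBasicProperties();
properties.ReplyTo = replyQueueName;
properties.CorrelationId = Guid.NewGuid().ToString();

channel.BasicPublish("", "request.queue", properties, requestBody);

// the server inspects properties.ReplyTo on the request and publishes its
// response to that queue name via the default ("") exchange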

The implementation is simple on the request side, it looks just like a standard send-receive. But on the reply side we have some choices to make. If you Google for ‘RabbitMQ RPC’, or ‘RabbitMQ request response’, you will find several different opinions concerning the nature of the reply queue.

  • Should there be a reply queue per request, or should the client maintain a single reply queue for multiple requests?
  • Should the reply queue be exclusive, only available to this channel, or not? Note that an exclusive queue will be deleted when the channel is closed, either intentionally, or if there is a network or broker failure that causes the connection to be lost.

Let’s have a look at the pros and cons of these choices.

Exclusive reply queue per request

Here each request creates its own reply queue. The benefit is that it is simple to implement: there is no problem with correlating the response with the request, since each request has its own response consumer. The downside is that if the connection between the client and the broker fails before a response is received, the broker will dispose of any remaining reply queues and the response message will be lost.

The main implementation issue is that we need to clean up any reply queues in the event that a problem with the server means that it never publishes the response.

This pattern has a performance cost because a new queue and consumer has to be created for each request.

Exclusive reply queue per client

Here each client connection maintains a reply queue which many requests can share. This avoids the performance cost of creating a queue and consumer per request, but adds the overhead that the client needs to keep track of the reply queue and match up responses with their respective requests. The standard way of doing this is with a correlation id that is copied by the server from the request to the response.
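
One way to do that matching (a sketch of the idea, not EasyNetQ’s internal code; channel and replyQueueName are assumed to be fields of your requester class, set up elsewhere) is to key pending requests by correlation id:

private readonly ConcurrentDictionary<string, TaskCompletionSource<byte[]>> pending =
    new ConcurrentDictionary<string, TaskCompletionSource<byte[]>>();

public Task<byte[]> Request(byte[] requestBody)
{
    var correlationId = Guid.NewGuid().ToString();
    var tcs = new TaskCompletionSource<byte[]>();
    pending[correlationId] = tcs;

    var properties = channel.CreateBasicProperties();
    properties.ReplyTo = replyQueueName;      // the single, shared reply queue
    properties.CorrelationId = correlationId;
    channel.BasicPublish("", "request.queue", properties, requestBody);

    return tcs.Task;
}

// in the reply queue consumer:
// TaskCompletionSource<byte[]> tcs;
// if (pending.TryRemove(args.BasicProperties.CorrelationId, out tcs))
// {
//     tcs.SetResult(args.Body);
// }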

Once again, there is no problem with deleting the reply queue when the client disconnects because the broker will do this automatically. It does mean that any responses that are in-flight at the time of a disconnection will be lost.

Durable reply queue

Both the options above have the problem that the response message can be lost if the connection between the client and broker goes down while the response is in flight. This is because they use exclusive queues that are deleted by the broker when the connection that owns them is closed.

The natural answer to this is to use a non-exclusive reply queue. However this creates some management overhead. You need some way to name the reply queue and associate it with a particular client. The problem is that it’s difficult for the client to know if any one reply queue belongs to itself, or to another instance. It’s easy to naively create a situation where responses are being delivered to the wrong instance of the client. You will probably wind up manually creating and naming response queues, which removes one of the main benefits of choosing broker based messaging in the first place.

EasyNetQ

For a high-level re-useable library like EasyNetQ the durable reply queue option is out of the question. There is no sensible way of knowing whether a particular instance of the library belongs to a single logical instance of a client application. By ‘logical instance’ I mean an instance that might have been stopped and started, as opposed to two separate instances of the same client.

Instead we have to use exclusive queues and accept the occasional loss of response messages. It is essential to implement a timeout, so that an exception can be raised to the client application in the event of response loss. Ideally the client will catch the exception and re-try the message if appropriate.

Currently EasyNetQ implements the ‘reply queue per request’ pattern, but I’m planning to change it to a ‘reply queue per client’. The overhead of matching up responses to requests is not too onerous, and it is both more efficient and easier to manage.

I’d be very interested in hearing other people’s experiences in implementing request-response with RabbitMQ.

EasyNetQ: Big Breaking Changes to Request-Response


My intensive work on EasyNetQ (our super simple .NET API for RabbitMQ) continues. I’ve been taking lessons learned from nearly two years of production and the fantastic feedback from EasyNetQ’s users, mashing this together, and making lots of changes to both the internals and the API. I know that API changes cause problems for users; they break your application and force you to revisit your code. But the longer term benefits should outweigh the immediate costs as EasyNetQ slowly morphs into a solid, reliable library that really does make working with RabbitMQ as easy as possible.

Changes in version 0.17 are all around the request-response pattern. The initial implementation was very rough with lots of nasty ways that resource use could run away when things went wrong. The lack of timeouts also meant that your application could wait forever when messages got lost. Lastly the API was quite clunky, with call-backs where Tasks are a far better choice. All these problems have been corrected in this version.

API changes

There is now a synchronous Request method. Of course messaging is by nature an asynchronous operation, but sometimes you just want the simplest possible thing and you don’t care about blocking your thread while you wait for a response. Here’s what it looks like:

var response = bus.Request<TestRequestMessage, TestResponseMessage>(request);

The old call-back Request method has been removed. There was no need for it, since RequestAsync, which returns a Task&lt;TResult&gt;, is always a better choice:

var task = bus.RequestAsync<TestRequestMessage, TestResponseMessage>(request);

task.ContinueWith(response =>
{
    Console.WriteLine("Got response: '{0}'", response.Result.Text);
});

Timeouts

Timeouts are an essential ingredient of any distributed system. This probably deserves a blog post of its own, but no matter how resilient you make your architecture, if an important piece simply goes away (like the network for example), you need a circuit breaker. EasyNetQ now has a global timeout that you can configure via the connection string:

var bus = RabbitHutch.CreateBus("host=localhost;timeout=60");

Here we’ve configured the timeout as 60 seconds. The default is 10 seconds. If you make a request but no response is received within the timeout period, a System.TimeoutException will be thrown.

If the connection goes away while a request is in flight, EasyNetQ doesn’t wait for the timeout to fire, but immediately throws an EasyNetQException with a message saying that the connection has been lost. Your application should catch both TimeoutException and EasyNetQException and react appropriately.
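
In code that means wrapping the request in something like this (a sketch; request is the request message instance and the retry policy is your own decision):

try
{
    var response = bus.Request<TestRequestMessage, TestResponseMessage>(request);
    Console.WriteLine("Got response: '{0}'", response.Text);
}
catch (TimeoutException)
{
    // no response arrived within the configured timeout; retry if appropriate
}
catch (EasyNetQException)
{
    // the connection was lost while the request was in flight
}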

Internal Changes

My last blog post was a discussion of the implementation options of request-response with RabbitMQ. As I said there, I now believe that a single exclusive queue for all responses to a client is the best option. Version 0.17 implements this. When you call bus.Request(…) you will see a queue created named easynetq.response.<some guid>. This will last for the lifetime of the current connection.

Happy requesting!


EasyNetQ: Polymorphic Publish and Subscribe


From version 0.18 of EasyNetQ, you can now subscribe to an interface, then publish implementations of that interface.

Let's look at an example. I have an interface IAnimal and two implementations Cat and Dog:

public interface IAnimal
{
    string Name { get; set; }
}

public class Cat : IAnimal
{
    public string Name { get; set; }
    public string Meow { get; set; }
}

public class Dog : IAnimal
{
    public string Name { get; set; }
    public string Bark { get; set; }
}

I can subscribe to IAnimal and receive both Cat and Dog classes, or any other class that implements IAnimal:

bus.Subscribe<IAnimal>("polymorphic_test", @interface =>
    {
        var cat = @interfaceas Cat;
        var dog = @interfaceas Dog;if (cat != null)
        {
            Console.Out.WriteLine("Name = {0}", cat.Name);
            Console.Out.WriteLine("Meow = {0}", cat.Meow);
        }elseif (dog != null)
        {
            Console.Out.WriteLine("Name = {0}", dog.Name);
            Console.Out.WriteLine("Bark = {0}", dog.Bark);
        }else
        {
            Console.Out.WriteLine("message was not a dog or a cat");
        }
    });

Let's publish a cat and a dog:

var cat = new Cat
{
    Name = "Gobbolino",
    Meow = "Purr"
};

var dog = new Dog
{
    Name = "Rover",
    Bark = "Woof"
};

bus.Publish<IAnimal>(cat);
bus.Publish<IAnimal>(dog);

Note that I have to explicitly specify that I am publishing IAnimal. EasyNetQ uses the generic type specified in the Publish and Subscribe methods to route the publications to the subscriptions.

Happy polymorphismising!

EasyNetQ: Changes to Conventions With Version 0.18


TL;DR: Exchange and queue names used to have ‘.’ replaced with ‘_’. From version 0.18 this is no longer the case.

Yesterday I announced version 0.18 of EasyNetQ. The big change was the addition of polymorphic publish and subscribe.

I forgot to mention that there’s a slight change to the conventions that EasyNetQ uses, that might affect you when you upgrade.

EasyNetQ’s publish-subscribe pattern is implemented in AMQP as follows:

  • A topic exchange named after the published type is created.
  • A queue named by concatenating the published type and the subscriber id is created.
  • The exchange is bound to the queue with the wildcard, ‘#’, binding key.

So if you have this code:

bus.Subscribe<MyMessage>("test", MessageHandler);
bus.Publish<MyMessage>(message);

Note that in my case MyMessage is in an assembly named ‘EasyNetQ.Tests’, with a matching namespace.

You will see this in the RabbitMQ management UI:

img title="default_binding" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="default_binding" src="$default_binding5.png" width="715" height="487" />

Previously, the exchange and queue names would have had the ‘.’ replaced with ‘_’: ‘EasyNetQ_Tests_MyMessage:EasyNetQ_tests’.

If you upgrade some services to version 0.18, but leave others at earlier version numbers, they won’t be able to communicate.

EasyNetQ: Consumer Cancellation


Consumer cancellation has been a requested feature of EasyNetQ for a while now. I wasn’t intending to implement it immediately, but a pull request by Daniel White today made me look at the whole issue of consumer cancellation from the point of view of a deleted queue. It led rapidly to a quite straightforward implementation of user cancellations. The wonders of open source software development, and the generosity of people like Daniel, never fail to impress me.

So what is consumer cancellation? It means that you can stop consuming from a queue without having to dispose of the entire IBus instance and close the connection. All the IBus Subscribe methods, and the IAdvancedBus Consume methods, now return an IDisposable. To stop consuming, just call Dispose like this:

var cancelSubscription = bus.Subscribe<MyMessage>("subscriptionId", MessageHandler);
.
.
// sometime later stop consuming
cancelSubscription.Dispose();

Nice :)

EasyNetQ: Multiple Handlers per Consumer


A common feature request for EasyNetQ has been to have some way of implementing a command pipeline pattern. Say you’ve got a component that is emitting commands. In a .NET application each command would most probably be implemented as a separate class. A command might look something like this:

public class AddUser
{
    public string Username { get; private set; }
    public string Email { get; private set; }

    public AddUser(string username, string email)
    {
        Username = username;
        Email = email;
    }
}

Another component might listen for commands and act on them. Previously in EasyNetQ it would have been difficult to implement this pattern because a consumer (Subscriber) was always bound to a given message type. You would have had to use the lower level IAdvancedBus binary message methods and implement your own serialization and dispatch infrastructure.

But now EasyNetQ comes with multiple handlers per consumer out of the box.

From version 0.20 there’s a new overload of the Consume method that provides a fluent way for you to register multiple message type handlers to a single consumer, and thus a single queue.

Here’s an example:

bus = RabbitHutch.CreateBus("host=localhost");

var queue = bus.Advanced.QueueDeclare("multiple_types");

bus.Advanced.Consume(queue, x => x
    .Add<AddUser>((message, info) =>
    {
        Console.WriteLine("Add User {0}", message.Body.Username);
    })
    .Add<DeleteUser>((message, info) =>
    {
        Console.WriteLine("Delete User {0}", message.Body.Username);
    })
);

Now we can publish multiple message types to the same queue:

bus.Advanced.Publish(Exchange.GetDefault(), queue.Name, false, false,
    new Message<AddUser>(new AddUser("Steve Howe", "steve@worlds-best-guitarist.com")));
bus.Advanced.Publish(Exchange.GetDefault(), queue.Name, false, false,
    new Message<DeleteUser>(new DeleteUser("Steve Howe")));

By default, if a matching handler cannot be found for a message, EasyNetQ will throw an exception. You can change this behaviour, and simply ignore messages that do not have a handler, by setting the ThrowOnNoMatchingHandler property to false, like this:

bus.Advanced.Consume(queue, x => x
    .Add<AddUser>((message, info) =>
    {
        Console.WriteLine("Add User {0}", message.Body.Username);
    })
    .Add<DeleteUser>((message, info) =>
    {
        Console.WriteLine("Delete User {0}", message.Body.Username);
    })
    .ThrowOnNoMatchingHandler = false
);

Very soon there will be a send/receive pattern implemented at the IBus level to make this even easier. Watch this space!

Happy commanding!

EasyNetQ: Send Receive Pattern


From version 0.21, EasyNetQ supports a new message pattern: Send/Receive.

Whereas the Publish/Subscribe and Request/Response patterns are location transparent, in that you don't need to specify where the consumer of the message is located, the Send/Receive pattern is specifically designed for communication via a named queue. It also makes no assumptions about the types of message that can be sent via the queue. This means that you can send different types of message via the same queue.

The send/receive pattern is ideal for creating 'command pipelines', where you want a buffered channel to a single command processor.

To send a message, use the Send method on IBus, specifying the name of the queue you wish to send the message to and the message itself:

bus.Send("my.queue", new MyMessage{ Text = "Hello Widgets!" });

To setup a message receiver for a particular message type, use the Receive method on IBus:

bus.Receive<MyMessage>(queue, message => Console.WriteLine("MyMessage: {0}", message.Text));

You can set up multiple receivers for different message types on the same queue. For example, calling Receive again for a different message type sets up receivers for both MyMessage and MyOtherMessage:

bus.Receive<MyOtherMessage>(queue, message => Console.WriteLine("MyOtherMessage: {0}", message.Text));

If a message arrives on a receive queue that doesn't have a matching receiver, EasyNetQ will write the message to the EasyNetQ error queue with an exception saying 'No handler found for message type <the message type>'.
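
Put together, the receiving end of a simple command pipeline might look something like this (a sketch; the queue name is illustrative):

var queue = "command.pipeline.queue";

bus.Receive<MyMessage>(queue, message =>
    Console.WriteLine("MyMessage: {0}", message.Text));

bus.Receive<MyOtherMessage>(queue, message =>
    Console.WriteLine("MyOtherMessage: {0}", message.Text));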

EasyNetQ: Non-Generic Subscribe



Since its very first version, EasyNetQ has allowed you to subscribe to a message simply by providing a handler for a given message type (and a subscription id, but that’s another discussion).

bus.Subscribe<MyMessage>("subscriptionId", x => Console.WriteLine(x.Text));

But what do you do if you are discovering the message type at runtime? For example, you might have some system which loads add-ins and wants to subscribe to message types on their behalf. Before today (version 0.24) you would have had to employ some nasty reflection mojo to deal with this scenario, but now EasyNetQ provides you with non-generic subscription methods out-of-the-box.

Just add this using statement:

using EasyNetQ.NonGeneric;

Which provides you with these extension methods:

public static IDisposable Subscribe(this IBus bus, Type messageType, string subscriptionId, Action<object> onMessage)

public static IDisposable Subscribe(
    this IBus bus,
    Type messageType,
    string subscriptionId,
    Action<object> onMessage,
    Action<ISubscriptionConfiguration> configure)

public static IDisposable SubscribeAsync(
    this IBus bus,
    Type messageType,
    string subscriptionId,
    Func<object, Task> onMessage)

public static IDisposable SubscribeAsync(
    this IBus bus,
    Type messageType,
    string subscriptionId,
    Func<object, Task> onMessage,
    Action<ISubscriptionConfiguration> configure)

They are just like the Subscribe methods on IBus except that you provide a Type argument instead of the generic argument, and the message handler is an Action<object> instead of an Action<T>.

Here’s an example of the non-generic subscribe in use:

var messageType = typeof(MyMessage);
bus.Subscribe(messageType, "my_subscriptionId", x =>
{
    var message = (MyMessage)x;
    Console.Out.WriteLine("Got Message: {0}", message.Text);
});

Very useful I think, and one of the most commonly asked for features.
 
Happy runtime discovery!

EasyNetQ’s Minimalist DI Container


I’ve been a long time fan of IoC (or DI) containers ever since I first discovered Castle Windsor back in 2007. I’ve used Windsor in every major project I’ve been involved in since then, and if you’d gone to a developer event in the late noughties, you may well have encountered me speaking about Windsor. Indeed, Seb Lambla had the cheek to call me ‘Windsor man’. I shall label him ‘Rest-a-man’ in revenge.

When I started working on EasyNetQ in 2011, I initially thought it would be a very simple lightweight wrapper around the RabbitMQ.Client library. The initial versions were very procedural, ‘just get it done’, script-ish code burps. But as it turned into a more serious library, I started to factor the different pieces into more SRPish classes using dependency injection. At this point I was doing poor-man’s dependency injection, with an initial piece of wire-up code in the RabbitHutch class.

As EasyNetQ started to gain some traction outside 15below, I began to get questions like, “How do I use a different serializer?” and “I don’t like your error handling strategy, how can I implement my own?” I was also starting to get quite bored of maintaining the ever-growing wire up code. The obvious solution was to introduce a DI container, but I was very reluctant to take a dependency on something like Windsor. Writing a library is a very different proposition from writing an application. Every dependency you introduce is a dependency that your user also has to take. Imagine you are happily using AutoFac and suddenly Castle.Windsor appears in your code base, or even worse, you are an up-to-date Windsor user, but EasyNetQ insists on installing an old version of Windsor alongside. Nasty. Windsor is an amazing library, some of its capabilities are quite magical, but I didn’t need any of these advanced features in EasyNetQ. In fact I could be highly constrained in my DI container requirements:

  • I only needed singleton instances.
  • The lifetime of all DI provided components matches the lifetime of the main IBus interface.
  • I didn’t need the container to manage component disposal.
  • I didn’t need open generic type registration.
  • I was happy to explicitly register all my components, so I didn’t need convention based registration.
  • I only needed constructor injection.
  • I can guarantee that a component implementation will only have a single constructor.

With this highly simplified list of requirements, I realised that I could write a very simple DI container in just a few lines of code (currently 113, as it turns out).

My super simple container only provides two registration methods. The first takes a service interface type and an instance creation function:

IServiceRegister Register<TService>(Func<IServiceProvider, TService> serviceCreator) where TService : class;

The second takes a service type and an implementation type:

IServiceRegister Register<TService, TImplementation>()
    where TService : class
    where TImplementation : class, TService;

These are both defined in an IServiceRegister interface. There is also a single Resolve method in an IServiceProvider interface:

TService Resolve<TService>() where TService : class;

You can see all 113 lines of the implementation in the DefaultServiceProvider class. As you can see, it’s not at all complicated: just three dictionaries, one holding service factories, another holding registrations, and the last holding instances. Each registration simply adds a record to the appropriate dictionary. When Resolve is called, a bit of reflection looks up the implementation type’s constructor and invokes it, recursively calling Resolve for each constructor argument. If the service is provided by a service factory, it is invoked instead of the constructor.
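
The heart of that resolution step looks roughly like this (a simplified sketch, not the actual DefaultServiceProvider code; factories, registrations and instances stand in for the three dictionaries mentioned above):

public object Resolve(Type serviceType)
{
    object instance;
    if (instances.TryGetValue(serviceType, out instance))
    {
        return instance;                              // singleton: created once, then reused
    }

    Func<object> factory;
    if (factories.TryGetValue(serviceType, out factory))
    {
        instance = factory();                         // registered with a creation function
    }
    else
    {
        var implementationType = registrations[serviceType];
        var constructor = implementationType.GetConstructors().Single();
        var arguments = constructor.GetParameters()
            .Select(p => Resolve(p.ParameterType))    // recursively resolve each dependency
            .ToArray();
        instance = constructor.Invoke(arguments);
    }

    instances[serviceType] = instance;
    return instance;
}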

I didn’t have any worries about performance. The registration and resolve code is only called once when a new instance of IBus is created. EasyNetQ is designed to be instantiated at application start-up and for that instance to last the lifetime of the application. For 90% of applications, you should only need a single IBus instance.

EasyNetQ’s components are registered in a ComponentRegistration class. This provides an opportunity for the user to register services ahead of the default registration, and since the first to register wins, it provides an easy path for users to replace default service implementations with their own. Here’s an example of replacing EasyNetQ’s default console logger with a custom logger:

var logger = new MyLogger();
var bus = RabbitHutch.CreateBus(connectionString, x => x.Register<IEasyNetQLogger>(_ => logger));

You can replace pretty much any of the internals of EasyNetQ in this way: the logger, the serializer, the error handling strategy, the dispatcher threading model … Check out the ComponentRegistration class to see the whole list.

Of course the magic of DI containers, the first rule of their use, and the thing that some people find extraordinarily hard to grok, is that I only need one call to Resolve in the entire EasyNetQ code base at line 146 in the RabbitHutch class:

return serviceProvider.Resolve<IBus>();

IoC/DI containers are often seen as very heavyweight beasts that only enterprise architecture astronauts could love, but nothing could be more wrong. A simple implementation only needs a few lines of code, but provides your application, or library, with considerable flexibility.

EasyNetQ is open source under the (very flexible) MIT licence. Feel free to cut-n-paste my DefaultServiceProvider class for your own use.


EasyNetQ: Replace the Internal DI Container



EasyNetQ is made up of a collection of independent components. Internally it uses a tiny DI (IoC) container called DefaultServiceProvider. If you look at the code for the static RabbitHutch class that you use to create instances of the core IBus interface, you will see that it simply creates a new DefaultServiceProvider, registers all of EasyNetQ’s components, and then calls container.Resolve&lt;IBus&gt;(), creating a new instance of IBus with its tree of dependencies supplied by the container:

public static IBus CreateBus(IConnectionConfiguration connectionConfiguration, Action<IServiceRegister> registerServices)
{
    Preconditions.CheckNotNull(connectionConfiguration, "connectionConfiguration");
    Preconditions.CheckNotNull(registerServices, "registerServices");

    var container = createContainerInternal();
    if (container == null)
    {
        throw new EasyNetQException("Could not create container. " +
            "Have you called SetContainerFactory(...) with a function that returns null?");
    }

    registerServices(container);
    container.Register(_ => connectionConfiguration);
    ComponentRegistration.RegisterServices(container);

    return container.Resolve<IBus>();
}

But what if you want EasyNetQ to use your container of choice? From version 0.25 the RabbitHutch class provides a static method, SetContainerFactory, that allows you to register an alternative container factory method that provides whatever implementation of EasyNetQ.IContainer that you care to supply.

In this example we are using the Castle Windsor IoC container:

// register our alternative container factory
RabbitHutch.SetContainerFactory(() =>
{
    // create an instance of Windsor
    var windsorContainer = new WindsorContainer();

    // wrap it in our implementation of EasyNetQ.IContainer
    return new WindsorContainerWrapper(windsorContainer);
});

// now we can create an IBus instance, but it's resolved from
// Windsor, rather than EasyNetQ's default service provider.
var bus = RabbitHutch.CreateBus("host=localhost");

Here is how I implemented WindsorContainerWrapper:

public class WindsorContainerWrapper : IContainer, IDisposable
{
    private readonly IWindsorContainer windsorContainer;

    public WindsorContainerWrapper(IWindsorContainer windsorContainer)
    {
        this.windsorContainer = windsorContainer;
    }

    public TService Resolve<TService>() where TService : class
    {
        return windsorContainer.Resolve<TService>();
    }

    public IServiceRegister Register<TService>(System.Func<IServiceProvider, TService> serviceCreator)
        where TService : class
    {
        windsorContainer.Register(
            Component.For<TService>().UsingFactoryMethod(() => serviceCreator(this)).LifeStyle.Singleton
        );
        return this;
    }

    public IServiceRegister Register<TService, TImplementation>()
        where TService : class
        where TImplementation : class, TService
    {
        windsorContainer.Register(
            Component.For<TService>().ImplementedBy<TImplementation>().LifeStyle.Singleton
        );
        return this;
    }

    public void Dispose()
    {
        windsorContainer.Dispose();
    }
}

Note that all EasyNetQ services should be registered as singletons.

It’s important that you dispose of Windsor correctly. EasyNetQ doesn’t provide a Dispose method on IContainer, but you can access the container via the advanced bus (yes, this is new too), and dispose of Windsor that way:

((WindsorContainerWrapper)bus.Advanced.Container).Dispose();
bus.Dispose();

Happy containerisation!

Are Your Programmers Working Hard, Or Are They Lazy?


When people are doing a physical task, it’s easy to assess how hard they are working. You can see the physical movement, the sweat. You also see the result of their work: the brick wall rising, the hole in the ground getting bigger. Recognising and rewarding hard work is a pretty fundamental human instinct; it is one of the reasons we find endurance sports so fascinating. This instinctive appreciation of physical hard work is a problem when it comes to managing creative-technical employees. Effective knowledge workers often don’t look like they are working very hard.

Back in 2004, I was a junior developer working in a large team on a cable TV company’s billing and provisioning system. Like all large systems it was made up of a number of relatively independent components, with different individuals or small teams looking after them. The analogue TV and digital TV provisioning systems were almost entirely separate, with a different team looking after each.

The analogue TV team had decided to base their system around an early version of Microsoft BizTalk. There were four of our guys and a team from Microsoft developing it and running it in production. They all appeared to work really hard. They would often be seen working into the night and at weekends. Everyone would drop what they were doing to help with production issues, often crowding around a single guy at a desk, offering suggestions about what could be wrong, or how to fix something. There was constant activity, and anyone could see, just by looking, that not only did everyone pull together as a team, but they were all working really, really hard.

The digital TV provisioning team was very different. The code had been mostly written by a single guy, let’s call him Dave. I was a junior maintenance developer on the team. Initially I had a great deal of trouble understanding the code. There wasn’t one long procedure somewhere where all the stuff happened; instead there were lots of small classes and methods with just a few lines of code. Several of my colleagues complained that Dave made things overcomplicated. But Dave took me under his wing and suggested that I read a few books on object oriented programming. He taught me about design patterns, the SOLID principles, and unit testing. Soon the code started to make sense, and the more I worked on it the more I came to appreciate its elegant design. It didn’t go wrong in production, just hummed away doing its job. It was relatively easy to make changes to the code too, so implementing new features was often quite painless. The unit tests meant that few bugs made it into production.

The result of all this was that it didn’t look like we were working very hard at all. I went home at 5.30pm, I never worked weekends, we didn’t spend hours crowded around each other’s desks throwing out guesses about what could be wrong with some failing production system. From the outside it must have looked like we’d been given a far easier task than the analogue TV guys. In truth, the requirements were very similar, we just had better designed and implemented software, and better supporting infrastructure, especially the unit tests.

Management announced that they were going to give out pay rises based on performance. When it was my turn to talk to the boss, he explained that it was only fair that the pay increases went to the people who worked really hard, and that our team just didn’t seem to care so much about the company, not compared to the heroes who gave up their evenings and weekends.

The cable company was a rare laboratory, you could observe a direct comparison between the effects of good and bad software design and team behaviour. Most organisations don’t provide such a comparison. It’s very hard to tell if that guy sweating away, working late nights and weekends, constantly fire-fighting, is showing great commitment to making a really really complex system work, or is just failing. Unless you can afford to have two or more competing teams solving the same problem, and c’mon, who would do that, you will never know. Conversely, what about the guy sitting in the corner who works 9 to 5, and seems to spend a lot of time reading the internet? Is he just very proficient at writing stable reliable code, or is his job just easier than everyone else’s? To the casual observer, the first chap is working really hard, the second one isn’t. Hard work is good, laziness is bad, surely?

I would submit that the appearance of hard work is often an indication of failure. Software development often isn’t done well in a pressurised, interrupt driven, environment. It’s often not a good idea to work long hours. Sometimes the best way of solving a difficult problem is to stop thinking about it, go for a walk, or even better, get a good night’s sleep and let your subconscious solve it. One of my favourite books is A Mathematician’s Apology by G. H. Hardy, one of the leading British mathematicians of the 20th century. In it he describes his daily routine: four hours work in the morning followed by an afternoon of watching cricket. He says that it’s pointless and unproductive to do hard mental work for more than four hours a day.

To managers I would say, judge people by results, by working software, not by how hard they appear to be working. Counter-intuitively, it may be better not to sit with your developers; you may get a better idea of their output unaffected by conventional/intuitive indicators. Remote working is especially beneficial; you will have to measure your employees by their output, rather than the lazier option of watching them sitting at their desks 8 hours a day thumping away at an IDE, or ‘helpfully’ crowding around each other’s desks offering ‘useful’ suggestions.

How to Run a Successful Open Source Project


A couple of months ago I attended BarCamp Brighton, an open conference at Brighton University. Everyone is encouraged to present a session, and as I don’t need much excuse to talk to a room full of people I thought I’d do an unrehearsed talk on running an open source project based on my experiences with EasyNetQ. I spent about half an hour going through what EasyNetQ is, its history, and how I organise things, to a room of six people. I know, fame at last! The really interesting bit came during the questions when I got into a discussion with a chap who worked for the Mozilla foundation. He asked what makes a successful open source project. Of course ‘success’ is very subjective. Success for me means that some people are interested enough in EasyNetQ to download it from NuGet and commit time to sending me bug reports and pull requests, for other people success might be defined as a project so popular that they can make a living from supporting it. Given the former, lower bar, here are some of the things that we came up with.

  • It has to do something useful. Obvious, yes. The classic ‘scratching an itch’ reason for starting an OSS project usually means that it’s useful to you. How successful your project will be depends on how common your problem is.
  • A clear mission. If you have a clear mission, it’s easy for people to understand the problem you are trying to solve. EasyNetQ’s mission is to provide the simplest possible API for RabbitMQ on .NET. People know what the intention is, and it provides a guide to EasyNetQ’s design.
  • Easy to install. If your environment provides a package manager, make sure your project provides a package for it. Otherwise make sure you provide a simple installer and clear instructions. Don’t insist that people build your project from source if you can avoid it. EasyNetQ installs from NuGet in a few seconds.
  • A good ‘first 20 minutes’ experience. Make your project super easy to get started with. Have a quick start guide to help users get something working as soon as possible. You can get EasyNetQ up and running with two console apps publishing and subscribing within 10 minutes. Hmm, I need to get that quick start guide done though :p
  • Documentation! Clear, simple to follow, documentation is really helpful to your users. Keep it up to date. A wiki that anyone can update is ideal. I use the GitHub wiki for EasyNetQ’s documentation. If you can register a domain name and have a nice homepage with a short pithy introduction to your project, so much the better.
  • A forum for communicating with your users. I use a Google group. But there are many other options. Make sure you answer people’s questions. Be nice (see below).
  • Let people know what’s happening. A blog, like this, is an ideal place to write about the latest project news. Let people know about developments and your plans for the future.
  • Release early, release often. Long feedback loops kill software. That’s true of any project, open or closed, but much more so for open source. The sooner you get your code in front of people, the sooner you find out when it isn’t working. Have a continuous build process that allows people to be able to quickly install the very latest version. You can split out ‘stable’ from ‘development’ releases if you want. I don’t bother, every commit to EasyNetQ’s GitHub repository is immediately published to NuGet, but that will probably change as the project matures.
  • Have a versioning policy and explain it. You can see EasyNetQ’s here.
  • Use GitHub to host your source code. Yes, yes, I know there are alternatives, but GitHub has made hosting and contributing to OSS projects much easier. It has become so ubiquitous that it’s the obvious choice. Everyone knows how it works.
  • Be nice! It’s easy to get frustrated with weird questions on the mailing list, or strange pull requests, but always remember to be polite and helpful. Be especially nice to people who send you bug reports and pull requests, they are helping your project for no gain. Be thankful.

I’m sure there are loads of other things that I’ll think of as soon as I hit ‘publish’ on this post, so don’t think of this as comprehensive. I’d love to hear any more suggestions in the comments.

Of course the main problem most people have to overcome with running an OSS project is finding the time. I’ve been extraordinarily fortunate that 15below have sponsored EasyNetQ. If you use it and like it, don’t forget to thank them.

The Geek Christmas Quiz 2013


This is the second year of my Geek Christmas Quiz. Six sections of the ultimate geek questions. The office competition was won by Toby Carter with a score of 46. See if you can do better.

Last year’s quiz is here.

What does this acronym stand for? (one point per correct answer)
    1. AWS
    2. SQL
    3. NATO
    4. GNU
    5. SCSI
    6. HTML
    7. HTTP
    8. ReST
    9. NASA
    10. RAF

Name that computer (one point for the model, another for the company/inventor)

[Image: nine computers to identify]

Name that programming language (one point for each correct answer)

1.    
10 PRINT "HELLO WORLD"
20 GOTO 10 

2.    
while(true)
{
    Console.WriteLine("Hello World");
} 

3.    
for(;;) {
    printf("Hello World");
} 

4.    
let rec hello () =
    printfn "Hello World"
    hello() 

5.    
while true
    writeln ("Hello World") 

6.    
WHILE 1 = 1 PRINT "Hello World" 

7.    
while /bin/true; do echo -n 'Hello World!'; done; 

8.    
(while true (print "hello world")) 

9.    
let hello = do putStr "Hello World"; hello 

Science fiction (one point for each correct answer)

    1. "These are not the droids you are looking for" (what film?)
    2. "Open the pod bay doors please HAL" (what film, or book?)
    3. Who wrote Rendezvous with Rama?
    4. What spaceship uses dilithium crystals
    5. What does the acronym CREW in Iain Banks' Culture novels mean when referring to weapons?
    6. Name 3 futurama characters
    7. Who wrote the 3 laws of robotics?
    8. What is the first law of robotics?
    9. Who is the leader of the Daleks?
    10. Directive? (which film?)

Science (one point for each correct answer)

    1. 1 degree fahrenheit is a different size to 1 degree centigrade which means they must cross at some point. At what temperature are both scales the same?
    2. When and where was the first cell in your body created?
    3. What is the device which blends air and fuel in an internal combustion engine called?
    4. What was the name of the first hydrogen (thermonuclear) bomb test?
    5. What is special about Sirius, the Dog Star?
    6. What year did the last man land on the moon?
    7. Who invented the jet engine?
    8. In trigonometry what is calculated by dividing the adjacent over the hypotenuse?
    9. Which part of the Earth lies between the outer core and the crust?
    10. Where in the body are alveoli to be found?

Name that cartoon character (one point for each correct answer)

[Image: nine cartoon characters to identify]

The Geek Christmas Quiz 2013: The Answers!


Here are the answers to this year’s Geek Christmas Quiz.

What does this acronym stand for? (one point per correct answer)
    1. AWS – Amazon Web Services
    2. SQL – Structured Query Language
    3. NATO – North Atlantic Treaty Organization
    4. GNU – GNU's Not Unix
    5. SCSI – Small Computer System Interface
    6. HTML – Hyper Text Markup Language
    7. HTTP – Hyper Text Transfer Protocol
    8. ReST – Representational State Transfer
    9. NASA – National Aeronautics and Space Administration
    10. RAF – Royal Air Force

Name that computer (one point for the model, another for the company/inventor)

NeXT Cube / NeXT; Apple II / Apple; Difference Engine / Babbage
IBM PC / IBM; Macintosh / Apple; ZX Spectrum / Sinclair
System 360 / IBM; Colossus / Tommy Flowers; PlayStation / Sony

Name that programming language (one point for each correct answer)

  1. BASIC
  2. C#
  3. C
  4. F#
  5. Python
  6. T-SQL
  7. Bash
  8. Lisp/Scheme/Clojure
  9. Haskell

Science fiction (one point for each correct answer)

    1. "These are not the droids you are looking for" (what film?) – Start Wars IV
    2. "Open the pod bay doors please HAL" (what film, or book?) – 2001 A Space Odyssey
    3. Who wrote Rendezvous with Rama? – Arthur C Clarke
    4. What spaceship uses dilithium crystals – USS Enterprise
    5. What does the acronym CREW in Iain Banks' Culture novels mean when referring to weapons? - Coherent Radiation Emission Weapon
    6. Name 3 futurama characters - http://en.wikipedia.org/wiki/List_of_Futurama_characters
    7. Who wrote the 3 laws of robotics? – Isaac Asimov
    8. What is the first law of robotics? – “A robot may not injure a human being or, through inaction, allow a human being to come to harm”
    9. Who is the leader of the Daleks? - Davros
    10. Directive? (which film?) – Wall-E

Science (one point for each correct answer)

    1. 1 degree fahrenheit is a different size to 1 degree centigrade which means they must cross at some point. At what temperature are both scales the same? - -40.
    2. When and where was the first cell in your body created? - The egg that became you was formed inside your mother when she was a fetus inside your grandmother’s womb.
    3. What is the device which blends air and fuel in an internal combustion engine called? – Carburettor.
    4. What was the name of the first hydrogen (thermonuclear) bomb test? – Mike.
    5. What is special about Sirius, the Dog Star? – It is the brightest star in the night sky.
    6. What year did the last man land on the moon? – 1972 (Apollo 17)
    7. Who invented the jet engine? – Frank Whittle.
    8. In trigonometry what is calculated by dividing the adjacent by the hypotenuse? - Cosine
    9. Which part of the Earth lies between the outer core and the crust? – The mantle.
    10. Where in the body are alveoli to be found? – The lungs.

Name that cartoon character (one point for each correct answer)

Lisa (Simpsons); Iron Man; Fone Bone
Dennis the Menace; Tom & Jerry; Kiki (Kiki’s Delivery Service)
Pointy Haired Boss; Professor Calculus; Dan Dare