An Action Cache

Do you ever find yourself in a loop calling a method that expects an Action or a Func as an argument? Here’s an example from an EasyNetQ test method where I’m doing just that:

[Test, Explicit("Needs a Rabbit instance on localhost to work")]
public void Should_be_able_to_do_simple_request_response_lots()
{
    for (int i = 0; i < 1000; i++)
    {
        var request = new TestRequestMessage { Text = "Hello from the client! " + i.ToString() };
        bus.Request<TestRequestMessage, TestResponseMessage>(request, response =>
            Console.WriteLine("Got response: '{0}'", response.Text));
    }

    Thread.Sleep(1000);
}

My initial naive implementation of IBus.Request set up a new response subscription each time Request was called. Obviously this is inefficient. It would be much nicer if I could identify when Request is called more than once with the same callback and re-use the subscription.

The question I had was: how can I uniquely identify each callback? It turns out that action.Method.GetHashCode() reliably identifies a unique action. I can demonstrate this with the following code:

public class UniquelyIdentifyDelegate
{
    readonly IDictionary<int, Action> actionCache = new Dictionary<int, Action>();

    public void DemonstrateActionCache()
    {
        for (var i = 0; i < 3; i++)
        {
            RunAction(() => Console.Out.WriteLine("Hello from A {0}", i));
            RunAction(() => Console.Out.WriteLine("Hello from B {0}", i));

            Console.Out.WriteLine("");
        }
    }

    public void RunAction(Action action)
    {
        Console.Out.WriteLine("Method = {0}, Cache Size = {1}", action.Method.GetHashCode(), actionCache.Count);
        if (!actionCache.ContainsKey(action.Method.GetHashCode()))
        {
            actionCache.Add(action.Method.GetHashCode(), action);
        }

        var actionFromCache = actionCache[action.Method.GetHashCode()];

        actionFromCache();
    }
}


Here, I’m creating an action cache keyed on the action method’s hashcode. Then I’m calling RunAction a few times with two distinct action delegates. Note that they also close over a variable, i, from the outer scope.

Running DemonstrateActionCache() outputs the expected result:

Method = 59022676, Cache Size = 0
Hello from A 0
Method = 62968415, Cache Size = 1
Hello from B 0

Method = 59022676, Cache Size = 2
Hello from A 1
Method = 62968415, Cache Size = 2
Hello from B 1

Method = 59022676, Cache Size = 2
Hello from A 2
Method = 62968415, Cache Size = 2
Hello from B 2

Rather nice I think :)


How to stop System.Uri un-escaping forward slash characters

Sometimes you want to construct a URI that has an escaped forward slash. For example, the RabbitMQ Management API requires that you encode the default rabbit VirtualHost ‘/’ as ‘%2f’. Here is the URL to get the details of a queue:

http://192.168.1.4:55672/api/queues/%2f/EasyNetQ_Default_Error_Queue

But if I try to use WebRequest or WebClient, the ‘%2f’ is un-escaped to a ‘/’, so the URL becomes:

http://192.168.1.4:55672/api/queues///EasyNetQ_Default_Error_Queue

And I get a 404 not found back :(

Both WebRequest and WebClient use System.Uri internally. It’s easy to demonstrate this behaviour with the following code:

var uri = new Uri(url);
Console.Out.WriteLine("uri = {0}", uri.PathAndQuery);
// outputs /api/queues///EasyNetQ_Default_Error_Queue

A bit of digging in the System.Uri code (thanks to the excellent ReSharper 6.0) and help from this Stack Overflow question shows that it's possible to reset some flags and stop this behaviour. Here's my LeaveDotsAndSlashesEscaped method (it's .NET 4.0 specific):

private void LeaveDotsAndSlashesEscaped()
{
    var getSyntaxMethod =
        typeof (UriParser).GetMethod("GetSyntax", BindingFlags.Static | BindingFlags.NonPublic);
    if (getSyntaxMethod == null)
    {
        throw new MissingMethodException("UriParser", "GetSyntax");
    }

    var uriParser = getSyntaxMethod.Invoke(null, new object[] { "http" });

    var setUpdatableFlagsMethod =
        uriParser.GetType().GetMethod("SetUpdatableFlags", BindingFlags.Instance | BindingFlags.NonPublic);
    if (setUpdatableFlagsMethod == null)
    {
        throw new MissingMethodException("UriParser", "SetUpdatableFlags");
    }

    setUpdatableFlagsMethod.Invoke(uriParser, new object[] {0});
}

The usual caveats of poking into system assemblies with reflection apply. Don’t expect this to work with any other version of .NET than 4.0.
 
Now if we re-run our test…
 
LeaveDotsAndSlashesEscaped();
const string url = "http://192.168.1.4:55672/api/queues/%2f/EasyNetQ_Default_Error_Queue";
var uri = new Uri(url);
Console.Out.WriteLine("uri = {0}", uri.PathAndQuery);
// outputs /api/queues/%2f/EasyNetQ_Default_Error_Queue

Voila!
 
If you have a Web.config or App.config file you can also try this configuration setting, which should do the same thing (from this SO question) but I haven’t tried it personally:
 
<uri>
  <schemeSettings>
    <add name="http" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
  </schemeSettings>
</uri>

Restarting RabbitMQ With Running EasyNetQ Clients

Here’s a screenshot of one of my tests of EasyNetQ (my easy-to-use .NET API for RabbitMQ). I’m running two publishing applications (top) and two subscribing applications (bottom), all publishing and subscribing to the same queue. We’re getting a throughput of around 5,000 messages per second on my local machine. Once they’re all humming along nicely, I bounce the RabbitMQ service. As you can see, some messages get logged, and once the RabbitMQ service comes back, things recover nicely and all the applications continue publishing and subscribing as before. I think that’s pretty sweet :)

[Image: screenshot of two publishers and two consumers recovering after a RabbitMQ restart]

The publisher and subscriber console apps are both in the EasyNetQ repository on GitHub:

EasyNetQ.Tests.Performance.Producer
EasyNetQ.Tests.Performance.Consumer

Why Write a .NET API For RabbitMQ?

Anyone who reads this blog knows that my current focus is writing a .NET API for RabbitMQ which I’ve named EasyNetQ. This is being paid for by my excellent clients 15Below who build high volume messaging solutions for the airline industry. EasyNetQ is a core part of their strategy moving forwards, so it’s going to get plenty of real-world use.

One question I’ve been asked quite a lot is: why? Why am I building a .NET API when one is already available, the C# AMQP client from RabbitHQ? Think of AMQP as the HTTP of messaging. It’s a relatively low-level protocol. You typically wouldn’t build a web application directly against a low-level HTTP API such as System.Net.WebRequest; instead you would use a higher-level toolkit such as WCF or ASP.NET MVC. Think of EasyNetQ as the ASP.NET MVC of AMQP.

AMQP is designed to be cross platform and language agnostic. It is also designed to flexibly support a wide range of messaging patterns based on the Exchange/Binding/Queue model. It’s great having this flexibility, but with flexibility comes complexity. It means that you will need to write a significant amount of code in order to implement a RabbitMQ client. Typically this code would include:

  • Implement messaging patterns such as Publish/Subscribe or Request/Response. Although, to be fair, the .NET client does provide some support here.
  • Implement a routing strategy. How will you design your exchange-queue bindings, and how will you route messages between producers and consumers?
  • Implement message serialization/deserialization. How will you convert the binary representation of messages in AMQP to something your programming language understands?
  • Implement a consumer thread for subscriptions. You will need a dedicated consumer loop waiting for messages you have subscribed to. How will you deal with multiple subscribers, or transient subscribers, like those waiting for responses from a request?
  • Implement a versioning strategy for your messages. What happens when your message schema needs to change in response to business requirements?
  • Implement subscriber reconnection. If the connection is disrupted or the RabbitMQ server bounces, how do you detect it and make sure all your subscriptions are rebuilt?
  • Understand and implement quality-of-service settings. What settings do you need to make to ensure that you have a reliable client?
  • Implement an error handling strategy. What should your client do if it receives a malformed message, or if an unexpected exception is thrown?
  • Implement monitoring tools. How will you monitor your client applications so that you are alerted if there are any problems?

With EasyNetQ, you get all of this out of the box. You lose some of the flexibility in exchange for a model based on .NET types for routing, but it saves an awful lot of code.
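
For a sense of the difference in scale, here's a minimal sketch of the EasyNetQ programming model (my sketch, using the IPublishChannel API described later in this archive; MyMessage and the subscription id are illustrative):

// An illustrative message type; any plain .NET class will do.
public class MyMessage
{
    public string Text { get; set; }
}

// Then, somewhere in your application:
var bus = RabbitHutch.CreateBus("host=localhost");

// Subscribe: EasyNetQ declares the exchange, queue and binding, and runs the
// consumer loop, serialization and reconnection for you.
bus.Subscribe<MyMessage>("my_subscription", msg =>
    Console.WriteLine("Received: {0}", msg.Text));

// Publish: routing is based on the .NET message type.
using (var publishChannel = bus.OpenPublishChannel())
{
    publishChannel.Publish(new MyMessage { Text = "Hello!" });
}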

Thoughts on Windows 8 (part 2)

Back in June I wrote some thoughts on Windows 8 after the initial announcement. Now that we’ve got more details from the Build conference, I thought I’d do a little update.

Microsoft have climbed down and made a significant concession to WPF/Silverlight/.NET devs. Gone is the previous message that Metro applications will only be written in HTML/Javascript; developers can now choose which technology they want to use on the new platform. There still seems to be a bias towards HTML/Javascript judging by the number of sessions on each, however, and it seems like MS would prefer developers to go down the HTML/Javascript route. How much this double-headed personality affects the development experience is yet to be seen.

Somebody high up in MS must have banged some heads together to get the Windows and Developer Divisions talking to each other. They'd become two opposing camps after the fallout from the failed Vista WinFX experiment. Now Windows is forced to support XAML/.NET and Dev-Division arm-wrestled into supporting the HTML/Javascript model in Visual Studio and Blend. Once again the depth of this rapprochement will be the deciding factor when it comes to getting a consistent message across to us developers.

The message is still clear though: Javascript has won the latest round of the language wars. Whatever you think about it as a language, it's becoming as ubiquitous as C. But .NET developers are going to have to be dragged kicking and screaming to this party.

The big question is still 'will it work?' Will Windows 8 be enough to get MS back into the game? There are two main problems I can see:

1. Microsoft has a strategic problem. They make money by selling operating systems. Their two main competitors, Google and Apple, don't. Will they be able to make Windows 8 financially attractive to tablet manufacturers when they can get Android licence-free and they are competing on price with the dominant iPad? Having said that, Android has been struggling on tablets, so there's still an opportunity for Microsoft to gain traction on that form factor.

2. Windows 8 is a hybrid. There's the traditional Windows mouse-and-keyboard UI that they have to keep, and there's the new Metro UI that they want everyone to develop for. What's the experience going to be like for a tablet user when they end up in Windows classic, or for the desktop user when they switch to Metro? The development experience for the two UIs is going to be very different too, almost like developing for two different platforms. It will be interesting to see if Microsoft sells a business version of Windows 8 without Metro, or indeed a tablet version without the classic UI.

Despite having raised all these questions, I do think Microsoft have a workable strategy for Windows 8, and it's going to be an exciting time over the next few years to see how it pans out. There is still a window (sorry) of opportunity in the tablet form factor for Microsoft to challenge Apple if they can get this right. Let's hope they can.

Some Thoughts On Service Oriented Architecture

I’ve been writing a high-level ‘architectural vision’ document for my current clients. I thought it might be nice to republish bits of it here. This is the section that makes the justification for a service-oriented architecture based on messaging.

I’ve taken out anything client-specific.

SOA is one of those snake-oil consultancy terms that seem to mean different things depending on who you talk to. My own views on SOA have been formed from three main sources. Firstly there's the bible on SOA, Hohpe & Woolf's Enterprise Integration Patterns. This really is essential reading for anyone attempting to get systems to work together. Next there's the work of Udi Dahan. He's the author of the excellent NServiceBus messaging framework for .NET. I've attended his course, and his blog is a fantastic mine of knowledge on all things SOA. Lastly there is the excellent series of blog posts by Bill Poole on SOA. Just check JBOWS is Bad for a taster.

So what is SOA? Software has a complexity problem. There appears to be a geometric relationship between the complexity of a monolithic system and its stability and maintainability. So a system that is twice as complex as another one will be maybe four times more expensive to maintain and a quarter as stable. There is also the human and organisational side of software. Individual teams tend to build or buy their own solutions. The organisation then needs to find ways for these disparate systems to share information and workflow. Small single purpose systems are always a better choice than large monolithic ones. If we build our systems as components, we can build and maintain them independently. SOA is a set of design patterns that guide us in building and integrating these mini-application pieces.

What is a component?

A component in SOA terms is very different from a component in Object-Oriented terms. A component in OO terms is a logical component that is not independently compiled and deployed. A component or ‘service’ in SOA is a physical component that is an independently built and deployed stand-alone application. It is usually designed as a Windows service or possibly as a web service or an executable. It may or may not have a UI, but it will have some mechanism to communicate with other services.

Each component/service should have the following characteristics:

  • Encapsulated. The service should not share its internal state with the outside world. The most common way people break this rule is by having services share a single database. The service should only change state in response to input from its public interface.
  • Contract. The service should have a clear contract with the outside world. Typically this will be a web-service API and/or the message types that it publishes and consumes.
  • Single Purpose. A component should have one job within the system as a whole. Rendering PDFs for example.
  • Context Free. A component should not be dependent on other components; it should only depend on contracts. For example, if my business process component relies on getting a flight manifest from somewhere, it should simply be able to publish a flight manifest request; it shouldn't expect a specific flight manifest service to be present (see the contract sketch after this list).
  • Independently deployable. I should be able to deploy the component/service independently.
  • Independently testable. It should be possible to test the service in isolation without having other services in the system running.
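
To make the 'Context Free' point concrete, here is a minimal sketch of what such a contract might look like. These types are hypothetical illustrations, not product code:

// Hypothetical contract types. A workflow component depends only on these
// message classes; it never references a concrete flight manifest service.
public class FlightManifestRequest
{
    public string FlightNumber { get; set; }
    public DateTime DepartureDate { get; set; }
}

public class FlightManifestResponse
{
    public string FlightNumber { get; set; }
    public IList<string> PassengerNames { get; set; }
}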

How do components communicate?

Now that we’ve defined the characteristics of a component in our SOA, the next challenge is to select the technology and patterns that they use to communicate with each other. We can list the things that we want from our communication technology:

  • Components should only be able to communicate via a well defined contract - logically-decoupled. They should not be able to dip into each other’s internal state.
  • Components should not have to be configured to communicate with specific endpoints – decoupled configuration. It should be possible to deploy a new service without having to reconfigure the services that it talks to.
  • The communication technology should support ‘temporal-decoupling’. This means that all the services do not have to be running at the same time for the system to work.
  • The communication should be low latency. There should be a minimal delay between a component sending a message and the consumer receiving it.
  • The communication should be based on open standards.

Let’s consider how some common communication technologies meet these criteria…

                          Logically   Decoupled        Temporal    Low       Open
                          decoupled   configuration    decoupling  latency   standards
File Transfer             yes         yes              yes         no        yes
Shared Database           no          yes              yes         no        no
RPC (remoting, COM+)      yes         no               no          yes       no
SOAP Web Services         yes         no               no          yes       yes
Message Queue (MSMQ)      yes         no               yes         yes       no
Pub/Sub Messaging (AMQP)  yes         yes              yes         yes       yes

From the table we can see that there's only one technology that ticks all our boxes, and that is pub/sub messaging based on an open standard like AMQP. AMQP is a bit like HTTP for messaging, an open wire-level protocol for brokers and their clients. The leading AMQP implementation is RabbitMQ, and this is the technology we've chosen as our core messaging platform.

There is a caveat however. For communicating with external clients and their systems, building endpoints with open-standards outweighs all other considerations. Now AMQP is indeed an open standard, but it’s a relatively new one that’s only supported by a handful of products. To be confident that we can interoperate with a wide variety of 3rd party systems over the internet we need a ubiquitous technology, so for all external communication we will use HTTP based web-services.

So, AMQP internally, HTTP externally.

Some Thoughts On Service Oriented Architecture (Part 2)

I’ve been writing a high-level ‘architectural vision’ document for my current clients. I thought it might be nice to republish bits of it here. This is part 2. The first part is here.

My client has a core product that is heavily customised for each customer. In this post we look at the different kinds of components that make up this architecture: how some are common services that make up the core product, and how others might be bespoke pieces for a particular customer. We also examine the difference between workflow, services and endpoints.

It is very important that we make a clear distinction between components that we write as part of the product and bespoke components that we write for a particular customer. We should not put customer specific code into product components and we should not replicate common product code in customer specific pieces.

Because we are favouring small single-purpose components over large multi-purpose monolithic applications, it should be easy for us to differentiate between product and customer pieces.

There are three main kinds of components that make up a working system. Services, workflow and endpoints. The diagram below shows how they communicate via EasyNetQ, our open-source infrastructure layer. The green parts are product pieces. The blue parts are bespoke customer specific pieces.

[Diagram: services, workflow and endpoints communicating via EasyNetQ; green parts are product pieces, blue parts are bespoke customer-specific pieces]

Services

Services are components that implement a piece of the core product. An example of a service is a component called Renderer that takes templates and data and does a kind of mail-merge. Because Renderer is a service it should never contain any client specific code. Of course customer requirements might mean that enhancements need to be made to Renderer, but these enhancements should always be done with the understanding that Renderer is part of the product. We should be able to deploy the enhanced Renderer to all our customers without the enhancement affecting them.

Services (in fact all components) should maintain their own state using a service specific database. This database should not be shared with other services. The service should communicate via EasyNetQ with other services and not use a shared database as a back-channel. In the case of an updated Renderer, templates would be stored in Renderer’s local database. Any new or updated templates would arrive as messages via EasyNetQ. Each data item to be rendered would also arrive as a message, and once the data has been rendered, the document should also be published via EasyNetQ.

The core point here is that each service should have a clear API, defined by the message types that it subscribes to and publishes. We should be able to fully exercise a component via messages independently of other services. Because the service’s database is only used by the service, we should be able to flexibly modify its schema in response to changing requirements, without having to worry about the impact that will have on other parts of the system.

It’s important that services do not implement workflow. As we’ll see in the next section, a core feature of this architecture is that workflow and services are separate. Renderer, for example, should not make decisions about what happens to a document after it is rendered, or implement batching logic. These are separate concerns.

Workflow

Workflow components are customer specific components that describe what happens in response to a specific business trigger. They also implement customer specific business rules. For an airline, an example would be a workflow that is triggered by a flight event, say a delay. When the component receives the delay message, it might first retrieve the manifest for the flight by sending a request to a manifest service, then render a message telling each passenger about the delay by sending a render request to Renderer, then finally send that message by email by publishing an email request. It would typically implement business rules describing when a delay is considered important, etc.
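
As a rough sketch, the core of such a saga might look like this (the message types, the 30-minute rule and the subscription id are all illustrative assumptions, not product code):

// A hypothetical delay saga. Everything enters and leaves as messages;
// the saga itself holds only the customer-specific rules.
bus.Subscribe<FlightDelayedMessage>("delay_saga", delay =>
{
    if (delay.DelayMinutes < 30) return; // customer-specific business rule

    using (var channel = bus.OpenPublishChannel())
    {
        channel.Request<FlightManifestRequest, FlightManifestResponse>(
            new FlightManifestRequest { FlightNumber = delay.FlightNumber },
            manifest =>
            {
                // next: publish a render request per passenger, then an email
                // request per rendered document (omitted for brevity).
            });
    }
});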

By separating workflow from services, we can flexibly implement customer requirements by creating custom workflows without having to customise our services. We can deliver bespoke customer solutions on a common product platform.

We call these workflow pieces ‘sagas’, a commonly used term in the industry for a long-running business process. Because sagas all need a common infrastructure for hosting, EasyNetQ includes a ‘SagaHost’. SagaHost is a Windows service that hosts sagas, just like it says on the box. This means that the sagas themselves are written as simple assemblies that can be xcopy deployed.

Sagas will usually require a database to store their state. Once again, this should be saga specific and not a database shared by other services. However, a single customer workflow might well consist of several distinct sagas; since it makes sense for these to be thought of as a unit, they may well share a single database.

Endpoints

Endpoints are components that communicate with the outside world. They are a bridge between our internal AMQP messaging infrastructure and our external HTTP API. The only way into and out of our product should be via this API. We want to be able to integrate with diverse customer systems, but these integration pieces should be implemented as bridges between the customer system and our official API, rather than as bespoke pieces that publish or subscribe directly to the message bus.

Endpoints come in two flavours: externally triggered and internally triggered. An externally triggered endpoint is where communication is initiated by the customer. An example of this would be a flight event. These components are best implemented as web services that simply wait to be called and then publish an appropriate message using EasyNetQ.

An internally triggered endpoint is where communication is triggered by an internal event. An example of this would be the completion of a workflow with the final step being an update of a customer system. The API would be implemented as a Windows Service that subscribes to the update message using EasyNetQ and implements an HTTP client that makes a web service request to a configured endpoint.
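
As an illustration, an externally triggered endpoint might look something like this sketch (FlightEventMessage and the bus wiring are my assumptions, not product code):

// A hypothetical externally triggered endpoint: a web service that bridges
// an inbound HTTP call onto the internal AMQP bus via EasyNetQ.
public class FlightEventHandler : IHttpHandler
{
    // assumed to be created once at application start-up
    private static readonly IBus bus = RabbitHutch.CreateBus("host=localhost");

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var message = new FlightEventMessage
        {
            FlightNumber = context.Request["flightNumber"],
            EventType = context.Request["eventType"]
        };

        using (var channel = bus.OpenPublishChannel())
        {
            channel.Publish(message);
        }

        context.Response.Write("OK");
    }
}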

The Importance of Testability

A core requirement for any component is that it should be testable. It should be possible to test services and workflow (sagas) simply by sending them messages and checking that they respond with the correct messages. Because ‘back-channel’ communication, especially via shared databases, is not allowed, we can treat these components as black boxes that always respond with the same output to the same input.

Endpoints are slightly more complicated to test. It should be possible to send an externally triggered endpoint a request and watch for the message that’s published. An internally triggered endpoint should make a web request when it receives the correct message.

Developers should provide automated tests to QA for any modification to a component.

A Useful Linq Extension Method: Intersperse

Have you ever had a case where you want to insert a constant item in-between each element of a list of items? For example, if I’ve got a list of numbers:

1,2,3,4,5

And I want to insert 0 between each one to get:

1,0,2,0,3,0,4,0,5

Meet my friend Intersperse (stolen from Haskell, if you hadn’t guessed):

var items = new[] {1, 2, 3, 4, 5}.Intersperse(0).ToArray();

foreach (var item in items)
{
    Console.Write(item);
}

This code outputs:

102030405

And here is the Intersperse extension method:

public static IEnumerable<T> Intersperse<T>(this IEnumerable<T> items, T separator)
{
    var first = true;
    foreach (var item in items)
    {
        if (first) first = false;
        else
        {
            yield return separator;
        }
        yield return item;
    }
}

Some interesting uses. System.String is, of course, a sequence of char (it implements IEnumerable<char>), so you can do this:

var doubleSpaced = new string("hello world".Intersperse(' ').ToArray());
Console.WriteLine(doubleSpaced);
// outputs 'h e l l o   w o r l d'

Or how about this perennial problem, creating a comma separated list:

var strings = new[] {"one", "two", "three"};
var commaSeparated = string.Concat(strings.Intersperse(",").ToArray());
Console.WriteLine(commaSeparated);
// outputs one,two,three
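
(For this particular case string.Join(",", strings) gets you there directly, of course; Intersperse earns its keep when the items aren't already strings, or when you need the interspersed sequence itself.)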


The Database As Queue Anti-Pattern

“When all you have is a hammer, every problem looks like a nail.”

When all you know is SQL Server, it’s tempting to try and solve every problem with a relational database, but often it’s not the best tool for the job. Today I want to introduce you to an excellent example of this, the ‘Database As Queue’ anti-pattern. It’s a common specimen that you’ll find in the wild in many enterprise development shops.

It works like this: Usually there’s a piece of data that needs to be processed through some workflow, often by a number of different applications or processes.

[Diagram: several processes passing work through a shared database table with a status column]

There’s usually a table with some data and a status column. Each process is designed to process data at a particular status. Process 1 in this example inserts records with the status ‘New’, Process 2 picks up ‘New’ records and updates the status to ‘P2 Complete’, Process 3 picks up ‘P2 Complete’ records… you get the picture.

When I say ‘picks up’, what I mean is that each process sits there polling the database at some interval running a select statement something like this:

SELECT * FROM MyWorkflow WHERE Status = 'New'

Why is it an anti-pattern?

Firstly, there’s the polling. Either you have a short interval and hammer your database into submission, or you have a long interval and even the fastest route through your system is the sum of all the (long) intervals.
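For example, with three polling stages each on a 30-second interval, a record can take up to 90 seconds to pass through the workflow even when the system is otherwise idle.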

Secondly there’s the fact that SQL Server is very efficient at insertions, or updates, or queries, but rarely all three on the same table. To make the polling queries perform, you need to put an index on the status column, but indexes make for slow inserts. Locks can become a major problem.

Thirdly there’s the issue of how you clear records once the workflow is complete. If you don’t clear the table down at intervals, it will keep growing until it starts to hit the performance of your queries and eventually becomes a serious operations problem. The problem is, deletes are inefficient on SQL Server.

Fourthly, sharing a database between applications (or services) is a bad thing. It’s just too tempting to put amorphous shared state in there and before you know it you’ll have a hugely coupled monster.

So what should I do?

Simple, use the right tool for the job: this scenario is crying out for a messaging system. It solves all the problems described above; no more polling, efficient message delivery, no need to clear completed messages from queues, and no shared state.
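
As a rough sketch of the same hand-off re-cast as messaging (here using EasyNetQ; the message types and the work method are hypothetical):

// A hypothetical completion event published by Process 2.
public class P2CompleteMessage
{
    public int RecordId { get; set; }
}

// Process 2 no longer polls a status column: it reacts to each record as it
// arrives and hands off to Process 3 by publishing its own completion event.
bus.Subscribe<NewRecordMessage>("process_2", message =>
{
    DoProcessTwoWork(message.RecordId); // hypothetical work method

    using (var channel = bus.OpenPublishChannel())
    {
        channel.Publish(new P2CompleteMessage { RecordId = message.RecordId });
    }
});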

EasyNetQ: Topic Based Routing

EasyNetQ is my simple .NET API for RabbitMQ. I’ve just added a new feature, Topic based routing. It’s very cool.

Topic based routing is a RabbitMQ feature that allows a subscriber to filter messages based on multiple criteria. A topic is a list of words, delimited by dots, that is published along with the message. Examples would be "stock.usd.nyse", "book.uk.london" or "a.b.c". The words can be anything you like, but would usually be some attributes of the message. The topic string has a limit of 255 characters.

To publish with a topic, simply use the overloaded Publish method:

bus.Publish("X.A", message);

Subscribers can filter messages by specifying a topic to match to. These can include the wildcard characters:

* (star) to match to exactly one word.

# (hash) to match to zero or more words.

So a message that is published with the topic "X.A.2" would match "#", "X.#", "*.A.*" but not "X.B.*" or "A". To subscribe with a topic, use the overloaded Subscribe method:

bus.Subscribe("my_id", "X.*", handler);

A warning: two separate subscriptions with the same subscriptionId but different topic strings probably will not have the effect you are expecting. A subscriptionId effectively identifies an individual AMQP queue. Two subscribers with the same subscriptionId will both connect to the same queue and both will add their own topic binding. So, for example, if you do this:

bus.Subscribe("my_id", "X.*", handler);
bus.Subscribe("my_id", "*.B", handler);

Each handler will be round-robined with any messages that match "X.*" or "*.B".

Now, if you want to match on multiple topics ("X.*" OR "*.B") you can use another overload of the Subscribe method that takes multiple topics, like this:

bus.Subscribe("my_id", new[]{"X.*", "*.B"}, handler);

There are topic overloads for the SubscribeAsync method that work in exactly the same way.
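
For instance, something along these lines (my sketch, assuming the topic overload mirrors Subscribe):

bus.SubscribeAsync<MyMessage>("my_id", "X.*", msg =>
    Task.Factory.StartNew(() => Console.WriteLine("Got: {0}", msg.Text)));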

Help! How Do I Detect When a Client Thread Exits?

Here’s an interesting library writer’s dilemma. In my library (in my case EasyNetQ) I’m assigning thread local resources. So when a client creates a new thread and then calls certain methods on my library new resources get created. In the case of EasyNetQ a new channel to the RabbitMQ server is created when the client calls ‘Publish’ on a new thread. I want to be able to detect when the client thread exits so that I can clean up the resources (channels).

The only way of doing this I’ve come up with is to create a new ‘watcher’ thread that simply blocks on a Join call to the client thread. Here’s a simple demonstration:

First my ‘library’. It grabs the client thread and then creates a new thread which blocks on ‘Join’:

public class Library
{
    public void StartSomething()
    {
        Console.WriteLine("Library says: StartSomething called");

        var clientThread = Thread.CurrentThread;
        var exitMonitorThread = new Thread(() =>
        {
            clientThread.Join();
            Console.WriteLine("Library says: Client thread exited");
        });

        exitMonitorThread.Start();
    }
}

Here’s a client that uses my library. It creates a new thread and then calls my library’s StartSomething method:

public class Client
{
    private readonly Library library;

    public Client(Library library)
    {
        this.library = library;
    }

    public void DoWorkInAThread()
    {
        var thread = new Thread(() =>
        {
            library.StartSomething();
            Thread.Sleep(10);
            Console.WriteLine("Client thread says: I'm done");
        });
        thread.Start();
    }
}

When I run the client like this:

var client = new Client(new Library());

client.DoWorkInAThread();

// give the client thread time to complete
Thread.Sleep(100);

I get this output:

Library says: StartSomething called
Client thread says: I'm done
Library says: Client thread exited

The thing is, I really don’t like the idea of all these blocked watcher threads hanging around. Is there a better way of doing this?

How Do I Detect When A Client Thread Exits? Make The Client Tell You.

Yesterday I asked the LazyWeb for some help: how does library code detect when a client thread that has called one of its methods exits? There followed a very useful discussion on Stack Overflow. I’ve been persuaded by Martin James’ arguments that however one tries to do it, there will always be scenarios where the result won’t be optimal. The answer is for the client to tell the library explicitly when it’s finished its work.

“'it just works' solutions just don't <g>. Windows, Linux, every OS I've ever used has a 'token', 'handle' or other such object as the solution to maintaining state across calls. Using implicit thread-local data and polling for thread termination etc. will suck the life out of your library”

OK, so I buy the argument, but what’s the best way to change the API so that the client can indicate that it’s finished its work?

First alternative.

Provide a method that returns a worker that implements IDisposable and make it clear in the documentation that you should not share workers between threads. Here's the modified library:

public class Library
{
    public LibraryWorker GetLibraryWorker()
    {
        return new LibraryWorker();
    }
}

public class LibraryWorker : IDisposable
{
    public void StartSomething()
    {
        Console.WriteLine("Library says: StartSomething called");
    }

    public void Dispose()
    {
        Console.WriteLine("Library says: I can clean up");
    }
}

The client is now a little more complicated:

public class Client
{
    private readonly Library library;

    public Client(Library library)
    {
        this.library = library;
    }

    public void DoWorkInAThread()
    {
        var thread = new Thread(() =>
        {
            using (var worker = library.GetLibraryWorker())
            {
                worker.StartSomething();
                Console.WriteLine("Client thread says: I'm done");
            }
        });
        thread.Start();
    }
}

The main problem with this change is that it's a breaking change for the API. Existing clients will have to be re-written. Now that's not such a bad thing; it would mean revisiting them and making sure they are cleaning up correctly.

Non-breaking second alternative.

The API provides a way for the client to declare 'work scope'. Once the scope completes, the library can clean up. The library provides a WorkScope that implements IDisposable, but unlike the first alternative above, the StartSomething method stays on the Library class:

public class Library
{
    public WorkScope GetWorkScope()
    {
        return new WorkScope();
    }

    public void StartSomething()
    {
        Console.WriteLine("Library says: StartSomething called");
    }
}

public class WorkScope : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Library says: I can clean up");
    }
}

The client simply puts the StartSomething call in a WorkScope...

public class Client
{
    private readonly Library library;

    public Client(Library library)
    {
        this.library = library;
    }

    public void DoWorkInAThread()
    {
        var thread = new Thread(() =>
        {
            using (library.GetWorkScope())
            {
                library.StartSomething();
                Console.WriteLine("Client thread says: I'm done");
            }
        });
        thread.Start();
    }
}

I like this less than the first alternative because it doesn't force the library user to think about scope. What do you think?

The Configuration Complexity Clock

When I was a young coder, just starting out in the big scary world of enterprise software, an older, far more experienced chap gave me a stern warning about hard coding values in my software. “They will have to change at some point, and you don’t want to recompile and redeploy your application just to change the VAT rate.” I took this advice to heart and soon every value that my application needed was loaded from a huge .ini file. I still think it’s good advice, but be warned, like most things in software, it’s good advice up to a point. Beyond that point lies pain.

Let me introduce you to my ‘Configuration Complexity Clock’.

[Image: the Configuration Complexity Clock]

This clock tells a story. We start at midnight, 12 o’clock, with a simple new requirement which we quickly code up as a little application. It’s not expected to last very long, just a stop-gap in some larger strategic scheme, so we’ve hard-coded all the application’s values. Months pass, the application becomes widely used, but there’s a problem, some of the business values change, so we find ourselves rebuilding and re-deploying it just to change a few numbers. This is obviously wrong. The solution is simple, we’ll move those values out into a configuration file, maybe some appsettings in our App.config. Now we’re at 2 on the clock.

Time passes and our application is now somewhat entrenched in our organisation. The business continues to evolve and as it does, more values are moved to our configuration file. Now appsettings are no longer sufficient; we have groups of values and hierarchies of values. If we’re good, by now we will have moved our configuration into a dedicated XML schema that gets de-serialized into a configuration model. If we’re not so good we might have shoe-horned repeated and multi-dimensional values into some strange tilde and pipe separated strings. Now we’re at 4 or 5 on the clock.

More time passes, the irritating ‘chief software architect’ has been sacked and our little application is now core to our organisation. The business rules become more complex and so does our configuration. In fact there’s now a considerable learning curve before a new hire can successfully carry out a deployment. One of our new hires is a very clever chap, he’s seen this situation before. “What we need is a business rules engine” he declares. Now this looks promising. The configuration moves from its XML file into a database and has its own specialised GUI. Initially there was hope that non-technical business users would be able to use the GUI to configure the application, but that turned out to be a false hope; the mapping of business rules into the engine requires a level of expertise that only some members of the development team possess. We’re now at 6 on the clock.

Frustratingly there are still some business requirements that can’t be configured using the new rules engine. Some logical conditions simply aren’t configurable using its GUI, and so the application has to be re-coded and re-deployed for some scenarios. Help is at hand: someone on the team reads Ayende’s DSLs book. Yes, a DSL will allow us to write arbitrarily complex rules and solve all our problems. The team stops work for several months to implement the DSL. It’s a considerable technical accomplishment when it’s completed and everyone takes a well earned break. Surely this will mean the end of arbitrary hard-coded business logic? It’s now 9 on the clock.

Amazingly it works. Several months go by without any changes being needed in the core application. The team spend most of their time writing code in the new DSL. After some embarrassing episodes, they now go through a complete release cycle before deploying any new DSL code. The DSL text files are version controlled and each release goes through regression testing before being deployed. Debugging the DSL code is difficult, there’s little tooling support, they simply don’t have the resources to build an IDE or a ReSharper for their new little language. As the DSL code gets more complex they also start to miss being able to write object-oriented software. Some of the team have started to work on a unit testing framework in their spare time.

In the pub after work someone quips, “we’re back where we started four years ago, hard coding everything, except now in a much crappier language.”

They’ve gone around the clock and are back at 12.

Why tell this story? To be honest, I’ve never seen an organisation go all the way around the clock, but I’ve seen plenty that have got to 5, 6, or 7 and feel considerable pain. My point is this:

At a certain level of complexity, hard-coding a solution may be the least evil option.

You already have a general-purpose programming language. Before you go down the route of building a business rules engine or a DSL, or even if your configuration passes a certain level of complexity, consider that with a slicker build-test-deploy cycle it might be far simpler just to hard code it.

As you go clockwise around the clock, the technical implementation becomes successively more complex. Building a good rules engine is hard; writing a DSL is harder still. Each extra hour you travel clockwise will lead to more complex software with more bugs and a steeper learning curve for any new hires. The more complex the configuration, the more control and testing it will need before deployment. Soon enough you’ll find that there’s little difference in the length of time it takes between changing a line of code and changing a line of configuration. Rather than a commonly available skill, such as coding C#, you find that your organisation relies on a very rare skill: understanding your rules engine or DSL.

I’m not saying that it’s never appropriate to implement complex configuration, a rules engine, or a DSL. Indeed, I would jump at the chance of building a DSL given the right requirements. But I am saying that you should understand the implications and recognise where you are on the clock before you go down that route.

EasyNetQ: A Breaking Change, IPublishChannel

EasyNetQ is my high-level .NET API for RabbitMQ.

Until recently, to publish a message using EasyNetQ you would create an instance of IBus and then call the publish method on it:

bus.Publish(message);

The IBus instance is the logical representation of a connection to a RabbitMQ server. Generally you should open a connection to the RabbitMQ server when your service (or application) starts and close it when the service shuts down. AMQP (the messaging protocol that RabbitMQ implements) multiplexes operations over the connection using channels, so while the connection is open, multiple channels can be opened and closed to carry out AMQP operations.
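
To make the connection/channel relationship concrete, here is roughly how it looks in the underlying RabbitMQ .NET client (a sketch; the queue name and payload are made up):

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };

using (var connection = factory.CreateConnection()) // one per application lifetime
using (var channel = connection.CreateModel())      // cheap to open and close, but never share between threads
{
    var body = Encoding.UTF8.GetBytes("hello");
    // publish to the default exchange, routing key = queue name
    channel.BasicPublish("", "my_queue", null, body);
}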

Initially I’d hoped to hide channels from EasyNetQ users. With subscriptions this makes sense - after the client creates a subscription, EasyNetQ looks after the channel and the subscription consuming thread. But publishing turned out to be more complex. Channels cannot be shared between threads, so EasyNetQ initially used a ThreadLocal channel for all publish calls. A new thread meant a new channel.

This problem manifested itself with a nasty operational bug we suffered. We had a service which woke up on a timer, created some threads, and published messages on them. Although the threads ended, the channels didn’t close, and each timer interval added another few open channels. After a period of time the channel count exceeded the allowed maximum and our RabbitMQ server died. Initially we considered finding some way of discovering when a thread ends and then disposing the channels based on some trigger, but after a discussion on Stack Overflow, we came around to the view that a sensible API design hands the responsibility of managing the channel lifetime to the API’s consumer (thanks to Martin James for being the main persuader).

So let me introduce you to my new friend, IPublishChannel. Instead of calling Publish directly on the IBus instance, EasyNetQ clients now have to create an instance of IPublishChannel and call its Publish method instead. IPublishChannel, as its name suggests, represents a channel. The channel is opened when it’s created, and disposed when it’s disposed.

Here’s how you use it:

using (var publishChannel = bus.OpenPublishChannel())
{
    publishChannel.Publish(message);
}

So a little more API complexity, but a lot more transparency about how channels are created and disposed.

The request/response API has also changed to include IPublishChannel:

using (var publishChannel = bus.OpenPublishChannel())
{
    publishChannel.Request<TestRequestMessage, TestResponseMessage>(request, response =>
        Console.WriteLine("Got response: '{0}'", response.Text));
}

The EasyNetQ version has been bumped to 0.5 to reflect the breaking change.

Happy messaging!

Time To Consider Node.js?

Unless you’ve been living under a rock for the past couple of years, you’ll have heard the buzz about Node.js, a platform for building network applications in Javascript. If, like me, you’re a .NET developer working on Windows, you’ll have thought, ‘that’s nice, now back to work’. But I think it’s now time to consider adopting Node as a core part of our toolkit. It allows us to do some things far easier than we can with our existing tools, and it’s now mature enough on Windows that you don’t have to jump through hoops in order to use it. There’s a very nice IIS integration story now with iisnode.

Here’s a little example. I needed to write a little web service that simply waited for a second before responding. I wanted a harness to test the asynchronous IO handling in EasyNetQ. I’d written something to do this in .NET previously. Here’s my IHttpHandler implementation…

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

namespace TestHttpHandler
{
    public class TimerHandler : IHttpAsyncHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            throw new InvalidOperationException("This handler cannot be called synchronously");
        }

        public bool IsReusable
        {
            get { return false; }
        }

        public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback callback, object state)
        {
            var taskCompletionSource = new TaskCompletionSource<bool>(state);
            var task = taskCompletionSource.Task;

            var timer = new Timer(timerState =>
            {
                context.Response.Write("OK");
                callback(task);
                taskCompletionSource.SetResult(true);
            });
            timer.Change(1000, Timeout.Infinite);

            return task;
        }

        public void EndProcessRequest(IAsyncResult result)
        {
            // nothing to do
        }
    }
}

It’s not trivial to write a non-blocking server like this. You have to have a good handle on the TPL and APM, something that’s beyond many mid-level .NET developers. In order to run this on a server I have to set up a new Web Application in IIS and configure the Web.config file. OK, it’s not a huge amount of work, but it’s still friction for a small throw-away experiment.
 
Here’s the same thing in Node:
var http = require('http');

http.createServer(function (req, res) {
    setTimeout(function () {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('OK');
    }, 1000);
}).listen(1338);

console.log('LongRunningServer is at http://localhost:1338/');

First of all, it’s fewer lines of code. Node is asynchronous out of the box; you don’t have to understand anything more than standard Javascript programming idioms in order to write this service. I’m a total Node novice, but I was able to put this together far faster than the .NET example above.

But the runtime story is very nice too. Once I’ve installed Node (which is now just a question of running the Windows installer) all I have to do is open a command prompt and write:

node LongRunningServer.js

I don’t think I’m going to abandon .NET just yet, but I do think that I’ll be considering Node solutions as an integral part of my .NET applications.


EasyNetQ: Introducing the Advanced API

EasyNetQ is an easy to use .NET API for RabbitMQ. Check out our new homepage at easynetq.com.

EasyNetQ's mission is to provide the simplest possible API for RabbitMQ based messaging. The core IBus interface purposefully avoids exposing AMQP concepts such as exchanges, bindings, and queues; instead you get a Publish method and a Subscribe method, and a Request method and a Respond method: simple to use, but also a rather opinionated take on the most common messaging patterns.

For users who understand AMQP and want the flexibility to create their own exchange-binding-queue topology this can be quite limiting; you still want to use EasyNetQ’s connection management, subscription handling, serialization, and error handling, but you want to do something outside of EasyNetQ’s pub/sub request/response patterns. For these users we’ve provided the ‘advanced’ API.

The advanced API is implemented with the IAdvancedBus interface. An instance of this interface can be accessed via the Advanced property of IBus:

var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

Yes, I stole that idea from RavenDb, thanks Ayende  :)

To create a bespoke exchange-binding-queue topology use the classes in the EasyNetQ.Topology namespace. In this example I’m binding a durable queue to a direct exchange:

var queue = Queue.DeclareDurable("my.queue.name");
var exchange = Exchange.DeclareDirect("my.exchange.name");
queue.BindTo(exchange, "my.routing.key");

Once you’ve defined your topology, you can use the Publish and Subscribe methods on the IAdvancedBus. To publish, first create a new message. Note we have a new Message<T> class that exposes the AMQP basic properties:

var myMessage = new MyMessage {Text = "Hello from the publisher"};
var message = new Message<MyMessage>(myMessage);
message.Properties.AppId = "my_app_id";
message.Properties.ReplyTo = "my_reply_queue";

Now we can publish our message to the exchange that we declared above:

using (var channel = advancedBus.OpenPublishChannel())
{
    channel.Publish(exchange, routingKey, message);
}

Here’s how we subscribe to the queue we defined:

advancedBus.Subscribe<MyMessage>(queue, (msg, messageReceivedInfo) =>
    Task.Factory.StartNew(() =>
    {
        Console.WriteLine("Got Message: {0}", msg.Body.Text);
        Console.WriteLine("ConsumerTag: {0}", messageReceivedInfo.ConsumerTag);
        Console.WriteLine("DeliverTag: {0}", messageReceivedInfo.DeliverTag);
        Console.WriteLine("Redelivered: {0}", messageReceivedInfo.Redelivered);
        Console.WriteLine("Exchange: {0}", messageReceivedInfo.Exchange);
        Console.WriteLine("RoutingKey: {0}", messageReceivedInfo.RoutingKey);
    }));

Note that we also get an IMessageReceivedInfo instance that gives us information about the message context, such as the exchange, the routing key and the redelivered flag.

So as you can see, you get a lot more control and access to various AMQP entities, but still with much less complexity than using the RabbitMQ .NET C# client directly.

For more on the advanced API check out the EasyNetQ documentation. Happy messaging!

Use AutoResetEvent To Unit Test Multi-Threaded Code

I’ve been guilty of using a very nasty pattern recently to write unit (and integration) tests for EasyNetQ. Because I’m dealing with inherently multi-threaded code, it’s difficult to know when a test will complete. Up to today, I’d been inserting Thread.Sleep(x) statements to give my tests time to complete before the test method itself ended.

Here’s a contrived example using a timer:

[Test]
public void SampleTestWithThreadSleep()
{
    var timerFired = false;

    new Timer(x =>
    {
        timerFired = true;
    }, null, someAmountOfTime, Timeout.Infinite);

    Thread.Sleep(2000);
    timerFired.ShouldBeTrue();
}

I don’t know what ‘someAmountOfTime’ is, so I’m guessing a reasonable interval and then making the thread sleep before doing the asserts. In most cases the ‘someAmountOfTime’ is probably far less than the amount of time I’m allowing. It pays to be conservative in this case :)

My excellent colleague Michael Steele suggested that I use an AutoResetEvent instead. To my shame, I’d never spent the time to really understand the thread synchronisation methods in .NET, so it was back to school for a few hours while I read bits of Joseph Albahari’s excellent Threading in C#.

An AutoResetEvent allows you to block one thread while waiting for another thread to complete some task; ideal for this scenario.

Here’s my new test:

[Test]
public void SampleTestWithAutoResetEvent()
{
    var autoResetEvent = new AutoResetEvent(false);
    var timerFired = false;

    new Timer(x =>
    {
        timerFired = true;
        autoResetEvent.Set();
    }, null, someAmountOfTime, Timeout.Infinite);

    autoResetEvent.WaitOne(2000);
    timerFired.ShouldBeTrue();
}

If you create an AutoResetEvent by passing false into its constructor it starts in a blocked state, so my test will run as far as the WaitOne line then block. When the timer fires and the Set method is called, the AutoResetEvent unblocks and the assert is run. This test only runs for ‘someAmountOfTime’ not for the two seconds that the previous one took; far better all round.
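
Note too that WaitOne(2000) returns a bool, false if the timeout elapsed, so you could tighten the test further by asserting on its return value; the timeout then acts purely as a safety net for a hung test.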

Have a look at my EasyNetQ commit where I changed many of my test methods to use this new pattern.

When Should I Use An ORM?

I think, like everyone, I go through the same journey whenever I find out about a new technology:

Huh? –> This is really cool –> I use it everywhere –> Hmm, sometimes it’s not so great

Remember when people were writing websites with XSLT transforms? Yes, exactly. XML is great for storing a data structure as a string, but you really don’t want to be coding your application’s business logic with it.

I’ve gone through a similar journey with Object Relational Mapping tools. After hand-coding my DALs, then code generating them, ORMs seemed like the answer to all my problems. I became an enthusiastic user of NHibernate through a number of large enterprise application builds. Even today I would still use an ORM for most classes of enterprise application.

However there are some scenarios where ORMs are best avoided. Let me introduce my easy to use, ‘when to use an ORM’ chart.

[Chart: when to use an ORM, by model complexity and throughput]

It’s got two axes, ‘Model Complexity’ and ‘Throughput’. The X-axis, Model Complexity, describes the complexity of your domain model: how many entities you have and how complex their relationships are. ORMs excel at mapping complex models between your domain and your database. If you have this kind of model, using an ORM can significantly speed up and simplify your development time and you’d be a fool not to use one.

The problem with ORMs is that they are a leaky abstraction. You can’t really use them and not understand how they are communicating with your relational model. The mapping can be complex and you have to have a good grasp of both your relational database, how it responds to SQL requests, and how your ORM comes to generate both the relational schema and the SQL that talks to it. Thinking of ORMs as a way to avoid getting to grips with SQL, tables, and indexes will only lead to pain and suffering. Their benefit is that they automate the grunt work and save you the boring task of writing all that tedious CRUD column to property mapping code.

The Y-axis in the chart, Throughput, describes the transactional throughput of your system. At very high levels, hundreds of transactions per second, you need hard-core DBA foo to get out of the deadlocked hell where you will inevitably find yourself. When you need this kind of scalability you can’t treat your ORM as anything other than a very leaky abstraction. You will have to tweak both the schema and the SQL it generates. At very high levels you’ll need Ayende level NHibernate skills to avoid grinding to a halt.

If you have a simple model, but very high throughput, experience tells me that an ORM is more trouble than it’s worth. You’ll end up spending so much time fine tuning your relational model and your SQL that it simply acts as an unwanted obfuscation layer. In fact, at the top end of scalability you should question the choice of a relational ACID model entirely and consider an eventually-consistent event based architecture.

Similarly, if your model is simple and you don’t have high throughput, you might be better off using a simple data mapper like Simple.Data.

So, to sum up, ORMs are great, but think twice before using one where you have a simple model and high throughput.

EasyNetQ: Continuous Delivery with TeamCity and NuGet

I recently applied to have EasyNetQ, my awesome RabbitMQ API for .NET, built on the CodeBetter TeamCity build server (you can log on as ‘guest’ with no password). The very helpful Anne Epstein set it up for me and it is now building, testing and packaging nicely.

The bit I really like is that TeamCity can automatically push the EasyNetQ NuGet package up to NuGet.org on each build. So now EasyNetQ has continuous delivery, when you install or update it using NuGet you are getting the very latest version fresh from GitHub.

My attention was also drawn to MyGet.org, a personal-nuget-in-the-cloud service. For the moment, with EasyNetQ still very much a beta project, I’m happy to have the latest master always be available, but I can envisage some time in the future where I’ll want to have a bit more confidence before pushing a release up to NuGet.org; MyGet would allow me to publish beta packages for testing and then promote them to NuGet when they’ve passed all the QA barriers.

I now consider continuous deployment / delivery to be essential for any software project; it’s just brilliant being able to point anyone who cares to a test server that always has the very latest trunk build. It’s all a part of keeping development process cycles as tight as possible and always having working software.

Playing with Twitter

My excellent clients, 15Below, are interested in exploring how social media, like Twitter and Facebook can work in their business area, travel communications. I’ve been tasked with investigating the technical side of this, in particular interacting with the Twitter API. Before I share what little I’ve learnt about talking to twitter with .NET, I’d like to throw a stone in the lake and ripple out some thoughts on life on planet Twitter.

The first conclusion I’ve come to is that a database of twitter handles isn’t very useful; there’s not much you can do with them. You can’t send more than 250 direct messages per day, and in any case the user has to be following you to receive them. So twitter as a point-to-point mass private communication channel, like email, is a non-starter. No spamming then. You can mention slightly more people, but once again you would soon start to hit the 1000 status updates per day limit with any serious targeted communications. You used to be able to get your application whitelisted by Twitter, which allows you higher API limits, but they’ve stopped that now.

We thought about creating multiple senders, but you can’t automate account creation and most of the limits are per account. Twitter is very strict about any application that they consider to be bending their rules, and you’ll soon find yourself blacklisted even if you manually set about creating multiple accounts for an application that’s deemed to be out of bounds.

The interesting scenarios coalesce around the core Twitter interaction use case; broadcasting to followers and joining in with conversations based around mentions. One idea we’re playing with is for an account that replies with status updates when you mention it with some kind of token, say a flight number or the name of a city. You’ve still got that 1000 tweets per day limit, but so long as the use case is partitioned correctly, that needn’t be a problem.

OK, that’s the ‘what’s it good for’ waffle out of the way, now for the interesting stuff: how easy is it to talk to Twitter from .NET?

I’ve been playing with the excellent TweetSharp library by Daniel Crenna, which makes basic twitter interaction pretty trivial. The only thing I had to think at all hard about was the OAuth workflow.

OAuth is a protocol that allows a user to authorise an application for a subset of functionality on their account without giving away their password. I will probably be building server-side components that automate a small set of twitter accounts, so I need an OAuth workflow that allows me to do a one-time configuration of the component to give it access to an account. This means I have to have a one-time OAuth tool that allows me to harvest the OAuth tokens for a single account.

I’ve written a little console app to do just that …

class Program
{
    // you get the consumer key and secret from Twitter when you register your app.
    private const string consumerKey = " --- ";
    private const string consumerSecret = " --- ";

    static void Main(string[] args)
    {
        var service = new TwitterService(consumerKey, consumerSecret);
        var requestToken = service.GetRequestToken();
        var authorisationUri = service.GetAuthorizationUri(requestToken);

        // open up a browser window and point it at the authorisationUri
        Process.Start(authorisationUri.ToString());

        Console.WriteLine("Input the verifier pin number from the web page:");
        var verifier = Console.ReadLine();
        var accessToken = service.GetAccessToken(requestToken, verifier);

        // now output the access token and secret so that we can cut-n-paste them into our
        // server side configuration.
        Console.Out.WriteLine("accessToken.Token = {0}", accessToken.Token);
        Console.Out.WriteLine("accessToken.TokenSecret = {0}", accessToken.TokenSecret);

        // check that the access token and secret work.
        service.AuthenticateWith(accessToken.Token, accessToken.TokenSecret);

        Console.WriteLine(service.Response.Response);
    }
}

This will pop up a browser window and present a pin number generated by Twitter that you then have to copy and paste. It then outputs the generated token and ‘token secret’ for that account.

Once you’ve got an access token and a token secret, you can use them for as long as you want. If you own the accounts your server side component is using, you don’t have to consider the workflow for when the user cancels your application’s access rights.

Sending a status update is trivial …

var service = new TwitterService(consumerKey, consumerSecret);
service.AuthenticateWith(previouslySavedToken, previouslySavedTokenSecret);
service.SendTweet(message);

My next challenge is getting to grips with the streaming API so that I can monitor mentions and reply to status update requests. I’ve got a working demo that polls the Twitter API, but it’s a very wasteful use of the scarce API access allowance.

So there you have it. Some very early thoughts on using Twitter for business and pleasure. I think there are some very exciting opportunities, especially for the ‘you don’t have to install anything’ scenario of communicating with remote applications.
