The typical use case for a request/response scenario is retrieving data that a user needs to see from some external system. When the request is made, a reasonable timeout can be set. When that timeout elapses, the requesting code should be notified that the response is missing, so it can move on.
If you have a scenario where you need an extensive amount of time to process a message / request, you should look at long-running processes and status updates (also covered in the course / book).
But what does it look like to time out a request/response scenario? Is that done in code, in RabbitMQ, or in both?
The Request Timeout
There isn’t much to this example (that’s part of the point of Rabbus), but it does show how a simple request can be made from within an Express router.
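The example itself is missing from this copy of the post, so here is a hedged reconstruction. The route, message shape, and view name ("account-details") are assumptions for illustration, not taken from the original:

```javascript
// A sketch of a Rabbus request made from an Express route handler.
// The requester is injected so the handler stays easy to test.
function buildDetailsRoute(requester) {
  return function (req, res) {
    // Ask the back-end service, over RabbitMQ, for this account's details
    requester.request({ accountId: req.params.id }, function (details) {
      res.render("account-details", { details: details });
    });
  };
}

// Wiring it up would look something like this (requires a running RabbitMQ
// broker; check the Rabbus docs for your version's exact options):
//
// var Rabbus = require("rabbus");
// var rabbot = require("rabbot");
// var requester = new Rabbus.Requester(rabbot, {
//   exchange: "account.exchange",
//   messageType: "account.details"
// });
// router.get("/account/:id", buildDetailsRoute(requester));
```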
Assuming a response comes from the other end of RabbitMQ, everything will be good to go. Unfortunately, that expectation will fail time and time again, as the network hiccups, applications go down, and other problems interfere with the request or response. And if that happens, the request will sit there forever, never returning and never rendering a response to the HTTP request!
Timeout The Request
Because of the possibility of an endless wait-state and never responding to the HTTP request, the first place you need a timeout is the RabbitMQ request.
You need to gracefully handle a scenario where a response is not received – for any reason – and move on. Perhaps “move on” means showing some default set of information – or a small warning message to the user. Whatever it means, you need to have code that handles this scenario.
To do that, Promises can be used effectively.
Start by extracting the Rabbus request into a separate method. This will be useful to keep the Express handler clean:
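The extracted method isn't shown in this copy, so as a sketch (the function name, message shape, and node-style callback are assumptions):

```javascript
// Sketch of the extracted method: it hides the Rabbus request behind a
// plain callback, keeping the Express handler clean.
function getAccountDetails(requester, accountId, cb) {
  requester.request({ accountId: accountId }, function (response) {
    cb(null, response);
  });
}

// The Express handler can then just call the method:
//
// router.get("/account/:id", function (req, res) {
//   getAccountDetails(requester, req.params.id, function (err, details) {
//     res.render("account-details", { details: details });
//   });
// });
```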
Next up, wrap the implementation of this extracted method with a new Promise (assuming ES6 promises, or use RSVP or another library).
Within this promise, you’ll need to set up a timer using setTimeout, to be responsible for “cancelling” the request. The actual “cancel” will come from resolving the promise with a “completed” flag set to false. If the request receives a response within the specified time, you can clearTimeout using the timer id received from setTimeout, and then resolve the promise with “completed” set to true, passing along the returned data as well.
Below the wrapping promise, you’ll want to immediately consume the promise using a callback function for both the resolution and rejection. If it resolves with “completed” set to true, fire the callback method with the data. Otherwise, call it with no parameters. If rejected, due to an error, fire the callback with the error.
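The steps above can be sketched as follows, using ES6 Promises. The function name, the message shape, the configurable timeout parameter, and the node-style callback are assumptions for illustration:

```javascript
// A sketch of the timeout pattern described above.
function getAccountDetails(requester, message, timeoutMs, cb) {
  var promise = new Promise(function (resolve, reject) {
    // If no response arrives in time, "cancel" the request by resolving
    // with the "completed" flag set to false
    var timer = setTimeout(function () {
      resolve({ completed: false });
    }, timeoutMs);

    requester.request(message, function (response) {
      // A response arrived in time: stop the timer, pass the data along
      clearTimeout(timer);
      resolve({ completed: true, data: response });
    });
  });

  // Immediately consume the promise, translating it back to the callback
  promise.then(function (result) {
    if (result.completed) {
      cb(null, result.data);  // got a response: hand back the data
    } else {
      cb();                   // timed out: no error, no data, move on
    }
  }).catch(function (err) {
    cb(err);                  // something actually failed
  });
}
```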
With this done, your request will wait for the specified amount of time before cancelling and moving on.
If the back-end code on the other side of RabbitMQ goes down, or is taking too long, or whatever, your users will still get the page loaded within a reasonable amount of time – they’ll just be short a little bit of information. You can handle that in a number of ways – showing a “could not load…” message, showing some default information, or ignoring it entirely, for example.
But, if the back-end request handler goes down, what happens to all of the request messages that are being sent and not processed?
Timing Out The Request Messages
If your request handler goes down for a while, and a lot of requests come in, it’s not too much of a problem, right? You’re timing out the request and moving on. But if there are a lot of requests being made while the request handler is down, you’ll end up with a lot of messages piled up in your RabbitMQ queue. Normally this is a good thing – one of the many reasons you want to use a queue.
In the case of a cancelled request, however, this won’t be quite so good.
Say you have a request handler go down … for an hour. What happens when the request handler comes back up and finds 1,000+ messages sitting in its queue? Hopefully it will start processing them – that’s what good queue handling code should do! Except in this case, why is it going to process them? The code that made the request is no longer interested in the response – it moved on, long ago.
Don’t waste the valuable computing resources with these dead requests that no longer need to be processed. Instead, put a TTL (time to live) on the queue and drop the messages!
This is typically done in the queue declaration, which can be handled with Rabbus, again:
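The declaration itself is missing from this copy of the post, so here is a hedged sketch of the options a Rabbus Responder might be given. The exchange, queue, and message-type names are illustrative, and `messageTtl` is in milliseconds:

```javascript
// Sketch of a queue declaration with a per-queue message TTL. Any message
// that sits unconsumed longer than messageTtl is dropped by RabbitMQ.
var responderOptions = {
  exchange: "account.exchange",
  messageType: "account.details",
  queue: {
    name: "account.details.queue",
    // match this to the requester's timeout (5 seconds, in this sketch)
    messageTtl: 5000
  }
};

// Handing the options to a Responder would look something like this
// (requires a running RabbitMQ broker):
//
// var Rabbus = require("rabbus");
// var rabbot = require("rabbot");
// var responder = new Rabbus.Responder(rabbot, responderOptions);
```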
Notice that the timeout (“messageTtl”) has been set to the same amount of time as the requester’s timeout, above. Keeping them synchronized, or close to each other, helps keep the system healthy by not processing messages whose responses no longer have any code waiting for them.
With that in place, you have an effective way to manage request timeouts on both the requester side and the messaging / request handler side.
On Temporal Messages
Messages and queues are often there to provide reliability and stability in systems. They allow code to be run at some point in the future, make crashes easier to recover from, facilitate inter-process communication in a reliable manner, and more.
Often, messages need to be persistent, with some guarantee of being handled, for the system to work properly. But with request timeouts, there is real value in temporal messages – messages that exist for a short period of time to facilitate some feature, but are not absolutely required to live and be handled.
The ability to have temporal messaging is important for distributed systems, and opens another world of opportunity for good messaging architecture.
Get Up To Speed w/ RabbitMQ and Node
If you’re looking at RabbitMQ and Node, you need to look at the RabbitMQ For Developers bundle. This series of screencasts, ebooks, interviews with messaging professionals and more, will get you up and running with RabbitMQ and Node, faster than any other resource set. You’ll learn from real-world experience and see a working implementation of the most common messaging patterns.
Get the RabbitMQ For Developers bundle, and get your messages flowing!