With NginX you can fetch pages and, if you use SSI, individual objects/parts of pages. You can specify that requests under a defined URI are forwarded to a memcached server or cluster (so you can have more than one server holding the data in case one goes down). If there is no object in the cache, it can fail over to dynamic generation, fetching from disk, proxying, etc. It also lets you build the memcached key from various variables, including the URI. It's not used to store content, though: the module can only get(), not set() (at this point in time, though there are plans to extend it).
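As a rough sketch of what that looks like in an NginX config (the location names, upstream, and addresses here are hypothetical, just to show the shape):

```nginx
# Serve /defined/uri from memcached, keyed on the request URI,
# falling back to a dynamic backend when the object isn't cached.
location /defined/uri {
    set $memcached_key $uri;       # key can be built from any variables
    memcached_pass 127.0.0.1:11211;
    default_type   text/html;      # memcached stores no content type
    error_page 404 = @dynamic;     # cache miss: generate it instead
}

location @dynamic {
    proxy_pass http://backend;     # or FastCGI, disk, etc.
}
```

For a cluster, `memcached_pass` can point at an `upstream` block listing several memcached servers instead of a single address.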
The thinking behind using memcached was not so much to use it as the basis of a caching engine, which would require set()s. I was thinking of it purely in a get() scenario, in the same way that some requests might be proxied or sent to an LSAPI backend.
There are times when it might be most appropriate to control the cached content outside of the webserver (or at least have the option to do so). There are many applications that integrate with Memcached, including database functions and obviously scripting engines.
For example, you could have a database trigger that updates a memcached entry whenever a row changes, pushing the relevant cached data into memcached directly from the database. Depending on how things are defined, you wouldn't then need to call PHP/Ruby etc. to update the cache.
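The same write-through idea can be sketched in a few lines: whatever fires on the database update (a trigger hook, or the application itself) renders the fragment and stores it under the same key NginX will look up (i.e. the URI, given `set $memcached_key $uri`). Everything here — the `/products/...` URI scheme, the row fields, the rendering — is illustrative; the client just needs a memcached-style set().

```python
# Hypothetical write-through: after a row changes, push the rendered
# HTML fragment into memcached under the key nginx will request.

def memcached_key_for(uri):
    """The key nginx looks up when $memcached_key is set to $uri."""
    return uri

def render_fragment(row):
    """Render the cached HTML fragment for a (hypothetical) product row."""
    return "<div class='product'><h2>%s</h2><p>%s</p></div>" % (
        row["name"], row["description"])

def refresh_cache(client, row):
    """Store the fragment via any client exposing set(key, value)."""
    uri = "/products/%d" % row["id"]
    client.set(memcached_key_for(uri), render_fragment(row))
```

The next request for /products/42 would then be served straight from memcached, with no scripting engine involved.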
Using memcached (possibly combined with SSI/ESI), it would be possible to serve data cached in memory across many servers, rather than just one, in a portable way, so that the data is the same on all servers. Sometimes it's important that a particular object on a page appears identical across all webservers, whilst other parts are fine with a few minutes' delay between caches.
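With SSI enabled (`ssi on;` in the NginX config), the page itself can mix the two kinds of content; the fragment URI below is hypothetical, standing for a location served via memcached_pass from the shared cluster:

```html
<html>
  <body>
    <!-- Same on every server: fetched from the shared memcached cluster -->
    <!--#include virtual="/cached/header" -->
    <p>Per-server content with its own, looser cache policy.</p>
  </body>
</html>
```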
Also, if you have very large volumes of data (multiple gigabytes, for example) that you want to serve from memory, and don't want to put a URI-based load-balancing layer in front of the webserver, spreading the cached data across a memcached pool is another option. (As I understand it, that was the original motivation for developing memcached, though I think it had something like PHP in front of it.)
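The principle behind spreading the data is simple client-side key distribution: every key deterministically maps to one server in the pool, so each server only needs enough memory for its share, and every webserver using the same pool list agrees on where an object lives. A minimal sketch (real clients use more robust schemes such as consistent hashing, and the addresses are made up):

```python
import zlib

# Hypothetical pool of memcached servers.
SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def server_for(key, servers=SERVERS):
    """Pick the server that owns `key` (naive modulo hashing)."""
    return servers[zlib.crc32(key.encode("utf-8")) % len(servers)]
```

The drawback of plain modulo hashing is that adding or removing a server remaps most keys, which is exactly what consistent hashing was invented to avoid.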
Of course there are many ways to do these things that don't involve memcached, but given the number of applications integrating with it, I feel it's worth giving it consideration. For some applications it's a useful bit of flexibility.
I'm currently developing a framework that uses both NginX and LiteSpeed: NginX has all the memcached/SSI features as well as very flexible use of variables in its configuration, while LiteSpeed is quicker (in my tests) with PHP. I'd prefer to use just one webserver, though, and am looking forward to SSI being implemented in LiteSpeed.
Here's a link to the memcached module in NginX:
http://wiki.codemongers.com/NginxHttpMemcachedModule