Wednesday, July 27, 2011

SimpleDB Leaderboards - the ongoing story...

Optimization Update

Just a quick update today.

Following some advice from AWS peeps, I've modified the service so it's possible to configure a leaderboard service instance to have each leaderboard use its own unique domain.

Coupling that with re-running some tests on an EC2 instance, rather than on my desktop machine (through 2 proxies, and only then out to the SimpleDB servers), has generated some interesting and hopeful new timings for the process of regenerating the predictors for a leaderboard.

The test sample here is a single leaderboard with approx. 343,000 entries and 6 columns of data, with the test specified to generate predictors for only one column.

The same predictor generation was run against this data set, with the time taken (in seconds) recorded below.

Service type                          Local (s)   EC2 (s)
Domain per leaderboard                      224        68
Single domain for all leaderboards          276       129

The results which came out of this were interesting and rather hopeful. The predictor in these tests is set to generate 1000 sample data points, and to do this it performs the read-skip-read(...) routine, so each predictor generation makes on the order of 2000 roundtrip requests to SimpleDB. With these taking ~60ms from an EC2 instance versus >120ms locally, running the test locally was hiding a big chunk of the computational cost of having multiple leaderboards in a single SDB domain. Running directly on an EC2 instance, the domain-per-leaderboard setup takes almost half the time of the shared-domain one (68s versus 129s) to generate a predictor.

According to the AWS experts who suggested this approach, this is because SimpleDB can optimise queries whose predicate operates against a single attribute rather than several - so the queries look like the following (domain per leaderboard first, older shared domain across leaderboards second).

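The shape of the two queries is along these lines - the `xp` sort attribute and domain names here are illustrative (borrowed from the attribute table in the design post below), not the production ones :-

   SELECT `xp` FROM `lb_Week-21` WHERE `xp` IS NOT NULL ORDER BY `xp` DESC

   SELECT `xp` FROM `leaderboards` WHERE `leaderboardId` = 'Week-21' AND `xp` IS NOT NULL ORDER BY `xp` DESC

In the first form the ORDER BY attribute is the only one in the predicate; in the second, the extra leaderboardId condition takes the query outside the single-attribute shape SimpleDB can optimise.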


String concatenation avoidance
Incidentally, there was another change I made during this evolution. The SimpleDB queries were getting somewhat evil, and ever easier to make a cock of when formed via string + string (or StringBuilder).
Having seen how comparatively clean the process is with the Google AppEngine datastore Java API (couldn't comment for go/python), preparing queries programmatically with its query classes and a nice fluent API (and hence avoiding manual stringiness), I realised the time had come to knock up something similar for use here.
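To give a flavour, here's a heavily cut-down sketch of the kind of thing - illustrative of the approach, not the actual project class :-

   public final class QueryBuilder
   {
     private final String domain;
     private final StringBuilder where = new StringBuilder();
     private String orderBy;
     private boolean descending;
     private int limit = -1;

     private QueryBuilder(String domain) { this.domain = domain; }

     public static QueryBuilder selectFrom(String domain)
     {
       return new QueryBuilder(domain);
     }

     public QueryBuilder whereEquals(String attribute, String value)
     {
       appendAnd();
       // escape embedded quotes per SimpleDB select syntax
       where.append('`').append(attribute).append("` = '").append(value.replace("'", "''")).append('\'');
       return this;
     }

     public QueryBuilder whereIsNotNull(String attribute)
     {
       appendAnd();
       where.append('`').append(attribute).append("` IS NOT NULL");
       return this;
     }

     public QueryBuilder orderByDescending(String attribute)
     {
       this.orderBy = attribute;
       this.descending = true;
       return this;
     }

     public QueryBuilder limit(int limit)
     {
       this.limit = limit;
       return this;
     }

     public String build()
     {
       StringBuilder select = new StringBuilder("SELECT * FROM `").append(domain).append('`');
       if (where.length() > 0) select.append(" WHERE ").append(where);
       if (orderBy != null) select.append(" ORDER BY `").append(orderBy).append(descending ? "` DESC" : "` ASC");
       if (limit > 0) select.append(" LIMIT ").append(limit);
       return select.toString();
     }

     private void appendAnd()
     {
       if (where.length() > 0) where.append(" AND ");
     }
   }

A select then reads as QueryBuilder.selectFrom("leaderboards").whereEquals("leaderboardId", "Week-21").whereIsNotNull("xp").orderByDescending("xp").limit(100).build() rather than a pile of concatenation.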

Currently it's only being used in the leaderboard-simpledb code, and nowhere else, but it's certainly been worth the (very small) effort - the Java class is less than 200 lines long, and that's with my formatting style, which is very whitespace-ish.

I'd definitely recommend doing this early though. As a result, I'm probably going to change the API wrapper I have for SimpleDB to force use of QueryBuilder instances rather than direct Strings as the argument to SDB selects... but this is going to make many other things go 'booom!' and have me go through and fix all my fail.

Hey ho though, such is life and learning.

TTFN

Tim


Thursday, July 14, 2011

A Leaderboard Storage implementation using Amazon SimpleDB

Summary


This post is an outline of a component design which uses SimpleDB as a backing store for a leaderboard system.

It is still a work-in-progress, but has already had various stress test operations performed on it.

This document was written in order to provide a little insight into one way of tackling this problem domain, as well as to allow me to clarify the design, which I had already described a few times internally but never really written a summary for.

The implementation of the service itself is in Java, using aws-java-sdk to communicate with SimpleDB via a wrapper library written for the project. The wrapper provides monitoring information (via JMX) to keep live tabs on SimpleDB usage and issues, and handles some of the dynamic request ramping/throttling we previously found we needed when making use of AWS services.


N.B. YMMV - this is certainly not the only way to do it, and I wouldn't endorse it as the only, or indeed necessarily optimal, way to do your leaderboards.



TL;DR
Multiple leaderboards per domain. A cron task updates a rank prediction curve per leaderboard sort column. Rank is estimated from the current score when retrieving results.

Domain Structure

An application-level instance of SimpleDBLeaderboardService maps to a single SimpleDB domain, which is used for one or more individual leaderboards.

SimpleDB's domain size limits (10 GB and one billion attributes per domain) are a fundamental limiting factor on the size of the data within a SimpleDBLeaderboardService instance.

Depending on the usage pattern, there could be a small number of unique leaderboards, each with a large number of individual entries, or a larger number of leaderboards with fewer entries.

If there is a need for both many leaderboards and many rows, then the implementation would have to be expanded to allow finer-grained splitting out of leaderboards, with the simplest amendment being to have one domain per leaderboard (though this would be at the cost of eating up domains rather rapidly).

As you can see from the table below, there are relatively few attributes inherent in the system, with most being the runtime-determined, leaderboard-specific sorted and unsorted columns.

  • itemName() - SimpleDB item UID, in this case leaderboardId + ownerId (e.g. Week-21_1070385706236341127)
  • leaderboardId - unique identifier for the leaderboard within the service domain (e.g. Week-21)
  • timestamp - when the score for this row was last updated (e.g. 2011-05-23T19:30:52.709)
  • owner - entity which posted the score, such as a user or clan UID (e.g. 1070385706236341127)
  • <sorted attribute> - value of a named attribute which is a valid column for rankings (e.g. xp, accuracy, rescues, damage, kills). Padded, fixed accuracy, and offset so that the natural sort order of the string value applies (e.g. 0000000600)
  • <unsorted attribute> - value of a named attribute which is not a column used for ranking (e.g. shots, damage). Not padded/offset, as natural sort ordering need not apply (e.g. 1600000)
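As an illustration of that padding/offsetting, the encode/decode for a sorted attribute might look something like this - the width and offset are assumptions for the example, not the project's actual settings :-

   // Encode a numeric score so that the string's natural (lexicographic)
   // sort order matches numeric order - SimpleDB compares values as strings.
   public final class SortableScore
   {
     private static final int WIDTH = 10;    // fixed digit count, e.g. "0000000600"
     private static final long OFFSET = 0L;  // raise above zero if scores can go negative
     private static final long MAX = (long) Math.pow(10, WIDTH);

     public static String encode(long score)
     {
       long shifted = score + OFFSET;
       if (shifted < 0 || shifted >= MAX)
       {
         throw new IllegalArgumentException("score out of encodable range: " + score);
       }
       return String.format("%0" + WIDTH + "d", shifted);  // zero-pad to fixed width
     }

     public static long decode(String encoded)
     {
       return Long.parseLong(encoded) - OFFSET;
     }
   }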


Posting Scores
When a client posts game results, some internal processing occurs which is outside the scope of this description, but which can ultimately result in some leaderboard columns being updated.

The part relevant to the implementation of the SimpleDBLeaderboardService is the part where the application :-
  • Retrieves the current row data for the specified LeaderboardId and owner UID.
  • Accumulates data for columns which are accumulators (e.g. shots_fired)
  • Conditionally updates other columns which are 'lowest' or 'highest' value columns, if the score post contains a better score than the current value for that column (e.g. time_to_complete posted = 60 versus stored = 40).
  • Posts any updated sorted/unsorted attribute values
Posting always has to be done as a read of the existing leaderboard data for that user, followed by a possible write if there is anything to update, as the service makes use of SimpleDB conditional writes to ensure transactionality on the update (the timestamp of the read-in data is used as a condition check if there was a previous entry, and the absence of any timestamp attribute if there was not). If the write fails its condition, the process has to be repeated from the read onwards.
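A sketch of that read-then-conditional-write, using the aws-java-sdk SimpleDB client directly (the method shape and attribute assembly are illustrative - in the real service the wrapper library sits in between) :-

   import java.util.List;
   import com.amazonaws.AmazonServiceException;
   import com.amazonaws.services.simpledb.AmazonSimpleDB;
   import com.amazonaws.services.simpledb.model.*;

   // Attempt the conditional write; returns false if another writer got in
   // first, in which case the caller repeats the whole read-modify-write.
   public boolean conditionalPost(AmazonSimpleDB sdb, String domain, String itemName,
                                  List<ReplaceableAttribute> updates,
                                  String readTimestamp, String newTimestamp)
   {
     updates.add(new ReplaceableAttribute("timestamp", newTimestamp, true));

     UpdateCondition condition = (readTimestamp == null)
         ? new UpdateCondition().withName("timestamp").withExists(false)  // no previous entry allowed
         : new UpdateCondition("timestamp", readTimestamp, true);         // timestamp must be unchanged

     try
     {
       sdb.putAttributes(new PutAttributesRequest(domain, itemName, updates, condition));
       return true;
     }
     catch (AmazonServiceException ex)
     {
       if ("ConditionalCheckFailed".equals(ex.getErrorCode()))
       {
         return false;  // collided with a concurrent update - re-read and retry
       }
       throw ex;
     }
   }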


Retrieving Scores
Retrieval of score data from the leaderboard consists of 4 main elements, and can be pulled using several different view types available from the SimpleDBLeaderboardService.

The elements are :-
  • Systemic data such as LeaderboardId, Owner Id and timestamp.
  • Unsorted stats column data.
  • Sorted stats column data.
  • Ranking information for a specified sorted column.
The SimpleDBLeaderboardService implementation works as follows for the different views :-

1. Positions for owner
Returns just the ranks for one or more sort columns for a single specified owner on a particular leaderboard.

The service does this by retrieving current score data for the specified user, then for each specified sort column gets the rank from the PositionCalculator (see Rank Predictors below) specified for the service instance for that column.

2. Result for owner
Returns just the current leaderboard result (sorted + unsorted column values) for a single specified owner (no ranking information returned).

The service does this by retrieving current score data for the specified user.

3. Results around owner
Returns a pivot of leaderboard results of the people at the ranks around that of the specified owner for a specified sort column.

The service does this by :-
  1. Retrieving the results data for the specified user.
  2. Requesting the rank for that user from the PositionCalculator (see Rank Predictors below) specified for the service instance for that column.
  3. Executing a query which retrieves (in order of the specified sort column) a fixed number of rows with scores higher than that of the initial user.
  4. Executing a query which retrieves a fixed number of rows with scores lower than that of the initial user.
  5. Assembling the results into a single response list in order (better score rows, the user's row, worse score rows), using the rank of the pivot user to fill in the ranking information for the other rows.
4. Results for owners
Returns current leaderboard results for a set of specified owners, optionally including ranks for a specified sort column.

The service does this by executing a series of individual requests for results for each owner, and then requesting each owner's rank from the PositionCalculator (see Rank Predictors below) specified for the service instance for that column.

It is likely this request will want/need optimisation at a later stage, either to parallelise the requests, or instead to make a single query of the form 'SELECT * from domainname WHERE itemName() IN ###names###', which may perform better.

5. Results paged
Returns an ordered page of leaderboard results for a specified sort column, along with a cursor for retrieving further results.

The service does this by performing a SELECT query using the specified column in an ORDER BY clause, limited to the requested page size. If an application-'opaque' cursor was provided, the start rank for the current page and the SimpleDB nextToken to use in the select are extracted from it. The response includes an updated cursor for further pagination.

As such, this request never refers to the PositionCalculator (see Rank Predictors below), relying instead on the fact that either no cursor was supplied and we are starting at rank #1, or a start rank was encoded within the cursor supplied with the request.
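A sketch of the paged select, assuming an opaque cursor object that packs the start rank together with the SimpleDB nextToken (the cursor and row-assembly helpers here are hypothetical) :-

   // One page of ordered results; ranks are assigned sequentially from the
   // start rank carried in the cursor (or 1 if this is the first page).
   String select = "SELECT * FROM `lb_Week-21` WHERE `xp` IS NOT NULL"
       + " ORDER BY `xp` DESC LIMIT " + pageSize;
   SelectRequest request = new SelectRequest(select);
   if (cursor != null)
   {
     request.setNextToken(cursor.getNextToken());  // resume where the last page stopped
   }
   SelectResult result = sdb.select(request);

   int rank = (cursor == null) ? 1 : cursor.getStartRank();
   for (Item item : result.getItems())
   {
     rows.add(toRow(item, rank++));  // hypothetical row assembly
   }
   String next = encodeCursor(rank, result.getNextToken());  // opaque cursor for the next page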


Parallelism

Retrieval requests of all forms are pretty much constant time regardless of the concurrency of requests coming into the system, as they each make use of SimpleDB largely independently (YMMV disclaimer here!).

This is mainly down to avoiding coupling between requests acting in parallel, and the fact that connections are not too precious a commodity when communicating with SimpleDB. With the current setup, each front end server instance communicating with SimpleDB has a pool of 100 http connections it can put into use for leaderboard traffic, and this can be increased either directly, by increasing the pool, or indirectly, by splitting the pools (particularly useful if one set of leaderboards starts starving out everyone else!).
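For reference, that per-instance pool cap is just aws-java-sdk client configuration (the 100 mirrors the figure above; credentials handling is elided) :-

   ClientConfiguration config = new ClientConfiguration();
   config.setMaxConnections(100);  // http connection pool available for SimpleDB traffic
   AmazonSimpleDB sdb = new AmazonSimpleDBClient(credentials, config);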

There is some more parallelism which could be applied to the retrieval process, mainly in the request which gets multiple users' data in a single request, which currently performs 'n' queries equal to the number of users in the request. This is mitigated somewhat, as it does not go through the 'select' mechanism but is instead a straightforward 'getItem' request, which in terms of billed CPU time is much cheaper than a number of 'select's.

Write throughput when posting scores is limited not so much by the amount of per-leaderboard activity, but mainly by the rate at which posts can be made to a leaderboard before concurrent updates collide, out-of-order updates are detected, and back-offs need to start being done. This is almost certainly going to affect clan rather than user leaderboards: the use-case which would generate concurrent updates to a single user's leaderboard row is far rarer than score posts from different users all attempting to update a single clan's leaderboard row at the same time.

Largely though, even though a single score post in our current setup will usually result in updates to multiple leaderboards (for example overall all-time, overall this week, overall today, per-mission all-time, per-mission today ...), these are done in series rather than in parallel, as part of a single job processing operation. There are expected to be many other jobs being processed separately from this one, which do run in parallel (and setting up each leaderboard update as its own queued job in that end of the system may drive me bonkers when debugging).

Therefore the main worry is in the retrieval of rank in these, as described below.


Rank Predictors

The initial prototypical implementation to provide rank information in the responses involved making the other requests to form up the dataset for the response, followed by looping round and essentially executing a big count to find the rank.

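In essence, that means a count-style select along these lines (domain/attribute names illustrative; SimpleDB may return a partial count plus a nextToken on a big domain, hence the loop) :-

   // Rank = 1 + number of rows with a better (higher) encoded score.
   String query = "SELECT count(*) FROM `lb_Week-21` WHERE `xp` > '" + encodedScore + "'";
   long rank = 1;
   String nextToken = null;
   do
   {
     SelectResult result = sdb.select(new SelectRequest(query).withNextToken(nextToken));
     // count(*) results come back as a single 'Domain' item with a 'Count' attribute
     rank += Long.parseLong(result.getItems().get(0).getAttributes().get(0).getValue());
     nextToken = result.getNextToken();
   }
   while (nextToken != null);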

Clearly, this is meh-fine when looking at low-traffic uses, or if correct ranking is more important than any semblance of performance, but it does at least provide a 'native' fallback.

When testing on larger datasets, it rapidly became obvious this was going to be painfully slow, particularly when querying for the rank of people in the >100,000 rank bracket (where I usually am on most games).

As a result of this, we implemented and tested an alternative way to provide estimated ranking information.

This is based on performing samplings from the full dataset to refresh a prediction curve periodically as follows :-

  1. Determine the total number of entries in the leaderboard.
  2. Divide total entries by the required sample count to get the 'skip size'.
  3. Generate a query statement for 'skip' and for 'retrieve', using the same WHERE clause (ordering on the sort column).
  4. Execute the retrieve query with a LIMIT of 1 to retrieve a single row's column value - store this along with the rank (1).
  5. Execute the skip query using the nextToken from the previous query.
  6. Repeat the retrieve query (LIMIT 1) for a single row, storing the value along with the rank (1 + skipsize).
  7. Repeat skip + retrieve until the end of the leaderboard.
  8. Generate a score predictor to determine estimated position based on a provided score value.
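A sketch of that sampling pass - the counting helper, SamplePoint type and query shapes are illustrative, and it leans on SimpleDB accepting a nextToken from a matching count(*) select as the skip mechanism :-

   long totalEntries = countEntries(sdb, "lb_Week-21", "xp");        // step 1 - count(*) as above
   long skipSize = Math.max(1, totalEntries / requiredSampleCount);  // step 2

   // step 3 - same WHERE/ORDER for both; note SimpleDB caps LIMIT at 2500,
   // so very large skips would need chunking (elided here)
   String where = " FROM `lb_Week-21` WHERE `xp` IS NOT NULL ORDER BY `xp` DESC";
   String retrieve = "SELECT `xp`" + where + " LIMIT 1";
   String skip = "SELECT count(*)" + where + " LIMIT " + skipSize;

   List<SamplePoint> samples = new ArrayList<SamplePoint>();
   String nextToken = null;
   long rank = 1;
   while (true)
   {
     // steps 4/6 - read the single row at the current position, record (rank, value)
     SelectResult row = sdb.select(new SelectRequest(retrieve).withNextToken(nextToken));
     if (row.getItems().isEmpty()) break;
     samples.add(new SamplePoint(rank, row.getItems().get(0).getAttributes().get(0).getValue()));

     // step 5 - skip forward skipSize rows; the returned nextToken is the new position
     SelectResult skipped = sdb.select(new SelectRequest(skip).withNextToken(nextToken));
     nextToken = skipped.getNextToken();
     if (nextToken == null) break;  // step 7 - ran off the end of the leaderboard
     rank += skipSize;
   }
   // step 8 - fit a curve / interpolation over 'samples' to map score -> estimated rank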
This predictor can be (and is) generated periodically by any job-processing backend node, which then shares the data via a clustered caching mechanism (memcached or hazelcast, for example), so that client-facing nodes only ever get their rankings from a predictor, never having to generate one on the fly (which isn't fast by any means).

N.B. This still needs a slight tweak, as tests against real stats from a previous title showed the predictor was pretty accurate, but only once past the initial top 'N' scores (say 500 of 250,000). As such, an update may be made which either increases the number of samples taken at the top end, or simply reverts to the NativePositionCalculator if the estimated rank is near the top end.


Alternative implementations may be added in the future, which could include :-
  • Singleton updated inverted index for rankings - the downside could involve temporarily showing a ranking for a user based on their old score rather than their current one, unless we can think of a way to still do lookups on this index against scores, getting the 'nearest' 2 scores if no exact match is found and interpolating (or somesuch).
  • Use a different nature of datastore such as Cassandra or MongoDB which does this sort of sorting natively (although semi-ironically Cassandra sorts but really doesn't want to provide the ranking).




Thursday, June 2, 2011

Online Filter Bubbles? Meh!

From some discussion external to here on Online Filter Bubbles, I had this to say and felt the world could survive being bombarded by my opinion, such as it is.


I've seen stuff from a few people like Jeff Jarvis and Matt Cutts (Google Search quality body) implying the bubbling isn't too huge.


From my perspective, it is more akin to what the likes of Desmond Morris of Naked Ape fame have been studying for years.


Also, one could similarly look at peoples' newspaper reading habits -
- one doesn't read the Grauniad because one wants or expects to see things counter to one's leftish viewpoint.
- one doesn't read the Daily Wail because one wants or expects to see things counter to one's rightish viewpoint.


Similarly, in real life one tends not to have many friends whose views diverge drastically from one's own (generally). This tends to be how it is with people - we've always done it, the people building the automated filters claim the effect isn't that heavy, and when we have curated content, we go for extremely bubbled filtering.


So in summary - Online Filter Bubbles?

Tim-meh

Tuesday, May 31, 2011

What next ...

Been a bit busy lately, so apologies.

I have a couple of things which I may write about next :-

Dataloading :

  • Bulk data load into/out of GAE datastore
  • Interfaces for cross service implementation data transfer
Push messaging
  • Options for push messaging (SQS, SNS, PubNub, Atmosphere, ...) - ways to achieve cross-cluster push messaging
  • GAE channels API and push messaging

Leaderboards
  • Paged leaderboard access versus cursor style
  • Truth or fiction - accurate position data versus timely responses versus infinite computing resource

I'll start chewing some of those off in the next day or so!

Tim

Friday, April 15, 2011

On Google AppEngine and keeping one's options open

As I've mentioned previously, one thing I've been looking at recently is making sure I have a caching abstraction which allows me to flick fairly seamlessly between caching implementations.

So far I have working :-

  • Hazelcast (using their distributed Map with ttl support).
  • Memcached (spymemcached client library and xmemcached client library).
  • Google AppEngine Memcached.
[note to self - add links]

Now the last one led me down an interesting road and caused me some pain. With both hazelcast and memcached of normal proportion, I was happily enough able to have a gets and puts model using 'long' version ids. Google AppEngine (GAE) memcached, though, stuffed that plan, as for some reason they went 'clever' on the API and instead wrap the version identifier in an instance of an interface, 'IdentifiableValue'.
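To show the contrast (the key and values here are made up; both snippets use the real client APIs) :-

   import net.spy.memcached.CASValue;
   import com.google.appengine.api.memcache.MemcacheService;
   import com.google.appengine.api.memcache.MemcacheService.IdentifiableValue;
   import com.google.appengine.api.memcache.MemcacheServiceFactory;

   // spymemcached - the version id is a plain long
   CASValue<Object> casValue = spyClient.gets("player:123");
   spyClient.cas("player:123", casValue.getCas(), newValue);

   // GAE memcache - the version id is hidden inside an opaque IdentifiableValue
   MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
   IdentifiableValue current = cache.getIdentifiable("player:123");
   boolean stored = cache.putIfUntouched("player:123", current, newValue);  // false if someone raced us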

Cue Tim going through and having to refactor the API *again* to change over in a similar fashion in order to be able to play nice with this.

Now, I know there's going to be a collision with this API later on. I'm definitely tending down a memcached-style route for the caching, so maybe Hazelcast will eventually sink quietly into the sunset, and I'll add some batch gets/puts and memcached counter support... but that's for the future, once I've got somewhat further into seeing what I need to do to get this trainset to scale.

... but this isn't what I mean by keeping my options open in the title of the post.


What I did realise was, firstly, that while I could build the GAE memcached client library, it would be largely pointless as things stood, as you can only really use it if your app is within the GAE environment. Secondly, I realised that the same testing of assumptions which shook things out with the caching component could have similarly useful effects on the APIs of other components.

And thirdly ... whilst I doubt it'll ever see the light of day in production, if I've built up the app structure in a fashion compatible with GAE, I could actually have GAE as a backup destination for the application.

Oh how clever I felt ... right up until I realised just how many elements were there which needed GAE equivalents implementing... but I got through it, and can now deploy a working front end app to GAE, giving me that tiny bit of extra comfort that I don't have all my eggs in one basket.

There were some interesting elements worth sharing though.

1. Maven support
Last time I looked at Google AppEngine, integrating with maven was clunky to say the least. This has definitely improved, and whilst I did get a little caught out and confused by the SDK installation as a mavenised thing (hint: gae:unpack), needed before doing handy things like uploading a built war project to AppEngine (hint: gae:update), it made sense in the end.

2. Deployments
The architecture split is certainly different due to AppEngine's restrictions, and because essentially it's elephants all the way down (i.e. webapps). What I had intended to split out as a backend webapp splatting jobs into the job queue using spring-quartz instead fits better in AppEngine as handler REST resources triggered by AppEngine cron.xml task definitions in WEB-INF... not necessarily better, or worse, just different. I may come to like it quite a bit, who knows.

Also, I had been heavily tilted towards a parameterized deployment in the style favoured by the likes of Amazon Elastic Beanstalk. Handily enough, that can be largely matched in AppEngine by putting environment variables into the appengine-web.xml file in WEB-INF. If I really wanted to use this, though, I'd probably want to do some more work to get these params injected via the maven build, so I could use maven to say 'clean build update -D<some options>' to build and deploy specific environments - right now it's hardcoded for a particular env setup in the xml files ... which is naughty, but certainly fixable.
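For reference, the env variable part looks along these lines (the application id and values here are made up) :-

   <appengine-web-app xmlns="http://appspot.com/ns/1.0">
     <application>my-app-id</application>
     <version>1</version>
     <env-variables>
       <env-var name="DEPLOYMENT_ENV" value="staging" />
     </env-variables>
   </appengine-web-app>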

3. Task queues
The AppEngine version of a queueing service made my head twist and spin significantly. Going from easy enough SQS or JMS queues to generating tasks as URLs, which then get handled by providing the appropriate handler and chugging on that, seems simple enough, but it caught me out in unexpected ways, particularly with functional testing.

In theory there's the simplified process of generating DeferredTasks, but that generates a problem itself, in that tasks which run by implementing 'run' on 'DeferredTask' would have to somehow get their service contexts re-injected - it turned out easier for my purposes to just take my medicine and implement a handler to hook into a servlet, one way or another.
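To illustrate the snag (the task and locator names here are hypothetical) - a DeferredTask arrives back deserialized with none of its dependencies wired up :-

   import com.google.appengine.api.taskqueue.DeferredTask;
   import com.google.appengine.api.taskqueue.QueueFactory;
   import com.google.appengine.api.taskqueue.TaskOptions;

   public class RegeneratePredictorsTask implements DeferredTask
   {
     private final String leaderboardId;

     public RegeneratePredictorsTask(String leaderboardId)
     {
       this.leaderboardId = leaderboardId;
     }

     @Override
     public void run()
     {
       // Nothing injected survives serialization, so services have to be
       // re-acquired in here, e.g. via some static locator - hence the pain.
       LeaderboardService service = ServiceLocator.get(LeaderboardService.class);
       service.regeneratePredictors(leaderboardId);
     }
   }

   // enqueueing, at least, is pleasantly simple:
   QueueFactory.getDefaultQueue().add(TaskOptions.Builder.withPayload(new RegeneratePredictorsTask("Week-21")));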

4. The big daddy - GAE Datastore
I looked hard and squinty eyed at the various wrappers for ORM or whatever around GAE's very bigtable-ish datastore, but in the end found I had the easiest run at it just using the low-level API.

I had looked into this before through the lens of JPA, and got myself all kinds of confused. Perhaps it's been the experience of working with SimpleDB for the last year or so, and escaping the evil clutches of SQL, but this time it all seemed to make a great deal of sense. The combination of a little bit of transactionality (bubbling back up to the app if you need another go due to transaction conflicts), a simple but rather powerful query model expressed in straight java (I actually prefer this to having to assemble the SimpleDB selects), and the underlying schemaless datastore 'as a service' made for a very quick port from SimpleDB, and for the most part felt very 'right'.
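For flavour, a low-level API query of the kind I mean (the entity kind and property names are made up for the example) :-

   import com.google.appengine.api.datastore.DatastoreService;
   import com.google.appengine.api.datastore.DatastoreServiceFactory;
   import com.google.appengine.api.datastore.Entity;
   import com.google.appengine.api.datastore.FetchOptions;
   import com.google.appengine.api.datastore.Query;

   DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

   // assemble the query in plain java - no string gluing in sight
   Query query = new Query("LeaderboardRow")
       .addFilter("leaderboardId", Query.FilterOperator.EQUAL, "Week-21")
       .addSort("xp", Query.SortDirection.DESCENDING);

   for (Entity row : datastore.prepare(query).asIterable(FetchOptions.Builder.withLimit(100)))
   {
     // each Entity is schemaless - properties come straight off the row
     Long xp = (Long) row.getProperty("xp");
   }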


So ... quite liking Google's AppEngine right now, and the tooling around it was a bit of a pleasure to work with.

Bet if I REALLY had to use it in production I'd find some fun niggles though :D

Tim

Monday, April 4, 2011

What's in your <license>?

Not very long ago, and not very far away, a conversation opened up about checking the licenses used on dependencies, to avoid issues with pulling in dependencies with toxic licensing in relation to commercial work, or at least to avoid pulling them in by accident.

Whilst I suspect this is not exactly a 'new' issue, it's certainly one which people haven't been particularly inclined to deal with in a systematic way in the past.

Now, as I'm doing a lot of my work inside the bounding box of maven.apache.org structured projects, and as I had indeed seen a dependency pulled in with GPLish consequences, I thought I'd look-see if maven could actually be our friend here and help deal with this systematically - heck, it's dealing with dependency pulls in a systematic way, so why not the license checks too?

Well, as it turns out, inside the structure of a maven pom there is a licenses element, which can contain one (well, actually zero, but let's not be picky) or more license declarations, like this :-

   <licenses>  
     <license>  
       <name>The Apache Software License, Version 2.0</name>  
       <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>  
       <distribution>repo</distribution>  
     </license>  
   </licenses>  

Great! I'll just go dig for the maven plugin to validate my project dependencies against their license definitions, and Bob will, indeed, become my father's brother.

Well, it turns out, not so much.

First of all, it didn't look like there was such a plugin (apart from the Apache RAT Maven plugin, but that's specific to Apache projects) - never mind, I should be able to put something together ... so...



.. I did, and here it is: the maven-license-validator-plugin.

I'm not going to bang this drum too hard, as it's actually very little code at all - very, very little indeed. But it works. Which is nice.

Anyway, that being done, I've come to realise just how much of a fricking mess the whole 'license' element is in, in maven.

Take a look at this :-


       <plugin>  
         <groupId>com.googlecode.maven-license-validator-plugin</groupId>  
         <artifactId>maven-license-validator-plugin</artifactId>  
...
         <configuration>  
           <allowedLicenses>  
             <value>SCE</value>  
             <value>Apache License v2</value>  
             <value>Common Public License Version 1.0</value>  
             <value>The Apache Software License, Version 2.0</value>  
             <value>Apache Software License - Version 2.0</value>  
             <value>Apache License, Version 2.0</value>  
             <value>Apache License Version 2.0</value>  
             <value>Apache License</value>  
             <value>Apache 2</value>  
             <value>CDDL 1.1</value>  
             <value>COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.0</value>  
             <value>Common Development and Distribution License (CDDL) v1.0</value>  
             <value>Public Domain</value>  
             <value>Bouncy Castle Licence</value>  
             <value>BSD style</value>  
             <value>Google Web Toolkit Terms</value>  
             <value>ICU License</value>  
             <value>Revised BSD</value>  
           </allowedLicenses>  
           <allowedUnlicensed>  
             <value>javax.servlet:servlet-api:jar:2.5</value>  
             <value>javax.servlet.jsp:jsp-api:jar:2.1</value>  
             <value>asm:asm:jar:3.1</value>  
             <value>commons-httpclient:commons-httpclient-contrib:jar:3.1</value>  
             <value>org.slf4j:slf4j-api:jar:1.5.6</value> <!-- no license info - confirmed at www.slf4j.org/license.html MIT Licensed -->  
           </allowedUnlicensed>  
         </configuration>  
       </plugin>  

1. How many ways are there to write 'Apache License V2'?! I know I could write up a funky regexp to pick out the variants in use and reduce the number of elements, but <sigh/>, and in any case I'd be worried about an overly-relaxed check missing some strange wording.

2. Sun/Oracle not putting any license in their maven artifacts - now, as I understand it, getting those artifacts into Central was a fight ... but no license ... grr.

Anyway - I'm seeing vibes indicating that this is an area the gods of Maven are starting to focus on, but as far as I can see, it's going to be a long hard road towards cleaning this area of maven up and reaching that clear nirvana of being able to easily go 'yep, licenses are a-ok'.

Tim

Tuesday, March 29, 2011

So long and thanks for all the subs...

Ah well, sadly SNS looks like it has too-high latency, so for now, and for the purposes of cache replication ... it's out. A pity, really, but at least it means I'm no longer worrying so much about being able to test pubsub malarkey.

In its place, for now, I've returned to an old friend (of 2 years) in the form of Hazelcast.

I have to say I do like Hazelcast - whilst it provides some 'cool stuff' and supports a decent variety of constructs, at its core it just provides a nice, lightweight (no external dependencies) set of clustered collections.

Having implemented the Hazelcast backed cache though, I have been looking into avoiding a few past mistakes with it.

1. ThreadLocals
This may have changed, but in previous performance/scale testing I found that use of hazelcast instances injects some hefty ThreadLocal data onto the calling thread, and in highly multithreaded scenarios I saw this become a significant chunk of overall memory.
Still ... the good old straightforward ExecutorService comes to the rescue here - at the cost of some more Thread-based fancy footwork, I've wrapped the calls on the Hazelcast-backed Cache implementation with Callables, so that Hazelcast is only ever touched from the executor's own small pool of threads, and I end up with calls a bit like this :-


   @Override  
   public V remove(final String key)  
   {  
     Future<V> future = executor.submit(new Callable<V>()  
     {  
       @Override  
       public V call() throws Exception  
       {  
         CacheEntry<V> existing = getMap().remove(key);  
         return (existing == null ? null : existing.getValue());  
       }  
     });  
     try  
     {  
       return future.get();  
     }  
     catch (Exception ex)  
     {  
       throw new RuntimeException(ex.getClass() + " thrown in executor", ex);  
     }  
   }  

Whilst, as I say, it's a bit of indirection I'd rather not have, I don't have to look at it too often, as it's encapsulated away. I could take it a step further and perform a number of the cache calls remotely using Hazelcast's distributed executors - particularly conditional puts.

2. Data coherence
This one I'm currently slightly less sure on, but I've made the changes locally, and I'm trying to see how well I can live with the consequences ...
Last time I used Hazelcast, I put in quite a chunk of effort to use distributed locking of Map keys and so on, in order to ensure different server nodes didn't attempt concurrent pulls of the same data from the DB, merely to overwrite each other when pushing it into the cache.
Whilst this was great for minimising the number of hits to the DB (SimpleDB in this case), minimising costs, yadda yadda, it increased HUGELY the amount of roundtrip crap going on between nodes, and was a key cause of slowdown when performing data lookups.


(sidebar - in the previous setup, it turned out replicated/coherent cache wasn't actually necessary in any case - don't get bogged down in interesting code, keep looking at the usage scenario and build the simplest system which can meet the requirement)

Instead, I've been looking at how people integrate with the likes of memcached in situations where the framework (*cough* PHP *cough*) doesn't tend to go down the clustering route, and seeing how to trade off: whilst we can't get the absolute minimum amount of backing store activity, we do avoid overwriting cache data in cases where multiple nodes have updated state.

I'll leave that area as a snippet for now, but may put more in later on - ideally this interface style will be amenable to dropping multiple implementations in and out, including memcached.


   @Override  
   public void onGameResultsPosted(GameOwner owner, GameId id, Map<String, String> attributes, Date timestamp)  
   {  
     SimpleGameResult result = new SimpleGameResult(id, owner, attributes, timestamp);  
     int attempts = 0;  
     while (attempts++ < CACHE_MAX_RETRIES)  
     {  
       try  
       {  
         CacheEntry<List<SimpleGameResult>> entry = this.ownerGameResultsCache.getEntry(owner.getId());  
         if (entry == null)  
         {  
           return; // nothing cached for this owner, so nothing to update  
         }  
         entry.getValue().add(result);  
         // conditional (CAS style) put - only succeeds if no other node updated the entry since our read  
         this.ownerGameResultsCache.put(owner.getId(), entry.getValue(), entry.getCas(), entry.getCas() + 1);  
         return; // success  
       }  
       catch (OutOfOrderException ex)  
       {  
         log.info(ex.getClass() + " thrown whilst attempting to include posted results in cache", ex);  
       }  
     }  
     log.warn("Too many OutOfOrder exceptions when attempting to update cache");  
   }  


Any feedback, I'd be interested to hear, even if it's to tell me how badly this will go booom! in my face.