HowTo: Install Memcached With repcached “Built-In Server Side Replication” On Debian Lenny

People probably know about memcached and its high-performance, name-value based memory object cache interface. Its main purpose is to provide an easy-to-use distributed caching engine in a multi-node environment. Have you ever wanted to let memcached handle replication itself?

If you’d like to add high-availability capabilities, you’re usually advised to let the client (or client library) handle replication of your data across all nodes. Let’s say memcached as the storage backend for PHP sessions is configured as follows:

# configuration for php memcache module
session.save_handler = memcache
session.save_path = "tcp://,tcp://"

Now sessions get distributed (not replicated) across both nodes. As soon as one node goes down, you’ll lose all the data it held. As long as you only use memcached as a performance booster and distributor of “cacheable” data, that shouldn’t hurt: the lost entries simply get recreated the same way they were cached in the first place.

To replicate your session data you’d need a custom session handler that connects to each of these nodes on its own and performs the replication itself. The same goes for writing your own cached objects to your beloved memcached: your application still needs to know every node it has to write to.
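In code, client-side replication boils down to the application writing every key to every node itself. Here is a minimal Python sketch of that idea; the node names and the transport callback are made up for illustration and this is not a real client library:

```python
def build_set_command(key, value, flags=0, exptime=0):
    """Frame a memcached text-protocol 'set' command as bytes:
    set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"""
    data = value.encode()
    header = "set %s %d %d %d\r\n" % (key, flags, exptime, len(data))
    return header.encode() + data + b"\r\n"

def replicate_set(nodes, send, key, value):
    """Send the same 'set' to every node - the client bears the HA burden."""
    payload = build_set_command(key, value)
    for node in nodes:
        send(node, payload)

# Example with a fake transport that just records what would be sent:
sent = {}
replicate_set(["ha-01:11211", "ha-02:11211"],
              lambda node, payload: sent.setdefault(node, payload),
              "foo", "bar")
print(sent["ha-01:11211"])  # b'set foo 0 0 3\r\nbar\r\n'
```

Every write happens twice, and the application has to know both addresses — exactly the burden repcached removes.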


Ease replication by letting memcached work it out as repcached

These issues can confuse developers who just want to focus on their application and don’t want to care so much about HA, load-balancing and scalability. With repcached, at least a 2-node HA solution can be set up easily and transparently.

A simple patch adds master-master replication to your memcached. You then only need to connect to your local memcached, which handles replication on its own.

So your application may remain dumb and simple: it uses the local POSIX filesystem (which might be replicated by DRBD, GlusterFS, etc.), the local database (you guessed it: “which handles replication on its own”) and a local instance of your “memcached cluster”. That setup opens up some simple and transparent options for lightweight HA and scalability.


Installation, configuration and testing a 2-node cluster

First grab a copy of the repcached patch, or download the pre-patched memcached source. There is a dependency on libevent which might need to be resolved prior to installation.

root@ha-01 ~ # apt-get install libevent-dev

Now unpack and cd into the directory:

root@ha-01 ~ # tar xvf memcached-1.2.8-repcached-2.2.tar
root@ha-01 ~ # cd memcached-1.2.8-repcached-2.2/

Configure with the --enable-replication option.

root@ha-01 ~/memcached-1.2.8-repcached-2.2 # ./configure --enable-replication
root@ha-01 ~/memcached-1.2.8-repcached-2.2 # make
root@ha-01 ~/memcached-1.2.8-repcached-2.2 # make install

Don’t worry too much about a previously installed memcached Debian package. This manual install just puts a binary in /usr/local/bin/memcached, leaving your “original” memcached untouched in /usr/bin/memcached.

In contrast to the Debian package, you might like to provide the command-line options via a simple defaults file in /etc/default/memcachedrep (rep => ‘replication’), i.e.:


default configuration

## extra command-line options to start memcached in replicated mode
# -x <ip_addr>   hostname or IP address of the master replication server
# -X <num>       TCP port number of the master (default: 11212)
DAEMON_ARGS="-m 64 -p 11211 -u root -P /var/run/ -d -x"
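For illustration, the -x option on each node simply points at the other node. With two hypothetical hosts 192.0.2.1 (ha-01) and 192.0.2.2 (ha-02) — the addresses and the pidfile path are placeholders, not from the original setup — the two defaults files might look like this:

```shell
# /etc/default/memcachedrep on ha-01 (192.0.2.1) - peer is ha-02
DAEMON_ARGS="-m 64 -p 11211 -u root -P /var/run/memcachedrep.pid -d -x 192.0.2.2"

# /etc/default/memcachedrep on ha-02 (192.0.2.2) - peer is ha-01
DAEMON_ARGS="-m 64 -p 11211 -u root -P /var/run/memcachedrep.pid -d -x 192.0.2.1"
```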

With that in place, an init script is set up quite easily by copying and modifying the Debian skeleton (/etc/init.d/skeleton):



#! /bin/sh
### BEGIN INIT INFO
# Provides:             memcachedrep
# Required-Start:       $syslog
# Required-Stop:        $syslog
# Should-Start:         $local_fs
# Should-Stop:          $local_fs
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    memcached - Memory caching daemon replicated
# Description:          memcached - Memory caching daemon replicated
### END INIT INFO

# Author: Marcus Spiegel <>
#
# Please remove the "Author" lines above and replace them
# with your own name if you copy and modify this script.

# Do NOT "set -e"

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin
DESC=memcachedrep
NAME=memcached
DAEMON=/usr/local/bin/$NAME
DAEMON_ARGS="--options args"
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$DESC

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/$DESC ] && . /etc/default/$DESC

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service
#
do_start()
{
	# Return
	#   0 if daemon has been started
	#   1 if daemon was already running
	#   2 if daemon could not be started
	start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
		|| return 1
	start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
		$DAEMON_ARGS \
		|| return 2
	# Add code here, if necessary, that waits for the process to be ready
	# to handle requests from services started subsequently which depend
	# on this one.  As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
	# Return
	#   0 if daemon has been stopped
	#   1 if daemon was already stopped
	#   2 if daemon could not be stopped
	#   other if a failure occurred
	start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
	RETVAL="$?"
	[ "$RETVAL" = 2 ] && return 2
	# Wait for children to finish too if this is a daemon that forks
	# and if the daemon is only ever run from this initscript.
	# If the above conditions are not satisfied then add some other code
	# that waits for the process to drop all resources that could be
	# needed by services started subsequently.  A last resort is to
	# sleep for some time.
	start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
	[ "$?" = 2 ] && return 2
	# Many daemons don't delete their pidfiles when they exit.
	rm -f $PIDFILE
	return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
	# If the daemon can reload its configuration without
	# restarting (for example, when it is sent a SIGHUP),
	# then implement that here.
	start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
	return 0
}

case "$1" in
  start)
	[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
	do_start
	case "$?" in
		0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
		2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
	esac
	;;
  stop)
	[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
	do_stop
	case "$?" in
		0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
		2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
	esac
	;;
  #reload|force-reload)
	# If do_reload() is not implemented then leave this commented out
	# and leave 'force-reload' as an alias for 'restart'.
	#log_daemon_msg "Reloading $DESC" "$NAME"
	#do_reload
	#log_end_msg $?
	#;;
  restart|force-reload)
	# If the "reload" option is implemented then remove the
	# 'force-reload' alias
	log_daemon_msg "Restarting $DESC" "$NAME"
	do_stop
	case "$?" in
	  0|1)
		do_start
		case "$?" in
			0) log_end_msg 0 ;;
			1) log_end_msg 1 ;; # Old process is still running
			*) log_end_msg 1 ;; # Failed to start
		esac
		;;
	  *)
		# Failed to stop
		log_end_msg 1
		;;
	esac
	;;
  *)
	#echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
	echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
	exit 3
	;;
esac

:


After setting up repcached on both nodes you should test it. You can connect to memcached on each node via telnet:

root@ha-01 ~ # telnet 11211

Connected to
Escape character is '^]'.

Now issue a set command to write a test value on just one node. The trailing 3 is the length in bytes of the value that follows; here we store “bar”, and memcached answers STORED:

set foo 0 0 3
bar
STORED

Move over to the other node, connect, and try to read:

root@ha-02 ~ # telnet 11211

Connected to
Escape character is '^]'.
get foo
VALUE foo 0 3
bar
END
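That reply can be decoded mechanically. Here is a minimal Python sketch of parsing a single-key text-protocol get response (not a real client library; the VALUE line carries the key, the flags, and the byte count of the data that follows):

```python
def parse_get_response(raw):
    """Parse a memcached 'get' reply: a VALUE header line, the data,
    then END. A cache miss is just END on its own."""
    head, rest = raw.split(b"\r\n", 1)
    parts = head.split()
    if parts[0] != b"VALUE":
        return None  # cache miss
    key = parts[1].decode()
    flags = int(parts[2].decode())
    nbytes = int(parts[3].decode())
    data = rest[:nbytes]  # read exactly <bytes> bytes of payload
    return key, flags, data

print(parse_get_response(b"VALUE foo 0 3\r\nbar\r\nEND\r\n"))
# ('foo', 0, b'bar')
```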

Try that vice versa and have fun! Finally, tear down one of the nodes after you’ve pushed some more values into memory. The running node still responds with _all_ the data. Excellent.

The real fun part: as soon as you start the unavailable node up again, it regains all the “lost” data from the other master. Go ahead and try it on your own. The same happens if you now tear down the other node and bring it back up. Even data that was written to one node while the other was down gets copied over to the returning node after startup.

This setup is capable of 2-node replication only, yet. Think of building a 4-node cluster with a kind of RAID-10 layout, where replication and distribution are combined. At least in this 2-node HA setup, repcached performs like a charm.
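That RAID-10 idea could be sketched roughly as follows: distribute keys across replicated pairs, where the hashing is the “RAID-0” half and repcached inside each pair is the “RAID-1” half. The node addresses here are hypothetical:

```python
import hashlib

# Hypothetical 4-node layout: two repcached pairs (addresses are placeholders).
PAIRS = [("ha-01:11211", "ha-02:11211"),
         ("ha-03:11211", "ha-04:11211")]

def pair_for(key):
    """Pick one pair per key by hashing; within the chosen pair,
    repcached would mirror the data between both nodes."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return PAIRS[h % len(PAIRS)]

print(pair_for("session-42") in PAIRS)  # True
```

Losing a single node then costs nothing, since its pair partner still holds every key that hashed onto that pair.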

Personally, I’d like to try a circle of three nodes next 🙂 (maybe together with Heartbeat to close any gap in that circle).