Tuesday, November 17, 2009

Scaling with MySQL replication


To deal with Drupal's growth, we're adding a second database server to drupal.org, which is useful for at least two reasons. First, we'll be able to handle more SQL queries, as we can distribute them between multiple database servers (load balancing). Second, the new server can act as a "hot spare" that can immediately take over if the other database server fails (high availability / fail-over).

The current plan is to configure both database servers in a master-slave configuration, which is the most common replication model for websites. This model provides scalability, but not necessarily fail-over. With a master-slave configuration, all data modification queries (like INSERT, UPDATE and DELETE queries) are sent to the master. The master writes updates to a binary log file, and serves this log file to the slaves. The slaves read the queries from the binary log, and execute them against their local copy of the data. While all data modification queries go to the master, all the read-only queries (most notably the SELECT queries) can be distributed among the slaves. By following this model, we can spread the workload amongst multiple database servers. And as Drupal.org's traffic continues to grow, we can scale horizontally by adding more slaves.
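For reference, a minimal master-slave setup along these lines involves little more than enabling the binary log on the master and giving each server a unique ID. A sketch (server IDs and log file names are arbitrary); the slave is then pointed at the master at runtime with CHANGE MASTER TO ... followed by START SLAVE:

```ini
# Master my.cnf: enable the binary log that slaves read from.
[mysqld]
server-id = 1
log-bin   = mysql-bin

# Slave my.cnf: only needs a unique server ID; the master's address and
# replication credentials are set at runtime with CHANGE MASTER TO.
[mysqld]
server-id = 2
```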

While MySQL does the database replication work, it doesn't do the actual load balancing work. That is up to the application or the database abstraction layer to implement. To be able to distribute queries among multiple database servers, the application needs to distinguish between data modification queries and read-only queries.

Care needs to be taken, as the data on the slaves might be slightly out of sync. It may or may not be practical to guarantee a low-latency environment. In those cases, the application might want to require that certain read-only queries go to the master.
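To make this concrete, here is a minimal sketch (in PHP, with made-up function names — this is not actual Drupal API) of how an application might route queries, including a way to force lag-sensitive reads to the master:

```php
<?php
// Minimal sketch of application-level query routing; db_query_route() and
// _db_is_read_query() are hypothetical names, not part of the Drupal API.

function _db_is_read_query($query) {
  // Only plain SELECTs are considered slave-safe; everything else
  // (INSERT, UPDATE, DELETE, LOCK TABLES, ...) must go to the master.
  return preg_match('/^\s*SELECT\b/i', $query) === 1;
}

function db_query_route($query, $needs_fresh_data = FALSE) {
  // Lag-sensitive reads can be pinned to the master explicitly.
  if ($needs_fresh_data || !_db_is_read_query($query)) {
    return 'master';
  }
  return 'slave';
}
```
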

There are different ways to accomplish this:

Drupal executes all SQL queries through db_query(). Traditionally, big Drupal sites manually patched db_query() to use query parsing (regular expression foo) to separate read queries from write queries. This is not convenient and it doesn't provide a good solution to deal with lag. Fortunately, work is being done to provide better support for database replication in Drupal 6. We intend to backport this to Drupal 5 so we can use it on drupal.org until Drupal 6 has been released and drupal.org has been upgraded to use Drupal 6.

MediaWiki, the software behind Wikipedia, uses $db->select() and $db->insert(). They have some documented best practices to deal with lag.

Neither the Pear DB database abstraction layer nor its successor Pear MDB2 seems to support database replication.

Wordpress uses HyperDB, a drop-in replacement for Wordpress' default database abstraction layer that provides support for replication. It was developed for use on Wordpress.com, a mass hosting provider for Wordpress blogs. Because HyperDB is a drop-in replacement, they don't have a clean API and just like Drupal 5, they have to use query parsing to separate read queries from write queries. It's not clear how they deal with lag.

Joomla! 1.0 does not separate read queries from write queries, but Joomla! 1.5 will use functions like $db->insertObject() and $db->updateObject(). Joomla! 1.5 won't support replication out of the box, but their API allows a clean drop-in replacement to be developed. It's not clear how they would deal with lag.

PostNuke uses the ADOdb database abstraction library, which at first glance does not support database replication either.

Java applications use the Statement.executeQuery() and Statement.executeUpdate() methods that are part of the standard class libraries. It's not clear how they deal with lag.

What other popular applications support database replication, and what do their APIs look like? I'd like to find out what the ideal API looks like so we can still push that for inclusion in Drupal 6.

Based on the research above, I think we should get the best of all worlds by introducing these three functions (and deprecating db_query()):

db_select_slave(); // results might be slightly out of date

db_select_master(); // results always up to date

db_update(); // for UPDATE, INSERT, DELETE, etc

Even if they don't actually do anything useful in Drupal 6, and just map onto the deprecated db_query(), they set a better standard, and they allow for a drop-in replacement to be developed during Drupal 6's lifetime. Ideally, however, Drupal 6 would ship with a working implementation.
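As a sketch of how these could be introduced without breaking anything, the new functions might at first simply wrap db_query() (stubbed out below for illustration — this is not the actual implementation):

```php
<?php
// Sketch: in Drupal 6 the three wrappers could simply map onto the existing
// db_query(), which is stubbed here; a drop-in replacement could later route
// db_select_slave() to a real slave connection.

function db_query($query) {
  // Stand-in for Drupal's db_query(); pretend we executed $query and
  // return it so the wrappers below have something to forward.
  return $query;
}

function db_select_master($query) {
  // Reads that must see up-to-date data.
  return db_query($query);
}

function db_select_slave($query) {
  // Reads that tolerate slightly stale data; falls back to the master
  // until a replacement with real slave support is installed.
  return db_query($query);
}

function db_update($query) {
  // All data modification queries (INSERT, UPDATE, DELETE, ...).
  return db_query($query);
}
```
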

July 13, 2007 - 17:05


Drupal performance



I suggest also aliasing db_select() to db_select_slave().

Also, you might change db_update() to db_change() or db_modify() — since UPDATE is an SQL reserved word, the name might lead users to expect a db_insert() and db_delete() and db_alter_table(), etc.
July 13, 2007 - 18:03


Larry Garfield:

Actually I do think there should be db_insert(), db_delete(), etc. There's already a contrib module that provides them, and I'd love to have them in core. :-)

I would also recommend aliasing db_select() not to the slave but to the master server. Not all SELECT queries are slave-safe, and it's non-trivial to tell programmatically when that's the case. See the issue Dries linked in the "Drupal" bullet point.
July 13, 2007 - 18:17



Actually I do think there should be db_insert(), db_delete(), etc. There's already a contrib module that provides them, and I'd love to have them in core.

What are the advantages?
July 13, 2007 - 18:19


Larry Garfield:

Oh dear, I think we're veering off topic, but since you asked... :-)

I'm basically talking about this issue. I've been using similar routines in projects for over a year and they greatly simplify life.

Structured data should remain structured as long as possible and be serialized as late as possible, since serialization loses semantic meaning.

When dealing with very long INSERT or UPDATE queries, especially if you have an if-else involved, a unified syntax/API saves gobs of time just typing the darned thing.

We frequently have complex conditionally-built queries in core, and in contrib. Manually building a dynamic INSERT statement means maintaining three(!) arrays: one for the field names, one for the placeholders, and one for the values. With a generic array syntax, you maintain one array and the rest is handled automatically. That also means fewer opportunities for syntax errors. It's also more secure than what I sometimes see done, which is to implode the values with commas and use a single %s placeholder.

One of the objections you had to the previous implementation was that it wasn't type-safe and was therefore a potential injection hole. The imperfect solution at the time was to require developers to cast the values to string, int, or float when building the array, which was net no worse than expecting them to know to specify %s vs. %d. However, I am going to try and move to a PDO back-end for Drupal 7. PDO is safely type-agnostic on prepared statements. (It handles type quoting internally.) That means we don't need to worry about casting or specifying the right escape code, which makes our code much shorter and faster.

As I mention below, db_query()'s ability to accept an arbitrary number of parameters comes at a cost: Default flag values are difficult to impossible. However, consider a signature like so:

function db_insert($table, $fields, $delay_safe = FALSE);

Now you can safely and easily flag an insert statement as INSERT DELAYED (where available) without needing to worry about variable parameter count. (Actually the more I think about it, the more I think moving to array-only for prepared statement values is a good thing.)

As a bonus, the WHERE clause builder we'd get out of it is useful for complex SELECT statements. Sure, most SELECT statements are simple enough that there's no good reason to set up a full query builder. But there are edge cases, including in core, where building WHERE clauses dynamically right now involves a lot of if-else-push-implode logic. I see no good reason why that shouldn't be generalized.

We implicitly know that db_insert()/db_update()/db_delete() are not slave-safe. However, when/if we start dealing with master/master replication it's good to know semantically what sort of operation we're dealing with, without resorting to regexes.

One-off importers (say, pulling in from a CSV file) become much easier since you're just reading an array and writing an array. (OK, we don't do that very often in production Drupal code but I have done it in prep-scripts for projects and it's saved me a lot of time.) Someone in the original issue pointed out that we use keyed arrays everywhere else, so doing so in another place opens up interesting possibilities.
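The single-keyed-array point above can be sketched like so (a hedged illustration: db_insert_sql() is a made-up helper name, and the %d/'%s' placeholders follow Drupal's db_query() convention):

```php
<?php
// Hypothetical sketch: build an INSERT from one keyed array instead of
// maintaining separate field/placeholder/value arrays by hand.

function db_insert_sql($table, array $fields) {
  $names = array();
  $placeholders = array();
  foreach ($fields as $name => $value) {
    $names[] = $name;
    // Pick the escape code from the PHP type, so callers don't have to.
    $placeholders[] = is_int($value) ? '%d' : "'%s'";
  }
  return 'INSERT INTO {' . $table . '} (' . implode(', ', $names)
    . ') VALUES (' . implode(', ', $placeholders) . ')';
}
```
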

As I said, that's a bit off-topic for this thread, but you asked. :-) I'll come back to it when I get thoughts for a PDO push written down nicely. For now, I recommend adding slave-server support in the most syntactically non-invasive way we can (David's patch in the linked issue is fine) to avoid dragging out Drupal 6 or making a Drupal 5 backport more difficult. Then we can look at a more developed and refined system as part of a larger overhaul in Drupal 7.
July 14, 2007 - 10:00


Curtis Chambers:

I'd also add that there might be future performance benefits in splitting them out, because with the MyISAM engine the entire table locks on an UPDATE but not on an INSERT. So at some point you might want a scheduler to make sure a long string of UPDATE statements doesn't lock out all other queries.

However, if you have that many UPDATE statements it might just be time to switch to InnoDB.
July 26, 2007 - 20:26



I realize that, given the current state of the database schema, Master / Master replication on MySQL is not possible out of the box. However, Master / Slave replication will only solve your current scaling problems and get Drupal.org somewhat scalable and somewhat redundant. For a really large-scale implementation, Master / Master replication support should eventually be looked at.
July 13, 2007 - 20:52



It would be great to support master-master configurations.

I've never set up a master-master configuration myself, but I know that the implementation of db_next_id() in Drupal 5 and below was problematic for master-master configurations. In Drupal 5, db_next_id() used a custom sequences table, ignoring MySQL's auto_increment feature.

Self-generated keys are a pain when you want to do multi-master replication. If master A and master B both insert a row in the same table, the self-generated keys might conflict.

MySQL 5 introduced an auto_increment_increment and an auto_increment_offset variable to control the behavior of MySQL's auto_increment columns. With these variables, you can configure master A to insert rows with even keys, and master B to insert rows with odd keys. This effectively avoids conflicts.

In Drupal 6, we reworked db_next_id() to get rid of the custom sequences table and to rely on MySQL's auto_increment feature instead. Thus, as of Drupal 6, you should be able to take advantage of MySQL's auto_increment_increment and auto_increment_offset variables. (Pretty much all databases support auto-incrementing keys nowadays.)
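Sketched as my.cnf fragments (the values are illustrative), the even/odd scheme described above looks like this:

```ini
# Master A: generate even keys (2, 4, 6, ...).
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2

# Master B: generate odd keys (1, 3, 5, ...).
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1
```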

Combined with the proposed API changes, this should pave the way to supporting master-master replication in Drupal 6. Except that maybe we need a notion of database sessions, so that read-only queries that need to work on an up-to-date copy of the data can go to the right master? Anything else that we are overlooking that we might have to expose in the API?
July 15, 2007 - 11:42



My experiences with configuring MySQL in a Master / Master configuration lead me to firmly believe it's a bad idea. The problem is that MySQL replication is best-effort and offers no data delivery guarantees of any sort, so any data loss or corruption (i.e. from a network hiccup, a file system error, a system administration error, ...) is quietly ignored. If it doesn't lead to an ID conflict, nobody complains, yet you're now running with two "Masters" that contain different versions of the data. And errors often tend to cascade. There's a reason why MySQL calls the Master / Master configuration "unsupported".

Certainly there are hashing tricks that can be used to detect such data loss or corruption, but then comes the question of what is to be done when it's detected. Which Master is the correct one? Depending on the form of data loss or corruption, the answer may be "neither".

Finally, a Master / Master configuration would reintroduce the latency issues that Dries seems to be trying to solve by being able to select the Master or the Slave. In Master / Master mode, you no longer can guarantee a connection to a database with the absolute latest data.

BTW: core support for sharding would make an interesting (and imho much better) alternative to a Master-Master configuration.
August 2, 2007 - 18:28


Nicholas Thompson:

Why not add a parameter to db_query() (somehow) which decides whether a query goes to the master or a slave, and default it to master? This means all existing functions and modules will still "work" but won't be optimal.

It's then a simple matter of going through the functions, checking which are slave-safe, and adding the flag.

Is this possible without breaking db_query()?
July 14, 2007 - 00:03


Larry Garfield:

Not without making it required, no. db_query() takes a variable number of arguments, which means that there's no good way to detect if the last value is a flag saying "this is slave-safe" or if it's a value to be mapped into the query. db_query_range() gets away with it because the last two parameters are always required. I don't think we can get away with that with db_query(), especially not in Drupal 6 at this point.

The only way that would be feasible would be if the flag was only checked if using the one-big-array syntax option for db_query(). That is, if you wanted to write a query against the slave server, you would have to use:

db_query("SELECT * FROM {foo} where fid = %d", array('fid' => $fid), TRUE);

While that's a possible long-term solution, I'm not sure that's feasible for a Drupal 5/6 time frame. A separate function, however, is easy to slip in quickly and then backport/implement during freeze without disrupting too much.
July 14, 2007 - 09:33



Implementation details aside, many of the Drupal developers don't know how database replication works, nor do they get to test their code on a system that uses database replication. In other words: many developers will happily ignore such flags.

If we want all contributed modules to properly support replication, breaking backwards compatibility seems to be a requirement. It's going to make for a better solution in the long run, and that is what I'm concerned about most. There is always some pain before the pay-off ...
July 15, 2007 - 11:51


Larry Garfield:

Deliberately breaking APIs for the purpose of getting a developer's attention is not a strategy I've seen before. :-) Darwin would either be proud or aghast...

I'm not going to say that's necessarily a bad idea, but I do think it's a bad idea right now. Drupal 6 is theoretically in feature freeze. Let's not make database replication into Drupal 6's FAPI fiasco.

For now, minimal API changes to allow master/slave replication. That should give us a comfortable breather for drupal.org and similar sites. In Drupal 7 we can break the whole thing in a dozen ways at once, but for now the more we break, the more we delay Drupal 6. Let's not do that.
July 16, 2007 - 03:59



I think we need to introduce the new functions in Drupal 6, but leave the old db_query() in place for one more release. The new functions could map directly onto db_query(), but at least developers could start writing code against the new API and a drop-in replacement could be developed. As long as we leave db_query() intact for one more release, no damage is done.
July 16, 2007 - 07:22


Larry Garfield:

That means we need to decide now, and implement now, what the new API will look like. If we are going to move to PDO in Drupal 7, as I want to try and do, that is going to have a big impact on the API anyway. So we then either have an API that lasts just one version or we nail down now what the Drupal 7 database API will look like. (I've been trying to avoid dealing with that until Drupal 6 was squared away, so that we focused on getting D6 stable. If you want me to start writing up a proposal now, let me know.)

I guess I just don't see the value in:

function db_query() { ... }

function db_query_master() {
  return db_query();
}

function db_query_slave() { ... }

db_query_master() seems pointless, unless we're absolutely sure that's what and how it's going to work in Drupal 7.
July 16, 2007 - 07:59


moshe weitzman:

The proposal is as simple as possible while still providing great value. Excellent.
July 14, 2007 - 16:53



I think that the "slave-safe" option is a good idea, but it doesn't solve all problems.

Actually it would be nice to rewrite database.mysql.inc in order to support multi-master (or master-master) replication.

Even better would be to create a new type of database connection in a new file (for example database.multi-mysql.inc), with special care for the db_next_id() function and the sequences table (and other replication-related issues).
July 15, 2007 - 01:22



I wonder how this thread fits into the larger context of discussions aimed at broadening db support for Drupal (SQL Server, Oracle, etc.). I understand that the underlying goal of this activity is aimed at resolving Drupal.org's performance problems (which I fully support), but would love to get a feel for whether or not it affects the perceived desire to support high performance implementations of Drupal on non-MySQL databases. I hope that makes sense! Keep up the good work ...
July 15, 2007 - 04:48


moshe Weitzman:

Everyone wants to support more databases. That goal is not in jeopardy and this feature will not obstruct that effort at all.
July 17, 2007 - 17:01



I thought that was probably the case, but it is nice to hear that the two efforts aren't in conflict :)
July 18, 2007 - 17:18


Sam Tresler:

Originally I was for keeping it as simple as possible. My posts on the issue on D.O. were based around a simple db_query() and db_query_slave(), in which the latter defaulted to the former in a single-server configuration.

Indeed, I think this still covers 95% of Drupal use cases and would also work towards alleviating D.O.'s current issues.

However, this new proposal allows for better horizontal scaling through application-based load balancing. The issue with database load balancing is that not all queries are created equal. HTTP load balancing, in essence, can be 'round-robin' or even relatively random across a pool of servers, but that works because the build time of one page differs only negligibly from the build time of any other page.

However, SQL queries aren't all the same, and it becomes useful to have the application do at least rudimentary load balancing.

I would be interested to see an amalgam of sorts. I like all three of these functions, but is it possible to have db_select_slave() functional through an additional setting in settings.php - again defaulting to db_query() - while the other two default to db_query() as suggested?

This approach is as risk-free as the others and yet also opens the door to people patching D6 with the incentive of increased performance, as opposed to patching it because D7 will need it.
July 17, 2007 - 06:00


Edison Wong:

Just for brainstorming: would the Oracle driver for Drupal 6 be a solution?

Oracle is an enterprise-level database; Oracle 10g, for example, is mainly targeted at database clustering. It's a "grid computing" solution much more suitable for huge sites, when compared to MySQL's master-slave solution.

On the other hand, the performance of the latest Drupal Oracle driver implementation is really amazing: it is almost as fast as MySQL's. I did some simple stress tests, and the Oracle driver for Drupal is only 7% slower when tested on the same system. Don't forget that we need to do a "duplicate SELECT" for every SELECT query to overcome oci8's oci_num_rows() limitation. Oracle's internal query caching and boosting system seems to help out. :)

BTW, the world is not perfect: Oracle is not free. We will need to pay for it if we choose the enterprise edition (I didn't study the XE version much; hopefully it is free of charge...).

Anyway, compared with Oracle, EnterpriseDB may also be a good choice. It is an "Oracle clone" based on PostgreSQL, and comes with some successful case studies. This may also be a good solution for drupal.org's heavy load :)
July 28, 2007 - 20:06



Meh. You do not need Oracle to solve the problems of such a small site as Drupal.org. We will solve it eventually without having to *maintain* Oracle. Where I work, we used PostgreSQL and even that turned out to be a human resource nightmare -- MySQL 'admins' are a dime a dozen but finding good pg people is hard. I can only imagine how hard it is to find Oracle people, who in this case would need to work for free. Forget it.
July 29, 2007 - 02:19


Edison Wong:

Err... This should also be a consideration... I guess it would be better if we chose EnterpriseDB (similar to PgSQL but better in performance)...

Besides MySQL's master-slave solution, why not use MySQL 5.1's clustering feature? I know that will require more hardware, but you get better performance. On the other hand, it is MySQL's product ... ;)
July 29, 2007 - 03:25



Now that Drupal can run under the Quercus Java PHP implementation, which is just as fast as APC (or faster if you use the JRockit JVM), what about solving the DB scaling issue by using the open-source Java products Sequoia C-JDBC or HA-JDBC, which can automatically manage a cluster of MySQL DBs in a RAID-like configuration? It's transparent and plugs in as the JDBC driver layer, which the Java PHP solution can make use of.

It does synchronous writes to all DBs and reads from one DB, so performance should be quite good. Drupal code does not need to be changed, as it makes the DBs look like a single DB. I guess there is still the PK auto-generation issue to solve.

Has anyone considered this as a possible solution?

Best Regards

Carlo
August 16, 2007 - 16:44




I recently set up a replicated/load balanced environment, the application uses the AdoDb abstraction layer.

For this purpose, I wrote a subclass for AdoDb that performs writes to the master, and reads from the slaves.

I hope this class helps anyone. It can be easily modified for your own purposes and does not involve changing code in the publication layer of your application, just use the custom AdoDb Instance.

It doesn't really deal with lag, but my ISP set up the server so that write queries are transferred almost instantly, and this does not cause problems on the site I am using it on.

The class may be found at:

http://www.24hoursmedia.com/wordpress/docs/mysql-replication-and-using-a...
November 17, 2007 - 14:58



RE: Wordpress uses HyperDB, a drop-in replacement for Wordpress' default database abstraction layer that provides support for replication.

The whole WordPress DB class is based on ezSQL; the original ezSQL class features query caching, and HyperDB reflects this as well.

I have been contemplating working on a type of MySQL replication that doesn't worry too much about the difference between reads from the slaves and the up-to-date data on the master (just as cached queries have a timeout of hours), and of course, the DB writes are made to the master...

I have bookmarked this page as there are a lot of pointers I have not taken into account... and to be honest... HyperDB seems very messy.

Thanks :)
February 16, 2008 - 17:09



I think you should just redo Drupal with CakePHP or Symfony...you will gain a much more structured application that will make it easier to extend and work with...on top of a faster (true) MVC design pattern....AND most importantly of all - freedom to scale however you wish. It is INCREDIBLY easy to drop in more database servers. Your scaling issues with Drupal will cease to exist if you do so. Drupal can probably still be Drupal under one of these frameworks as well. You can probably organize it just the same and who knows, with a little bit of elbow grease you may be able to have some backward compatibility for modules.

Drupal is great - but it has to keep up with the times. I think you're really shooting yourself in the foot and overworking yourself. You put a LOT of great work in, but you started off on a different path than what the rest of the world seemingly adopted...as a result, you're missing out on some cool things. Swim with the current, not against it. While you've done great things, you aren't going to change the world with Drupal in its current form. Sad to say. As much as I rely on it and work with it every day.
April 11, 2008 - 16:28


frederic sidler:

You can also use mysql-proxy, which does the trick for you. There is a mode called rw-splitting… that splits reads/writes between the master and the slaves. You don't need to change anything in your code ;-). If you need even more, like sharding, have a look at hscale.org. They provide Lua scripts to manage that with mysql-proxy.
July 14, 2008 - 23:28
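For what it's worth, a typical invocation looks roughly like this (a sketch assuming MySQL Proxy 0.6-era command-line options; the hosts, ports, and script path are made up):

```sh
# Route writes to the master and reads to the slaves via rw-splitting.lua.
mysql-proxy \
  --proxy-address=:4040 \
  --proxy-backend-addresses=master.example.com:3306 \
  --proxy-read-only-backend-addresses=slave1.example.com:3306 \
  --proxy-read-only-backend-addresses=slave2.example.com:3306 \
  --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua
```

The application then connects to port 4040 as if it were a single MySQL server.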



The MySQL Proxy rw-splitting.lua script does not work with Drupal. Could you share your experience of using MySQL Proxy with Drupal 6?
September 29, 2008 - 06:30



If you are willing to go the key/value DB route, Schemafree offers distribution and replication of writes, reads from mirrored masters in case of failure, and a sync-after-restart feature that is in progress.

http://code.google.com/p/schemafree/
May 13, 2009 - 15:33

