Lee Daniel Crocker wrote:
Absolutely; lies, damned lies, and benchmarks, and all
that.
Improving the benchmarks that apply to the Wikipedia db will hopefully improve
the situation out in the wild. So yeah, the rest is just lies.
Disk I/O may well be a major culprit. Memory/CPU usage probably isn't.
I'll also run some tests for things like having the database on a
separate machine
...
I'd also appreciate suggestions for other benchmarks (specific
MySQL settings, for example).
Even if your system has plenty of memory, MySQL may not be configured to use
it. What do your settings in my.cnf look like? These settings will also
differ for MyISAM and InnoDB tables.
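As a rough sketch (the values here are illustrative starting points, not tuned
for Wikipedia's actual workload or hardware), a my.cnf fragment raising the
relevant buffers might look like:

```
[mysqld]
# MyISAM: only index blocks are cached in the key buffer;
# table data relies on the OS file cache.
key_buffer_size = 256M

# InnoDB: caches both data and indexes in its own pool; on a
# dedicated db box this is often a large fraction of RAM.
innodb_buffer_pool_size = 1G

# Per-connection buffers -- remember these get multiplied by the
# number of concurrent connections.
sort_buffer_size = 2M
read_buffer_size = 1M
```

Check the current values with SHOW VARIABLES before and after; the defaults on
most distributions are sized for tiny machines.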
Improving disk throughput usually translates to new hardware. You can try a
different file system or block size. XFS for Linux is improving. You may
want to compare it to ReiserFS. If you are going to test different block
sizes for the db, partition accordingly with the db on a separate partition
from the OS, Apache, PHP and MySQL binaries. This way, you can leave the
binary partitions at a smaller block size and adjust the db partition
without affecting the others. When installing your db on a second machine do
the same; isolate your binaries from your data.
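For instance (device names are hypothetical; note that XFS block size is fixed
at mkfs time, so testing a new size means dump, re-mkfs, and reload):

```
# Binaries (OS, Apache, PHP, MySQL) stay on /dev/sda1 with the
# default block size; the db data gets its own partition.
mkfs.xfs -b size=4096 /dev/sdb1     # first run: 4K blocks
mount /dev/sdb1 /var/lib/mysql

# For a later run with a different block size, dump the db,
# then re-create the filesystem, e.g.:
#   mkfs.xfs -b size=1024 /dev/sdb1
```

This way only the data partition is touched between test runs.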
Monitoring with mytop could be interesting:
http://jeremy.zawodny.com/mysql/mytop/
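A typical invocation might be something like the following (user and db names
are made up; mytop will prompt for the password with --prompt):

```
mytop --user=wikiuser --db=wikidb --delay=5 --prompt
```

It gives a top-like view of the process list, queries per second, and cache
hit ratios, which should make it easier to see which benchmark runs are
actually stressing the server.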