

22 Solving Some Common Problems with MySQL

In this chapter, you will find information on how to handle some of the more common tasks with MySQL. This includes making backups, running more than one MySQL server daemon on a single machine, and replicating a database using the update or binary logs.

22.1 Database Replication with Update Log

Now that master-slave internal replication is available starting in Version 3.23.15, using the update log to implement replication is not recommended. See section 11 Replication in MySQL.

However, it is still possible to replicate a database by using the update log or the binary log. See section 23.3 The Update Log. This requires one database that acts as a master (to which data changes are made) and one or more other databases that act as slaves. To update a slave, just run mysql < update_log.* or mysqlbinlog binary_log.* | mysql. Supply host, user, and password options that are appropriate for the slave database, and use the update log from the master database as input.
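
For example, to apply the master's binary logs on a slave, you might run something like the following (the log file names and the connection options are placeholders; adjust them for your setup):

shell> mysqlbinlog hostname-bin.[0-9]* | \
           mysql --host=slave_host --user=some_user --password=some_pass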

If you never delete anything from a table, you can use a TIMESTAMP column to find out which rows have been inserted or changed in the table since the last replication (by comparing the time when you did the replication last time) and only copy these rows to the mirror.
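
A minimal sketch of this idea, assuming the table has a TIMESTAMP column named ts and that the previous replication run took place at '2000-01-01 12:00:00' (the table name, column name, and cutoff value are examples only):

mysql> SELECT * FROM tbl_name WHERE ts >= '2000-01-01 12:00:00';

Only the rows returned by this query need to be copied to the mirror.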

It is possible to make a two-way updating system using both the update log (for deletes) and timestamps (on both sides). But in that case you must be able to handle conflicts when the same data has been changed at both ends. You will probably want to keep the old version to help decide what has been updated.

Because replication in this case is done with SQL statements, you should not use functions in updating statements that may not return the same value on the mirror as they did on the original database. All time functions are safe to use, as the timestamp is sent to the mirror if needed. LAST_INSERT_ID() is also safe to use.

22.2 Database Backups

Because MySQL tables are stored as files, it is easy to do a backup. To get a consistent backup, do a LOCK TABLES on the relevant tables, followed by FLUSH TABLES for the tables. See section 7.32 LOCK TABLES/UNLOCK TABLES Syntax. See section 7.26 FLUSH Syntax. You only need a read lock; this allows other threads to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES statement is needed to ensure that all active index pages are written to disk before you start the backup.
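
For example, to back up a single table at the file level, you might do something like this (the table name and directory paths are placeholders):

mysql> LOCK TABLES tbl_name READ;
mysql> FLUSH TABLES tbl_name;
shell> cp /path/to/datadir/db_name/tbl_name.* /path/to/backup/dir/
mysql> UNLOCK TABLES;

The lock must be held in the same client connection until the file copy has finished; only then should you issue UNLOCK TABLES.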

If you want to make a SQL level backup of a table, you can use SELECT INTO OUTFILE or BACKUP TABLE. See section 7.19 SELECT Syntax. See section 7.13 BACKUP TABLE Syntax.
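
For example (the table name and backup directory are placeholders):

mysql> BACKUP TABLE tbl_name TO '/path/to/backup/dir';

or

mysql> SELECT * INTO OUTFILE '/path/to/backup/dir/tbl_name.txt' FROM tbl_name;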

Another way to back up a database is to use the mysqldump program or the mysqlhotcopy script. See section 15.7 Dumping the Structure and Data from MySQL Databases and Tables. See section 15.8 Copying MySQL Databases and Tables.

  1. Do a full backup of your databases:
    shell> mysqldump --tab=/path/to/some/dir --opt db_name
    
    or
    
    shell> mysqlhotcopy database /path/to/some/dir
    
    You can also simply copy all table files (`*.frm', `*.MYD', and `*.MYI' files) as long as the server isn't updating anything. The mysqlhotcopy script uses this method.
  2. Stop mysqld if it is running, then start it with the --log-update[=file_name] option (an example is shown after this list). See section 23.3 The Update Log. The update log file(s) provide you with the information you need to replicate changes to the database that are made after the point at which you executed mysqldump.
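
For example, if you use safe_mysqld to start the server, enabling the update log might look something like this (`hostname' is just an example basename for the log files):

shell> /path/to/safe_mysqld --log-update=hostname &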

If you have to restore something, first try to recover your tables using REPAIR TABLE or myisamchk -r. That should work in 99.9% of all cases. If myisamchk fails, try the following procedure (this will only work if you have started MySQL with --log-update; see section 23.3 The Update Log):

  1. Restore the original mysqldump backup.
  2. Execute the following command to re-run the updates in the binary log:
    shell> mysqlbinlog hostname-bin.[0-9]* | mysql
    
    If you are using the update log, you can use:
    shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
    

ls is used to get all the update log files in the right order.

You can also do selective backups with SELECT * INTO OUTFILE 'file_name' FROM tbl_name and restore with LOAD DATA INFILE 'file_name' REPLACE ... To avoid duplicate records, you need a PRIMARY KEY or a UNIQUE key in the table. The REPLACE keyword causes old records to be replaced with new ones when a new record duplicates an old record on a unique key value.
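
A sketch of such a selective backup and restore, using a placeholder table and file name:

mysql> SELECT * INTO OUTFILE '/tmp/tbl_name.txt' FROM tbl_name;

and later, to restore:

mysql> LOAD DATA INFILE '/tmp/tbl_name.txt' REPLACE INTO TABLE tbl_name;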

If you have performance problems making backups on your system, you can solve this by setting up replication and doing the backups on the slave instead of on the master. See section 11.1 Introduction.

If you are using a Veritas file system, you can do the following (a command-level sketch appears after this list):

  1. From a client (for example, a Perl script), execute: FLUSH TABLES WITH READ LOCK
  2. Fork a shell, or execute in another client, and mount the vxfs snapshot.
  3. From the first client, execute: UNLOCK TABLES
  4. Copy the files from the snapshot.
  5. Unmount the snapshot.
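
A command-level sketch of these steps; the exact mount arguments and device names depend on your platform and VxFS setup, so treat everything below as placeholders:

mysql> FLUSH TABLES WITH READ LOCK;
shell> mount -F vxfs -o snapof=/dev/vx/dsk/datavol /dev/vx/dsk/snapvol /snapshot
mysql> UNLOCK TABLES;
shell> cp -R /snapshot/mysql/data /path/to/backup/dir
shell> umount /snapshot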

22.3 Running Multiple MySQL Servers on the Same Machine

There are circumstances when you might want to run multiple servers on the same machine. For example, you might want to test a new MySQL release while leaving your existing production setup undisturbed. Or you might be an Internet service provider that wants to provide independent MySQL installations for different customers.

If you want to run multiple servers, the easiest way is to compile the servers with different TCP/IP ports and socket files so they do not all listen on the same TCP/IP port or socket file. See section 15.4 mysqld_multi, program for managing multiple MySQL servers.

Assume an existing server is configured for the default port number and socket file. Then configure the new server with a configure command something like this:

shell> ./configure  --with-tcp-port=port_number \
             --with-unix-socket-path=file_name \
             --prefix=/usr/local/mysql-3.22.9

Here port_number and file_name should be different from the default port number and socket file pathname, and the --prefix value should specify an installation directory different from the one under which the existing MySQL installation is located.

You can check the socket used by any currently executing MySQL server with this command:

shell> mysqladmin -h hostname --port=port_number variables

Note that if you specify ``localhost'' as a hostname, mysqladmin will default to using Unix sockets instead of TCP/IP.

If you have a MySQL server running on the port you used, you will get a list of some of the most important configurable variables in MySQL, including the socket name.

You don't have to recompile a new MySQL server just to start with a different port and socket. You can change the port and socket to be used by specifying them at run time as options to safe_mysqld:

shell> /path/to/safe_mysqld --socket=file_name --port=port_number

mysqld_multi can also take safe_mysqld (or mysqld) as an argument and pass the options from a configuration file to safe_mysqld and on to mysqld.
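
A minimal sketch of what such a configuration file might look like (the group names, paths, port, and socket are examples only; see the mysqld_multi section for the full set of options):

[mysqld_multi]
mysqld     = /usr/local/mysql/bin/safe_mysqld
mysqladmin = /usr/local/mysql/bin/mysqladmin

[mysqld2]
socket     = /tmp/mysql.sock2
port       = 3307
pid-file   = /usr/local/mysql/var2/hostname.pid2
datadir    = /usr/local/mysql/var2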

If you run the new server on the same database directory as another server with logging enabled, you should also specify the names of the log files to safe_mysqld with --log, --log-update, or --log-slow-queries. Otherwise, both servers may try to write to the same log file.

WARNING: Normally you should never have two servers that update data in the same database! If your OS doesn't support fault-free system locking, this may lead to unpleasant surprises!

If you want to use another database directory for the second server, you can use the --datadir=path option to safe_mysqld.
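
Putting these options together, a second server with its own data directory, socket, port, and log file might be started with something like this (all paths and the port number are examples only):

shell> /path/to/safe_mysqld --datadir=/path/to/datadir2 \
             --socket=/tmp/mysql.sock2 --port=3307 \
             --log=/path/to/datadir2/server2.log &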

NOTE also that starting several MySQL servers (mysqlds) on different machines and letting them access one data directory over NFS is generally a BAD IDEA! The problem is that NFS becomes a speed bottleneck; it is not meant for such use. In addition, you would still have to come up with a solution to make sure that two or more mysqlds do not interfere with each other. At the moment there is no platform that does file locking (usually through the lockd daemon) 100% reliably in every situation, and NFS would make the lockd daemon's work even more complicated. So make it easy for yourself and forget about the idea. The working solution is to have one computer with an operating system that efficiently handles threads, and several CPUs in it.

When you want to connect to a MySQL server that is running with a different port than the port that is compiled into your client, you can use one of the following methods:

  1. Start the client with --host=hostname --port=port_number to connect via TCP/IP, or with --socket=file_name to connect via a local Unix socket.
  2. Set the MYSQL_TCP_PORT and MYSQL_UNIX_PORT environment variables to point to the TCP/IP port and Unix socket file before you start your clients.
  3. Specify the default port and socket file in the `.my.cnf' file in your home directory.

