Tuesday, October 23, 2007

Filesystem cache is important

This time, I have completed the import of 10,000 records from Wikipedia into MySQL. That means 7 million words to index. The database and table structure are defined as in the previous examples:

DROP DATABASE IF EXISTS full_text_investigations;
CREATE DATABASE full_text_investigations;
USE full_text_investigations;
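
The table definition itself is in the previous posts; a minimal sketch consistent with the queries below would look like this (the column names id, title and body come from the MATCH example further down, but the exact types are my assumption):

-- Hypothetical reconstruction of the articles table used in these tests.
-- MyISAM is required for FULLTEXT indexes in this version of MySQL.
CREATE TABLE articles (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  body  TEXT NOT NULL,
  FULLTEXT (title, body)
) ENGINE=MyISAM;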

The indexing process is slow (several minutes for 10,000 records), especially when building full-text indexes.

The SQL dump file that defines the 10,000 Wikipedia records of this example is 45 MB in size. Once indexed, these are the MyISAM files created:

# ls -laSh /var/lib/mysql/full_text_investigations/
total 76M
-rw-rw---- 1 mysql mysql 42M Oct 23 16:51 articles.MYD
-rw-rw---- 1 mysql mysql 34M Oct 23 16:51 articles.MYI
-rw-rw---- 1 mysql mysql 8.4K Oct 23 16:25 articles.frm

We may again verify the recently discovered rule here: "Index size ≈ Data size ≈ Dump size".

It is not an exact rule, but so far it seems approximately correct.

I have prepared 10 predefined queries to benchmark the system. All of them have this form:

SELECT id, title, LEFT(body, 64) FROM articles WHERE MATCH (title,body) AGAINST ('keyword');

The queries use 10 distinct keywords. Some keywords produce more results than others. MySQL gives these execution times:

37 rows in set (0.92 sec)
82 rows in set (1.31 sec)
151 rows in set (1.74 sec)
9 rows in set (0.18 sec)
1 row in set (0.01 sec)
Empty set (0.00 sec)
14 rows in set (0.17 sec)
108 rows in set (1.28 sec)
10 rows in set (0.12 sec)
38 rows in set (0.53 sec)

If we correlate the number of rows with the elapsed time, we find an important rule of thumb: "0.01 seconds to obtain 1 record" (more precisely, somewhere between 0.01 and 0.025 seconds per row returned). Remember this rule; it will come up again.
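The rule is easy to check from the timings above. A quick sketch with awk (the "rows seconds" pairs are the results listed above; the empty set and the 1-row query are skipped as too small to be meaningful):

```shell
# Seconds-per-row ratio for each of the cold-cache queries above.
printf '%s\n' \
  '37 0.92' '82 1.31' '151 1.74' '9 0.18' \
  '14 0.17' '108 1.28' '10 0.12' '38 0.53' |
awk '{ printf "%4d rows: %.4f s/row\n", $1, $2/$1 }'
```

Every ratio falls between roughly 0.01 and 0.025 seconds per row, which is where the rule of thumb comes from.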

Besides, we can see that timings greater than 1 second are rare. A time of 1 second to resolve a search engine query may seem acceptable, but it is NOT: we cannot keep a server busy for a full second just to answer a single query. A server of this kind wouldn't be able to support a realistic load of 10,000 visits/day, for example. That is barely 0.12 queries per second on average, but peaks can be an order of magnitude higher; think about the peaks, not only the valleys, in the server access stats.

Well, that is precisely the goal of these investigations: to find out the real performance of MySQL full-text search as a textual search engine.

If we re-run the 10 previous queries one by one, we find an important result:

37 rows in set (0.01 sec)
82 rows in set (0.01 sec)
151 rows in set (0.01 sec)
9 rows in set (0.01 sec)
1 row in set (0.01 sec)
Empty set (0.00 sec)
14 rows in set (0.01 sec)
108 rows in set (0.01 sec)
10 rows in set (0.01 sec)
38 rows in set (0.01 sec)

0.01-second replies for almost every query. That is really fast, and good news for building a search engine. But it is only the bias of running exactly the same queries a second time, and the filesystem cache is responsible. If we had configured MySQL to use its own query cache, the results would have been just as good, but then the credit would have gone to the MySQL query cache instead of the filesystem cache.
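To rule out the MySQL query cache as the cause, we can check its size and bypass it explicitly (a sketch; query_cache_size and the SQL_NO_CACHE modifier are standard in MySQL 5.x):

-- A value of 0 means the query cache is disabled.
SHOW VARIABLES LIKE 'query_cache_size';

-- SQL_NO_CACHE bypasses the MySQL query cache,
-- but NOT the filesystem cache.
SELECT SQL_NO_CACHE id, title, LEFT(body, 64)
FROM articles
WHERE MATCH (title, body) AGAINST ('keyword');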

The filesystem cache is the key to understanding MySQL's performance behaviour under extreme circumstances like the ones we are trying to investigate. When a process on the server reads a file, the file is kept in RAM (if it fits, of course). This way, any further reading of that file avoids a physical access to the disk drive, obtaining the data directly from RAM. The operating system makes sure the file hasn't changed in the meanwhile, between reads, whenever it uses the cache to speed up file reading.
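The effect is easy to reproduce outside MySQL with any large file. A rough sketch, using a scratch file as a stand-in for articles.MYI (flushing the caches via /proc/sys/vm/drop_caches is Linux-specific and needs root; without it, the first read may already be warm because we just wrote the file):

```shell
# Create a 64 MB scratch file standing in for articles.MYI.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null
sync
# Flush the filesystem cache (Linux-only; silently skipped if not root).
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

time cat "$f" > /dev/null   # cold read: disk (if the cache was flushed)
time cat "$f" > /dev/null   # warm read: served from the RAM cache
rm "$f"
```

On a cold cache the first read takes on the order of a second on 2007-era disks, while the second finishes almost instantly, exactly like the re-run queries above.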

We can observe that the cached size has increased (to about 138 MB) after running the MySQL queries:

# free
             total       used       free     shared    buffers     cached
Mem:        255504     174508      80996          0       5864     138916
-/+ buffers/cache:      29728     225776
Swap:            0          0          0

So the main conclusion is this one: "The filesystem cache sharply improves MySQL performance". And we pay "0.01 seconds to obtain 1 record" when the cache is not playing its role, because the files (or file chunks) involved have not been read before or are not up to date in RAM.

In the next investigation we will index 30,000 records and measure the file sizes and the query response times. We will find out which files are essential for the filesystem to cache.
