Technouz

Simple Way To Cache MySQL Query Results With PHP

By Zahid Mahmood / 17 February 2017

A recent project I was working on required me to build a very simple REST API. I was using AngularJS as the front-end, and decided to use a dead simple PHP and MySQL setup for the back-end.

I created several endpoints using PHP which simply extracted data from a MySQL database and printed it to the screen in JSON format. The $http.get() request from AngularJS simply read the JSON response and dealt with the data as necessary.

At first, everything was working well. But as my database grew in size, and as the PHP back-end began to manipulate the data extracted from the database more and more, I found that the load time of the endpoint was longer than acceptable. At one point, a call to retrieve all data from a single table of 100 rows took twenty seconds. Based on the project road-map, 100 rows was a very small amount of data.

This led me to explore simple and easy ways to cache the data I was retrieving from the database. The key here was that I was not interested in building a fully-fledged back-end system for my application. I needed something dead simple and lightweight. Anything that required complex external modules and high resources was out of the scope of this project.

How to cache MySQL query results in PHP to a text file

So I was retrieving raw data from the database and encoding it to JSON format using the PHP json_encode() function. Angular was then reading this JSON data and presenting it on the front-end of the application. .json files are essentially formatted .txt files, and so this led me to think about caching the data in a text file. Since none of the calls required an additional level of authentication, this idea appealed to me a lot.

Using .txt files to cache the data would work well because:

  • No authentication worries for this application so it doesn't matter if the cached data is read by an external application
  • PHP has native capability to read and write to/from .txt files
  • .txt files can be read very quickly
  • .txt files are fully compatible with JSON data
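
To illustrate the second and fourth points, reading and writing JSON as plain text takes only a couple of native PHP calls. The file name and data below are just examples, not part of the project code:

// write a JSON string to a plain text file, then read it straight back
$json = json_encode(['id' => 1, 'name' => 'example']);
file_put_contents('cache.txt', $json);

echo file_get_contents('cache.txt');   // prints: {"id":1,"name":"example"}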

So the following is the approach I used to cache the encoded JSON data to a text file.

Caching raw data vs caching encoded data

The first decision I had to make was whether I wanted to cache the raw data from the database, or whether I wanted to encode it to JSON before caching it. Since my data was fairly static and wasn't dynamically changed on-the-fly by any external services, it made more sense to encode it to JSON and then cache the JSON form of the data. This meant that every time the endpoint was called, the back-end could simply spit out the contents of the text file, which would dramatically reduce the load on the server and result in faster rendering.
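
To make the trade-off concrete, here is a minimal sketch of the two options. The file names are illustrative, and the full implementation later in this post also wraps the cached string in serialize():

// Option A: cache the raw rows and pay the json_encode() cost on every request
file_put_contents('cache_raw.txt', serialize($rows));
echo json_encode(unserialize(file_get_contents('cache_raw.txt')));

// Option B (the approach used here): encode once, cache the JSON string, echo it as-is
file_put_contents('cache_json.txt', json_encode($rows));
echo file_get_contents('cache_json.txt');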

The database call

So, just to make sure we are on the same page, I will show my SQL query below. Of course, your database setup may be different, and that is fine. In fact, if your code is object-oriented, unlike mine, then it is far superior to the code I am about to put forward. I have kept things procedural because beginners will find it easier to gain a functional understanding of how the code works without getting bogged down by the concepts of OOP.

// declare database connection details
$servername = 'localhost';
$username = 'technouz';
$password = 'password';
$database = 'myproject';
$dbport = 3306;

// create connection to database
$db = new mysqli($servername, $username, $password, $database, $dbport);

// extract everything from myTable and print it as JSON
if ($result = $db->query("SELECT * FROM myTable")) {
    $rows = array();

    while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
        $rows[] = $row;
    }

    echo json_encode($rows);
}

The above code simply declares the database connection details, connects to the database and then extracts everything from myTable. It stores each row as an element in the array variable $rows before printing it to the browser screen in JSON format using echo json_encode($rows).

If this code was stored in a file called getMyTable.php, it would print a JSON representation of the data stored in myTable every time the file was hit by an HTTP request.
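
For example, if myTable had two hypothetical columns, id and title, a request to getMyTable.php might return something along these lines (mysqli returns column values as strings by default, hence the quoted numbers):

[{"id":"1","title":"First post"},{"id":"2","title":"Second post"}]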

As the database table myTable grows, extracting information from the database and rendering it as JSON will take longer. So it is in this code that we can add some caching functionality.

Caching logic

The idea behind caching the data retrieved from the SQL query is as follows: since the contents are not expected to change every time the query is executed, there is no need to carry out the resource-heavy process each time. Instead, retrieve the data from the database periodically, save it to a file and keep serving the contents of that file until it expires.

In this example, we are going to use an expiry of one hour. What you end up using will depend entirely on your application and needs. If your data is updated monthly, then you will probably want to set an expiry for 720 hours (approx 30 days).
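
One small, optional tweak is to keep the lifetime in a single variable so it can be changed in one place. The variable name below is purely illustrative:

// cache lifetime in seconds
$cacheLifetime = 1 * 3600;   // one hour; use 720 * 3600 for roughly 30 days

if (filemtime('cache.txt') < time() - $cacheLifetime) {
    // cache has expired: re-run the query and rewrite cache.txt
}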

To achieve this, we can use a simple if-statement that checks the time-stamp of the .txt file. If the file was last modified more than an hour ago, re-run the database query, save the data to the .txt file and then serve it to the application. There are variations in how this can be implemented, but for the purpose of simplicity I am going to demonstrate it using procedural programming, a single cache file with no UUID/GUID and only minimal error handling.

Below is the code I have implemented. It's a modification of the code shown above, and mostly just wraps the original database call and query. The new code is explained afterwards:

// refresh the cache if the file is missing or more than an hour old
if (!file_exists('cache.txt') || filemtime('cache.txt') < time() - 1*3600) {
    // declare database connection details
    $servername = 'localhost';
    $username = 'technouz';
    $password = 'password';
    $database = 'myproject';
    $dbport = 3306;

    // create connection to database
    $db = new mysqli($servername, $username, $password, $database, $dbport);

    // error handling if connection to database failed
    if ($db->connect_errno) {
        printf("Connect failed: %s\n", $db->connect_error);
        exit();
    }

    // make the SQL call
    if ($result = $db->query("SELECT * FROM myTable")) {
        $rows = array();

        while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
            $rows[] = $row;
        }

        // store query result in cache.txt
        file_put_contents('cache.txt', serialize(json_encode($rows)));

        echo json_encode($rows);
    }
    else {
        echo 'An error occurred: data could not be extracted from the database.';
    }
}
else {
    // cache is still fresh: serve its contents directly
    $data = unserialize(file_get_contents('cache.txt'));
    echo $data;
}

The code explained…

Here goes an explanation of the caching bits added to the code. If anything doesn't make sense, you can always shoot me a private message for some clarification.

if (!file_exists('cache.txt') || filemtime('cache.txt') < time() - 1*3600) {

Here, we first check whether cache.txt exists at all (on the very first run it will not, so we go straight to the database), and then use the filemtime() function to check when the file was last modified. filemtime() returns a Unix timestamp, which is then compared to the timestamp of an hour ago.

The time() function returns a Unix time-stamp for the current time, and by subtracting 3600 seconds we can get the time-stamp for an hour ago. If the time-stamp for when the cache text file was last modified is less than this value, it means that it was last modified more than one hour ago. That means the expression resolves to true and the code inside the brackets gets executed…
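
As a concrete example of that comparison, with made-up numbers:

// illustrative values only
$now      = 1700003600;          // what time() might return right now
$cutoff   = $now - 3600;         // one hour ago: 1700000000
$modified = 1699999000;          // a filemtime() value from roughly 77 minutes ago

var_dump($modified < $cutoff);   // bool(true) -> cache is stale, re-run the query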

// error handling if connection to database failed
if ($db->connect_errno) {
    printf("Connect failed: %s\n", $db->connect_error);
    exit();
}

The first change inside the if-statement is some very simple error handling. This should have always been alongside our database declaration, but I wanted to keep things very clean and simple earlier on. This code simply checks if the $db variable contains a successful connection to the database. If not, we are wasting our time and should display the error message and stop executing the code on this page. If the database connection was successful, and the variable contains a valid value, the if-statement is skipped and we can continue executing our code.
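
As an aside, because this endpoint is consumed by a JavaScript front-end rather than read by a person, you may prefer to return a machine-readable error instead of plain text. This is just a possible variation, not part of the original code:

// variation: respond with an HTTP error code and a JSON body
if ($db->connect_errno) {
    http_response_code(500);   // signal a server-side failure
    echo json_encode(array('error' => 'Database connection failed'));
    exit();
}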

// store query result in cache.txt
file_put_contents('cache.txt', serialize(json_encode($rows)));

Remember earlier on we checked if the cached data was more than an hour old? We're still inside the brackets of the if-statement after the expression resolved to true. That means we need to re-run the query as normal in order to update the data. Once that has been done, the obvious thing to do is update our cache. The above code does exactly that.

We can use the file_put_contents() function to take the JSON-encoded results of our database query (built from the $rows variable) and save them to the cache.txt file. The file_put_contents() PHP function is identical to calling fopen(), fwrite() and fclose() successively to write data to a file (see the PHP manual for details).

If the cache.txt file does not exist, file_put_contents() will create it. When the function is used without the FILE_APPEND flag, the existing contents of the file are overwritten.
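
In other words, the single call above is roughly shorthand for the following sequence. The appending variant is shown only for completeness; overwriting is what we want for a cache:

// roughly what file_put_contents('cache.txt', $data) does behind the scenes
$handle = fopen('cache.txt', 'w');   // 'w' truncates the file, i.e. overwrites it
fwrite($handle, $data);
fclose($handle);

// appending instead of overwriting:
// file_put_contents('cache.txt', $data, FILE_APPEND);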

echo json_encode($rows);

Once the data has been written to the file (in other words: cached), we can serve it to our application as per usual.

else {
    $data = unserialize(file_get_contents('cache.txt'));
    echo $data;
}

The final block of code is executed if the earlier if-statement expression resolves to false. That is, if the cached data is less than an hour old, we simply read our cache.txt file and print its contents to the web browser, which our application can read and use as necessary.
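
One small optional refinement, not in the original snippet: since both branches of the script output JSON, you could send an explicit Content-Type header near the top of the file, before anything is echoed, so that browsers and the front-end treat the response as JSON rather than plain HTML:

// optional: declare the response as JSON; must run before any output
header('Content-Type: application/json');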

Summary

In this guide we have explored a basic model for caching SQL database data. All we have implemented is a simple cache file, containing no sensitive data, that holds a copy of the static data retrieved from an SQL query. When the endpoint is called, our PHP code checks the modified date of our cache.txt file. If the modified time-stamp shows that it was last changed more than 3600 seconds ago, we re-run the SQL query, save the data to the cache file and serve it to our application. If, however, the cache file was modified within the last hour, we assume that it holds the data we need and serve that instead.

Of course, this guide is very simple and basic and has many areas for further thought and improvement. For example, you may wish to look into:

  • Ensuring data in the cache file hasn't been modified by an external process or application
  • Compressing or minifying the data stored in the cache file - if it is very large
  • Only caching select data which is accessed regularly, or is not dynamic
  • Rendering cached data if a connection to the database cannot be made (a sketch of this follows the list)
  • Implementation of a system which can pre-check if the database has been updated, and then determine whether to re-run the query or serve the cached contents
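
To illustrate the fourth point, here is a minimal sketch of how the connection check could fall back to the cache instead of exiting with an error, assuming cache.txt already exists from a previous successful run:

// sketch: if the database is unreachable, serve stale cached data rather than failing
if ($db->connect_errno) {
    if (file_exists('cache.txt')) {
        echo unserialize(file_get_contents('cache.txt'));   // stale, but usable
    } else {
        echo 'An error occurred: no database connection and no cached data available.';
    }
    exit();
}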

Thanks for reading!

My name is Zahid Mahmood, and I'm one of the founders of Anterior. I started this technology blog when I was in high school and grew it to over 100,000 readers before becoming occupied with other projects. I've recently started writing again and will be posting more frequently.