
PostgreSQL: Query Optimization for Mere Humans

Understanding a PostgreSQL execution plan with practical examples


Today, users have high expectations of the software they use: they expect it to have amazing features, to be fast, and to consume a reasonable amount of resources.

As developers, we should strive to give our users the best experience possible. It’s pretty common for the database to become the bottleneck, and optimizing queries and eliminating bottlenecks is not an easy task. Unfortunately, as programs become more complex and data grows bigger, it becomes harder to write flawless SQL queries.

In this article, I am going to focus on a technique for finding those bottlenecks: the EXPLAIN clause. My goal is to show you that finding and eliminating bottlenecks is not rocket science; everyone can do it without breaking a sweat.

The code for this article can be found on GitHub.

Note: All images, unless otherwise noted, are by the author.

Let’s explain Explain 📜

Interactions with databases are done using declarative languages, of which SQL is the most common. The database decides how and what to do behind the scenes, and the only glimpse it provides into those decisions is the execution plan.

This limitation makes implementing proper debugging tools and profilers almost impossible in practice, so we are kind of stuck with execution plans.

Buzzword alert 🐝!! My goal is to democratize execution plans.

In PostgreSQL, to get the execution plan you use the EXPLAIN/EXPLAIN ANALYZE clauses:

  • EXPLAIN shows what the planner planned to do.
  • EXPLAIN ANALYZE shows what the planner planned to do, executes the query, and also shows how it actually did it.

Pro Tip #1 💃: go over an execution plan at least once in your career. Execution plans are similar across databases, and reading them is a rare skill in companies.

Pro Tip #2 💃: in most cases, prefer EXPLAIN ANALYZE, as it holds more information.

Warning #1 ⚠️: don’t use EXPLAIN ANALYZE on destructive operations like DELETE/UPDATE. EXPLAIN will suffice, and it doesn’t run the query.

Warning #2 ⚠️: don’t use EXPLAIN ANALYZE when resources are scarce (for example, on production monitoring) or when a query never finishes. EXPLAIN will suffice, and it doesn’t run the query.
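
If you do need actual run-time statistics for a data-modifying statement, a common workaround is to wrap it in a transaction that you roll back, so the statement is executed but its effects are discarded (the DELETE below is only for illustration):

BEGIN;
EXPLAIN ANALYZE DELETE FROM users WHERE twitter = ''; -- the DELETE really runs here
ROLLBACK; -- but its effects are discarded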

EXPLAIN is an awesome tool, as it can hint at the reasons why a query is slow, including:

  • Missing/Overused indices/partitions.
  • Unoptimized database configurations.
  • Redundant operations.
  • Stale statistics.
  • Too much I/O.

For the more thorough readers, the full EXPLAIN clause syntax is shown in the next figure:

PostgreSQL Explain Clause Syntax
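
If the figure is not available, the general form (as documented by PostgreSQL; the exact set of options depends on your version) is roughly:

EXPLAIN [ ( option [, ...] ) ] statement

where option can be one of:
    ANALYZE [ boolean ]
    VERBOSE [ boolean ]
    COSTS [ boolean ]
    SETTINGS [ boolean ]
    BUFFERS [ boolean ]
    TIMING [ boolean ]
    SUMMARY [ boolean ]
    FORMAT { TEXT | XML | JSON | YAML }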

Understanding Explain Anatomy🫀

As an example, we will use a simple query: counting the number of users that have a Twitter handle (i.e., a non-empty twitter column).

EXPLAIN ANALYZE
SELECT COUNT(*) FROM users WHERE twitter != '';
We can see the execution plan returned from the EXPLAIN ANALYZE clause
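
Since the screenshot is not reproduced here, the textual output of that plan looks roughly like the sketch below. The Aggregate numbers are the ones discussed later in this section; the values shown as "..." are not stated in the article and are left as placeholders:

Aggregate  (cost=845110..845110 rows=1 width=8) (actual time=1271.157..1271.158 rows=1 loops=1)
  ->  Seq Scan on users  (cost=... rows=... width=...) (actual time=... rows=... loops=1)
        Filter: (twitter <> ''::text)
        Rows Removed by Filter: 2487813
Planning Time: 0.4 ms
Execution Time: ~1271 ms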

It looks cryptic at first, it’s even longer than our query, and this is just a small example; real-world execution plans can be overwhelming if you don’t stay focused 😭.

But it does provide useful information. We can see that the query execution took 1.27 seconds, while the query planning took only 0.4 milliseconds (a negligible amount).

We can see the time the query planning and execution took

The execution plan is structured as an inverted tree. In the next figure, you can see that the execution plan is divided into different nodes, each of which represents a different operation, whether it’s an aggregation or a scan.

The execution plan divided into nodes, each representing a different operation

There are many kinds of node operations: scan-related (‘Seq Scan’, ‘Index Only Scan’, etc.), join-related (‘Hash Join’, ‘Nested Loop’, etc.), aggregation-related (‘GroupAggregate’, ‘Aggregate’, etc.), and others (‘Limit’, ‘Sort’, ‘Materialize’, etc.). Fortunately, you don’t need to remember any of this.

Pro Tip #3 💃: focus is key; look only at the nodes that are problematic.

Pro Tip #4 💃: cheat! For the problematic nodes, look up what they mean in the explain glossary.

Now, let’s drill down into how we know which node is the problematic one.

There is a lot of information we can see on each node

Let’s drill down into what those metrics actually mean.

  • Actual Loops: the number of times this node was executed; here it is 1. To get total time and rows, the per-loop actual time and rows need to be multiplied by the loops value.
  • Actual Rows: the actual number of rows produced by the Aggregate node, here 1 (a per-loop average, and loops is 1).
  • Plan Rows: the estimated number of rows produced by the Aggregate node, here 1. The estimate can be off depending on statistics.
  • Actual Startup Time: the time it took to return the first row, in milliseconds; for the Aggregate node it is 1271.157 (cumulative: it includes the previous operations).
  • Startup Cost: arbitrary units representing the planner’s estimate of how expensive it is to return the first row; for the Aggregate node it is 845110 (cumulative: it includes the previous operations).
  • Actual Total Time: the time it took to return all the rows, in milliseconds; for the Aggregate node it is 1271.158 (a per-loop average, with loops being 1, and cumulative: it includes the previous operations).
  • Total Cost: arbitrary units representing the planner’s estimate of how expensive it is to return all the rows; for the Aggregate node it is 845110 (cumulative).
  • Plan Width: the estimated average size of a row; for the Aggregate node it is 8 bytes.

Pro Tip #5 💃: be wary of loops; remember to multiply by loops when you care about Actual Rows and Actual Total Time.
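
For example, a node reported as (actual time=0.050..0.100 rows=10 loops=1000) (hypothetical numbers) actually produced about 10 × 1000 = 10,000 rows and spent roughly 0.100 ms × 1000 = 100 ms in total, even though each individual execution looks cheap.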

In the next section, we will drill into a practical example.

Example: Performance Optimization🐆

We will use the same query as before.

EXPLAIN ANALYZE
SELECT COUNT(*) FROM users WHERE twitter != '';
We will focus on the Seq Scan node which has the longest actual time

We focus on the longest operation, which is the sequential scan on the users table. The scan filters out 2,487,813 rows and takes 1.27 seconds out of the total 1.271 seconds.

But we are mere humans, and that doesn’t tell us anything. Let’s google it (you can use ChatGPT as well)!!!

Googling how to make seq scan faster in PostgreSQL
CREATE INDEX twitter_test ON users (twitter);
We are doing much better, but the Scan node is still the longest in terms of actual time

We can see that we now perform an index-only scan on the users table. It takes 0.29 seconds instead of 1.27 seconds, which is awesome, but not enough for us.

Pro Tip #6💃: optimize your queries one baby step at a time.

To understand how much data is read by the scan, we can use the BUFFERS option, as you can see below.

Pro Tip #7💃: When comparing execution plans, look at several metrics.

EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*) FROM users WHERE twitter != '';
We can see quite a lot of information about how much data hit the cache or disk

We have 51,854 pages to read, all of them from the cache (around 400 MB), so improving configuration probably won’t change things drastically.
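
To put that number in context: with PostgreSQL’s default page size of 8 kB, 51,854 pages × 8 kB ≈ 405 MB, which is where the ~400 MB figure comes from. And since all of those pages were cache hits, the query is not waiting on disk reads.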

But we are not out of options. Since the scan filters out 2,487,813 rows, we can change the index into a partial index, as sketched below. It doesn’t come for free, though: it will make writes take longer and it will take additional storage, which is quite impactful on systems that scale vertically.
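
A partial index for this query might look like the following sketch (the index name is only for illustration); it indexes only the rows the query actually cares about:

CREATE INDEX twitter_not_empty ON users (twitter) WHERE twitter != '';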

Pro Tip #8 💃: there is no free lunch.

Good optimization options🤞🏻

I won’t delve into too many details, as this blog is already quite long. These are the first things one might want to tackle when facing slow queries:

  • Picking the right scan method.
  • Picking the right join method.
  • Picking the right join order.
  • Pushing filters down as early as possible.
  • Reducing disk IO operations when needed.

To manually check a specific optimization, you can enable/disable planner settings:

SET enable_seqscan TO off;  -- discourage the planner from using sequential scans
EXPLAIN (ANALYZE) SELECT * FROM foo WHERE c1 > 500;
SET enable_seqscan TO on;   -- restore the default behavior
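
Similar switches exist for the other strategies (for example enable_indexscan, enable_indexonlyscan, enable_bitmapscan, enable_hashjoin, enable_mergejoin, and enable_nestloop), so you can quickly check whether forcing a different scan or join method would actually be faster before investing in a rewrite.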

Warning #3 ⚠️: enable/disable settings only after you have tried the most basic optimizations; most of the time, PostgreSQL knows what it is doing.

Aren’t there easier ways?!🙏

Unfortunately, EXPLAIN is not perfect, and there are reasons why it’s not in every developer’s toolbox:

  • You don’t have a history of all the execution plans in production.
  • It is not trivial to tune complex queries:
     — It’s long, and not that easy to read.
     — It doesn’t tell you why a particular optimization is not used.
     — It doesn’t tell you how to rewrite your queries.

We can overcome the lack of history by using tools like auto_explain and pg_stat_plans to record execution plans under certain conditions, so that they won’t have a major effect on production. Another way is to record which queries run at what time and try to reproduce them, but it’s more complicated than it looks.
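
As a rough sketch of how auto_explain can be enabled for a single session (the threshold here is arbitrary, and these settings typically require superuser privileges; for the whole server, add auto_explain to shared_preload_libraries in postgresql.conf instead):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms'; -- log plans of statements slower than 250 ms
SET auto_explain.log_analyze = true;         -- include actual run-time statistics (adds overhead)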

We can overcome complex tuning with some very opinionated tools. These focus your attention on what works for most use cases. Some of the most prominent tools are:

  • eversql — a mature solution that aims to suggest changes to your PostgreSQL queries.
  • metis — aims to suggest changes as a guarding mechanism as part of your development and CI/CD processes for PostgreSQL databases.
  • QueryFlow — an open-source tool for identifying bugs and performance-tuning multiple queries together (as the hardest queries to debug behave nicely in isolation).

Pro Tip #9 💃: use tools to make your life easy.

I will give you a taste of how convenient it is to use tools like QueryFlow (For more details you can read the following).

QueryFlow visualization of execution plan focusing on duration

It should be extremely easy to see that the Index Only Scan width is much bigger than the aggregation’s, indicating that this is where we should focus. On multiple complex queries, other tools tend to fall short.

Last words

In this article, we reviewed some of the most common reasons that can cause otherwise perfectly good SQL to be too slow for any time-sensitive application, and walked through a methodical way to identify and avoid them.

Due to the extent of the topic, there are many optimizations I haven’t covered. For this reason, I have added additional resources at the end if you want to go the extra mile.

I am optimistic about the future. I believe these kinds of tools will become as easy to use as opening files in Python, whether by integrating into IDEs and clients or by providing SaaS solutions. This will enable us to become proactive instead of reactive.

I hope I was able to share my enthusiasm for this fascinating topic and that you find it useful, and as always I am open to any kind of constructive feedback.

Additional Resources 📚


