SQL Query Optimization: Indexes, Execution Plans, and Performance Tuning

Master SQL performance with indexing strategies, query optimization techniques, execution plan analysis, and proven patterns for fast database queries at scale.

Slow queries kill application performance. This guide covers indexing strategies, query optimization patterns, and how to read execution plans to make your database queries blazing fast.

Understanding Indexes

-- Without index: full table scan (slow)
SELECT * FROM users WHERE email = 'john@example.com';
-- Scans all 1,000,000 rows: ~500ms

-- Create index
CREATE INDEX idx_users_email ON users(email);

-- With index: index seek (fast)
SELECT * FROM users WHERE email = 'john@example.com';
-- Uses index: ~5ms (100x faster)

-- Index types:

-- 1. Single-column index
CREATE INDEX idx_users_status ON users(status);

-- 2. Composite index (order matters!)
CREATE INDEX idx_users_status_created 
ON users(status, created_at);
-- Good for: WHERE status = 'active' AND created_at > '2024-01-01'
-- Good for: WHERE status = 'active'
-- ❌ Bad for: WHERE created_at > '2024-01-01' (skips the leading column, so the index can't be used)

-- 3. Covering index (includes all needed columns)
-- (INCLUDE syntax: PostgreSQL 11+ and SQL Server)
CREATE INDEX idx_users_covering 
ON users(status) 
INCLUDE (email, name);
-- The query is answered from the index alone, never touching the table

-- 4. Unique index (enforces uniqueness + performance)
CREATE UNIQUE INDEX idx_users_email_unique ON users(email);

-- 5. Partial index (smaller, faster)
CREATE INDEX idx_active_users 
ON users(created_at) 
WHERE status = 'active';
-- Only indexes active users, saving space

-- When NOT to use indexes:
-- - Small tables (<1000 rows): a scan is already cheap
-- - Columns with low cardinality (few distinct values)
-- - Frequently updated columns (every index slows writes)
-- - Write-heavy tables where insert/update throughput matters most
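
Indexes are not free: each one consumes disk space and slows every write. On PostgreSQL, the pg_stat_user_indexes statistics view makes it easy to audit for dead weight; a minimal sketch (run against your own database, where the index names will differ):

-- Find indexes that have not been used since statistics were last reset
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
-- idx_scan = 0 means no query has used the index; a candidate for dropping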

Query Optimization Patterns

-- Use EXPLAIN to see the execution plan
-- (EXPLAIN ANALYZE actually executes the query and reports real timings)
EXPLAIN ANALYZE
SELECT u.name, o.total 
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.status = 'active'
  AND o.created_at > '2024-01-01';

-- ❌ Bad: SELECT *
SELECT * FROM users WHERE status = 'active';
-- Fetches unnecessary data, slower

-- ✅ Good: Select only needed columns
SELECT id, name, email FROM users WHERE status = 'active';

-- ❌ Bad: chained ORs
SELECT * FROM users 
WHERE status = 'active' OR status = 'premium';

-- ✅ Good: Use IN (equivalent, cleaner, and often plans better)
SELECT * FROM users 
WHERE status IN ('active', 'premium');
-- Note: the real index killer is OR across *different* columns;
-- consider rewriting that as a UNION of two indexable queries

-- ❌ Bad: Function on indexed column
SELECT * FROM users 
WHERE LOWER(email) = 'john@example.com';
-- Can't use a plain index on email

-- ✅ Good: Store lowercase, or use an expression index
CREATE INDEX idx_email_lower ON users(LOWER(email));
SELECT * FROM users WHERE LOWER(email) = 'john@example.com';

-- ❌ Bad: LIKE with leading wildcard
SELECT * FROM users WHERE name LIKE '%John%';
-- Full table scan; a B-tree index can't match a leading wildcard
-- (for substring search, consider full-text search or trigram indexes)

-- ✅ Good: LIKE without leading wildcard
SELECT * FROM users WHERE name LIKE 'John%';
-- Can use an index

-- ❌ Bad: Implicit type conversion
SELECT * FROM users WHERE phone = 1234567890;
-- phone is VARCHAR: MySQL casts every row (full scan);
-- PostgreSQL rejects the comparison with a type error

-- ✅ Good: Match data types
SELECT * FROM users WHERE phone = '1234567890';
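
What does EXPLAIN ANALYZE output actually look like? Below is an illustrative PostgreSQL plan for the indexed email lookup from the first section; the exact costs, timings, and node types will differ on your data:

Index Scan using idx_users_email on users  (cost=0.42..8.44 rows=1 width=72) (actual time=0.031..0.033 rows=1 loops=1)
  Index Cond: (email = 'john@example.com'::text)
Planning Time: 0.110 ms
Execution Time: 0.052 ms

Reading it: "Index Scan" (rather than "Seq Scan") confirms the index is used; cost is the planner's estimate while actual time is measured; and an estimated rows count far from the actual rows count usually means stale statistics, fixed by running ANALYZE on the table.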

JOIN Optimization

-- ❌ Bad: Joining large tables without filters
SELECT u.name, o.total
FROM users u
JOIN orders o ON u.id = o.user_id;
-- Joins millions of rows

-- ✅ Good: Filter before joining
SELECT u.name, o.total
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.status = 'active'
  AND o.created_at > '2024-01-01';
-- Reduces rows before the join

-- ❌ Bad: Correlated subquery in SELECT
SELECT 
  u.name,
  (SELECT COUNT(*) FROM orders WHERE user_id = u.id) as order_count
FROM users u;
-- Executes the subquery once per user (the N+1 problem, in SQL form)

-- ✅ Good: Use JOIN
SELECT 
  u.name,
  COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id, u.name;

-- ✅ Better: WITH clause (CTE) for readability
WITH active_users AS (
  SELECT id, name FROM users WHERE status = 'active'
),
recent_orders AS (
  SELECT user_id, SUM(total) as total_spent
  FROM orders
  WHERE created_at > '2024-01-01'
  GROUP BY user_id
)
SELECT au.name, ro.total_spent
FROM active_users au
JOIN recent_orders ro ON au.id = ro.user_id;

-- A note on join order: modern cost-based optimizers (PostgreSQL, MySQL 8+)
-- pick the join order themselves and usually start from the smaller filtered
-- side. The order you write the tables in mostly matters when statistics are
-- stale or when you force it (e.g., MySQL's STRAIGHT_JOIN). What you control
-- is selectivity: index the filtering predicate so the planner can start small.
SELECT u.name, o.total
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.status = 'premium';  -- Only 1,000 premium users; an index on status
                             -- lets the planner drive the join from this side
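
A related pattern: when you only need to know that matching rows exist, an EXISTS semi-join lets the database stop at the first match instead of counting or joining everything. A sketch on the same schema:

-- Users with at least one order since 2024-01-01
SELECT u.id, u.name
FROM users u
WHERE EXISTS (
  SELECT 1
  FROM orders o
  WHERE o.user_id = u.id
    AND o.created_at > '2024-01-01'
);
-- With an index on orders(user_id, created_at), each probe is a cheap seek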

Pagination Done Right

-- ❌ Bad: OFFSET (slow for large offsets)
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 20 OFFSET 10000;
-- Scans the first 10,020 rows and discards 10,000

-- ✅ Good: Cursor-based (keyset) pagination
-- First page:
SELECT * FROM posts
ORDER BY id DESC
LIMIT 20;

-- Next page (use the last ID from the previous page):
SELECT * FROM posts
WHERE id < 9980  -- Last ID from previous page
ORDER BY id DESC
LIMIT 20;

-- ✅ Even better: Composite cursor for stable sorting
SELECT * FROM posts
WHERE (created_at, id) < ('2024-01-15 10:30:00', 9980)
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Index for cursor pagination:
CREATE INDEX idx_posts_cursor ON posts(created_at DESC, id DESC);
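
If the UI genuinely needs random page jumps ("go to page 500"), OFFSET is hard to avoid entirely. One mitigation is the "deferred join": paginate over a narrow subquery that only reads the index, then fetch full rows for just the winning page. A sketch, most helpful on MySQL with wide rows:

-- Deferred join: the OFFSET scan touches only the (created_at, id) index;
-- the outer join fetches full rows for the final 20 ids
SELECT p.*
FROM posts p
JOIN (
  SELECT id
  FROM posts
  ORDER BY created_at DESC, id DESC
  LIMIT 20 OFFSET 10000
) page ON page.id = p.id
ORDER BY p.created_at DESC, p.id DESC;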

Aggregate Query Optimization

-- ❌ Bad: COUNT(*) on a large table
SELECT COUNT(*) FROM orders;
-- Full table scan: ~5 seconds

-- ✅ Good: Approximate count (PostgreSQL)
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'orders';
-- ~1ms; only as fresh as the last ANALYZE, but good enough for dashboards

-- ❌ Bad: Multiple aggregates with repeated scans
SELECT 
  (SELECT COUNT(*) FROM orders WHERE status = 'pending'),
  (SELECT COUNT(*) FROM orders WHERE status = 'completed'),
  (SELECT COUNT(*) FROM orders WHERE status = 'cancelled');
-- Scans the table 3 times

-- ✅ Good: Single scan with CASE
SELECT 
  COUNT(CASE WHEN status = 'pending' THEN 1 END) as pending,
  COUNT(CASE WHEN status = 'completed' THEN 1 END) as completed,
  COUNT(CASE WHEN status = 'cancelled' THEN 1 END) as cancelled
FROM orders;

-- ✅ Alternative: GROUP BY
SELECT status, COUNT(*) as count
FROM orders
GROUP BY status;

-- Heavy aggregations: use materialized views
CREATE MATERIALIZED VIEW daily_sales_summary AS
SELECT 
  DATE(created_at) as sale_date,
  COUNT(*) as order_count,
  SUM(total) as revenue
FROM orders
GROUP BY DATE(created_at);

-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX idx_daily_sales_date ON daily_sales_summary(sale_date);

-- Refresh periodically
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_sales_summary;

-- The query is now instant:
SELECT * FROM daily_sales_summary 
WHERE sale_date >= CURRENT_DATE - INTERVAL '30 days';
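
On PostgreSQL (9.4+), the conditional counts can also be written with the standard FILTER clause, which reads more directly than CASE and plans the same single scan:

SELECT 
  COUNT(*) FILTER (WHERE status = 'pending')   AS pending,
  COUNT(*) FILTER (WHERE status = 'completed') AS completed,
  COUNT(*) FILTER (WHERE status = 'cancelled') AS cancelled
FROM orders;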

Database Design for Performance

  • Normalize to reduce redundancy, then denormalize for read performance
  • Use appropriate data types (INT vs BIGINT, VARCHAR(50) vs TEXT)
  • Partition large tables by date/range for faster queries (see the partitioning sketch after this list)
  • Archive old data to separate tables
  • Use database connection pooling (PgBouncer, ProxySQL)
  • Monitor slow query logs and add indexes for frequent queries
  • Set appropriate timeouts to kill runaway queries
  • Use read replicas for read-heavy workloads
  • Cache frequently accessed data (Redis/Memcached)
  • Batch inserts/updates instead of individual queries (see the batching sketch after this list)
  • Use EXPLAIN ANALYZE to identify bottlenecks
  • Keep statistics up to date (VACUUM ANALYZE in PostgreSQL)
  • Monitor index bloat and rebuild when necessary
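
Two of those points benefit from concrete shape. Below is a sketch of declarative range partitioning and batched inserts, assuming PostgreSQL; the table and column names are illustrative:

-- Range partitioning by month: a WHERE created_at filter only touches
-- the relevant partitions, and old months can be detached or dropped
-- cheaply when archiving
CREATE TABLE orders_partitioned (
  id BIGSERIAL,
  user_id BIGINT NOT NULL,
  total NUMERIC(12,2),
  created_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE orders_2024_01 PARTITION OF orders_partitioned
  FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE orders_2024_02 PARTITION OF orders_partitioned
  FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Batched insert: one statement and one round trip instead of three
INSERT INTO events (user_id, event_type, created_at) VALUES
  (1, 'login',  '2024-01-15 10:00:00'),
  (2, 'logout', '2024-01-15 10:01:00'),
  (3, 'login',  '2024-01-15 10:02:00');
-- For true bulk loads, COPY (PostgreSQL) or LOAD DATA INFILE (MySQL)
-- is faster still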