What is MySQL Performance Tuning?

MySQL Performance Tuning is the process of optimizing how your database server, queries, indexes, and schema work together to provide the fastest and most resource-efficient responses. When a database starts to slow down under load, tuning ensures better speed, reduced CPU/memory usage, and quicker access to data — especially for high-traffic applications or large datasets.

It involves query optimization, proper indexing, schema design, and server-level configurations that reduce delays and improve efficiency across all operations.

 

  1. Use Indexes Effectively

Indexes help MySQL find rows faster by reducing the number of full-table scans. Without indexes, MySQL scans every row in the table for matching values, which is extremely slow on large datasets.

  • Use indexes on columns used in WHERE, JOIN, ORDER BY, or GROUP BY.
  • Avoid over-indexing — indexes slow down INSERT, UPDATE, and DELETE.
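Before walking through the index types below, the effect of an index on the access path can be seen in a quick runnable sketch. Python's built-in sqlite3 is used here purely as a stand-in for MySQL (MySQL would use EXPLAIN rather than SQLite's EXPLAIN QUERY PLAN), and the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
cur.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [(i, f"user{i}", f"user{i}@example.com") for i in range(1000)],
)

def plan(sql):
    # The last column of an EXPLAIN QUERY PLAN row describes the access path.
    return cur.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT * FROM users WHERE email = 'user500@example.com'"
before = plan(query)   # without an index: a SCAN (full table scan)
cur.execute("CREATE INDEX idx_email ON users(email)")
after = plan(query)    # with the index: a SEARCH USING INDEX idx_email
print(before)
print(after)
```

The same before/after check in MySQL is done with EXPLAIN, as shown later in this article.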
  1. Index on a Single Column:

General Syntax:

CREATE INDEX index_name ON table_name(column_name);

Example:

CREATE TABLE users (
  id INT,
  name VARCHAR(100),
  email VARCHAR(100)
);

INSERT INTO users VALUES
(1, 'Arun', 'arun@gmail.com'),
(2, 'Riya', 'riya@yahoo.com'),
(3, 'Ali', 'ali@gmail.com');

 

Query:

SELECT * FROM users WHERE email = 'ali@gmail.com';

Output:

+----+------+---------------+
| id | name | email         |
+----+------+---------------+
|  3 | Ali  | ali@gmail.com |
+----+------+---------------+

  2. Unique Index:

General Syntax:

CREATE UNIQUE INDEX index_name ON table_name(column_name);

Example:

CREATE UNIQUE INDEX idx_username ON users(name);

Attempt to Insert a Duplicate:

INSERT INTO users VALUES (4, 'Riya', 'riya2@yahoo.com');

Output (the unique index prevents duplicate name values):

ERROR 1062 (23000): Duplicate entry 'Riya' for key 'idx_username'

 

  3. Composite (Multi-Column) Index:

General Syntax:

CREATE INDEX index_name ON table_name(column1, column2);

Example:

CREATE INDEX idx_name_email ON users(name, email);

Query:

SELECT * FROM users WHERE name = 'Riya' AND email = 'riya@yahoo.com';

Output:

+----+------+----------------+
| id | name | email          |
+----+------+----------------+
|  2 | Riya | riya@yahoo.com |
+----+------+----------------+

 

  2. Avoid SELECT * in Queries

Using SELECT * fetches all columns, even if you only need a few. This increases memory usage, network traffic, and processing time.

Instead of:

SELECT * FROM customers;

Use only the required columns:

SELECT name, email FROM customers;

This reduces the amount of data transferred and processed, especially in large tables.
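The difference is easy to see in a small runnable sketch, again using Python's built-in sqlite3 as a stand-in for MySQL, with a made-up customers row carrying wide free-text columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE customers (
    id INTEGER, name TEXT, email TEXT, address TEXT, notes TEXT)""")
cur.execute("INSERT INTO customers VALUES (1, 'Arun', 'arun@gmail.com', "
            "'12 MG Road', 'long free-text notes ...')")

# SELECT * transfers every column of every row...
wide = cur.execute("SELECT * FROM customers").fetchone()
# ...while naming columns transfers only what the application needs.
narrow = cur.execute("SELECT name, email FROM customers").fetchone()

print(len(wide))    # 5 columns per row
print(len(narrow))  # 2 columns per row
print(narrow)
```

Over millions of rows, the narrower result set means proportionally less memory, network traffic, and parsing work.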

 

  3. Use EXPLAIN to Analyze Queries

EXPLAIN shows how MySQL executes a query — whether it uses indexes, how many rows it scans, and which table joins happen.

General Syntax:

EXPLAIN SELECT name FROM users WHERE email = 'test@example.com';

Example:

Step 1: Create a Sample Table and Insert Data

CREATE TABLE users (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(100),
  email VARCHAR(100)
);

INSERT INTO users (name, email) VALUES
('Arun', 'arun@example.com'),
('Riya', 'riya@example.com'),
('Test User', 'test@example.com');

Step 2: Run the Query with EXPLAIN

EXPLAIN SELECT name FROM users WHERE email = 'test@example.com';

Expected Output:

With no index on email, the type column shows ALL, meaning a full table scan. This will get slower as the table size increases.

Step 3: Optimize by Creating an Index

CREATE INDEX idx_email ON users(email);

Step 4: Run EXPLAIN Again

EXPLAIN SELECT name FROM users WHERE email = 'test@example.com';

Output:

The type column now shows ref and the key column shows idx_email, meaning MySQL uses the index to find matching rows directly instead of scanning the whole table.

 

  4. Optimize Joins and Subqueries

Joins are faster than subqueries in most cases. Always filter data before joining, and avoid joining large datasets without indexes.

  • Use indexed columns in JOIN ON conditions.
  • Avoid joining on computed values or different data types.
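These rules can be sanity-checked locally. The sketch below uses Python's built-in sqlite3 as a stand-in for MySQL, with made-up orders/customers tables, and shows that an IN subquery and its join rewrite return the same rows, with the join filtering through an indexed column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
CREATE INDEX idx_orders_customer ON orders(customer_id);  -- index the join column
INSERT INTO customers VALUES (1, 'Arun', 'Mumbai'), (2, 'Riya', 'Delhi');
INSERT INTO orders VALUES (101, 1), (102, 2), (103, 1);
""")

# Subquery form: filter customer ids first, then probe orders.
subquery = cur.execute("""
    SELECT id FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE city = 'Mumbai')
    ORDER BY id""").fetchall()

# Join form: same result, usually easier for the optimizer to plan.
join = cur.execute("""
    SELECT o.id FROM orders o
    JOIN customers c ON o.customer_id = c.id
    WHERE c.city = 'Mumbai'
    ORDER BY o.id""").fetchall()

print(subquery)  # [(101,), (103,)]
print(join)      # same rows
```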

General Syntax:

SELECT table1.column1, table2.column2
FROM table1
JOIN table2 ON table1.common_column = table2.common_column
WHERE table2.filter_column = 'value';

Example:

SELECT o.id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.city = 'Mumbai';

Output:

Returns each order id together with the customer's name, but only for customers located in Mumbai.

 

  5. Use LIMIT to Control Results

When displaying a preview or paginated data, use LIMIT to avoid fetching thousands of unnecessary rows.

  1. General Syntax:

SELECT column1, column2
FROM table_name
ORDER BY column_name
LIMIT number;

Example:

SELECT name FROM users ORDER BY id LIMIT 10;

Output:

Returns at most the first 10 names, ordered by id.

 

  2. General Syntax for pagination:

SELECT column1, column2
FROM table_name
ORDER BY column_name
LIMIT number OFFSET offset_value;

Example:

SELECT name FROM users ORDER BY id LIMIT 10 OFFSET 20;

Output:

Skips the first 20 rows and returns the next 10 names. This saves memory and improves load time for front-end applications.
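The LIMIT/OFFSET pattern is how application-side pagination is usually wrapped. Below is a minimal sketch of such a helper (the fetch_page function and its sample data are made up for illustration; Python's built-in sqlite3 stands in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, f"user{i}") for i in range(1, 26)])  # 25 sample rows

def fetch_page(page, page_size=10):
    # OFFSET skips the rows belonging to all previous pages; pages start at 1.
    offset = (page - 1) * page_size
    return cur.execute(
        "SELECT name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset)).fetchall()

print(len(fetch_page(1)))  # 10 rows on the first page
print(len(fetch_page(3)))  # 5 rows left on the third page (25 rows total)
```

Note that large OFFSET values still force the server to walk past all skipped rows, so very deep pagination is often rewritten as keyset pagination (WHERE id > last_seen_id LIMIT n).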

 

  6. Tune MySQL Server Configuration

The MySQL server settings have a huge impact on performance. You can adjust the following in your my.cnf or my.ini file:

Common settings:

innodb_buffer_pool_size = 1G     # cache for InnoDB data and indexes
query_cache_size = 64M           # removed in MySQL 8.0; applies to 5.7 and earlier only
max_connections = 200            # maximum simultaneous client connections
tmp_table_size = 128M            # size limit for in-memory temporary tables

Use tools like MySQLTuner or Percona Toolkit to analyze your configuration and suggest improvements.

 

  7. Normalize and Optimize Schema Design

A well-designed database structure prevents redundancy and inconsistency. However, over-normalization can lead to performance issues with excessive joins.

Balance normalization with performance:

  • Use proper data types (e.g., INT instead of VARCHAR(10) for IDs)
  • Avoid storing duplicate data
  • Use foreign keys wisely

 

  8. Monitor Slow Queries

Enable slow query logging to identify inefficient queries.

In my.cnf:

slow_query_log = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time = 1

Then analyze slow queries using:

mysqldumpslow /var/log/mysql-slow.log

 

  9. Use Connection Pooling

If your application connects and disconnects frequently, use a connection pooler like ProxySQL or MySQL Router to manage connections efficiently and reduce overhead.
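The core idea of pooling can be sketched in a few lines. The Pool class below is a made-up illustration, not how ProxySQL or MySQL Router is implemented, and Python's built-in sqlite3 stands in for a real MySQL connection; the point is that a fixed set of connections is reused rather than opened and torn down per request:

```python
import queue
import sqlite3

class Pool:
    """Hand out pre-opened connections; block when all are in use."""

    def __init__(self, size, dsn):
        self._free = queue.Queue()
        for _ in range(size):
            # A real pool would connect to a shared MySQL server DSN here.
            self._free.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self):
        # Blocks until a connection is free, capping concurrent connections.
        return self._free.get()

    def release(self, conn):
        self._free.put(conn)

pool = Pool(size=2, dsn=":memory:")
conn = pool.acquire()
result = conn.execute("SELECT 1").fetchone()
pool.release(conn)  # return the connection instead of closing it
print(result)
```

This avoids the TCP handshake, authentication, and session setup cost on every request, which is exactly the overhead a production pooler removes.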

 

  10. Regular Maintenance
  • Run ANALYZE TABLE to update statistics.
  • Use OPTIMIZE TABLE to reduce fragmentation.
  • Periodically archive old data or split large tables with partitioning.
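A maintenance pass of this kind can be scripted. In the sketch below, SQLite's ANALYZE and VACUUM (run via Python's built-in sqlite3) stand in for MySQL's ANALYZE TABLE and OPTIMIZE TABLE, and the events table is made up; the scheduling pattern is the point, not the exact commands:

```python
import sqlite3

# isolation_level=None gives autocommit, which VACUUM requires in SQLite.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [("x" * 100,) for _ in range(1000)])

# Archive step: drop old rows (in MySQL these might first be copied
# to an archive table or a separate partition).
conn.execute("DELETE FROM events WHERE id <= 500")

conn.execute("ANALYZE")  # refresh optimizer statistics (ANALYZE TABLE in MySQL)
conn.execute("VACUUM")   # reclaim fragmented space (OPTIMIZE TABLE in MySQL)

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)
```

Run jobs like this during low-traffic windows, since OPTIMIZE TABLE can lock or rebuild large tables.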