toc: move up and reorganize best practices #21322

Merged: 3 commits, Jul 15, 2025
Changes from 1 commit
*: refine title, wording, and descriptions
lilin90 committed Jul 4, 2025
commit f6ebcf66dec8dc32d52856ef2c443646027b70a0
18 changes: 9 additions & 9 deletions TOC.md
@@ -438,17 +438,17 @@
- Best Practices
- [Use TiDB](/best-practices/tidb-best-practices.md)
- [Optimize Multi-Column Indexes](/best-practices/multi-column-index-best-practices.md)
- [Handle Millions of Tables in SaaS Multi-Tenant Scenarios](/best-practices/saas-best-practices.md)
- [Use UUIDs as Primary Keys](/best-practices/uuid.md)
- [Java Application Development](/best-practices/java-app-best-practices.md)
- [Use HAProxy](/best-practices/haproxy-best-practices.md)
- [Highly Concurrent Write](/best-practices/high-concurrency-best-practices.md)
- [Grafana Monitoring](/best-practices/grafana-monitor-best-practices.md)
- [PD Scheduling](/best-practices/pd-scheduling-best-practices.md)
- [TiKV Performance Tuning with Massive Regions](/best-practices/massive-regions-best-practices.md)
- [Three-node Hybrid Deployment](/best-practices/three-nodes-hybrid-deployment.md)
- [Local Read Under Three Data Centers Deployment](/best-practices/three-dc-local-read.md)
- [Use UUIDs](/best-practices/uuid.md)
- [High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md)
- [Tune TiKV Performance with Massive Regions](/best-practices/massive-regions-best-practices.md)
- [Tune PD Scheduling](/best-practices/pd-scheduling-best-practices.md)
- [Read-Only Storage Nodes](/best-practices/readonly-nodes.md)
- [Handle Millions of Tables in SaaS Multi-Tenant Scenarios](/best-practices/saas-best-practices.md)
- [Use HAProxy for Load Balancing](/best-practices/haproxy-best-practices.md)
- [Monitor TiDB Using Grafana](/best-practices/grafana-monitor-best-practices.md)
- [Three-Node Hybrid Deployment](/best-practices/three-nodes-hybrid-deployment.md)
- [Local Reads in Three-Data-Center Deployments](/best-practices/three-dc-local-read.md)
- TiDB Tools
- [Overview](/ecosystem-tool-user-guide.md)
- [Use Cases](/ecosystem-tool-user-case.md)
2 changes: 1 addition & 1 deletion auto-random.md
@@ -12,7 +12,7 @@ Since the value of `AUTO_RANDOM` is random and unique, `AUTO_RANDOM` is often us

<CustomContent platform="tidb">

For more information about how to handle highly concurrent write-heavy workloads in TiDB, see [Highly concurrent write best practices](/best-practices/high-concurrency-best-practices.md).
For more information about how to handle highly concurrent write-heavy workloads in TiDB, see [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md).

</CustomContent>

8 changes: 4 additions & 4 deletions best-practices/high-concurrency-best-practices.md
@@ -1,12 +1,12 @@
---
title: Highly Concurrent Write Best Practices
summary: This document provides best practices for handling highly-concurrent write-heavy workloads in TiDB. It addresses challenges and solutions for data distribution, hotspot cases, and complex hotspot problems. The article also discusses parameter configuration for optimizing performance.
title: Best Practices for High-Concurrency Writes
summary: This document provides best practices for handling high-concurrency write-heavy workloads in TiDB. It addresses challenges and solutions for data distribution, hotspot cases, and complex hotspot problems. The article also discusses parameter configuration for optimizing performance.
aliases: ['/docs/dev/best-practices/high-concurrency-best-practices/','/docs/dev/reference/best-practices/high-concurrency/']
---

# Highly Concurrent Write Best Practices
# Best Practices for High-Concurrency Writes

This document describes best practices for handling highly-concurrent write-heavy workloads in TiDB, which can help to facilitate your application development.
This document describes best practices for handling high-concurrency write-heavy workloads in TiDB, which can help to facilitate your application development.

## Target audience

4 changes: 2 additions & 2 deletions best-practices/massive-regions-best-practices.md
@@ -1,10 +1,10 @@
---
title: Best Practices for TiKV Performance Tuning with Massive Regions
title: Best Practices for Tuning TiKV Performance with Massive Regions
summary: TiKV performance tuning involves reducing the number of Regions and messages, increasing Raftstore concurrency, enabling Hibernate Region and Region Merge, adjusting Raft base tick interval, increasing TiKV instances, and adjusting Region size. Other issues include slow PD leader switching and outdated PD routing information.
aliases: ['/docs/dev/best-practices/massive-regions-best-practices/','/docs/dev/reference/best-practices/massive-regions/']
---

# Best Practices for TiKV Performance Tuning with Massive Regions
# Best Practices for Tuning TiKV Performance with Massive Regions

In TiDB, data is split into Regions, each storing data for a specific key range. These Regions are distributed among multiple TiKV instances. As data is written into a cluster, millions of Regions might be created. Too many Regions on a single TiKV instance can bring a heavy burden to the cluster and affect its performance.
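
As an illustrative aside (not part of this PR's diff), you can see how a table's data is split into Regions with `SHOW TABLE ... REGIONS`; the table name `t` below is a placeholder:

```sql
-- List the Regions that hold table `t`, with their key ranges and the stores they live on
SHOW TABLE t REGIONS;
```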

4 changes: 2 additions & 2 deletions best-practices/pd-scheduling-best-practices.md
@@ -1,10 +1,10 @@
---
title: PD Scheduling Best Practices
title: Best Practices for PD Scheduling
summary: This document summarizes PD scheduling best practices, including scheduling process, load balancing, hot regions scheduling, cluster topology awareness, scale-down and failure recovery, region merge, query scheduling status, and control scheduling strategy. It also covers common scenarios such as uneven distribution of leaders/regions, slow node recovery, and troubleshooting TiKV nodes.
aliases: ['/docs/dev/best-practices/pd-scheduling-best-practices/','/docs/dev/reference/best-practices/pd-scheduling/']
---

# PD Scheduling Best Practices
# Best Practices for PD Scheduling

This document details the principles and strategies of PD scheduling through common scenarios to facilitate your application. This document assumes that you have a basic understanding of TiDB, TiKV and PD with the following core concepts:

4 changes: 2 additions & 2 deletions best-practices/three-dc-local-read.md
@@ -1,9 +1,9 @@
---
title: Local Read under Three Data Centers Deployment
title: Best Practices for Local Reads in Three-Data-Center Deployments
summary: TiDB's three data center deployment model can cause increased access latency due to cross-center data reads. To mitigate this, the Stale Read feature allows for local historical data access, reducing latency at the expense of real-time data availability. When using Stale Read in geo-distributed scenarios, TiDB accesses local replicas to avoid cross-center network latency. This is achieved by configuring the `zone` label and setting `tidb_replica_read` to `closest-replicas`. For more information on performing Stale Read, refer to the documentation.
---

# Local Read under Three Data Centers Deployment
# Best Practices for Local Reads in Three-Data-Center Deployments

In the model of three data centers, a Region has three replicas which are isolated in each data center. However, due to the requirement of strongly consistent read, TiDB must access the Leader replica of the corresponding data for every query. If the query is generated in a data center different from that of the Leader replica, TiDB needs to read data from another data center, thus causing the access latency to increase.
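
An illustrative sketch (not part of this diff), using the settings named in the summary above; the table name and the staleness interval are placeholders:

```sql
-- Prefer replicas in the local data center (requires the `zone` label to be configured)
SET GLOBAL tidb_replica_read = 'closest-replicas';

-- Stale Read: read `orders` as it was 5 seconds ago, served from a local replica
SELECT * FROM orders AS OF TIMESTAMP NOW() - INTERVAL 5 SECOND;
```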

12 changes: 8 additions & 4 deletions best-practices/uuid.md
@@ -1,25 +1,29 @@
---
title: UUID Best Practices
title: Best Practices for Using UUIDs as Primary Keys
summary: UUIDs, when used as primary keys, offer benefits such as reduced network trips, support in most programming languages and databases, and protection against enumeration attacks. Storing UUIDs as binary in a `BINARY(16)` column is recommended. It's also advised to avoid setting the `swap_flag` with TiDB to prevent hotspots. MySQL compatibility is available for UUIDs.
---

# UUID Best Practices
# Best Practices for Using UUIDs as Primary Keys

UUIDs (Universally Unique Identifiers) are a popular alternative to auto-incrementing integers for primary keys in distributed databases. This document outlines the benefits of using UUIDs in TiDB, and offers best practices for storing and indexing them efficiently.

## Overview of UUIDs

When used as a primary key, instead of an [`AUTO_INCREMENT`](/auto-increment.md) integer value, a universally unique identifier (UUID) delivers the following benefits:
When used as a primary key, a UUID offers the following advantages compared with an [`AUTO_INCREMENT`](/auto-increment.md) integer:

- UUIDs can be generated on multiple systems without risking conflicts. In some cases, this means that the number of network trips to TiDB can be reduced, leading to improved performance.
- UUIDs are supported by most programming languages and database systems.
- When used as a part of a URL, a UUID is not vulnerable to enumeration attacks. In comparison, with an `AUTO_INCREMENT` number, it is possible to guess the invoice IDs or user IDs.

## Best practices

This section describes best practices for storing and indexing UUIDs in TiDB.

### Store as binary

The textual UUID format looks like this: `ab06f63e-8fe7-11ec-a514-5405db7aad56`, which is a string of 36 characters. By using [`UUID_TO_BIN()`](/functions-and-operators/miscellaneous-functions.md#uuid_to_bin), the textual format can be converted into a binary format of 16 bytes. This allows you to store the text in a [`BINARY(16)`](/data-type-string.md#binary-type) column. When retrieving the UUID, you can use the [`BIN_TO_UUID()`](/functions-and-operators/miscellaneous-functions.md#bin_to_uuid) function to get back to the textual format.
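
A minimal sketch of this workflow (illustrative only, not part of this diff; the table and column names are placeholders):

```sql
-- Store the 16-byte binary form of the UUID instead of the 36-character text
CREATE TABLE users (
    id BINARY(16) PRIMARY KEY,
    name VARCHAR(64)
);

INSERT INTO users VALUES (UUID_TO_BIN(UUID()), 'example');

-- Convert back to the textual format when reading
SELECT BIN_TO_UUID(id) AS id, name FROM users;
```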

### UUID format binary order and a clustered PK
### UUID format binary order and clustered primary keys

The `UUID_TO_BIN()` function can be used with one argument, the UUID or with two arguments where the second argument is a `swap_flag`.
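
For illustration only (not part of this diff), the two forms look like the following; the UUID value is the example string from the previous section:

```sql
-- One argument: convert the textual UUID to binary without rearranging any bytes
SELECT UUID_TO_BIN('ab06f63e-8fe7-11ec-a514-5405db7aad56');

-- Two arguments: a nonzero swap_flag swaps the time-low and time-high parts before storing
SELECT UUID_TO_BIN('ab06f63e-8fe7-11ec-a514-5405db7aad56', 1);
```

As the summary above notes, avoid setting the `swap_flag` with TiDB to prevent write hotspots.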

4 changes: 2 additions & 2 deletions dashboard/dashboard-key-visualizer.md
@@ -47,7 +47,7 @@ When you use the TiDB database, the hotspot issue is typical, where high traffic
+ Write adjacent data into a table with the `AUTO_INCREMENT` primary key, which causes a hotspot issue on this table.
+ Write adjacent time data into the time index of a table, which causes a hotspot issue on the table index.

For more details about hotspot, refer to [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md#hotspot-causes)
For more details about hotspot, refer to [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md#hotspot-causes)

### Heatmap

@@ -178,4 +178,4 @@ Regions in the bright areas are the hotspots of read and write traffic, which of

## Address hotspot issues

TiDB has some built-in features to mitigate the common hotspot issue. Refer to [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md) for details.
TiDB has some built-in features to mitigate the common hotspot issue. Refer to [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md) for details.
2 changes: 1 addition & 1 deletion data-type-default-values.md
@@ -70,7 +70,7 @@ CREATE TABLE t3 (
);
```

For more information on how to use UUID, see [UUID best practices](/best-practices/uuid.md).
For more information on how to use UUID, see [Best Practices for Using UUIDs as Primary Keys](/best-practices/uuid.md).

An example for using `JSON`:

4 changes: 2 additions & 2 deletions develop/dev-guide-optimize-sql-best-practices.md
@@ -181,13 +181,13 @@ See [Best Practices for Developing Java Applications with TiDB](https://docs.pin

<CustomContent platform="tidb">

- [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md)
- [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md)

</CustomContent>

<CustomContent platform="tidb-cloud">

- [Highly Concurrent Write Best Practices](https://docs.pingcap.com/tidb/stable/high-concurrency-best-practices)
- [Best Practices for High-Concurrency Writes](https://docs.pingcap.com/tidb/stable/high-concurrency-best-practices)

</CustomContent>

2 changes: 1 addition & 1 deletion stale-read.md
@@ -15,7 +15,7 @@ When you are using Stale Read, TiDB will randomly select a replica for data read

+ Scenario one: If a transaction only involves read operations and is tolerant of data staleness to some extent, you can use Stale Read to get historical data. Using Stale Read, TiDB makes the query requests sent to any replica at the expense of some real-time performance, and thus increases the throughput of query executions. Especially in some scenarios where small tables are queried, if strongly consistent reads are used, leader might be concentrated on a certain storage node, causing the query pressure to be concentrated on that node as well. Therefore, that node might become a bottleneck for the whole query. Stale Read, however, can improve the overall query throughput and significantly improve the query performance.

+ Scenario two: In some scenarios of geo-distributed deployment, if strongly consistent follower reads are used, to make sure that the data read from the Followers is consistent with that stored in the Leader, TiDB requests `Readindex` from different data centers for verification, which increases the access latency for the whole query process. With Stale Read, TiDB accesses the replica in the current data center to read the corresponding data at the expense of some real-time performance, which avoids network latency brought by cross-center connection and reduces the access latency for the entire query. For more information, see [Local Read under Three Data Centers Deployment](/best-practices/three-dc-local-read.md).
+ Scenario two: In some scenarios of geo-distributed deployment, if strongly consistent follower reads are used, to make sure that the data read from the Followers is consistent with that stored in the Leader, TiDB requests `Readindex` from different data centers for verification, which increases the access latency for the whole query process. With Stale Read, TiDB accesses the replica in the current data center to read the corresponding data at the expense of some real-time performance, which avoids network latency brought by cross-center connection and reduces the access latency for the entire query. For more information, see [Best Practices for Local Reads in Three-Data-Center Deployments](/best-practices/three-dc-local-read.md).

</CustomContent>

6 changes: 3 additions & 3 deletions system-variable-reference.md
@@ -291,7 +291,7 @@

Referenced in:

- [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md)
- [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md)
- [System Variables](/system-variables.md#cte_max_recursion_depth)
- [TiDB 5.1 Release Notes](/releases/release-5.1.0.md)

@@ -3335,7 +3335,7 @@ Referenced in:
- [Best Practices for Read-Only Storage Nodes](/best-practices/readonly-nodes.md)
- [Follower Read](/follower-read.md)
- [Follower Read](/develop/dev-guide-use-follower-read.md)
- [Local Read under Three Data Centers Deployment](/best-practices/three-dc-local-read.md)
- [Best Practices for Local Reads in Three-Data-Center Deployments](/best-practices/three-dc-local-read.md)
- [Optimizer Hints](/optimizer-hints.md)
- [SHOW [GLOBAL|SESSION] VARIABLES](/sql-statements/sql-statement-show-variables.md)
- [System Variables](/system-variables.md#tidb_replica_read-new-in-v40)
@@ -3421,7 +3421,7 @@

Referenced in:

- [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md)
- [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md)
- [Limited SQL features on TiDB Cloud](https://docs.pingcap.com/tidbcloud/limited-sql-features)
- [SHOW [GLOBAL|SESSION] VARIABLES](/sql-statements/sql-statement-show-variables.md)
- [Split Region](/sql-statements/sql-statement-split-region.md)
2 changes: 1 addition & 1 deletion tidb-lightning/tidb-lightning-logical-import-mode-usage.md
@@ -62,7 +62,7 @@ When the strategy is `"ignore"`, conflicting data is recorded in the downstream

## Performance tuning

- In the logical import mode, the performance of TiDB Lightning largely depends on the write performance of the target TiDB cluster. If the cluster hits a performance bottleneck, refer to [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md).
- In the logical import mode, the performance of TiDB Lightning largely depends on the write performance of the target TiDB cluster. If the cluster hits a performance bottleneck, refer to [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md).

- If the target TiDB cluster does not hit a write bottleneck, consider increasing the value of `region-concurrency` in TiDB Lightning configuration. The default value of `region-concurrency` is the number of CPU cores. The meaning of `region-concurrency` is different between the physical import mode and the logical import mode. In the logical import mode, `region-concurrency` is the write concurrency.

2 changes: 1 addition & 1 deletion troubleshoot-hot-spot-issues.md
@@ -178,7 +178,7 @@ For more details, see [Coprocessor Cache](/coprocessor-cache.md).

**See also:**

- [Highly Concurrent Write Best Practices](/best-practices/high-concurrency-best-practices.md)
- [Best Practices for High-Concurrency Writes](/best-practices/high-concurrency-best-practices.md)
- [Split Region](/sql-statements/sql-statement-split-region.md)

## Scatter read hotspots