
Commit

Apply suggestions from code review
Co-authored-by: Lilian Lee <[email protected]>
qiancai and lilin90 authored Feb 10, 2025
1 parent 7b7a4cd commit fb0e68a
Showing 1 changed file with 12 additions and 12 deletions.
24 changes: 12 additions & 12 deletions glossary.md
@@ -58,7 +58,7 @@ With the cached table feature, TiDB loads the data of an entire table into the m

A cluster is a group of nodes that work together to provide services. By using clusters in a distributed system, TiDB achieves higher availability and greater scalability compared to a single-node setup.

- In the distributed architecture of TiDB:
+ In the distributed architecture of the TiDB database:

- TiDB nodes provide a scalable SQL layer for client interactions.
- PD nodes provide a resilient metadata layer for TiDB.
@@ -80,17 +80,17 @@ A Common Table Expression (CTE) enables you to define a temporary result set tha

### Continuous Profiling

- Continuous Profiling is a way to observe resource overhead at the system call level. With the support of Continuous Profiling, TiDB provides fine-grained insights into performance issues, helping operations teams identify the root cause using a flame graph. For more information, see [TiDB Dashboard Instance Profiling - Continuous Profiling](/dashboard/continuous-profiling.md).
+ Continuous Profiling is a way to observe resource overhead at the system call level. With Continuous Profiling, TiDB provides fine-grained observations of performance issues, helping operations teams identify the root cause using a flame graph. For more information, see [TiDB Dashboard Instance Profiling - Continuous Profiling](/dashboard/continuous-profiling.md).

### Coprocessor

- A coprocessing mechanism that shares the computation workload with TiDB. It is located in the storage layer (TiKV or TiFlash) and collaboratively processes computations [pushed down](/functions-and-operators/expressions-pushed-down.md) from TiDB on a per-region basis.
+ Coprocessor is a coprocessing mechanism that shares the computation workload with TiDB. It is located in the storage layer (TiKV or TiFlash) and collaboratively processes computations [pushed down](/functions-and-operators/expressions-pushed-down.md) from TiDB on a per-Region basis.
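The per-Region pushdown described above can be illustrated with a minimal Python sketch. This is not TiDB code: the Region contents and the pushed-down `SUM` task are made up for illustration.

```python
# Illustrative sketch (not TiDB internals): coprocessor-style pushdown.
# Each "Region" holds a contiguous range of (key, value) rows.
regions = [
    [(1, 10), (2, 20)],   # Region 1
    [(3, 30), (4, 40)],   # Region 2
    [(5, 50)],            # Region 3
]

def coprocessor_sum(region_rows):
    """Runs in the storage layer: computes a partial SUM(value) so only
    one number (not the raw rows) is sent back to the SQL layer."""
    return sum(value for _key, value in region_rows)

# The SQL layer pushes the same task to every Region, then merges partials.
partials = [coprocessor_sum(r) for r in regions]
total = sum(partials)
print(partials, total)  # [30, 70, 50] 150
```

The point of the design is bandwidth: each Region returns a single partial result instead of shipping its rows to the SQL layer.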

## D

### Dumpling

- Dumpling is a data export tool for exporting data stored in TiDB, MySQL or MariaDB as SQL or CSV data files and can be used to make a logical full backup or export. Dumpling also supports exporting data to Amazon S3.
+ Dumpling is a data export tool for exporting data stored in TiDB, MySQL, or MariaDB as SQL or CSV data files. It can also be used for logical full backups or exports. Additionally, Dumpling supports exporting data to Amazon S3.

For more information, see [Use Dumpling to Export Data](/dumpling-overview.md).
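To make "logical" export concrete, here is a tiny Python sketch (not Dumpling itself) of what a logical dump produces: rows rendered as SQL or CSV text rather than copied as binary storage files. The table name and rows are invented.

```python
# Illustrative sketch (not Dumpling): a "logical" export renders rows
# as SQL statements or CSV text that any compatible database can replay.
rows = [(1, "alice"), (2, "bob")]

def to_sql(table, rows):
    values = ", ".join(f"({r[0]}, '{r[1]}')" for r in rows)
    return f"INSERT INTO {table} VALUES {values};"

def to_csv(rows):
    return "\n".join(f"{r[0]},{r[1]}" for r in rows)

sql_dump = to_sql("users", rows)
csv_dump = to_csv(rows)
print(sql_dump)  # INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
```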

@@ -126,7 +126,7 @@ Dynamic pruning mode is one of the modes that TiDB accesses partitioned tables.

### Expression index

- The expression index is a type of special index that can be created on an expression. Once an expression index is created, TiDB can use the index for the expression-based query, which significantly improves the query performance.
+ The expression index is a special type of index created on an expression. Once an expression index is created, TiDB can use this index for expression-based queries, significantly improving query performance.

For more information, see [CREATE INDEX - Expression index](/sql-statements/sql-statement-create-index.md#expression-index).
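A minimal Python sketch (not TiDB internals) of why indexing an expression helps: a hypothetical index maps the precomputed expression value, here `LOWER(name)`, to row IDs, so an expression-based lookup avoids a full scan. The table contents are made up.

```python
# Illustrative sketch: an "expression index" mapping LOWER(name) -> row ids.
table = {1: "Alice", 2: "BOB", 3: "alice"}

# Build the index once: expression value -> set of matching row ids.
expr_index = {}
for row_id, name in table.items():
    expr_index.setdefault(name.lower(), set()).add(row_id)

# A query filtering on LOWER(name) = 'alice' becomes a direct lookup
# instead of evaluating the expression against every row.
matches = expr_index.get("alice", set())
print(sorted(matches))  # [1, 3]
```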

@@ -148,7 +148,7 @@ Global Transaction Identifiers (GTIDs) are unique transaction IDs used in MySQL

### Hotspot

- Hotspot refers to the phenomenon where the read and/or write workloads of TiKV are concentrated on one or several regions or nodes, which might cause performance bottlenecks and prevent optimal performance. To solve hotspot issues, see [Troubleshoot Hotspot Issues](/troubleshoot-hot-spot-issues.md).
+ Hotspot refers to a situation where the read and write workloads in TiKV are concentrated on one or a few Regions or nodes. This can lead to performance bottlenecks, preventing optimal system performance. To solve hotspot issues, see [Troubleshoot Hotspot Issues](/troubleshoot-hot-spot-issues.md).
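The concentration of load described above can be sketched in a few lines of Python. This is an illustration only: the Region names, request counts, and the "more than half of all reads" threshold are all invented, not a TiDB heuristic.

```python
# Illustrative sketch: spotting a hotspot -- one Region serving a
# disproportionate share of the read traffic. Counts are made up.
region_reads = {"region-1": 120, "region-2": 95, "region-3": 9800}

total = sum(region_reads.values())
# Hypothetical threshold: flag any Region serving over half of all reads.
hotspots = [r for r, n in region_reads.items() if n > total / 2]
print(hotspots)  # ['region-3']
```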

### Hybrid Transactional and Analytical Processing (HTAP)

@@ -186,7 +186,7 @@ Lightweight Directory Access Protocol (LDAP) is a standardized way of accessing

### Lock View

- The Lock View feature is used to provide more information about lock conflicts and lock waits in pessimistic locking, making it convenient for DBAs to observe transaction locking situations and troubleshoot deadlock issues.
+ The Lock View feature provides more information about lock conflicts and lock waits in pessimistic locking, making it convenient for DBAs to observe transaction locking situations and troubleshoot deadlock issues.

For more information, see system table documentation: [`TIDB_TRX`](/information-schema/information-schema-tidb-trx.md), [`DATA_LOCK_WAITS`](/information-schema/information-schema-data-lock-waits.md), and [`DEADLOCKS`](/information-schema/information-schema-deadlocks.md).

@@ -243,7 +243,7 @@ Currently, available steps generated by PD include:

### Optimistic transaction

- Optimistic transactions are transactions that use optimistic concurrency control and generally do not cause conflicts in concurrent environments. After enabling optimistic transactions, TiDB only checks for conflicts when the transaction is finally committed. The optimistic transaction mode is suitable for concurrent scenarios with more reads and fewer writes, which can improve the performance of TiDB.
+ Optimistic transactions are transactions that use optimistic concurrency control and generally do not cause conflicts in concurrent environments. After enabling optimistic transactions, TiDB checks for conflicts only when the transaction is finally committed. The optimistic transaction mode is suitable for read-heavy and write-light concurrent scenarios, which can improve the performance of TiDB.

For more information, see [TiDB Optimistic Transaction Model](/optimistic-transaction.md).
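The commit-time conflict check can be illustrated with a minimal Python sketch of optimistic concurrency control. This is not TiDB's implementation: the single-key store, version numbers, and `commit` helper are invented to show the idea that no locks are held while the transaction runs, and the loser of a conflict aborts at commit.

```python
# Illustrative sketch (not TiDB's implementation): optimistic concurrency
# control via a version check at commit time.
store = {"balance": (100, 1)}  # key -> (value, version)

def read(key):
    return store[key]  # value plus the version seen at read time

def commit(key, new_value, version_seen):
    value, current_version = store[key]
    if current_version != version_seen:
        return False  # conflict: another commit happened since our read
    store[key] = (new_value, current_version + 1)
    return True

# Two transactions read the same version, then both try to commit.
_, v = read("balance")
ok_first = commit("balance", 90, v)   # succeeds, bumps version to 2
ok_second = commit("balance", 80, v)  # fails: version changed underneath
print(ok_first, ok_second)  # True False
```

With mostly reads and few writes, the version check almost always passes, which is why this mode suits read-heavy workloads.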

@@ -255,7 +255,7 @@ For more information, see [TiDB Optimistic Transaction Model](/optimistic-transa

### PD Control (pd-ctl)

- PD Control (pd-ctl) is a command-line tool to interface with the placement driver (PD) of the cluster. You can use it to obtain cluster status information and modify the cluster. For more information, see [PD Control User Guide](/pd-control.md).
+ PD Control (pd-ctl) is a command-line tool used to interact with the Placement Driver (PD) in the TiDB cluster. You can use it to obtain cluster status information and modify the cluster configuration. For more information, see [PD Control User Guide](/pd-control.md).

### Pending/Down

@@ -267,7 +267,7 @@ Placement Driver (PD) is a core component in the [TiDB Architecture](/tidb-archi

### Placement Rules

- Placement rules are used to configure the placement of data in a TiKV cluster. With this feature, you can specify the deployment of tables and partitions to different regions, data centers, cabinets, and hosts. Use cases include optimizing data availability strategies at low cost, ensuring that local data replicas are available for local stale reads, and complying with local data compliance requirements.
+ Placement rules are used to configure the placement of data in a TiKV cluster. With this feature, you can specify the deployment of tables and partitions to different regions, data centers, cabinets, or hosts. Use cases include optimizing data availability strategies at low cost, ensuring that local data replicas are available for local stale reads, and complying with local data compliance requirements.


For more information, see [Placement Rules in SQL](/placement-rules-in-sql.md).
@@ -345,7 +345,7 @@ For more information, see [System Variables documentation - `tidb_enable_enhance

### Stale Read

- Stale Read is a mechanism that TiDB applies to read historical versions of data stored in TiDB. Using this mechanism, you can read the corresponding historical data of a specific point in time or within a specified time range, and thus save the latency brought by data replication between storage nodes. When you are using Stale Read, TiDB will randomly select a replica for data reading, which means that all replicas are available for data reading.
+ Stale Read is a mechanism that TiDB applies to read historical versions of data stored in TiDB. Using this mechanism, you can read the corresponding historical data of a specific point in time or within a specified time range, and thus save the latency brought by data replication between storage nodes. When you use Stale Read, TiDB randomly selects a replica for data reading, which means that all replicas are available for data reading.

For more information, see [Stale Read](/stale-read.md).
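The two ingredients of Stale Read, reading the version as of a timestamp and serving it from any replica, can be sketched in Python. This is an illustration only: the version lists, timestamps, and replica names are invented, not TiDB's MVCC format.

```python
# Illustrative sketch: a stale read picks the newest version committed at
# or before the requested timestamp, from any randomly chosen replica.
import random

versions = {"k1": [(100, "v1"), (200, "v2"), (300, "v3")]}  # (commit_ts, value)
replicas = ["replica-a", "replica-b", "replica-c"]

def stale_read(key, read_ts):
    replica = random.choice(replicas)  # every replica can serve the read
    candidates = [v for ts, v in versions[key] if ts <= read_ts]
    return replica, (candidates[-1] if candidates else None)

replica, value = stale_read("k1", read_ts=250)
print(value)  # "v2": newest version committed at or before ts 250
```

Because the read targets a fixed historical timestamp, it never has to wait for replication to catch up, which is where the latency saving comes from.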

@@ -377,7 +377,7 @@ For more information on the concepts and terminology of TiDB Lightning, see [TiD

### TiFlash

- [TiFlash](/tiflash/tiflash-overview.md) is a key component of TiDB's HTAP architecture. It is a columnar extension of TiKV that provides both strong consistency and good isolation. TiFlash maintains columnar replicas by asynchronously replicating data from TiKV using the **Raft Learner protocol**. For reads, it leverages the **Raft consensus index** and **MVCC (Multi-Version Concurrency Control)** to achieve **Snapshot Isolation** consistency.This architecture effectively addresses isolation and synchronization challenges in HTAP workloads, enabling efficient analytical queries while maintaining real-time data consistency.
+ [TiFlash](/tiflash/tiflash-overview.md) is a key component of TiDB's HTAP architecture. It is a columnar extension of TiKV that provides both strong consistency and good isolation. TiFlash maintains columnar replicas by asynchronously replicating data from TiKV using the **Raft Learner protocol**. For reads, it leverages the **Raft consensus index** and **MVCC (Multi-Version Concurrency Control)** to achieve **Snapshot Isolation** consistency. This architecture effectively addresses isolation and synchronization challenges in HTAP workloads, enabling efficient analytical queries while maintaining real-time data consistency.
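The "asynchronous learner that still serves consistent reads" idea can be sketched in Python: a learner applies the leader's log in the background, and a consistent read first catches up to the leader's commit index before answering. This is a toy model, not TiFlash internals; the log entries and class are invented.

```python
# Illustrative sketch (not TiFlash internals): a Raft learner replicating
# asynchronously, with reads gated on the leader's commit index.
leader_log = ["set a=1", "set a=2", "set a=3"]
leader_commit_index = 3

class Learner:
    def __init__(self):
        self.applied_index = 0
        self.state = {}

    def apply_next(self):
        # Asynchronous replication: entries arrive and apply over time.
        entry = leader_log[self.applied_index]
        key, value = entry.removeprefix("set ").split("=")
        self.state[key] = int(value)
        self.applied_index += 1

    def consistent_read(self, key, required_index):
        # Learner read: wait until we have applied at least the leader's
        # commit index, so the snapshot we read from is not stale.
        while self.applied_index < required_index:
            self.apply_next()
        return self.state[key]

learner = Learner()
result = learner.consistent_read("a", leader_commit_index)
print(result)  # 3
```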

### Timestamp Oracle (TSO)


