Welcome to today’s article on how to install a single node TiDB database cluster on a CentOS 8 Linux server. TiDB is a MySQL-compatible, open-source NewSQL database with support for Hybrid Transactional and Analytical Processing (HTAP) workloads. The key features of TiDB are high availability, horizontal scalability and strong consistency. This database solution covers OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP services.

This setup is performed on a single node instance and is meant for Lab and Dev environments. This guide should not be used for production environments, which require a highly available cluster with at least three machines. Consult the official TiDB documentation pages for production setup requirements and recommendations, and check the release notes to understand all new software features.

Install Single Node TiDB database Cluster on CentOS 8

This setup is done on a server with the following hardware and software requirements:

  • OS: CentOS 8 (64 bit)
  • Memory: 16 GB
  • CPU: 8 core
  • Disk Space: 50GB
  • root user SSH access
  • Internet access on the server

If you run intense operations with other components such as PD, TiKV, TiFlash, TiCDC and Monitor, these minimum requirements may not suffice. Review the recommendations provided in the documentation before committing to a particular component.

Step 1: Update the Server

Before we can start the installation of the TiDB database on CentOS 8, log in to the machine and perform a system update.

sudo dnf -y update

Reboot the system after an upgrade.

sudo systemctl reboot

Step 2: Disable system swap and firewalld

TiDB requires sufficient memory for its operations, and using swap is discouraged. Therefore, it is recommended to disable the system swap permanently.

echo "vm.swappiness = 0" | sudo tee -a /etc/sysctl.conf
sudo swapoff -a && sudo swapon -a
sudo sysctl -p
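
The commands above set swappiness to zero and flush any data currently in swap, but the swap device itself stays configured. If you want swap fully disabled across reboots, you can additionally comment out the swap entry in /etc/fstab; a minimal sketch, assuming a standard fstab layout and GNU sed:

sudo cp /etc/fstab /etc/fstab.bak                 # keep a backup before editing
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab        # comment out any swap line
free -h                                           # Swap row should show 0B used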

In TiDB clusters, the access ports between nodes must be open to allow the transmission of information such as read and write requests and data heartbeats. For this Lab setup I recommend disabling firewalld entirely. First check whether it is running:

sudo firewall-cmd --state
sudo systemctl status firewalld.service
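
If it is active, stop and disable it so it does not start again after a reboot:

sudo systemctl disable --now firewalld
sudo systemctl is-active firewalld   # should report "inactive"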

If you prefer to keep the firewall and open only the required ports, check the Network Ports requirements document.

Step 3: Download and install TiUP

Next, download the TiUP installer script to the CentOS 8 machine.

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh -o tiup_installer.sh

Make the script executable.

chmod +x tiup_installer.sh

Make sure the tar package is installed.

sudo yum -y install tar

Execute the script to start the installation.

sudo ./tiup_installer.sh

Execution output:

WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

Source the updated bash profile.

source /root/.bash_profile
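
You can confirm the tiup binary is on your PATH before continuing:

which tiup        # expected: /root/.tiup/bin/tiup
tiup --version    # prints the installed TiUP version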

The next step is to install the cluster component of TiUP:

# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.1.2-linux-amd64.tar.gz 9.87 MiB / 9.87 MiB 100.00% 9.28 MiB p/s
Starting component `cluster`:
Deploy a TiDB cluster for production

If the TiUP cluster component is already installed on the machine, update it to the latest version:

# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.1.2-linux-amd64.tar.gz 4.32 MiB / 4.32 MiB 100.00% 4.91 MiB p/s
Updated successfully!
component cluster version v1.1.2 is already installed
Updated successfully!

Step 4: Create and start local TiDB cluster

It is recommended to increase the connection limit of the sshd service since TiUP needs to simulate deployment on multiple machines.

# vi /etc/ssh/sshd_config
MaxSessions 30
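
If you prefer a non-interactive edit, a one-liner such as the following should work on the stock CentOS 8 sshd_config (it rewrites or uncomments the existing MaxSessions line; adjust the pattern if your config differs):

sudo sed -i 's/^#\?MaxSessions.*/MaxSessions 30/' /etc/ssh/sshd_config
grep MaxSessions /etc/ssh/sshd_config   # verify the new value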

Restart the sshd service after making the change.

sudo systemctl restart sshd

Create a topology configuration file called tidb-topology.yaml.

cat >tidb-topology.yaml<<EOF
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "https://computingforgeeks.com/tidb-deploy"
 data_dir: "https://computingforgeeks.com/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

tidb_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

tikv_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use
   port: 20160
   status_port: 20180

 - host: 127.0.0.1 # Replace with the server IP address you want to use
   port: 20161
   status_port: 20181

 - host: 127.0.0.1 # Replace with the server IP address you want to use
   port: 20162
   status_port: 20182

tiflash_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

monitoring_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use

grafana_servers:
 - host: 127.0.0.1 # Replace with the server IP address you want to use
EOF

Where:

  • user: “tidb”: Use the tidb system user (automatically created during deployment) to perform the internal management of the cluster. By default, use port 22 to log in to the target machine via SSH.
  • replication.enable-placement-rules: This PD parameter is set to ensure that TiFlash runs normally.
  • host: The IP of the target machine.
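
Newer TiUP releases also provide a check subcommand that validates the target host (kernel parameters, ports, directories) against the topology file before you deploy. If your TiUP version includes it, a dry run looks like this; availability of the check subcommand in your release is an assumption here:

tiup cluster check ./tidb-topology.yaml --user root -p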

Run the cluster deployment command:

tiup cluster deploy <cluster-name> <tidb-version> ./tidb-topology.yaml --user root -p

Replace:

  • <cluster-name>: the cluster name you want to use.
  • <tidb-version>: the TiDB cluster version. Get all supported TiDB versions using the following command:
# tiup list tidb

I’ll use the latest version as returned by the above command:

# tiup cluster deploy local-tidb  v4.0.6 ./tidb-topology.yaml --user root -p

Press the “y” key and provide the root user’s password to complete the deployment:

Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:
  Generate SSH keys ... Done
  Download TiDB components
......

You should see TiDB components being downloaded.

Input SSH password:
  Generate SSH keys ... Done
  Download TiDB components
  - Download pd:v4.0.6 (linux/amd64) ... Done
  - Download tikv:v4.0.6 (linux/amd64) ... Done
  - Download tidb:v4.0.6 (linux/amd64) ... Done
  - Download tiflash:v4.0.6 (linux/amd64) ... Done
  - Download prometheus:v4.0.6 (linux/amd64) ... Done
  - Download grafana:v4.0.6 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
  Initialize target host environments
  - Prepare 127.0.0.1:22 ... Done
  Copy files
  - Copy pd -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tidb -> 127.0.0.1 ... Done
  - Copy tiflash -> 127.0.0.1 ... Done
  - Copy prometheus -> 127.0.0.1 ... Done
  - Copy grafana -> 127.0.0.1 ... Done
  - Copy node_exporter -> 127.0.0.1 ... Done
  - Copy blackbox_exporter -> 127.0.0.1 ... Done
  Check status
Deployed cluster `local-tidb` successfully, you can start the cluster via `tiup cluster start local-tidb`

Start your cluster:

# tiup cluster start local-tidb

Sample output:

....
Starting component pd
	Starting instance pd 127.0.0.1:2379
	Start pd 127.0.0.1:2379 success
Starting component node_exporter
	Starting instance 127.0.0.1
	Start 127.0.0.1 success
Starting component blackbox_exporter
	Starting instance 127.0.0.1
	Start 127.0.0.1 success
Starting component tikv
	Starting instance tikv 127.0.0.1:20162
	Starting instance tikv 127.0.0.1:20160
	Starting instance tikv 127.0.0.1:20161
	Start tikv 127.0.0.1:20161 success
	Start tikv 127.0.0.1:20162 success
	Start tikv 127.0.0.1:20160 success
Starting component tidb
	Starting instance tidb 127.0.0.1:4000
	Start tidb 127.0.0.1:4000 success
....

Step 5: Access TiDB cluster

To view the currently deployed cluster list:

# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.1.2/tiup-cluster list
Name        User  Version  Path                                             PrivateKey
----        ----  -------  ----                                             ----------
local-tidb  tidb  v4.0.6   /root/.tiup/storage/cluster/clusters/local-tidb  /root/.tiup/storage/cluster/clusters/local-tidb/ssh/id_rsa

To view the cluster topology and status:

# tiup cluster display local-tidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.1.2/tiup-cluster display local-tidb
tidb Cluster: local-tidb
tidb Version: v4.0.6
ID               Role        Host       Ports                            OS/Arch       Status    Data Dir                    Deploy Dir
--               ----        ----       -----                            -------       ------    --------                    ----------
127.0.0.1:3000   grafana     127.0.0.1  3000                             linux/x86_64  inactive  -                           /tidb-deploy/grafana-3000
127.0.0.1:2379   pd          127.0.0.1  2379/2380                        linux/x86_64  Up|L|UI   /tidb-data/pd-2379          /tidb-deploy/pd-2379
127.0.0.1:9090   prometheus  127.0.0.1  9090                             linux/x86_64  inactive  /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
127.0.0.1:4000   tidb        127.0.0.1  4000/10080                       linux/x86_64  Up        -                           /tidb-deploy/tidb-4000
127.0.0.1:9000   tiflash     127.0.0.1  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
127.0.0.1:20160  tikv        127.0.0.1  20160/20180                      linux/x86_64  Up        /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
127.0.0.1:20161  tikv        127.0.0.1  20161/20181                      linux/x86_64  Up        /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
127.0.0.1:20162  tikv        127.0.0.1  20162/20182                      linux/x86_64  Up        /tidb-data/tikv-20162       /tidb-deploy/tikv-20162

Once it is started, you can access the TiDB cluster using the mysql command-line client tool.

# yum install mariadb -y
# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.6 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> SELECT VERSION();
+--------------------+
| VERSION()          |
+--------------------+
| 5.7.25-TiDB-v4.0.6 |
+--------------------+
1 row in set (0.001 sec)

MySQL [(none)]> EXIT
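
As a quick smoke test you can also run a few statements non-interactively; the database and table names below are just examples:

mysql -h 127.0.0.1 -P 4000 -u root -e "CREATE DATABASE IF NOT EXISTS lab;"
mysql -h 127.0.0.1 -P 4000 -u root -e "CREATE TABLE IF NOT EXISTS lab.t1 (id INT PRIMARY KEY, note VARCHAR(64));"
mysql -h 127.0.0.1 -P 4000 -u root -e "INSERT INTO lab.t1 VALUES (1, 'hello tidb');"
mysql -h 127.0.0.1 -P 4000 -u root lab -e "SELECT * FROM t1;"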

Dashboards access: Grafana is available on port 3000 of the server (http://<server-ip>:3000), and the TiDB Dashboard is served by the PD component, typically at http://<server-ip>:2379/dashboard.

What’s next