
TiDB v5.3.0 -> v5.4.2 -> v6.1.0 Upgrade, TiDB/PD/TiKV/TiFlash Scaling, and TiSpark Deployment Guide

The deployment roadmap of this document is as follows:

  1. Offline deployment of TiDB v5.3.0 (TiDB*3, PD*3, TiKV*3);
  2. Source-code deployment of HAProxy v2.5.0;
  3. Offline upgrade from TiDB v5.3.0 to TiDB v5.4.2;
  4. Scale-in/scale-out of TiDB Server and PD;
  5. Scale-out/scale-in of TiKV and TiFlash;
  6. Deployment of TiSpark (TiSpark*3);
  7. Offline upgrade from TiDB v5.4.2 to TiDB v6.1.

1. Offline Deployment

1.1. Topology Planning

Instance            Count  Recommended Spec     OS                           IP                 Ports
TiDB                3      16C/32G/SAS/2x10GbE  CentOS 7.3/RHEL 7.3/OEL 7.3  192.168.3.221-223  4000 (application and DBA tool access), 10080 (TiDB status report), 9100 (node system metrics report)
PD                  3      4C/8G/SSD/2x10GbE    CentOS 7.3/RHEL 7.3/OEL 7.3  192.168.3.221-223  2379 (TiDB/PD communication), 2380 (inter-PD communication), 9100 (node system metrics report)
TiKV                3      16C/32G/SSD/2x10GbE  CentOS 7.3/RHEL 7.3/OEL 7.3  192.168.3.224-226  20160 (TiKV communication), 20180 (TiKV status report), 9100 (node system metrics report)
Monitoring&Grafana  1      8C/16G/SAS/GbE       CentOS 7.3/RHEL 7.3/OEL 7.3  192.168.3.221      9090 (Prometheus service), 3000 (Grafana web access), 9093 (alert web service), 9094 (alert communication), 9100 (node system metrics report)

1.2. Ports to Open

Component          Default Port  Description
TiDB               4000          Port for application and DBA tool access
TiDB               10080         TiDB status report port
TiKV               20160         TiKV communication port
TiKV               20180         TiKV status report port
PD                 2379          Communication port between TiDB and PD
PD                 2380          Inter-PD communication port within the cluster
TiFlash            9000          TiFlash TCP service port
TiFlash            8123          TiFlash HTTP service port
TiFlash            3930          TiFlash Raft and Coprocessor service port
TiFlash            20170         TiFlash Proxy service port
TiFlash            20292         Port for Prometheus to pull TiFlash Proxy metrics
TiFlash            8234          Port for Prometheus to pull TiFlash metrics
Pump               8250          Pump communication port
Drainer            8249          Drainer communication port
CDC                8300          CDC communication port
Prometheus         9090          Prometheus service communication port
Node_exporter      9100          Port for reporting system information of each cluster node
Blackbox_exporter  9115          Port used by Blackbox_exporter to probe the TiDB cluster
Grafana            3000          Web service port for external client (browser) access
Alertmanager       9093          Alert web service port
Alertmanager       9094          Alert communication port
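If a host firewall must remain enabled (this document simply disables firewalld in section 1.3.6), the ports above can be opened explicitly instead. A minimal firewalld sketch for a TiDB node; the port list is an assumption to adjust per role:

~]# firewall-cmd --permanent --add-port={4000,10080,9100,9115}/tcp
~]# firewall-cmd --reload
~]# firewall-cmd --list-ports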

1.3. Host Configuration

1.3.1. Replace the YUM Repository

~]# cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
~]# yum clean all
~]# yum makecache

1.3.2. SSH Mutual Trust and Passwordless Login

On the control machine, set up mutual trust for the root user so that it can log in to each node without a password.

~]# ssh-keygen -t rsa
~]# ssh-copy-id root@192.168.3.221
~]# ssh-copy-id root@192.168.3.222
~]# ssh-copy-id root@192.168.3.223
~]# ssh-copy-id root@192.168.3.224
~]# ssh-copy-id root@192.168.3.225
~]# ssh-copy-id root@192.168.3.226

1.3.3. TiKV Data Disk Optimization

Take /dev/sdb as an example.

  1. Partition and format the disk
~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors

~]# parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext4 1 -1

[root@localhost ~]# mkfs.ext4 /dev/sdb1 
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242368 blocks
262118 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
  2. Check the UUID of the partition

Here, the UUID of sdb1 is 49e00d02-2f5b-4b05-8e0e-ac2f524a97ae.

[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL           UUID                                   MOUNTPOINT
sda                                                                                
├─sda1          ext4                        8e0e85e5-fa82-4f2b-a871-26733d6d2995   /boot
└─sda2          LVM2_member                 KKs6SL-IzU3-62b3-KXZd-a2GR-1tvQ-icleoe 
  └─centos-root ext4                        91645e3c-486c-4bd3-8663-aa425bf8d89d   /
sdb                                                                                
└─sdb1          ext4                        49e00d02-2f5b-4b05-8e0e-ac2f524a97ae   
sr0             iso9660     CentOS 7 x86_64 2020-11-04-11-36-43-00
  3. Mount the partition. Edit the /etc/fstab file and add the nodelalloc mount option.
~]# echo "UUID=49e00d02-2f5b-4b05-8e0e-ac2f524a97ae /tidb-data ext4 defaults,nodelalloc,noatime 0 2" >> /etc/fstab

~]# mkdir /tidb-data && mount /tidb-data

~]# mount -t ext4
/dev/mapper/centos-root on / type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sda1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdb1 on /tidb-data type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)

1.3.4. Disable Swap

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "echo \"vm.swappiness = 0\">> /etc/sysctl.conf"
    ssh root@${node_ip} "swapoff -a && swapon -a" 
    ssh root@${node_ip} "sysctl -p"
  done

Running swapoff -a followed by swapon -a refreshes swap: the data held in swap is flushed back into memory and swap is emptied.
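A quick check that swap usage has dropped back to zero on a node (sizes are illustrative):

~]# free -h | grep -i swap
Swap:          2.0G          0B        2.0G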

1.3.5. Disable SELinux

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "setenforce 0"
    ssh root@${node_ip} "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config"
    ssh root@${node_ip} "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
  done

1.3.6. Disable the Firewall

~]# firewall-cmd --state
~]# systemctl status firewalld.service

~]# systemctl stop firewalld.service
~]# systemctl disable firewalld.service

1.3.7. Time Synchronization

  1. Confirm the time zone
~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  2. Synchronize the clock

TiDB is a distributed database system that requires clock synchronization between nodes to guarantee linear consistency of transactions under the ACID model. Time can be synchronized via the pool.ntp.org time service on the Internet, or, in an offline environment, via a self-built NTP service.

Here, synchronizing with the public pool.ntp.org time server is used as an example.

~]# yum install ntp ntpdate 
~]# ntpdate pool.ntp.org
~]# systemctl start ntpd.service 
~]# systemctl enable ntpd.service
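After ntpd has been running for a while, the synchronization status can be verified:

~]# ntpstat      # should report "synchronised to NTP server ..."
~]# ntpq -p      # the peer currently in use is marked with an asterisk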

1.3.8. System Optimization

The items to optimize are:

  1. Disable Transparent Huge Pages (THP). A bracketed `[never]` means THP is disabled; in the output below, `[always]` is selected, so THP is still enabled:
~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
  2. Optimize I/O scheduling. Assuming the data disk is /dev/sdb, the scheduler needs to be changed to noop:
~]# cat /sys/block/sdb/queue/scheduler
noop [deadline] cfq

Check ID_SERIAL, the unique identifier of the data disk partition:

~]# udevadm info --name=/dev/sdb | grep ID_SERIAL
  3. CPU power policy. `The governor "powersave"` indicates that the cpufreq power policy is powersave, which needs to be changed to the performance policy. For virtual machines or cloud hosts no adjustment is needed, and the command output is usually `Unable to determine current policy`:
~]# cpupower frequency-info --policy
analyzing CPU 0:
current policy: frequency should be within 1.20 GHz and 3.10 GHz.
              The governor "powersave" may decide which speed to use within this range.

1.3.8.1. Using tuned (Recommended)

  1. Check the current tuned profile
~]# tuned-adm list
Available profiles:
- balanced                    - General non-specialized tuned profile
- desktop                     - Optimize for the desktop use-case
- hpc-compute                 - Optimize for HPC compute workloads
- latency-performance         - Optimize for deterministic performance at the cost of increased power consumption
- network-latency             - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
- network-throughput          - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
- powersave                   - Optimize for low power consumption
- throughput-performance      - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest               - Optimize for running inside a virtual guest
- virtual-host                - Optimize for running KVM guests
Current active profile: virtual-guest
  2. Create a new tuned profile

Create the new profile by appending tuning on top of the balanced profile.

~]# mkdir /etc/tuned/balanced-tidb-optimal/
~]# vi /etc/tuned/balanced-tidb-optimal/tuned.conf

[main]
include=balanced

[cpu]
governor=performance

[vm]
transparent_hugepages=never

[disk]
devices_udev_regex=(ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi1)
elevator=noop

Separate the ID_SERIAL values of multiple disks with vertical bars, for example:

[disk]
devices_udev_regex=(ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi1)|(ID_SERIAL=36d0946606d79f90025f3e09a0c1f9e81)
elevator=noop
  3. Apply the new profile
~]# tuned-adm profile balanced-tidb-optimal
  4. Verify the optimization results
cat /sys/kernel/mm/transparent_hugepage/enabled && \
cat /sys/block/sdb/queue/scheduler && \
cpupower frequency-info --policy
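Expected results after the profile is applied (abbreviated, matching the output formats shown earlier):

always madvise [never]
[noop] deadline cfq
analyzing CPU 0:
current policy: frequency should be within 1.20 GHz and 3.10 GHz.
              The governor "performance" may decide which speed to use within this range.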

If disabling THP via tuned does not take effect, it can be disabled as follows:

  1. Check the default boot kernel
~]# grubby --default-kernel
/boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
  2. Append the kernel parameter that disables THP
~]# grubby --args="transparent_hugepage=never" --update-kernel /boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
~]# grubby --info /boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
args="ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rhgb quiet LANG=en_US.UTF-8 >transparent_hugepage=never"
root=/dev/mapper/centos-root
initrd=/boot/initramfs-3.10.0-1160.71.1.el7.x86_64.img
title=CentOS Linux (3.10.0-1160.71.1.el7.x86_64) 7 (Core)
  3. Disable THP immediately
~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag

1.3.8.2. Kernel Optimization

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "echo \"fs.file-max = 1000000\" >> /etc/sysctl.conf"
    ssh root@${node_ip} "echo \"net.core.somaxconn = 32768\" >> /etc/sysctl.conf"
    ssh root@${node_ip} "echo \"net.ipv4.tcp_tw_recycle = 0\" >> /etc/sysctl.conf"
    ssh root@${node_ip} "echo \"net.ipv4.tcp_syncookies = 0\" >> /etc/sysctl.conf"
    ssh root@${node_ip} "echo \"vm.overcommit_memory = 1\" >> /etc/sysctl.conf"
    ssh root@${node_ip} "sysctl -p"
  done
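To confirm the parameters are active (sysctl -p above already prints them; this re-reads them explicitly):

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "sysctl fs.file-max net.core.somaxconn vm.overcommit_memory"
  done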

1.3.9. User Creation and Resource Limits

1.3.9.1. Create the User

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "useradd tidb && passwd tidb"
  done

The password of the tidb user is tidb123.

1.3.9.2. Resource Limits

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "echo \"tidb soft nofile 1000000\" >> /etc/security/limits.conf"
    ssh root@${node_ip} "echo \"tidb hard nofile 1000000\" >> /etc/security/limits.conf"
    ssh root@${node_ip} "echo \"tidb soft stack 32768\" >> /etc/security/limits.conf"
    ssh root@${node_ip} "echo \"tidb hard stack 32768\" >> /etc/security/limits.conf"
  done
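The limits take effect for new sessions of the tidb user; a quick check (assumes the tidb user from 1.3.9.1 already exists):

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "su - tidb -c 'ulimit -n -s'"
  done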

1.3.9.3. sudo Privileges

Grant the tidb user passwordless sudo privileges:

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "echo \"tidb ALL=(ALL) NOPASSWD: ALL\" >> /etc/sudoers"
  done

Log in to each target node as the tidb user and confirm that running `sudo su - root` requires no password, which indicates the privilege was added successfully.

1.3.9.4. Passwordless Login

Run the following as the tidb user on the control machine:

~]$ id
uid=1000(tidb) gid=1000(tidb) groups=1000(tidb) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
~]$ ssh-keygen -t rsa
~]$ ssh-copy-id tidb@192.168.3.221
~]$ ssh-copy-id tidb@192.168.3.222
~]$ ssh-copy-id tidb@192.168.3.223
~]$ ssh-copy-id tidb@192.168.3.224
~]$ ssh-copy-id tidb@192.168.3.225
~]$ ssh-copy-id tidb@192.168.3.226

Verify passwordless login for the tidb user:

~]$ id
uid=1000(tidb) gid=1000(tidb) groups=1000(tidb) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

~]$ 
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh tidb@${node_ip} "date"
  done

1.3.10. Install numactl

1.3.10.1. Configure a Local YUM Repository

  • Mount the installation image
~]# mkdir -p /mnt/yum
~]# mount -o loop /dev/cdrom /mnt/yum
  • Configure the local repo source
~]# cat > /etc/yum.repos.d/local.repo << EOF
[Packages]
name=Redhat Enterprise Linux 7.9
baseurl=file:///mnt/yum/
enabled=1 
gpgcheck=0 
gpgkey=file:///mnt/yum/RPM-GPG-KEY-redhat-release
EOF
  • Generate the YUM cache
~]# yum clean all
~]# yum makecache

1.3.10.2. Install numactl

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum -y install numactl"
  done
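Verify the installation; tiup cluster check later reports this as `numactl: policy: default`:

for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "numactl --show | head -1"
  done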

1.4. Prepare the Offline Mirror Package

You can download the TiDB Server offline mirror package directly from the TiDB official website, or build an offline mirror package yourself with the packaging tool as needed.

1.4.1. Option 1: Download the TiDB Server Offline Mirror Package (Includes the TiUP Offline Component Package)

Upload the offline mirror package to the control machine.

https://pingcap.com/zh/product#SelectProduct

wget https://download.pingcap.org/tidb-community-server-v5.3.0-linux-amd64.tar.gz

1.4.2. Option 2: Manually Build the Offline Mirror Package

  1. Install the TiUP tool:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source .bash_profile
which tiup
  2. Build the offline mirror with TiUP
tiup mirror clone tidb-community-server-${version}-linux-amd64 ${version} --os=linux --arch=amd64
tar czvf tidb-community-server-${version}-linux-amd64.tar.gz tidb-community-server-${version}-linux-amd64

At this point, tidb-community-server-${version}-linux-amd64.tar.gz is a self-contained offline environment package.

1.4.2.1. Adjust the Contents of the Offline Package

  1. You can specify components, versions, and other details via parameters to obtain a partial offline mirror:
tiup mirror clone tiup-custom-mirror-v1.7.0 --tiup v1.7.0 --cluster v1.7.0
tar czvf tiup-custom-mirror-v1.7.0.tar.gz tiup-custom-mirror-v1.7.0

Upload the customized offline package to the offline control machine.

  2. On the control machine in the isolated environment, check the path of the offline mirror currently in use:
tiup mirror show

If it reports that the show command does not exist, you may be running an older version of TiUP. In that case, the mirror address in use can be found in $HOME/.tiup/tiup.toml. Record this mirror address; subsequent steps refer to it as the variable ${base_mirror}.
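A sketch for capturing that address into ${base_mirror}, assuming tiup.toml stores it under a `mirror` key (verify against your TiUP version):

base_mirror=$(awk -F'"' '/mirror/{print $2}' $HOME/.tiup/tiup.toml)
echo ${base_mirror}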

  3. Merge the partial offline mirror into the existing offline mirror:
# Copy the keys directory from the current offline mirror into the $HOME/.tiup directory:
cp -r ${base_mirror}/keys $HOME/.tiup/

# Use the TiUP command to merge the partial offline mirror into the mirror currently in use:
tiup mirror merge tiup-custom-mirror-v1.7.0

Check the result with the tiup list command.
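For example:

tiup list            # all components available in the merged mirror
tiup list tidb       # confirm the expected TiDB versions are now present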

1.5. Deploy the TiDB Cluster Offline

1.5.1. Deploy the TiUP Component

Deploy the TiUP component as the tidb user:

~]$ id
uid=1000(tidb) gid=1000(tidb) groups=1000(tidb) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

~]$ sudo chown tidb:tidb tidb-community-server-v5.3.0-linux-amd64.tar.gz 
~]$ ll
total 1942000
-rw-r--r--. 1 tidb tidb 1988601700 Nov 29  2021 tidb-community-server-v5.3.0-linux-amd64.tar.gz

~]$ tar -xzvf tidb-community-server-v5.3.0-linux-amd64.tar.gz 
~]$ sh tidb-community-server-v5.3.0-linux-amd64/local_install.sh
~]$ source /home/tidb/.bash_profile

The local_install.sh script automatically runs the tiup mirror set tidb-community-server-v5.3.0-linux-amd64 command to set the current mirror to tidb-community-server-v5.3.0-linux-amd64.

To switch the mirror to another directory, manually run tiup mirror set <mirror-dir>. To switch to the online environment, run tiup mirror set https://tiup-mirrors.pingcap.com.
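Either way, the mirror currently in use can be confirmed with:

tiup mirror show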

1.5.2. Prepare the Topology File

~]$ tiup cluster template |grep -Ev '^\s*#|^$' > topology.yaml

The generated default topology configuration is as follows:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
pd_servers:
  - host: 10.0.1.11
  - host: 10.0.1.12
  - host: 10.0.1.13
tidb_servers:
  - host: 10.0.1.14
  - host: 10.0.1.15
  - host: 10.0.1.16
tikv_servers:
  - host: 10.0.1.17
  - host: 10.0.1.18
  - host: 10.0.1.19
tiflash_servers:
  - host: 10.0.1.20
  - host: 10.0.1.21
monitoring_servers:
  - host: 10.0.1.22
grafana_servers:
  - host: 10.0.1.22
alertmanager_servers:
  - host: 10.0.1.22

Modify the configuration file according to the actual environment:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
pd_servers:
  - host: 192.168.3.221
  - host: 192.168.3.222
  - host: 192.168.3.223
tidb_servers:
  - host: 192.168.3.221
  - host: 192.168.3.222
  - host: 192.168.3.223
tikv_servers:
  - host: 192.168.3.224
  - host: 192.168.3.225
  - host: 192.168.3.226
monitoring_servers:
  - host: 192.168.3.221
grafana_servers:
  - host: 192.168.3.221
alertmanager_servers:
  - host: 192.168.3.221

1.5.3. Environment Validation

  • Environment check. In a production environment, make sure every check item is Pass:
~]$ tiup cluster check ./topology.yaml --user tidb
...
Node           Check       Result  Message
----           -----       ------  -------
192.168.3.223  os-version  Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.223  cpu-cores   Pass    number of CPU cores / threads: 4
192.168.3.223  memory      Pass    memory size is 4096MB
192.168.3.223  selinux     Fail    SELinux is not disabled
192.168.3.223  thp         Fail    THP is enabled, please disable it for best performance
192.168.3.223  command     Pass    numactl: policy: default
192.168.3.224  os-version  Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.224  cpu-cores   Pass    number of CPU cores / threads: 4
192.168.3.224  memory      Pass    memory size is 4096MB
192.168.3.224  selinux     Fail    SELinux is not disabled
192.168.3.224  thp         Fail    THP is enabled, please disable it for best performance
192.168.3.224  command     Pass    numactl: policy: default
192.168.3.225  os-version  Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.225  cpu-cores   Pass    number of CPU cores / threads: 4
192.168.3.225  memory      Pass    memory size is 4096MB
192.168.3.225  selinux     Fail    SELinux is not disabled
192.168.3.225  thp         Fail    THP is enabled, please disable it for best performance
192.168.3.225  command     Pass    numactl: policy: default
192.168.3.226  os-version  Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.226  cpu-cores   Pass    number of CPU cores / threads: 4
192.168.3.226  memory      Pass    memory size is 4096MB
192.168.3.226  selinux     Fail    SELinux is not disabled
192.168.3.226  thp         Fail    THP is enabled, please disable it for best performance
192.168.3.226  command     Pass    numactl: policy: default
192.168.3.221  os-version  Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.221  cpu-cores   Pass    number of CPU cores / threads: 4
192.168.3.221  memory      Pass    memory size is 4096MB
192.168.3.221  selinux     Fail    SELinux is not disabled
192.168.3.221  thp         Fail    THP is enabled, please disable it for best performance
192.168.3.221  command     Pass    numactl: policy: default
192.168.3.222  os-version  Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.222  cpu-cores   Pass    number of CPU cores / threads: 4
192.168.3.222  memory      Pass    memory size is 4096MB
192.168.3.222  selinux     Fail    SELinux is not disabled
192.168.3.222  thp         Fail    THP is enabled, please disable it for best performance
192.168.3.222  command     Pass    numactl: policy: default
  • Environment fix
~]$ tiup cluster check ./topology.yaml --apply --user root 

1.5.4. Cluster Deployment

~]$ id
uid=1000(tidb) gid=1000(tidb) groups=1000(tidb)

~]$ tiup cluster deploy kruidb-cluster v5.3.0 ./topology.yaml --user tidb
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.7.0/tiup-cluster deploy kruidb-cluster v5.3.0 ./topology.yaml --user tidb

+ Detect CPU Arch
  - Detecting node 192.168.3.221 ... Done
  - Detecting node 192.168.3.222 ... Done
  - Detecting node 192.168.3.223 ... Done
  - Detecting node 192.168.3.224 ... Done
  - Detecting node 192.168.3.225 ... Done
  - Detecting node 192.168.3.226 ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    kruidb-cluster
Cluster version: v5.3.0
Role          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            192.168.3.221  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            192.168.3.222  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            192.168.3.223  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          192.168.3.224  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          192.168.3.225  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          192.168.3.226  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          192.168.3.221  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          192.168.3.222  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          192.168.3.223  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
prometheus    192.168.3.221  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       192.168.3.221  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  192.168.3.221  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y 
...

        Enable 192.168.3.226 success
        Enable 192.168.3.224 success
        Enable 192.168.3.225 success
        Enable 192.168.3.222 success
Cluster `kruidb-cluster` deployed successfully, you can start it with command: `tiup cluster start kruidb-cluster`

1.6. Initialize the Cluster

~]$ tiup cluster start kruidb-cluster

...
+ [ Serial ] - UpdateTopology: cluster=kruidb-cluster
Started cluster `kruidb-cluster` successfully

You can run tiup cluster start kruidb-cluster --init to have a random password generated for the root user during initialization (displayed only once). If the --init parameter is omitted, the root user is given an empty password.
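Once the cluster is up, connectivity can be verified with any MySQL client against a tidb-server node (press Enter at the password prompt, since --init was omitted and the root password is empty):

~]$ mysql -h 192.168.3.221 -P 4000 -u root -p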

1.7. Check the TiDB Cluster

1.7.1. View the Cluster

~]$ tiup cluster list
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.7.0/tiup-cluster list
Name            User  Version  Path                                                      PrivateKey
----            ----  -------  ----                                                      ----------
kruidb-cluster  tidb  v5.3.0   /home/tidb/.tiup/storage/cluster/clusters/kruidb-cluster  /home/tidb/.tiup/storage/cluster/clusters/kruidb-cluster/ssh/id_rsa
~]$ tiup cluster display kruidb-cluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.7.0/tiup-cluster display kruidb-cluster
Cluster type:       tidb
Cluster name:       kruidb-cluster
Cluster version:    v5.3.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.3.222:2379/dashboard
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
192.168.3.221:9093   alertmanager  192.168.3.221  9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.3.221:3000   grafana       192.168.3.221  