Deploy on Single Node
This document introduces how to manually deploy SynxDB on a single physical or virtual machine.
SynxDB is not fully compatible with PostgreSQL; some features and syntax are SynxDB-specific. If your business already relies on SynxDB and you want to use SynxDB-specific syntax and features on a single node without worrying about PostgreSQL compatibility, you can deploy SynxDB without segments.
For this purpose, SynxDB provides the single-computing-node deployment mode. In this mode, the cluster runs with gp_role set to utility and consists of only one coordinator (QD) node and one coordinator standby node, without segment nodes or data distribution. You can connect directly to the coordinator and run queries as if you were connecting to a regular multi-node cluster. Note that some SQL statements have no effect in this mode because data is not distributed, and some SQL statements are not supported. See User-behavior changes below for details.
How to deploy
Step 1: Prepare to deploy
Log in to the host as the root user, and modify the host settings in the order of the following sections.
Add gpadmin admin user
Follow the example below to create the gpadmin user group and user, set the group ID and user ID to 520, and create the home directory /home/gpadmin/.
groupadd -g 520 gpadmin  # Adds the gpadmin user group.
useradd -g 520 -u 520 -m -d /home/gpadmin/ -s /bin/bash gpadmin  # Adds the gpadmin user and creates the home directory.
passwd gpadmin  # Sets a password for gpadmin. Enter the password when prompted.
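To confirm that the user and group were created with the intended IDs, you can check:
id gpadmin   # Expect output similar to: uid=520(gpadmin) gid=520(gpadmin) groups=520(gpadmin)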
Disable SELinux and firewall software
Check the SELinux status by running sestatus. If SELinux is enabled, disable it by setting the SELINUX parameter to disabled in the /etc/selinux/config file:
SELINUX=disabled
Run systemctl status firewalld to view the firewall status. If the firewall is on, disable it using the following commands:
systemctl stop firewalld.service
systemctl disable firewalld.service
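After editing /etc/selinux/config and rebooting, you can confirm that both SELinux and the firewall stay disabled:
sestatus                          # Expect: SELinux status: disabled
systemctl is-enabled firewalld    # Expect: disabled
systemctl is-active firewalld     # Expect: inactive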
Set system parameters
Edit the /etc/sysctl.conf
configuration file, add the relevant system parameters, and run the sysctl -p
command to make the configuration effective.
The following configuration parameters are for reference. Please adjust according to your actual needs. Detailed explanations and recommended settings for some parameters are provided below.
# kernel.shmall = _PHYS_PAGES / 2
kernel.shmall = 197951838
# kernel.shmmax = kernel.shmall * PAGE_SIZE
kernel.shmmax = 810810728448
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 95
net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 250 2048000 200 8192
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ipfrag_high_thresh = 41943040
net.ipv4.ipfrag_low_thresh = 31457280
net.ipv4.ipfrag_time = 60
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
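After saving /etc/sysctl.conf, load the settings and spot-check a few of the values listed above:
sysctl -p                                   # Loads the settings from /etc/sysctl.conf
sysctl vm.overcommit_memory vm.swappiness   # Prints the current values of individual parameters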
IP fragmentation settings
When SynxDB uses the UDP protocol for interconnect traffic, the kernel handles the fragmentation and reassembly of IP packets. If a UDP message is larger than the maximum transmission unit (MTU) of the network, the IP layer fragments the message.
net.ipv4.ipfrag_high_thresh: When the total size of IP fragments exceeds this threshold, the kernel attempts to reassemble them. If the fragments exceed this threshold but not all of them arrive within the specified time, the kernel does not reassemble them. This threshold is typically used to control whether larger sets of fragments are reassembled. The default value is 4194304 bytes (4 MB).
net.ipv4.ipfrag_low_thresh: When the total size of IP fragments is below this threshold, the kernel waits as long as possible for more fragments to arrive, to allow larger reassemblies. This threshold is used to minimize unfinished reassembly operations and improve system performance. The default value is 3145728 bytes (3 MB).
net.ipv4.ipfrag_time: Controls the timeout, in seconds, for IP fragment reassembly. The default value is 30.
It is recommended to set the above parameters to the following values:
net.ipv4.ipfrag_high_thresh = 41943040
net.ipv4.ipfrag_low_thresh = 31457280
net.ipv4.ipfrag_time = 60
System memory
If the server memory exceeds 64 GB, the following settings are recommended in the /etc/sysctl.conf configuration file:
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736 # 1.5 GB
vm.dirty_bytes = 4294967296 # 4 GB
If the server memory is less than 64 GB, you do not need to set vm.dirty_background_bytes or vm.dirty_bytes. It is recommended to set the following parameters in the /etc/sysctl.conf configuration file:
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10
To deal with emergencies when the system is under memory pressure, it is recommended to add the vm.min_free_kbytes parameter to the /etc/sysctl.conf configuration file to control the amount of memory the system keeps reserved. It is recommended to set vm.min_free_kbytes to 3% of the system's physical memory, which you can do with the following command:
awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.conf
Do not set vm.min_free_kbytes to more than 5% of the system's physical memory.
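As a concrete illustration (the memory size is a made-up example): on a host whose /proc/meminfo reports MemTotal: 134217728 kB (128 GB), 3% is about 4026532 kB. After running the awk command and sysctl -p, you can verify the setting:
grep vm.min_free_kbytes /etc/sysctl.conf   # The line appended by the awk command
sysctl vm.min_free_kbytes                  # The value currently in effect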
Resource limit settings
Edit the /etc/security/limits.conf file and add the following content, which sets the soft and hard limits on the number of open files and processes:
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
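These limits are applied at login. To confirm they take effect, log in again as the gpadmin user and check the shell limits:
ulimit -n   # Maximum number of open files; expect 524288
ulimit -u   # Maximum number of user processes; expect 131072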
CORE DUMP settings
Add the following parameter to the /etc/sysctl.conf configuration file:
kernel.core_pattern=/var/core/core.%h.%t
Run the following command to make the configuration effective:
sysctl -p
Add the following parameter to /etc/security/limits.conf:
* soft core unlimited
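To confirm the core dump settings, note that the /var/core directory referenced by kernel.core_pattern must exist and be writable; creating it here is an assumption on our part, not a step defined by the RPM package:
mkdir -p /var/core              # Directory referenced by kernel.core_pattern (assumed to be created manually)
sysctl kernel.core_pattern      # Expect: kernel.core_pattern = /var/core/core.%h.%t
ulimit -c                       # As gpadmin after re-login; expect: unlimited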
Set mount options for the XFS file system
XFS is the file system for the data directory of SynxDB. XFS has the following mount options:
rw,nodev,noatime,inode64
You can configure the XFS mount in the /etc/fstab file. See the following commands, and choose the device and mount path according to your actual situation:
mkdir -p /data0/
mkfs.xfs -f /dev/vdc
echo "/dev/vdc /data0 xfs rw,nodev,noatime,nobarrier,inode64 0 0" /etc/fstab
mount /data0
chown -R gpadmin:gpadmin /data0/
Run the following command to check whether the mounting is successful:
df -h
Blockdev value
The blockdev read-ahead value for each disk device file should be 16384. To verify the read-ahead value of a disk device, run the following command:
sudo /sbin/blockdev --getra <devname>
For example, to verify the read-ahead value of the example server's disk:
sudo /sbin/blockdev --getra /dev/vdc
To modify the read-ahead value of a device file, run the following command:
sudo /sbin/blockdev --setra <value> <devname>
For example, to set the read-ahead value of the example server's disk:
sudo /sbin/blockdev --setra 16384 /dev/vdc
I/O scheduling policy settings for disks
The recommended I/O scheduling policies for SynxDB, by storage device type and operating system, are as follows:
| Storage device type | OS | Recommended scheduling policy |
|---|---|---|
| NVMe | RHEL 8 | none |
| NVMe | Ubuntu | none |
| SSD | RHEL 8 | none |
| SSD | Ubuntu | none |
| Other | RHEL 8 | mq-deadline |
| Other | Ubuntu | mq-deadline |
Refer to the following command to modify the scheduling policy. Note that this command makes only a temporary change; the setting is lost after the server restarts.
echo <schedulername> > /sys/block/<devname>/queue/scheduler
For example, to temporarily change the disk I/O scheduling policy of the example server:
echo mq-deadline > /sys/block/vdc/queue/scheduler
To permanently modify the scheduling policy, use the system utility grubby. The modification takes effect after you restart the server. The sample command is as follows:
grubby --update-kernel=ALL --args="elevator=deadline"
You can view the kernel parameter settings by using the following command:
grubby --info=ALL
Disable Transparent Huge Pages (THP)
You need to disable Transparent Huge Pages (THP), because it reduces SynxDB performance. The command is as follows:
grubby --update-kernel=ALL --args="transparent_hugepage=never"
Check the status of THP:
cat /sys/kernel/mm/*transparent_hugepage/enabled
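After applying the grubby change and rebooting, the bracketed entry in the output indicates the active setting; on a RHEL-family system it typically looks like this:
cat /sys/kernel/mm/*transparent_hugepage/enabled
# Expected output: always madvise [never]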
Disable IPC object deletion
Disable IPC object deletion by setting the value of RemoveIPC to no. You can set this parameter in the /etc/systemd/logind.conf file on the host:
RemoveIPC=no
After making the change, run the following command to restart the systemd-logind service so that the setting takes effect:
service systemd-logind restart
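To confirm the setting is in place, you can check the configuration file and the service status:
grep RemoveIPC /etc/systemd/logind.conf   # Expect: RemoveIPC=no
systemctl status systemd-logind           # Confirm the service restarted without errors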
SSH connection threshold
To set the SSH connection threshold, modify the MaxStartups and MaxSessions parameters in the /etc/ssh/sshd_config configuration file. Either of the following formats is acceptable:
MaxStartups 200
MaxSessions 200
MaxStartups 10:30:200
MaxSessions 200
Run the following command to restart the sshd service so that the setting takes effect:
service sshd restart
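One way to verify the effective values is sshd's test mode, which prints the running server configuration (run it as root):
sshd -T | grep -i -E 'maxstartups|maxsessions'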
Clock synchronization
SynxDB requires clock synchronization to be configured for all hosts, and the clock synchronization service should start when the host boots. You can choose one of the following synchronization methods:
Use the coordinator node’s time as the source, and other hosts synchronize the clock of the coordinator node host.
Synchronize clocks using an external clock source.
The example in this document uses an external clock source for synchronization by adding the following configuration to the /etc/chrony.conf configuration file:
# Use public servers from the pool.ntp.org project
# Please consider joining the pool (http://www.pool.ntp.org/join.html)
server 0.centos.pool.ntp.org iburst
After setting, you can run the following command to check the clock synchronization status:
systemctl status chronyd
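Besides the service status, you can ask chrony which time sources it is using and how closely the clock tracks them:
chronyc sources    # Lists the configured time sources and their reachability
chronyc tracking   # Shows the current offset from the reference clock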
Step 2: Install SynxDB via RPM package
1. Download the SynxDB RPM package to the gpadmin home directory /home/gpadmin/:

   wget -P /home/gpadmin <download address>

2. Install the RPM package in the /home/gpadmin directory.

   When running the following commands, replace <RPM package path> with the actual RPM package path and execute them as the root user. During installation, the default installation directory /usr/local/synxdb/ is created automatically.

   cd /home/gpadmin
   yum install <RPM package path>

3. Grant the gpadmin user permission for the installation directory:

   chown -R gpadmin:gpadmin /usr/local
   chown -R gpadmin:gpadmin /usr/local/synxdb*

4. Configure local SSH login for the node. As the gpadmin user:

   ssh-keygen
   ssh-copy-id localhost
   ssh `hostname`   # Ensure that local SSH login works properly
Step 3: Deploy SynxDB with a single computing node
Use the scripting tool gpdemo to quickly deploy SynxDB. gpdemo is included in the RPM package and is installed in the $GPHOME/bin directory along with the management scripts (including gpinitsystem, gpstart, and gpstop). It supports quickly deploying SynxDB with a single computing node. For more details about this tool, refer to gpdemo.
In the Set mount options for the XFS file system section above, the XFS file system's data directory is mounted at /data0. The following commands deploy a single-computing-node cluster in this data directory:
cd /data0
NUM_PRIMARY_MIRROR_PAIRS=0 gpdemo   # Runs gpdemo with zero primary/mirror segment pairs, which enables the single-computing-node mode
While gpdemo is running, the following warning is output: [WARNING]:-SinglenodeMode has been enabled, no segment will be created. This indicates that SynxDB is being deployed in the single-computing-node mode.
Common issues
How to check the deployment mode of a cluster
Perform the following steps to confirm the deployment mode of the current SynxDB cluster (see the example after this list):

1. Connect to the coordinator node.
2. Execute SHOW gp_role; to view the operating mode of the cluster.
   - If the result is utility, the cluster is in utility mode, that is, the maintenance mode in which only the coordinator node is available. In this case, continue with SHOW gp_internal_is_singlenode; to see whether the cluster is in the single-computing-node mode:
     - If the result is on, the current cluster is in the single-computing-node mode.
     - If the result is off, the current cluster is in regular utility maintenance mode.
   - If the result is dispatch, the current cluster is a regular cluster containing segment nodes. You can further check the number of segments, their status, ports, data directories, and other information by running SELECT * FROM gp_segment_configuration;.
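A minimal check from the shell might look like this; it assumes the gpadmin environment is set up so that psql connects to the coordinator started by gpdemo (adjust the database name and connection options to your environment):
psql -d postgres -c "SHOW gp_role;"                    # Expect: utility
psql -d postgres -c "SHOW gp_internal_is_singlenode;"  # Expect: on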
Where is the data directory
gpdemo automatically creates a data directory in the current path ($PWD). For the single-computing-node deployment:
- The default data directory of the coordinator is ./datadirs/singlenodedir.
- The default data directory of the coordinator standby node is ./datadirs/standby.
How it works
When you deploy SynxDB in the single-computing-node mode, the deployment script gpdemo writes gp_internal_is_singlenode = true to the postgresql.conf configuration file and starts a coordinator and a coordinator standby node with the gp_role = utility parameter setting. All data is written locally; there are no segments and no data distribution.
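You can see this setting directly in the coordinator's configuration file; the path below assumes you ran gpdemo from /data0 as described above:
grep gp_internal_is_singlenode /data0/datadirs/singlenodedir/postgresql.conf   # Expect: gp_internal_is_singlenode = true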
User-behavior changes
In the single-computing-node mode, the behavior of SynxDB changes as follows. Pay attention to these changes before performing related operations (see the example after this list):

- When you execute CREATE TABLE to create a table, the DISTRIBUTED BY clause no longer takes effect. A warning is output: WARNING: DISTRIBUTED BY clause has no effect in singlenode mode.
- The SCATTER BY clause of the SELECT statement is no longer effective. A warning is output: WARNING: SCATTER BY clause has no effect in singlenode mode.
- Other statements that are not supported (for example, ALTER TABLE SET DISTRIBUTED BY) are rejected with an error.
- The lock level of UPDATE and DELETE statements is reduced from ExclusiveLock to RowExclusiveLock to provide better concurrency, because with only a single node there are no global transactions or global deadlocks. This behavior is consistent with PostgreSQL.