Greenplum Cluster Installation and Node Expansion: A Production Walkthrough

Summary: a hands-on production guide to installing a Greenplum cluster and later expanding it with new segment nodes.

1. Prepare the Environment

1.1 Cluster Overview

OS: CentOS 6.5

Database version: greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.zip

The initial cluster consists of two machines:

[root@BI-greenplum-01 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.201        BI-greenplum-01

192.168.10.202        BI-greenplum-02

1.2 Create the gpadmin User and Group (on every machine)

[root@BI-greenplum-01 ~]#  groupadd -g 530 gpadmin

[root@BI-greenplum-01 ~]# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin

[root@BI-greenplum-01 ~]# passwd gpadmin

Changing password for user gpadmin.

New password:

BAD PASSWORD: it is too simplistic/systematic

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

 

1.3 Configure kernel parameters by adding the following to /etc/sysctl.conf:

vi /etc/sysctl.conf

 

#By greenplum

net.ipv4.ip_forward = 0

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 1

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.sem = 250 64000 100 512

kernel.shmmax = 500000000

kernel.shmmni = 4096

kernel.shmall = 4000000000


net.ipv4.tcp_tw_recycle=1

net.ipv4.tcp_max_syn_backlog=4096

net.core.netdev_max_backlog=10000

vm.overcommit_memory=2

net.ipv4.conf.all.arp_filter = 1

 

Adjust these values to match your own hardware; kernel.shmmax and kernel.shmall in particular depend on physical RAM.

Apply the parameters manually:

[root@BI-greenplum-01 ~]# sysctl -p
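The shmmax value above is only an example. A common rule of thumb for these GP 4.x era settings is to allow shared memory up to about half of physical RAM; the helper below is a minimal sketch of that arithmetic (the function name and the half-RAM rule are illustrative assumptions, not from the original article).

```shell
# Illustrative helper (not a Greenplum tool): given total RAM in kB,
# print a kernel.shmmax of half physical RAM, in bytes.
shmmax_for_kb() {
  echo $(( $1 * 1024 / 2 ))
}

# Example: a host with 8 GB of RAM (8388608 kB)
shmmax_for_kb 8388608   # -> 4294967296
```

On a real host you would feed in the MemTotal line from /proc/meminfo instead of a hard-coded number.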

 

Add the following to /etc/security/limits.conf:

[root@BI-greenplum-01 ~]# vi /etc/security/limits.conf

# End of file

* soft nofile 65536

* hard nofile 65536

* soft nproc 131072

* hard nproc 131072

 

2. Install Greenplum

2.1 Install dependency packages, including those needed later when adding nodes

yum -y install ed openssh-clients gcc gcc-c++  make automake autoconf libtool perl rsync coreutils glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel unzip

Note: Greenplum depends on ed; without it, initialization will fail.
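A quick pre-flight check catches missing tools before initialization fails half-way. This sketch just probes the PATH; the check_deps helper is an illustrative name, not a Greenplum utility.

```shell
# Illustrative pre-flight check (not a Greenplum tool): report any
# command from the list that is not found on the PATH.
check_deps() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
  done
}

check_deps ed rsync unzip ssh
```

An empty output means every listed tool was found.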

 

2.2 Prepare the installation files (on the master, 192.168.10.201)

[root@BI-greenplum-01 ~]# unzip greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.zip

[root@BI-greenplum-01 ~]# ./greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.bin

(Screenshots of the interactive installer prompts omitted.)

 

2.3 Grant ownership of the installation directory

[root@BI-greenplum-01 ~]# cd /usr/local/

[root@BI-greenplum-01 local]# chown -R gpadmin:gpadmin /usr/local/greenplum-db*

 

2.4 Package the installation and copy it to the other machine

[root@BI-greenplum-01 local]# tar zcvf gp.tar.gz greenplum-db*

 

[root@BI-greenplum-01 local]# scp gp.tar.gz BI-greenplum-02:/usr/local/

 

2.5 Extract the archive on the other machine

[root@BI-greenplum-02 ~]# cd /usr/local/

[root@BI-greenplum-02 local]# ls

bin  etc  games  gp.tar.gz  include  lib  lib64  libexec  sbin  share  src

[root@BI-greenplum-02 local]# tar zxvf gp.tar.gz

 

2.6 Configure environment variables (on every machine)

[root@BI-greenplum-01 local]# su - gpadmin

[gpadmin@BI-greenplum-01 ~]$ vi .bash_profile

 

source /usr/local/greenplum-db/greenplum_path.sh

export MASTER_DATA_DIRECTORY=/app/master/gpseg-1

export PGPORT=5432

export PGDATABASE=trjdb

 

Load the environment variables:

[gpadmin@BI-greenplum-01 ~]$ source .bash_profile

 

2.7 Set Up Passwordless SSH

[gpadmin@BI-greenplum-01 ~]$ cat all_hosts_file

BI-greenplum-01

BI-greenplum-02

 

[gpadmin@BI-greenplum-01 ~]$ gpssh-exkeys -f all_hosts_file

[STEP 1 of 5] create local ID and authorize on local host

 

[STEP 2 of 5] keyscan all hosts and update known_hosts file

 

[STEP 3 of 5] authorize current user on remote hosts

  ... send to BI-greenplum-02

  ***

  *** Enter password for BI-greenplum-02:

 

[STEP 4 of 5] determine common authentication file content

 

[STEP 5 of 5] copy authentication files to all remote hosts

  ... finished key exchange with BI-greenplum-02

 

[INFO] completed successfully

 

2.8 Create Data Directories (on every machine)

[root@BI-greenplum-01 ~]# mkdir /app

[root@BI-greenplum-01 ~]# chown -R gpadmin:gpadmin /app

On the master (192.168.10.201), use gpssh to create the data directories on all hosts:

[gpadmin@BI-greenplum-01 ~]$ gpssh -f all_hosts_file

Note: command history unsupported on this machine ...

=> mkdir /app/master

[BI-greenplum-02]

[BI-greenplum-01]

=> mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4

[BI-greenplum-02]

[BI-greenplum-01]

=> mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4

[BI-greenplum-02]

[BI-greenplum-01]

 

[gpadmin@BI-greenplum-01 ~]$ vi gpinitsystem_config

# FILE NAME: gpinitsystem_config

 

# Configuration file needed by the gpinitsystem

 

################################################

#### REQUIRED PARAMETERS

################################################

 

#### Name of this Greenplum system enclosed in quotes.

ARRAY_NAME="EMC Greenplum DW"

 

#### Naming convention for utility-generated data directories.

SEG_PREFIX=gpseg

 

#### Base number by which primary segment port numbers

#### are calculated.

PORT_BASE=40000

 

#### File system location(s) where primary segment data directories

#### will be created. The number of locations in the list dictate

#### the number of primary segments that will get created per

#### physical host (if multiple addresses for a host are listed in

#### the hostfile, the number of segments will be spread evenly across

#### the specified interface addresses).

declare -a DATA_DIRECTORY=(/app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4)

 

#### OS-configured hostname or IP address of the master host.

MASTER_HOSTNAME=BI-greenplum-01

 

#### File system location where the master data directory

#### will be created.

MASTER_DIRECTORY=/app/master

 

#### Port number for the master instance.

MASTER_PORT=5432

 

#### Shell utility used to connect to remote hosts.

TRUSTED_SHELL=ssh

 

#### Maximum log file segments between automatic WAL checkpoints.

CHECK_POINT_SEGMENTS=8

 

#### Default server-side character set encoding.

ENCODING=UNICODE

 

################################################

#### OPTIONAL MIRROR PARAMETERS

################################################

 

#### Base number by which mirror segment port numbers

#### are calculated.

MIRROR_PORT_BASE=50000

 

#### Base number by which primary file replication port

#### numbers are calculated.

REPLICATION_PORT_BASE=41000

 

#### Base number by which mirror file replication port

#### numbers are calculated.

MIRROR_REPLICATION_PORT_BASE=51000

 

#### File system location(s) where mirror segment data directories

#### will be created. The number of mirror locations must equal the

#### number of primary locations as specified in the

#### DATA_DIRECTORY parameter.

declare -a MIRROR_DATA_DIRECTORY=(/app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4)

 

 

################################################

#### OTHER OPTIONAL PARAMETERS

################################################

 

#### Create a database of this name after initialization.

DATABASE_NAME=trjdb

 

#### Specify the location of the host address file here instead of

#### with the the -h option of gpinitsystem.

MACHINE_LIST_FILE=/home/gpadmin/seg_hosts_file

 

 

Create the segment host list:

[gpadmin@BI-greenplum-01 ~]$ vi seg_hosts_file

BI-greenplum-01

BI-greenplum-02
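With PORT_BASE=40000, MIRROR_PORT_BASE=50000, and four data directories per host in the gpinitsystem_config above, each host runs four primaries and four mirrors on consecutive ports. The numbering can be sketched with plain arithmetic (purely illustrative):

```shell
# Illustrative: segment instance N on a host listens on BASE+N,
# matching the four DATA_DIRECTORY / MIRROR_DATA_DIRECTORY entries.
PORT_BASE=40000
MIRROR_PORT_BASE=50000
for i in 0 1 2 3; do
  echo "primary $i -> port $((PORT_BASE + i)), mirror $i -> port $((MIRROR_PORT_BASE + i))"
done
```

This is why the status query at the end of the article shows primaries on 40000-40003 and mirrors on 50000-50003.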

 

3. Initialize the Cluster

[gpadmin@BI-greenplum-01 ~]$  gpinitsystem -c gpinitsystem_config -s BI-greenplum-02

(Screenshots of the gpinitsystem output omitted.)

The successful completion messages indicate the installation is done.

 

[gpadmin@BI-greenplum-01 ~]$ psql -d trjdb

psql (8.2.15)

Type "help" for help.

 

trjdb=#

 

Check the cluster state:

select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;


 

 

 

 

Adding Machines and Segment Nodes to Greenplum

1. Add two machines (192.168.10.203 and 192.168.10.204)

Update /etc/hosts (identical on every machine):

[root@BI-greenplum-01 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.201        BI-greenplum-01

192.168.10.202        BI-greenplum-02

192.168.10.203        BI-greenplum-03

192.168.10.204        BI-greenplum-04

 

2. Create the user and group (on the new machines)

[root@BI-greenplum-03 ~]# groupadd -g 530 gpadmin

[root@BI-greenplum-03 ~]# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin

[root@BI-greenplum-03 ~]# passwd gpadmin

Changing password for user gpadmin.

New password:

BAD PASSWORD: it is too simplistic/systematic

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

 

 

3. Configure kernel parameters (on the new machines)

[root@BI-greenplum-03 ~]# vi /etc/sysctl.conf

#By greenplum

net.ipv4.ip_forward = 0

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 1

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.sem = 250 64000 100 512

kernel.shmmax = 500000000

kernel.shmmni = 4096

kernel.shmall = 4000000000


net.ipv4.tcp_tw_recycle=1

net.ipv4.tcp_max_syn_backlog=4096

net.core.netdev_max_backlog=10000

vm.overcommit_memory=2

net.ipv4.conf.all.arp_filter = 1

Apply the parameters:

[root@BI-greenplum-03 ~]# sysctl -p

 

4. Raise the open-file and process limits

[root@BI-greenplum-03 ~]# vi  /etc/security/limits.conf

* soft nofile 65536

* hard nofile 65536

* soft nproc 131072

* hard nproc 131072

 

5. Install dependency packages

yum -y install ed openssh-clients gcc gcc-c++  make automake autoconf libtool perl rsync coreutils glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel unzip

 

6. Copy the earlier gp.tar.gz archive to the new nodes

[root@BI-greenplum-01 local]# scp gp.tar.gz BI-greenplum-03:/usr/local/

 

[root@BI-greenplum-01 local]# scp gp.tar.gz BI-greenplum-04:/usr/local/

Extract:

[root@BI-greenplum-03 local]# tar zxvf gp.tar.gz

[root@BI-greenplum-04 local]# tar zxvf gp.tar.gz

7. Create directories on the new nodes (on each one)

[root@BI-greenplum-03 local]# mkdir /app/master

[root@BI-greenplum-04 local]# mkdir /app/master

[root@BI-greenplum-03 local]# mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4

[root@BI-greenplum-04 local]#  mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4

[root@BI-greenplum-03 local]# mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4

[root@BI-greenplum-04 local]# mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4

[root@BI-greenplum-03 local]# chown -R gpadmin:gpadmin /app

[root@BI-greenplum-04 local]# chown -R gpadmin:gpadmin /app

[root@BI-greenplum-03 local]#  chmod -R 700 /app

[root@BI-greenplum-04 local]#  chmod -R 700 /app
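The per-host directory layout above can be scripted in one pass. This sketch introduces a BASE variable so it can be dry-run without root; in the article the base is /app, created directly as root.

```shell
# Illustrative one-pass version of the directory setup above.
# BASE=/app in the article; default to a scratch path here so the
# sketch can run unprivileged.
BASE="${BASE:-/tmp/gp-dir-demo}"
mkdir -p "$BASE/master"
for i in 1 2 3 4; do
  mkdir -p "$BASE/data/gp$i" "$BASE/data/gpm$i"
done
# chown -R gpadmin:gpadmin "$BASE" && chmod -R 700 "$BASE"   # as in the article
ls "$BASE/data"
```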

 

 

8. Configure environment variables (on the new nodes)

[root@BI-greenplum-03 local]# su - gpadmin

[gpadmin@BI-greenplum-03 ~]$ vi .bash_profile

 

source /usr/local/greenplum-db/greenplum_path.sh

export MASTER_DATA_DIRECTORY=/app/master/gpseg-1

export PGPORT=5432

export PGDATABASE=trjdb

 

Load the environment variables:

[gpadmin@BI-greenplum-03 ~]$ source .bash_profile

 

9. Exchange SSH keys again (run on BI-greenplum-01)

[root@BI-greenplum-01 local]# su - gpadmin

[gpadmin@BI-greenplum-01 ~]$ vi all_hosts_file

BI-greenplum-01

BI-greenplum-02

BI-greenplum-03

BI-greenplum-04

 

 [gpadmin@BI-greenplum-01 ~]$ gpssh-exkeys -f all_hosts_file

[STEP 1 of 5] create local ID and authorize on local host

  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

 

[STEP 2 of 5] keyscan all hosts and update known_hosts file

 

[STEP 3 of 5] authorize current user on remote hosts

  ... send to BI-greenplum-02

  ... send to BI-greenplum-03

  ***

  *** Enter password for BI-greenplum-03:

  ... send to BI-greenplum-04

 

[STEP 4 of 5] determine common authentication file content

 

[STEP 5 of 5] copy authentication files to all remote hosts

  ... finished key exchange with BI-greenplum-02

  ... finished key exchange with BI-greenplum-03

  ... finished key exchange with BI-greenplum-04

 

[INFO] completed successfully

 

10. Initialize the expansion (on the master)

[gpadmin@BI-greenplum-01 ~]$ vi hosts_expand

BI-greenplum-03

BI-greenplum-04

 

Adjust the host list to your own environment, then run gpexpand interactively:

[gpadmin@BI-greenplum-01 ~]$ gpexpand -f hosts_expand

20171208:00:55:14:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'

20171208:00:55:14:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:57'

20171208:00:55:14:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

 

System Expansion is used to add segments to an existing GPDB array.

gpexpand did not detect a System Expansion that is in progress.

 

Before initiating a System Expansion, you need to provision and burn-in

the new hardware.  Please be sure to run gpcheckperf/gpcheckos to make

sure the new hardware is working properly.

 

Please refer to the Admin Guide for more information.

 

Would you like to initiate a new System Expansion Yy|Nn (default=N):

> y

 

You must now specify a mirroring strategy for the new hosts.  Spread mirroring places

a given hosts mirrored segments each on a separate host.  You must be

adding more hosts than the number of segments per host to use this.

Grouped mirroring places all of a given hosts segments on a single

mirrored host.  You must be adding at least 2 hosts in order to use this.

 

 

 

What type of mirroring strategy would you like?

 spread|grouped (default=grouped):

>

 

    By default, new hosts are configured with the same number of primary

    segments as existing hosts.  Optionally, you can increase the number

    of segments per host.

 

    For example, if existing hosts have two primary segments, entering a value

    of 2 will initialize two additional segments on existing hosts, and four

    segments on new hosts.  In addition, mirror segments will be added for

    these new primary segments if mirroring is enabled.

   

 

How many new primary segments per host do you want to add? (default=0):
> 4
Enter new primary data directory 1:
> /app/data/gp1
Enter new primary data directory 2:
> /app/data/gp2
Enter new primary data directory 3:
> /app/data/gp3
Enter new primary data directory 4:
> /app/data/gp4
Enter new mirror data directory 1:
> /app/data/gpm1
Enter new mirror data directory 2:
> /app/data/gpm2
Enter new mirror data directory 3:
> /app/data/gpm3
Enter new mirror data directory 4:
> /app/data/gpm4

 

Generating configuration file...

 

20171208:00:57:18:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Generating input file...

 

Input configuration files were written to 'gpexpand_inputfile_20171208_005718' and 'None'.

Please review the file and make sure that it is correct then re-run

with: gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb

               

20171208:00:57:18:023306 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...

 

This generates a configuration file named gpexpand_inputfile_20171208_005718, which must be reviewed and edited before it can drive the expansion.

The generated file is shown below. In the original post the entries to keep were highlighted in red; keep only the lines for the segments you actually intend to add.

[gpadmin@BI-greenplum-01 ~]$ cat gpexpand_inputfile_20171208_005718

BI-greenplum-03:BI-greenplum-03:40000:/app/data/gp1/gpseg8:19:8:p:41000

BI-greenplum-04:BI-greenplum-04:50000:/app/data/gpm1/gpseg8:31:8:m:51000

BI-greenplum-03:BI-greenplum-03:40001:/app/data/gp2/gpseg9:20:9:p:41001

BI-greenplum-04:BI-greenplum-04:50001:/app/data/gpm2/gpseg9:32:9:m:51001

BI-greenplum-03:BI-greenplum-03:40002:/app/data/gp3/gpseg10:21:10:p:41002

BI-greenplum-04:BI-greenplum-04:50002:/app/data/gpm3/gpseg10:33:10:m:51002

BI-greenplum-03:BI-greenplum-03:40003:/app/data/gp4/gpseg11:22:11:p:41003

BI-greenplum-04:BI-greenplum-04:50003:/app/data/gpm4/gpseg11:34:11:m:51003

BI-greenplum-04:BI-greenplum-04:40000:/app/data/gp1/gpseg12:23:12:p:41000

BI-greenplum-03:BI-greenplum-03:50000:/app/data/gpm1/gpseg12:27:12:m:51000

BI-greenplum-04:BI-greenplum-04:40001:/app/data/gp2/gpseg13:24:13:p:41001

BI-greenplum-03:BI-greenplum-03:50001:/app/data/gpm2/gpseg13:28:13:m:51001

BI-greenplum-04:BI-greenplum-04:40002:/app/data/gp3/gpseg14:25:14:p:41002

BI-greenplum-03:BI-greenplum-03:50002:/app/data/gpm3/gpseg14:29:14:m:51002

BI-greenplum-04:BI-greenplum-04:40003:/app/data/gp4/gpseg15:26:15:p:41003

BI-greenplum-03:BI-greenplum-03:50003:/app/data/gpm4/gpseg15:30:15:m:51003

BI-greenplum-01:BI-greenplum-01:40004:/app/data/gp1/gpseg16:35:16:p:41004

BI-greenplum-02:BI-greenplum-02:50004:/app/data/gpm1/gpseg16:55:16:m:51004

BI-greenplum-01:BI-greenplum-01:40005:/app/data/gp2/gpseg17:36:17:p:41005

BI-greenplum-02:BI-greenplum-02:50005:/app/data/gpm2/gpseg17:56:17:m:51005

BI-greenplum-01:BI-greenplum-01:40006:/app/data/gp3/gpseg18:37:18:p:41006

BI-greenplum-02:BI-greenplum-02:50006:/app/data/gpm3/gpseg18:57:18:m:51006

BI-greenplum-01:BI-greenplum-01:40007:/app/data/gp4/gpseg19:38:19:p:41007

BI-greenplum-02:BI-greenplum-02:50007:/app/data/gpm4/gpseg19:58:19:m:51007

BI-greenplum-02:BI-greenplum-02:40004:/app/data/gp1/gpseg20:39:20:p:41004

BI-greenplum-03:BI-greenplum-03:50004:/app/data/gpm1/gpseg20:59:20:m:51004

BI-greenplum-02:BI-greenplum-02:40005:/app/data/gp2/gpseg21:40:21:p:41005

BI-greenplum-03:BI-greenplum-03:50005:/app/data/gpm2/gpseg21:60:21:m:51005

BI-greenplum-02:BI-greenplum-02:40006:/app/data/gp3/gpseg22:41:22:p:41006

BI-greenplum-03:BI-greenplum-03:50006:/app/data/gpm3/gpseg22:61:22:m:51006

BI-greenplum-02:BI-greenplum-02:40007:/app/data/gp4/gpseg23:42:23:p:41007

BI-greenplum-03:BI-greenplum-03:50007:/app/data/gpm4/gpseg23:62:23:m:51007

BI-greenplum-03:BI-greenplum-03:40004:/app/data/gp1/gpseg24:43:24:p:41004

BI-greenplum-04:BI-greenplum-04:50004:/app/data/gpm1/gpseg24:63:24:m:51004

BI-greenplum-03:BI-greenplum-03:40005:/app/data/gp2/gpseg25:44:25:p:41005

BI-greenplum-04:BI-greenplum-04:50005:/app/data/gpm2/gpseg25:64:25:m:51005

BI-greenplum-03:BI-greenplum-03:40006:/app/data/gp3/gpseg26:45:26:p:41006

BI-greenplum-04:BI-greenplum-04:50006:/app/data/gpm3/gpseg26:65:26:m:51006

BI-greenplum-03:BI-greenplum-03:40007:/app/data/gp4/gpseg27:46:27:p:41007

BI-greenplum-04:BI-greenplum-04:50007:/app/data/gpm4/gpseg27:66:27:m:51007

BI-greenplum-04:BI-greenplum-04:40004:/app/data/gp1/gpseg28:47:28:p:41004

BI-greenplum-01:BI-greenplum-01:50004:/app/data/gpm1/gpseg28:51:28:m:51004

BI-greenplum-04:BI-greenplum-04:40005:/app/data/gp2/gpseg29:48:29:p:41005

BI-greenplum-01:BI-greenplum-01:50005:/app/data/gpm2/gpseg29:52:29:m:51005

BI-greenplum-04:BI-greenplum-04:40006:/app/data/gp3/gpseg30:49:30:p:41006

BI-greenplum-01:BI-greenplum-01:50006:/app/data/gpm3/gpseg30:53:30:m:51006

BI-greenplum-04:BI-greenplum-04:40007:/app/data/gp4/gpseg31:50:31:p:41007

BI-greenplum-01:BI-greenplum-01:50007:/app/data/gpm4/gpseg31:54:31:m:51007
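Each line of the gpexpand input file is colon-separated. Reading the entries above, the fields appear to be: hostname, address, port, data directory, dbid, content id, preferred role (p/m), and replication port. The sketch below splits one line to make the layout explicit (the field names are my reading of the format, not official documentation).

```shell
# Illustrative: split one gpexpand input line into its fields
# (field names are inferred from the listing above).
line='BI-greenplum-03:BI-greenplum-03:40000:/app/data/gp1/gpseg8:19:8:p:41000'
IFS=: read -r host addr port datadir dbid content role repl_port <<EOF
$line
EOF
echo "host=$host port=$port dir=$datadir dbid=$dbid content=$content role=$role repl=$repl_port"
```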

The edited file looks as follows (screenshot omitted):

 

Then run gpexpand with the generated input file:

gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb

 

 

[gpadmin@BI-greenplum-01 ~]$ gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb

20171208:01:03:10:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'

20171208:01:03:10:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:57'

20171208:01:03:11:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

20171208:01:03:11:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Readying Greenplum Database for a new expansion

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database trjdb for unalterable tables...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database postgres for unalterable tables...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database template1 for unalterable tables...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database trjdb for tables with unique indexes...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database postgres for tables with unique indexes...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database template1 for tables with unique indexes...

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Syncing Greenplum Database extensions

20171208:01:03:25:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-The packages on BI-greenplum-03 are consistent.

20171208:01:03:26:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-The packages on BI-greenplum-04 are consistent.

20171208:01:03:27:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating segment template

20171208:01:03:27:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-VACUUM FULL on the catalog tables

20171208:01:03:28:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting copy of segment dbid 1 to location /app/master/gpexpand_12082017_23572

20171208:01:03:28:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying postgresql.conf from existing segment into template

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying pg_hba.conf from existing segment into template

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Adding new segments into template pg_hba.conf

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating schema tar file

20171208:01:03:29:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Distributing template tar file to new hosts

20171208:01:03:31:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segments (primary)

20171208:01:03:32:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segments (mirror)

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Backing up pg_hba.conf file on original segments

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying new pg_hba.conf file to original segments

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring original segments

20171208:01:03:33:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Cleaning up temporary template files

20171208:01:03:34:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database in restricted mode

20171208:01:03:42:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping database

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking if Transaction filespace was moved

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking if Temporary filespace was moved

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segment filespaces

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Cleaning up databases in new segments.

20171208:01:03:55:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting master in utility mode

20171208:01:03:56:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping master in utility mode

20171208:01:04:03:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database in restricted mode

20171208:01:04:11:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating expansion schema

20171208:01:04:12:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database trjdb

20171208:01:04:12:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database postgres

20171208:01:04:13:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database template1

20171208:01:04:14:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping Greenplum Database

20171208:01:04:27:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database

20171208:01:04:34:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting new mirror segment synchronization

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-************************************************

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Initialization of the system expansion complete.

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-To begin table expansion onto the new segments

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-rerun gpexpand

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-************************************************

20171208:01:04:48:023572 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...

 

 

The output above shows the new segment nodes were added successfully.

 

What if the previous step fails? Start the database in restricted mode and roll back:

gpstart -R
gpexpand --rollback -D trjdb
gpstart -a

Then find the root cause and retry the previous step until it succeeds.

 

Redistribute tables onto the new segments (the -d flag sets a maximum duration, here 60 hours):

[gpadmin@BI-greenplum-01 ~]$ gpexpand -d 60:00:00

20171208:01:09:08:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'

20171208:01:09:08:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:57'

20171208:01:09:09:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

20171208:01:09:14:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-EXPANSION COMPLETED SUCCESSFULLY

20171208:01:09:14:026159 gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...
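The -d argument to gpexpand is a duration in hh:mm:ss form, so 60:00:00 lets the redistribution run for up to 60 hours. For reference, converting such a duration to seconds (illustrative arithmetic only):

```shell
# Illustrative: convert the hh:mm:ss duration passed to gpexpand -d
# into a total number of seconds.
dur='60:00:00'
IFS=: read -r h m s <<EOF
$dur
EOF
echo "$(( h * 3600 + m * 60 + s )) seconds"   # -> 216000 seconds
```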

Check segment status. The BI-greenplum-03 and BI-greenplum-04 entries (highlighted in red in the original post) are the new additions:

[gpadmin@BI-greenplum-01 ~]$ psql -d trjdb

psql (8.2.15)

Type "help" for help.

 

trjdb=# select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;

 dbid | content | role | port  |    hostname     |  fsname   |      fselocation      

------+---------+------+-------+-----------------+-----------+------------------------

    1 |      -1 | p    |  5432 | BI-greenplum-01 | pg_system | /app/master/gpseg-1

   18 |      -1 | m    |  5432 | BI-greenplum-02 | pg_system | /app/master/gpseg-1

   10 |       0 | m    | 50000 | BI-greenplum-02 | pg_system | /app/data/gpm1/gpseg0

    2 |       0 | p    | 40000 | BI-greenplum-01 | pg_system | /app/data/gp1/gpseg0

    3 |       1 | p    | 40001 | BI-greenplum-01 | pg_system | /app/data/gp2/gpseg1

   11 |       1 | m    | 50001 | BI-greenplum-02 | pg_system | /app/data/gpm2/gpseg1

    4 |       2 | p    | 40002 | BI-greenplum-01 | pg_system | /app/data/gp3/gpseg2

   12 |       2 | m    | 50002 | BI-greenplum-02 | pg_system | /app/data/gpm3/gpseg2

    5 |       3 | p    | 40003 | BI-greenplum-01 | pg_system | /app/data/gp4/gpseg3

   13 |       3 | m    | 50003 | BI-greenplum-02 | pg_system | /app/data/gpm4/gpseg3

    6 |       4 | p    | 40000 | BI-greenplum-02 | pg_system | /app/data/gp1/gpseg4

   14 |       4 | m    | 50000 | BI-greenplum-01 | pg_system | /app/data/gpm1/gpseg4

   15 |       5 | m    | 50001 | BI-greenplum-01 | pg_system | /app/data/gpm2/gpseg5

    7 |       5 | p    | 40001 | BI-greenplum-02 | pg_system | /app/data/gp2/gpseg5

   16 |       6 | m    | 50002 | BI-greenplum-01 | pg_system | /app/data/gpm3/gpseg6

    8 |       6 | p    | 40002 | BI-greenplum-02 | pg_system | /app/data/gp3/gpseg6

   17 |       7 | m    | 50003 | BI-greenplum-01 | pg_system | /app/data/gpm4/gpseg7

    9 |       7 | p    | 40003 | BI-greenplum-02 | pg_system | /app/data/gp4/gpseg7

   31 |       8 | m    | 50000 | BI-greenplum-04 | pg_system | /app/data/gpm1/gpseg8

   19 |       8 | p    | 40000 | BI-greenplum-03 | pg_system | /app/data/gp1/gpseg8

   32 |       9 | m    | 50001 | BI-greenplum-04 | pg_system | /app/data/gpm2/gpseg9

   20 |       9 | p    | 40001 | BI-greenplum-03 | pg_system | /app/data/gp2/gpseg9

   33 |      10 | m    | 50002 | BI-greenplum-04 | pg_system | /app/data/gpm3/gpseg10

   21 |      10 | p    | 40002 | BI-greenplum-03 | pg_system | /app/data/gp3/gpseg10

   22 |      11 | p    | 40003 | BI-greenplum-03 | pg_system | /app/data/gp4/gpseg11

   34 |      11 | m    | 50003 | BI-greenplum-04 | pg_system | /app/data/gpm4/gpseg11

   27 |      12 | m    | 50000 | BI-greenplum-03 | pg_system | /app/data/gpm1/gpseg12

   23 |      12 | p    | 40000 | BI-greenplum-04 | pg_system | /app/data/gp1/gpseg12

   28 |      13 | m    | 50001 | BI-greenplum-03 | pg_system | /app/data/gpm2/gpseg13

   24 |      13 | p    | 40001 | BI-greenplum-04 | pg_system | /app/data/gp2/gpseg13

   29 |      14 | m    | 50002 | BI-greenplum-03 | pg_system | /app/data/gpm3/gpseg14

   25 |      14 | p    | 40002 | BI-greenplum-04 | pg_system | /app/data/gp3/gpseg14

   26 |      15 | p    | 40003 | BI-greenplum-04 | pg_system | /app/data/gp4/gpseg15

   30 |      15 | m    | 50003 | BI-greenplum-03 | pg_system | /app/data/gpm4/gpseg15

(34 rows)




This article was reposted from jxzhfei's 51CTO blog; original: http://blog.51cto.com/jxzhfei/2056120