mPaaS Deployment Notes

I. Physical Machine BIOS Checklist

Note: with NUMA disabled, the system cannot be installed in legacy (BIOS) mode. Check whether the DIMMs are populated in sequential slot order (a, b, c, d, e, f, g); if not, swap them into order and install again.

1. Physical device information
Hostname           Serial No.   IP              Gateway        Role
zscplsjyhmp-ops1   -            108.199.20.11   108.199.20.1   OPS1
zscplsjyhmp-ops2   -            108.199.20.12   108.199.20.1   OPS2
zscplsjyhmp-app1   -            108.199.20.13   108.199.20.1   App1/AKE
zscplsjyhmp-app2   -            108.199.20.14   108.199.20.1   App2/AKE
zscplsjyhmp-app3   -            108.199.20.15   108.199.20.1   App3/AKE
zscplsjyhmp-app4   -            108.199.20.16   108.199.20.1   App4
zscplsjyhmp-app5   -            108.199.20.17   108.199.20.1   HBase
zscplsjyhmp-app6   -            108.199.20.18   108.199.20.1   HBase
zscplsjyhmp-app7   -            108.199.20.19   108.199.20.1   HBase
zscplsjyhmp-ob1    -            108.199.20.20   108.199.20.1   OB1
zscplsjyhmp-ob2    -            108.199.20.21   108.199.20.1   OB2
zscplsjyhmp-ob3    -            108.199.20.22   108.199.20.1   OB3
2. Domain information

The root domain for this environment is hxmpaaszsc.com.

3. Network planning

Network configuration:
1. The switches are configured in stacked mode.
2. The downlink server ports are configured as trunk, passing through the physical-machine VLAN and each service VLAN.
3. In stacked mode, the physical machines' uplink NICs are bonded with mode 4 (802.3ad). The pre-production environment does not support mode 4, so mode 1 (active-backup) is used this time; a sketch of a mode 1 bond configuration follows the table below.
4. For the physical machines' VLAN ID, the management subnet's VLAN must be set as the PVID on the switch; hosts within each subnet must be able to reach one another.
Purpose        Subnet            Usable range                    Gateway        VLAN
Management     108.199.20.0/24   108.199.20.2-108.199.20.254     108.199.20.1   1247
VLAN-APP       108.199.21.0/24   108.199.21.2-108.199.21.254     108.199.21.1   1248
VLAN-SPANNER   108.199.22.0/24   108.199.22.2-108.199.22.254     108.199.22.1   1249
VLAN-db        108.199.23.0/24   108.199.23.2-108.199.23.254     108.199.23.1   1250
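The exact bond settings live in the OS image, but for reference, a minimal active-backup (mode 1) sketch in RHEL/CentOS ifcfg style, using the eth11/eth13 slave NICs named later in the deploy file (file names and the address shown are illustrative, not taken from this environment's images):

# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"    # mode 1 = active-backup; mode 4 (802.3ad) needs switch-side LACP
IPADDR=108.199.20.11
NETMASK=255.255.255.0
GATEWAY=108.199.20.1
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth11 (repeat for eth13)
DEVICE=eth11
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none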
II. OPS System Installation

1. Installation media (USB)

Boot from the USB stick to install the tooling machine's operating system. The installation is fully automated: select "Only use sda" and wait for it to finish. The machine shuts down automatically; remove the USB stick, power the machine back on, and log in as root with password 123123.

Note: if you select "Only use sdb", the installer reports "not enough space in file systems for the current software selection ..." and aborts the automated install. Reboot the host and select "Only use sda" instead.

After "Only use sda (5c1c1b)" finishes, the password is 123123.

2. Environment configuration: partition layout
Partition    Type   Size        Mount point       Notes
/dev/sda5    LVM    600G        -                 used for KVM provisioning
/dev/sda6    ext4   600G        /home             provisioning and image file storage
/dev/sda7    ext4   remainder   /var/lib/docker   used by co-located containers
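For reference, the same layout can also be created non-interactively with parted's scripted mode; a minimal sketch, assuming the auto-created partition 5 is the one being replaced (the step-by-step interactive session follows below; verify partition numbers with print before removing anything):

parted -s /dev/sda rm 5                        # remove the auto-created partition 5
parted -s /dev/sda mkpart lvm 59.1G 700G       # ~600G partition for the LVM PV (KVM)
parted -s /dev/sda mkpart home 700G 1300G      # ~600G partition for /home
parted -s /dev/sda mkpart docker 1300G 100%    # remainder for /var/lib/docker
parted -s /dev/sda print                       # confirm the result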
The installer automatically created a /home partition. First comment out the home mount point:

vi /etc/fstab    # comment out the line containing the /home mount point

Copy the files out of the home directory:

cd /home
mv admin /root/
mv tops /root/

Unmount the home directory:

umount /home

Because the disk is larger than 2 TB, traditional fdisk cannot handle it; for a GPT disk larger than 2 TB, partition with parted and mkpart:

parted /dev/sda
(parted) mklabel gpt                  # convert an MBR disk to GPT; skip this step if it is already GPT
(parted) print                        # print the current partition table
(parted) rm 5
(parted) mkpart lvm 59.1G 700G        # create the 600G LVM partition
(parted) mkpart home 700G 1300G       # create the home partition
(parted) mkpart docker 1300G 100%     # create the docker partition
(parted) print                        # print the current partition table
(parted) q                            # quit

If a warning appears, answer Cancel and repeat the operation. Leaving about 1 MB of free space in front of a partition keeps the data blocks aligned, which improves disk performance.

lsblk -f    # check the partition labels; if a label is missing, it can be set with -L when formatting

If /dev/sda5 did not come up as LVM, run pvcreate /dev/sda5.

Create the /var/lib/docker mount directory and format the partitions:

mkdir -p /var/lib/docker
mkfs.ext4 /dev/sda6 -L home      # format and label
mkfs.ext4 /dev/sda7 -L docker    # format and label
blkid /dev/sda5                  # check whether a partition is formatted
e2label /dev/sda6                # show a partition's label
e2label /dev/sda6 home           # set a partition's label

vim /etc/fstab
LABEL=home    /home            ext4  defaults  0 0
LABEL=docker  /var/lib/docker  ext4  defaults  0 0

Save, exit, and mount:

mount -a
df -h

Move admin and tops back into the home directory:

mv /root/admin /home/
mv /root/tops /home/

Reboot the server.

3. Temporary network, hostname, SSH keys, NTP time

3.1 On host OPS1

1. Temporary network:

ifconfig ethx 192.168.0.100/24 up    # configure a temporary IP
ethtool -p ethx                      # blink the port LED to identify the NIC
ip a                                 # check the network
ethtool ethx                         # check NIC details (mainly that it is 10GbE)

The optical-port driver may not match the hardware and may need a separate install. On Lenovo machines:

modprobe qede
echo "qede" >> /etc/modules-load.d/qede.conf

2. Hostname:

hostnamectl set-hostname zscplsjyhmp-ops1
vim /etc/hosts    # add entries for ops1 and ops2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
108.199.20.11 zscplsjyhmp-ops1 ops1
108.199.20.12 zscplsjyhmp-ops2 ops2
exec bash

3. Key pair:

ssh-keygen -t rsa
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys

3.2 On host OPS2

1. Temporary network:

ifconfig ethx 192.168.0.101/24 up    # configure a temporary IP
ethtool -p ethx                      # blink the port LED to identify the NIC
ip a                                 # check the network
ethtool ethx                         # check NIC details (mainly that it is 10GbE)

The optical-port driver may not match the hardware and may need a separate install. On Lenovo machines:

modprobe qede

2. Hostname:

hostnamectl set-hostname zscplsjyhmp-ops2
vim /etc/hosts    # add entries for ops1 and ops2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
108.199.20.11 zscplsjyhmp-ops1 ops1
108.199.20.12 zscplsjyhmp-ops2 ops2
exec bash

3. Key pair:

ssh-keygen -t rsa
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys

3.3 Set up mutual SSH trust

1. On OPS1, append the ops2 host's public key to the local authorized_keys file so that ops2 can log in without a password:

ssh zscplsjyhmp-ops2 "cat /root/.ssh/id_rsa.pub" >> /root/.ssh/authorized_keys
root@zscplsjyhmp-ops2's password: 123123

Check the file and make sure it contains only the ops1 and ops2 keys:

cat /root/.ssh/authorized_keys

Verify passwordless login to ops1:

ssh zscplsjyhmp-ops1 uname -a
Linux zscplsjyhmp-ops1 3.10.0-327.ali2008.alios7.x86_64 #1 SMP Tue Nov 29 17:56:13 CST 2016 x86_64 x86_64 x86_64 GNU/Linux

Verify passwordless login to ops2:

ssh zscplsjyhmp-ops2 uname -a
Linux zscplsjyhmp-ops2 3.10.0-327.ali2008.alios7.x86_64 #1 SMP Tue Nov 29 17:56:13 CST 2016 x86_64 x86_64 x86_64 GNU/Linux

2. On OPS2, append the ops1 host's public key to the local authorized_keys file so that ops1 can log in without a password:

ssh zscplsjyhmp-ops1 "cat /root/.ssh/id_rsa.pub" >> /root/.ssh/authorized_keys

Check the file and make sure it contains only the ops1 and ops2 keys:

cat /root/.ssh/authorized_keys

Verify passwordless login to ops1:

ssh zscplsjyhmp-ops1 uname -a
Linux zscplsjyhmp-ops1 3.10.0-327.ali2008.alios7.x86_64 #1 SMP Tue Nov 29 17:56:13 CST 2016 x86_64 x86_64 x86_64 GNU/Linux

Verify passwordless login to ops2:

ssh zscplsjyhmp-ops2 uname -a
Linux zscplsjyhmp-ops2 3.10.0-327.ali2008.alios7.x86_64 #1 SMP Tue Nov 29 17:56:13 CST 2016 x86_64 x86_64 x86_64 GNU/Linux

3.4 NTP synchronization on the ops machines (ops1, ops2)

Set the time manually on ops1; the other machines then sync from ops1:

systemctl stop ntpd

Set ops1's address as the NTP server address. Edit the following two lines in the ntp file:

vi /etc/ntp.conf
restrict 108.199.20.11
server 108.199.20.11 iburst minpoll 4 maxpoll 6

Update from the clock source:

ntpdate 108.199.20.11
systemctl start ntpd
ntpq -p
hwclock -w
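The lines above configure the server side on ops1; on ops2 (and any other client) the counterpart is simply to point ntpd at ops1. A minimal sketch, assuming the same ntpd packaging:

# /etc/ntp.conf on ops2 (client-side excerpt, illustrative)
server 108.199.20.11 iburst minpoll 4 maxpoll 6

# step the clock once, start ntpd, verify, and save to the hardware clock
ntpdate 108.199.20.11
systemctl start ntpd
ntpq -p        # ops1 should eventually show a '*' as the selected source
hwclock -w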
On both ops machines, delete anaconda-ks.cfg under /root and the google repo file under /etc/yum.repos.d:

rm -rf <filename>

4. Kernel parameters (ops1, ops2)

Load the OVS modules on the tooling machines:

echo -e "overlay\nopenvswitch" > /etc/modules-load.d/docker.conf
echo "toa" > /etc/modules-load.d/toa.conf
echo "8021q" > /etc/modules-load.d/8021q.conf
echo "nf_conntrack" > /etc/modules-load.d/nf_conntrack.conf
echo -e "ipmi_msghandler\nipmi_devintf\nipmi_si" > /etc/modules-load.d/ipmi.conf
echo 'options nf_conntrack hashsize=375000' > /etc/modprobe.d/nf_conntrack.conf

Disable swap:

sysctl -w vm.swappiness=0
echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a && swapon -a

Disable the DragoonAgent and staragentctl services:

chkconfig DragoonAgent off
chkconfig staragentctl off
systemctl stop staragentctl
systemctl stop DragoonAgent
/etc/init.d/staragentctl stop
/usr/alisys/dragoon/bin/agent.sh stop
chkconfig --del staragentctl

Adjust the kernel parameters:

echo "JoinControllers=cpuset,cpu,cpuacct net_cls,net_prio" >> /etc/systemd/system.conf
echo "net.ipv4.tcp_max_syn_backlog = 204800" >> /etc/sysctl.conf
echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf
echo "net.core.wmem_max = 16777216" >> /etc/sysctl.conf
echo "net.netfilter.nf_conntrack_max=3000000" >> /etc/sysctl.conf
echo "net.ipv4.tcp_timestamps=1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_timeout=3" >> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_synack_retries = 2" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_reserved_ports = 8899,8888,8443,19211,9001,8902" >> /etc/sysctl.conf
echo "net.core.somaxconn = 51200" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 1024 61000" >> /etc/sysctl.conf
sysctl -p

Alternatively, write the files by hand. Tooling machine OVS modules:

vi /etc/modules-load.d/docker.conf
8021q
overlay
openvswitch
nf_conntrack
toa

Load the modules manually:

modprobe 8021q
modprobe overlay
modprobe openvswitch
modprobe nf_conntrack
modprobe toa

Delete the ks file:

pgm -f "clu.hn" "rm -f /root/anaconda-ks.cfg"

Add the kernel parameters on the tooling machines:

vi /etc/sysctl.conf
net.ipv4.tcp_max_syn_backlog = 65535
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.netfilter.nf_conntrack_max=3000000
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_tw_timeout=3
sysctl -p

5. Disable IPv6 on ops1 and ops2

vi /etc/sysctl.conf

Change the following parameter in the file:

net.ipv6.conf.all.disable_ipv6 = 1

Apply the configuration:

sudo sysctl -p /etc/sysctl.conf
service network restart
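Before moving on, a quick sanity check that the modules and key sysctl values from section 4 actually took effect (module names as configured above); a minimal sketch:

# verify the configured modules are loaded
for m in overlay openvswitch toa 8021q nf_conntrack; do
    lsmod | grep -q "^$m " && echo "$m: loaded" || echo "$m: MISSING"
done
# spot-check a few sysctl values against the settings above
sysctl vm.swappiness net.core.somaxconn net.netfilter.nf_conntrack_max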
III. Deploying the KVM Virtual Machines

1. Create the directory and unpack the archive

Copy the OCT archive opsbuild.tgz from the USB stick or portable drive to /home on ops1 and unpack it:

cp /mnt/opsbuild.tgz /home/
mkdir /home/oct
tar -xvf opsbuild.tgz -C /home/oct/    # it must be this directory!

2. Edit the deploy file

Configure the SSH key so that deploy host can run:

cd /home/oct/opsbuild/etc/salt/pki_dir/ssh
cat /root/.ssh/id_rsa > salt-ssh.rsa
cat /root/.ssh/id_rsa.pub > salt-ssh.rsa.pub

Create the configuration file:

cd /home/oct/opsbuild/etc/
vi precise-hxbzsc.yml
cat etc/precise-hxbzsc.yml

type: pc
root_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDqK+CCnR2FiTsGyfje7mE0eoIQHvcz0BxQOuZmfOq0dFcf0v4GSTzO5RGN1CnndRK9hpDDz4I2yuI/mt5X69cLbOh+3UcAK1uIZT9yJhc6ANtg43SLz+VYzF6MhSnsXSeCYPqifJi0OV1Z1FrvgCvnd+WkcVe3bgFs70fLS8j5/jWC3vtNGTb/Eqn4+tdieUFW4oj6uoTviWo50DaDxpkggPCJNVacxCE2AE4ywCMs199tcGQ/15KMt8TqxRPj3J/gEbNu0gM+VX2nj5Ew34b8TewDnHOt93Bxxlbeig39vcJOGFhl4X+eirEUCXFkg75MZQe7+g0E+AFbEA57mMkF root@zscplsjyhmp-ops1
dom0_net_bandwidth: 200        # MB/s
dns_zone:
  - alipay.com
  - hxmpaaszsc.com             # root domain, provided by the bank
network:
  - subnet: 108.199.20.0       # physical machine subnet
    netmask: 255.255.255.0
    gateway: 108.199.20.1
oobnetwork:
  - subnet: 172.16.0.0         # OOB network; rarely used now, can be left unchanged
    netmask: 255.255.255.0
    gateway: 172.16.0.254
kvm_img: /home/oct/opsbuild/alios6_cm10_for_shiyu_20150522105914.qcow2
dom0:
  - ip: 108.199.20.11          # ops1 IP
    datapath: /dev/sda5
    oobnic: eth2
    pnic:                      # bond slave NICs
      - eth11
      - eth13
  - ip: 108.199.20.12          # ops2 IP
    datapath: /dev/sda5
    oobnic: eth2
    pnic:                      # bond slave NICs
      - eth11
      - eth13
vmyumvip:
  - 108.199.20.50
mysqlvip: 108.199.20.51
yumvip:
  - 108.199.20.52
clonewebvip:
  - 108.199.20.53
oobmastervip:
  - 108.199.20.54
dnsvip:
  - ip: 108.199.20.55
  - ip: 108.199.20.56
lvs:
  - ip: 108.199.20.57
  - ip: 108.199.20.58
mysql:
  - ip: 108.199.20.59
  - ip: 108.199.20.60
dns:
  - ip: 108.199.20.61
    disksize: 40G
  - ip: 108.199.20.62
    disksize: 40G
clone:
  - ip: 108.199.20.63
    disksize: 20G
  - ip: 108.199.20.64
    disksize: 20G
cloneweb:
  - ip: 108.199.20.65
  - ip: 108.199.20.66
vmyum:
  - ip: 108.199.20.67
    disksize: 20G
  - ip: 108.199.20.68
    disksize: 20G
yum:
  - ip: 108.199.20.69
    disksize: 180G
  - ip: 108.199.20.70
    disksize: 180G
ntp:
  - ip: 108.199.20.71
  - ip: 108.199.20.72
oob:
  - ip: 108.199.20.73
    oobip: 192.168.0.1
  - ip: 108.199.20.74
    oobip: 192.168.0.2
oobmaster:
  - ip: 108.199.20.75
  - ip: 108.199.20.76
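Before generating data, it can save a round-trip to validate the YAML syntax. A sketch, assuming the opsbuild venv's Python (2.7, per the tracebacks later in this note) has PyYAML installed:

cd /home/oct/opsbuild
source venv/bin/activate
# fails with a parse error and line number if the file is malformed
python -c "import yaml; yaml.safe_load(open('etc/precise-hxbzsc.yml')); print 'YAML OK'"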
After saving and exiting, use the table below to cross-check each role's IP; the command after the table tallies the addresses in the file:
KVM guest      IP
vmyumvip       108.199.20.50
mysqlvip       108.199.20.51
yumvip         108.199.20.52
clonewebvip    108.199.20.53
oobmastervip   108.199.20.54
dnsvip         108.199.20.55, 108.199.20.56
lvs            108.199.20.57, 108.199.20.58
mysql          108.199.20.59, 108.199.20.60
dns            108.199.20.61, 108.199.20.62
clone          108.199.20.63, 108.199.20.64
cloneweb       108.199.20.65, 108.199.20.66
vmyum          108.199.20.67, 108.199.20.68
yum            108.199.20.69, 108.199.20.70
ntp            108.199.20.71, 108.199.20.72
oob            108.199.20.73, 108.199.20.74
oobmaster      108.199.20.75, 108.199.20.76
cat precise-hxbzsc.yml | grep 108 | awk '{print $NF}' | sort -n | uniq -c

Activate the virtual environment and generate the detailed configuration file:

source /home/oct/opsbuild/venv/bin/activate
(venv) cd /home/oct/opsbuild/
(venv) python bin/opsbuild.py gendata etc/precise-hxbzsc.yml

3. Deploy KVM

cd /home/oct/opsbuild/
source ../venv/bin/activate
python bin/opsbuild.py -d etc/predata.yml deploy host

If deployment fails with a NIC-restart error:

cd /etc/sysconfig/network-scripts/
ls

Delete all .bak and .tmp NIC configuration files, delete the configuration of every NIC that has no cable attached, then restart the network by hand to see whether it succeeds:

systemctl restart network

If deployment fails part-way for some other configuration problem, the VG has to be removed:

for i in `virsh list | grep run | awk '{print $2}'`
do
    virsh destroy $i
    virsh undefine $i
done

dmsetup remove -f vgdata-clone2p1
dmsetup remove -f vgdata-clone2p2
dmsetup remove -f vgdata-vmyum2p1
dmsetup remove -f vgdata-vmyum2p2
dmsetup remove -f vgdata-dns2p1
dmsetup remove -f vgdata-dns2p2
dmsetup remove -f vgdata-clone1p1
dmsetup remove -f vgdata-clone1p2
dmsetup remove -f vgdata-vmyum1p1
dmsetup remove -f vgdata-vmyum1p2
dmsetup remove -f vgdata-dns1p1
dmsetup remove -f vgdata-dns1p2

lvremove /dev/vgdata/*
vgremove vgdata
dd if=/dev/zero of=/dev/sdb2 bs=512k count=1

After deployment succeeds, check that the virtual machines started. On host ops1:

virsh list
 Id    Name          State
----------------------------------------------------
 2     lvs1          running
 3     oob1          running
 4     mysql1        running
 5     ntp1          running
 6     clone1        running
 7     cloneweb1     running
 8     vmyum1        running
 9     dns1          running
 10    oobmaster1    running
 11    yum1          running

Set all the VMs to start automatically on boot:

virsh autostart lvs1
virsh autostart oob1
virsh autostart mysql1
virsh autostart ntp1
virsh autostart clone1
virsh autostart cloneweb1
virsh autostart vmyum1
virsh autostart dns1
virsh autostart oobmaster1
virsh autostart yum1

Apart from the VIPs, log in to each of the other IPs and check they respond (note: the file name and subnet in this snippet come from a different environment; substitute precise-hxbzsc.yml and 108.199.20.* here):

cat /home/oct/opsbuild/etc/precise-sdbank.yml | grep -o 172.16.81.* >> in.lst
for i in `cat in.lst`; do echo "$i == `ssh $i uname -n`"; done

Write the KVM IPs and hostnames into the local /etc/hosts file:

108.199.20.57 lvs1
108.199.20.58 lvs2
108.199.20.59 mysql1
108.199.20.60 mysql2
108.199.20.61 dns1
108.199.20.62 dns2
108.199.20.63 clone1
108.199.20.64 clone2
108.199.20.65 cloneweb1
108.199.20.66 cloneweb2
108.199.20.67 vmyum1
108.199.20.68 vmyum2
108.199.20.69 yum1
108.199.20.70 yum2
108.199.20.71 ntp1
108.199.20.72 ntp2
108.199.20.73 oob1
108.199.20.74 oob2
108.199.20.75 oobmaster1
108.199.20.76 oobmaster2

(venv) On host ops2:

virsh list
 Id    Name          State
----------------------------------------------------
 2     lvs2          running
 3     oob2          running
 4     mysql2        running
 5     ntp2          running
 6     clone2        running
 7     cloneweb2     running
 8     vmyum2        running
 9     dns2          running
 10    oobmaster2    running
 11    yum2          running

Set all the VMs to start automatically on boot:

virsh autostart lvs2
virsh autostart oob2
virsh autostart mysql2
virsh autostart ntp2
virsh autostart clone2
virsh autostart cloneweb2
virsh autostart vmyum2
virsh autostart dns2
virsh autostart oobmaster2
virsh autostart yum2

Write the IPs and hostnames into the local /etc/hosts file as well.

4. Base service deployment notes

The KVM layer is now in place; next, deploy the corresponding service onto each VM. Because the VMs run AliOS 6, checking a service's status needs the following three commands:

chkconfig --list
service keepalived status
service keepalived start

Deploy strictly in the order below. Run all of the deployment commands on host ops1, inside the virtual environment under the oct directory. Activate the virtual environment:

source /home/oct/opsbuild/venv/bin/activate
(venv)

5. Deploy MySQL

On host ops1:

cd /home/oct/opsbuild
python bin/opsbuild.py -d etc/predata.yml deploy mysql
(venv)

After the command finishes, log in to the mysql1 and mysql2 servers and check that keepalived and mysqld are running:

ssh mysql1
uname -a
cat /etc/redhat-release
service keepalived status
service keepalived start
service keepalived status
service mysqld status

Make sure keepalived and mysqld start on boot:

chkconfig --list | egrep "keep|mysql"

If both are off, run:

chkconfig mysqld on
chkconfig keepalived on

Log in to mysql2 and check keepalived and mysqld with the same commands.

6. Deploy LVS

Make absolutely sure that the keepalived virtual_router_id values across the whole system do not conflict: lvs1 and lvs2 must share the same virtual_router_id, but it must differ from the one used by MySQL's keepalived pair (a check sketch follows below).

On host ops1:

cd /home/oct/opsbuild
python bin/opsbuild.py -d etc/predata.yml deploy lvs
(venv)
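The virtual_router_id requirement above is easy to verify centrally. A minimal sketch, assuming keepalived's default configuration path on each VM and the SSH trust plus /etc/hosts entries set up earlier:

# lvs1/lvs2 should print the same virtual_router_id; mysql1/mysql2 a different one
for h in lvs1 lvs2 mysql1 mysql2; do
    echo -n "$h: "
    ssh $h "grep virtual_router_id /etc/keepalived/keepalived.conf"
done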
Log in to the lvs1 server and check that keepalived and LVS are healthy and start on boot:

chkconfig --list | egrep "keep|ipvsadm"
chkconfig ipvsadm on
service ipvsadm status
service keepalived status

Log in to lvs2 and check keepalived and ipvsadm with the same commands.

7. Deploy DNS

cd /home/oct/opsbuild
python bin/opsbuild.py -d etc/predata.yml deploy dns
(venv)

Log in to the dns1 server and check that the named service is healthy:

ssh dns1
service named status
cd /var/named
cat hxmpaaszsc.com

Configure DNS for hosts ops1 and ops2:

cat /etc/resolv.conf
search tbsite.net
options attempts:2 timeout:2
nameserver 108.199.20.55    # DNS1_IP
nameserver 108.199.20.56    # DNS2_IP

8. Deploy clone

python bin/opsbuild.py -d etc/predata.yml deploy clone
# if deployment errors out, simply run it a second time
ops     | host          | action   | result | desc
--------|---------------|----------|--------|--------------------
clone   | 172.50.54.62  | port9999 | False  | Command "lsof -i tcp:9999
if [ $? -eq 0 ]; then
    # port 9999 used by other program
    echo "port 9999 was aleady used"
    exit 1
else
    # port 9999 not used
    sh start_service.sh
fi" run
(venv)

python bin/opsbuild.py -d etc/predata.yml deploy clone
(venv)

Log in to the clone1 server and check that the clone service is healthy:

ssh clone1
service dhcpd status
dhcpd (pid 2113) is running...

lsof -i:9999
COMMAND   PID   USER  FD  TYPE  DEVICE  SIZE/OFF  NODE  NAME
gunicorn  2526  root  5u  IPv4  28313   0t0       TCP   *:distinct (LISTEN)
gunicorn  2531  root  5u  IPv4  28313   0t0       TCP   *:distinct (LISTEN)
gunicorn  2532  root  5u  IPv4  28313   0t0       TCP   *:distinct (LISTEN)
gunicorn  2534  root  5u  IPv4  28313   0t0       TCP   *:distinct (LISTEN)
gunicorn  2536  root  5u  IPv4  28313   0t0       TCP   *:distinct (LISTEN)
gunicorn  2537  root  5u  IPv4  28313   0t0       TCP   *:distinct (LISTEN)

ps -ef | grep 2526
root  2526     1  0 01:46 ?      00:00:00 /home/tops/bin/python /home/tops/bin/gunicorn --workers=5 -b :9999 ngis_service
root  2531  2526  0 01:46 ?      00:00:00 /home/tops/bin/python /home/tops/bin/gunicorn --workers=5 -b :9999 ngis_service
root  2532  2526  0 01:46 ?      00:00:00 /home/tops/bin/python /home/tops/bin/gunicorn --workers=5 -b :9999 ngis_service
root  2534  2526  0 01:46 ?      00:00:00 /home/tops/bin/python /home/tops/bin/gunicorn --workers=5 -b :9999 ngis_service
root  2536  2526  0 01:46 ?      00:00:00 /home/tops/bin/python /home/tops/bin/gunicorn --workers=5 -b :9999 ngis_service
root  2537  2526  0 01:46 ?      00:00:00 /home/tops/bin/python /home/tops/bin/gunicorn --workers=5 -b :9999 ngis_service
root  2626  2254  0 01:51 pts/0  00:00:00 grep 2526
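To repeat these clone checks on both nodes from ops1 in one pass, a small sketch (reuses the SSH trust and /etc/hosts entries created earlier; output formatting may differ slightly between nodes):

for h in clone1 clone2; do
    echo "== $h =="
    ssh $h "service dhcpd status; lsof -i tcp:9999 | head -3"
done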
9. Deploy cloneweb

python bin/opsbuild.py -d etc/predata.yml deploy cloneweb
ops      | host          | action  | result | desc
---------|---------------|---------|--------|--------------------
cloneweb | 172.16.31.67  | init_db | False  | Command "export DJANGO_SETTINGS_MODULE="settings"
/home/alicloneweb/yunclone/manage dbshell < /home/alicloneweb/aliclone.sql" run
cloneweb | 172.16.31.66  | init_db | False  | Command "export DJANGO_SETTINGS_MODULE="settings"
/home/alicloneweb/yunclone/manage dbshell < /home/alicloneweb/aliclone.sql" run
(venv)

ssh cloneweb1
/home/alicloneweb/yunclone/manage dbshell < /home/alicloneweb/aliclone.sql
ERROR 2003 (HY000): Can't connect to MySQL server on '172.16.31.53' (113)

Log in to cloneweb1 and repair the alicloneweb error:

export DJANGO_SETTINGS_MODULE="settings"
/home/alicloneweb/yunclone/manage dbshell < /home/alicloneweb/aliclone.sql

Log in to cloneweb2 and run the same repair steps:

export DJANGO_SETTINGS_MODULE="settings"
/home/alicloneweb/yunclone/manage dbshell < /home/alicloneweb/aliclone.sql

Make sure /etc/resolv.conf on ops1/2 points at the correct DNS server VIP, that the DNS VIP answers ping, and that the ipvsadm service on lvs1/2 is working. Once cloneweb is deployed, you will see the following output:

python bin/opsbuild.py acli list
id sn hostname os app_name ip sm progress status postcheck_result gmt_created
(venv)

If the connection fails instead (for example because the DNS VIP or LVS is not working), acli list dies with a traceback like this:

python bin/opsbuild.py acli list
Traceback (most recent call last):
  File "bin/opsbuild.py", line 772, in <module>
    args.func(args, aconfig)
  File "/home/oct/opsbuild/lib/acli/__init__.py", line 173, in _list
    headers={'content-type': 'application/json'})
  File "/home/oct/venv/lib64/python2.7/site-packages/requests/api.py", line 109, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/oct/venv/lib64/python2.7/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/home/oct/venv/lib64/python2.7/site-packages/requests/sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/oct/venv/lib64/python2.7/site-packages/requests/sessions.py", line 573, in send
    r = adapter.send(request, **kwargs)
  File "/home/oct/venv/lib64/python2.7/site-packages/requests/adapters.py", line 415, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', error(113, 'No route to host'))
(venv)

10. Deploy ntp

python bin/opsbuild.py -d etc/predata.yml deploy ntp
(venv)

11. Deploy yum

python bin/opsbuild.py -d etc/predata.yml deploy yum
(venv)

Delete the extra repo:

rm -rf /etc/yum.repos.d/google-chrome.repo

12. Deploy vmyum

python bin/opsbuild.py -d etc/predata.yml deploy vmyum
(venv)

13. Information summary

Write the IP/hostname list into the ops hosts file:

108.199.20.57 lvs1
108.199.20.58 lvs2
108.199.20.59 mysql1
108.199.20.60 mysql2
108.199.20.61 dns1
108.199.20.62 dns2
108.199.20.63 clone1
108.199.20.64 clone2
108.199.20.65 cloneweb1
108.199.20.66 cloneweb2
108.199.20.67 vmyum1
108.199.20.68 vmyum2
108.199.20.69 yum1
108.199.20.70 yum2
108.199.20.71 ntp1
108.199.20.72 ntp2
108.199.20.73 oob1
108.199.20.74 oob2
108.199.20.75 oobmaster1
108.199.20.76 oobmaster2

Write all the KVM IPs into a kvm.ip file; IP addresses only, one per line.

Start the acpid service on all the KVM guests:

pgm -f kvm.ip "chkconfig acpid on; service acpid start"
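A quick spot-check that acpid really is running on every guest; a sketch, assuming the kvm.ip file above and root SSH access to the guests:

for ip in $(cat kvm.ip); do
    echo -n "$ip: "
    ssh -o ConnectTimeout=5 "$ip" "service acpid status" 2>&1
done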