HyperLedger Fabric 1.2 (1.0/1.4) Kafka Environment Production Deployment on Alibaba Cloud
This document was produced on Alibaba Cloud; it can be reproduced on your own virtual machines just as well (4 GB of RAM or more is recommended).
It can be followed on its own and you should not need other references. If you hit a problem, send me a private message with the question and your email and I will reply when I see it.
After Fabric is set up, the interface (SDK) layer was built as well. Since that involves further technical work, the source code is not published here; the official source is rough, and getting it working was not easy.
Multiple channels / CA were set up too; contact me privately if you want to discuss them.
When building this I relied heavily on posts by a blogger called 灵龙 (Linglong), and also solved many problems myself. Thanks, Linglong. Original posts:
https://www.cnblogs.com/llongst/tag/fabric/
After publishing I found that the md file does not open correctly, so I uploaded a copy to Baidu Cloud; the download link is at the end of this post.
#Fabric System Setup
##System environment: Ubuntu 16.04 x64
##Keeping the Linux session alive in MobaXterm
>Without a keep-alive, the connection drops after a period of inactivity; you can only reconnect manually and it becomes awkward to look back at earlier output.
> ###Linux server settings:
`vi /etc/ssh/sshd_config`
> Find TCPKeepAlive yes and remove the leading # (Alibaba Cloud removes it by default, so no change is needed)
> Find the ClientAliveInterval parameter and remove the leading #
>
> Add the following line below TCPKeepAlive, then save and exit
`ClientAliveInterval 60`
>
> Restart the service:
`service ssh restart`
>
> If this reports an error, reboot the server; later steps involve reboots anyway.
> MobaXterm client: Settings - Configuration - SSH - SSH keepalive (tick it)
##Update the package index
`apt-get update`
##Install Go v1.9
> The Go version in Ubuntu's apt-get repository is too old, so install it manually:
`wget https://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz`
> Then extract it:
`sudo tar -C /usr/local -xzf go1.9.linux-amd64.tar.gz`
> Next, edit the current user's environment variables
`vi ~/.profile`
> Append the following at the end
>
```
export GOROOT=/usr/local/go
export GOBIN=$GOROOT/bin
export GOPATH=/usr/local/fabric
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
```
> Finally, reload the environment variables
`source ~/.profile`
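> To confirm Go is on the PATH after reloading the profile, a quick check (the output shown is what a go1.9 install on amd64 would print; yours may differ slightly):
```
go version
# expected: go version go1.9 linux/amd64
```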
##Installing Docker
>Install docker.io
`apt install docker.io`
Verify the installed version
`docker version`
##Installing Docker-Compose
> `apt install docker-compose`
> Docker-Compose is a Docker tool for defining and running multi-container applications: a single file describes the containers and their dependencies, and one command brings the whole application up.
Download the latest docker-compose release to /usr/local/bin/docker-compose
`curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose`
Make /usr/local/bin/docker-compose executable
`chmod +x /usr/local/bin/docker-compose`
Test whether docker-compose was installed successfully
`docker-compose --version`
##Node.js && NPM
> Install Node.js from source
Download the source tarball; version 8.11.3 is used here.
Note: Node.js 9.x is not supported; choose 8.9.x or a newer release.
`wget https://nodejs.org/dist/v8.11.3/node-v8.11.3.tar.gz`
Extract the source
`tar -zxf node-v8.11.3.tar.gz`
> Build and install
```
cd node-v8.11.3/
./configure
make
make install
```
> The make step can take quite a while.
> Verify the installation
`node -v`
v8.11.3
`npm -version`
5.6.0
##Install the Fabric samples, source code, and Docker images
> Here we use the alternative approach from the official documentation.
The downloads are very slow; expect roughly 12 hours.
Copy the contents of the official bootstrap.sh script to the local machine.
Path:
https://github.com/hyperledger/fabric/blob/master/scripts/bootstrap.sh
> bootstrap.sh
```
#!/bin/bash
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# if version not passed in, default to latest released version
export VERSION=1.4.0
# if ca version not passed in, default to latest released version
export CA_VERSION=$VERSION
# current version of thirdparty images (couchdb, kafka and zookeeper) released
export THIRDPARTY_IMAGE_VERSION=0.4.14
export ARCH=$(echo "$(uname -s|tr '[:upper:]' '[:lower:]'|sed 's/mingw64_nt.*/windows/')-$(uname -m | sed 's/x86_64/amd64/g')")
export MARCH=$(uname -m)
printHelp() {
echo "Usage: bootstrap.sh ]] "
echo
echo "options:"
echo "-h : this help"
echo "-d : bypass docker image download"
echo "-s : bypass fabric-samples repo clone"
echo "-b : bypass download of platform-specific binaries"
echo
echo "e.g. bootstrap.sh 1.4.0 -s"
echo "would download docker images and binaries for version 1.4.0"
}
dockerFabricPull() {
local FABRIC_TAG=$1
for IMAGES in peer orderer ccenv javaenv tools; do
echo "==> FABRIC IMAGE: $IMAGES"
echo
docker pull hyperledger/fabric-$IMAGES:$FABRIC_TAG
docker tag hyperledger/fabric-$IMAGES:$FABRIC_TAG hyperledger/fabric-$IMAGES
done
}
dockerThirdPartyImagesPull() {
local THIRDPARTY_TAG=$1
for IMAGES in couchdb kafka zookeeper; do
echo "==> THIRDPARTY DOCKER IMAGE: $IMAGES"
echo
docker pull hyperledger/fabric-$IMAGES:$THIRDPARTY_TAG
docker tag hyperledger/fabric-$IMAGES:$THIRDPARTY_TAG hyperledger/fabric-$IMAGES
done
}
dockerCaPull() {
local CA_TAG=$1
echo "==> FABRIC CA IMAGE"
echo
docker pull hyperledger/fabric-ca:$CA_TAG
docker tag hyperledger/fabric-ca:$CA_TAG hyperledger/fabric-ca
}
samplesInstall() {
# clone (if needed) hyperledger/fabric-samples and checkout corresponding
# version to the binaries and docker images to be downloaded
if [ -d first-network ]; then
# if we are in the fabric-samples repo, checkout corresponding version
echo "===> Checking out v${VERSION} of hyperledger/fabric-samples"
git checkout v${VERSION}
elif [ -d fabric-samples ]; then
# if fabric-samples repo already cloned and in current directory,
# cd fabric-samples and checkout corresponding version
echo "===> Checking out v${VERSION} of hyperledger/fabric-samples"
cd fabric-samples && git checkout v${VERSION}
else
echo "===> Cloning hyperledger/fabric-samples repo and checkout v${VERSION}"
git clone -b master https://github.com/hyperledger/fabric-samples.git && cd fabric-samples && git checkout v${VERSION}
fi
}
# Incrementally downloads the .tar.gz file locally first, only decompressing it
# after the download is complete. This is slower than binaryDownload() but
# allows the download to be resumed.
binaryIncrementalDownload() {
local BINARY_FILE=$1
local URL=$2
curl -f -s -C - ${URL} -o ${BINARY_FILE} || rc=$?
# Due to limitations in the current Nexus repo:
# curl returns 33 when there's a resume attempt with no more bytes to download
# curl returns 2 after finishing a resumed download
# with -f curl returns 22 on a 404
if [ "$rc" = 22 ]; then
# looks like the requested file doesn't actually exist so stop here
return 22
fi
if [ -z "$rc" ] || [ $rc -eq 33 ] || [ $rc -eq 2 ]; then
# The checksum validates that RC 33 or 2 are not real failures
echo "==> File downloaded. Verifying the md5sum..."
localMd5sum=$(md5sum ${BINARY_FILE} | awk '{print $1}')
remoteMd5sum=$(curl -s ${URL}.md5)
if [ "$localMd5sum" == "$remoteMd5sum" ]; then
echo "==> Extracting ${BINARY_FILE}..."
tar xzf ./${BINARY_FILE} --overwrite
echo "==> Done."
rm -f ${BINARY_FILE} ${BINARY_FILE}.md5
else
echo "Download failed: the local md5sum is different from the remote md5sum. Please try again."
rm -f ${BINARY_FILE} ${BINARY_FILE}.md5
exit 1
fi
else
echo "Failure downloading binaries (curl RC=$rc). Please try again and the download will resume from where it stopped."
exit 1
fi
}
# This will attempt to download the .tar.gz all at once, but will trigger the
# binaryIncrementalDownload() function upon a failure, allowing for resume
# if there are network failures.
binaryDownload() {
local BINARY_FILE=$1
local URL=$2
echo "===> Downloading: " ${URL}
# Check if a previous failure occurred and the file was partially downloaded
if [ -e ${BINARY_FILE} ]; then
echo "==> Partial binary file found. Resuming download..."
binaryIncrementalDownload ${BINARY_FILE} ${URL}
else
curl ${URL} | tar xz || rc=$?
if [ ! -z "$rc" ]; then
echo "==> There was an error downloading the binary file. Switching to incremental download."
echo "==> Downloading file..."
binaryIncrementalDownload ${BINARY_FILE} ${URL}
else
echo "==> Done."
fi
fi
}
binariesInstall() {
echo "===> Downloading version ${FABRIC_TAG} platform specific fabric binaries"
binaryDownload ${BINARY_FILE} https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/${ARCH}-${VERSION}/${BINARY_FILE}
if [ $? -eq 22 ]; then
echo
echo "------> ${FABRIC_TAG} platform specific fabric binary is not available to download <----"
echo
fi
echo "===> Downloading version ${CA_TAG} platform specific fabric-ca-client binary"
binaryDownload ${CA_BINARY_FILE} https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric-ca/hyperledger-fabric-ca/${ARCH}-${CA_VERSION}/${CA_BINARY_FILE}
if [ $? -eq 22 ]; then
echo
echo "------> ${CA_TAG} fabric-ca-client binary is not available to download(Available from 1.1.0-rc1) <----"
echo
fi
}
dockerInstall() {
which docker >& /dev/null
NODOCKER=$?
if [ "${NODOCKER}" == 0 ]; then
echo "===> Pulling fabric Images"
dockerFabricPull ${FABRIC_TAG}
echo "===> Pulling fabric ca Image"
dockerCaPull ${CA_TAG}
echo "===> Pulling thirdparty docker images"
dockerThirdPartyImagesPull ${THIRDPARTY_TAG}
echo
echo "===> List out hyperledger docker images"
docker images | grep hyperledger*
else
echo "========================================================="
echo "Docker not installed, bypassing download of Fabric images"
echo "========================================================="
fi
}
DOCKER=true
SAMPLES=true
BINARIES=true
# Parse commandline args pull out
# version and/or ca-version strings first
if [ ! -z "$1" -a "${1:0:1}" != "-" ]; then
VERSION=$1;shift
if [ ! -z "$1"-a "${1:0:1}" != "-" ]; then
CA_VERSION=$1;shift
if [ ! -z "$1"-a "${1:0:1}" != "-" ]; then
THIRDPARTY_IMAGE_VERSION=$1;shift
fi
fi
fi
# prior to 1.2.0 architecture was determined by uname -m
if [[ $VERSION =~ ^1\.[0-1]\.* ]]; then
export FABRIC_TAG=${MARCH}-${VERSION}
export CA_TAG=${MARCH}-${CA_VERSION}
export THIRDPARTY_TAG=${MARCH}-${THIRDPARTY_IMAGE_VERSION}
else
# starting with 1.2.0, multi-arch images will be default
: ${CA_TAG:="$CA_VERSION"}
: ${FABRIC_TAG:="$VERSION"}
: ${THIRDPARTY_TAG:="$THIRDPARTY_IMAGE_VERSION"}
fi
BINARY_FILE=hyperledger-fabric-${ARCH}-${VERSION}.tar.gz
CA_BINARY_FILE=hyperledger-fabric-ca-${ARCH}-${CA_VERSION}.tar.gz
# then parse opts
while getopts "h?dsb" opt; do
case "$opt" in
h|\?)
printHelp
exit 0
;;
d)DOCKER=false
;;
s)SAMPLES=false
;;
b)BINARIES=false
;;
esac
done
if [ "$SAMPLES" == "true" ]; then
echo
echo "Installing hyperledger/fabric-samples repo"
echo
samplesInstall
fi
if [ "$BINARIES" == "true" ]; then
echo
echo "Installing Hyperledger Fabric binaries"
echo
binariesInstall
fi
if [ "$DOCKER" == "true" ]; then
echo
echo "Installing Hyperledger Fabric docker images"
echo
dockerInstall
fi
```
> Upload the file to the server and change the permissions on the fabric directory,
otherwise it will fail with Permission denied
`chmod -R 777 fabric/`
`cd ./`
The downloaded bootstrap.sh script is in DOS format and needs to be converted to UNIX format
`vim ./bootstrap.sh`
Check whether the file format reads dos or unix
`:set ff?`
To convert it to unix format:
`:set ff=unix`
Then save and exit (an alternative shell-only conversion is sketched below)
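> If you prefer not to open vim, the same conversion can be done from the shell; a minimal sketch assuming bootstrap.sh is in the current directory:
```
# strip carriage returns in place (equivalent to :set ff=unix)
sed -i 's/\r$//' bootstrap.sh
# or, after installing the dos2unix package:
# apt install dos2unix && dos2unix bootstrap.sh
```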
Run the script
`./bootstrap.sh`
##Install the Fabric samples, source code, and Docker images (fast install)
Configure an Alibaba Cloud registry mirror for your own account; see:
`https://blog.csdn.net/sinat_32247833/article/details/79767263`
> Edit the configuration file
`mkdir -p /etc/docker`
`vi /etc/docker/daemon.json`
> Add
```
{
"registry-mirrors": ["https://erhtkl3b.mirror.aliyuncs.com"]
}
```
> Restart Docker
`systemctl daemon-reload`
`systemctl restart docker`
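> To confirm the mirror took effect, docker info should list it under "Registry Mirrors"; a quick check:
```
docker info | grep -A 1 "Registry Mirrors"
```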
> Pull the Docker images: first the pinned versions, then the latest tags
```
docker pull hyperledger/fabric-ca:1.2.0
docker pull hyperledger/fabric-tools:1.2.0
docker pull hyperledger/fabric-ccenv:1.2.0
docker pull hyperledger/fabric-orderer:1.2.0
docker pull hyperledger/fabric-peer:1.2.0
docker pull hyperledger/fabric-zookeeper:0.4.10
docker pull hyperledger/fabric-kafka:0.4.10
docker pull hyperledger/fabric-couchdb:0.4.10
docker pull hyperledger/fabric-baseos:amd64-0.4.10
docker pull hyperledger/fabric-ca
docker pull hyperledger/fabric-tools
docker pull hyperledger/fabric-ccenv
docker pull hyperledger/fabric-orderer
docker pull hyperledger/fabric-peer
docker pull hyperledger/fabric-zookeeper
docker pull hyperledger/fabric-kafka
docker pull hyperledger/fabric-couchdb
docker pull hyperledger/fabric-baseos
```
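> After the pulls finish you can list the images to confirm everything arrived:
```
docker images | grep hyperledger
```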
## Install the Fabric source code
> Download the fabric source; you can clone it fresh or copy an existing checkout.
Change into the directory where you want it first
```
cd /usr/local/
git clone https://github.com/hyperledger/fabric.git
```
> Check out the release source; this must be done inside the fabric directory
```
cd ./fabric
git checkout v1.2.0
```
> Change the permissions on the fabric directory
```
cd ..
chmod -R 777 fabric/
cd ./fabric
```
> ###Prepare to generate the certificates and block configuration files
Reference article: https://www.cnblogs.com/llongst/p/9571363.html
Configure crypto-config.yaml and configtx.yaml and place them in the fabric directory. (There are too many yaml files to show every one; the two key files follow.)
> crypto-config.yaml:
```
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: true
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration. Most users will want to use Template, below
    #
    # Specs is an array of Spec entries. Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN. By default, this is the template:
    #
    #                 "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive. You may define both
    # sections and the aggregate nodes will be created for you. Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.example.com
    EnableNodeOUs: true
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    Template:
      Count: 2
    Users:
      Count: 1
```
> configtx.yaml:
```
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:
    # SampleOrg defines an MSP using the sampleconfig. It should never be used
    # in production but may be used as a template for other definitions
    - &OrdererOrg
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: OrdererOrg
        # ID to load the MSP definition as
        ID: OrdererMSP
        # MSPDir is the filesystem path which contains the MSP configuration
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
        # Policies defines the set of policies at this level of the config tree
        # For organization policies, their canonical path is usually
        #   /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"
    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org1MSP
        # ID to load the MSP definition as
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
        # Policies defines the set of policies at this level of the config tree
        # For organization policies, their canonical path is usually
        #   /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('Org1MSP.admin')"
        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org1.example.com
              Port: 7051
    - &Org2
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org2MSP
        # ID to load the MSP definition as
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
        # Policies defines the set of policies at this level of the config tree
        # For organization policies, their canonical path is usually
        #   /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('Org2MSP.admin')"
        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org2.example.com
              Port: 7051
################################################################################
#
#   SECTION: Capabilities
#
#   - This section defines the capabilities of fabric network. This is a new
#   concept as of v1.1.0 and should not be utilized in mixed networks with
#   v1.0.x peers and orderers. Capabilities define features which must be
#   present in a fabric binary for that binary to safely participate in the
#   fabric network. For instance, if a new MSP type is added, newer binaries
#   might recognize and validate the signatures from this type, while older
#   binaries without this support would be unable to validate those
#   transactions. This could lead to different versions of the fabric binaries
#   having different world states. Instead, defining a capability for a channel
#   informs those binaries without this capability that they must cease
#   processing transactions until they have been upgraded. For v1.0.x if any
#   capabilities are defined (including a map with all capabilities turned off)
#   then the v1.0.x peer will deliberately crash.
#
################################################################################
Capabilities:
    # Channel capabilities apply to both the orderers and the peers and must be
    # supported by both. Set the value of the capability to true to require it.
    Global: &ChannelCapabilities
        # V1.1 for Global is a catchall flag for behavior which has been
        # determined to be desired for all orderers and peers running v1.0.x,
        # but the modification of which would cause incompatibilities. Users
        # should leave this flag set to true.
        V1_1: true
    # Orderer capabilities apply only to the orderers, and may be safely
    # manipulated without concern for upgrading peers. Set the value of the
    # capability to true to require it.
    Orderer: &OrdererCapabilities
        # V1.1 for Order is a catchall flag for behavior which has been
        # determined to be desired for all orderers running v1.0.x, but the
        # modification of which would cause incompatibilities. Users should
        # leave this flag set to true.
        V1_1: true
    # Application capabilities apply only to the peer network, and may be safely
    # manipulated without concern for upgrading orderers. Set the value of the
    # capability to true to require it.
    Application: &ApplicationCapabilities
        # V1.1 for Application is a catchall flag for behavior which has been
        # determined to be desired for all peers running v1.0.x, but the
        # modification of which would cause incompatibilities. Users should
        # leave this flag set to true.
        V1_2: true
################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:
    # Policies defines the set of policies at this level of the config tree
    # For Application policies, their canonical path is
    #   /Channel/Application/<PolicyName>
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    # Capabilities describes the application level capabilities, see the
    # dedicated Capabilities section elsewhere in this file for a full
    # description
    Capabilities:
        <<: *ApplicationCapabilities
################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults
    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: solo
    Addresses:
        - orderer.example.com:7050
    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s
    # Batch Size: Controls the number of messages batched into a block
    BatchSize:
        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10
        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 98 MB
        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB
    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects. Edit
        # this list to identify the brokers of the ordering service.
        # NOTE: Use IP:port notation.
        Brokers:
            - 127.0.0.1:9092
    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:
    # Policies defines the set of policies at this level of the config tree
    # For Orderer policies, their canonical path is
    #   /Channel/Orderer/<PolicyName>
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        # BlockValidation specifies what signatures must be included in the block
        # from the orderer for the peer to validate it.
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"
    # Capabilities describes the orderer level capabilities, see the
    # dedicated Capabilities section elsewhere in this file for a full
    # description
    Capabilities:
        <<: *OrdererCapabilities
################################################################################
#
#   CHANNEL
#
#   This section defines the values to encode into a config transaction or
#   genesis block for channel related parameters.
#
################################################################################
Channel: &ChannelDefaults
    # Policies defines the set of policies at this level of the config tree
    # For Channel policies, their canonical path is
    #   /Channel/<PolicyName>
    Policies:
        # Who may invoke the 'Deliver' API
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        # Who may invoke the 'Broadcast' API
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        # By default, who may modify elements at this config level
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    # Capabilities describes the channel level capabilities, see the
    # dedicated Capabilities section elsewhere in this file for a full
    # description
    Capabilities:
        <<: *ChannelCapabilities
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:
    TwoOrgsOrdererGenesis:
        <<: *ChannelDefaults
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
```
> Generate the keys and certificates
`./bin/cryptogen generate --config=./crypto-config.yaml`
> Generate the genesis block
`mkdir channel-artifacts`
`./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block`
> Generate the channel configuration transaction
`./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel`
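> In the standard two-org flow the anchor peer update transactions are usually generated here as well; a sketch assuming the TwoOrgsChannel profile and MSP names from the configtx.yaml above (not strictly required for the single-peer test below):
```
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
```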
> ###Prepare the docker configuration files
Configure the docker-compose-XXXXXXX.yaml files, update the IP addresses, and copy them into the fabric directory
> docker-compose-orderer.yaml:
```
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
```
> docker-compose-peer.yaml:
```
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=multipeer_default
      #- CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    extra_hosts:
      - "orderer.example.com:192.168.235.100"
  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/multipeer/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.235.100"
      - "peer0.org1.example.com:192.168.235.101"
      - "peer1.org1.example.com:192.168.235.102"
      - "peer0.org2.example.com:192.168.235.103"
      - "peer1.org2.example.com:192.168.235.104"
```
## Start the Fabric network
> Update the IP addresses in the orderer and peer compose files.
Start the orderer and the peer
`docker-compose -f docker-compose-orderer.yaml up -d`
`docker-compose -f docker-compose-peer.yaml up -d`
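> Before going further you can confirm that the orderer, peer and cli containers are actually running:
```
docker ps
```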
> Error
```
orderer.example.com | 2018-12-27 13:47:14.743 UTC initializeLocalMsp -> FATA 002 Failed to initialize local MSP: could not load a valid signer certificate from directory /var/hyperledger/orderer/msp/signcerts: stat /var/hyperledger/orderer/msp/signcerts: no such file or directory
```
> This error was caused by the orderer settings in the hand-copied crypto-config.yaml and configtx.yaml; replacing the files fixed it
> Install the firewall component
`apt install firewalld`
> Firewall commands: open port 7050 first (otherwise you will get an error that port 7050 is unreachable); more ports are needed later, so open them as required
```
firewall-cmd --list-ports
firewall-cmd --zone=public --add-port=7050/tcp --permanent
firewall-cmd --zone=public --add-port=7051/tcp --permanent
firewall-cmd --zone=public --add-port=7052/tcp --permanent
firewall-cmd --zone=public --add-port=9091/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=2888/tcp --permanent
firewall-cmd --zone=public --add-port=3888/tcp --permanent
firewall-cmd --zone=public --add-port=7007/tcp --permanent
firewall-cmd --reload
```
> Disable the ufw firewall
`ufw disable`
> Reboot the machine
`sudo reboot`
> Remove the Fabric containers
```
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images |grep "dev-" |awk '{print $3}')
docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
```
> Start the orderer and peer
`docker-compose -f docker-compose-orderer.yaml up -d`
`docker-compose -f docker-compose-peer.yaml up -d`
> Enter the cli container
`docker exec -it cli bash`
> Create the channel
```
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile $ORDERER_CA
```
> Join the peer to the channel
`peer channel join -b mychannel.block`
> Install the chaincode
`peer chaincode install -n mycc -p github.com/hyperledger/fabric/multipeer/chaincode/go/example02/cmd/ -v 1.0`
> Error: a path problem; remove the cmd directory from the path
```
Error: error getting chaincode code mycc: path to chaincode does not exist: /opt/gopath/src/github.com/hyperledger/fabric/multipeer/chaincode/go/example02/cmd
```
> Install the chaincode again with the corrected path
`peer chaincode install -n mycc -p github.com/hyperledger/fabric/multipeer/chaincode/go/example02/ -v 1.0`
## Instantiate the chaincode
> The ledger is initialized with a = 100 and b = 200.
```
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
```
> Error: in the peer compose file, CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=multipeer_default has to be changed to fabric_default
```
2018-12-28 05:55:37.952 UTC checkChaincodeCmdParams -> INFO 004 Using default vscc
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: API error (404): network multipeer_default not found
```
> After changing it, remove the containers and start over.
> Query a on the peer; it returns 100
`peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'`
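> To exercise the chaincode further you can transfer value between the two keys and query again; a sketch assuming the standard example02 chaincode installed above (run inside the cli container, with ORDERER_CA set as before):
```
peer chaincode invoke -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
# a should now show 90
```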
> At this point the 1-orderer, 1-peer blockchain test environment is complete
> Delete the docker files and back up a machine image. (Backing up an Alibaba Cloud image is very slow; it is faster to create fresh instances and reinstall every machine.)
## Set up the multi-node cluster
> Configure the IP mappings in the hosts file
```
47.111.16.153 zookeeper0
47.111.18.207 zookeeper1
47.110.245.35 zookeeper2
47.111.16.153 kafka0
47.111.18.207 kafka1
47.110.245.35 kafka2
47.111.0.157 kafka3
47.111.16.153 orderer0.example.com
47.111.18.207 orderer1.example.com
47.110.245.35 orderer2.example.com
47.111.0.157 peer0.org1.example.com
192.168.235.8 peer1.org1.example.com
192.168.235.9 peer0.org2.example.com
192.168.235.10 peer1.org2.example.com
```
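> A minimal sketch of applying the mappings (append them to /etc/hosts on every machine; the IPs above are from this particular deployment and must be replaced with your own):
```
echo "47.111.16.153 zookeeper0" >> /etc/hosts
# ...repeat for the remaining zookeeper/kafka/orderer/peer entries
```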
> Update the IP settings for kafka, zookeeper, orderer and peer,
regenerate the certificate and block configuration artifacts (channel-artifacts, crypto-config),
and then copy them to the other servers:
`scp -r channel-artifacts crypto-config root@47.111.0.157:/usr/local/fabric`
> (**Important**) Configure the kafka, zookeeper and orderer files carefully; make sure every line is consistent with its context. If that feels tedious, copy ready-made files and change only the IPs.
Start the nodes in order: zookeeper0-2 first, then kafka0-3, then orderer0-2, and finally the peer nodes (see the sketch below).
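> A minimal sketch of that start-up sequence, assuming compose files named after the docker-compose-XXXXXXX.yaml convention used earlier (the exact file names are an assumption):
```
# on each zookeeper host (zookeeper0-2)
docker-compose -f docker-compose-zookeeper.yaml up -d
# on each kafka host (kafka0-3), once all zookeepers are up
docker-compose -f docker-compose-kafka.yaml up -d
# on each orderer host (orderer0-2), once all kafka brokers are up
docker-compose -f docker-compose-orderer.yaml up -d
# finally, on each peer host
docker-compose -f docker-compose-peer.yaml up -d
```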
> Errors at this stage are almost always connectivity problems: check whether the ports are open, whether anything is listening on them once opened, and whether the Alibaba Cloud security group rules are configured correctly.
After starting the peers, entering the container and creating the channel produced the error
Will not attempt to authenticate using SASL
The cause was that the Linux ports were not being listened on, so they had to be opened by hand.
Commands to open the ports
```
nc -lp 7050 &
nc -lp 7051 &
nc -lp 7052 &
nc -lp 2888 &
nc -lp 3888 &
nc -lp 2181 &
nc -lp 9091 &
nc -lp 9092 &
```
> Check whether the port is open
`netstat -an | grep 7050`
> Remove the containers and images again and repeat the process.
`cd /usr/local/fabric/kafkapeer/`
##Install Python 3
> Ubuntu 16.04 ships with Python 2.7.12 and 3.5.2; the default python3 is fine here, just replace python with python3 in every command.
Install pip3 and upgrade it to the latest release (pip 19.0.3)
`apt install python3-pip`
`pip3 install --upgrade pip`
Edit the pip3 launcher at /usr/bin/pip3,
otherwise it fails with `cannot import name 'main'`
Original:
```
#!/usr/bin/python3
# GENERATED BY DEBIAN
import sys
# Run the main entry point, similarly to how setuptools does it, but because
# we didn't install the actual entry point from setup.py, don't use the
# pkg_resources API.
from pip import main
if __name__ == '__main__':
    sys.exit(main())
```
Modified file:
```
#!/usr/bin/python3
# GENERATED BY DEBIAN
import sys
# Run the main entry point, similarly to how setuptools does it, but because
# we didn't install the actual entry point from setup.py, don't use the
# pkg_resources API.
from pip import __main__
if __name__ == '__main__':
    sys.exit(__main__._main())
```
> Install the Python packages paramiko and flask
`pip3 install paramiko`
`pip3 install flask`
Run the program
> Linux SDK server issues
After installing Python 3 and the packages, running the code raised the error
`Python OSError: Cannot assign requested address`
> The address was the problem; checking the port showed nothing was listening on it
`netstat -anp |grep 7007`
>
`firewall-cmd --zone=public --add-port=7007/tcp --permanent`
> Opening the port reported an error:
`Failed to start firewalld - dynamic firewall daemon`
The port could not be opened;
running systemctl start firewalld failed.
The documentation traces this to the default Python version, so change it.
`vi /usr/sbin/firewalld`
The shebang at the top is by default
`#!/usr/bin/python`
and needs to be changed to `#!/usr/bin/python2.7`
Checking with systemctl status firewalld showed the service was dead, i.e. the firewall was not running.
Restart the firewall:
start it with `systemctl start firewalld`; no output means it started successfully.
Check again with `systemctl status firewalld`; a running state means it is now active.
Run `firewall-cmd --permanent --zone=public --add-port=8888/tcp`; a success reply means the rule was added and you can continue with the remaining setup.
> How to kill the process occupying a port
Find the port
`netstat -tlnp|grep 5000`
`tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      2345/python`
> Kill the process
`kill -9 2345`
>
`netstat -tlnp|grep 5000`
> Start the Python program in the background
`nohup python xxxxx.py &`
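> With nohup the program's output goes to nohup.out by default, so the log can be followed with:
```
tail -f nohup.out
```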
Link: https://pan.baidu.com/s/1MLOqRKooXMlOPWvJ31Zd7g  Extraction code: copy