I. Preface
1. The configuration files below matter a great deal; get one of them wrong and things break. The configuration files this article relies on are attached here for download: https://download.csdn.net/download/u012561176/15041906. Feel free to use them as a reference for the configuration below.
2. I ran into many problems and errors while deploying, and almost all of them were caused by the configuration files. If you hit problems, see this article: https://blog.csdn.net/u012561176/article/details/113244536
II. Generating the Fabric organizations and identity certificates
1. Create a Fabric configuration directory:
sudo mkdir -p /opt/hyperledger/fabricconfig
2. Enter the fabricconfig directory created above and create the configuration file crypto-config.yaml, running the following commands in turn:
cd /opt/hyperledger/fabricconfig
sudo vim crypto-config.yaml
Enter the following content, which configures one ordering-service (orderer) organization and two peer organizations (Org1 and Org2):
OrdererOrgs:
  - Name: Orderer
    Domain: qkltest.com
    EnableNodeOUs: true
    Specs:
      - Hostname: orderer

PeerOrgs:
  - Name: Org1
    Domain: org1.qkltest.com
    EnableNodeOUs: true
    Template:
      Count: 3
    Users:
      Count: 3

  - Name: Org2
    Domain: org2.qkltest.com
    EnableNodeOUs: true
    Template:
      Count: 2
    Users:
      Count: 2
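Optionally, if you want to see every field that crypto-config.yaml supports, cryptogen can print its built-in default template for reference:
# Optional: print cryptogen's built-in default crypto-config template
cryptogen showtemplate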
3. Next, generate the certificates from this configuration file:
(1) Run:
sudo cryptogen generate --config=crypto-config.yaml --output ./crypto-config
As shown in the figure below:
(2) A crypto-config directory is now generated under the current directory, as shown below:
(3) Enter the crypto-config directory and see what it contains, running the following commands in turn:
cd crypto-config
ls
As shown below, a configuration directory for the ordering-service node (orderer) and one for the peer nodes have been generated:
(4) Looking inside the orderer configuration directory, you will find the related certificates and key configuration files:
(5) Looking inside the peer configuration directory, two peer-organization directories have been generated, named after the Domain values in crypto-config.yaml:
(6) The contents under each node directory are similar: the related certificates and key configuration files:
(7) Because Template Count was set to 3 for Org1 in crypto-config.yaml, three directories are generated under its peers directory, as shown below:
(8) To sum up, the organization structure and identity certificates have now been generated.
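For reference, the layout cryptogen typically produces looks roughly like the sketch below (abbreviated; the exact contents depend on the configuration above):
crypto-config/
  ordererOrganizations/
    qkltest.com/
      ca/  msp/  tlsca/  users/
      orderers/
        orderer.qkltest.com/
          msp/   (cacerts, keystore, signcerts, tlscacerts, ...)
          tls/   (ca.crt, server.crt, server.key)
  peerOrganizations/
    org1.qkltest.com/
      ca/  msp/  tlsca/  users/
      peers/
        peer0.org1.qkltest.com/  peer1.org1.qkltest.com/  peer2.org1.qkltest.com/
    org2.qkltest.com/
      ca/  msp/  tlsca/  users/
      peers/
        peer0.org2.qkltest.com/  peer1.org2.qkltest.com/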
III. Creating the genesis block used to start the orderer service, and generating the application channel configuration file and the anchor peer update files
1. Creating the genesis block used to start the orderer service
(1) First, create a directory to hold the genesis block and the channel configuration file configtx.yaml:
sudo mkdir -p /opt/hyperledger/order
(2) This configuration file already exists in the source code downloaded earlier, under ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/test-network/configtx. Copy it over and modify it, running the following commands in turn:
cd /opt/hyperledger/order
sudo cp ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/test-network/configtx/configtx.yaml configtx.yaml
sudo vim configtx.yaml
The modified file is as follows:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
# Section: Organizations
#
# - This section defines the different organizational identities which will
# be referenced later in the configuration.
#
################################################################################
Organizations:
# SampleOrg defines an MSP using the sampleconfig. It should never be used
# in production but may be used as a template for other definitions
- &OrdererOrg
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: OrdererOrg
# ID to load the MSP definition as
ID: OrdererMSP
# MSPDir is the filesystem path which contains the MSP configuration
MSPDir: /opt/hyperledger/fabricconfig/crypto-config/ordererOrganizations/qkltest.com/msp
# Policies defines the set of policies at this level of the config tree
# For organization policies, their canonical path is usually
# /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
Endorsement:
Type: Signature
Rule: "OR('OrdererMSP.member')"
OrdererEndpoints:
- orderer.qkltest.com:7050
- &Org1
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: Org1MSP
# ID to load the MSP definition as
ID: Org1MSP
MSPDir: /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/msp
# Policies defines the set of policies at this level of the config tree
# For organization policies, their canonical path is usually
# /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
Policies:
Readers:
Type: Signature
Rule: "OR('Org1MSP.member')"
Writers:
Type: Signature
Rule: "OR('Org1MSP.member')"
Admins:
Type: Signature
Rule: "OR('Org1MSP.admin')"
Endorsement:
Type: Signature
Rule: "OR('Org1MSP.member')"
OrdererEndpoints:
- orderer.qkltest.com:7050
AnchorPeers:
- Host: peer0.org1.qkltest.com
Port: 7051
- &Org2
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: Org2MSP
# ID to load the MSP definition as
ID: Org2MSP
MSPDir: /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org2.qkltest.com/msp
# Policies defines the set of policies at this level of the config tree
# For organization policies, their canonical path is usually
# /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
Policies:
Readers:
Type: Signature
Rule: "OR('Org2MSP.member')"
Writers:
Type: Signature
Rule: "OR('Org2MSP.member')"
Admins:
Type: Signature
Rule: "OR('Org2MSP.admin')"
Endorsement:
Type: Signature
Rule: "OR('Org2MSP.member')"
OrdererEndpoints:
- orderer.qkltest.com:7050
AnchorPeers:
- Host: peer0.org2.qkltest.com
Port: 7051
################################################################################
#
# SECTION: Capabilities
#
# - This section defines the capabilities of fabric network. This is a new
# concept as of v1.1.0 and should not be utilized in mixed networks with
# v1.0.x peers and orderers. Capabilities define features which must be
# present in a fabric binary for that binary to safely participate in the
# fabric network. For instance, if a new MSP type is added, newer binaries
# might recognize and validate the signatures from this type, while older
# binaries without this support would be unable to validate those
# transactions. This could lead to different versions of the fabric binaries
# having different world states. Instead, defining a capability for a channel
# informs those binaries without this capability that they must cease
# processing transactions until they have been upgraded. For v1.0.x if any
# capabilities are defined (including a map with all capabilities turned off)
# then the v1.0.x peer will deliberately crash.
#
################################################################################
Capabilities:
# Channel capabilities apply to both the orderers and the peers and must be
# supported by both.
# Set the value of the capability to true to require it.
Channel: &ChannelCapabilities
# V2_0 capability ensures that orderers and peers behave according
# to v2.0 channel capabilities. Orderers and peers from
# prior releases would behave in an incompatible way, and are therefore
# not able to participate in channels at v2.0 capability.
# Prior to enabling V2.0 channel capabilities, ensure that all
# orderers and peers on a channel are at v2.0.0 or later.
V2_0: true
# Orderer capabilities apply only to the orderers, and may be safely
# used with prior release peers.
# Set the value of the capability to true to require it.
Orderer: &OrdererCapabilities
# V2_0 orderer capability ensures that orderers behave according
# to v2.0 orderer capabilities. Orderers from
# prior releases would behave in an incompatible way, and are therefore
# not able to participate in channels at v2.0 orderer capability.
# Prior to enabling V2.0 orderer capabilities, ensure that all
# orderers on channel are at v2.0.0 or later.
V2_0: true
# Application capabilities apply only to the peer network, and may be safely
# used with prior release orderers.
# Set the value of the capability to true to require it.
Application: &ApplicationCapabilities
# V2_0 application capability ensures that peers behave according
# to v2.0 application capabilities. Peers from
# prior releases would behave in an incompatible way, and are therefore
# not able to participate in channels at v2.0 application capability.
# Prior to enabling V2.0 application capabilities, ensure that all
# peers on channel are at v2.0.0 or later.
V2_0: true
################################################################################
#
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
ACLs: &ACLsDefault
# This section provides defaults for policies for various resources
# in the system. These "resources" could be functions on system chaincodes
# (e.g., "GetBlockByNumber" on the "qscc" system chaincode) or other resources
# (e.g.,who can receive Block events). This section does NOT specify the resource's
# definition or API, but just the ACL policy for it.
#
# Users can override these defaults with their own policy mapping by defining the
# mapping under ACLs in their channel definition
#---New Lifecycle System Chaincode (_lifecycle) function to policy mapping for access control--#
# ACL policy for _lifecycle's "CheckCommitReadiness" function
_lifecycle/CheckCommitReadiness: /Channel/Application/Writers
# ACL policy for _lifecycle's "CommitChaincodeDefinition" function
_lifecycle/CommitChaincodeDefinition: /Channel/Application/Writers
# ACL policy for _lifecycle's "QueryChaincodeDefinition" function
_lifecycle/QueryChaincodeDefinition: /Channel/Application/Readers
# ACL policy for _lifecycle's "QueryChaincodeDefinitions" function
_lifecycle/QueryChaincodeDefinitions: /Channel/Application/Readers
#---Lifecycle System Chaincode (lscc) function to policy mapping for access control---#
# ACL policy for lscc's "getid" function
lscc/ChaincodeExists: /Channel/Application/Readers
# ACL policy for lscc's "getdepspec" function
lscc/GetDeploymentSpec: /Channel/Application/Readers
# ACL policy for lscc's "getccdata" function
lscc/GetChaincodeData: /Channel/Application/Readers
# ACL Policy for lscc's "getchaincodes" function
lscc/GetInstantiatedChaincodes: /Channel/Application/Readers
#---Query System Chaincode (qscc) function to policy mapping for access control---#
# ACL policy for qscc's "GetChainInfo" function
qscc/GetChainInfo: /Channel/Application/Readers
# ACL policy for qscc's "GetBlockByNumber" function
qscc/GetBlockByNumber: /Channel/Application/Readers
# ACL policy for qscc's "GetBlockByHash" function
qscc/GetBlockByHash: /Channel/Application/Readers
# ACL policy for qscc's "GetTransactionByID" function
qscc/GetTransactionByID: /Channel/Application/Readers
# ACL policy for qscc's "GetBlockByTxID" function
qscc/GetBlockByTxID: /Channel/Application/Readers
#---Configuration System Chaincode (cscc) function to policy mapping for access control---#
# ACL policy for cscc's "GetConfigBlock" function
cscc/GetConfigBlock: /Channel/Application/Readers
# ACL policy for cscc's "GetConfigTree" function
cscc/GetConfigTree: /Channel/Application/Readers
# ACL policy for cscc's "SimulateConfigTreeUpdate" function
cscc/SimulateConfigTreeUpdate: /Channel/Application/Readers
#---Miscellaneous peer function to policy mapping for access control---#
# ACL policy for invoking chaincodes on peer
peer/Propose: /Channel/Application/Writers
# ACL policy for chaincode to chaincode invocation
peer/ChaincodeToChaincode: /Channel/Application/Readers
#---Events resource to policy mapping for access control###---#
# ACL policy for sending block events
event/Block: /Channel/Application/Readers
# ACL policy for sending filtered block events
event/FilteredBlock: /Channel/Application/Readers
# Organizations is the list of orgs which are defined as participants on
# the application side of the network
Organizations:
# Policies defines the set of policies at this level of the config tree
# For Application policies, their canonical path is
# /Channel/Application/<PolicyName>
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
LifecycleEndorsement:
Type: ImplicitMeta
Rule: "MAJORITY Endorsement"
Endorsement:
Type: ImplicitMeta
Rule: "MAJORITY Endorsement"
Capabilities:
<<: *ApplicationCapabilities
################################################################################
#
# SECTION: Orderer
#
# - This section defines the values to encode into a config transaction or
# genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults
# Orderer Type: The orderer implementation to start
OrdererType: solo
# Addresses used to be the list of orderer addresses that clients and peers
# could connect to. However, this does not allow clients to associate orderer
# addresses and orderer organizations which can be useful for things such
# as TLS validation. The preferred way to specify orderer addresses is now
# to include the OrdererEndpoints item in your org definition
Addresses:
- orderer.qkltest.com:7050
#EtcdRaft:
# Consenters:
# - Host: orderer.example.com
# Port: 7050
# ClientTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
# ServerTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 10
# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 99 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- 127.0.0.1:9092
# Organizations is the list of orgs which are defined as participants on
# the orderer side of the network
Organizations:
# Policies defines the set of policies at this level of the config tree
# For Orderer policies, their canonical path is
# /Channel/Orderer/<PolicyName>
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
# BlockValidation specifies what signatures must be included in the block
# from the orderer for the peer to validate it.
BlockValidation:
Type: ImplicitMeta
Rule: "ANY Writers"
################################################################################
#
# CHANNEL
#
# This section defines the values to encode into a config transaction or
# genesis block for channel related parameters.
#
################################################################################
Channel: &ChannelDefaults
# Policies defines the set of policies at this level of the config tree
# For Channel policies, their canonical path is
# /Channel/<PolicyName>
Policies:
# Who may invoke the 'Deliver' API
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
# Who may invoke the 'Broadcast' API
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
# By default, who may modify elements at this config level
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
# Capabilities describes the channel level capabilities, see the
# dedicated Capabilities section elsewhere in this file for a full
# description
Capabilities:
<<: *ChannelCapabilities
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:
TestTwoOrgsOrdererGenesis:
<<: *ChannelDefaults
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
TestTwoOrgsChannel:
Consortium: SampleConsortium
<<: *ChannelDefaults
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
Capabilities:
<<: *ApplicationCapabilities
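A note before running configtxgen: it needs to find the configtx.yaml above, so the commands below are run from /opt/hyperledger/order. If configtxgen complains that it cannot load the configuration, explicitly pointing FABRIC_CFG_PATH at this directory usually helps (optional; FABRIC_CFG_PATH is the standard variable the Fabric tools use to locate their configuration):
# Optional: tell configtxgen where configtx.yaml lives
export FABRIC_CFG_PATH=/opt/hyperledger/order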
(3) Next, generate the genesis block file used to start the orderer:
sudo configtxgen -profile TestTwoOrgsOrdererGenesis -channelID csqkltestchannel -outputBlock ./orderer.genesis.block
After running it, the figure below shows it was generated successfully:
(4) Then run the following command to check that the generated file is well-formed; if it prints a JSON string, everything is fine:
sudo configtxgen -inspectBlock orderer.genesis.block
2. Generating the application channel configuration file
(1) Create it directly with:
sudo configtxgen -profile TestTwoOrgsChannel -channelID qkltestchannel -outputCreateChannelTx ./qkltestchannel.tx
After running it, the figure below shows it was generated successfully:
(2) Check the channel-creation transaction with the following command; if no error is reported and a JSON string is printed, it is fine:
sudo configtxgen -inspectChannelCreateTx qkltestchannel.tx
3. Generating the anchor peer update files
(1) Run the following commands in turn:
sudo configtxgen -profile TestTwoOrgsChannel -channelID qkltestchannel -outputAnchorPeersUpdate ./Org1MSPanchors.tx -asOrg Org1MSP
sudo configtxgen -profile TestTwoOrgsChannel -channelID qkltestchannel -outputAnchorPeersUpdate ./Org2MSPanchors.tx -asOrg Org2MSP
(2) After running them, as shown below, the anchor peer update files are generated successfully:
IV. Starting the distributed network
1. Configuring the network services:
(1) First, copy docker-compose-test-net.yaml from ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/test-network/docker into the current directory under the name docker-compose-test-qkl.yaml:
sudo cp ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/test-network/docker/docker-compose-test-net.yaml docker-compose-test-qkl.yaml
(2) Then edit the docker-compose-test-qkl.yaml file:
sudo vim docker-compose-test-qkl.yaml
The changes are as follows:
- Replace $IMAGE_TAG with the version currently in use, 2.2.1
- Change the MSP IDs to match the definitions in configtx.yaml
- The mounted volume paths must match the paths where the certificates were generated earlier
- Modify and add the corresponding domain names
- Modify the cli container's mounted directories
- Make sure every mounted directory path is correct
- Make sure the port numbers are configured correctly
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
volumes:
orderer.qkltest.com:
peer0.org1.qkltest.com:
peer1.org1.qkltest.com:
peer2.org1.qkltest.com:
peer0.org2.qkltest.com:
peer1.org2.qkltest.com:
networks:
qkltest:
services:
orderer.qkltest.com:
container_name: orderer.qkltest.com
image: hyperledger/fabric-orderer:2.2.1
environment:
- FABRIC_LOGGING_SPEC=INFO
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ../order/orderer.genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ../fabricconfig/crypto-config/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp:/var/hyperledger/orderer/msp
- ../fabricconfig/crypto-config/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/tls:/var/hyperledger/orderer/tls
- orderer.qkltest.com:/var/hyperledger/production/orderer
ports:
- 7050:7050
networks:
- qkltest
peer0.org1.qkltest.com:
container_name: peer0.org1.qkltest.com
image: hyperledger/fabric-peer:2.2.1
environment:
#Generic peer variables
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=order_qkltest
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
# Peer specific variables
- CORE_PEER_ID=peer0.org1.qkltest.com
- CORE_PEER_ADDRESS=peer0.org1.qkltest.com:7051
- CORE_PEER_LISTENADDRESS=0.0.0.0:7051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.qkltest.com:7052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.qkltest.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.qkltest.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- ../fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/msp:/etc/hyperledger/fabric/msp
- ../fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls:/etc/hyperledger/fabric/tls
- peer0.org1.qkltest.com:/var/hyperledger/production
working_dir: /opt/hyperledger/peer
command: peer node start
ports:
- 7051:7051
networks:
- qkltest
peer1.org1.qkltest.com:
container_name: peer1.org1.qkltest.com
image: hyperledger/fabric-peer:2.2.1
environment:
#Generic peer variables
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=order_qkltest
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
# Peer specific variables
- CORE_PEER_ID=peer1.org1.qkltest.com
- CORE_PEER_ADDRESS=peer1.org1.qkltest.com:8051
- CORE_PEER_LISTENADDRESS=0.0.0.0:8051
- CORE_PEER_CHAINCODEADDRESS=peer1.org1.qkltest.com:8052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.qkltest.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.qkltest.com:8051
- CORE_PEER_LOCALMSPID=Org1MSP
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- ../fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer1.org1.qkltest.com/msp:/etc/hyperledger/fabric/msp
- ../fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer1.org1.qkltest.com/tls:/etc/hyperledger/fabric/tls
- peer1.org1.qkltest.com:/var/hyperledger/production
working_dir: /opt/hyperledger/peer
command: peer node start
ports:
- 8051:8051
networks:
- qkltest
peer2.org1.qkltest.com:
container_name: peer2.org1.qkltest.com
image: hyperledger/fabric-peer:2.2.1
environment:
#Generic peer variables
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=order_qkltest
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
# Peer specific variables
- CORE_PEER_ID=peer2.org1.qkltest.com
- CORE_PEER_ADDRESS=peer2.org1.qkltest.com:9051
- CORE_PEER_LISTENADDRESS=0.0.0.0:9051
- CORE_PEER_CHAINCODEADDRESS=peer2.org1.qkltest.com:9052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:9052
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.qkltest.com:7051
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer2.org1.qkltest.com:9051
- CORE_PEER_LOCALMSPID=Org1MSP
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- ../fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer2.org1.qkltest.com/msp:/etc/hyperledger/fabric/msp
- ../fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer2.org1.qkltest.com/tls:/etc/hyperledger/fabric/tls
- peer2.org1.qkltest.com:/var/hyperledger/production
working_dir: /opt/hyperledger/peer
command: peer node start
ports:
- 9051:9051
networks:
- qkltest
peer0.org2.qkltest.com:
container_name: peer0.org2.qkltest.com
image: hyperledger/fabric-peer:2.2.1
environment:
#Generic peer variables
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=order_qkltest
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
# Peer specific variables
- CORE_PEER_ID=peer0.org2.qkltest.com
- CORE_PEER_ADDRESS=peer0.org2.qkltest.com:10051
- CORE_PEER_LISTENADDRESS=0.0.0.0:10051
- CORE_PEER_CHAINCODEADDRESS=peer0.org2.qkltest.com:10052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:10052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.qkltest.com:10051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.qkltest.com:10051
- CORE_PEER_LOCALMSPID=Org2MSP
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- ../fabricconfig/crypto-config/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/msp:/etc/hyperledger/fabric/msp
- ../fabricconfig/crypto-config/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls:/etc/hyperledger/fabric/tls
- peer0.org2.qkltest.com:/var/hyperledger/production
working_dir: /opt/hyperledger/peer
command: peer node start
ports:
- 10051:10051
networks:
- qkltest
peer1.org2.qkltest.com:
container_name: peer1.org2.qkltest.com
image: hyperledger/fabric-peer:2.2.1
environment:
#Generic peer variables
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=order_qkltest
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
# Peer specific variables
- CORE_PEER_ID=peer1.org2.qkltest.com
- CORE_PEER_ADDRESS=peer1.org2.qkltest.com:11051
- CORE_PEER_LISTENADDRESS=0.0.0.0:11051
- CORE_PEER_CHAINCODEADDRESS=peer1.org2.qkltest.com:11052
- CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.qkltest.com:11051
- CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.qkltest.com:10051
- CORE_PEER_LOCALMSPID=Org2MSP
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- ../fabricconfig/crypto-config/peerOrganizations/org2.qkltest.com/peers/peer1.org2.qkltest.com/msp:/etc/hyperledger/fabric/msp
- ../fabricconfig/crypto-config/peerOrganizations/org2.qkltest.com/peers/peer1.org2.qkltest.com/tls:/etc/hyperledger/fabric/tls
- peer1.org2.qkltest.com:/var/hyperledger/production
working_dir: /opt/hyperledger/peer
command: peer node start
ports:
- 11051:11051
networks:
- qkltest
cli:
container_name: cli
image: hyperledger/fabric-tools:2.2.1
tty: true
stdin_open: true
environment:
- GOPATH=/home/linyexiong/go
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- FABRIC_LOGGING_SPEC=INFO
#- FABRIC_LOGGING_SPEC=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org1.qkltest.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/users/Admin@org1.qkltest.com/msp
working_dir: /opt/hyperledger/peer
command: /bin/bash
volumes:
- /var/run/:/host/var/run/
- ../fabricconfig/crypto-config:/opt/hyperledger/peer
- ../order:/opt/hyperledger/peer/channel-artifacts
- ./chaincode:/opt/hyperledger/peer/chaincode
depends_on:
- peer0.org1.qkltest.com
- peer1.org1.qkltest.com
- peer2.org1.qkltest.com
- peer0.org2.qkltest.com
- peer1.org2.qkltest.com
networks:
- qkltest
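As an optional alternative to hard-coding 2.2.1 in every image: line, you could keep the original ${IMAGE_TAG} placeholders from the upstream compose file and let docker-compose substitute them from a .env file placed next to docker-compose-test-qkl.yaml; a sketch (IMAGE_TAG is the placeholder name used upstream, and COMPOSE_PROJECT_NAME optionally pins the project prefix that later forms the network name order_qkltest):
# /opt/hyperledger/order/.env -- docker-compose reads this file automatically
IMAGE_TAG=2.2.1
# optional: pin the compose project name (the prefix of the order_qkltest network)
COMPOSE_PROJECT_NAME=order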
(3) Start the network with the following commands:
# Note on the container network name used at startup: if CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE is not set correctly,
# the chaincode containers may later fail to start, reporting that the network order_qkltest cannot be found,
# because the first half of the name was not configured.
# Note: the value of CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE is composed of "order" (the name of the current directory
# /opt/hyperledger/order, which docker-compose uses as the project name) and the networks value qkltest configured
# in docker-compose-test-qkl.yaml.
# start the network
sudo docker-compose -f docker-compose-test-qkl.yaml up -d
# check the startup status: Up means a container started successfully, Exit means it failed to start
docker-compose -f docker-compose-test-qkl.yaml ps
After running the commands, as shown below:
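Before going further, you can also confirm that the Docker network name really matches the CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE value (order_qkltest) discussed above; a quick optional check:
# one network named order_qkltest should be listed
docker network ls | grep qkltest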
(4) Note: this step is only needed if a container failed to start (its state is Exit); if every container started successfully you can skip it. When a container fails to start, here is how to look at the error and track down the problem:
If the orderer container failed to start, look at the logs of its Docker container. First run:
docker ps -a
As shown below, find that container's ID:
Then look at the container's logs; the command below shows the last 30 minutes:
docker logs --since 30m 65c7a092df48
The error message is as follows:
An error like that usually means a certificate path is misconfigured. I checked the paths and they seemed fine; in the end I found that the msp directory mounted for the orderer in docker-compose-test-qkl.yaml contained no signcerts directory, as shown below:
Cause: the very first Fabric step was misconfigured. I had written the orderer's Specs property in lower case (specs), so some directories and certificates were never generated. Fix crypto-config.yaml and regenerate with the following commands:
cd /opt/hyperledger/fabricconfig
sudo cryptogen generate --config=crypto-config.yaml
For the next two steps: if any container's state is Exit, it failed to start; find and fix the problem. Also, whenever you modify docker-compose-test-qkl.yaml, it is best to run the following steps so the containers start cleanly and the volumes are mounted correctly:
Stop and remove the containers, and delete the container volumes, with the following two commands:
# stop and remove the containers
sudo docker-compose -f docker-compose-test-qkl.yaml down
# delete the container volumes
docker volume prune
Then run the commands again and check whether everything starts successfully:
sudo docker-compose -f docker-compose-test-qkl.yaml up -d
docker-compose -f docker-compose-test-qkl.yaml ps
V. Deploying the network: creating the application channel, joining the nodes to it, and updating the anchor peers
1. Entering the fabric-tools container (cli) specified in docker-compose-test-qkl.yaml and creating the channel
(1) Enter the cli container with docker (all following operations are executed inside the cli container):
sudo docker exec -it cli /bin/bash
As shown below, it lands directly in the /opt/hyperledger/peer directory:
(2) Create the application channel with the following commands:
export ORDERER_CA=/opt/hyperledger/peer/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp/tlscacerts/tlsca.qkltest.com-cert.pem
peer channel create -t 50s -o orderer.qkltest.com:7050 -c qkltestchannel -f /opt/hyperledger/peer/channel-artifacts/qkltestchannel.tx --tls true --cafile $ORDERER_CA
Note: the cli container's working directory is configured in docker-compose-test-qkl.yaml and its host directories are mounted into the container. These working directories and paths must be configured correctly, otherwise the commands will not find the corresponding directories and files; if you get a file-or-directory-not-found error, check the paths and the configuration, then restart the network.
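A quick, optional sanity check of the mounts from inside the cli container before creating the channel (the paths below come from the volumes section of docker-compose-test-qkl.yaml and the ORDERER_CA path above):
# the channel artifacts generated earlier should be visible here
ls /opt/hyperledger/peer/channel-artifacts
# the orderer TLS CA certificate referenced by $ORDERER_CA should exist here
ls /opt/hyperledger/peer/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp/tlscacerts/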
(3) After running it, as shown below:
You can see that a qkltestchannel.block block file has been generated in the current directory.
2. Joining the current node to the application channel
(1) Simply run:
peer channel join -b qkltestchannel.block
As shown below, the node has joined the channel successfully:
3. Joining all the remaining nodes to the channel
In the cli container, change the environment variables and join the channel; make sure the node name, the MSP ID and the port number match the containers that were started.
(1) Join peer1.org1 to the channel with the following commands:
# environment variables for the peer1.org1 node
export CORE_PEER_ADDRESS=peer1.org1.qkltest.com:8051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer1.org1.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer1.org1.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer1.org1.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/users/Admin@org1.qkltest.com/msp
# run the join-channel command
peer channel join -b qkltestchannel.block
(2) Join peer2.org1 to the channel with the following commands:
# environment variables for the peer2.org1 node
export CORE_PEER_ADDRESS=peer2.org1.qkltest.com:9051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer2.org1.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer2.org1.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer2.org1.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/users/Admin@org1.qkltest.com/msp
# run the join-channel command
peer channel join -b qkltestchannel.block
(3) Join peer0.org2 to the channel with the following commands:
# environment variables for the peer0.org2 node
export CORE_PEER_ADDRESS=peer0.org2.qkltest.com:10051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/users/Admin@org2.qkltest.com/msp
# run the join-channel command
peer channel join -b qkltestchannel.block
(4) Join peer1.org2 to the channel with the following commands:
# environment variables for the peer1.org2 node
export CORE_PEER_ADDRESS=peer1.org2.qkltest.com:11051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer1.org2.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer1.org2.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer1.org2.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/users/Admin@org2.qkltest.com/msp
# run the join-channel command
peer channel join -b qkltestchannel.block
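Optionally, after each join (or after all of them) you can confirm from the currently selected peer that it really is a member of the channel and check its ledger height:
# channels this peer has joined; qkltestchannel should be listed
peer channel list
# basic ledger information (height, block hashes) for the channel
peer channel getinfo -c qkltestchannel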
4. Updating the anchor peers (this step can be skipped for now, but running it does no harm either):
# update org1's anchor peer
export CORE_PEER_ADDRESS=peer0.org1.qkltest.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/users/Admin@org1.qkltest.com/msp
export ORDERER_CA=/opt/hyperledger/peer/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp/tlscacerts/tlsca.qkltest.com-cert.pem
peer channel update -o orderer.qkltest.com:7050 -c qkltestchannel -f /opt/hyperledger/peer/channel-artifacts/Org1MSPanchors.tx --tls --cafile $ORDERER_CA
# update org2's anchor peer
export CORE_PEER_ADDRESS=peer0.org2.qkltest.com:10051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/users/Admin@org2.qkltest.com/msp
export ORDERER_CA=/opt/hyperledger/peer/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp/tlscacerts/tlsca.qkltest.com-cert.pem
peer channel update -o orderer.qkltest.com:7050 -c qkltestchannel -f /opt/hyperledger/peer/channel-artifacts/Org2MSPanchors.tx --tls --cafile $ORDERER_CA
After both updates, you should see log output like the following.
VI. Installing the chaincode
1. Packaging the chaincode
(1) First, packaging the chaincode depends on the core.yaml configuration file. Find core.yaml under ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/config and copy it into the current directory /opt/hyperledger/order:
sudo cp ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/config/core.yaml core.yaml
(2) Modify core.yaml; the modified parts are as follows:
peer:
  id: jdoe_test
  networkId: dev_test
  gossip:
    useLeaderElection: true
    orgLeader: false
  tls:
    cert:
      file: /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.crt
    key:
      file: /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.key
    rootcert:
      file: /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
    clientRootCAs:
      files:
        - /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
  mspConfigPath: /opt/hyperledger/fabricconfig/crypto-config/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/msp/
  localMspId: Org1MSP
(3) Next, package the chaincode. First copy the source code from ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/chaincode/fabcar/go/ into the chaincode directory, then set the Go proxy, and run the following commands in turn:
sudo cp -r ~/go/src/github.com/hyperledger/fabric/scripts/fabric-samples/chaincode/fabcar/go/ chaincode/
sudo go env -w GO111MODULE=on
sudo go env -w GOPROXY=https://goproxy.cn,direct
sudo peer lifecycle chaincode package testcc.tar.gz --path ./chaincode/go/ --lang golang --label testcc_1
As shown below, a package archive is produced:
Note: if it fails with the error shown below:
Cause: sudo resets the environment variables, so the go executable cannot be found.
Fix: run the following commands:
sudo vim ~/.bashrc
sudo vim ~/.profile
and add the following line to each of the two files:
alias sudo='sudo env PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH'
Then reload the environment so the change takes effect:
source ~/.bashrc
source ~/.profile
(4) Next, move the package into the chaincode directory:
# move the chaincode package into the chaincode directory
sudo mv testcc.tar.gz chaincode/
2. Installing the chaincode
(1) Each organization only needs to install the chaincode on one node; here it is installed on peer0.org1.qkltest.com and peer0.org2.qkltest.com. Enter the cli container:
sudo docker exec -it cli /bin/bash
(2) First set the environment variables for peer0.org1.qkltest.com, then install the chaincode:
export CORE_PEER_ADDRESS=peer0.org1.qkltest.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/users/Admin@org1.qkltest.com/msp
# install the chaincode
peer lifecycle chaincode install chaincode/testcc.tar.gz
# query the chaincode installation status
peer lifecycle chaincode queryinstalled
As shown below:
(3) Then set the environment variables for peer0.org2.qkltest.com and install the chaincode:
export CORE_PEER_ADDRESS=peer0.org2.qkltest.com:10051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/users/Admin@org2.qkltest.com/msp
# install the chaincode
peer lifecycle chaincode install chaincode/testcc.tar.gz
# query the chaincode installation status
peer lifecycle chaincode queryinstalled
As shown below:
3. Approving the chaincode
(1) Switch to the peer0.org1.qkltest.com environment variables and query the chaincode Package ID:
export CORE_PEER_ADDRESS=peer0.org1.qkltest.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/users/Admin@org1.qkltest.com/msp
peer lifecycle chaincode queryinstalled
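The approval step below needs the Package ID printed by queryinstalled. You can copy it by hand as in the next command, or pull it into the variable automatically; a small sketch, assuming the usual "Package ID: <id>, Label: <label>" output line:
# extract the package ID for the testcc_1 label instead of copying it manually
export PACKAGE_ID=$(peer lifecycle chaincode queryinstalled | grep testcc_1 | sed -n 's/^Package ID: \(.*\), Label:.*$/\1/p')
echo $PACKAGE_ID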
(2) Set the environment variables and approve the chaincode:
export PACKAGE_ID=testcc_1:38f2d296b578bab89b0a37f6c7cf91e6ad3315d81754102297e6eaf187712871
export ORDERER_CA=/opt/hyperledger/peer/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp/tlscacerts/tlsca.qkltest.com-cert.pem
peer lifecycle chaincode approveformyorg --orderer orderer.qkltest.com:7050 --ordererTLSHostnameOverride orderer.qkltest.com --channelID qkltestchannel --name testcc -v 1 --package-id $PACKAGE_ID --sequence 1 --tls --cafile $ORDERER_CA
After it succeeds, as shown below:
(3) Query the chaincode approval status:
peer lifecycle chaincode checkcommitreadiness --channelID qkltestchannel --name testcc --version 1 --sequence 1 --tls --cafile $ORDERER_CA --output json
As shown below:
(4) Switch to the peer0.org2.qkltest.com environment variables and approve the chaincode with the following commands:
export CORE_PEER_ADDRESS=peer0.org2.qkltest.com:10051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_CERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.crt
export CORE_PEER_TLS_KEY_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/server.key
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/users/Admin@org2.qkltest.com/msp
peer lifecycle chaincode approveformyorg --orderer orderer.qkltest.com:7050 --ordererTLSHostnameOverride orderer.qkltest.com --channelID qkltestchannel --name testcc -v 1 --package-id $PACKAGE_ID --sequence 1 --tls --cafile $ORDERER_CA
After it succeeds, as shown below:
(5) Query the chaincode approval status:
peer lifecycle chaincode checkcommitreadiness --channelID qkltestchannel --name testcc --version 1 --sequence 1 --tls --cafile $ORDERER_CA --output json
As shown below:
Note: if it fails with Error: timed out waiting for txid on all peers, as shown below:
Cause: the peer failed to communicate with the orderer. It may be that OrdererEndpoints in configtx.yaml was mistakenly written as 127.0.0.1, or that the policies in configtx.yaml are misconfigured. You can check the logs of each node with the following commands to track it down:
docker logs peer0.org1.qkltest.com
docker logs peer0.org2.qkltest.com
Once the problem has been found and you want to start over, go back to the host machine and run the following commands:
cd /opt/hyperledger/order
# shut down the network
sudo docker-compose -f docker-compose-test-qkl.yaml down
# delete the volumes
sudo docker volume prune
# list all images
# docker images
# delete the generated chaincode images; 65726767gh3 is an IMAGE ID found with the command above. Be careful not to delete the wrong images: only the two chaincode images produced by the installs above need to be removed; check the picture below before doing this
# sudo docker rmi -f 65726767gh3
# sudo docker rmi -f 34567677dg2
# delete the chaincode directory and the generated block, channel and anchor peer files
sudo rm -rf chaincode/
sudo rm -rf orderer.genesis.block
sudo rm -rf qkltestchannel.tx
sudo rm -rf Org1MSPanchors.tx
sudo rm -rf Org2MSPanchors.tx
# finally, regenerate the block, channel and anchor peer files, start the network, and so on, redoing the steps in order
4. Committing the chaincode
(1) Run the following commands:
# set the environment variables
export CHANNEL_NAME=qkltestchannel
export ORDERER_CA=/opt/hyperledger/peer/ordererOrganizations/qkltest.com/orderers/orderer.qkltest.com/msp/tlscacerts/tlsca.qkltest.com-cert.pem
export ORG1_CA=/opt/hyperledger/peer/peerOrganizations/org1.qkltest.com/peers/peer0.org1.qkltest.com/tls/ca.crt
export ORG2_CA=/opt/hyperledger/peer/peerOrganizations/org2.qkltest.com/peers/peer0.org2.qkltest.com/tls/ca.crt
# commit the chaincode
peer lifecycle chaincode commit -o orderer.qkltest.com:7050 --ordererTLSHostnameOverride orderer.qkltest.com --channelID qkltestchannel --name testcc --version 1 --sequence 1 --tls --cafile $ORDERER_CA --peerAddresses peer0.org1.qkltest.com:7051 --tlsRootCertFiles $ORG1_CA --peerAddresses peer0.org2.qkltest.com:10051 --tlsRootCertFiles $ORG2_CA
After execution, as shown below:
(2) Check the chaincode commit status:
peer lifecycle chaincode querycommitted --channelID qkltestchannel --name testcc --cafile $ORDERER_CA --output json
After execution, as shown below:
(3) Check whether a chaincode container has been started:
docker ps
As shown below:
5. Testing the chaincode
(1) Initialize the data:
peer chaincode invoke -o orderer.qkltest.com:7050 --ordererTLSHostnameOverride orderer.qkltest.com --tls --cafile $ORDERER_CA -C qkltestchannel -n testcc --peerAddresses peer0.org1.qkltest.com:7051 --tlsRootCertFiles $ORG1_CA --peerAddresses peer0.org2.qkltest.com:10051 --tlsRootCertFiles $ORG2_CA -c '{"function":"InitLedger","Args":[]}'
On success, it looks like this:
Note: if it fails with start-could not start container: API error (404): network fabric_test not found, as shown below:
Cause: the network specified when starting the peer containers is wrong.
The value of CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE in docker-compose-test-qkl.yaml is incorrect. You can find the correct value with the following command:
docker inspect peer0.org1.qkltest.com
As shown below, change the value of CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE in docker-compose-test-qkl.yaml to order_qkltest, then redo step IV above, starting over from starting the network.
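If you do not want to scroll through the full inspect output, you can ask docker inspect for just the networks the peer container is attached to, using Docker's built-in -f template flag:
# prints a JSON map keyed by the network name(s) the container is attached to
docker inspect -f '{{json .NetworkSettings.Networks}}' peer0.org1.qkltest.com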
(2) Query the data; to query all cars, run:
peer chaincode query -C qkltestchannel -n testcc -c '{"Args":["queryAllCars"]}'
As shown below:
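If you want to exercise the chaincode a little further, the fabcar sample also exposes functions such as queryCar and createCar; the calls below follow the same casing convention as the queryAllCars call above (if your copy of the chaincode only accepts the exported Go names, use QueryCar / CreateCar instead):
# query one of the cars written by InitLedger
peer chaincode query -C qkltestchannel -n testcc -c '{"Args":["queryCar","CAR0"]}'
# write a new car; an invoke needs the orderer and both endorsing peers again
peer chaincode invoke -o orderer.qkltest.com:7050 --ordererTLSHostnameOverride orderer.qkltest.com --tls --cafile $ORDERER_CA -C qkltestchannel -n testcc --peerAddresses peer0.org1.qkltest.com:7051 --tlsRootCertFiles $ORG1_CA --peerAddresses peer0.org2.qkltest.com:10051 --tlsRootCertFiles $ORG2_CA -c '{"function":"createCar","Args":["CAR10","Honda","Accord","black","Tom"]}'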