Abstract: This guide downloads Kubernetes and installs it onto the nodes, then generates the TLS certificates and CAs used by the cluster components. Since version 1.7 the Dashboard no longer ships with full permissions, so a service account must be created and bound to the cluster-admin role, and its token pasted into the Dashboard. Heapster, deployed at the end, is the community-maintained container cluster monitoring and analysis tool.
Recently, some of our users have been feeling adventurous...
They find the simple, easy-to-use, production-ready Kubernetes experience that Rainbond provides not challenging enough...
They want to take on a fully manual Kubernetes deployment...
...and they end up calling our support engineer at one in the morning...
So, in the spirit of not reinventing the wheel, and out of concern for our support engineer's wellbeing, we bring you Kairen's excellent tutorial:
Getting started
Kubernetes officially offers several installation methods (see Picking the right solution). This article deploys Kubernetes v1.8.x fully by hand, as a way to learn and understand how a Kubernetes cluster is put together.
Version details:
Kubernetes v1.8.6
CNI v0.6.0
Etcd v3.2.9
Calico v2.6.2
Docker v17.10.0-ce
Target OS: Ubuntu 16.x or CentOS 7.x
Nodes:
172.16.35.12 / master1 / 1 CPU / 2G
172.16.35.10 / node1 / 1 CPU / 2G
172.16.35.11 / node2 / 1 CPU / 2G
The master serves as the main control and deployment node; the nodes run application workloads.
All operations are performed as root.
Before installing, confirm the following:
All nodes can reach each other over the network, and master1 can SSH into the other nodes passwordlessly.
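If passwordless SSH is not yet set up, one way to arrange it is a minimal sketch like the following (assuming root login is permitted on the nodes):
# Generate a key pair on master1 (no passphrase) and push the public key to each node
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
$ for NODE in node1 node2; do ssh-copy-id root@${NODE}; done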
The firewall and SELinux are disabled. On CentOS, for example:
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
All nodes need /etc/hosts entries that resolve every host:
...
172.16.35.10 node1
172.16.35.11 node2
172.16.35.12 master1
All nodes need Docker installed:
$ curl -fsSL "https://get.docker.com/" | sh
Note: on CentOS, after installing Docker you also need to run:
$ systemctl enable docker && systemctl start docker
Edit /lib/systemd/system/docker.service and add below ExecStart=..:
ExecStartPost=/sbin/iptables -A FORWARD -s 0.0.0.0/0 -j ACCEPT
Then restart the Docker service:
$ systemctl daemon-reload && systemctl restart docker
All nodes need the /etc/sysctl.d/k8s.conf kernel parameters:
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl -p /etc/sysctl.d/k8s.conf
Install the CFSSL tool on master1; it is used to create the TLS certificates:
$ export CFSSL_URL="https://pkg.cfssl.org/R1.2"
$ wget "${CFSSL_URL}/cfssl_linux-amd64" -O /usr/local/bin/cfssl
$ wget "${CFSSL_URL}/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
$ chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
Etcd
Before installing Kubernetes we need to complete some essential groundwork. Etcd, the highly available store for shared configuration and service discovery, is a key part of it: the nodes fetch the data they need from Etcd.
Creating the cluster CA and certificates
Here we generate the client and server certificates for each component, plus a client certificate on behalf of the kubernetes admin user.
First create the /etc/etcd/ssl directory on master1, then enter it and perform the following steps:
$ mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
$ export PKI_URL="https://kairen.github.io/files/manual-v1.8/pki"
Download ca-config.json and etcd-ca-csr.json, and generate the Etcd CA:
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/etcd-ca-csr.json" $ cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca $ ls etcd-ca*.pem etcd-ca-key.pem etcd-ca.pem
Download etcd-csr.json and generate the Etcd certificate:
$ wget "${PKI_URL}/etcd-csr.json" $ cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd $ ls etcd*.pem etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem
If your node IPs differ, modify the hosts field of etcd-csr.json.
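For reference, the hosts field lists every address the certificate must be valid for; a sketch of the relevant fragment of etcd-csr.json (values here simply mirror this guide's IPs and are otherwise illustrative):
"hosts": [
  "127.0.0.1",
  "172.16.35.12"
],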
Then delete the files we no longer need:
$ rm -rf *.json
And confirm that /etc/etcd/ssl contains:
$ ls /etc/etcd/ssl
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem  etcd.csr  etcd-key.pem  etcd.pem
Installing and configuring Etcd
First download Etcd on the master1 node, unpack it, and install the binaries to /usr/local/bin:
$ export ETCD_URL="https://github.com/coreos/etcd/releases/download"
$ cd && wget -qO- --show-progress "${ETCD_URL}/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz" | tar -zx
$ mv etcd-v3.2.9-linux-amd64/etcd* /usr/local/bin/ && rm -rf etcd-v3.2.9-linux-amd64
Then create the etcd group and user, and set up the Etcd directories:
$ groupadd etcd && useradd -c "Etcd user" -g etcd -s /sbin/nologin -r etcd
Download the etcd configuration files; we will manage Etcd through systemd:
$ export ETCD_CONF_URL="https://kairen.github.io/files/manual-v1.8/master"
$ wget "${ETCD_CONF_URL}/etcd.conf" -O /etc/etcd/etcd.conf
$ wget "${ETCD_CONF_URL}/etcd.service" -O /lib/systemd/system/etcd.service
If you are not using the IPs from the preparation section, replace 172.16.35.12 with your own.
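A quick way to substitute your own IP, assuming 172.16.35.12 appears verbatim in the downloaded files, is a sed replace:
$ export MY_IP="10.0.0.5"   # example value: your master1 IP
$ sed -i "s/172.16.35.12/${MY_IP}/g" /etc/etcd/etcd.conf /lib/systemd/system/etcd.service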
Create the data directory under /var/lib, then start the Etcd service:
$ mkdir -p /var/lib/etcd && chown etcd:etcd -R /var/lib/etcd /etc/etcd
$ systemctl enable etcd.service && systemctl start etcd.service
Verify with the following command:
$ export CA="/etc/etcd/ssl"
$ ETCDCTL_API=3 etcdctl --cacert=${CA}/etcd-ca.pem --cert=${CA}/etcd.pem --key=${CA}/etcd-key.pem --endpoints="https://172.16.35.12:2379" endpoint health
# output
https://172.16.35.12:2379 is healthy: successfully committed proposal: took = 641.36μs
Kubernetes Master
The master is the brain of Kubernetes: through the apiserver, controller manager, and scheduler it manages all nodes.
This part downloads Kubernetes, installs it on the master1 node, and generates the TLS certificates and CA used by the cluster components.
Downloading the Kubernetes components
# Download Kubernetes
$ export KUBE_URL="https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64"
$ wget "${KUBE_URL}/kubelet" -O /usr/local/bin/kubelet
$ wget "${KUBE_URL}/kubectl" -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubelet /usr/local/bin/kubectl
# Download CNI
$ mkdir -p /opt/cni/bin && cd /opt/cni/bin
$ export CNI_URL="https://github.com/containernetworking/plugins/releases/download"
$ wget -qO- --show-progress "${CNI_URL}/v0.6.0/cni-plugins-amd64-v0.6.0.tgz" | tar -zx
Creating the cluster CA and certificates
The principle here is the same as in the Etcd section, and the steps are much alike. First create the pki directory on master1, enter it, and run:
$ mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
$ export PKI_URL="https://kairen.github.io/files/manual-v1.8/pki"
$ export KUBE_APISERVER="https://172.16.35.12:6443"
Download ca-config.json and ca-csr.json, and generate the CA:
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/ca-csr.json" $ cfssl gencert -initca ca-csr.json | cfssljson -bare ca $ ls ca*.pem ca-key.pem ca.pem
Download apiserver-csr.json and generate the kube-apiserver certificate:
$ wget "${PKI_URL}/apiserver-csr.json" $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=10.96.0.1,172.16.35.12,127.0.0.1,kubernetes.default -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver $ ls apiserver*.pem apiserver-key.pem apiserver.pem
If your node IPs differ, modify -hostname.
Download front-proxy-ca-csr.json and generate the front proxy CA. The front proxy is mainly used by the API aggregator:
$ wget "${PKI_URL}/front-proxy-ca-csr.json" $ cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca $ ls front-proxy-ca*.pem front-proxy-ca-key.pem front-proxy-ca.pem
Download front-proxy-client-csr.json and generate the front-proxy-client certificate:
$ wget "${PKI_URL}/front-proxy-client-csr.json" $ cfssl gencert -ca=front-proxy-ca.pem -ca-key=front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare front-proxy-client $ ls front-proxy-client*.pem front-proxy-client-key.pem front-proxy-client.pem
Generating certificates by hand is tedious and only suits a small number of machines: every signing requires binding node IPs, which becomes increasingly inconvenient as machines are added. So here we use TLS Bootstrapping for authorization, letting the apiserver automatically sign certificates for eligible nodes joining the cluster.
It works like this: at startup, the kubelet sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches its configured one; if it matches, the kubelet's certificate and key are generated automatically. For details see TLS bootstrapping.
First generate a BOOTSTRAP_TOKEN and create the bootstrap.conf kubeconfig:
$ export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d " ")
$ cat <<EOF > /etc/kubernetes/token.csv
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# bootstrap set-cluster
$ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=../bootstrap.conf
# bootstrap set-credentials
$ kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=../bootstrap.conf
# bootstrap set-context
$ kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=../bootstrap.conf
# bootstrap set default context
$ kubectl config use-context default --kubeconfig=../bootstrap.conf
If you prefer certificate-based authentication instead, see Kubelet certificate.
Download admin-csr.json and generate the admin certificate:
$ wget "${PKI_URL}/admin-csr.json" $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin $ ls admin*.pem admin-key.pem admin.pem
Then run the following commands to generate the kubeconfig named admin.conf:
# admin set-cluster
$ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=../admin.conf
# admin set-credentials
$ kubectl config set-credentials kubernetes-admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=../admin.conf
# admin set-context
$ kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=../admin.conf
# admin set default context
$ kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=../admin.conf
Download manager-csr.json and generate the kube-controller-manager certificate:
$ wget "${PKI_URL}/manager-csr.json" $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes manager-csr.json | cfssljson -bare controller-manager $ ls controller-manager*.pem
If your node IPs differ, modify the hosts field of manager-csr.json.
Then run the following commands to generate the kubeconfig named controller-manager.conf:
# controller-manager set-cluster
$ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=../controller-manager.conf
# controller-manager set-credentials
$ kubectl config set-credentials system:kube-controller-manager --client-certificate=controller-manager.pem --client-key=controller-manager-key.pem --embed-certs=true --kubeconfig=../controller-manager.conf
# controller-manager set-context
$ kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=../controller-manager.conf
# controller-manager set default context
$ kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=../controller-manager.conf
Download scheduler-csr.json and generate the kube-scheduler certificate:
$ wget "${PKI_URL}/scheduler-csr.json" $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare scheduler $ ls scheduler*.pem scheduler-key.pem scheduler.pem
If your node IPs differ, modify the hosts field of scheduler-csr.json.
Then run the following commands to generate the kubeconfig named scheduler.conf:
# scheduler set-cluster
$ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=../scheduler.conf
# scheduler set-credentials
$ kubectl config set-credentials system:kube-scheduler --client-certificate=scheduler.pem --client-key=scheduler-key.pem --embed-certs=true --kubeconfig=../scheduler.conf
# scheduler set-context
$ kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=../scheduler.conf
# scheduler set default context
$ kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=../scheduler.conf
Download kubelet-csr.json and generate the master node certificate:
$ wget "${PKI_URL}/kubelet-csr.json" $ sed -i "s/$NODE/master1/g" kubelet-csr.json $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=master1,172.16.35.12 -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet $ ls kubelet*.pem kubelet-key.pem kubelet.pem
$NODE must change to match each node's name.
Then run the following commands to generate the kubeconfig named kubelet.conf:
# kubelet set-cluster
$ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=../kubelet.conf
# kubelet set-credentials
$ kubectl config set-credentials system:node:master1 --client-certificate=kubelet.pem --client-key=kubelet-key.pem --embed-certs=true --kubeconfig=../kubelet.conf
# kubelet set-context
$ kubectl config set-context system:node:master1@kubernetes --cluster=kubernetes --user=system:node:master1 --kubeconfig=../kubelet.conf
# kubelet set default context
$ kubectl config use-context system:node:master1@kubernetes --kubeconfig=../kubelet.conf
Service accounts are not authenticated through the CA, so no CA is involved in checking the service account keys; instead we create a private/public key pair for service account key use:
$ openssl genrsa -out sa.key 2048
$ openssl rsa -in sa.key -pubout -out sa.pub
$ ls sa.*
sa.key  sa.pub
Then delete the files we no longer need:
$ rm -rf *.json *.csr
Confirm that /etc/kubernetes and /etc/kubernetes/pki contain the following files:
$ ls /etc/kubernetes/
admin.conf  bootstrap.conf  controller-manager.conf  kubelet.conf  pki  scheduler.conf  token.csv
$ ls /etc/kubernetes/pki
admin-key.pem  apiserver-key.pem  ca-key.pem  controller-manager-key.pem  front-proxy-ca-key.pem  front-proxy-client-key.pem  kubelet-key.pem  sa.key  scheduler-key.pem
admin.pem  apiserver.pem  ca.pem  controller-manager.pem  front-proxy-ca.pem  front-proxy-client.pem  kubelet.pem  sa.pub  scheduler.pem
Installing the Kubernetes core components
Download the YAML files for the Kubernetes core components. We use Kubernetes static pods to run the master's core components, so download all the static pod files into the /etc/kubernetes/manifests directory:
$ export CORE_URL="https://kairen.github.io/files/manual-v1.8/master"
$ mkdir -p /etc/kubernetes/manifests && cd /etc/kubernetes/manifests
$ for FILE in apiserver manager scheduler; do
    wget "${CORE_URL}/${FILE}.yml.conf" -O ${FILE}.yml
  done
Likewise, if your IPs differ from those prepared in this guide, modify apiserver.yml, manager.yml, and scheduler.yml. For the NodeRestriction setting in the apiserver, see Using Node Authorization.
Generate a key for encrypting Etcd data:
$ head -c 32 /dev/urandom | base64
SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
Create the encryption.yml encryption config in the /etc/kubernetes/ directory:
$ cat <<EOF > /etc/kubernetes/encryption.yml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
      - identity: {}
EOF
For Etcd encryption, see Encrypting data at rest.
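For context, this file only takes effect once kube-apiserver is pointed at it. In v1.8 that was done with the --experimental-encryption-provider-config flag, which the downloaded apiserver manifest presumably sets roughly like this (a sketch, not the verbatim manifest):
# Fragment of the kube-apiserver command line (illustrative)
--experimental-encryption-provider-config=/etc/kubernetes/encryption.yml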
Create the audit-policy.yml auditing policy file in the /etc/kubernetes/ directory:
$ cat <<EOF > /etc/kubernetes/audit-policy.yml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  - level: Metadata
EOF
For audit policy details, see Audit.
Download the kubelet.service files used to manage the kubelet:
$ export KUBELET_URL="https://kairen.github.io/files/manual-v1.8/master"
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${KUBELET_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${KUBELET_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
If your cluster-dns or cluster-domain differs, modify 10-kubelet.conf accordingly.
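These settings correspond to kubelet flags; the drop-in presumably carries something along these lines (illustrative values matching this guide's defaults):
# Illustrative fragment of 10-kubelet.conf -- adjust if your service CIDR or domain differs
--cluster-dns=10.96.0.10 --cluster-domain=cluster.local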
Finally, create the working directories and start the kubelet service:
$ mkdir -p /var/lib/kubelet /var/log/kubernetes
$ systemctl enable kubelet.service && systemctl start kubelet.service
It then takes a while to pull the images and start the components:
$ watch netstat -ntlp
tcp   0  0 127.0.0.1:10248  0.0.0.0:*  LISTEN  23012/kubelet
tcp   0  0 127.0.0.1:10251  0.0.0.0:*  LISTEN  22305/kube-schedule
tcp   0  0 127.0.0.1:10252  0.0.0.0:*  LISTEN  22529/kube-controll
tcp6  0  0 :::6443          :::*       LISTEN  22956/kube-apiserve
Seeing the output above means the services started correctly; if something goes wrong, you can inspect the components with the docker CLI.
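For example (a sketch of the kind of inspection that helps):
$ docker ps -a | grep kube-apiserver   # find the component's container ID
$ docker logs <container-id>           # then read its logs for errors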
Once up, copy the admin kubeconfig and verify with the following commands:
$ cp /etc/kubernetes/admin.conf ~/.kube/config
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok
$ kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
master1   NotReady   master    1m        v1.8.6
$ kubectl -n kube-system get po
NAME                              READY     STATUS    RESTARTS   AGE
kube-apiserver-master1            1/1       Running   0          4m
kube-controller-manager-master1   1/1       Running   0          4m
kube-scheduler-master1            1/1       Running   0          4m
Confirm that commands such as logs work:
$ kubectl -n kube-system logs -f kube-scheduler-master1
Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-apiserver-master1)
The 403 Forbidden error shows that the kube-apiserver user has no permissions on nodes.
Because of this permission issue, we need to create an apiserver-to-kubelet-rbac.yml defining the permissions that let us run logs, exec, and similar commands; it is downloaded and applied below.
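The downloaded manifest is not reproduced in this article; conceptually it grants the kube-apiserver user access to the kubelet API, roughly like the following sketch (illustrative, not the verbatim file):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver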
$ cd /etc/kubernetes/
$ export URL="https://kairen.github.io/files/manual-v1.8/master"
$ wget "${URL}/apiserver-to-kubelet-rbac.yml.conf" -O apiserver-to-kubelet-rbac.yml
$ kubectl apply -f apiserver-to-kubelet-rbac.yml
# test logs
$ kubectl -n kube-system logs -f kube-scheduler-master1
...
I1031 03:22:42.527697       1 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
Kubernetes Node
Nodes are the machines that run the container instances, i.e. the worker nodes. In this part we download the Kubernetes binaries and create the node certificates used for registration. Kubernetes uses the Node Authorizer authorization mode, which specifically authorizes the API requests made by kubelets.
Before starting, copy the needed CA and certificates from master1 to the node machines:
$ for NODE in node1 node2; do
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki/"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    # Etcd ca and cert
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
    # Kubernetes ca and cert
    for FILE in pki/ca.pem pki/ca-key.pem bootstrap.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
Downloading the Kubernetes components
First fetch all the binaries we need to run:
# Download Kubernetes
$ export KUBE_URL="https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64"
$ wget "${KUBE_URL}/kubelet" -O /usr/local/bin/kubelet
$ chmod +x /usr/local/bin/kubelet
# Download CNI
$ mkdir -p /opt/cni/bin && cd /opt/cni/bin
$ export CNI_URL="https://github.com/containernetworking/plugins/releases/download"
$ wget -qO- --show-progress "${CNI_URL}/v0.6.0/cni-plugins-amd64-v0.6.0.tgz" | tar -zx
Configuring the Kubernetes node
Download the kubelet files, including the systemd drop-in and service unit:
$ export KUBELET_URL="https://kairen.github.io/files/manual-v1.8/node"
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${KUBELET_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${KUBELET_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
If your cluster-dns or cluster-domain differs, modify 10-kubelet.conf accordingly.
Then, on all nodes, create the working directories and start the kubelet service:
$ mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/kubernetes/manifests
$ systemctl enable kubelet.service && systemctl start kubelet.service
Authorizing the Kubernetes nodes
After repeating this on all nodes, create the ClusterRoleBinding on master1 (since we use TLS Bootstrapping):
$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Verify on the master: the node CSRs are pending:
$ kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-YWf97ZrLCTlr2hmXsNLfjVLwaLfZRsu52FRKOYjpcBE   2s        kubelet-bootstrap   Pending
node-csr-eq4q6ffOwT4yqYQNU6sT7mphPOQdFN6yulMVZeu6pkE   2s        kubelet-bootstrap   Pending
Approve the nodes into the cluster with kubectl:
$ kubectl get csr | awk '/Pending/ {print $1}' | xargs kubectl certificate approve
certificatesigningrequest "node-csr-YWf97ZrLCTlr2hmXsNLfjVLwaLfZRsu52FRKOYjpcBE" approved
certificatesigningrequest "node-csr-eq4q6ffOwT4yqYQNU6sT7mphPOQdFN6yulMVZeu6pkE" approved
$ kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-YWf97ZrLCTlr2hmXsNLfjVLwaLfZRsu52FRKOYjpcBE   30s       kubelet-bootstrap   Approved,Issued
node-csr-eq4q6ffOwT4yqYQNU6sT7mphPOQdFN6yulMVZeu6pkE   30s       kubelet-bootstrap   Approved,Issued
$ kubectl get no
NAME      STATUS     ROLES     AGE       VERSION
master1   NotReady   master    21m       v1.8.6
node1     NotReady   node      8s        v1.8.6
node2     NotReady   node      8s        v1.8.6
Deploying the Kubernetes core addons
With all the steps above complete, we still need to install some addons, such as kube-dns and kube-proxy.
Kube-proxy addon
Kube-proxy is the key component that implements Services: it runs on every node, watches the API server for Service and Endpoint changes, and turns those changes into iptables rules for network forwarding.
Here we run it as a DaemonSet, which requires generating some certificates first.
First download kube-proxy-csr.json on master1 and generate the kube-proxy certificate:
$ export PKI_URL="https://kairen.github.io/files/manual-v1.8/pki"
$ cd /etc/kubernetes/pki
$ wget "${PKI_URL}/kube-proxy-csr.json" "${PKI_URL}/ca-config.json"
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*.pem
kube-proxy-key.pem  kube-proxy.pem
Then generate the kubeconfig named kube-proxy.conf with the following commands:
# kube-proxy set-cluster
$ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server="https://172.16.35.12:6443" --kubeconfig=../kube-proxy.conf
# kube-proxy set-credentials
$ kubectl config set-credentials system:kube-proxy --client-key=kube-proxy-key.pem --client-certificate=kube-proxy.pem --embed-certs=true --kubeconfig=../kube-proxy.conf
# kube-proxy set-context
$ kubectl config set-context system:kube-proxy@kubernetes --cluster=kubernetes --user=system:kube-proxy --kubeconfig=../kube-proxy.conf
# kube-proxy set default context
$ kubectl config use-context system:kube-proxy@kubernetes --kubeconfig=../kube-proxy.conf
Delete the files we no longer need:
$ rm -rf *.json
Confirm that /etc/kubernetes contains the following files:
$ ls /etc/kubernetes/
admin.conf  bootstrap.conf  encryption.yml  kube-proxy.conf  pki  token.csv
audit-policy.yml  controller-manager.conf  kubelet.conf  manifests  scheduler.conf
On master1, copy the kube-proxy files to the node machines:
$ for NODE in node1 node2; do
    echo "--- $NODE ---"
    for FILE in pki/kube-proxy.pem pki/kube-proxy-key.pem kube-proxy.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
Then, on master1, create the kube-proxy daemon with kubectl:
$ export ADDON_URL="https://kairen.github.io/files/manual-v1.8/addon"
$ mkdir -p /etc/kubernetes/addons && cd /etc/kubernetes/addons
$ wget "${ADDON_URL}/kube-proxy.yml.conf" -O kube-proxy.yml
$ kubectl apply -f kube-proxy.yml
$ kubectl -n kube-system get po -l k8s-app=kube-proxy
NAME               READY     STATUS    RESTARTS   AGE
kube-proxy-bpp7q   1/1       Running   0          47s
kube-proxy-cztvh   1/1       Running   0          47s
kube-proxy-q7mm4   1/1       Running   0          47s
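With the pods running, you can confirm on any node that kube-proxy has started programming iptables; the KUBE-SERVICES chain it maintains should be present (a quick hedged check):
$ iptables-save -t nat | grep KUBE-SERVICES   # service chains written by kube-proxy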
Kube-dns addon
Kube DNS is an essential addon for communication between Pods inside a Kubernetes cluster, letting Pods reach Services by domain name. It combines Kube DNS and Sky DNS: Kube DNS watches Service and Endpoint changes and feeds that information to Sky DNS to keep its resolution addresses up to date.
To install it, just create the kube-dns deployment on master1 with kubectl:
$ export ADDON_URL="https://kairen.github.io/files/manual-v1.8/addon"
$ wget "${ADDON_URL}/kube-dns.yml.conf" -O kube-dns.yml
$ kubectl apply -f kube-dns.yml
$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-6cb549f55f-h4zr5   0/3       Pending   0          40s
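The pod stays Pending until the network plugin below is installed. Once it is Running, a quick hedged way to verify cluster DNS is a throwaway busybox pod:
$ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup kubernetes.default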
Installing and configuring the Calico network
Calico is a pure layer-3 networking solution (no overlay network required) that integrates well with the various cloud-native platforms. On each node it relies on the Linux kernel to provide an efficient vRouter for data forwarding, and as data center complexity grows, BGP route reflectors can take over route distribution.
First create the Calico policy controller on master1 with kubectl:
$ export CALICO_CONF_URL="https://kairen.github.io/files/manual-v1.8/network"
$ wget "${CALICO_CONF_URL}/calico-controller.yml.conf" -O calico-controller.yml
$ kubectl apply -f calico-controller.yml
$ kubectl -n kube-system get po -l k8s-app=calico-policy
NAME                                        READY     STATUS    RESTARTS   AGE
calico-policy-controller-5ff8b4549d-tctmm   0/1       Pending   0          5s
If your node IPs differ, modify ETCD_ENDPOINTS in calico-controller.yml.
Download the Calico CLI tool on master1:
$ wget https://github.com/projectcalico/calicoctl/releases/download/v1.6.1/calicoctl
$ chmod +x calicoctl && mv calicoctl /usr/local/bin/
Then download Calico on all nodes and run the following:
$ export CALICO_URL="https://github.com/projectcalico/cni-plugin/releases/download/v1.11.0"
$ wget -N -P /opt/cni/bin ${CALICO_URL}/calico
$ wget -N -P /opt/cni/bin ${CALICO_URL}/calico-ipam
$ chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
Next, download the CNI plugin config and calico-node.service on all nodes:
$ mkdir -p /etc/cni/net.d
$ export CALICO_CONF_URL="https://kairen.github.io/files/manual-v1.8/network"
$ wget "${CALICO_CONF_URL}/10-calico.conf" -O /etc/cni/net.d/10-calico.conf
$ wget "${CALICO_CONF_URL}/calico-node.service" -O /lib/systemd/system/calico-node.service
If your node IPs differ, modify etcd_endpoints in 10-calico.conf. If you are deploying on virtual machines, also modify calico-node.service and set IP_AUTODETECTION_METHOD (including the IP6 variant) to the interface to bind, so Calico does not end up bound to a NAT network by default.
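For example, to pin autodetection to a specific interface, the unit's environment would carry something like this (eth1 is an illustrative interface name):
# Illustrative fragment of calico-node.service
Environment=IP_AUTODETECTION_METHOD=interface=eth1
Environment=IP6_AUTODETECTION_METHOD=interface=eth1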
Then start calico-node on all nodes:
$ systemctl enable calico-node.service && systemctl start calico-node.service
Check the Calico nodes on master1:
$ cat <<EOF > ~/calico-rc
export ETCD_ENDPOINTS="https://172.16.35.12:2379"
export ETCD_CA_CERT_FILE="/etc/etcd/ssl/etcd-ca.pem"
export ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
export ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
EOF
$ . ~/calico-rc
$ calicoctl get node -o wide
NAME      ASN       IPV4              IPV6
master1   (64512)   172.16.35.12/24
node1     (64512)   172.16.35.10/24
node2     (64512)   172.16.35.11/24
Check that the previously pending pods are now running:
$ kubectl -n kube-system get po
NAME                                        READY     STATUS    RESTARTS   AGE
calico-policy-controller-5ff8b4549d-tctmm   1/1       Running   0          4m
kube-apiserver-master1                      1/1       Running   0          20m
kube-controller-manager-master1             1/1       Running   0          20m
kube-dns-6cb549f55f-h4zr5                   3/3       Running   0          5m
kube-proxy-fnrkb                            1/1       Running   0          6m
kube-proxy-l72bq                            1/1       Running   0          6m
kube-proxy-m6rfw                            1/1       Running   0          6m
kube-scheduler-master1                      1/1       Running   0          20m
The low-effort alternative is to install Calico the standard hosted way.
Deploying the Kubernetes extra addons
This part covers deploying the commonly used official addons, such as the dashboard and heapster.
Dashboard addon
The Dashboard is the web UI officially developed by Kubernetes, letting us manage the cluster through a browser.
Just create the kubernetes dashboard on master1 with kubectl:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
NAME                                      READY     STATUS    RESTARTS   AGE
po/kubernetes-dashboard-747c4f7cf-md5m8   1/1       Running   0          56s
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes-dashboard   ClusterIP   10.98.120.209   <none>        443/TCP   56s
Here we additionally create a ClusterRoleBinding named open-api for testing convenience; normally you would not enable it (doing so grants access to all APIs):
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:anonymous
EOF
Administrators can open API access for specific users; here, for simplicity, we bind straight to the cluster-admin cluster role. Since version 1.7 the Dashboard no longer ships with full permissions, so we create a service account and bind it to the cluster-admin role:
$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ SECRET=$(kubectl -n kube-system get sa dashboard -o yaml | awk '/dashboard-token/ {print $3}')
$ kubectl -n kube-system describe secrets ${SECRET} | awk '/token:/{print $2}'
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdzVocmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWJmMTFjYzMtZjRlYi0xMWU3LTgzYWUtMDgwMDI3NjdkOWI5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.Xuyq34ci7Mk8bI97o4IldDyKySOOqRXRsxVWIJkPNiVUxKT4wpQZtikNJe2mfUBBD-JvoXTzwqyeSSTsAy2CiKQhekW8QgPLYelkBPBibySjBhJpiCD38J1u7yru4P0Pww2ZQJDjIxY4vqT46ywBklReGVqY3ogtUQg-eXueBmz-o7lJYMjw8L14692OJuhBjzTRSaKW8U2MPluBVnD7M2SOekDff7KpSxgOwXHsLVQoMrVNbspUCvtIiEI1EiXkyCNRGwfnd2my3uzUABIHFhm0_RZSmGwExPbxflr8Fc6bxmuz-_jSdOtUidYkFIzvEWw2vRovPgs3MXTv59RwUw
Copy the token and paste it into the Kubernetes dashboard.
Heapster addon
Heapster is the community-maintained container cluster monitoring and analysis tool for Kubernetes. Heapster fetches the list of nodes from the Kubernetes apiserver, collects metrics from the kubelet on each node, stores everything in its InfluxDB backend, and Grafana then reads the InfluxDB data source for display.
Just create the kubernetes monitor on master1 with kubectl:
$ export ADDON_URL="https://kairen.github.io/files/manual-v1.8/addon"
$ wget ${ADDON_URL}/kube-monitor.yml.conf -O kube-monitor.yml
$ kubectl apply -f kube-monitor.yml
$ kubectl -n kube-system get po,svc
NAME                                  READY     STATUS    RESTARTS   AGE
...
po/heapster-74fb5c8cdc-62xzc          4/4       Running   0          7m
po/influxdb-grafana-55bd7df44-nw4nc   2/2       Running   0          7m
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
...
svc/heapster              ClusterIP   10.100.242.225   <none>        80/TCP              7m
svc/monitoring-grafana    ClusterIP   10.101.106.180   <none>        80/TCP              7m
svc/monitoring-influxdb   ClusterIP   10.109.245.142   <none>        8083/TCP,8086/TCP   7m
Deploying a simple Nginx service
Kubernetes lets us create applications and services directly from the command line, or from YAML/JSON configuration files, as shown below:
$ kubectl run nginx --image=nginx --port=80
$ kubectl expose deploy nginx --port=80 --type=LoadBalancer --external-ip=172.16.35.12
$ kubectl get svc,po
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
svc/kubernetes   ClusterIP      10.96.0.1       <none>         443/TCP        1h
svc/nginx        LoadBalancer   10.97.121.243   172.16.35.12   80:30344/TCP   22s
NAME                        READY     STATUS    RESTARTS   AGE
po/nginx-7cbc4b4d9c-7796l   1/1       Running   0          28s
Here type can be NodePort or LoadBalancer for a local bare-metal deployment; the difference is that NodePort only maps a host port to the container port, while LoadBalancer builds on NodePort and additionally maps the host target port to the container port.
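A quick sanity check, from any machine that can reach the external IP:
$ curl -I http://172.16.35.12   # expect HTTP/1.1 200 OK from the nginx welcome page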
Scaling the service
Finally, we can scale the number of replicas:
$ kubectl scale deploy nginx --replicas=2
$ kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-158599303-0h9lr   1/1       Running   0          25s       10.244.100.5   node2
nginx-158599303-k7cbt   1/1       Running   0          1m        10.244.24.3    node1