Building a K8S Cluster: kubernetes-1.11.3


1.1 Lab architecture:

Kubernetes architecture

node1: master 10.192.44.129

node2: node2 10.192.44.127

node3: node3 10.192.44.126

etcd architecture

node1: master 10.192.44.129

node2: node 10.192.44.127

node3: node 10.192.44.126

Harbor server

redhat128.example.com 

10.192.44.128

2. Installation

2.1 Configure system parameters (on every node):

2.1.1 Disable SELinux (setenforce 0 takes effect immediately; the sed change persists across reboots)

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux

setenforce 0

2.1.2 Turn off swap for the current boot; to disable it permanently, comment out the swap line in /etc/fstab

swapoff -a

2.1.3 Enable forwarding

iptables -P FORWARD ACCEPT

2.1.4 Configure the bridge/forwarding kernel parameters, otherwise errors may occur later

cat > /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness=0

EOF

sysctl --system
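To confirm the parameters took effect (if the net.bridge keys are missing, load the kernel module first with modprobe br_netfilter):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness
# Expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.bridge.bridge-nf-call-ip6tables = 1
# vm.swappiness = 0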

2.1.5 Configure /etc/hosts

10.192.44.126 node3

10.192.44.127 node2

10.192.44.128 redhat128

10.192.44.129 node1

2.1.6 Install Docker (see my earlier blog post).

2.1.7 Time synchronization

yum install ntpdate -y && ntpdate 0.asia.pool.ntp.org

3. Create TLS certificates and keys (master node)

3.1 The generated certificate files are:

ca-key.pem # CA (root) private key

ca.pem # CA (root) certificate

kubernetes-key.pem # cluster private key

kubernetes.pem # cluster certificate

kube-proxy.pem # kube-proxy certificate, used for node authentication

kube-proxy-key.pem # kube-proxy private key, used for node authentication

admin.pem # admin certificate, mainly used for kubectl authentication

admin-key.pem # admin private key, mainly used for kubectl authentication

Background:

TLS: TLS encrypts the communication channel and prevents a man in the middle from eavesdropping. Moreover, a client whose certificate is not trusted cannot even establish a connection to the apiserver, let alone request anything from it.

RBAC: RBAC defines which APIs a user or group (subject) is allowed to call. Combined with TLS, the apiserver reads the CN field of the client certificate as the user name and the O field as the group.

Summary: to talk to the apiserver, a client must present a certificate signed by the apiserver's CA; that establishes trust and the TLS connection. Second, the certificate's CN and O fields supply the user and group that RBAC needs.
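A concrete way to see both halves of this, once the apiserver from section 6 is running, is to probe it with curl using the certificates generated below (a sketch, not part of the original setup steps):

# TLS alone: the handshake succeeds because we trust the CA, but with
# anonymous auth disabled the request itself should be rejected.
curl --cacert /etc/kubernetes/ssl/ca.pem https://10.192.44.129:6443/version

# Presenting the admin client certificate authenticates us as
# user CN=admin in group O=system:masters, so this should return version info.
curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/admin.pem \
  --key /etc/kubernetes/ssl/admin-key.pem \
  https://10.192.44.129:6443/version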

3.2 Download and install CFSSL (an HTTP API toolkit for signing, verifying and bundling TLS certificates) (master node)

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv  cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

mv  cfssl_linux-amd64 /usr/local/bin/cfssl

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson
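A quick check that the binaries are usable (the exact version strings depend on the build you downloaded):

cfssl version
# Version: 1.2.0
# Revision: dev
# Runtime: go1.6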

3.3 Create the CA (Certificate Authority) (master node)

mkdir /root/ssl

cd /root/ssl

cfssl print-defaults config > config.json

cfssl print-defaults csr > csr.json

# Create ca-config.json based on the format of config.json

# The expiry is set to 87600h (10 years)

cat > ca-config.json << EOF

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

EOF

Notes:

ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenarios and other parameters; a specific profile is selected later when signing certificates.

signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE.

server auth: a client may use this CA to verify certificates presented by servers.

client auth: a server may use this CA to verify certificates presented by clients.

3.4 Create the CA certificate signing request

cat > ca-csr.json << EOF

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}

EOF

Notes:

"CN": Common Name; kube-apiserver extracts this field from the certificate as the request's user name (User Name).

"O": Organization; kube-apiserver extracts this field as the group (Group) the user belongs to.

3.5 Generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
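cfssljson -bare ca writes the output files; the directory should now contain something like:

ls ca*
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem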

3.6 Create the kubernetes certificate signing request

cat > kubernetes-csr.json << EOF

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.192.44.129",
    "10.192.44.128",
    "10.192.44.126",
    "10.192.44.127",
    "10.254.0.1",
    "*.kubernetes.master",
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

EOF

Notes:

This certificate is dedicated to the apiserver. The *.kubernetes.master wildcard was added for internal private DNS resolution and may be removed. Many people ask whether the kubernetes* entries can be deleted; the answer is no: once the cluster is up, a service named kubernetes is created in the default namespace, and some components talk to the API directly through that service. If the certificate does not cover it, they may fail to connect. The other kubernetes-prefixed names serve the same purpose.

hosts defines the scope of authorization; a node or service not in this list will get a certificate-mismatch error when using this certificate.

10.254.0.1 is the first IP of the service-cluster-ip-range configured on kube-apiserver.

3.7 Generate the kubernetes certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

3.8 Create the admin certificate signing request

cat > admin-csr.json << EOF

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

EOF

3.9 Generate the admin certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Notes:

The admin certificate is later used to generate the administrator's kubeconfig file. RBAC is now the recommended way to control access to Kubernetes, and Kubernetes takes the certificate's CN field as the User and the O field as the Group.
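To see the User/Group mapping for yourself, inspect the subject of the certificate just generated (output formatting varies with the openssl version):

openssl x509 -noout -subject -in admin.pem
# subject= /C=CN/ST=GuangDong/L=ShenZhen/O=system:masters/OU=System/CN=admin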

3.10 Create the kube-proxy certificate signing request

cat > kube-proxy-csr.json << EOF

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

EOF 

3.11 Generate the kube-proxy client certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

3.12 Verify a certificate

openssl x509  -noout -text -in  kubernetes.pem

3.13 Distribute the certificates: copy the generated certificate and key files (the .pem files) to /etc/kubernetes/ssl on every machine

mkdir -p /etc/kubernetes/ssl

for n in node2 node3; do ssh $n "mkdir -p /etc/kubernetes/ssl"; scp *.pem $n:/etc/kubernetes/ssl; done

4. Create kubeconfig files (master node)

4.1 Generate the token file

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

export KUBE_APISERVER="https://10.192.44.129:6443"

echo $BOOTSTRAP_TOKEN

cat > token.csv << EOF

${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:bootstrappers"

EOF

cp token.csv /etc/kubernetes/

Note: do not second-guess the system:bootstrappers group; if in doubt, see the official documentation at https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/

4.2 Create the kubelet kubeconfig file

A kubeconfig file is essentially an access-control configuration at the cluster level. If --kubeconfig=xx.kubeconfig is not given, kubectl falls back to ~/.kube/config as its default configuration. kubeadm follows the same convention: it asks you to copy the generated kubeconfig to ~/.kube/config.

cd /etc/kubernetes/ssl

4.2.1 Set cluster parameters

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/ssl/ca.pem \

  --embed-certs=true \

  --server=${KUBE_APISERVER} \

  --kubeconfig=bootstrap.kubeconfig

4.2.2 Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \

  --token=${BOOTSTRAP_TOKEN} \

  --kubeconfig=bootstrap.kubeconfig

4.2.3 Set context parameters

kubectl config set-context default \

  --cluster=kubernetes \

  --user=kubelet-bootstrap \

  --kubeconfig=bootstrap.kubeconfig

4.2.4 Set the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
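To double-check what was written, kubectl can print the file back (embedded certificate data is elided in the output):

kubectl config view --kubeconfig=bootstrap.kubeconfig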

4.3 Create the kube-proxy kubeconfig file

4.3.1 Set cluster parameters

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/ssl/ca.pem \

  --embed-certs=true \

  --server=${KUBE_APISERVER} \

  --kubeconfig=kube-proxy.kubeconfig

4.3.2 Set client authentication parameters

kubectl config set-credentials kube-proxy \

  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \

  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \

  --embed-certs=true \

  --kubeconfig=kube-proxy.kubeconfig

4.3.3 Set context parameters

kubectl config set-context default \

  --cluster=kubernetes \

  --user=kube-proxy \

  --kubeconfig=kube-proxy.kubeconfig

4.3.4 Set the default context

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4.4 Distribute the kubeconfig files

for n in node2 node3; do scp bootstrap.kubeconfig kube-proxy.kubeconfig $n:/etc/kubernetes/; done

4.5 Create the admin kubeconfig file

4.5.1 Set cluster parameters

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/ssl/ca.pem \

  --embed-certs=true \

  --server=${KUBE_APISERVER} \

  --kubeconfig=admin.conf

4.5.2 Set client authentication parameters

kubectl config set-credentials admin \

  --client-certificate=/etc/kubernetes/ssl/admin.pem \

  --embed-certs=true \

  --client-key=/etc/kubernetes/ssl/admin-key.pem \

  --kubeconfig=admin.conf

4.5.3 Set context parameters

kubectl config set-context default \

  --cluster=kubernetes \

  --user=admin \

  --kubeconfig=admin.conf

4.5.4 Set the default context

kubectl config use-context default --kubeconfig=admin.conf

4.6 Create the audit policy file

cat > audit-policy.yaml << EOF

# Log all requests at the Metadata level.

apiVersion: audit.k8s.io/v1beta1

kind: Policy

rules:

- level: Metadata

EOF

4.7 Copy files:

#cp ~/.kube/config /etc/kubernetes/kubelet.kubeconfig (I only needed this step because adding a node failed; if you hit no problems, skip it, and the same goes for kubelet.kubeconfig below)

scp /etc/kubernetes/{kubelet.kubeconfig,bootstrap.kubeconfig,kube-proxy.kubeconfig} node2:/etc/kubernetes/

scp /etc/kubernetes/{kubelet.kubeconfig,bootstrap.kubeconfig,kube-proxy.kubeconfig} node3:/etc/kubernetes/

5. Create the etcd cluster

5.1 Create the etcd systemd service (on every node)

cat > /usr/lib/systemd/system/etcd.service << 'EOF'

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

EnvironmentFile=-/etc/etcd/etcd.conf

ExecStart=/usr/local/bin/etcd \

  --name ${ETCD_NAME} \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \

  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \

  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \

  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \

  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \

  --initial-cluster infra1=https://10.192.44.129:2380,infra2=https://10.192.44.127:2380,infra3=https://10.192.44.126:2380 \

  --initial-cluster-state new \

  --data-dir=${ETCD_DATA_DIR}

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

Note: systemd manages and drives the service. In EnvironmentFile=-/path, the leading "-" suppresses errors: if the file is missing, the remaining directives still execute.

5.2 Edit the configuration file (etcd1 shown here; for etcd2 and etcd3 replace the name and IP addresses)

mkdir /etc/etcd

cat > /etc/etcd/etcd.conf << EOF

ETCD_NAME=infra1

ETCD_DATA_DIR="/var/lib/etcd"

ETCD_LISTEN_PEER_URLS="https://10.192.44.129:2380"

ETCD_LISTEN_CLIENT_URLS="https://10.192.44.129:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.192.44.129:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="https://10.192.44.129:2379"
EOF

5.3 Start the etcd servers; remember to create /var/lib/etcd first.

mkdir /var/lib/etcd   

systemctl enable etcd && systemctl start etcd
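Before moving on, it is worth checking that the three members actually formed a cluster; a sketch using the same etcd v2 flags as the later commands in this guide:

etcdctl --endpoints=https://10.192.44.129:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
# A healthy cluster reports each member and ends with: cluster is healthy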

6. Deploy the master node (the server tarball apparently needs to be extracted manually)

6.1 Download the kubernetes binaries

Download kubernetes (v1.11.3):

wget  https://github.com/kubernetes/kubernetes/releases/download/v1.11.3/kubernetes.tar.gz

tar -xzvf kubernetes.tar.gz

cd kubernetes

./cluster/get-kube-binaries.sh

# If that fails, do it manually:

cd server/

tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kube-apiserver /usr/local/bin/kube-apiserver

cp kubernetes/server/bin/kube-controller-manager /usr/local/bin/kube-controller-manager

cp kubernetes/server/bin/kube-scheduler /usr/local/bin/kube-scheduler

chmod +x /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler}

6.2 Configure systemd services for kube-apiserver, kube-controller-manager and kube-scheduler

6.2.1 Create kube-apiserver.service

cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'

[Unit]

Description=Kubernetes API Service

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

After=etcd.service

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/apiserver

ExecStart=/usr/local/bin/kube-apiserver \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_ETCD_SERVERS \

$KUBE_API_ADDRESS \

$KUBE_API_PORT \

$KUBELET_PORT \

$KUBE_ALLOW_PRIV \

$KUBE_SERVICE_ADDRESSES \

$KUBE_ADMISSION_CONTROL \

$KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

6.2.2 Create kube-controller-manager.service

cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/local/bin/kube-controller-manager \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

6.2.3 Create kube-scheduler.service

cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'

[Unit]

Description=Kubernetes Scheduler Plugin

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/local/bin/kube-scheduler \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

6.2.4 Edit the /etc/kubernetes/config file

cat > /etc/kubernetes/config << EOF

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver

#KUBE_MASTER="--master=http://test-001.jimmysong.io:8080"

KUBE_MASTER="--master=http://10.192.44.129:8080"

EOF

6.2.5 Edit the apiserver configuration file

cat > /etc/kubernetes/apiserver << EOF

###

## kubernetes system config

##

## The following values are used to configure the kube-apiserver

##

#

## The address on the local server to listen to.

#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"

KUBE_API_ADDRESS="--advertise-address=10.192.44.129 --bind-address=10.192.44.129 --insecure-bind-address=10.192.44.129"

#

## The port on the local server to listen on.

KUBE_API_PORT="--secure-port=6443"

#

## Port minions listen on

#KUBELET_PORT="--kubelet-port=10250"

#

## Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379"

#

## Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

#

## default admission control policies

KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction"

#

## Add your own!

KUBE_API_ARGS="--anonymous-auth=false \

--authorization-mode=Node,RBAC  \

--kubelet-https=true \

--kubelet-timeout=3s \

--enable-bootstrap-token-auth \

--enable-garbage-collector \

--enable-logs-handler \

--token-auth-file=/etc/kubernetes/token.csv \

--service-node-port-range=30000-32767 \

--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

--client-ca-file=/etc/kubernetes/ssl/ca.pem \

--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \

--etcd-cafile=/etc/kubernetes/ssl/ca.pem \

--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \

--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \

--etcd-compaction-interval=5m0s \

--etcd-count-metric-poll-period=1m0s \

--enable-swagger-ui=true \

--apiserver-count=3 \

--log-flush-frequency=5s \

--audit-log-maxage=30 \

--audit-log-maxbackup=3 \

--audit-log-maxsize=100 \

--audit-log-path=/var/lib/audit.log \

--audit-policy-file=/etc/kubernetes/audit-policy.yaml \

--storage-backend=etcd3 \

--event-ttl=1h"

EOF

6.2.6 Edit the controller-manager configuration file

cat > /etc/kubernetes/controller-manager << EOF

###

# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!

KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 

--service-cluster-ip-range=10.254.0.0/16 

--cluster-name=kubernetes 

--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem 

--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  

--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem 

--root-ca-file=/etc/kubernetes/ssl/ca.pem 

--leader-elect=true"

EOF

6.2.7 Edit the scheduler configuration file

cat > /etc/kubernetes/scheduler << EOF

###

# kubernetes scheduler config

# default config should be adequate

# Add your own!

KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1 --algorithm-provider=DefaultProvider"
EOF

6.2.8 Start the services

systemctl daemon-reload

systemctl enable kube-apiserver kube-controller-manager kube-scheduler

systemctl start kube-apiserver kube-controller-manager kube-scheduler

6.2.9 Verify the master components

kubectl get componentstatuses  

Expected output:

NAME                 STATUS    MESSAGE              ERROR

scheduler            Healthy   ok                   

controller-manager   Healthy   ok                   

etcd-0               Healthy   {"health": "true"}   

etcd-1               Healthy   {"health": "true"}   

etcd-2               Healthy   {"health": "true"}

6.2.10 kubectl command completion

echo "source <(kubectl completion bash)" >> ~/.bashrc

source ~/.bashrc

7. Install the flannel network plugin

7.1 Install and configure flannel with yum (every node)

yum install -y flannel

7.2 Configure the service file (every node)

cat > /usr/lib/systemd/system/flanneld.service << 'EOF'

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service

[Service]

Type=notify

EnvironmentFile=/etc/sysconfig/flanneld

EnvironmentFile=-/etc/sysconfig/docker-network

ExecStart=/usr/bin/flanneld-start \

  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \

  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \

  $FLANNEL_OPTIONS

ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service
EOF

Note: mk-docker-opts.sh turns flannel's subnet lease into environment files (/run/flannel/subnet.env and, as configured here, /run/flannel/docker); docker sources these later.
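For reference, after flanneld starts, the generated file looks roughly like this (the subnet is leased per node, so the values below are only an example):

cat /run/flannel/docker
# DOCKER_OPT_BIP="--bip=172.30.34.1/24"
# DOCKER_OPT_IPMASQ="--ip-masq=true"
# DOCKER_OPT_MTU="--mtu=1500"
# DOCKER_NETWORK_OPTIONS=" --bip=172.30.34.1/24 --ip-masq=true --mtu=1500"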

7.3 Create the flanneld configuration file (every node)

cat > /etc/sysconfig/flanneld << EOF

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379"

# etcd config key.  This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass

FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
EOF

7.4 Create the network configuration in etcd (host-gw mode; the data is shared, so running this once is enough)

etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \

  --ca-file=/etc/kubernetes/ssl/ca.pem \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  mkdir /kube-centos/network

etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \

  --ca-file=/etc/kubernetes/ssl/ca.pem \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'

  

7.5 Start flannel (every node)

systemctl daemon-reload

systemctl enable flanneld

systemctl start flanneld

7.6 Inspect the etcd contents (any one node will do, since the data is replicated)

etcdctl --endpoints=https://10.192.44.129:2379 \

  --ca-file=/etc/kubernetes/ssl/ca.pem \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  ls /kube-centos/network/subnets

etcdctl --endpoints=https://10.192.44.129:2379 \

  --ca-file=/etc/kubernetes/ssl/ca.pem \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  get /kube-centos/network/config

7.7 Add the environment file flannel generated at startup to docker's systemd unit (every node)

vim /usr/lib/systemd/system/docker.service

EnvironmentFile=-/run/flannel/docker

systemctl daemon-reload && systemctl restart docker

7.8 Update the dockerd start configuration (every node)

vim /usr/lib/systemd/system/docker.service

EnvironmentFile=-/run/flannel/docker

ExecStart=/usr/bin/dockerd \

$DOCKER_OPT_BIP \

$DOCKER_OPT_IPMASQ \

$DOCKER_OPT_MTU \

--log-driver=json-file
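After docker restarts, its bridge should sit inside the subnet flannel leased to that node; a quick cross-check:

cat /run/flannel/subnet.env   # note FLANNEL_SUBNET, e.g. 172.30.34.1/24
ip addr show docker0          # docker0's address should fall in that subnet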

8. Deploy the node components

8.1 TLS bootstrapping configuration (master node)

cd /etc/kubernetes

kubectl create clusterrolebinding kubelet-bootstrap \

  --clusterrole=system:node-bootstrapper \

  --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-nodes \

  --clusterrole=system:node \

  --group=system:nodes

Notes:

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role so that kubelet has permission to create certificate signing requests.

Once kubelet passes authentication it sends a register-node request to kube-apiserver, so the system:nodes group must be bound to the system:node cluster role for kubelet to have permission to register the node.
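Both bindings can be verified after creation, for example:

kubectl get clusterrolebinding kubelet-bootstrap kubelet-nodes
kubectl describe clusterrolebinding kubelet-bootstrap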

8.2 Download the kubelet and kube-proxy binaries (every node)

wget  https://github.com/kubernetes/kubernetes/releases/download/v1.11.3/kubernetes.tar.gz

tar -xzvf kubernetes.tar.gz

cd kubernetes

./cluster/get-kube-binaries.sh

# If that fails, do it manually:

cd server/

tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kubelet /usr/local/bin/kubelet

cp kubernetes/server/bin/kube-proxy /usr/local/bin/kube-proxy

chmod +x /usr/local/bin/{kubelet,kube-proxy}

8.3 Configure systemd services for kubelet and kube-proxy

8.3.1 Create kubelet.service

cat > /usr/lib/systemd/system/kubelet.service << 'EOF'

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/local/bin/kubelet \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBELET_ADDRESS \

$KUBELET_PORT \

$KUBELET_HOSTNAME \

$KUBE_ALLOW_PRIV \

$KUBELET_POD_INFRA_CONTAINER \

$KUBELET_ARGS

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

8.3.2 Create kube-proxy.service

cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/local/bin/kube-proxy \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

8.3.3 Create the common config file

cd /etc/kubernetes

cat > /etc/kubernetes/config << EOF

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=2"

EOF

8.3.4 Create the kubelet config file (master)

cat > /etc/kubernetes/kubelet << EOF

###

## kubernetes kubelet (minion) config

#

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=10.192.44.129"

#

## The port for the info server to serve on

#KUBELET_PORT="--port=10250"

#

## You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=master"

#

## location of the api-server

## COMMENT THIS ON KUBERNETES 1.8+

#

## pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

#

## Add your own!

KUBELET_ARGS="--cluster-dns=10.254.0.2  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local.  --allow-privileged=true  --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"

EOF

8.3.5 Create the kubelet config file (node2)

cat > /etc/kubernetes/kubelet << EOF

###

## kubernetes kubelet (minion) config

#

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=10.192.44.127"

#

## The port for the info server to serve on

#KUBELET_PORT="--port=10250"

#

## You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=node2"

#

## location of the api-server

## COMMENT THIS ON KUBERNETES 1.8+

#

## pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

#

## Add your own!

KUBELET_ARGS="--cluster-dns=10.254.0.2  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local.  --allow-privileged=true  --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"

EOF

8.3.6 Create the kubelet config file (node3)

cat > /etc/kubernetes/kubelet << EOF

###

## kubernetes kubelet (minion) config

#

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=10.192.44.126"

#

## The port for the info server to serve on

#KUBELET_PORT="--port=10250"

#

## You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=node3"

#

## location of the api-server

## COMMENT THIS ON KUBERNETES 1.8+

#

## pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

#

## Add your own!

KUBELET_ARGS="--cluster-dns=10.254.0.2  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local.  --allow-privileged=true  --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"

EOF

8.3.7 Create the kube-proxy config file (master)

cat > /etc/kubernetes/proxy << EOF

###

# kubernetes proxy config

# default config should be adequate

# Add your own!

KUBE_PROXY_ARGS="--bind-address=10.192.44.129 --hostname-override=master  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"

EOF

8.3.8 Create the kube-proxy config file (node2)

cat > /etc/kubernetes/proxy << EOF

###

# kubernetes proxy config

# default config should be adequate

# Add your own!

KUBE_PROXY_ARGS="--bind-address=10.192.44.127 --hostname-override=node2  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"

EOF

8.3.9 Create the kube-proxy config file (node3)

cat > /etc/kubernetes/proxy << EOF

###

# kubernetes proxy config

# default config should be adequate

# Add your own!

KUBE_PROXY_ARGS="--bind-address=10.192.44.126 --hostname-override=node3  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"

EOF

8.3.10 Start kubelet

systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet

8.3.11 Start kube-proxy

systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy

8.3.12 View certificate signing requests (nodes submit them to the apiserver automatically)

kubectl get csr

8.3.13 Approve the requests on the master and check their status

kubectl certificate approve node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE

kubectl describe csr node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE

The approved request's status looks like this:

kubectl describe csr node-csr-hsBS9OyhOa8rK_Q48ee81giH17t6Nk4FL9IRWRt4ygw

Name:               node-csr-hsBS9OyhOa8rK_Q48ee81giH17t6Nk4FL9IRWRt4ygw

Labels:             <none>

Annotations:        <none>

CreationTimestamp:  Thu, 22 Nov 2018 20:19:09 +0800

Requesting User:    kubelet-bootstrap

Status:             Approved,Issued

Subject:

Common Name:    system:node:node3

Serial Number:  

Organization:   system:nodes

8.3.14 Check node status

kubectl get nodes

8.3.15 Create a test deployment

vim deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

kubectl create -f deploy.yaml

kubectl scale deployment nginx-deployment --replicas=4
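To confirm the deployment came up and scaled (pod names and node placement will differ):

kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx -o wide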

9. Deploy cluster DNS (CoreDNS)

9.1 Download the coredns configuration template, shown below:

coredns.yaml.sed

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.1.1
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: CLUSTER_DNS_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

9.2 Write the deploy script

cat > deploy.sh << 'EOF'

#!/bin/bash

# Deploys CoreDNS to a cluster currently running Kube-DNS.

SERVICE_CIDR=${1:-10.254.0.0/16}

POD_CIDR=${2:-172.30.0.0/16}

CLUSTER_DNS_IP=${3:-10.254.0.2}

CLUSTER_DOMAIN=${4:-cluster.local}

YAML_TEMPLATE=${5:-`pwd`/coredns.yaml.sed}

sed -e s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g -e s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g -e s?SERVICE_CIDR?$SERVICE_CIDR?g -e s?POD_CIDR?$POD_CIDR?g $YAML_TEMPLATE > coredns.yaml

EOF 

Note: adjust the address ranges above to match your own node and cluster networks.

9.3 Deploy coredns

chmod +x deploy.sh

./deploy.sh 

kubectl create -f coredns.yaml

9.4 Verify the DNS service

9.4.1 Create a test deployment

cat > busyboxdeploy.yaml << EOF

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        ports:
        - containerPort: 80
        args: ["/bin/sh","-c","sleep 1000"]

EOF

9.4.2 Exec into a pod and ping a service

kubectl exec busybox-deployment-6679c4bb96-86kfg  -it -- /bin/sh

# ping kubernetes

# ...

# The ping itself fails here because of the network setup, but the name is resolved correctly, which is what matters.
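A more direct check is to resolve a service name explicitly from inside the pod; busybox ships an nslookup applet (its output format varies between busybox builds, so treat this as an illustrative example):

nslookup kubernetes.default.svc.cluster.local
# Server:    10.254.0.2
# Name:      kubernetes.default.svc.cluster.local
# Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local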

10. Deploy Heapster

10.1 Download the yaml files

mkdir heapster

cd heapster

wget  https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

wget  https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

wget  https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

wget  https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

10.2 Change the container images in the yaml files (they default to Google's registry, which we cannot reach, so switch to copies that others have pushed to Docker Hub)

10.2.1 Edit grafana.yaml

k8s.gcr.io/heapster-grafana-amd64:v5.0.4 -> mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4

10.2.2 Edit heapster.yaml

k8s.gcr.io/heapster-amd64:v1.5.4 -> cnych/heapster-amd64:v1.5.4

10.2.3 Edit influxdb.yaml

k8s.gcr.io/heapster-influxdb-amd64:v1.5.2 -> fishchen/heapster-influxdb-amd64:v1.5.2
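With the images fixed, the four manifests need to be applied before their status can be checked; something like:

kubectl create -f heapster-rbac.yaml -f grafana.yaml -f heapster.yaml -f influxdb.yaml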

10.3 Check heapster status

kubectl get svc -n kube-system

10.4 Run a proxy on the master to allow external access

kubectl proxy --port=8096 --address="10.192.44.129" --accept-hosts='^*$'
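Through that proxy, the bundled Grafana UI should then be reachable via the apiserver's service proxy path; assuming the service is named monitoring-grafana as in the grafana.yaml above, the URL looks like:

http://10.192.44.129:8096/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/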

11. Deploy the dashboard

11.1 Download the dashboard yaml file

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml

11.2 Modify it as follows (the official manifest is used, but with the image swapped and a nodePort added):

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1

kind: Secret

