Kubernetes 1.11.2 Notes, Part II



Configure kubelet authentication

Authorize the kube-apiserver to call the kubelet API for operations such as exec, run, and logs.

# This RBAC binding only needs to be created once

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
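A quick sanity check of the binding (system:kubelet-api-admin also grants read access to nodes, so the second command should print yes):

# inspect the binding
kubectl describe clusterrolebinding kube-apiserver:kubelet-apis

# check what the kubernetes user can now do
kubectl auth can-i get nodes --as kubernetes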

Create the bootstrap kubeconfig files

Note: a token is valid for 1 day; if it is not used before then, it expires automatically and a new token must be created.

Create a token for every kubelet in the cluster

Remember to change the hostname (the suffix on --groups) for each machine

[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:master1 --kubeconfig ~/.kube/config
of2phx.v39lq3ofeh0w6f3m

[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:master2 --kubeconfig ~/.kube/config
b3stk9.edz2iylppqjo5qbc

[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:master3 --kubeconfig ~/.kube/config
ck2uqr.upeu75jzjj1ko901

[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:node1 --kubeconfig ~/.kube/config
1ocjm9.7qa3rd5byuft9gwr

[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:node2 --kubeconfig ~/.kube/config
htsqn3.z9z6579gxw5jdfzd
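Since the five commands differ only in the group suffix, the same tokens can also be minted in one loop (a convenience sketch; hostnames as above):

for host in master1 master2 master3 node1 node2; do
  kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${host} \
    --kubeconfig ~/.kube/config
done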

List the generated tokens

[root@master1 kubernetes]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
1ocjm9.7qa3rd5byuft9gwr   23h       2018-09-02T16:06:32+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node1
b3stk9.edz2iylppqjo5qbc   23h       2018-09-02T16:03:46+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master2
ck2uqr.upeu75jzjj1ko901   23h       2018-09-02T16:05:16+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master3
htsqn3.z9z6579gxw5jdfzd   23h       2018-09-02T16:06:34+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node2
of2phx.v39lq3ofeh0w6f3m   23h       2018-09-02T16:03:40+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master1

To tell the files apart, the kubeconfig for each machine is generated as <hostname>-bootstrap.kubeconfig first, then copied into place as /etc/kubernetes/bootstrap.kubeconfig.

Generate the bootstrap.kubeconfig for master1

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=master1-bootstrap.kubeconfig

# Set client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=of2phx.v39lq3ofeh0w6f3m \
  --kubeconfig=master1-bootstrap.kubeconfig


# Set the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=master1-bootstrap.kubeconfig
  
  
# Use the default context
kubectl config use-context default --kubeconfig=master1-bootstrap.kubeconfig

# Move the generated master1-bootstrap.kubeconfig into place

mv master1-bootstrap.kubeconfig /etc/kubernetes/bootstrap.kubeconfig

Generate the bootstrap.kubeconfig for master2

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=master2-bootstrap.kubeconfig

# Set client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=b3stk9.edz2iylppqjo5qbc \
  --kubeconfig=master2-bootstrap.kubeconfig


# Set the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=master2-bootstrap.kubeconfig
  
  
# Use the default context
kubectl config use-context default --kubeconfig=master2-bootstrap.kubeconfig


# Copy the generated master2-bootstrap.kubeconfig to master2

scp master2-bootstrap.kubeconfig 192.168.161.162:/etc/kubernetes/bootstrap.kubeconfig

Generate the bootstrap.kubeconfig for master3

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=master3-bootstrap.kubeconfig

# Set client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=ck2uqr.upeu75jzjj1ko901 \
  --kubeconfig=master3-bootstrap.kubeconfig


# Set the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=master3-bootstrap.kubeconfig
  
  
# Use the default context
kubectl config use-context default --kubeconfig=master3-bootstrap.kubeconfig


# Copy the generated master3-bootstrap.kubeconfig to master3

scp master3-bootstrap.kubeconfig 192.168.161.163:/etc/kubernetes/bootstrap.kubeconfig

Generate the bootstrap.kubeconfig for node1

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=node1-bootstrap.kubeconfig

# Set client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=1ocjm9.7qa3rd5byuft9gwr \
  --kubeconfig=node1-bootstrap.kubeconfig


# Set the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=node1-bootstrap.kubeconfig
  
  
# Use the default context
kubectl config use-context default --kubeconfig=node1-bootstrap.kubeconfig


# Copy the generated node1-bootstrap.kubeconfig to node1

scp node1-bootstrap.kubeconfig 192.168.161.77:/etc/kubernetes/bootstrap.kubeconfig

Generate the bootstrap.kubeconfig for node2

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=node2-bootstrap.kubeconfig

# Set client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=htsqn3.z9z6579gxw5jdfzd \
  --kubeconfig=node2-bootstrap.kubeconfig


# Set the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=node2-bootstrap.kubeconfig
  
  
# Use the default context
kubectl config use-context default --kubeconfig=node2-bootstrap.kubeconfig


# Copy the generated node2-bootstrap.kubeconfig to node2

scp node2-bootstrap.kubeconfig 192.168.161.78:/etc/kubernetes/bootstrap.kubeconfig
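All five kubeconfigs above follow the same four steps, so they can also be generated in one loop (a sketch; the hostname-to-token map comes from kubeadm token list, and each file still has to be moved or scp'd to /etc/kubernetes/bootstrap.kubeconfig on its machine, as shown above):

declare -A TOKENS=(
  [master1]=of2phx.v39lq3ofeh0w6f3m
  [master2]=b3stk9.edz2iylppqjo5qbc
  [master3]=ck2uqr.upeu75jzjj1ko901
  [node1]=1ocjm9.7qa3rd5byuft9gwr
  [node2]=htsqn3.z9z6579gxw5jdfzd
)

for host in "${!TOKENS[@]}"; do
  cfg=${host}-bootstrap.kubeconfig
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=${cfg}
  kubectl config set-credentials kubelet-bootstrap \
    --token=${TOKENS[$host]} \
    --kubeconfig=${cfg}
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=${cfg}
  kubectl config use-context default --kubeconfig=${cfg}
done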

Configure bootstrap RBAC permissions

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers


# Without this binding, kubelet fails with an error like the following

failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:1jezb7" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope

Create a ClusterRole that allows auto-approval of the related CSR requests

vi /etc/kubernetes/tls-instructs-csr.yaml


kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]


# Apply the yaml file
[root@master1 kubernetes]# kubectl apply -f /etc/kubernetes/tls-instructs-csr.yaml
clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeserver created

[root@master1 kubernetes]# kubectl describe ClusterRole/system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
Name:         system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"system:certificates.k8s.io:certificatesigningreq...
PolicyRule:
  Resources                                                      Non-Resource URLs  Resource Names  Verbs
  ---------                                                      -----------------  --------------  -----
  certificatesigningrequests.certificates.k8s.io/selfnodeserver  []                 []              [create]
# Bind the ClusterRoles to the appropriate user groups


# Auto-approve CSRs from the system:bootstrappers group for the initial certificate request during TLS bootstrapping

kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers


# Auto-approve CSRs from the system:nodes group for renewing the kubelet client certificate used to talk to the apiserver

kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes


# Auto-approve CSRs from the system:nodes group for renewing the kubelet serving certificate on API port 10250

kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
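If a CSR is ever not picked up by these auto-approval bindings, it can still be inspected and approved by hand (standard kubectl commands; <csr-name> is a placeholder):

# list pending CSRs
kubectl get csr

# approve one manually
kubectl certificate approve <csr-name>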

Node side

The components to deploy on each node are:

docker, calico, kubelet, kube-proxy

On the node side, master HA is implemented by load-balancing the API through Nginx.

# Among the masters, components other than the api server elect a leader through etcd; the api server itself gets no special handling by default.

An nginx instance runs on every node and reverse-proxies all of the api servers;

kubelet and kube-proxy on each node connect to the local nginx proxy port;

when nginx finds a backend unreachable, it automatically drops the failing api server, which provides HA for the api server.


This pattern differs from the setups I had worked with before, where the architecture was load balancing in front of KUBE-APISERVER and every node connected to the load balancer's virtual IP (VIP).

Create the Nginx proxy

An Nginx proxy must be created on every node. One caveat: when a master also acts as a node, it does not need the nginx-proxy (the local apiserver already listens on port 6443).

# Create the config directory
mkdir -p /etc/nginx

# Write the proxy configuration
cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.161.161:6443;
        server 192.168.161.162:6443;
    }

    server {
        listen        0.0.0.0:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
EOF

# Make the config readable
chmod +r /etc/nginx/nginx.conf
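Before wiring nginx into systemd, the configuration can be syntax-checked with the same image (a quick sanity check; assumes the node can already pull nginx:1.13.7-alpine):

docker run --rm -v /etc/nginx:/etc/nginx nginx:1.13.7-alpine nginx -t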
# Run Nginx in a docker container, managed by systemd

cat << EOF > /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:1.13.7-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

Start Nginx

systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy

journalctl -u nginx-proxy -f   ## follow the live log

Sep 01 17:34:55 node1 docker[4032]: 1.13.7-alpine: Pulling from library/nginx
Sep 01 17:34:57 node1 docker[4032]: 128191993b8a: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: 655cae3ea06e: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: dbc72c3fd216: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: f391a4589e37: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: f391a4589e37: Waiting
Sep 01 17:35:03 node1 docker[4032]: dbc72c3fd216: Verifying Checksum
Sep 01 17:35:03 node1 docker[4032]: dbc72c3fd216: Download complete
Sep 01 17:35:07 node1 docker[4032]: f391a4589e37: Verifying Checksum
Sep 01 17:35:07 node1 docker[4032]: f391a4589e37: Download complete
Sep 01 17:35:15 node1 docker[4032]: 128191993b8a: Verifying Checksum
Sep 01 17:35:15 node1 docker[4032]: 128191993b8a: Download complete
Sep 01 17:35:17 node1 docker[4032]: 128191993b8a: Pull complete
Sep 01 17:35:50 node1 docker[4032]: 655cae3ea06e: Verifying Checksum
Sep 01 17:35:50 node1 docker[4032]: 655cae3ea06e: Download complete
Sep 01 17:35:51 node1 docker[4032]: 655cae3ea06e: Pull complete
Sep 01 17:35:51 node1 docker[4032]: dbc72c3fd216: Pull complete
Sep 01 17:35:51 node1 docker[4032]: f391a4589e37: Pull complete
Sep 01 17:35:51 node1 docker[4032]: Digest: sha256:34aa80bb22c79235d466ccbbfa3659ff815100ed21eddb1543c6847292010c4d
Sep 01 17:35:51 node1 docker[4032]: Status: Downloaded newer image for nginx:1.13.7-alpine
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: using the "epoll" event method
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: nginx/1.13.7
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: built by gcc 6.2.1 20160822 (Alpine 6.2.1)
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: OS: Linux 3.10.0-514.el7.x86_64
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: start worker processes
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: start worker process 5

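Once the proxy is up, hitting the local port should return an answer from one of the apiservers rather than a connection error; depending on the anonymous-auth settings this is a 401 or 403 JSON body, which still proves the proxy path works:

curl -k https://127.0.0.1:6443/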
Create the kubelet.service file

Remember to change the node's hostname below (--hostname-override).

# Create the kubelet working directory

mkdir -p /var/lib/kubelet

vi /etc/systemd/system/kubelet.service


[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --hostname-override=node1 \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:pause-amd64_3.1 \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.config.json \
  --cert-dir=/etc/kubernetes/ssl \
  --logtostderr=true \
  --v=2

[Install]
WantedBy=multi-user.target

Create the kubelet config file

vi /etc/kubernetes/kubelet.config.json


{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.161.77",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "RotateCertificates": true,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "MaxPods": "512",
  "failSwapOn": false,
  "containerLogMaxSize": "10Mi",
  "containerLogMaxFiles": 5,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}

## Remember to change the IP addresses above on the other nodes
# Notes on the settings above:
node1: this machine's hostname
10.254.0.2: the pre-allocated cluster DNS address
cluster.local.: the kubernetes cluster domain
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:pause-amd64_3.1 is the pod infra image, i.e. gcr.io/google_containers/pause-amd64:3.1 from gcr, re-pushed to a private registry for faster pulls.
"clusterDNS": ["10.254.0.2"]: multiple DNS addresses may be configured, comma-separated; the host's DNS can be included.
Make the same changes on the other nodes.

Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

journalctl -u kubelet -f
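Once kubelet is up, the bootstrap CSR should be approved automatically by the bindings created earlier; from a master, the node should register within a minute or so:

# the CSR should show Approved,Issued
kubectl get csr

# the node appears once its certificate is issued
kubectl get nodes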

Create the kube-proxy certificate

# cfssl is not installed on the node side,
# so go back to a master machine to generate the certificates, then copy them over

cd /opt/ssl

vi kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Generate the kube-proxy certificate and private key
/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes  kube-proxy-csr.json | /opt/local/cfssl/cfssljson -bare kube-proxy
  
# Check the generated files
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

# Copy into place

cp kube-proxy* /etc/kubernetes/ssl/

scp ca.pem kube-proxy* 192.168.161.77:/etc/kubernetes/ssl/

scp ca.pem kube-proxy* 192.168.161.78:/etc/kubernetes/ssl/

Create the kube-proxy kubeconfig file

# Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig


# Set client credentials

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
  
  
# Set the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig



# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy to the nodes that need it

scp kube-proxy.kubeconfig 192.168.161.77:/etc/kubernetes/

scp kube-proxy.kubeconfig 192.168.161.78:/etc/kubernetes/

Create the kube-proxy.service file

As of 1.10, official ipvs support is the default configuration. The --masquerade-all option must be added, otherwise no rules are added in ipvs when a svc is created.

Enabling ipvs requires the ipvsadm, ipset, and conntrack packages; install them on each node

yum install ipset ipvsadm conntrack-tools.x86_64 -y
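ipvs also needs its kernel modules loaded; the usual set on CentOS 7 is the following (an assumption based on the standard ipvs proxier requirements, not from the original text):

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# confirm they are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4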

The parameters for the yaml config file are defined here:

https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go

cd /etc/kubernetes/

vi  kube-proxy.config.yaml


apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.161.77
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.254.64.0/18
healthzBindAddress: 192.168.161.77:10256
hostnameOverride: node1             ## remember to change the hostname here for each node
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.161.77:10249
mode: "ipvs"
# Create the kube-proxy working directory

mkdir -p /var/lib/kube-proxy

vi /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.config.yaml \
  --logtostderr=true \
  --v=1
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Check the ipvs state

[root@node1 kubernetes]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.161.161:6443         Masq    1      0          0
  -> 192.168.161.162:6443         Masq    1      0          0

Configure the Calico network

Official docs: https://docs.projectcalico.org/v3.1/introduction

Download the Calico yaml

# Download the yaml files

wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml

wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

Download the images

# Upstream images (hosted abroad, hard to pull from within China)
quay.io/calico/node:v3.1.3
quay.io/calico/cni:v3.1.3
quay.io/calico/kube-controllers:v3.1.3


# Mirror hosted in China
jicki/node:v3.1.3
jicki/cni:v3.1.3
jicki/kube-controllers:v3.1.3

# Aliyun mirror
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:node_v3.1.3
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:cni_v3.1.3
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:kube-controllers_v3.1.3

# Rewrite the image references
sed -i 's/quay\.io\/calico/jicki/g'  calico.yaml
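To use the Aliyun mirror instead, the tag naming differs (name:tag becomes zhdya_cc:name_tag), so the substitution has to rewrite both parts (a hedged sketch matching the image names listed above):

sed -i 's#quay\.io/calico/\([a-z-]*\):\(v[0-9.]*\)#registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:\1_\2#g' calico.yaml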

Modify the configuration

vi calico.yaml

# Pay attention to the following settings:


# etcd endpoints

  etcd_endpoints: "https://192.168.161.161:2379,https://192.168.161.162:2379,https://192.168.161.163:2379"
  
 
# etcd certificate paths
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files. 
    etcd_ca: "/calico-secrets/etcd-ca"  
    etcd_cert: "/calico-secrets/etcd-cert"
    etcd_key: "/calico-secrets/etcd-key"  


# base64-encoded etcd certificates (run the command shown in each field and paste the resulting base64 string)

data:
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')
  
## Remove the parentheses above; each field should contain only the generated base64 string
  
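The three values can also be generated and spliced into calico.yaml in one step (a convenience sketch; it assumes the etcd-key:/etcd-cert:/etcd-ca: lines in the Secret have been uncommented first):

ETCD_KEY=$(base64 -w0 < /etc/kubernetes/ssl/etcd-key.pem)
ETCD_CERT=$(base64 -w0 < /etc/kubernetes/ssl/etcd.pem)
ETCD_CA=$(base64 -w0 < /etc/kubernetes/ssl/ca.pem)

sed -i -e "s#etcd-key: .*#etcd-key: ${ETCD_KEY}#" \
       -e "s#etcd-cert: .*#etcd-cert: ${ETCD_CERT}#" \
       -e "s#etcd-ca: .*#etcd-ca: ${ETCD_CA}#" calico.yaml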
# Change the IP range allocated to pods

            - name: CALICO_IPV4POOL_CIDR
              value: "10.254.64.0/18"

Check the services

[root@master1 kubernetes]# kubectl get po -n kube-system -o wide
NAME                                      READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
calico-kube-controllers-79cfd7887-xbsd4   1/1       Running   5          11d       192.168.161.77   node1     <none>
calico-node-2545t                         2/2       Running   0          29m       192.168.161.78   node2     <none>
calico-node-tbptz                         2/2       Running   7          11d       192.168.161.77   node1     <none>


[root@master1 kubernetes]# kubectl get nodes -o wide
NAME      STATUS    ROLES     AGE       VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node1     Ready     <none>    11d       v1.11.2   192.168.161.77   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64   docker://17.3.2
node2     Ready     <none>    29m       v1.11.2   192.168.161.78   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64   docker://17.3.2

Update the kubelet configuration

Both node machines need this change

# kubelet needs the cni network plugin:    --network-plugin=cni

vim /etc/systemd/system/kubelet.service


  --network-plugin=cni \


# Reload and restart

systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet.service

Check cross-node connectivity:

[root@node1 ~]# ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.254.102.128  netmask 255.255.255.255
        tunnel   txqueuelen 1  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@node2 ~]# ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.254.75.0  netmask 255.255.255.255
        tunnel   txqueuelen 1  (IPIP Tunnel)
        RX packets 2  bytes 168 (168.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 168 (168.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    
Ping node1's tunnel address directly from node2:

[root@node2 ~]# ping 10.254.102.128
PING 10.254.102.128 (10.254.102.128) 56(84) bytes of data.
64 bytes from 10.254.102.128: icmp_seq=1 ttl=64 time=72.3 ms
64 bytes from 10.254.102.128: icmp_seq=2 ttl=64 time=0.272 ms

Install calicoctl

calicoctl is the management client for the calico network; it only needs to be set up on a single node.
# Download the binary

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl

mv calicoctl /usr/local/bin/

chmod +x /usr/local/bin/calicoctl


# Create the calicoctl.cfg config file

mkdir /etc/calico

vim /etc/calico/calicoctl.cfg


apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"


# Check calico status

[root@node1 src]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.161.78 | node-to-node mesh | up    | 06:54:19 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.


[root@node1 src]# calicoctl get node        ## this is run on a node; nodes do not have /root/.kube/config, just copy it over from a master
NAME
node1
node2
