March 6, 2019: publishing a tutorial on deploying a Kubernetes cluster with kubeadm.
This installation uses kubeadm!
The Kubernetes version installed is 1.13.3, the latest release at the time of writing!
Every image and YAML file used in this article can be found on my GitHub!
GitHub: https://github.com/heyangguang
For any questions, contact me directly by email: heyangev@cn.ibm.com
Host list:
The operating system is CentOS 7.5.
| Hostname | IP address | Role |
| --- | --- | --- |
| k8smaster | 9.186.137.114 | master |
| k8snode-1 | 9.186.137.115 | node |
| k8snode-2 | 9.186.137.116 | node |
Base environment preparation:
Disable the firewall, SELinux, swap, and NetworkManager on all three hosts.
k8smaster:
```
[root@k8smaster ~]# systemctl stop firewalld
[root@k8smaster ~]# systemctl disable firewalld
[root@k8smaster ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
[root@k8smaster ~]# vim /etc/selinux/config
[root@k8smaster ~]# scp /etc/selinux/config root@k8snode-1:/etc/selinux/config
config                                        100%  546     1.1MB/s   00:00
[root@k8smaster ~]# scp /etc/selinux/config root@k8snode-2:/etc/selinux/config
config                                        100%  546     1.3MB/s   00:00
[root@k8smaster ~]# swapoff -a
[root@k8smaster ~]# vim /etc/fstab
[root@k8smaster ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Mar  4 17:23:04 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=3dd5660e-0905-4f1e-9fa3-9ce664d6eb94 /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
```
k8snode-1:
```
[root@k8snode-1 ~]# systemctl stop firewalld
[root@k8snode-1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8snode-1 ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
[root@k8snode-1 ~]# swapoff -a
[root@k8snode-1 ~]# vim /etc/fstab
```
k8snode-2:
```
[root@k8snode-2 ~]# systemctl stop firewalld
[root@k8snode-2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8snode-2 ~]# swapoff -a
[root@k8snode-2 ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
[root@k8snode-2 ~]# vim /etc/fstab
```
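The transcripts open /etc/selinux/config and /etc/fstab in vim without showing the actual edits. A minimal non-interactive equivalent is sketched below (my addition; verify both files afterwards, since the sed patterns assume the stock CentOS 7.5 layout):

```bash
# Disable SELinux permanently (takes effect after reboot) and immediately.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0

# Comment out the swap entry so swap stays off across reboots.
sed -i '/^[^#].*\bswap\b/ s/^/#/' /etc/fstab
swapoff -a
```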
Yum repository configuration:
Clear the old repo files on k8snode-1 and k8snode-2 first, then copy the repo files over from the master. (The master's yum.repos.d is wiped as well; its new CentOS-Base.repo, epel.repo, and k8s.repo are already in place by the time of the listing below, though the article does not show how they were written.)
k8smaster:
```
[root@k8smaster yum.repos.d]# rm -rf *
[root@k8smaster yum.repos.d]# ll
total 12
-rw-r--r-- 1 root root 2206 Mar  5 18:50 CentOS-Base.repo
-rw-r--r-- 1 root root  923 Mar  5 18:50 epel.repo
-rw-r--r-- 1 root root  276 Mar  5 18:50 k8s.repo
[root@k8smaster yum.repos.d]# scp * k8snode-1:/etc/yum.repos.d/
CentOS-Base.repo                              100% 2206   352.0KB/s   00:00
epel.repo                                     100%  923   160.8KB/s   00:00
k8s.repo                                      100%  276    48.2KB/s   00:00
[root@k8smaster yum.repos.d]# scp * k8snode-2:/etc/yum.repos.d/
CentOS-Base.repo                              100% 2206   216.3KB/s   00:00
epel.repo                                     100%  923   157.1KB/s   00:00
k8s.repo                                      100%  276    47.5KB/s   00:00
```
k8snode-1:
```
[root@k8snode-1 ~]# cd /etc/yum.repos.d/
[root@k8snode-1 yum.repos.d]# rm -rf *
```
k8snode-2:
```
[root@k8snode-2 ~]# rm -rf /etc/yum.repos.d/*
```
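The contents of the three repo files are never shown. CentOS-Base.repo and epel.repo are the stock CentOS and EPEL definitions; for reference, a typical k8s.repo of that era looked like the sketch below (an assumption on my part; substitute a mirror reachable from your network):

```
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
```

An `exclude=kube*` line like this is also why the install commands later pass --disableexcludes=kubernetes.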
Install the Docker engine:
k8smaster:
```
[root@k8smaster yum.repos.d]# yum -y install docker
[root@k8smaster yum.repos.d]# systemctl start docker ; systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
```
k8snode-1:
```
[root@k8snode-1 yum.repos.d]# yum -y install docker
[root@k8snode-1 yum.repos.d]# systemctl start docker ; systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
```
k8snode-2:
```
[root@k8snode-2 ~]# yum -y install docker
[root@k8snode-2 ~]# systemctl start docker ; systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
```
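One check worth doing here (my addition, not in the original): kubeadm expects the kubelet's cgroup driver to match Docker's, and a mismatch is a common cause of kubelet startup failures. Docker's driver can be read from docker info:

```bash
# Prints "Cgroup Driver: systemd" or "Cgroup Driver: cgroupfs".
docker info 2>/dev/null | grep -i 'cgroup driver'
```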
Set the kernel parameters Kubernetes needs:
Write the two bridge-nf settings on each host and apply them with `sysctl --system` (plain `sysctl -p` does not read files under /etc/sysctl.d/).

k8smaster:
```
[root@k8smaster ~]# cat << EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
[root@k8smaster ~]# sysctl --system
```
k8snode-1:
```
[root@k8snode-1 ~]# cat << EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
[root@k8snode-1 ~]# sysctl --system
```
k8snode-2:
```
[root@k8snode-2 ~]# cat << EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
[root@k8snode-2 ~]# sysctl --system
```
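If sysctl complains that these keys do not exist, the br_netfilter module is not loaded. A minimal sketch to load it now and at every boot:

```bash
# Load the bridge netfilter module so the bridge-nf-call sysctls appear.
modprobe br_netfilter
# Persist across reboots via systemd-modules-load.
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```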
Install the Kubernetes components:
k8smaster:
```
[root@k8smaster ~]# yum install -y kubelet-1.13.3 kubeadm-1.11.1 kubectl-1.13.3 --disableexcludes=kubernetes
[root@k8smaster ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
```
k8snode-1:
```
[root@k8snode-1 ~]# yum install -y kubelet-1.13.3 kubeadm-1.11.1 kubectl-1.13.3 --disableexcludes=kubernetes
[root@k8snode-1 ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
```
k8snode-2:
```
[root@k8snode-2 ~]# yum install -y kubelet-1.13.3 kubeadm-1.11.1 kubectl-1.13.3 --disableexcludes=kubernetes
[root@k8snode-2 ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
```
Note that the transcripts install kubeadm 1.11.1 next to kubelet/kubectl 1.13.3, which is what triggers the version-skew warning in the kubeadm init output further down.
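For anyone reproducing this, a version-matched install avoids that warning (a sketch; it assumes kubeadm-1.13.3 is available in your configured repo):

```bash
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 --disableexcludes=kubernetes
systemctl enable kubelet ; systemctl start kubelet
```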
Import the required images:
Why import images? Because kubeadm pulls them from Google's registry (k8s.gcr.io), which is unreachable from many networks, you know what I mean! So I downloaded them ahead of time; just load them when needed.
Images required:
```
k8s.gcr.io/kube-apiserver:v1.13.3
k8s.gcr.io/kube-controller-manager:v1.13.3
k8s.gcr.io/kube-scheduler:v1.13.3
k8s.gcr.io/kube-proxy:v1.13.3
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
k8s.gcr.io/pause:3.1
```
All of the images used here are packed into a tarball on my GitHub; download it from there if you need it.
The import procedure is identical on every host: load the tarball on each one. (A quick verification step and a mirror-pull alternative are sketched after the transcripts.)
k8smaster:
```
[root@k8smaster ~]# ls
anaconda-ks.cfg  images.tar
[root@k8smaster ~]# ll
total 1834184
-rw-------. 1 root root       1245 Mar  4 17:27 anaconda-ks.cfg
-rw-r--r--. 1 root root 1878197248 Mar  5 18:31 images.tar
[root@k8smaster ~]# docker load < images.tar
```
k8snode-1:
```
[root@k8snode-1 ~]# ls
anaconda-ks.cfg  images.tar
[root@k8snode-1 ~]# docker load < images.tar
```
k8snode-2:
```
[root@k8snode-2 ~]# ls
anaconda-ks.cfg  images.tar
[root@k8snode-2 ~]# docker load < images.tar
```
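After loading, confirm the images are present under exactly the names kubeadm expects:

```bash
docker images | grep k8s.gcr.io
```

If you cannot get the tarball, a common workaround at the time was to pull the same images from a public mirror and retag them. The mirror namespace below is an assumption on my part; substitute one reachable from your network:

```bash
# Pull each image from a mirror, then retag it to the k8s.gcr.io name kubeadm expects.
for img in kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 \
           kube-scheduler:v1.13.3 kube-proxy:v1.13.3 \
           etcd:3.2.24 coredns:1.2.6 pause:3.1; do
  docker pull registry.aliyuncs.com/google_containers/${img}
  docker tag registry.aliyuncs.com/google_containers/${img} k8s.gcr.io/${img}
done
```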
Initialize kubeadm on the master to generate the node join token:
Initialize the environment with kubeadm init: --kubernetes-version pins the version, and --pod-network-cidr sets the pod network CIDR. The CIDR can be chosen freely as long as it does not overlap any existing network; 10.244.0.0/16 matches the default in the Flannel manifest applied later.
```
[root@k8smaster ~]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.13.3
[preflight] running pre-flight checks
    [WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.13.3. Kubeadm version: 1.11.x
I0305 19:49:27.250624    5373 kernel_validator.go:81] Validating kernel version
I0305 19:49:27.250718    5373 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.186.137.114]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [9.186.137.114 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 19.502118 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8smaster as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8smaster as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8smaster" as an annotation
[bootstraptoken] using token: xjzf96.nv0qhqwj9j47r1tv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 9.186.137.114:6443 --token xjzf96.nv0qhqwj9j47r1tv --discovery-token-ca-cert-hash sha256:e386175a5cae597dec6bfeb7c92d01bc5fe052313b50dc48e419057c8c3f824c

[root@k8smaster ~]# mkdir -p $HOME/.kube
[root@k8smaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8smaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
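The join token printed above expires after 24 hours by default. If it has expired by the time a node joins, generate a fresh join command on the master:

```bash
kubeadm token create --print-join-command
```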
Join the nodes to the cluster with kubeadm join:
Caution! The hosts' clocks must be synchronized, or the join will fail
with this error:
```
[discovery] Failed to request cluster info, will try again: [Get https://9.186.137.114:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
```
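A quick way to bring all three hosts into sync (a sketch; it assumes outbound NTP access, and pool.ntp.org as the server is my choice, not the original author's):

```bash
# One-shot clock sync; for continuous sync, enable chronyd or ntpd instead.
yum -y install ntpdate
ntpdate pool.ntp.org
```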
k8snode-1:
```
[root@k8snode-1 ~]# kubeadm join 9.186.137.114:6443 --token xjzf96.nv0qhqwj9j47r1tv --discovery-token-ca-cert-hash sha256:e386175a5cae597dec6bfeb7c92d01bc5fe052313b50dc48e419057c8c3f824c
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0305 20:02:24.453910    5983 kernel_validator.go:81] Validating kernel version
I0305 20:02:24.454026    5983 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "9.186.137.114:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://9.186.137.114:6443"
[discovery] Requesting info from "https://9.186.137.114:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "9.186.137.114:6443"
[discovery] Successfully established connection with API Server "9.186.137.114:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8snode-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
k8snode-2:
```
[root@k8snode-2 ~]# kubeadm join 9.186.137.114:6443 --token xjzf96.nv0qhqwj9j47r1tv --discovery-token-ca-cert-hash sha256:e386175a5cae597dec6bfeb7c92d01bc5fe052313b50dc48e419057c8c3f824c
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0305 19:51:39.452856    5036 kernel_validator.go:81] Validating kernel version
I0305 19:51:39.452954    5036 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "9.186.137.114:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://9.186.137.114:6443"
[discovery] Requesting info from "https://9.186.137.114:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "9.186.137.114:6443"
[discovery] Successfully established connection with API Server "9.186.137.114:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8snode-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Install the Flannel network:
k8smaster:
```
[root@k8smaster ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
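The kube-flannel.yml applied here is the copy in my GitHub repo. If you prefer to fetch it from upstream, the Flannel project published the manifest at roughly the path below at the time; treat the URL as an assumption and check the project's current documentation before using it:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```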
Check the Kubernetes cluster status:
k8smaster:
```
[root@k8smaster ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-kmfct            1/1     Running   0          8m26s
coredns-86c58d9df4-qn2k2            1/1     Running   0          8m26s
etcd-k8smaster                      1/1     Running   0          8m35s
kube-apiserver-k8smaster            1/1     Running   1          8m10s
kube-controller-manager-k8smaster   1/1     Running   0          7m43s
kube-flannel-ds-amd64-9rmfz         1/1     Running   0          5m9s
kube-flannel-ds-amd64-vnwtf         1/1     Running   0          12s
kube-flannel-ds-amd64-x7q4s         1/1     Running   0          51s
kube-proxy-7zl9n                    1/1     Running   0          7m31s
kube-proxy-t2sx9                    1/1     Running   0          8m27s
kube-proxy-txsfr                    1/1     Running   0          7m27s
kube-scheduler-k8smaster            1/1     Running   0          8m56s
```
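Also confirm that both nodes registered and reached Ready once Flannel came up (all three nodes should report STATUS Ready; exact ages will differ):

```bash
kubectl get nodes
```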
At this point the kubeadm deployment of the Kubernetes cluster is complete!
I hope you will point out any problems you find, so we can move forward together!
Thank you all!