Kubernetes Tutorial 2: Installing Kubernetes 1.17 on CentOS with kubeadm

This article follows the official Kubernetes documentation, but uses Aliyun (Alibaba Cloud) mirrors for all images and packages to keep downloads fast from within China.

Preparation

Kubernetes divides the machines in a cluster into a Master and a set of Nodes. The Master runs the cluster-management processes kube-apiserver, kube-controller-manager, and kube-scheduler, which together provide resource management, Pod scheduling, elastic scaling, security control, monitoring, and error correction for the whole cluster, all fully automated.

Nodes are the worker machines that run the actual applications; the smallest unit of execution that Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which create, start, monitor, restart, and destroy Pods, and implement a software-mode load balancer.

The following steps must be performed on every machine.

IP            OS version   Role
192.168.1.77  CentOS 7.7   master
192.168.1.78  CentOS 7.7   node1

The minimum requirements are:

  • 2 GB or more of RAM per machine (anything less leaves little room for your applications)
  • 2 or more CPU cores

The installation is walked through step by step below.

Preliminary setup

kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. Before installing, complete the following preparation.

  • All machines in the cluster must have full network connectivity to one another (public or private network is fine).

  • No two nodes may share a hostname, MAC address, or product_uuid. Verify the product_uuid with sudo cat /sys/class/dmi/id/product_uuid; a quick check is sketched below.
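    A minimal cross-node check: run the following on every machine and confirm the outputs differ.

    # list the MAC address of each network interface
    ip link show | grep link/ether
    # print this machine's product_uuid
    sudo cat /sys/class/dmi/id/product_uuid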

  • Open the required ports on the machines. See: check required ports.

  • Disable swap. You must disable swap for the kubelet to work properly:

    # disable swap now, and remove the swap entry from /etc/fstab
    swapoff -a
    yes | cp /etc/fstab /etc/fstab_bak
    cat /etc/fstab_bak |grep -v swap > /etc/fstab
  • Make sure iptables is not using the nftables backend. The nftables backend is incompatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy. RHEL 8 cannot switch back to legacy mode and is therefore incompatible with the current kubeadm packages. The problem affects Debian 10 (Buster), Ubuntu 19.04, Fedora 29, and newer releases; a workaround for those distros is sketched below.
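    On the affected distributions (not needed on CentOS 7, which still defaults to the legacy backend), the official Kubernetes docs suggest switching the binaries to legacy mode; a sketch:

    # switch iptables and ip6tables to the legacy (non-nftables) backend
    update-alternatives --set iptables /usr/sbin/iptables-legacy
    update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy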

  • Disable the firewall and SELinux:

    # disable the firewall
    systemctl stop firewalld && systemctl disable firewalld
    yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

    # disable SELinux
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
  • Set a distinct hostname on each machine:

    hostnamectl set-hostname master # or node1
    echo "127.0.0.1 $(hostname)" >> /etc/hosts
    timedatectl set-timezone Asia/Shanghai
  • Persist the journald logs (optional). See: https://blog.steamedfish.org/post/systemd-journald/

    mkdir /var/log/journal # directory for persistent logs
    mkdir /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
    [Journal]
    # persist logs to disk
    Storage=persistent
    # compress historical logs
    Compress=yes
    SyncIntervalSec=5m
    RateLimitInterval=30s
    RateLimitBurst=1000
    # maximum disk usage: 10G
    SystemMaxUse=10G
    # maximum size of a single log file: 200M
    SystemMaxFileSize=200M
    # keep logs for 2 weeks
    MaxRetentionSec=2week
    # do not forward logs to syslog
    ForwardToSyslog=no
    EOF
    systemctl restart systemd-journald

Installing Docker

Kubernetes has used the CRI (Container Runtime Interface) since 1.6 and supports multiple container runtimes, including Docker, CRI-O, and containerd, but the default runtime is still Docker, via the dockershim CRI implementation built into the kubelet. For installation details see: Docker

To avoid traffic being routed incorrectly because iptables is bypassed, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration.

modprobe br_netfilter

# set the required sysctl parameters; these persist across reboots
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness=0
EOF

sysctl --system

Use lsmod |grep br_netfilter to check that the module loaded correctly.

Then install Docker, using the Aliyun repository:

# install dependencies: http://dockerdocs.gclearning.cn/install/linux/docker-ce/centos/
yum install yum-utils device-mapper-persistent-data lvm2 -y
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install containerd.io-1.2.10 docker-ce-19.03.4 docker-ce-cli-19.03.4 -y
systemctl start docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.cn-hangzhou.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

#curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io #https://www.daocloud.io/mirror
mkdir -p /etc/systemd/system/docker.service.d
systemctl enable docker && systemctl daemon-reload && systemctl restart docker

On Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so each node's Docker cgroup driver is changed to systemd here, i.e. native.cgroupdriver=systemd.
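To confirm the change took effect after the restart, check the cgroup driver that Docker reports; it should read systemd:

docker info 2>/dev/null | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd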

Installing the Kubernetes tools

Install the following packages on every machine:

  • kubeadm: the command that initializes the cluster.
  • kubelet: a long-running daemon on every node that starts pods and containers.
  • kubectl: the command-line tool for talking to the cluster.

kubeadm does not install or manage kubelet or kubectl for you, so you must make sure their versions match the control plane that kubeadm installs. Otherwise you risk version skew, which can lead to unexpected errors and problems. A one-minor-version skew between the control plane and the kubelet is supported, but the kubelet version may never exceed the API server version. For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 API server, but not the other way around. Official docs: kubeadm
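To rule out skew entirely, you can pin all three packages to one release; a sketch using the 1.17.2 version this tutorial targets (adjust the version as needed):

yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2 --disableexcludes=kubernetes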

First configure the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Use yum list kubelet --showduplicates to see the available versions; by default the latest is installed:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

yum install -y ipvsadm bash-completion
# set up shell completion, and alias k to kubectl
mkdir ~/.kube/
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
source ~/.bashrc

Loading the IPVS modules

kube-proxy can use IPVS instead of iptables for better performance; this step is optional.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && \
bash /etc/sysconfig/modules/ipvs.modules && \
lsmod | grep -E "ip_vs|nf_conntrack_ipv4"
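Once the cluster is up and kube-proxy is running in ipvs mode (see the YAML method later in this article), you can check that IPVS virtual servers were actually programmed:

# list the IPVS virtual servers and their backend endpoints
ipvsadm -Ln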

Initializing the master node

The official documentation pulls images straight from Google's registry, which is unreachable from mainland China, so the image source must be changed. There are two ways to do this.

Pulling the images manually

docker pull

The overall idea is to print the image names with kubeadm config images list, then rewrite them to pull from the Aliyun registry:

# list the required images
kubeadm config images list

# pull the images from the Aliyun mirror
kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' |sh -x
# re-tag the images back to k8s.gcr.io
docker images |grep registry.cn-hangzhou.aliyuncs.com/google_containers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x
# remove the mirror tags
docker images |grep registry.cn-hangzhou.aliyuncs.com/google_containers |awk '{print "docker rmi ", $1":"$2}' |sh -x

The detailed session looks like this:

[root@master ~]# kubeadm config images list 2>/dev/null |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g'
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
[root@master ~]# docker images |grep registry.cn-hangzhou.aliyuncs.com/google_containers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |bash -x
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2 k8s.gcr.io/kube-proxy:v1.17.2
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2 k8s.gcr.io/kube-apiserver:v1.17.2
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2 k8s.gcr.io/kube-controller-manager:v1.17.2
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2 k8s.gcr.io/kube-scheduler:v1.17.2
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
+ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
[root@master ~]# docker images |grep registry.cn-hangzhou.aliyuncs.com/google_containers |awk '{print "docker rmi ", $1":"$2}' |sh -x
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:4a1b15c88bcfb832de4d3b8e7f59c8249007554174e3e345897bcad4e7537faf
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver@sha256:0b8d8fc93a97f8b606c023dadd9b2398cbba41cf3f663d760d6ae1d8b244adee
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager@sha256:9730a49ed564ec69db38b73f3c959d22869c1b9dca76d9e457fa46dbae7726e4
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler@sha256:5386d6b086c2da29dd81412b627a0bf045f2542b4019ca7c44c5ed5ab50e27f7
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216
+ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:759c3f0f6493093a9043cc813092290af69029699ade0e3dbe024e968fcb7cca
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.2 cba2a99699bd 12 days ago 116MB
k8s.gcr.io/kube-apiserver v1.17.2 41ef50a5f06a 12 days ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.2 da5fd66c4068 12 days ago 161MB
k8s.gcr.io/kube-scheduler v1.17.2 f52d4c527ef2 12 days ago 94.4MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 2 months ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 3 months ago 288MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB

Initializing with kubeadm init

Initialize with kubeadm init --kubernetes-version=1.17.2; the latest Kubernetes release at the time of writing is 1.17.2. For details on kubeadm see the official Chinese documentation.

[root@master ~]# kubeadm init --kubernetes-version=1.17.2 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers

W0130 22:18:51.052657 17744 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0130 22:18:51.052828 17744 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.60]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.1.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.1.60 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0130 22:19:03.852435 17744 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0130 22:19:03.856861 17744 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.047801 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: srcnlw.h0mhu9v2h47oz60c
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.60:6443 --token srcnlw.h0mhu9v2h47oz60c \
--discovery-token-ca-cert-hash sha256:4429d40f4060078508cc766e30116dde357d76ffb3450e7967fceaef21b400e4

After it succeeds, copy admin.conf into the current user's home directory as the output instructs.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm init performs the following major steps:

  • [init]: initialize with the specified version.
  • [preflight]: run pre-initialization checks and download the required Docker images.
  • [kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without it the kubelet cannot start, which is why the kubelet was failing to run before initialization.
  • [certs]: generate the certificates Kubernetes uses, stored in /etc/kubernetes/pki.
  • [kubeconfig]: generate the KubeConfig files, stored in /etc/kubernetes; components use them to communicate with each other.
  • [control-plane]: install the Master components from the YAML files in /etc/kubernetes/manifests.
  • [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
  • [wait-control-plane]: wait for the Master components deployed by control-plane to start.
  • [apiclient]: check the health of the Master components.
  • [upload-config]: store the kubeadm configuration in a ConfigMap.
  • [kubelet]: configure the kubelet via a ConfigMap.
  • [mark-control-plane]: label the current node with the Master role and an unschedulable taint, so Pods are not scheduled onto the Master node by default.
  • [bootstrap-token]: generate a token; note it down, as it is needed later when joining nodes with kubeadm join.
  • [addons]: install the CoreDNS and kube-proxy add-ons.

Checking the result

# watch the pods; within 3-10 minutes all of them should reach the Running state
kubectl get pod -n kube-system -o wide

# check the master node's status
kubectl get nodes -o wide

# show the details of a specific pod
kubectl describe pods -n kube-system xxxx

# enter a container: kubectl exec POD_name [-c CONTAINER] -- COMMAND [args...]
kubectl exec -it mysql-qcznx /bin/bash

Installing a pod network

Running kubectl get nodes at this point shows the node as NotReady; a network plugin still needs to be installed. According to the official pod-network documentation, several plugins are supported; this tutorial uses Weave Net.

[root@master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 175m v1.17.2

Note: the subnet configured inside the network plugin's manifest must match the podSubnet (--pod-network-cidr) specified at initialization, otherwise problems will occur; a Weave-specific sketch follows.
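Weave Net defaults to 10.32.0.0/12. According to the Weave docs, a different pod CIDR can be passed through the env.IPALLOC_RANGE query parameter; the 10.100.0.0/16 below is only an example and assumes that was the podSubnet you gave kubeadm init:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.100.0.0/16"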

Removing the taint

A cluster with only a single Master host is fully usable, but by default, for safety, no pods are scheduled onto control-plane nodes. To use a single-node cluster normally, remove that restriction as follows:

[root@master ~]# kubectl describe nodes master |grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master ~]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted
[root@master ~]# kubectl describe nodes master |grep Taints
Taints: <none>

Using a YAML config file

kubeadm config print init-defaults 2>/dev/null >kubeadm-config.yaml prints the default configuration used for cluster initialization:

kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
-  advertiseAddress: 1.2.3.4
+  advertiseAddress: 192.168.1.60 # the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
+  name: master # an IP address or a domain name; a domain name must be resolvable
  taints:
-  - effect: NoSchedule
+  - effect: PreferNoSchedule # the taint
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
-imageRepository: k8s.gcr.io
+imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # use a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
+  podSubnet: 10.100.0.1/16
scheduler: {}

This can be simplified down to the following YAML file:

kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.100.0.1/16"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Run kubeadm init --config kubeadm-config.yaml to initialize. Once it succeeds, install the pod-network plugin as before.
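To verify that kube-proxy picked up the ipvs setting from this config, one quick check is to grep its ConfigMap (the grep pattern is just illustrative):

kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'
# expected output: mode: ipvs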

You can inspect the cluster configuration with kubeadm config view.

[root@master ~]# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: apiserver.demo:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.100.0.1/16
  serviceSubnet: 10.96.0.0/16
scheduler: {}

Joining a node to the cluster

The join command has the form: kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

If the token has not expired

By default, a successful kubeadm init generates a token that is valid for 24 hours. If it has not expired, you can retrieve it as follows.
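To see whether the token is still valid, list the existing tokens and check the TTL column:

# list bootstrap tokens and their remaining lifetime
kubeadm token list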

Join with the kubeadm join command printed as the last lines of kubeadm init's output; if you have lost it, kubeadm token create --print-join-command regenerates it:

# run on the master
[root@master ~]# kubeadm token create --print-join-command 2>/dev/null
kubeadm join 192.168.1.60:6443 --token x85upg.75xd1dqi9y6r3ku5 --discovery-token-ca-cert-hash sha256:4429d40f4060078508cc766e30116dde357d76ffb3450e7967fceaef21b400e4

# run on node1
[root@node1 ~]# kubeadm join 192.168.1.60:6443 --token srcnlw.h0mhu9v2h47oz60c \
> --discovery-token-ca-cert-hash sha256:4429d40f4060078508cc766e30116dde357d76ffb3450e7967fceaef21b400e4
W0131 01:31:35.216827 2392 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on the Master, kubectl get nodes will show node1 as NotReady for a while; it is still downloading images, so be patient:

[root@master ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready master 7h27m v1.17.2 192.168.1.60 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://19.3.4
node1 Ready <none> 3h33m v1.17.2 192.168.1.61 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://19.3.4
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6955765f44-2nq69 1/1 Running 0 7h27m 10.32.0.3 master <none> <none>
coredns-6955765f44-x9ztm 1/1 Running 0 7h27m 10.32.0.2 master <none> <none>
etcd-master 1/1 Running 0 7h27m 192.168.1.60 master <none> <none>
kube-apiserver-master 1/1 Running 0 7h27m 192.168.1.60 master <none> <none>
kube-controller-manager-master 1/1 Running 0 7h27m 192.168.1.60 master <none> <none>
kube-proxy-46vfx 1/1 Running 0 3h33m 192.168.1.61 node1 <none> <none>
kube-proxy-qfch9 1/1 Running 0 7h27m 192.168.1.60 master <none> <none>
kube-scheduler-master 1/1 Running 0 7h27m 192.168.1.60 master <none> <none>
weave-net-hkqx7 2/2 Running 16 3h33m 192.168.1.61 node1 <none> <none>
weave-net-mg9sz 2/2 Running 0 4h42m 192.168.1.60 master <none> <none>

If you do not want to wait, you can pre-pull k8s.gcr.io/kube-proxy:v1.17.2 on the node using the method shown earlier; a sketch follows.
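For example, reusing the mirror-and-retag trick from earlier (adjust the tag to your cluster's version):

# pull kube-proxy from the Aliyun mirror and retag it as k8s.gcr.io
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2 k8s.gcr.io/kube-proxy:v1.17.2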

If the token has expired

If more than 24 hours have passed, create a new token.

[root@master ~]# kubeadm token create
mwuh2n.nraceakhrm79qjos
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
4429d40f4060078508cc766e30116dde357d76ffb3450e7967fceaef21b400e4

Then, on the node to be added, run kubeadm join 192.168.1.60:6443 --token mwuh2n.nraceakhrm79qjos --discovery-token-ca-cert-hash sha256:4429d40f4060078508cc766e30116dde357d76ffb3450e7967fceaef21b400e4 (note the sha256: prefix that kubeadm requires on the hash).

Installing the Dashboard

Dashboard is the web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, start a rolling update, restart a Pod, or deploy a new application with a wizard. Official docs: https://github.com/kubernetes/dashboard/tree/master/docs

Install it as follows:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

The images are then downloaded automatically. Note that the Dashboard now lives in the kubernetes-dashboard namespace; run kubectl get pods -n kubernetes-dashboard to check the pods' status.

The second step is to create the RBAC objects, i.e. the access-control configuration.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Then run kubectl create -f kubernetes-dashboard-admin.rbac.yaml. Alternatively, create the same objects directly:

kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user

The third step is to access it. Note that you can open this address in Firefox: https://{master-ip}:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login. Yes, that whole long string; it is really the API-server proxy access path.
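Alternatively, the Dashboard documentation describes reaching the same endpoint through kubectl proxy, which avoids talking to the API server's TLS port directly:

# start a local proxy to the API server (listens on 127.0.0.1:8001)
kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/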

You will be asked for a token, which can be fetched with: kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}'). Paste it in and log in.

There is one problem: Chrome refuses to open the page because the SSL certificate is self-signed and Chrome does not trust it. You can work around it like this:

# extract client-certificate-data
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt

# extract client-key-data
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key

# generate a p12 bundle
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Enter an export password when prompted; do not type something random, because you will need it again when importing into the browser. Afterwards a kubecfg.p12 certificate file appears in the current directory.

Then import the certificate into Chrome manually. Normally Chrome handles certificates automatically; manual import is only needed in abnormal cases, for example when you hit the "Your connection is not private" page. In the browser go to Menu - Settings - Advanced - Manage certificates, select the "Trusted Root Certification Authorities" tab, and click Import (note that this varies by version: in Chrome 71, choose the Personal tab). Follow the wizard to import the p12 file generated above, restart the browser with chrome://restart, and the page will load.

Appendix: installing a cluster with minikube

This follows the notes in "Minikube - Kubernetes本地实验环境" and the official minikube documentation, but neither mentions the preparation needed before minikube start; running that command cold leads to all sorts of problems. Here is the procedure that worked for me:

Because these tests run inside a virtual machine, minikube can only use --vm-driver=none, and the official docs warn that this mode can cause various problems. I still do not recommend it; building the cluster with kubeadm is quick enough.

Step 1: preparation. Everything in the first part of this article applies, with one difference: when installing the Kubernetes tools, only kubectl is strictly required (installing all three does no harm).

Step 2: download minikube and pull the images from the Aliyun mirror.

[root@localhost ~]# curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && sudo install minikube-linux-amd64 /usr/local/bin/minikube

# or: minikube start --image-mirror-country cn --iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.7.3.iso --registry-mirror=https://gjl2xle4.mirror.aliyuncs.com
[root@localhost ~]# minikube start --vm-driver=none --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
* minikube v1.7.2 on Centos 7.7.1908
* Using the none driver based on user configuration
* Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
* Running on localhost (CPUs=3, Memory=3789MB, Disk=97755MB) ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.17.2 on Docker 19.03.4 ...
* Downloading kubectl v1.17.2
* Downloading kubeadm v1.17.2
* Downloading kubelet v1.17.2
* Launching Kubernetes ...
* Enabling addons: default-storageclass, storage-provisioner
* Configuring local host environment ...
*
! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:
- https://minikube.sigs.k8s.io/docs/reference/drivers/none/
*
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
*
- sudo mv /root/.kube /root/.minikube $HOME
- sudo chown -R $USER $HOME/.kube $HOME/.minikube
*
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Waiting for cluster to come online ...
* Done! kubectl is now configured to use "minikube"
* For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Then check the cluster status:

[root@localhost ~]# k get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@localhost ~]# k get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 5m35s v1.17.2

One remaining problem: coredns fails to start, reporting "no route to host". I have not found a solution yet; suggestions are welcome.

[root@localhost ~]# k get pod -A |grep core
kube-system coredns-7f9c544f75-d67hk 0/1 Running 0 4h46m
kube-system coredns-7f9c544f75-ldwpb 0/1 Running 0 4h46m
[root@localhost ~]# k logs -n kube-system coredns-7f9c544f75-d67hk |head -2
E0218 03:16:09.389735 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host

Following the examples at https://minikube.sigs.k8s.io/docs/examples/, Deployments and Services can be created normally, but minikube dashboard errors out. Better to give up on it and use kubeadm.

About this article

  • Author: wumingx
  • Link: https://www.wumingx.com/k8s/kubernetes-kubeadm.html
  • Title: Kubernetes Tutorial 2: Installing Kubernetes 1.17 on CentOS with kubeadm
  • Copyright: unless otherwise noted, please credit the source when republishing. Contact me for removal in case of infringement.