I. K8s Introduction
1. About Kubernetes
Official site: https://www.kubernetes.org.cn/k8s
Kubernetes is an open-source platform for automated deployment, scaling, and operation of container clusters.
(1) With Kubernetes you can respond to user demand quickly and efficiently:
Scale applications on the fly.
Release new features seamlessly.
Use only the resources you need, optimizing hardware utilization.
(2) Kubernetes is:
Lean: lightweight, simple, easy to get started with
Portable: public, private, hybrid, and multi-cloud
Extensible: modular, pluggable, hookable, composable
Self-healing: auto-placement, auto-restart, auto-replication
2. Downloads
Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#v1813
Baidu Netdisk mirror: https://pan.baidu.com/s/1rXo9XXi6k8GuwT7-kFKf9w (password: 6tmc)
II. Lab Environment
SELinux and iptables are turned off.
1. Installation notes
Note: the comments inside the config files below only explain each option. Delete them before installing, or the services will fail to start. Config files are also not compatible across versions!!!

Hostname   IP           OS        Installed services                                                                                 Role
server1    10.10.10.1   rhel7.3   docker, etcd, api-server, scheduler, controller-manager, kubelet, flannel, docker-compose, harbor   cluster Master: service environment control, API gateway control, platform management console
server2    10.10.10.2   rhel7.3   docker, etcd, kubelet, proxy, flannel                                                               worker node for deploying and running containerized services
server3    10.10.10.3   rhel7.3   docker, etcd, kubelet, proxy, flannel                                                               worker node for deploying and running containerized services

[root@server1 mnt]# cat /etc/hosts
10.10.10.1 server1
10.10.10.2 server2
10.10.10.3 server3

2. Turn off the swap partition (all 3 nodes): swapoff -a
With that, the base environment is ready; next comes the k8s installation itself (a sketch for making these settings persistent follows below)!!!
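The settings above only last until the next reboot. A minimal sketch for making the SELinux, firewall, and swap changes persistent, assuming firewalld is the active firewall front end on rhel7.3 (adapt if you manage iptables directly):

setenforce 0                                                     ### switch SELinux to permissive right now
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config     ### keep it off after reboot
systemctl disable --now firewalld                                ### stop the firewall and keep it stopped
swapoff -a                                                       ### turn swap off right now
sed -i '/ swap / s/^/#/' /etc/fstab                              ### comment out the swap line so it stays off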
III. Install Docker (all 3 nodes)
k8s runs on top of docker; docker version: docker-ce-18.03.1.ce-1.el7
1. If docker was installed previously, remove it first:
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine
Note: if your system is not 7.3 (e.g. 7.0), switch the yum repository to a 7.3 one first; otherwise the install fails because the packages depend on newer versions!!!
2. Install
[root@server1 mnt]# ls /mnt
container-selinux-2.21-1.el7.noarch.rpm         libsemanage-python-2.5-8.el7.x86_64.rpm
docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm    pigz-2.3.4-1.el7.x86_64.rpm
docker-engine-1.8.2-1.el7.rpm                   policycoreutils-2.5-17.1.el7.x86_64.rpm
libsemanage-2.5-8.el7.x86_64.rpm                policycoreutils-python-2.5-17.1.el7.x86_64.rpm
[root@server1 mnt]# yum install -y /mnt/*
[root@server1 mnt]# systemctl enable docker
[root@server1 mnt]# systemctl start docker
[root@server1 mnt]# systemctl status docker    ### you should now see the docker0 interface with 172.17.0.1/16
3. Test
[root@server1 mnt]# docker version
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm
Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:58 2018
  OS/Arch:      linux/amd64
  Experimental: false
4. Changing the docker bridge IP (optional, can be skipped)
[root@server1 ~]# vim /etc/docker/daemon.json
{
  "bip": "172.17.0.1/24"
}
[root@server1 ~]# systemctl restart docker
[root@server1 ~]# ip addr show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:23:2d:56:4b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/24 brd 172.17.0.255 scope global docker0
       valid_lft forever preferred_lft forever
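As an optional smoke test of the engine itself (a sketch; it assumes the node can reach Docker Hub, which an offline lab like this one may not):

docker run --rm hello-world    ### pulls a tiny image and prints a greeting if the engine works end to end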
IV. Install etcd (all 3 nodes)
1. Create the install directories
[root@server1 mnt]# mkdir -p /usr/local/kubernetes/{bin,config}
[root@server1 mnt]# vim /etc/profile    ### add the bin dir to PATH so the commands can be run directly
export PATH=$PATH:/usr/local/kubernetes/bin
[root@server1 mnt]# source /etc/profile
[root@server1 mnt]# mkdir -p /var/lib/etcd/    ### working directory (WorkingDirectory); create on all 3 nodes
2. Unpack etcd and move the binaries into place
[root@server1 ~]# tar xf /root/etcd-v3.1.7-linux-amd64.tar.gz
[root@server1 ~]# mv /root/etcd-v3.1.7-linux-amd64/etcd* /usr/local/kubernetes/bin/
3. etcd systemd unit
[root@server1 ~]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/usr/local/kubernetes/config/etcd.conf
ExecStart=/usr/local/kubernetes/bin/etcd \
    --name=${ETCD_NAME} \
    --data-dir=${ETCD_DATA_DIR} \
    --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS} \
    --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=${ETCD_INITIAL_CLUSTER_STATE}

[Install]
WantedBy=multi-user.target
4. etcd config file
[root@server1 ~]# cat /usr/local/kubernetes/config/etcd.conf
#[member]
ETCD_NAME="etcd01"                             ### change per host: etcd01, etcd02, etcd03
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.1:2380"    ### change to this host's IP
ETCD_INITIAL_CLUSTER="etcd01=http://10.10.10.1:2380,etcd02=http://10.10.10.2:2380,etcd03=http://10.10.10.3:2380"    ### list all cluster IPs
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.1:2379"          ### change to this host's IP
[root@server1 ~]# systemctl enable etcd
[root@server1 ~]# systemctl start etcd
Note: start etcd on all 3 nodes at the same time, otherwise startup will fail (each member waits for its peers)!!!
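Once all three members are up, a quick replication check is possible with etcdctl (a sketch; /test/hello is an arbitrary key invented for this test):

[root@server1 ~]# etcdctl set /test/hello world    ### write on one member
[root@server2 ~]# etcdctl get /test/hello          ### read it back from another member; should print "world"
[root@server1 ~]# etcdctl rm /test/hello           ### clean up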
5. Once all three nodes are done, test the cluster:
[root@server1 ~]# etcdctl cluster-health
member 2cc6b104fe5377ef is healthy: got healthy result from http://10.10.10.3:2379
member 74565e08b84745a6 is healthy: got healthy result from http://10.10.10.2:2379
member af08b45e1ab8a099 is healthy: got healthy result from http://10.10.10.1:2379
cluster is healthy
[root@server1 kubernetes]# etcdctl member list    ### the cluster elects its own leader
2cc6b104fe5377ef: name=etcd03 peerURLs=http://10.10.10.3:2380 clientURLs=http://10.10.10.3:2379 isLeader=true
74565e08b84745a6: name=etcd02 peerURLs=http://10.10.10.2:2380 clientURLs=http://10.10.10.2:2379 isLeader=false
af08b45e1ab8a099: name=etcd01 peerURLs=http://10.10.10.1:2380 clientURLs=http://10.10.10.1:2379 isLeader=false

V. Install the Master components
1. Move the binaries into the bin directory
[root@server1 ~]# tar xf /root/kubernetes-server-linux-amd64.tar.gz
[root@server1 kubernetes]# mv /root/kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl,kubelet} /usr/local/kubernetes/bin
2. Install the apiserver
(1) apiserver config file
[root@server1 ~]# vim /usr/local/kubernetes/config/kube-apiserver
# log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# log level
KUBE_LOG_LEVEL="--v=4"
# etcd service addresses
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.10.1:2379"
# API service listen address
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# API service listen port
KUBE_API_PORT="--insecure-port=8080"
# address advertised to cluster members
KUBE_ADVERTISE_ADDR="--advertise-address=10.10.10.1"
# allow privileged containers, default false
KUBE_ALLOW_PRIV="--allow-privileged=false"
# service cluster IP range; your choice, but the kubelet --cluster-dns address configured later must fall inside it
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
(2) apiserver systemd unit
[root@server1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kube-apiserver
ExecStart=/usr/local/kubernetes/bin/kube-apiserver \
    ${KUBE_LOGTOSTDERR} \
    ${KUBE_LOG_LEVEL} \
    ${KUBE_ETCD_SERVERS} \
    ${KUBE_API_ADDRESS} \
    ${KUBE_API_PORT} \
    ${KUBE_ADVERTISE_ADDR} \
    ${KUBE_ALLOW_PRIV} \
    ${KUBE_SERVICE_ADDRESSES}
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@server1 ~]# systemctl enable kube-apiserver
[root@server1 ~]# systemctl start kube-apiserver
3. Install the Scheduler
(1) scheduler config file
[root@server1 ~]# vim /usr/local/kubernetes/config/kube-scheduler
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=10.10.10.1:8080"
KUBE_LEADER_ELECT="--leader-elect"
(2) scheduler systemd unit
[root@server1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kube-scheduler
ExecStart=/usr/local/kubernetes/bin/kube-scheduler \
    ${KUBE_LOGTOSTDERR} \
    ${KUBE_LOG_LEVEL} \
    ${KUBE_MASTER} \
    ${KUBE_LEADER_ELECT}
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@server1 ~]# systemctl enable kube-scheduler
[root@server1 ~]# systemctl start kube-scheduler
4. Install the controller-manager
(1) controller-manager config file
[root@server1 ~]# vim /usr/local/kubernetes/config/kube-controller-manager
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=10.10.10.1:8080"
(2) controller-manager systemd unit
[root@server1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kube-controller-manager
ExecStart=/usr/local/kubernetes/bin/kube-controller-manager \
    ${KUBE_LOGTOSTDERR} \
    ${KUBE_LOG_LEVEL} \
    ${KUBE_MASTER} \
    ${KUBE_LEADER_ELECT}
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@server1 ~]# systemctl enable kube-controller-manager
[root@server1 ~]# systemctl start kube-controller-manager
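Before moving on, a quick sanity check that the three control-plane services are up (a sketch; 8080 is the insecure port configured in KUBE_API_PORT above):

for s in kube-apiserver kube-scheduler kube-controller-manager; do systemctl is-active $s; done
curl -s http://10.10.10.1:8080/version    ### the apiserver should answer with its version JSON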
5. Install the kubelet
With this service installed, kubectl get nodes will also list the Master; you can choose to skip it!!!
(1) kubeconfig file
[root@server1 ~]# vim /usr/local/kubernetes/config/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://10.10.10.1:8080    ### the Master's IP, i.e. this host's own IP
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
(2) kubelet config file
[root@server1 ~]# vim /usr/local/kubernetes/config/kubelet
# log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# log level
KUBE_LOG_LEVEL="--v=4"
# kubelet listen address
NODE_ADDRESS="--address=10.10.10.1"
# kubelet port
NODE_PORT="--port=10250"
# custom node name
NODE_HOSTNAME="--hostname-override=10.10.10.1"
# kubeconfig path, points the kubelet at the API server
KUBELET_KUBECONFIG="--kubeconfig=/usr/local/kubernetes/config/kubelet.kubeconfig"
# allow privileged containers, default false
KUBE_ALLOW_PRIV="--allow-privileged=false"
# DNS info: the cluster DNS IP
KUBELET_DNS_IP="--cluster-dns=10.0.0.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# don't fail when swap is on
KUBELET_SWAP="--fail-swap-on=false"
(3) kubelet systemd unit
[root@server1 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kubelet
ExecStart=/usr/local/kubernetes/bin/kubelet \
    ${KUBE_LOGTOSTDERR} \
    ${KUBE_LOG_LEVEL} \
    ${NODE_ADDRESS} \
    ${NODE_PORT} \
    ${NODE_HOSTNAME} \
    ${KUBELET_KUBECONFIG} \
    ${KUBE_ALLOW_PRIV} \
    ${KUBELET_DNS_IP} \
    ${KUBELET_DNS_DOMAIN} \
    ${KUBELET_SWAP}
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
(4) Start the service and enable it at boot:
[root@server1 ~]# swapoff -a    ### swap must be off before starting
[root@server1 ~]# systemctl enable kubelet
[root@server1 ~]# systemctl start kubelet
Note on start order: start etcd first, then the apiserver; the rest can start in any order!!!
VI. Install the nodes (server2 and server3)
1. Move the binaries into the bin directory
[root@server2 ~]# tar xf /root/kubernetes-node-linux-amd64.tar.gz
[root@server2 ~]# mv /root/kubernetes/node/bin/{kubelet,kube-proxy} /usr/local/kubernetes/bin/
2. Install the kubelet
(1) kubeconfig file
Note: the kubeconfig file is what the kubelet uses to connect to the master apiserver.
[root@server2 ~]# vim /usr/local/kubernetes/config/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://10.10.10.1:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
(2) kubelet config file
[root@server2 ~]# cat /usr/local/kubernetes/config/kubelet
# log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# log level
KUBE_LOG_LEVEL="--v=4"
# kubelet listen address (this node's IP)
NODE_ADDRESS="--address=10.10.10.2"
# kubelet port
NODE_PORT="--port=10250"
# custom node name (this node's IP)
NODE_HOSTNAME="--hostname-override=10.10.10.2"
# kubeconfig path, points the kubelet at the API server
KUBELET_KUBECONFIG="--kubeconfig=/usr/local/kubernetes/config/kubelet.kubeconfig"
# allow privileged containers, default false
KUBE_ALLOW_PRIV="--allow-privileged=false"
# DNS info, matching the service IP range configured above
KUBELET_DNS_IP="--cluster-dns=10.0.0.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# don't fail when swap is on
KUBELET_SWAP="--fail-swap-on=false"
(3) kubelet systemd unit
[root@server2 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kubelet
ExecStart=/usr/local/kubernetes/bin/kubelet \
    ${KUBE_LOGTOSTDERR} \
    ${KUBE_LOG_LEVEL} \
    ${NODE_ADDRESS} \
    ${NODE_PORT} \
    ${NODE_HOSTNAME} \
    ${KUBELET_KUBECONFIG} \
    ${KUBE_ALLOW_PRIV} \
    ${KUBELET_DNS_IP} \
    ${KUBELET_DNS_DOMAIN} \
    ${KUBELET_SWAP}
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
(4) Start the service and enable it at boot:
[root@server2 ~]# swapoff -a    ### swap must be off before starting
[root@server2 ~]# systemctl enable kubelet
[root@server2 ~]# systemctl start kubelet
3. Install the proxy
(1) proxy config file
[root@server2 ~]# cat /usr/local/kubernetes/config/kube-proxy
# log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# log level
KUBE_LOG_LEVEL="--v=4"
# custom node name (this node's IP)
NODE_HOSTNAME="--hostname-override=10.10.10.2"
# API service address (the Master's IP)
KUBE_MASTER="--master=http://10.10.10.1:8080"
(2) proxy systemd unit
[root@server2 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/usr/local/kubernetes/config/kube-proxy
ExecStart=/usr/local/kubernetes/bin/kube-proxy \
    ${KUBE_LOGTOSTDERR} \
    ${KUBE_LOG_LEVEL} \
    ${NODE_HOSTNAME} \
    ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@server2 ~]# systemctl enable kube-proxy
[root@server2 ~]# systemctl restart kube-proxy
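A quick check that the node services came up and that the node registered with the master (a sketch):

[root@server2 ~]# systemctl is-active kubelet kube-proxy    ### both should print "active"
[root@server1 ~]# kubectl get nodes                         ### 10.10.10.2 should appear once its kubelet registers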
VII. Install flannel (all 3 nodes)
1. Move the binaries into the bin directory
[root@server1 ~]# tar xf flannel-v0.7.1-linux-amd64.tar.gz
[root@server1 ~]# mv /root/{flanneld,mk-docker-opts.sh} /usr/local/kubernetes/bin/
2. flannel config file
[root@server1 kubernetes]# vim /usr/local/kubernetes/config/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs (this host's IP)
FLANNEL_ETCD="http://10.10.10.1:2379"
# etcd config key. This is the configuration key that flannel queries
# for address range assignment (the etcd key prefix)
FLANNEL_ETCD_KEY="/atomic.io/network"
# Any additional options that you want to pass; fill in your own NIC name
FLANNEL_OPTIONS="--iface=eth0"
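If you are unsure which NIC name to put in --iface, this one-liner (a sketch) lists each interface with its IPv4 address; pick the one carrying the 10.10.10.x address:

ip -o -4 addr show | awk '{print $2, $4}'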
3. flannel systemd unit
[root@server1 kubernetes]# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/usr/local/kubernetes/config/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/local/kubernetes/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/local/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
4. Set the etcd key
Note: set it on one node only; etcd replicates it to the other members!!!
[root@server1 ~]# etcdctl mkdir /atomic.io/network
### the Network below must be in the same range as docker's own bridge IP
[root@server1 ~]# etcdctl mk /atomic.io/network/config "{ \"Network\": \"172.17.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
{ "Network": "172.17.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
5. Adjust the docker configuration
Because docker has to use flanneld's subnet, add EnvironmentFile=-/etc/sysconfig/flanneld and EnvironmentFile=-/run/flannel/subnet.env to its unit, and append --bip=${FLANNEL_SUBNET} to the daemon command line:
[root@server1 ~]# vim /usr/lib/systemd/system/docker.service
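A sketch of what the relevant [Service] lines end up looking like, assuming the stock docker-ce unit with dockerd at /usr/bin/dockerd (merge these into your existing ExecStart rather than copying blindly; FLANNEL_SUBNET and FLANNEL_MTU come from /run/flannel/subnet.env, which flanneld writes at startup):

[Service]
EnvironmentFile=-/etc/sysconfig/flanneld
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}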
6. Start flannel and docker
[root@server1 ~]# systemctl enable flanneld.service
[root@server1 ~]# systemctl restart flanneld.service
[root@server1 ~]# systemctl daemon-reload
[root@server1 ~]# systemctl restart docker.service
7. Test flannel
ip addr    ### if the flannel interface and docker0 sit in the same subnet, the setup is complete
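To go one step further, a cross-host check (a sketch; with the vxlan backend the flannel interface is typically named flannel.1, and <docker0 IP of server2> is a placeholder you read off that node with ip addr):

[root@server1 ~]# ip -4 addr show flannel.1          ### this node's overlay address
[root@server1 ~]# ping -c 2 <docker0 IP of server2>  ### replies prove the overlay routes between hosts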
8. Cluster test
If the kubelet was not installed on the Master, kubectl get nodes will not show the Master!!!
[root@server1 ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
10.10.10.1   Ready     <none>    1d        v1.8.13
10.10.10.2   Ready     <none>    1d        v1.8.13
10.10.10.3   Ready     <none>    1d        v1.8.13
[root@server1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
With that, a basic K8s cluster is installed; next come the harbor installation and the add-ons!!!
VIII. Install harbor
Harbor is used here to manage the image registry; I won't introduce it at length!!!
Note: if you want to use local images instead, you can skip harbor, but you then need busybox and must set imagePullPolicy: Never or imagePullPolicy: IfNotPresent in the yaml files (at the same level as image); the default is Always, which pulls over the network.
1. Install docker-compose
(1) Download
https://github.com/docker/compose/releases/    ### official download page
[root@server1 ~]# wget https://github.com/docker/compose/releases/download/1.22.0/docker-compose-Linux-x86_64
(2) Install
[root@server1 ~]# cp docker-compose-Linux-x86_64 /usr/local/kubernetes/bin/docker-compose
[root@server1 ~]# chmod +x /usr/local/kubernetes/bin/docker-compose
(3) Check the version
[root@server1 ~]# docker-compose --version
docker-compose version 1.22.0, build f46880fe
2. Install harbor
(1) Download
Official installation guide: https://github.com/vmware/harbor/blob/master/docs/installation_guide.md
Download page: https://github.com/vmware/harbor/releases#install
[root@server1 ~]# wget http://harbor.orientsoft.cn/harbor-v1.5.0/harbor-offline-installer-v1.5.0.tgz
(2) Unpack the tarball
[root@server1 ~]# tar xf harbor-offline-installer-v1.5.0.tgz -C /usr/local/kubernetes/
(3) Configure harbor.cfg
[root@server1 ~]# grep -v "^#" /usr/local/kubernetes/harbor/harbor.cfg
_version = 1.5.0
### change hostname to this host's IP
hostname = 10.10.10.1
ui_url_protocol = http
max_job_workers = 50
customize_crt = on
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key
secretkey_path = /data
admiral_url = NA
log_rotate_count = 50
log_rotate_size = 200M
http_proxy =
https_proxy =
no_proxy = 127.0.0.1,localhost,ui
email_identity =
email_server = smtp.mydomain.com
email_server_port = 25
email_username = sample_admin@mydomain.com
email_password = abc
email_from = admin <sample_admin@mydomain.com>
email_ssl = false
email_insecure = false
harbor_admin_password = Harbor12345
auth_mode = db_auth
ldap_url = ldaps://ldap.mydomain.com
ldap_basedn = ou=people,dc=mydomain,dc=com
ldap_uid = uid
ldap_scope = 2
ldap_timeout = 5
ldap_verify_cert = true
ldap_group_basedn = ou=group,dc=mydomain,dc=com
ldap_group_filter = objectclass=group
ldap_group_gid = cn
ldap_group_scope = 2
self_registration = on
token_expiration = 30
project_creation_restriction = everyone
db_host = mysql
db_password = root123
db_port = 3306
db_user = root
redis_url =
clair_db_host = postgres
clair_db_password = password
clair_db_port = 5432
clair_db_username = postgres
clair_db = postgres
uaa_endpoint = uaa.mydomain.org
uaa_clientid = id
uaa_clientsecret = secret
uaa_verify_cert = true
uaa_ca_cert = /path/to/ca.pem
registry_storage_provider_name = filesystem
registry_storage_provider_config =
(4) Install
[root@server1 ~]# cd /usr/local/kubernetes/harbor/    ### must be run from this directory; logs go to /var/log/harbor/
[root@server1 harbor]# ./prepare
[root@server1 harbor]# ./install.sh
(5) Check the generated containers
[root@server1 harbor]# docker ps    ### every container should be in the Up state
You can now also log in by pointing a browser at http://10.10.10.1 (the Master's IP); default account: admin, default password: Harbor12345!!!
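Harbor itself is managed by docker-compose, so the same status can also be read from its compose project (a sketch; must be run from the harbor directory):

[root@server1 ~]# cd /usr/local/kubernetes/harbor/ && docker-compose ps    ### every service should show State "Up"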
(6) Add registry authentication
<1> On server1 (Master)
[root@server1 harbor]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=10.10.10.1'
<2> On all three nodes
[root@server1 ~]# vim /etc/docker/daemon.json
{
  "insecure-registries": [ "10.10.10.1" ]
}
(7) Test
<1> Log in from the shell
[root@server1 ~]# docker login 10.10.10.1    ### account: admin, password: Harbor12345
Username: admin
Password:
Login Succeeded
Once logged in, the harbor registry is ready to use!!!
<2> Log in from a browser
(8) Login errors
<1> The error:
[root@server1 ~]# docker login 10.10.10.1
Error response from daemon: Get http://10.10.10.1/v2/: unauthorized: authentication required
<2> The fix: add the registry authentication described above, or you simply typed the wrong password!!!
(9) Basic harbor operations
<1> Take it offline
# cd /usr/local/kubernetes/harbor/
# docker-compose down -v    (or: docker-compose stop)
<2> Change the configuration
Edit harbor.cfg and docker-compose.yml
<3> Redeploy and bring it back up
# cd /usr/local/kubernetes/harbor/
# ./prepare
# docker-compose up -d    (or: docker-compose start)
3. Using harbor
Log in with a browser and you will see the default project, library; you can also create your own. We will use library here!!!
(1) Prepare the images
### retag each image for harbor, then delete the original tag
docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 10.10.10.1/library/kubernetes-dashboard-amd64:v1.8.3
docker rmi gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 10.10.10.1/library/k8s-dns-sidecar-amd64:1.14.7
docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 10.10.10.1/library/k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
### push to harbor
docker push 10.10.10.1/library/kubernetes-dashboard-amd64:v1.8.3
docker push 10.10.10.1/library/k8s-dns-sidecar-amd64:1.14.7
docker push 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7
docker push 10.10.10.1/library/k8s-dns-dnsmasq-nanny-amd64:1.14.7
(2) Verify in the browser
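Besides the browser, a pull from one of the worker nodes (a sketch) proves the nodes can fetch from harbor, which the kube-dns and dashboard deployments below depend on:

[root@server2 ~]# docker pull 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7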
IX. Install kube-dns
Download: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns
(1) We need these four files:
kubedns-sa.yaml
kubedns-svc.yaml.base
kubedns-cm.yaml
kubedns-controller.yaml.base
[root@server1 data]# unzip kubernetes-release-1.8.zip
[root@server1 dns]# pwd
/data/kubernetes-release-1.8/cluster/addons/dns
[root@server1 dns]# cp {kubedns-svc.yaml.base,kubedns-cm.yaml,kubedns-controller.yaml.base,kubedns-sa.yaml} /root
(2) Look up the clusterIP
[root@server2 ~]# cat /usr/local/kubernetes/config/kubelet    ### the --cluster-dns address (10.0.0.2) is what goes into the Service below
(3) The four yaml files can be merged into this single kube-dns.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2    ### the DNS address from the kubelet config
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      #imagePullSecrets:
      #- name: registrykey-aliyun-vpc
      containers:
      - name: kubedns
        image: 10.10.10.1/library/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --kube-master-url=http://10.10.10.1:8080    ### change to your Master's address
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: 10.10.10.1/library/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: 10.10.10.1/library/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default    # Don't use cluster DNS.
(4) Create (and, if ever needed, delete)
# kubectl create -f kube-dns.yaml    ### create
# kubectl delete -f kube-dns.yaml    ### delete; no need to run this now
(5) Check
[root@server1 ~]# kubectl get pods -n kube-system
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-5855d8c4f7-sbb7x   3/3       Running   0
[root@server1 ~]# kubectl describe pod -n kube-system kube-dns-5855d8c4f7-sbb7x    ### if anything goes wrong, check the events here
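To confirm DNS actually resolves, here is a sketch using a throwaway busybox pod. It assumes a busybox image the nodes can pull (e.g. one tagged and pushed to harbor first, the hypothetical 10.10.10.1/library/busybox below); the pod name dns-test is arbitrary:

# kubectl run dns-test --image=10.10.10.1/library/busybox --restart=Never -- sleep 3600
# kubectl exec dns-test -- nslookup kubernetes.default    ### should resolve via the cluster DNS at 10.0.0.2
# kubectl delete pod dns-test                             ### clean up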
X. Install the dashboard
1. Configure kube-dashboard.yaml
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 10.10.10.1/library/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://10.10.10.1:8080    ### change to the Master's IP
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
  selector:
    k8s-app: kubernetes-dashboard
2. Create the dashboard
# kubectl create -f kube-dashboard.yaml
3. Check which node the pods landed on
[root@server1 dashboard]# kubectl get pods -o wide --namespace kube-system
NAME                                    READY     STATUS    RESTARTS   AGE       IP            NODE
kube-dns-5855d8c4f7-sbb7x               3/3       Running   0          1h        172.17.89.3   10.10.10.1
kubernetes-dashboard-7f8b5f54f9-gqjsh   1/1       Running   0          1h        172.17.39.2   10.10.10.3
4. Test
(1) Open a browser at http://10.10.10.3:30090 (the IP of the node running the dashboard pod)
(2) Result
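A quick non-browser check (a sketch; 30090 is the nodePort defined in the Service above, reachable via the IP of the node hosting the pod):

curl -sI http://10.10.10.3:30090    ### an HTTP 200 response means the dashboard is serving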