
Deploying k8s with ansible

Published: 2020-08-13 12:03:01

Contents

  • 1. Install ansible
  • 2. Install k8s
  • 3. Check the environment
  • 3.1. Check etcd
  • 3.2. Check flanneld
  • 3.3. Check nginx and keepalived
  • 3.4. Check kube-apiserver
  • 3.5. Check kube-controller-manager
  • 3.6. Check kube-scheduler
  • 3.7. Check kubelet
  • 3.8. Check kube-proxy
  • 4. Check the add-ons
  • 4.1. Check coredns
  • 4.2. Check the dashboard
  • 4.3. Check traefik
  • 4.4. Check metrics
  • 4.5. Check EFK
  • 5. Verify the cluster
  • 6. Restart all components

1. Install ansible

# Switch the system to the Aliyun yum mirrors and update
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.$(date +%Y%m%d)
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache && yum update -y

# Install ansible
yum -y install epel-release
yum install ansible -y
ssh-keygen -t rsa
ssh-copy-id xx.xx.xx.xx

## Batch-copy the SSH key to all hosts
#### First install sshpass (needed for non-interactive ssh-copy-id)
wget http://sourceforge.net/projects/sshpass/files/sshpass/1.06/sshpass-1.06.tar.gz
tar xvzf sshpass-1.06.tar.gz
cd sshpass-1.06
./configure
make
make install

#### List the machines: one "ip port password" line each
cat <<EOF> hostname.txt
192.168.10.11 22 fana
192.168.10.12 22 fana
192.168.10.13 22 fana
192.168.10.14 22 fana
EOF
#### Skip the interactive "yes" host-key prompt (this edits the client-side ssh_config, which takes effect immediately)
sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config
#### Then copy the key to every host
cat hostname.txt | while read ip port pawd;do sshpass -p $pawd ssh-copy-id -p $port root@$ip;done
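Before pointing the sshpass loop at real machines, it can help to dry-run the parser. A minimal sketch (the `hostname.txt` format is the one above; the dry-run output text is illustrative):

```shell
# Dry-run: confirm hostname.txt parses into (ip, port, password) triples
# before actually invoking sshpass/ssh-copy-id against the hosts.
while read -r ip port pawd; do
    echo "would copy key to root@${ip} (port ${port})"
done < hostname.txt
```

If the printed targets look right, swap the `echo` back for the real `sshpass -p $pawd ssh-copy-id -p $port root@$ip`.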

## For upgrading the kernel, see: https://www.cnblogs.com/fan-gx/p/11006762.html

2. Install k8s

## Download the ansible playbooks
#Link: https://pan.baidu.com/s/1VKQ5txJ2xgwUVim_E2P9kA
#Extraction code: 3cq2

## Install k8s with ansible
ansible-playbook -i inventory installK8s.yml
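The playbook run above expects an `inventory` file. Its exact contents depend on the downloaded script; a hypothetical sketch of the usual shape (the group names and variables here are assumptions, not taken from the script):

```ini
# Hypothetical inventory sketch -- group names are assumptions
[master]
192.168.10.11
192.168.10.12
192.168.10.13

[node]
192.168.10.14

[all:vars]
ansible_ssh_user=root
```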

## Versions:
k8s: 1.14.8
etcd: 3.3.18
flanneld: 0.11.0
Docker: 19.03.5
nginx: 1.16.1
    
## Self-signed TLS certificates
etcd:ca.pem server.pem server-key.pem
flannel:ca.pem server.pem server-key.pem
kube-apiserver:ca.pem server.pem server-key.pem
kubelet:ca.pem ca-key.pem
kube-proxy:ca.pem kube-proxy.pem kube-proxy-key.pem
kubectl:ca.pem admin.pem admin-key.pem   ------ used by administrators to access the cluster

## Check certificate validity; upstream recommends upgrading the k8s cluster at least once a year, which renews the certificates as well
openssl x509 -in ca.pem -text -noout
### Output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            51:5c:66:8b:40:24:d7:bb:ea:94:e7:5a:33:fe:44:a2:e2:18:51:b3
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=ShangHai, L=ShangHai, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: Dec 14 13:26:00 2019 GMT
            Not After : Dec 11 13:26:00 2029 GMT	# valid for 10 years
        Subject: C=CN, ST=ShangHai, L=ShangHai, O=k8s, OU=System, CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c2:5c:92:dd:36:67:3f:d4:f1:e0:5f:e0:48:40:
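To turn the `openssl x509` check above into a quick "days remaining" number, a small sketch (assumes GNU `date -d` and the `ca.pem` filename used above):

```shell
# Print how many days remain before a certificate expires.
# Assumes GNU date (-d) and a PEM certificate named ca.pem.
cert=ca.pem
end=$(openssl x509 -enddate -noout -in "$cert" | cut -d= -f2)
days=$(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 ))
echo "$cert expires in $days days"
```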
# Images used
kubelet:  243662875/pause-amd64:3.1
coredns:  243662875/coredns:1.3.1
dashboard:  243662875/kubernetes-dashboard-amd64:v1.10.1
metrics-server:  243662875/metrics-server-amd64:v0.3.6
traefik:  traefik:latest
es:  elasticsearch:6.6.1
fluentd-es:  243662875/fluentd-elasticsearch:v2.4.0
kibana:  243662875/kibana-oss:6.6.1


3. Check the environment

3.1. Check etcd

etcd reference: https://www.cnblogs.com/winstom/p/11811373.html

systemctl status etcd|grep active

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem cluster-health
## Output:
member 1af68d968c7e3f22 is healthy: got healthy result from https://192.168.10.12:2379
member 7508c5fadccb39e2 is healthy: got healthy result from https://192.168.10.11:2379
member e8d9a97b17f26476 is healthy: got healthy result from https://192.168.10.13:2379
cluster is healthy

etcdctl --endpoints=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem member list

ETCDCTL_API=3 etcdctl \
-w table --cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
--endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" endpoint status
### Output:
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.10.11:2379 | 7508c5fadccb39e2 |  3.3.18 |  762 kB |     false |       421 |     287371 |
| https://192.168.10.12:2379 | 1af68d968c7e3f22 |  3.3.18 |  762 kB |      true |       421 |     287371 |
| https://192.168.10.13:2379 | e8d9a97b17f26476 |  3.3.18 |  762 kB |     false |       421 |     287371 |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+

# If you hit the error: cannot unmarshal event: proto: wrong wireType = 0 for field Key
# see this fix: https://blog.csdn.net/dengxiafubi/article/details/102627341

# Query keys through the etcd v3 API
ETCDCTL_API=3 etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem get / --prefix --keys-only
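If etcdctl hangs or errors, it can save time to first confirm each endpoint's port is reachable at the TCP level before debugging TLS. A minimal sketch using bash's built-in `/dev/tcp` (the endpoints are the ones above):

```shell
# Check TCP reachability of each etcd endpoint before debugging etcdctl/TLS.
check_port() {
    timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
for host in 192.168.10.11 192.168.10.12 192.168.10.13; do
    if check_port "$host" 2379; then
        echo "$host:2379 reachable"
    else
        echo "$host:2379 NOT reachable"
    fi
done
```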

3.2. Check flanneld

systemctl status flanneld|grep Active

ip addr show|grep flannel
ip addr show|grep docker

cat /run/flannel/docker

cat /run/flannel/subnet.env
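subnet.env is a plain KEY=VALUE file; it is how flanneld hands the node's subnet to docker, and scripts can source it directly. A sketch (the example values in the comment are illustrative, not from this cluster):

```shell
# subnet.env is plain KEY=VALUE, so it can be sourced directly.
# Example contents (illustrative):
#   FLANNEL_NETWORK=172.30.0.0/16
#   FLANNEL_SUBNET=172.30.12.1/24
#   FLANNEL_MTU=1450
#   FLANNEL_IPMASQ=false
. /run/flannel/subnet.env
echo "this node's pod subnet: ${FLANNEL_SUBNET}, MTU: ${FLANNEL_MTU}"
```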

#### List the directories in the key-value store
etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/flanneld.pem \
--key-file=/etc/kubernetes/ssl/flanneld-key.pem ls -r
## Output:
/kubernetes
/kubernetes/network
/kubernetes/network/config
/kubernetes/network/subnets
/kubernetes/network/subnets/172.30.12.0-24
/kubernetes/network/subnets/172.30.43.0-24
/kubernetes/network/subnets/172.30.9.0-24


#### Check the pod network configuration
etcdctl \
--endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/flanneld.pem \
--key-file=/etc/kubernetes/ssl/flanneld-key.pem \
get /kubernetes/network/config
#### List the allocated pod subnets
etcdctl \
--endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/flanneld.pem \
--key-file=/etc/kubernetes/ssl/flanneld-key.pem \
ls /kubernetes/network/subnets
#### Check the node IP and flannel interface behind a given pod subnet
etcdctl \
--endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/flanneld.pem \
--key-file=/etc/kubernetes/ssl/flanneld-key.pem \
get /kubernetes/network/subnets/172.30.74.0-24

3.3. Check nginx and keepalived

ps -ef|grep nginx
ps -ef|grep keepalived
netstat -lntup|grep nginx
ip add|grep 192.168			# check the VIP; output:
	inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute ens32
	inet 192.168.10.100/32 scope global ens32
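The VIP 192.168.10.100 fronts the three apiservers through nginx's stream module on port 8443 (as the cluster-info output later shows). A hypothetical sketch of the relevant nginx.conf fragment; the actual file shipped by the playbook may differ:

```nginx
# Hypothetical sketch: TCP load balancing of kube-apiserver behind the VIP.
stream {
    upstream kube-apiserver {
        server 192.168.10.11:6443;
        server 192.168.10.12:6443;
        server 192.168.10.13:6443;
    }
    server {
        listen 8443;
        proxy_pass kube-apiserver;
    }
}
```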

3.4. Check kube-apiserver

netstat -lntup | grep kube-apiser
# Output:
tcp        0      0 192.168.10.11:6443      0.0.0.0:*               LISTEN      115454/kube-apiserv
        
kubectl cluster-info
# Output:
Kubernetes master is running at https://192.168.10.100:8443
Elasticsearch is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


kubectl get all --all-namespaces


kubectl get cs
# Output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"} 

#### Dump the data kube-apiserver has written to etcd
ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
get /registry/ --prefix --keys-only

#### Error you may hit
unexpected ListAndWatch error: storage/cacher.go:/secrets: Failed to list *core.Secret: unable to transform key "/registry/secrets/kube-system/bootstrap-token-2z8s62": invalid padding on input
##### Cause: the kube-apiserver encryption tokens differ across the cluster; the secret in encryption-config.yaml must be identical on every master

3.5. Check kube-controller-manager

netstat -lntup|grep kube-control
# Output:
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      117775/kube-control 
tcp6       0      0 :::10257                :::*                    LISTEN      117775/kube-control

kubectl get cs

kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
# Output; kube12 has become the leader
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube12_753e65bf-1e65-11ea-b9c4-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:32:49Z","renewTime":"2019-12-14T12:43:20Z","leaderTransitions":0}'
  creationTimestamp: "2019-12-14T11:32:49Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "8282"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 753d2be7-1e65-11ea-b980-000c29e3f448

3.6. Check kube-scheduler

netstat -lntup|grep kube-sche
# Output:
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      119678/kube-schedul 
tcp6       0      0 :::10259                :::*                    LISTEN      119678/kube-schedul

kubectl get cs

kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
# Output; kube12 has become the leader
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube12_89050e00-1e65-11ea-8f5e-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:33:23Z","renewTime":"2019-12-14T12:45:22Z","leaderTransitions":0}'
  creationTimestamp: "2019-12-14T11:33:23Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "8486"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 899d1625-1e65-11ea-b980-000c29e3f448

3.7. Check kubelet

netstat -lntup|grep kubelet
# Output:
tcp        0      0 127.0.0.1:35173         0.0.0.0:*               LISTEN      123215/kubelet      
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      123215/kubelet      
tcp        0      0 192.168.10.11:10250     0.0.0.0:*               LISTEN      123215/kubelet 

# List the bootstrap tokens that were created
kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
hf0fa4.ta6haf1wsz1fnobf   22h       2019-12-15T19:33:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube11
oftjgn.01tob30h8v9l05lm   22h       2019-12-15T19:33:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube12
zuezc4.7kxhmayoue16pycb   22h       2019-12-15T19:33:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube13

kubectl get csr
# All requests approved
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-Oarn7xdWDiq7-CLn7yrE3fkTtmJtoSenmlGj3XL85lM   72m   system:bootstrap:zuezc4   Approved,Issued
node-csr-hJrfQXlhIqJTROLD1ExmcXq74J78uu6rjHuh5ZyVlMg   72m   system:bootstrap:zuezc4   Approved,Issued
node-csr-s-BAbqc8hOKfDj8xqdJ6fWjwdustqG9LhwbpYxa9x68   72m   system:bootstrap:zuezc4   Approved,Issued
	
kubectl get nodes
# Output:
NAME            STATUS   ROLES    AGE   VERSION
192.168.10.11   Ready    <none>   73m   v1.14.8
192.168.10.12   Ready    <none>   73m   v1.14.8
192.168.10.13   Ready    <none>   73m   v1.14.8

systemctl status kubelet
#### 1. Error:
 Failed to connect to apiserver: the server has asked for the client to provide credentials
#### Check whether the apiserver itself is healthy; if it is, regenerate the kubelet-bootstrap.kubeconfig file and restart kubelet

#### 2. kubelet will not start and logs no error
# Check that "address" in kubelet.config.json (e.g. "192.168.10.12") is this machine's IP

#### 3. Error:
failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "192.168.10.12" is forbidden: User "system:node:192.168.10.11" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Unable to register node "192.168.10.12" with API server: nodes "192.168.10.12" is forbidden: node "192.168.10.11" is not allowed to modify node "192.168.10.12"
# Check that "address" in kubelet.config.json (e.g. "192.168.10.12") is this machine's IP

3.8. Check kube-proxy

netstat -lnpt|grep kube-proxy
# Output:
tcp        0      0 192.168.10.11:10249     0.0.0.0:*               LISTEN      125459/kube-proxy   
tcp        0      0 192.168.10.11:10256     0.0.0.0:*               LISTEN      125459/kube-proxy   
tcp6       0      0 :::32698                :::*                    LISTEN      125459/kube-proxy   
tcp6       0      0 :::32699                :::*                    LISTEN      125459/kube-proxy   
tcp6       0      0 :::32700                :::*                    LISTEN      125459/kube-proxy

ipvsadm -ln

4. Check the add-ons

4.1. Check coredns

kubectl  get pods -n kube-system	# check that all pods have started

# Verify from inside a container
kubectl run dig --rm -it --image=docker.io/azukiapp/dig /bin/sh
# ping baidu
ping www.baidu.com
PING www.baidu.com (180.101.49.11): 56 data bytes
64 bytes from 180.101.49.11: seq=0 ttl=127 time=10.772 ms
64 bytes from 180.101.49.11: seq=1 ttl=127 time=9.347 ms
64 bytes from 180.101.49.11: seq=2 ttl=127 time=10.937 ms
64 bytes from 180.101.49.11: seq=3 ttl=127 time=11.149 ms
64 bytes from 180.101.49.11: seq=4 ttl=127 time=10.677 ms

cat /etc/resolv.conf 	# inspect the container's resolver config
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
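With `ndots:5`, any name containing fewer than five dots is first tried with each search suffix appended, which is why short names like `kubernetes` resolve below. An illustrative sketch of the expansion order (pure string manipulation, not an actual DNS lookup):

```shell
# Illustrative: the candidate FQDNs the resolver tries for an unqualified
# name, given the search list from resolv.conf above (name has < ndots dots).
name="kubernetes"
for suffix in default.svc.cluster.local svc.cluster.local cluster.local; do
    echo "try: ${name}.${suffix}"
done
echo "try: ${name}."   # finally, the name as-is
```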

nslookup www.baidu.com
# Output:
Server:         10.254.0.2
Address:        10.254.0.2#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 180.101.49.12
Name:   www.a.shifen.com
Address: 180.101.49.11
    
nslookup kubernetes.default	# run
Server:         10.254.0.2
Address:        10.254.0.2#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.254.0.1

nslookup kubernetes		# run
Server:         10.254.0.2
Address:        10.254.0.2#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.254.0.1

4.2. Check the dashboard

### Accessing https://192.168.10.13:10250/metrics in Chrome returns Unauthorized; a client certificate is required. Generate and import one as follows.

#1. On Windows, install a JDK and run the keytool utility from its bin directory. Copy ca.pem down (here to E:\), then:
keytool -import -v -trustcacerts -alias appmanagement -file "E:\ca.pem" -storepass password -keystore cacerts	# import the certificate
keytool -delete -v -alias appmanagement -storepass password -keystore cacerts	# delete the certificate

#2. Then, on a Linux machine, bundle the admin certificate into a PKCS#12 file:
openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem

#3. Import admin.pfx into the browser, and the pages will load normally.

# Then open the dashboard
https://192.168.10.13:32700
#### or
https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
#### A kubeconfig is required; one is auto-generated at /etc/kubernetes/dashboard.kubeconfig
# The login token is saved in {{k8s_home}}/dashboard_login_token.txt, or fetch it with:
kubectl -n kube-system describe secret `kubectl -n kube-system get secret|grep dashboard | awk '{print $1}'`

4.3. Check traefik

# traefik is deployed as a DaemonSet, one pod per node
kubectl  get pod,deploy,daemonset,service,ingress -n kube-system | grep traefik
### Output:
pod/traefik-ingress-controller-gl7vs        1/1     Running   0          43m
pod/traefik-ingress-controller-qp26j        1/1     Running   0          43m
pod/traefik-ingress-controller-x99ls        1/1     Running   0          43m
daemonset.extensions/traefik-ingress-controller   3         3         3       3            3           <none>          43m
service/traefik-ingress-service   ClusterIP   10.254.148.220   <none>        80/TCP,8080/TCP          43m
service/traefik-web-ui            ClusterIP   10.254.139.95    <none>        80/TCP                   43m
ingress.extensions/traefik-web-ui   traefik-ui             80      43m

# Requests to each node return:
curl -H 'host:traefik-ui' 192.168.10.11
<a href="/dashboard/">Found</a>.
curl -H 'host:traefik-ui' 192.168.10.12
<a href="/dashboard/">Found</a>.
curl -H 'host:traefik-ui' 192.168.10.13
<a href="/dashboard/">Found</a>.

# Check the listening ports
netstat -lntup|grep traefik
tcp6       0      0 :::8080                 :::*                    LISTEN      66426/traefik       
tcp6       0      0 :::80                   :::*                    LISTEN      66426/traefik 

# Then open http://192.168.10.11:8080/

4.4. Check metrics

kubectl top node

### Error: Error from server (Forbidden): forbidden: User "system:anonymous" cannot get path "/apis/metrics.k8s.io/v1beta1"
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
### Workaround (note: granting cluster-admin to system:anonymous is insecure; avoid it outside a lab)
kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin

### You may also hit: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io), which usually means the metrics-server pod is not yet ready

4.5. Check EFK

es:		http://192.168.10.11:32698/
Kibana:	http://192.168.10.11:32699

5. Verify the cluster

# For deploying glusterfs, see: https://www.cnblogs.com/fan-gx/p/12101686.html

kubectl create ns myapp

kubectl apply -f nginx.yaml 
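The contents of nginx.yaml are not shown; a hypothetical reconstruction consistent with the resources listed below (Deployment `my-nginx`, ClusterIP Service on port 80, Ingress host `myapp.nginx.com`, extensions/v1beta1 Ingress as on k8s 1.14) might be:

```yaml
# Hypothetical sketch of nginx.yaml, matching the resources listed below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: myapp
spec:
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx
  namespace: myapp
spec:
  rules:
  - host: myapp.nginx.com
    http:
      paths:
      - backend:
          serviceName: my-nginx
          servicePort: 80
```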

kubectl get pod,svc,ing -n myapp -o wide
### Output:
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
pod/my-nginx-69f8f65796-zd777   1/1     Running   0          19m   172.30.36.15   192.168.10.11   <none>           <none>

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/my-nginx   ClusterIP   10.254.131.1   <none>        80/TCP    21m   app=my-nginx

NAME                          HOSTS             ADDRESS   PORTS   AGE
ingress.extensions/my-nginx   myapp.nginx.com             80      21m

# Verify access works
curl http://172.30.36.15
curl http://10.254.131.1
curl -H "host:myapp.nginx.com" 192.168.10.11
### Open http://192.168.10.100:8088/ in a browser
### The deployment already has nginx proxying the traefik address; see /data/nginx/conf/nginx.conf

kubectl exec -it my-nginx-69f8f65796-zd777 -n myapp bash
echo "hello world" >/usr/share/nginx/html/index.html	# then http://192.168.10.100:8088/ shows: hello world

6. Restart all components

systemctl restart etcd && systemctl status etcd

systemctl restart flanneld && systemctl status flanneld

systemctl restart docker && systemctl status docker

systemctl restart nginx && systemctl status nginx

systemctl restart keepalived && systemctl status keepalived

systemctl restart kube-apiserver && systemctl status kube-apiserver

systemctl restart kube-controller-manager && systemctl status kube-controller-manager

systemctl restart kube-scheduler && systemctl status kube-scheduler

systemctl restart kubelet && systemctl status kubelet

systemctl restart kube-proxy && systemctl status kube-proxy

Author: Fantasy

Source: http://dwz.date/bWku
