Common kubectl Commands

  1. Common resources:
    Resource                  Short name           API version                     Namespaced
    nodes                     no                   v1                              no
    namespaces                ns                   v1                              no
    configmaps                cm                   v1                              yes
    secrets                   secret               v1                              yes
    persistentvolumes         pv                   v1                              no
    persistentvolumeclaims    pvc                  v1                              yes
    jobs                      job                  batch/v1                        yes
    cronjobs                  cj                   batch/v1                        yes
    pods                      po                   v1                              yes
    deployments               deploy               apps/v1                         yes
    daemonsets                ds                   apps/v1                         yes
    horizontalpodautoscalers  hpa                  autoscaling/v2                  yes
    services                  svc                  v1                              yes
    statefulsets              sts                  apps/v1                         yes
    storageclasses            sc                   storage.k8s.io/v1               no
    ingresses                 ing                  networking.k8s.io/v1            yes
    serviceaccounts           sa                   v1                              yes
    roles                     role                 rbac.authorization.k8s.io/v1    yes
    rolebindings              rolebinding          rbac.authorization.k8s.io/v1    yes
    clusterroles              clusterrole          rbac.authorization.k8s.io/v1    no
    clusterrolebindings       clusterrolebindings  rbac.authorization.k8s.io/v1    no
  2. Create a resource:
    kubectl create -f <yaml-file>

  3. Update a resource:
    kubectl apply -f <yaml-file>

  4. Delete nodes/resources:

    kubectl delete -f <yaml-file>
    kubectl delete no/deploy/po/svc/cm/... <name>
    kubectl delete no/deploy/po/svc/cm/... <name> --force
  5. Query nodes/resources:

    1. List resources:
      # resources in a given namespace
      kubectl get deploy/po/svc/cm/... -n <namespace>
      # resources across all namespaces
      kubectl get deploy/po/svc/cm/... --all-namespaces
      # extended output
      kubectl get deploy/po/svc/cm/... -o wide
      # watch for changes
      kubectl get deploy/po/svc/cm/... --watch
    2. Show details and events:
      kubectl describe deploy/po/svc/cm/... <name>
    3. Show pod logs:
      kubectl logs <pod-name>
      kubectl logs -f <pod-name>
    4. List available resource types:
      kubectl api-resources
  6. Node operations:

    1. Labels:
      1. Add labels:
        # set arbitrary labels
        kubectl label no master key1=value1 key2=value2
        # give a worker node its role label
        kubectl label no node1 node-role.kubernetes.io/worker=
      2. Remove labels:
        kubectl label no master key1- key2-
      3. Show node labels:
        kubectl get no --show-labels
    2. Taints:
      1. Show:
        kubectl describe node master | grep Taints
      2. Set:
        kubectl taint node master taint-name=true:NoSchedule
      3. Remove:
        kubectl taint node master taint-name-
  7. Open a shell inside a pod:
    kubectl exec -it <pod-name> -- /bin/sh

  8. Run a temporary pod:
    kubectl run test-pod -it --rm --image=busybox --restart=Never -n test-ns -- /bin/sh

  9. Copy a file/directory from a pod to the host:
    kubectl cp <pod-name>:/path/to/pod/file /path/to/host

  10. Port forwarding:
    kubectl port-forward <pod-name> <local-port>:<pod-port> -n <namespace> --address 0.0.0.0

Namespace

apiVersion: v1
kind: Namespace
metadata:
    name: test-ns

Configuration - ConfigMap/Secret

  1. Config file - ConfigMap

     apiVersion: v1
     kind: ConfigMap
     metadata:
         name: test-cm
         namespace: test-ns
     data:
         mysql.conf: |
             host=127.0.0.1
             port=3306
         redis.conf: |
             host=127.0.0.1
             port=6379
    1. Create
      kubectl create configmap configmap-test --from-file=/path/to/config-files
    2. Use
      apiVersion: apps/v1
      kind: Deployment
      metadata:
       name: test-deploy
       namespace: test-ns
       labels:
           app: test
      spec:
       selector:
           matchLabels:
               app: test
       template:
           metadata:
               labels:
                   app: test
           spec:
               containers:
                   -   name: test-deploy-container-name
                       image: busybox
      #                    env:
      #                        -   name: REDIS
      #                            valueFrom:
      #                                configMapKeyRef:
      #                                    name: test-cm
      #                                    key: redis.conf
                       command: ["sh", "-c", "ls -lah /tmp/config"]
                       volumeMounts:
                           -   name: test-cm-volume
                               mountPath: /tmp/config
               volumes:
                   -   name: test-cm-volume
                       configMap:
                           name: test-cm
  2. Sensitive data - Secret

    1. Base64-encode the sensitive fields (encoding, not encryption)
      echo 'admin123' | base64
    2. Write the manifest
       apiVersion: v1
       kind: Secret
       metadata:
           name: test-secret
           namespace: test-ns
       type: Opaque
       data:
           password: YWRtaW4xMjMK
    3. Inspect
      kubectl get secret test-secret -o yaml
    4. Use
       apiVersion: apps/v1
       kind: Deployment
       metadata:
           name: test-deploy
           namespace: test-ns
           labels:
               app: test
       spec:
           selector:
               matchLabels:
                   app: test
           template:
               metadata:
                   labels:
                       app: test
               spec:
                   containers:
                       -   name: test-deploy-container-name
                           image: busybox
       #                    env:
       #                        -   name: PASSWORD
       #                            valueFrom:
       #                                secretKeyRef:
       #                                    name: test-secret
       #                                    key: password
                           command: ["sh", "-c", "ls -lah /tmp/config && cat /tmp/config/password"]
                           volumeMounts:
                               -   name: test-secret-volume
                                   mountPath: /tmp/config
                   volumes:
                       -   name: test-secret-volume
                           secret:
                               secretName: test-secret
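One pitfall with the base64 step above: plain `echo` appends a newline, so `YWRtaW4xMjMK` actually decodes to `admin123` plus `\n`. When the consuming application is sensitive to a trailing newline, encode with `echo -n`. A quick sanity check (runs anywhere, no cluster needed):

```shell
# plain echo embeds a trailing '\n' in the encoded value
with_nl=$(echo 'admin123' | base64)
# echo -n encodes exactly the 8 bytes of the password
no_nl=$(echo -n 'admin123' | base64)
echo "$with_nl"             # YWRtaW4xMjMK  (the final K encodes '\n')
echo "$no_nl"               # YWRtaW4xMjM=
echo "$no_nl" | base64 -d   # round-trips back to admin123
```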

Persistent Volume - PV

    apiVersion: v1
    kind: PersistentVolume
    metadata:
        name:  test-pv
    spec:
        capacity:
            storage: 1Gi
        accessModes:
            - ReadWriteOnce  # ReadOnlyMany / ReadWriteMany
        persistentVolumeReclaimPolicy: Recycle  # Retain / Delete
        nfs:
            server: 10.0.0.11
            path: /home/vagrant/nfs/server

Volume Claim - PVC

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
        name: test-pvc
        namespace: test-ns
    spec:
        accessModes:
            - ReadWriteOnce
        resources:
            requests:
                storage: 1Gi
        #storageClassName: test-sc  # use a StorageClass
  • Test
      apiVersion: apps/v1
      kind: Deployment
      metadata:
          name: test-deploy
          namespace: test-ns
          labels:
              app: nginx
      spec:
          selector:
              matchLabels:
                  app: nginx
          template:
              metadata:
                  labels:
                      app: nginx
              spec:
                  containers:
                      -   name: test-deploy-container-name
                          image: nginx:1.7.9
                          imagePullPolicy: IfNotPresent
                          ports:
                              -   containerPort: 80
                                  name: t-d-p
                          volumeMounts:
                              -   name: test-deploy-volume
                                  mountPath: /usr/share/nginx/html
                                  subPath: nginx
                  volumes:
                      -   name: test-deploy-volume
                          persistentVolumeClaim:
                              claimName: test-pvc

Tasks - Job/CronJob

  1. One-off task - Job

     apiVersion: batch/v1
     kind: Job
     metadata:
         name: test-job
         namespace: test-ns
     spec:
         template:
             spec:
                 restartPolicy: Never
                 containers:
                     -   name: test-job-container-name
                         env:
                             -   name: STR
                                 value: "hello job"
                         command: ["/bin/sh", "-c", "echo $STR > /tmp/job/job.txt"]
                         image: busybox
                         imagePullPolicy: IfNotPresent
                         volumeMounts:
                             -   name: test-job-volume
                                 mountPath: /tmp/job
     #                            subPath: tmp
                 volumes:
                     -   name: test-job-volume
                         hostPath:
                             path: /home/vagrant/usage/job
  2. Scheduled task - CronJob

     apiVersion: batch/v1
     kind: CronJob
     metadata:
         name: test-cj
         namespace: test-ns
     spec:
         schedule: "*/1 * * * *"
         jobTemplate:
             spec:
                 template:
                     spec:
                         restartPolicy: OnFailure
                         containers:
                             -   name: test-cj-container-name
                                 env:
                                     -   name: STR
                                         value: "hello cronJob"
                                 command: ["/bin/sh", "-c", "echo $STR >> /tmp/cronjob/cronjob.txt"]
                                 image: busybox
                                 imagePullPolicy: IfNotPresent
                                 volumeMounts:
                                     -   name: test-cj-volume
                                         mountPath: /tmp/cronjob
     #                                    subPath: tmp
                         volumes:
                             -   name: test-cj-volume
                                 hostPath:
                                     path: /home/vagrant/usage/cronJob
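The `schedule` field uses the standard five cron fields: minute, hour, day-of-month, month, day-of-week; `*/1 * * * *` therefore fires every minute. A minimal sketch of how a step expression like `*/N` matches (`matches_step` is a hypothetical helper supporting only `*`, `*/N`, and literal values, not the full cron syntax):

```shell
# matches_step <value> <field> - succeeds when <value> matches the cron field
matches_step() {
    case "$2" in
        '*')   return 0 ;;                          # wildcard matches everything
        '*/'*) [ $(( $1 % ${2#*/} )) -eq 0 ] ;;     # step: value divisible by N
        *)     [ "$1" -eq "$2" ] ;;                 # literal value
    esac
}
matches_step 7 '*/1'   && echo "minute 7 matches */1"
matches_step 30 '*/15' && echo "minute 30 matches */15"
matches_step 7 '*/15'  || echo "minute 7 does not match */15"
```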

Rolling Updates - Deployment

    apiVersion: apps/v1
    kind: Deployment
    metadata:
        name: test-deploy
        namespace: test-ns
        labels:
            app: nginx
    spec:
        replicas: 1
        revisionHistoryLimit: 10
        selector:
            matchLabels:
                app: nginx
        template:
            metadata:
                labels:
                    app: nginx
            spec:
                initContainers:
                    -   name: test-deploy-init-container-name
                        image: busybox
                        env:
                            -   name: STR
                                value: "hello nginx"
                        command: ["sh", "-c", "echo $STR > /usr/share/nginx/html/index.html"]
                        volumeMounts:
                            -   name: test-deploy-volume
                                mountPath: /usr/share/nginx/html
                #                    securityContext:
                #                        privileged: true
                containers:
                    -   name: nginx-pod
                        image: nginx:1.7.9
                        imagePullPolicy: IfNotPresent
                        ports:
                            -   containerPort: 80
                                name: t-d-p
                    #                    env:
                    #                        -   name: TEST
                    #                            value: test
                        readinessProbe:
                            failureThreshold: 3
                            httpGet:
                                path: /
                                port: 80
                                scheme: HTTP
                            initialDelaySeconds: 10
                            periodSeconds: 5
                            successThreshold: 1
                            timeoutSeconds: 5
                        livenessProbe:
                            failureThreshold: 3
                            httpGet:
                                path: /
                                port: 80
                                scheme: HTTP
                            initialDelaySeconds: 10
                            periodSeconds: 5
                            successThreshold: 1
                            timeoutSeconds: 5
                        resources:
                            limits:
                                cpu: 100m
                                memory: 256Mi
                            requests:
                                cpu: 100m
                                memory: 256Mi
                        volumeMounts:
                            -   name: test-deploy-volume
                                mountPath: /usr/share/nginx/html
                #                            subPath: nginx
                nodeSelector:
                    kubernetes.io/hostname: master
                tolerations:
                    -   key: node-role.kubernetes.io/control-plane
                        operator: Exists
                        effect: NoSchedule
                #            securityContext:
                #                fsGroup: 472
                #                runAsUser: 472
                volumes:
                    -   name: test-deploy-volume
                        hostPath:
                            path: /home/vagrant/usage/www
    #                -   name: test-deploy-volume
    #                    persistentVolumeClaim:
    #                        claimName: my-pvc
    #                -   name: test-deploy-volume
    #                    configMap:
    #                        name: my-configMap
  1. View revision history

    # all revisions
    kubectl rollout history deployment test-deploy
    # a specific revision
    kubectl rollout history deployment test-deploy --revision=1
  2. Roll back

    # to the previous revision
    kubectl rollout undo deployment test-deploy
    # to a specific revision
    kubectl rollout undo deployment test-deploy --to-revision=1

Daemons - DaemonSet

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
        name: test-ds
        labels:
            app: nginx
    spec:
        selector:
            matchLabels:
                app: nginx
        template:
            metadata:
                labels:
                    app: nginx
            spec:
                initContainers:
                    -   name: test-ds-init-container-name
                        image: busybox
                        env:
                            -   name: STR
                                value: "hello nginx"
                        command: ["sh", "-c", "echo $STR > /usr/share/nginx/html/index.html"]
                        volumeMounts:
                            -   name: test-ds-volume
                                mountPath: /usr/share/nginx/html
                #                    securityContext:
                #                        privileged: true
                containers:
                    -   name: nginx-pod
                        image: nginx:1.7.9
                        imagePullPolicy: IfNotPresent
                        ports:
                            -   containerPort: 80
                                name: t-ds-p
                        #                    env:
                        #                        -   name: TEST
                        #                            value: test
                        readinessProbe:
                            failureThreshold: 3
                            httpGet:
                                path: /
                                port: 80
                                scheme: HTTP
                            initialDelaySeconds: 10
                            periodSeconds: 5
                            successThreshold: 1
                            timeoutSeconds: 5
                        livenessProbe:
                            failureThreshold: 3
                            httpGet:
                                path: /
                                port: 80
                                scheme: HTTP
                            initialDelaySeconds: 10
                            periodSeconds: 5
                            successThreshold: 1
                            timeoutSeconds: 5
                        resources:
                            limits:
                                cpu: 100m
                                memory: 256Mi
                            requests:
                                cpu: 100m
                                memory: 256Mi
                        volumeMounts:
                            -   name: test-ds-volume
                                mountPath: /usr/share/nginx/html
                #                            subPath: nginx
                tolerations:
                    -   key: node-role.kubernetes.io/control-plane
                        operator: Exists
                        effect: NoSchedule
                #            securityContext:
                #                fsGroup: 472
                #                runAsUser: 472
                volumes:
                    -   name: test-ds-volume
                        hostPath:
                            path: /home/vagrant/usage/www
    #                -   name: test-ds-volume
    #                    persistentVolumeClaim:
    #                        claimName: my-pvc
    #                -   name: test-ds-volume
    #                    configMap:
    #                        name: my-configMap

Service Exposure - Service

  1. Pod-to-pod DNS (the per-pod form requires a headless Service):
    {podName}.{serviceName}.{namespace}.svc.cluster.local

  2. Create a Service:

     apiVersion: v1
     kind: Service
     metadata:
         name: test-svc
         namespace: test-ns
     spec:
         selector:
             app: nginx
         type: NodePort
         ports:
             -   name: test-svc-port
                 protocol: TCP
                 port: 80
                 targetPort: t-d-p
                 nodePort: 30080
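The in-cluster DNS names follow a fixed pattern, so they can be assembled from the names used above (a sketch; `mysql-0` is a hypothetical pod name, and the per-pod form only resolves behind a headless Service):

```shell
svc=test-svc; ns=test-ns; pod=mysql-0
# a Service resolves cluster-wide at <service>.<namespace>.svc.cluster.local
svc_fqdn="${svc}.${ns}.svc.cluster.local"
# individual pods behind a headless Service get <pod>.<service>.<namespace>.svc.cluster.local
pod_fqdn="${pod}.${svc}.${ns}.svc.cluster.local"
echo "$svc_fqdn"   # test-svc.test-ns.svc.cluster.local
echo "$pod_fqdn"   # mysql-0.test-svc.test-ns.svc.cluster.local
```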

Dynamic Volumes - StorageClass (auto-creates PVs)

  1. Install nfs
    See: nfs file server

  2. Create the nfs-client-provisioner
    vim nfs-client-provisioner.yaml

     apiVersion: apps/v1
     kind: Deployment
     metadata:
         name: nfs-client-provisioner
         labels:
             app: nfs-client-provisioner
         namespace: test-ns
     spec:
         replicas: 1
         strategy:
             type: Recreate
         selector:
             matchLabels:
                 app: nfs-client-provisioner
         template:
             metadata:
                 labels:
                     app: nfs-client-provisioner
             spec:
                 serviceAccountName: nfs-client-provisioner
                 containers:
                     - name: nfs-client-provisioner
                       image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
                       volumeMounts:
                           - name: nfs-client-root
                             mountPath: /persistentvolumes
                       env:
                           - name: PROVISIONER_NAME
                             value: test-nfs-client-provisioner
                           - name: NFS_SERVER
                             value: 10.0.0.11
                           - name: NFS_PATH
                             value: /home/vagrant/nfs/server
                 volumes:
                     - name: nfs-client-root
                       nfs:
                           server: 10.0.0.11
                           path: /home/vagrant/nfs/server
     ---
     apiVersion: v1
     kind: ServiceAccount
     metadata:
         name: nfs-client-provisioner
         namespace: test-ns
     ---
     kind: ClusterRole
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
         name: nfs-client-provisioner-runner
     rules:
         - apiGroups: [""]
           resources: ["nodes"]
           verbs: ["get", "list", "watch"]
         - apiGroups: [""]
           resources: ["persistentvolumes"]
           verbs: ["get", "list", "watch", "create", "delete"]
         - apiGroups: [""]
           resources: ["persistentvolumeclaims"]
           verbs: ["get", "list", "watch", "update"]
         - apiGroups: ["storage.k8s.io"]
           resources: ["storageclasses"]
           verbs: ["get", "list", "watch"]
         - apiGroups: [""]
           resources: ["events"]
           verbs: ["create", "update", "patch"]
     ---
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
         name: run-nfs-client-provisioner
     subjects:
         - kind: ServiceAccount
           name: nfs-client-provisioner
           namespace: test-ns
     roleRef:
         kind: ClusterRole
         name: nfs-client-provisioner-runner
         apiGroup: rbac.authorization.k8s.io
     ---
     kind: Role
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
         name: leader-locking-nfs-client-provisioner
         namespace: test-ns
     rules:
         - apiGroups: [""]
           resources: ["endpoints"]
           verbs: ["get", "list", "watch", "create", "update", "patch"]
     ---
     kind: RoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
         name: leader-locking-nfs-client-provisioner
         namespace: test-ns
     subjects:
         - kind: ServiceAccount
           name: nfs-client-provisioner
           namespace: test-ns
     roleRef:
         kind: Role
         name: leader-locking-nfs-client-provisioner
         apiGroup: rbac.authorization.k8s.io

    kubectl apply -f nfs-client-provisioner.yaml

  3. Create the StorageClass

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
         name: test-sc
         annotations:
             storageclass.kubernetes.io/is-default-class: "true"
     provisioner: test-nfs-client-provisioner

Elastic Scale-out - StatefulSet

  1. Create a PV or StorageClass (omitted)
    PVC name format: {volumeClaimTemplates.name}-{podName}

  2. Create a headless Service (omitted)
    Pod-to-pod DNS: {podName}.{serviceName}.{namespace}.svc.cluster.local

  3. Create the StatefulSet:

     apiVersion: apps/v1
     kind: StatefulSet
     metadata:
         name: test-sts
         namespace: test-ns
     spec:
         serviceName: "test-svc"
         replicas: 1
         selector:
             matchLabels:
                 app: mysql
         template:
             metadata:
                 labels:
                     app: mysql
             spec:
                 initContainers:
                     -   name: test-sts-init-container-name
                         image: busybox
                         command: [ "sh","-c","echo $(hostname)" ]
                 containers:
                     -   name: test-sts-container-name
                         image: mysql:5.7
                         env:
                             -   name: MYSQL_ROOT_PASSWORD
                                 value: "123456"
                         ports:
                             -   name: t-s-p
                                 containerPort: 3306
                         volumeMounts:
                             -   name: test-sts-vct
                                 mountPath: /var/lib/mysql
                 nodeSelector:
                     kubernetes.io/hostname: master
                 volumes:
                     -   name: test-sts-volume
                         hostPath:
                             path: /home/vagrant/usage/mysql
     #                -   name: test-sts-volume
     #                    persistentVolumeClaim:
     #                        claimName: my-pvc
     #                -   name: test-sts-volume
     #                    configMap:
     #                        name: my-configMap
         volumeClaimTemplates:
             -   metadata:
                     name: test-sts-vct
                 spec:
                     accessModes: [ "ReadWriteOnce" ]
                     resources:
                         requests:
                             storage: 1Gi
                     storageClassName: test-sc
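The names the StatefulSet above produces are deterministic: pods are numbered from 0, and each pod's PVC follows the `{volumeClaimTemplates.name}-{podName}` format from step 1. A sketch using this manifest's names:

```shell
sts=test-sts; vct=test-sts-vct; ordinal=0
pod="${sts}-${ordinal}"   # pods: test-sts-0, test-sts-1, ...
pvc="${vct}-${pod}"       # one PVC per pod from the volumeClaimTemplates
echo "$pod"               # test-sts-0
echo "$pvc"               # test-sts-vct-test-sts-0
```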

Domain Binding - Ingress

  1. Install the ingress-nginx controller via kuboard (omitted)

  2. Find the ingress-nginx NodePorts:
    kubectl get svc -n ingress-nginx | grep NodePort

  3. Set up a load-balancing server (e.g. nginx) that resolves the domain nginx.lee.com to the nodePort ports (e.g. 31893, 30736) backing ports 80 and 443 on the node where ingress-nginx runs:

     server {
         listen 80;
         server_name nginx.lee.com;
         location / {
             proxy_pass http://10.0.0.11:31893;
             proxy_redirect off;
             proxy_set_header Host $host:$server_port;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_read_timeout 90;
         }
     }
  4. Create the Ingress:

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       #annotations:
         #kubernetes.io/ingress.class: nginx
       name: test-ing
       namespace: test-ns
     spec:
       ingressClassName: "ingress-nginx"
       rules:
         - host: nginx.lee.com
           http:
             paths:
               - backend:
                   service:
                     name: test-svc
                     port:
                       number: 80
                 path: /
                 pathType: Prefix

Elastic Scaling - HPA (auto-scales Deployments and StatefulSets)

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
        name: test-hpa
        namespace: test-ns
    spec:
        scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: test-deploy
        minReplicas: 1
        maxReplicas: 10
        metrics:
            -   resource:
                    name: cpu
                    target:
                        averageUtilization: 10  # scale out when average CPU exceeds 10%
                        type: Utilization
                type: Resource
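The scaling rule behind `averageUtilization` is `desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)`, clamped to `[minReplicas, maxReplicas]`. A sketch with the 10% target from the manifest above (integer ceiling via shell arithmetic; `desired_replicas` is an illustrative helper, not a kubectl command):

```shell
# desired_replicas <current> <utilization%> <target%> <min> <max>
desired_replicas() {
    local current=$1 util=$2 target=$3 min=$4 max=$5
    local d=$(( (current * util + target - 1) / target ))   # ceil(current*util/target)
    [ "$d" -lt "$min" ] && d=$min
    [ "$d" -gt "$max" ] && d=$max
    echo "$d"
}
desired_replicas 2 25 10 1 10   # 2 pods at 25% CPU vs a 10% target -> 5
desired_replicas 2 90 10 1 10   # raw value 18, capped at maxReplicas -> 10
```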

Access Control - RBAC

  1. Service account

     apiVersion: v1
     kind: ServiceAccount
     metadata:
         name: test-sa
         namespace: test-ns
  2. Role

     apiVersion: rbac.authorization.k8s.io/v1
     kind: Role
     metadata:
         name: test-role
         namespace: test-ns
     rules:
         - apiGroups: [""]
           resources: ["pods"]
           verbs: ["get", "watch", "list"]
         - apiGroups: ["apps"]
           resources: ["deployments"]
           verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  3. Role binding

     apiVersion: rbac.authorization.k8s.io/v1
     kind: RoleBinding
     metadata:
         name: test-rolebinding
         namespace: test-ns
     subjects:
         - kind: ServiceAccount
           name: test-sa
           namespace: test-ns
     roleRef:
         kind: Role
         name: test-role
         apiGroup: rbac.authorization.k8s.io
  4. Cluster role

     apiVersion: rbac.authorization.k8s.io/v1
     kind: ClusterRole
     metadata:
         name: test-clusterrole
     rules:
         - apiGroups: [""]
           resources: ["pods"]
           verbs: ["get", "watch", "list"]
         - apiGroups: ["apps"]
           resources: ["deployments"]
           verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  5. Cluster role binding

     apiVersion: rbac.authorization.k8s.io/v1
     kind: ClusterRoleBinding
     metadata:
         name: test-clusterrolebindings
     subjects:
         - kind: ServiceAccount
           name: test-sa
           namespace: test-ns
     roleRef:
         kind: ClusterRole
         name: test-clusterrole
         apiGroup: rbac.authorization.k8s.io

Installing mysql/redis/etcd… clusters via StatefulSet

  1. Get familiar with the plain-Linux installation procedure and plan the number of StatefulSets

  2. Build a custom image with a Dockerfile

  3. Create one headless Service per StatefulSet role; a StatefulSet for a given role can run multiple replicas

  4. Pods communicate with each other via {podName}.{headlessServiceName}.{namespace}.svc.cluster.local

Private Registry - harbor

  1. docker image registry

    1. Log in
      docker login -u username -p password xx.xx.xx.xx:9011
    2. Tag
      docker tag my-image:v1.0 xx.xx.xx.xx:9011/library/my-image:v1.0
    3. Push
      docker push xx.xx.xx.xx:9011/library/my-image:v1.0
  2. helm chart repository

    • Add the repository
      helm repo add my-repo --username=admin --password=admin123 http://xx.xx.xx.xx:9011/chartrepo/library

Package Manager - helm

  1. Repository management

    1. Add
      helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    2. Remove
      helm repo remove stable
    3. Update
      helm repo update
    4. List
      helm repo list
  2. Local chart management

    1. Create
      helm create local-project
    2. Install
      helm install local-project ./local-project [-f my-values.yaml]
    3. Package
      helm package ./local-project
  3. Remote chart management

    1. Search
      helm search repo mysql
    2. Show configurable values
      helm show values stable/mysql
    3. Install
      helm install mysql stable/mysql [-f my-values.yaml]
    4. Download
      helm pull stable/mysql
  4. Manage installed releases

    1. List
      helm list [-a]
    2. Show status
      helm status mysql
    3. Uninstall
      helm uninstall mysql
  5. Version management

    1. Upgrade
      helm upgrade mysql stable/mysql [-f my-values.yaml]
    2. View history
      helm history mysql
    3. Roll back
      helm rollback mysql 1
Last updated: 2024-04-20 10:57   Author: lee