
Go: getting the Kubernetes API server health status

Go
慕斯王 2022-07-04 16:37:16
I have a Golang program and I need to add a new call to the Kubernetes API server's health status (livez) API: https://kubernetes.io/docs/reference/using-api/health-checks/. The program runs on the same cluster as the API server and needs to fetch the /livez status. I tried to find this API in the client-go library (https://github.com/kubernetes/client-go) but could not find a way to call it. Is there a way to do this from a Go program running on the same cluster the API server runs on?

1 Answer

守着一只汪


Update (final answer)

Addendum

The OP asked me to revise my answer to show the configuration of a "fine-tuned" or "specific" service account, instead of using cluster-admin.


As far as I can tell, every pod has read access to /healthz by default. For example, the following CronJob works fine even without using a ServiceAccount:


# cronjob
apiVersion: batch/v1beta1 # use batch/v1 on Kubernetes >= 1.21
kind: CronJob
metadata:
  name: is-healthz-ok-no-svc
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          # serviceAccountName: health-reader-sa
          containers:
            - name: is-healthz-ok-no-svc
              image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure


Original

I went ahead and wrote a proof of concept for this. You can find the full repo here, but the code is below.


main.go

package main

import (
    "context"
    "errors"
    "fmt"
    "os"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    path := "/healthz"
    // Recent client-go versions require a context here; older versions
    // used DoRaw() with no arguments.
    content, err := client.Discovery().RESTClient().Get().AbsPath(path).DoRaw(context.TODO())
    if err != nil {
        fmt.Printf("ErrorBadRequest : %s\n", err.Error())
        os.Exit(1)
    }

    contentStr := string(content)
    if contentStr != "ok" {
        fmt.Printf("ErrorNotOk : response != 'ok' : %s\n", contentStr)
        os.Exit(1)
    }

    fmt.Printf("Success : ok!")
    os.Exit(0)
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("failed getting clientset")
    }
    return clientset, nil
}
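As a side note, the /livez and /readyz endpoints also accept a ?verbose query parameter, which returns one line per individual check (for example `[+]etcd ok`) plus a summary line. If you want to report which specific check failed rather than just "ok"/"not ok", a small parser for that format can help. The sketch below is my own illustration; the sample response is made up to match the shape shown in the Kubernetes health-checks documentation, not captured from a real cluster:

```go
package main

import (
	"fmt"
	"strings"
)

// parseVerboseHealth parses the plain-text body returned by
// /livez?verbose or /readyz?verbose into a map of check name -> passed.
// A passing check line looks like "[+]ping ok"; a failing one starts
// with "[-]". Lines with neither prefix (e.g. the summary) are skipped.
func parseVerboseHealth(body string) map[string]bool {
	checks := make(map[string]bool)
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		var passed bool
		switch {
		case strings.HasPrefix(line, "[+]"):
			passed = true
		case strings.HasPrefix(line, "[-]"):
			passed = false
		default:
			continue
		}
		fields := strings.Fields(line[3:]) // strip the 3-byte prefix
		if len(fields) > 0 {
			checks[fields[0]] = passed
		}
	}
	return checks
}

func main() {
	// Illustrative verbose response; real output lists many more checks.
	sample := "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\nlivez check failed"
	for name, ok := range parseVerboseHealth(sample) {
		fmt.Printf("%s passed=%v\n", name, ok)
	}
}
```

You would feed it the bytes returned by the same DoRaw call as above, just with `AbsPath("/livez").Param("verbose", "")` (or the raw path "/livez?verbose").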

Dockerfile

FROM golang:latest
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]

deployment.yaml

(as a CronJob)


# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: is-healthz-ok
          containers:
            - name: is-healthz-ok
              image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: is-healthz-ok
  namespace: default
---
# cluster role binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: is-healthz-ok
subjects:
  - kind: ServiceAccount
    name: is-healthz-ok
    namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
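Following the comment above about creating your own ClusterRole instead of using cluster-admin: the health endpoints are non-resource URLs, so a minimal role only needs a nonResourceURLs rule. A sketch (the role name is my own choice, and as far as I know recent clusters already bind the built-in system:public-info-viewer role to all users for these endpoints, which is why the no-ServiceAccount variant works):

```yaml
# minimal ClusterRole granting read-only access to the health endpoints;
# bind it via the ClusterRoleBinding above in place of cluster-admin
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: health-reader # hypothetical name
rules:
  - nonResourceURLs: ["/healthz", "/livez", "/readyz"]
    verbs: ["get"]
```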

Screenshot: successful CronJob run

Update 1

The OP asked how to deploy the "in-cluster-client-config", so I'm providing an example deployment (the one I'm using).


You can find the repo here.


Example deployment (I'm using a CronJob, but it could be anything):


cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-terminating-namespaces-cronjob
spec:
  schedule: "0 */1 * * *" # at minute 0 of each hour aka once per hour
  #successfulJobsHistoryLimit: 0
  #failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: svc-remove-terminating-namespaces
          containers:
          - name: remove-terminating-namespaces
            image: oze4/service.remove-terminating-namespaces:latest
          restartPolicy: OnFailure

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: svc-remove-terminating-namespaces
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-namespace-reader-writer
subjects:
- kind: ServiceAccount
  name: svc-remove-terminating-namespaces
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

---

Original answer

It sounds like what you are looking for is the "in-cluster-client-config" from client-go.


It is important to remember that when using the "in-cluster-client-config", the API calls in your Go code use the service account of "that" pod. Just make sure you are testing with an account that has permission to read "/livez".


I tested the following code and was able to get the "livez" status.


package main

import (
    "context"
    "errors"
    "flag"
    "fmt"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // I find it easiest to use "out-of-cluster" for testing
    // client, err := newOutOfClusterClient()

    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    livez := "/livez"
    // Recent client-go versions require a context here; older versions
    // used DoRaw() with no arguments.
    content, err := client.Discovery().RESTClient().Get().AbsPath(livez).DoRaw(context.TODO())
    if err != nil {
        panic(err.Error())
    }

    fmt.Println(string(content))
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return &kubernetes.Clientset{}, errors.New("failed getting clientset")
    }
    return clientset, nil
}

// I find it easiest to use "out-of-cluster" for testing
func newOutOfClusterClient() (*kubernetes.Clientset, error) {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        return nil, err
    }

    // create the clientset
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return client, nil
}


