

k8s and monitoring -- a walkthrough of the Prometheus configuration file for monitoring Kubernetes


Abstract: Prometheus is an open-source, community-driven monitoring, alerting and time-series database project. Applications deployed on a Kubernetes cluster are monitored through blackbox-exporter and the kube-apiserver interfaces. The project also ships an official configuration file for this, and this article walks through it. Service endpoints likewise need annotations: when prometheus.io/scrape is "true", they are treated as scrape targets.

Preface

Prometheus is an open-source, community-driven project for monitoring, alerting and time-series data, originating from Google's BorgMon. It is now the usual companion to Kubernetes, the most common container management system, for monitoring. It mainly covers:

Nodes: host metrics such as CPU, memory, network throughput and bandwidth usage, disk I/O and disk usage. Collected by node-exporter.

Key container metrics: detailed CPU and memory usage of containers in the cluster, plus Network, FileSystem and Subcontainer metrics. Collected via cAdvisor.

Applications deployed on the Kubernetes cluster: mainly pods, services, ingresses and endpoints. Collected via blackbox-exporter and the kube-apiserver interfaces.

Prometheus itself provides automatic discovery for a number of resource types; the official GitHub repository lists the discovery mechanisms currently supported, and Kubernetes targets are among them.

Since Prometheus can automatically discover Kubernetes monitoring targets, an official configuration file is also provided, and this article walks through it.

Configuration file walkthrough

First, the official configuration file itself:

# A scrape configuration for running Prometheus on a Kubernetes cluster.
# This uses separate scrape configs for cluster components (i.e. API server, node)
# and services to allow each to use different authentication configs.
#
# Kubernetes labels will be added as Prometheus labels on metrics via the
# `labelmap` relabeling action.
#
# If you are using Kubernetes 1.7.2 or earlier, please take note of the comments
# for the kubernetes-cadvisor job; you will need to edit or remove this job.

# Scrape config for API servers.
#
# Kubernetes exposes API servers as endpoints to the default/kubernetes
# service so this uses `endpoints` role and uses relabelling to only keep
# the endpoints associated with the default/kubernetes service using the
# default named port `https`. This works for single API server deployments as
# well as HA API server deployments.
scrape_configs:
- job_name: 'kubernetes-apiservers'

  kubernetes_sd_configs:
  - role: endpoints

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # If your node certificates are self-signed or use a different CA to the
    # master CA, then disable certificate verification below. Note that
    # certificate verification is an integral part of a secure infrastructure
    # so this should only be disabled in a controlled environment. You can
    # disable certificate verification by uncommenting the line below.
    #
    # insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  # Keep only the default/kubernetes service endpoints for the https port. This
  # will add targets for each API server which Kubernetes adds an endpoint to
  # the default/kubernetes service.
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https

# Scrape config for nodes (kubelet).
#
# Rather than connecting directly to the node, the scrape is proxied though the
# Kubernetes apiserver.  This means it will work if Prometheus is running out of
# cluster, or can't connect to nodes for some other reason (e.g. because of
# firewalling).
- job_name: 'kubernetes-nodes'

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with "container_") have been removed from the
# Kubelet metrics endpoint.  This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'

  # Default to scraping over https. If required, just disable this or change to
  # `http`.
  scheme: https

  # This TLS & bearer token file config is used to connect to the actual scrape
  # endpoints for cluster components. This is separate to discovery auth
  # configuration because discovery & scraping are two separate concerns in
  # Prometheus. The discovery auth config is automatic if Prometheus runs inside
  # the cluster. Otherwise, more config options have to be provided within the
  # <kubernetes_sd_config>.
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

# Scrape config for service endpoints.
#
# The relabeling allows the actual service scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/scrape`: Only scrape services that have a value of `true`
# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
# to set this to `https` & most likely set the `tls_config` of the scrape config.
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: If the metrics are exposed on a different port to the
# service then set this appropriately.
- job_name: 'kubernetes-service-endpoints'

  kubernetes_sd_configs:
  - role: endpoints

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name

# Example scrape config for probing services via the Blackbox Exporter.
#
# The relabeling allows the actual service scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/probe`: Only probe services that have a value of `true`
- job_name: 'kubernetes-services'

  metrics_path: /probe
  params:
    module: [http_2xx]

  kubernetes_sd_configs:
  - role: service

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
    action: keep
    regex: true
  - source_labels: [__address__]
    target_label: __param_target
  - target_label: __address__
    replacement: blackbox-exporter.example.com:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    target_label: kubernetes_name

# Example scrape config for probing ingresses via the Blackbox Exporter.
#
# The relabeling allows the actual ingress scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/probe`: Only probe services that have a value of `true`
- job_name: 'kubernetes-ingresses'

  metrics_path: /probe
  params:
    module: [http_2xx]

  kubernetes_sd_configs:
    - role: ingress

  relabel_configs:
    - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
      regex: (.+);(.+);(.+)
      replacement: ${1}://${2}${3}
      target_label: __param_target
    - target_label: __address__
      replacement: blackbox-exporter.example.com:9115
    - source_labels: [__param_target]
      target_label: instance
    - action: labelmap
      regex: __meta_kubernetes_ingress_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_ingress_name]
      target_label: kubernetes_name

# Example scrape config for pods
#
# The relabeling allows the actual pod scrape endpoint to be configured via the
# following annotations:
#
# * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
# pod's declared ports (default is a port-free target if none are declared).
- job_name: 'kubernetes-pods'

  kubernetes_sd_configs:
  - role: pod

  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name

Note that this configuration file assumes Prometheus is deployed inside the Kubernetes cluster, i.e. it only works in in-cluster mode.
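If Prometheus runs outside the cluster instead, the discovery block has to be given the apiserver address and credentials explicitly. Below is a minimal sketch under that assumption; the apiserver address and certificate paths are placeholders, not values from the official file:

- job_name: 'kubernetes-apiservers-out-of-cluster'
  scheme: https
  # Out of cluster there is no mounted service-account token, so both
  # discovery and scraping need explicit credentials.
  tls_config:
    ca_file: /etc/prometheus/k8s-ca.crt        # placeholder path
    cert_file: /etc/prometheus/prometheus.crt  # placeholder path
    key_file: /etc/prometheus/prometheus.key   # placeholder path
  kubernetes_sd_configs:
  - role: endpoints
    # The apiserver must be named explicitly for out-of-cluster discovery.
    api_server: https://kube-apiserver.example.com:6443   # placeholder address
    tls_config:
      ca_file: /etc/prometheus/k8s-ca.crt
      cert_file: /etc/prometheus/prometheus.crt
      key_file: /etc/prometheus/prometheus.key
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https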

kubernetes-apiservers

The purpose of this job is to let Prometheus reach the kube-apiserver and perform service discovery. Looking at the service-discovery code below, you can see the roles it discovers: endpoints, pod, service, ingress and node.

    switch d.role {
    case "endpoints":
        var wg sync.WaitGroup

        for _, namespace := range namespaces {
            elw := cache.NewListWatchFromClient(rclient, "endpoints", namespace, nil)
            slw := cache.NewListWatchFromClient(rclient, "services", namespace, nil)
            plw := cache.NewListWatchFromClient(rclient, "pods", namespace, nil)
            eps := NewEndpoints(
                log.With(d.logger, "role", "endpoint"),
                cache.NewSharedInformer(slw, &apiv1.Service{}, resyncPeriod),
                cache.NewSharedInformer(elw, &apiv1.Endpoints{}, resyncPeriod),
                cache.NewSharedInformer(plw, &apiv1.Pod{}, resyncPeriod),
            )
            go eps.endpointsInf.Run(ctx.Done())
            go eps.serviceInf.Run(ctx.Done())
            go eps.podInf.Run(ctx.Done())

            for !eps.serviceInf.HasSynced() {
                time.Sleep(100 * time.Millisecond)
            }
            for !eps.endpointsInf.HasSynced() {
                time.Sleep(100 * time.Millisecond)
            }
            for !eps.podInf.HasSynced() {
                time.Sleep(100 * time.Millisecond)
            }
            wg.Add(1)
            go func() {
                defer wg.Done()
                eps.Run(ctx, ch)
            }()
        }
        wg.Wait()
    case "pod":
        var wg sync.WaitGroup
        for _, namespace := range namespaces {
            plw := cache.NewListWatchFromClient(rclient, "pods", namespace, nil)
            pod := NewPod(
                log.With(d.logger, "role", "pod"),
                cache.NewSharedInformer(plw, &apiv1.Pod{}, resyncPeriod),
            )
            go pod.informer.Run(ctx.Done())

            for !pod.informer.HasSynced() {
                time.Sleep(100 * time.Millisecond)
            }
            wg.Add(1)
            go func() {
                defer wg.Done()
                pod.Run(ctx, ch)
            }()
        }
        wg.Wait()
    case "service":
        var wg sync.WaitGroup
        for _, namespace := range namespaces {
            slw := cache.NewListWatchFromClient(rclient, "services", namespace, nil)
            svc := NewService(
                log.With(d.logger, "role", "service"),
                cache.NewSharedInformer(slw, &apiv1.Service{}, resyncPeriod),
            )
            go svc.informer.Run(ctx.Done())

            for !svc.informer.HasSynced() {
                time.Sleep(100 * time.Millisecond)
            }
            wg.Add(1)
            go func() {
                defer wg.Done()
                svc.Run(ctx, ch)
            }()
        }
        wg.Wait()
    case "ingress":
        var wg sync.WaitGroup
        for _, namespace := range namespaces {
            ilw := cache.NewListWatchFromClient(reclient, "ingresses", namespace, nil)
            ingress := NewIngress(
                log.With(d.logger, "role", "ingress"),
                cache.NewSharedInformer(ilw, &extensionsv1beta1.Ingress{}, resyncPeriod),
            )
            go ingress.informer.Run(ctx.Done())

            for !ingress.informer.HasSynced() {
                time.Sleep(100 * time.Millisecond)
            }
            wg.Add(1)
            go func() {
                defer wg.Done()
                ingress.Run(ctx, ch)
            }()
        }
        wg.Wait()
    case "node":
        nlw := cache.NewListWatchFromClient(rclient, "nodes", api.NamespaceAll, nil)
        node := NewNode(
            log.With(d.logger, "role", "node"),
            cache.NewSharedInformer(nlw, &apiv1.Node{}, resyncPeriod),
        )
        go node.informer.Run(ctx.Done())

        for !node.informer.HasSynced() {
            time.Sleep(100 * time.Millisecond)
        }
        node.Run(ctx, ch)

    default:
        level.Error(d.logger).Log("msg", "unknown Kubernetes discovery kind", "role", d.role)
    }   
kubernetes-nodes

Once a node has been discovered, its metrics are fetched through the apiserver proxy path /api/v1/nodes/${1}/proxy/metrics.

kubernetes-cadvisor

cAdvisor is already built into the kubelet, so discovering a node also discovers its cAdvisor; container metrics are collected via /api/v1/nodes/${1}/proxy/metrics/cadvisor.
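To give an idea of what these cAdvisor metrics are used for, here is a hypothetical recording/alerting rule file (not part of the official configuration; note that the pod label is called pod_name on older Kubernetes versions and pod on newer ones):

groups:
- name: cadvisor-examples
  rules:
  # Per-pod CPU usage in cores, averaged over 5 minutes.
  - record: namespace_pod:container_cpu_usage_seconds:rate5m
    expr: sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace, pod_name)
  # Fires when a container's working-set memory stays above ~1 GiB for 10 minutes.
  - alert: ContainerHighMemory
    expr: container_memory_working_set_bytes{pod_name!=""} > 1e9
    for: 10m
    labels:
      severity: warning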

kubernetes-services and kubernetes-ingresses

These two resource types are monitored in much the same way: both require blackbox-exporter to be installed, which then acts like a probe, periodically hitting the target and judging the availability of the service or ingress from the HTTP status code it returns.
PS: my own setup differs slightly from the official one here,

  - target_label: __address__
    replacement: blackbox-exporter.example.com:9115

The official approach more or less requires creating an ingress for blackbox-exporter and reaching it from outside the cluster, which is not ideal for either efficiency or security, so I usually address it directly via in-cluster DNS instead, like this:

  - target_label: __address__
    replacement: blackbox-exporter.kube-system:9115
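For completeness, the http_2xx module referenced by module: [http_2xx] in the probe jobs is defined in blackbox-exporter's own configuration file. A minimal sketch of such a blackbox.yml (values are illustrative):

modules:
  http_2xx:
    prober: http
    timeout: 5s               # illustrative timeout
    http:
      method: GET
      valid_status_codes: []  # empty means the 2xx default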

Of course, as the source code shows, not every service and ingress is health-checked: if you want a service to be probed, you have to add annotations to the YAML of the application you deploy. Per the configuration above, probing a service or ingress uses the annotation prometheus.io/probe: "true", while scraping a service's endpoints directly uses prometheus.io/scrape: "true", as in the example below:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  name: prometheus-node-exporter
  namespace: kube-system
  labels:
    app: prometheus
    component: node-exporter
spec:
  clusterIP: None
  ports:
    - name: prometheus-node-exporter
      port: 9100
      protocol: TCP
  selector:
    app: prometheus
    component: node-exporter
  type: ClusterIP
kubernetes-pods

Monitoring pods likewise requires annotations:

prometheus.io/scrape: if set to true, the pod is treated as a scrape target.

prometheus.io/path: the metrics path; defaults to /metrics.

prometheus.io/port: the port to scrape.

So, as you can see here, this job does not collect the pod's own resource metrics; those are already gathered by cAdvisor above. This job monitors the application running inside the pod. Anyone who has written an exporter will be familiar with the idea: in plain terms, if the application in your pod exposes Prometheus metrics and you add the corresponding annotations, those metrics will be scraped on schedule.
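A hypothetical Deployment carrying these annotations might look like the sketch below (the name, image and port are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                         # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        prometheus.io/scrape: "true"     # mark the pod as a scrape target
        prometheus.io/path: "/metrics"   # optional, /metrics is the default
        prometheus.io/port: "8080"       # port the application serves metrics on
    spec:
      containers:
      - name: demo-app
        image: demo-app:latest           # placeholder image
        ports:
        - containerPort: 8080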

kubernetes-service-endpoints

Service endpoints also need annotations:

prometheus.io/scrape: if set to true, the service's endpoints are treated as scrape targets.

prometheus.io/path: the metrics path; defaults to /metrics.

prometheus.io/port: the port to scrape.

prometheus.io/scheme: defaults to http; if the endpoint is served over https for security, set this to https.

This is basically the same as above: it scrapes the metrics of the service's endpoints.
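A hypothetical Service carrying all four annotations might look like this sketch (name, namespace and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: demo-metrics                   # hypothetical service
  namespace: default
  annotations:
    prometheus.io/scrape: "true"       # scrape this service's endpoints
    prometheus.io/scheme: "https"      # metrics are served over TLS
    prometheus.io/path: "/metrics"     # optional, /metrics is the default
    prometheus.io/port: "8443"         # scrape this port rather than the service port
spec:
  selector:
    app: demo-app
  ports:
  - name: metrics
    port: 8443
    targetPort: 8443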

Personally, I would put it this way: if an application is deployed with only pods and no service, your only option is to annotate the pods and collect its metrics through kubernetes-pods. If there is a service, there is no need to annotate the pods; annotate the service instead, since service endpoints ultimately resolve to the pods anyway.

Summary of the scrape jobs

kubernetes-service-endpoints and kubernetes-pods collect metrics from the applications themselves; of course, not every application exposes a metrics endpoint.

kubernetes-ingresses and kubernetes-services probe the health of services and ingresses.

kubernetes-cadvisor and kubernetes-nodes discover the nodes and monitor node and container metrics such as CPU.

Auto-discovery source code

See client-go and Prometheus's Kubernetes auto-discovery: watching for changes to resources in the cluster is implemented with informers, rather than by polling the kube-apiserver.

References

This configuration file requires a few supporting components to make Prometheus's monitoring of Kubernetes work, such as blackbox-exporter. Because auto-discovery needs to read cluster information, RBAC authorization is also required. For details, see:
github
