大數(shù)據(jù)開(kāi)發(fā)系列四:hadoop& flink 配置kerberos認(rèn)證


點(diǎn)擊上方“IT那活兒”公眾號(hào),關(guān)注后了解更多內(nèi)容,不管IT什么活兒,干就完了?。?!


Preface

An earlier article in this big data development series covered how to install and use Kerberos. Building on that existing Kerberos service, this article configures Kerberos authentication for the Hadoop and Flink components.

環(huán)境依賴(lài)


類(lèi)型
主機(jī)
主機(jī)hostname
安裝組件
kerberos服務(wù)端
192.168.199.102
bigdata-03
krb5-server
krb5-workstation
krb5-libs
krb5-devel
kerberos客戶(hù)端
192.168.199.104
bigdata-05
krb5-workstation
krb5-devel
hadoop環(huán)境
192.168.199.104
bigdata-05
hadoop-3.3.3
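Before continuing, it can be worth confirming that the Kerberos client packages and /etc/krb5.conf are in place on the Hadoop host (bigdata-05). A minimal sketch, assuming an RPM-based system as in the table above and that the realm HADOOP.COM is already defined in krb5.conf:

# Verify the Kerberos client packages listed above are installed
rpm -q krb5-workstation krb5-devel

# Confirm the realm/KDC settings the clients will use
grep -A3 "HADOOP.COM" /etc/krb5.conf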


hadoop認(rèn)證配置

3.1 創(chuàng)建principle添加用戶(hù)
hadoop的kerberos認(rèn)證,一般需要配置三種principle,分別是 hadoop, host, HTTP。
格式為:用戶(hù)名/主機(jī)[email protected]。
如果現(xiàn)有的HDFS和YARN守護(hù)程序用的是同一個(gè)用戶(hù)身份運(yùn)行,可以配置成一個(gè)hadoop principle。
kadmin.local -q "addprinc -randkeyhadoop/[email protected]"
kadmin.local -q "addprinc -randkeyhadoop/[email protected]"
listprincs/list_principals 查詢(xún)所有用戶(hù)。
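For example, a quick check that the new principal exists (run on the KDC host, bigdata-03):

kadmin.local -q "listprincs" | grep hadoop
# Expected to include: hadoop/[email protected]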
3.2 創(chuàng)建keytab密碼文件
kadmin.local -q "xst -k /root/keytabs/kerberos/hadoop.keytabhadoop/[email protected]"
kadmin.local -q "xst -k /root/keytabs/kerberos/hadoop.keytab hadoop/[email protected]"
查看:
klist -kt /root/keytabs/kerberos/hadoop.keytab
klist -kt /home/gpadmin/hadoop.keytab
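The Hadoop configuration below references the keytab at /home/gpadmin/hadoop.keytab, so the exported file has to be copied from the KDC host to the Hadoop host and made readable by the Hadoop user. A minimal sketch, assuming gpadmin is the Hadoop user and bigdata-05 is reachable over SSH:

# Copy the keytab from the KDC host (bigdata-03) to the Hadoop host (bigdata-05)
scp /root/keytabs/kerberos/hadoop.keytab gpadmin@bigdata-05:/home/gpadmin/hadoop.keytab

# On bigdata-05: restrict access so only the Hadoop user can read the keytab
chown gpadmin:gpadmin /home/gpadmin/hadoop.keytab
chmod 400 /home/gpadmin/hadoop.keytab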
3.3 Hadoop configuration changes
1) Additions to core-site.xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
2) Additions to hdfs-site.xml
    <property>
      <name>dfs.block.access.token.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>dfs.namenode.kerberos.principal</name>
      <value>hadoop/[email protected]</value>
    </property>
    <property>
      <name>dfs.namenode.keytab.file</name>
      <value>/home/gpadmin/hadoop.keytab</value>
    </property>
    <property>
      <name>dfs.secondary.namenode.kerberos.principal</name>
      <value>hadoop/[email protected]</value>
    </property>
    <property>
      <name>dfs.secondary.namenode.keytab.file</name>
      <value>/home/gpadmin/hadoop.keytab</value>
    </property>
    <property>
      <name>dfs.web.authentication.kerberos.principal</name>
      <value>hadoop/[email protected]</value>
    </property>
    <property>
      <name>dfs.web.authentication.kerberos.keytab</name>
      <value>/home/gpadmin/hadoop.keytab</value>
    </property>
    <property>
      <name>dfs.datanode.kerberos.principal</name>
      <value>hadoop/[email protected]</value>
    </property>
    <property>
      <name>dfs.datanode.keytab.file</name>
      <value>/home/gpadmin/hadoop.keytab</value>
    </property>
    <property>
      <name>dfs.data.transfer.protection</name>
      <value>authentication</value>
    </property>
    <property>
      <name>dfs.http.policy</name>
      <value>HTTPS_ONLY</value>
      <description>All web UIs are served over HTTPS; the details are configured in the ssl-server and ssl-client configuration files.</description>
    </property>
3) Additions to yarn-site.xml
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/home/gpadmin/hadoop.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/home/gpadmin/hadoop.keytab</value>
</property>
4) Additions to mapred-site.xml
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/home/gpadmin/hadoop.keytab</value>
</property>
5) Additions to ssl-server.xml
<property>
  <name>ssl.server.truststore.location</name>
  <value>/home/gpadmin/kerberos_https/keystore</value>
  <description>Truststore to be used by NN and DN. Must be specified.</description>
</property>

<property>
  <name>ssl.server.truststore.password</name>
  <value>password</value>
  <description>Optional. Default value is "".</description>
</property>

<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>

<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
</property>

<property>
  <name>ssl.server.keystore.location</name>
  <value>/home/gpadmin/kerberos_https/keystore</value>
  <description>Keystore to be used by NN and DN. Must be specified.</description>
</property>

<property>
  <name>ssl.server.keystore.password</name>
  <value>password</value>
  <description>Must be specified.</description>
</property>

<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>password</value>
  <description>Must be specified.</description>
</property>

<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>

<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description>
</property>
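These properties belong in ssl-server.xml under Hadoop's configuration directory. If the file does not exist yet, it can be created from the template that ships with Hadoop (a minimal sketch; $HADOOP_HOME is assumed to point at the hadoop-3.3.3 install):

cd $HADOOP_HOME/etc/hadoop
# Create ssl-server.xml from the bundled template, then add the properties above
cp ssl-server.xml.example ssl-server.xml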
3.4 HTTPS certificate configuration
keytool -genkey -keyalg RSA -keysize 2048 -validity 365000 \
  -alias hadoop \
  -keystore /home/gpadmin/kerberos_https/keystore \
  -dname "CN=hadoop, OU=shsnc, O=snc, L=hunan, ST=changsha, C=CN"
This generates the keystore certificate file referenced by ssl-server.xml.
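Before restarting the HDFS daemons, the keystore can be checked for the expected alias with keytool (the password prompted for is the one set above and configured as ssl.server.keystore.password):

# List the certificate stored under the "hadoop" alias
keytool -list -v -keystore /home/gpadmin/kerberos_https/keystore -alias hadoop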
3.5 認(rèn)證測(cè)試
查看 hdfs 目錄:hdfs  dfs  -ls  /
報(bào)錯(cuò)信息:2022-11-22 10:22:15,444 WARN ipc.Client: Exception encountered while connecting to the server
org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
說(shuō)明已加了認(rèn)證后不能直接訪問(wèn),客戶(hù)端先進(jìn)行認(rèn)證才能正常訪問(wèn)目錄結(jié)構(gòu)。
kinit  -kt /home/gpadmin/hadoop.keytabhadoop/bigdata-05@HADOOP.COM
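After kinit, the ticket can be checked and the listing retried (a brief sketch of the expected flow):

# Confirm a valid TGT was obtained for hadoop/[email protected]
klist

# Retry the listing; it should now succeed without the AccessControlException
hdfs dfs -ls /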


flink認(rèn)證配置

4.1 認(rèn)證用戶(hù)配置
如果hdfs-site.xml 屬性項(xiàng)配置:
<property>
<name>dfs.permissions.enabledname>
<value>truevalue>
property>
  • 為true時(shí),新建憑證為hadoop 安裝用戶(hù),如以gpadmin用戶(hù)安裝了hadoop。
    kadmin.local -q "xst -k /root/keytabs/kerberos/hadoop.keytab [email protected]"
  • 為false時(shí),新建憑證可以不是hadoop 安裝用戶(hù)。
    kadmin.local -q "xst -k /root/keytabs/kerberos/hadoop.keytab [email protected]"
驗(yàn)證:
klist -kt /root/keytabs/kerberos/hadoop.keytab
4.2 Additions to flink-conf.yaml
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.keytab: /home/gpadmin/hadoop.keytab
security.kerberos.login.principal: [email protected]
security.kerberos.login.contexts: Client
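The principal configured here must be present in the keytab that security.kerberos.login.keytab points at; a quick check on the Flink host (a minimal sketch):

klist -kt /home/gpadmin/hadoop.keytab
# The output should list [email protected], the principal configured above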
4.3 認(rèn)證測(cè)試
flink run -m yarn-cluster
-p 1
-yjm 1024
-ytm 1024
-ynm amp_zabbix
-c com.shsnc.fk.task.tokafka.ExtratMessage2KafkaTask
-yt /home/gpadmin/jar_repo/config/krb5.conf
-yD env.java.opts.jobmanager=-Djava.security.krb5.conf=krb5.conf
-yD env.java.opts.taskmanager=-Djava.security.krb5.conf=krb5.conf
-yD security.kerberos.login.keytab=/home/gpadmin/hadoop.keytab
-yD [email protected]

$jarname
在提交到flink 任務(wù)參數(shù)里面加入紅色部份認(rèn)證配置,能正常提交到y(tǒng)arn 集群且日志沒(méi)有相關(guān)認(rèn)證報(bào)錯(cuò)信息,說(shuō)明認(rèn)證配置成功。
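If anything looks off, the YARN application logs can be scanned for Kerberos-related messages (a sketch; application_xxx stands in for the actual application ID reported at submission):

# List running applications to find the application ID
yarn application -list

# Scan the aggregated logs for Kerberos/GSS/login messages
yarn logs -applicationId application_xxx | grep -iE "kerberos|gss|login"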


Author: 長研架構小組 (上海新炬王翦團隊)

本文來(lái)源:“IT那活兒”公眾號(hào)

文章版權(quán)歸作者所有,未經(jīng)允許請(qǐng)勿轉(zhuǎn)載,若此文章存在違規(guī)行為,您可以聯(lián)系管理員刪除。

轉(zhuǎn)載請(qǐng)注明本文地址:http://systransis.cn/yun/129142.html

相關(guān)文章

  • 數(shù)據(jù)開(kāi)發(fā)系列五:kafka&amp; zookeeper 配置kerberos認(rèn)證

    大數(shù)據(jù)開(kāi)發(fā)系列五:kafka& zookeeper 配置kerberos認(rèn)證 img{ display:block; margin:0 auto !important; width:100%; } body{ ...

    不知名網(wǎng)友 評(píng)論0 收藏2694
  • 魅族數(shù)據(jù)運(yùn)維平臺(tái)實(shí)踐

    摘要:一大數(shù)據(jù)平臺(tái)介紹大數(shù)據(jù)平臺(tái)架構(gòu)演變?nèi)鐖D所示魅族大數(shù)據(jù)平臺(tái)架構(gòu)演變歷程年底,我們開(kāi)始實(shí)踐大數(shù)據(jù),并部署了測(cè)試集群。因此,大數(shù)據(jù)運(yùn)維的目標(biāo)是以解決運(yùn)維復(fù)雜度的自動(dòng)化為首要目標(biāo)。大數(shù)據(jù)運(yùn)維存在的問(wèn)題大數(shù)據(jù)運(yùn)維存在的問(wèn)題包括部署及運(yùn)維復(fù)雜。 一、大數(shù)據(jù)平臺(tái)介紹 1.1大數(shù)據(jù)平臺(tái)架構(gòu)演變 ?showImg(https://segmentfault.com/img/bVWDPj?w=1024&h=...

    appetizerio 評(píng)論0 收藏0

發(fā)表評(píng)論

0條評(píng)論

最新活動(dòng)
閱讀需要支付1元查看
<