
(Original) Hands-on Kafka installation, with the focus on Kerberos authentication: key points

By aide_941
Key points:
# https://www.cnblogs.com/felixzh/p/9526118.html
# https://www.orchome.com/802
# http://www.voidcn.com/article/p-bipdjtdl-btg.html
1. JDK 8u161 or later; enable the unlimited crypto policy in jre/lib/security/java.security (here /usr/local/jdk1.8.0_191/jre/lib/security/java.security):
#crypto.policy=unlimited   <- uncomment this line
crypto.policy=unlimited
2. The zookeeper.connect=localhost:2181 setting in server.properties determines which host appears in the principal used by zookeeper_jaas.conf.
If you keep the default zookeeper.connect=localhost:2181, the principal in zookeeper_jaas.conf must be zookeeper/localhost.
If you use zookeeper.connect=<ip-or-hostname>:2181, the principal in zookeeper_jaas.conf must be zookeeper/<hostname>.
Always use the KDC log to decide which principal is actually being requested: tail -100f /var/log/krb5kdc.log
3. The principals in the three JAAS files matter: kafka_server_jaas.conf uses kafka/<hostname> throughout, and zookeeper_jaas.conf must use the correct zookeeper principal, otherwise you get an error like crypto.Aes256...: Checksum failed.
4. vim /var/kerberos/krb5kdc/kdc.conf:
admin_keytab = /var/kerberos/krb5kdc/kafka.keytab
This keytab should hold all the principals; the default /var/kerberos/krb5kdc/kadm5.keytab is then no longer needed.
5. Check the KDC logs
You can diagnose errors from the log: tail -100f /var/log/krb5kdc.log
default = /var/log/krb5libs.log
kdc = /var/log/krb5kdc.log
admin_server = /var/log/kadmind.log
6. Every JAAS file should point at /var/kerberos/krb5kdc/kafka.keytab. For a single client you can export its key into a separate keytab file and place it wherever that client expects it.
7. Errors when consuming/producing (a Kerberos-enabled client config sketch follows after this list):
sh bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
sh bin/kafka-topics.sh --zookeeper localhost:2181 --list
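A minimal sketch of a Kerberos client setup for the console producer/consumer. The file names config/kafka_client_jaas.conf and config/client.properties, the broker address vmm3:9092 and the /usr/local/kafka path are assumptions for illustration; the property names themselves are standard Kafka client settings, and the keytab/principal are the ones created later in these notes.
# config/kafka_client_jaas.conf (hypothetical path)
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka.keytab"
principal="clients@EXAMPLE.COM";
};
# config/client.properties (hypothetical path)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# point the JVM at the JAAS file, then produce/consume
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
sh bin/kafka-console-producer.sh --broker-list vmm3:9092 --topic test --producer.config config/client.properties
sh bin/kafka-console-consumer.sh --bootstrap-server vmm3:9092 --topic test --from-beginning --consumer.config config/client.properties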
Kafka with Kerberos in practice (notes)
https://www.orchome.com/500
https://www.cnblogs.com/felixzh/p/9526118.html
https://www.cnblogs.com/HarSenZhao/p/11508687.html
http://blog.sina.com.cn/s/blog_167a8c6480102xfu6.html
# A collection of authentication approaches:
# https://blog.csdn.net/ZhongGuoZhiChuang/article/details/79550570
# https://makeling.github.io/bigdata/72ac84e3.html
# References:
# Steps for connecting kafka-manager to a Kerberos-secured Kafka:
# https://blog.csdn.net/huanqingdong/article/details/84979110
# https://blog.csdn.net/O_Victorain/article/details/84200981
# https://www.cnblogs.com/xxoome/p/7423822.html
# https://www.orchome.com/1944
# https://help.ubuntu.com/community/Kerberos
# OS: CentOS 6.10
# Kafka version: kafka_2.11-0.11.0.3.tgz (download)
# kafka-manager version: kafka-manager-1.3.3.21.zip (download)
# Single-node Kerberos installation:
# Add a hosts entry on the current host:
10.210.156.22 example.com kerberos.example.com
# centos:
rpm -qa | grep krb5-libs-1.15.1-37.el7_7.2.x86_64
rpm -e --nodeps krb5-libs-1.15.1-37.el7_7.2.x86_64
rpm -e --nodeps krb5-server-1.15.1-37.el7_7.2.x86_64
yum install -y krb5-server krb5-libs krb5-auth-dialog #server
yum install -y cyrus-sasl-plain cyrus-sasl-devel cyrus-sasl-gssapi
yum install -y krb5-workstation krb5-libs krb5-auth-dialog #client
yum install -y krb5-pkinit-openssl #ssl
# ubuntu:
sudo apt-get install krb5-kdc krb5-admin-server
sudo dpkg-reconfigure krb5-kdc
timedatectl set-timezone Asia/Shanghai # set the time zone
timedatectl set-ntp yes # enable clock synchronization
timedatectl # check clock synchronization
# If you change the realm/domain, edit these three files (not recommended in a dev environment; in production you must use your real domain):
vim /var/kerberos/krb5kdc/kdc.conf
vim /etc/krb5.conf # in a dev environment just uncomment default_realm = EXAMPLE.COM
vim /var/kerberos/krb5kdc/kadm5.acl
# 1. ----------
# Change EXAMPLE.COM to AUTH.COM; any name works, usually the company domain in upper case:
# vim /var/kerberos/krb5kdc/kdc.conf:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
EXAMPLE.COM = {
#master_key_type = aes256-cts # if enabled, the JCE unlimited-strength policy must be installed
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
# 2. Edit the krb5 configuration file:
# If you are changing the realm/domain:
Change EXAMPLE.COM to AUTH.COM (must match the name used in kdc.conf above)
Change example.com to auth.com
Change kerberos.example.com to your hostname (mine is centos610)
# vim /etc/krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = EXAMPLE.COM # uncomment this line
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
EXAMPLE.COM = {
kdc = vmm3 # KDC server name, usually the current hostname; uncomment this line
admin_server = vmm3 # admin server name, usually the current hostname; uncomment this line
}
[domain_realm]
.example.com = EXAMPLE.COM # uncomment this line
example.com = EXAMPLE.COM # uncomment this line
# kafka = EXAMPLE.COM
# zookeeper = EXAMPLE.COM
# clients = EXAMPLE.COM
sed -i 's/EXAMPLE.COM/AUTH.COM/g' /var/kerberos/krb5kdc/kdc.conf
sed -i 's/example.com/auth.com/g' /etc/krb5.conf
sed -i 's/EXAMPLE.COM/AUTH.COM/g' /etc/krb5.conf
# If you are not changing the domain, just uncomment the relevant line in /etc/krb5.conf:
default_realm = EXAMPLE.COM # uncomment this line
# 3. Initialize the KDC database
# If you get "kdb5_util: Configuration file does not specify default realm while getting default realm",
# uncomment default_realm = EXAMPLE.COM in /etc/krb5.conf
# realm = the Kerberos domain
# If you do not change the domain and keep the default example.com configuration, just uncomment default_realm = EXAMPLE.COM;
# you can then start the services directly, but example.com must resolve to this machine:
/usr/sbin/kdb5_util create -s
# Loading random data
# Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
# master key name 'K/M@EXAMPLE.COM'
# You will be prompted for the database Master Password.
# It is important that you NOT FORGET this password.
# Enter KDC database master key:
# Re-enter KDC database master key to verify:
123456
# Once the Kerberos database has been created, several files appear under /var/kerberos/krb5kdc:
# /var/kerberos/krb5kdc/principal:
# The files below are the database files; to reinstall the database, move them away and rerun kdb5_util create -s:
# -rw------- 1 root root 8192 Nov 21 02:07 principal
# -rw------- 1 root root 8192 Nov 21 02:07 principal.kadm5
# -rw------- 1 root root 0 Nov 21 02:07 principal.kadm5.lock
# -rw------- 1 root root 0 Nov 21 02:08 principal.ok
# ---------- Now on to the configuration ----------
# 1. Permission configuration: rules that grant rights to principals matching a pattern:
# Edit the Kerberos ACL
# vim /var/kerberos/krb5kdc/kadm5.acl
# kadmind uses this file to decide which principals may access the Kerberos database and with what level of access. For example:
# sample:
#*/admin@EXAMPLE.COM *
*/admin@AUTH.COM    *
https://github.com/steveloughran/kerberos_and_hadoop/blob/master/sections/errors.md
# https://blog.csdn.net/O_Victorain/article/details/84200981
# https://blog.csdn.net/lovebomei/article/details/79807484
# Add the database administrator and create the first principal:
/usr/sbin/kadmin.local -q "addprinc root/admin"
# password: 123456
systemctl start krb5kdc kadmin # start the services
systemctl enable krb5kdc kadmin # start on boot
systemctl restart krb5kdc kadmin
systemctl stop krb5kdc kadmin
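Once the services are up, a quick sanity check of the admin principal and the ACL (a sketch; -w passes the password 123456 chosen above non-interactively, and the default realm from /etc/krb5.conf is used):
kadmin -p root/admin -w 123456 -q listprincs # should list K/M, kadmin/*, krbtgt/* entries without an ACL error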
# Important:
tail -100f /var/log/krb5kdc.log
# The JAAS files matter a lot:
kafka_server_jaas.conf uses kafka/<hostname>
zookeeper_jaas.conf uses zookeeper/localhost (or zookeeper/<hostname>, matching zookeeper.connect)
# otherwise authentication errors follow.
# 2. Create the Kerberos principals, i.e. the account names matching the ACL rules above
# A principal is a unique identity in the Kerberos system; it receives Kerberos tickets and uses them to access the corresponding service.
# A principal consists of parts separated by /, or can be a plain name with no separator. It may also end with @Realm; if no Realm is given, the default realm from krb5.conf is used.
# Primary: the user name if the principal represents a user, the service name if it represents a service, or the literal string "host" if it represents a host
# Instance: further qualifies the user or service, e.g. root/admin@DATACENTER.COM or kafka/server1@DATACENTER.COM
# Realm: any ASCII string, but usually the domain name, by convention in upper case, e.g. datacenter.com becomes DATACENTER.COM
# Strongly recommended reading:
# https://www.orchome.com/500
# https://help.ubuntu.com/community/Kerberos
# https://www.orchome.com/1944
# https://blog.csdn.net/weixin_34409741/article/details/89780562
# Create your own principals. Principal format: user-or-service/hostname@REALM; each principal represents one user or service record.
# A keytab file is like a table in a database: it stores principal keys, and multiple principals can be kept in the same keytab.
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
# sample:
mkdir -p /etc/security/keytabs/
/usr/sbin/kadmin.local -q "addprinc -pw 123456 root"
/usr/sbin/kadmin.local -q "addprinc -pw 123456 root/admin"
/usr/sbin/kadmin.local -q "addprinc -pw 123456 kafka/vmm3" # must exist
/usr/sbin/kadmin.local -q "addprinc -pw 123456 kafka/127.0.0.1"
/usr/sbin/kadmin.local -q "addprinc -pw 123456 kafka/localhost"
/usr/sbin/kadmin.local -q "addprinc -pw 123456 kafka/admin"
# /usr/sbin/kadmin.local -q "addprinc -pw 123456 kafka/vmm4" # only needed beyond a single node
/usr/sbin/kadmin.local -q "addprinc -pw 123456 zookeeper/vmm3"
/usr/sbin/kadmin.local -q "addprinc -pw 123456 zookeeper/127.0.0.1"
/usr/sbin/kadmin.local -q "addprinc -pw 123456 zookeeper/localhost" # must exist
/usr/sbin/kadmin.local -q "addprinc -pw 123456 clients" # must exist
/usr/sbin/kadmin.local -q "addprinc -pw 123456 clients/vmm3"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey kafka/admin"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey kafka/vmm3"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey kafka/127.0.0.1"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey kafka/localhost"
# /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey kafka/vmm4"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey zookeeper/vmm3"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey zookeeper/127.0.0.1"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey zookeeper/localhost"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey clients"
/usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey clients/vmm3"
# the password set above is 123456
# Verify:
klist -t -e -k /etc/security/keytabs/kafka.keytab
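Expected output looks roughly like the following (a sketch; timestamps and the exact enctype list will differ, and each principal is listed once per encryption type):
Keytab name: FILE:/etc/security/keytabs/kafka.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 11/21/2019 15:53:39 kafka/vmm3@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   1 11/21/2019 15:53:39 zookeeper/localhost@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
As long as every principal you added shows up, the keytab is usable.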
# If the check fails:
rm -rf /etc/security/keytabs/kafka.keytab
# then rerun all of the "ktadd -k /etc/security/keytabs/kafka.keytab -norandkey ..." commands above
# Or:
# First, on the Kerberos server, generate a principal and keytab entry for every broker host, then distribute the kafka.keytab produced by the commands below to the same location on each broker, e.g. /etc/kafka.keytab
addprinc -randkey kafka/vmm3@EXAMPLE.COM
addprinc -randkey kafka/vmm4@EXAMPLE.COM
# ...
xst -norandkey -k /opt/vmm/kafka.keytab kafka/vmm3@EXAMPLE.COM
xst -norandkey -k /opt/vmm/kafka.keytab kafka/vmm4@EXAMPLE.COM
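A sketch of distributing the merged keytab to another broker (the destination path is the example above; tightening the permissions is an extra precaution not in the original notes):
scp /opt/vmm/kafka.keytab vmm4:/etc/kafka.keytab
ssh vmm4 "chmod 400 /etc/kafka.keytab" # readable only by the user that runs the broker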
# Make sure every host is reachable by hostname: Kerberos requires that all hosts can resolve each other's FQDNs.
# kadmin.local and kadmin are the command-line interfaces to the KDC; they can list, add and delete principals, etc.
# kadmin.local can only be used locally on the KDC and requires no password;
# kadmin can be used from clients and asks for a password.
kadmin.local:
listprincs
getprinc test
addprinc test
delprinc test
change_password -pw 123456 kafka/admin # change a password
modprinc -maxrenewlife 1week krbtgt/EXAMPLE.COM@EXAMPLE.COM
# ----
getprinc zookeeper/vmm3
modprinc -maxrenewlife 1week zookeeper/vmm3@EXAMPLE.COM
# Authenticating as principal zookeeper/admin@EXAMPLE.COM with password.
# kadmin.local: getprinc zookeeper/vmm3
# Principal: zookeeper/vmm3@EXAMPLE.COM
# Expiration date: [never]
# Last password change: Thu Nov 21 15:53:39 CST 2019
# Password expiration date: [never]
# Maximum ticket life: 1 day 00:00:00
# Maximum renewable life: 0 days 00:00:00
# Last modified: Thu Nov 21 15:53:39 CST 2019 (root/admin@EXAMPLE.COM)
# Last successful authentication: [never]
# Last failed authentication: [never]
# Failed password attempts: 0
# Number of keys: 7
# Key: vno 1, aes128-cts-hmac-sha1-96
# Key: vno 1, des3-cbc-sha1
# Key: vno 1, arcfour-hmac
# Key: vno 1, camellia256-cts-cmac
# Key: vno 1, camellia128-cts-cmac
# Key: vno 1, des-hmac-sha1
# Key: vno 1, des-cbc-md5
# MKey: vno 1
# Attributes:
# Policy: [none]
kadmin.local -q 'modprinc -maxrenewlife "7d" krbtgt/ANYTHING.COM'
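Before moving on to the Kafka side, it helps to confirm that the keytab actually authenticates (a sketch using the keytab and principals created above):
kinit -kt /etc/security/keytabs/kafka.keytab kafka/vmm3@EXAMPLE.COM
klist      # should show a krbtgt/EXAMPLE.COM@EXAMPLE.COM ticket for kafka/vmm3@EXAMPLE.COM
kdestroy   # discard the test ticket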
# ================ Everything above is server-side configuration; what follows is the client side, i.e. the Kafka configuration ================
# https://blog.csdn.net/O_Victorain/article/details/84200981
# https://www.orchome.com/500
# https://www.orchome.com/1944
2. Configure the Kafka broker
Cause analysis: 1. In the ZooKeeper authentication request, the ZooKeeper-side principal defaults to zookeeper/<hostname>@<realm>.
2. When connecting with zkCli.sh, the default host is localhost,
so the KDC sees the client trying to authenticate against zookeeper/localhost@NETEASE.COM, a principal that does not exist in the Kerberos database.
Solution: connect with zkCli.sh -server host:port, and make sure the principal in the server section of the ZooKeeper configuration is zookeeper/<hostname>@<your realm>.
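For reference, a minimal sketch of the ZooKeeper side (the file names and the /usr/local/kafka path are assumptions; the principal must match key point 2 above, i.e. the host used in zookeeper.connect):
# config/zookeeper_jaas.conf (hypothetical path)
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka.keytab"
principal="zookeeper/localhost@EXAMPLE.COM";
};
# added to config/zookeeper.properties (commonly used settings for SASL on the Kafka-bundled ZooKeeper)
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
# start ZooKeeper with the JAAS file
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/usr/local/kafka/config/zookeeper_jaas.conf"
sh bin/zookeeper-server-start.sh config/zookeeper.properties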
Add a JAAS file like the one below to each Kafka broker's config directory; in this example we call it kafka_server_jaas.conf (note that every broker should have its own keytab).
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
// Zookeeper client authentication
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
The KafkaServer section of the JAAS file tells the broker which principal to use and where the keytab holding that principal is stored; it allows the broker to log in with that keytab.
https://www.orchome.com/500
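To actually enable SASL/GSSAPI on the broker, the server.properties listener settings and the JVM JAAS option are also needed. A sketch under this document's assumptions (hostname vmm3, port 9092, Kafka in /usr/local/kafka; the property names are standard Kafka broker settings):
# appended to config/server.properties
listeners=SASL_PLAINTEXT://vmm3:9092
advertised.listeners=SASL_PLAINTEXT://vmm3:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
# start the broker with the JAAS file
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
sh bin/kafka-server-start.sh config/server.properties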
=================
# Kafka with Kerberos fails at startup
# Troubleshooting:
Failure unspecified at GSS-API level (Mechanism level: Checksum failed)
Closing client connection due to SASL authentication failure. (org.apache.zookeeper.server.ZooKeeperServer)
https://www.orchome.com/898
https://blog.csdn.net/ZhouyuanLinli/article/details/78115530
https://www.cnblogs.com/qingqing74647464/p/9851500.html
https://stackoverflow.com/questions/24274281/kerberos-check-sum-failed-issue
https://blog.csdn.net/weixin_34409741/article/details/89780562
After generating the keytab, use it to authenticate.
The command is:
kinit -kt /xxx/xxx.keytab xxx@xxx.xxx.com  # after authenticating, Kerberos grants the user a ticket with a validity period, which can be inspected with klist: klist -kt /xxx/xxx.keytab
You can chmod the keytab to give other users read permission so they can use it as well.
Combined with svn, no password is needed during the ticket's lifetime; this can be scripted (for example a Python script that runs svn update automatically).
kinit -kt /xxx/xxx.keytab xxx@xxx.xxx.com;svn up
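A sketch of keeping the ticket fresh automatically (the crontab schedule, keytab path and principal are assumptions; this just repeats the kinit shown above on a timer):
# crontab entry: refresh the ticket every 8 hours so scripted commands keep a valid credential
0 */8 * * * kinit -kt /etc/security/keytabs/kafka.keytab kafka/vmm3@EXAMPLE.COM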
