Minimizing Microservice Vulnerabilities
Table of Contents
[toc]
Hands-on Labs in This Section
Lab Name |
---|
💘 Case: Run a container as a non-root user - 2023.5.29 (tested OK) |
💘 Case: Avoid privileged containers, use capabilities instead - 2023.5.30 (tested OK) |
💘 Case: Mount the container filesystem read-only - 2023.5.30 (tested OK) |
Case 1: Forbid creating privileged Pods |
Example 2: Forbid containers that don't run as a specified non-root user |
💘 Lab: Deploy Gatekeeper - 2023.6.1 (tested OK) |
💘 Case 1: Forbid privileged containers - 2023.6.1 (tested OK) |
💘 Case 2: Allow images only from specific registries - 2023.6.1 (tested OK) |
💘 Lab: Integrating gVisor with Docker - 2023.6.2 (tested OK) |
💘 Lab: Integrating gVisor with Containerd - 2023.6.2 (tested OK) |
💘 Lab: Running containers with gVisor in K8s - 2023.6.3 (tested OK) |
1. Pod Security Context
**Security Context:** a mechanism K8s provides for Pods and containers to configure privileges and access control.
Dimensions a security context can restrict:
• Discretionary Access Control: access to objects (such as files) is decided based on user ID (UID) and group ID (GID).
• Security-Enhanced Linux (SELinux): assigns security labels to objects.
• Running in privileged or unprivileged mode.
• Linux Capabilities: grant a process a subset of root's privileges rather than all of them.
• AppArmor: use AppArmor profiles to restrict a container's access to resources.
• Seccomp: use Seccomp to restrict the system calls a container process may make.
• AllowPrivilegeEscalation: controls whether a process in the container can gain more privileges than its parent (e.g. via SetUID/SetGID file modes). AllowPrivilegeEscalation is always true when the container runs in privileged mode or has the CAP_SYS_ADMIN capability.
• readOnlyRootFilesystem: mounts the container's root filesystem read-only.
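Several of these knobs can be combined in a single Pod spec. Below is a minimal sketch (the Pod name, image, and field values are illustrative, not taken from the labs in this section):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo     # illustrative name
spec:
  securityContext:                # Pod level: applies to every container in the Pod
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: app
    image: busybox
    command: ["sleep", "24h"]
    securityContext:              # container level: overrides the Pod-level settings
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]             # start from zero capabilities
        add: ["NET_BIND_SERVICE"] # add back only what the workload needs
```

The later cases in this section exercise `runAsUser`, `privileged`, `capabilities`, and `readOnlyRootFilesystem` one at a time.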
Case 1: Run Containers as a Non-root User
==💘 Case: Run a container as a non-root user - 2023.5.29 (tested OK)==
- Lab environment
Lab environment: 1. win10, VMware Workstation VMs; 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.20.0; docker; extraction code: 0820; 2023.5.29-securityContext-runAsUser-code
Method 1: Use USER in the Dockerfile to specify the runtime user
- Upload and unpack the code
```shell
[root@k8s-master1 ~]#ll -h flask-demo.zip
-rw-r--r-- 1 root root 1.3K May 29 06:33 flask-demo.zip
[root@k8s-master1 ~]#unzip flask-demo.zip
Archive:  flask-demo.zip
   creating: flask-demo/
   creating: flask-demo/templates/
  inflating: flask-demo/templates/index.html
  inflating: flask-demo/main.py
  inflating: flask-demo/Dockerfile
[root@k8s-master1 ~]#cd flask-demo/
[root@k8s-master1 flask-demo]#ls
Dockerfile  main.py  templates
```
- Inspect the files
```shell
[root@k8s-master1 flask-demo]#pwd
/root/flask-demo
[root@k8s-master1 flask-demo]#ls
Dockerfile  main.py  templates
[root@k8s-master1 flask-demo]#cat templates/index.html   # the template to be rendered
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Home</title>
</head>
<body>
<h1>Hello Python!</h1>
</body>
</html>
[root@k8s-master1 flask-demo]#cat main.py
from flask import Flask, render_template
app = Flask(__name__)

@app.route('/')
def index():
    return render_template("index.html")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # the app listens on port 8080
[root@k8s-master1 flask-demo]#cat Dockerfile
FROM python
RUN useradd python
RUN mkdir /data/www -p
COPY . /data/www
RUN chown -R python /data
RUN pip install flask -i https:
RUN pip install prometheus_client -i https:
WORKDIR /data/www
USER python
CMD python main.py
```
- Build the image
```shell
[root@docker flask-demo]#docker build -t flask-demo:v1 .
[root@docker flask-demo]#docker images|grep flask
flask-demo   v1   a9cabb6241ed   22 seconds ago   937MB
```
- Start a container
```shell
[root@docker flask-demo]#docker run -d --name demo-v1 -p 8080:8080 flask-demo:v1
f3d7173e84ad6242d3c037e33a2ff0d654550bcac970534777188092e173dc6e
[root@docker flask-demo]#docker ps -l
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                                       NAMES
f3d7173e84ad   flask-demo:v1   "/bin/sh -c 'python …"   41 seconds ago   Up 40 seconds   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   demo-v1
```
- If we remove `USER python` from the Dockerfile and start another container, we can see that by default the container runs the program as root:
```shell
[root@docker flask-demo]#vim Dockerfile
FROM python
RUN useradd python
RUN mkdir /data/www -p
COPY . /data/www
RUN chown -R python /data
RUN pip install flask -i https:
RUN pip install prometheus_client -i https:
WORKDIR /data/www
#USER python
CMD python main.py

# Build
[root@docker flask-demo]#docker build -t flask-demo:v2 .
# Start a container
[root@docker flask-demo]#docker run -d --name demo-v2 -p 8081:8080 flask-demo:v2
ce527b2bb4b59b28a3b5bb6fb50c4de4d0a5183eaafdb5631d866025af01a319
[root@docker flask-demo]#docker ps -l
CONTAINER ID   IMAGE           COMMAND                  CREATED         STATUS         PORTS                                       NAMES
ce527b2bb4b5   flask-demo:v2   "/bin/sh -c 'python …"   4 seconds ago   Up 3 seconds   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   demo-v2

# Check which user runs the program inside the container:
[root@docker flask-demo]#docker exec -it demo-v2 bash
root@ce527b2bb4b5:/data/www# id
uid=0(root) gid=0(root) groups=0(root)
root@ce527b2bb4b5:/data/www# id python
uid=1000(python) gid=1000(python) groups=1000(python)
root@ce527b2bb4b5:/data/www# ps -ef
UID    PID  PPID  C STIME TTY   TIME     CMD
root     1     0  0 04:41 ?     00:00:00 /bin/sh -c python main.py
root     7     1  0 04:41 ?     00:00:00 python main.py

# Check this container's process user from the host:
[root@docker flask-demo]#ps -ef|grep main
1000   80085  80065  0 12:30 ?   00:00:00 /bin/sh -c python main.py
1000   80114  80085  0 12:30 ?   00:00:00 python main.py
root   80866  80847  0 12:41 ?   00:00:00 /bin/sh -c python main.py
root   80896  80866  0 12:41 ?   00:00:00 python main.py
```
As expected.
- In addition, `docker run` can also specify the user that runs the program inside the container (with `-u`), but that user must already exist in the image:
```shell
[root@docker flask-demo]#docker run -u xyy -d --name demo-test -p 8082:8080 flask-demo:v2
f7cd85c396fc769982fab012d32aa8cf979c49078503328c522c34c8b6f9f7fc
docker: Error response from daemon: unable to find user xyy: no matching entries in passwd file.
[root@docker flask-demo]#docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                       NAMES
f7cd85c396fc   flask-demo:v2          "/bin/sh -c 'python …"   54 seconds ago   Created         0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   demo-test
ce527b2bb4b5   flask-demo:v2          "/bin/sh -c 'python …"   4 minutes ago    Up 4 minutes    0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   demo-v2
f3d7173e84ad   flask-demo:v1          "/bin/sh -c 'python …"   15 minutes ago   Up 15 minutes   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   demo-v1
d83dbc25e4da   kindest/node:v1.25.3   "/usr/local/bin/entr…"   2 months ago     Up 2 months                                                 demo-worker2
8e32056c89da   kindest/node:v1.25.3   "/usr/local/bin/entr…"   2 months ago     Up 2 months                                                 demo-worker
bf4947c5578a   kindest/node:v1.25.3   "/usr/local/bin/entr…"   2 months ago     Up 2 months     127.0.0.1:39911->6443/tcp                   demo-control-plane
```
Test complete. 😘
Method 2: In K8s, set spec.securityContext.runAsUser to specify the container's default UID
- Deploy the Pod
```shell
mkdir /root/securityContext-runAsUser
cd /root/securityContext-runAsUser
[root@k8s-master1 securityContext-runAsUser]#kubectl create deployment flask-demo --image=nginx --dry-run=client -oyaml > deployment.yaml
[root@k8s-master1 securityContext-runAsUser]#vim deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flask-demo
  name: flask-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-demo
  template:
    metadata:
      labels:
        app: flask-demo
    spec:
      securityContext:
        runAsUser: 1000
      containers:
      - image: lizhenliang/flask-demo:root
        name: web
```
Notes:

```
lizhenliang/flask-demo:root     # in this image the program runs as root
lizhenliang/flask-demo:noroot   # in this image the program runs as UID 1000
```
Deploy the Deployment above:
```shell
[root@k8s-master1 securityContext-runAsUser]#kubectl apply -f deployment.yaml
deployment.apps/flask-demo created
```
- Verify
```shell
[root@k8s-master1 securityContext-runAsUser]#kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
flask-demo-6c78dcd8dd-j6s67   1/1     Running   0          82s
[root@k8s-master1 securityContext-runAsUser]#kubectl exec -it flask-demo-6c78dcd8dd-j6s67 -- bash
python@flask-demo-6c78dcd8dd-j6s67:/data/www$ id
uid=1000(python) gid=1000(python) groups=1000(python)
python@flask-demo-6c78dcd8dd-j6s67:/data/www$ ps -ef
UID      PID  PPID  C STIME TTY    TIME     CMD
python     1     0  0 12:10 ?      00:00:00 /bin/sh -c python main.py
python     7     1  0 12:10 ?      00:00:00 python main.py
python     8     0  0 12:11 pts/0  00:00:00 bash
python    15     8  0 12:11 pts/0  00:00:00 ps -ef
```
As you can see, the container is started as the user specified by spec.securityContext.runAsUser. As expected.
- What happens if spec.securityContext.runAsUser specifies a UID that does not exist?
```shell
[root@k8s-master1 securityContext-runAsUser]#cp deployment.yaml deployment2.yaml
[root@k8s-master1 securityContext-runAsUser]#vim deployment2.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flask-demo
  name: flask-demo2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-demo
  template:
    metadata:
      labels:
        app: flask-demo
    spec:
      securityContext:
        runAsUser: 1001
      containers:
      - image: lizhenliang/flask-demo:root
        name: web
```
Deploy:
```shell
[root@k8s-master1 securityContext-runAsUser]#kubectl apply -f deployment2.yaml
deployment.apps/flask-demo2 created
```
Verify:
```shell
[root@k8s-master1 securityContext-runAsUser]#kubectl get po -owide
NAME                           READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
flask-demo-6c78dcd8dd-j6s67    1/1     Running   0          6m35s   10.244.169.173   k8s-node2   <none>           <none>
flask-demo2-5d978567ff-tcqws   1/1     Running   0          18s     10.244.169.174   k8s-node2   <none>           <none>
[root@k8s-master1 securityContext-runAsUser]#kubectl exec -it flask-demo2-5d978567ff-tcqws -- bash
I have no name!@flask-demo2-5d978567ff-tcqws:/data/www$ id
uid=1001 gid=0(root) groups=0(root)
I have no name!@flask-demo2-5d978567ff-tcqws:/data/www$ cat /etc/passwd|grep 1001
I have no name!@flask-demo2-5d978567ff-tcqws:/data/www$ tail -4 /etc/passwd
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
python:x:1000:1000::/home/python:/bin/sh
```
As you can see, if spec.securityContext.runAsUser specifies a UID that does not exist, the Pod is still created without error; the shell prompt inside the container shows "I have no name!" because the UID has no entry in /etc/passwd, but the process still runs under that assigned UID.
- Now test the lizhenliang/flask-demo:noroot image
```shell
[root@k8s-master1 securityContext-runAsUser]#cp deployment2.yaml deployment3.yaml
[root@k8s-master1 securityContext-runAsUser]#vim deployment3.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flask-demo
  name: flask-demo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-demo
  template:
    metadata:
      labels:
        app: flask-demo
    spec:
      #securityContext:
      #  runAsUser: 1001
      containers:
      - image: lizhenliang/flask-demo:noroot
        name: web
```
Deploy:
```shell
[root@k8s-master1 securityContext-runAsUser]#kubectl apply -f deployment3.yaml
deployment.apps/flask-demo3 created
```
Test:
```shell
[root@k8s-master1 securityContext-runAsUser]#kubectl get po
NAME                           READY   STATUS    RESTARTS   AGE
flask-demo-6c78dcd8dd-j6s67    1/1     Running   0          14m
flask-demo2-5d978567ff-tcqws   1/1     Running   0          8m4s
flask-demo3-5fd4b7787c-cjjct   1/1     Running   0          84s
[root@k8s-master1 securityContext-runAsUser]#kubectl exec -it flask-demo3-5fd4b7787c-cjjct -- bash
python@flask-demo3-5fd4b7787c-cjjct:/data/www$ id
uid=1000(python) gid=1000(python) groups=1000(python)
python@flask-demo3-5fd4b7787c-cjjct:/data/www$ ps -ef
UID      PID  PPID  C STIME TTY    TIME     CMD
python     1     0  0 12:23 ?      00:00:00 /bin/sh -c python main.py
python     7     1  0 12:23 ?      00:00:00 python main.py
python     8     0  0 12:23 pts/0  00:00:00 bash
python    15     8  0 12:24 pts/0  00:00:00 ps -ef
```
As expected.
- One more test: what happens if spec.securityContext.runAsUser is given a user name instead of a UID?
```shell
[root@k8s-master1 securityContext-runAsUser]#cp deployment3.yaml deployment4.yaml
[root@k8s-master1 securityContext-runAsUser]#vim deployment4.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flask-demo
  name: flask-demo4
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-demo
  template:
    metadata:
      labels:
        app: flask-demo
    spec:
      securityContext:
        runAsUser: "python"
      containers:
      - image: lizhenliang/flask-demo:noroot
        name: web
```
Deploying this fails immediately with an error: spec.securityContext.runAsUser must be a numeric UID, not a user name.
Test complete. 😘
Case 2: Avoid Privileged Containers
Capabilities essentially group the many Linux system calls into named categories of privilege that can be granted individually.
Note: you can use the `capsh` command to query which capabilities the current shell has:

```shell
# On the host (CentOS):
[root@k8s-master1 Capabilities]#capsh --print
Current: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36+ep
Bounding set = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
uid=0(root)
gid=0(root)
groups=0(root)

# Now start a centos container and check its capabilities:
[root@k8s-master1 Capabilities]#docker run -d centos sleep 24h
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
a1d0c7532777: Pull complete
Digest: sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
Status: Downloaded newer image for centos:latest
e69eddf42c4cc9c44d943786a2978cd55e3bdf68b9de23f6e221a2e44d8f63b0
[root@k8s-master1 Capabilities]#docker exec -it e69eddf42c4cc9c44d943786a2978cd55e3bdf68b9de23f6e221a2e44d8f63b0 bash
[root@e69eddf42c4c /]# capsh --print
Current: = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap+eip
Bounding set = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap
Ambient set =
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
 secure-no-ambient-raise: no (unlocked)
uid=0(root)
gid=0(root)
groups=

# The container clearly has far fewer capabilities than the host.
```
==💘 Case: Avoid privileged containers, use capabilities instead - 2023.5.30 (tested OK)==
- Lab environment
Lab environment: 1. win10, VMware Workstation VMs; 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.20.0; docker: --privileged
- As noted earlier, even when a container process is started as root, that root has far fewer capabilities than the host's root, because the container engine restricts them for safety reasons.
For example, inside a container even root cannot use the mount command:
```shell
[root@k8s-master1 ~]#kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   1          38d
[root@k8s-master1 ~]#kubectl exec -it busybox -- sh
/ # mount -t tmpfs /tmp /tmp
mount: permission denied (are you root?)
/ # id
uid=0(root) gid=0(root) groups=10(wheel)

# On the host, of course, root can use mount normally:
[root@k8s-master1 ~]#ls /tmp/
2023.5.23-code  2023.5.23-code.tar.gz  vmware-root_5805-1681267545
[root@k8s-master1 ~]#mount -t tmpfs /tmp /tmp
[root@k8s-master1 ~]#df -hT|grep /tmp
/tmp   tmpfs   910M   0   910M   0%   /tmp
```
- Enable privileged mode and test
```shell
[root@k8s-master1 ~]#mkdir Capabilities
[root@k8s-master1 ~]#cd Capabilities/
[root@k8s-master1 Capabilities]#vim pod1.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sc1
spec:
  containers:
  - image: busybox
    name: web
    command:
    - sleep
    - 24h
    securityContext:
      privileged: true
```

```shell
# Deploy:
[root@k8s-master1 Capabilities]#kubectl apply -f pod1.yaml
pod/pod-sc1 created
# Test:
[root@k8s-master1 Capabilities]#kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
pod-sc1   1/1     Running   0          61s
[root@k8s-master1 Capabilities]#kubectl exec -it pod-sc1 -- sh
/ # mount -t tmpfs /tmp /tmp
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
/ # df -hT|grep tmp
tmpfs   tmpfs   64.0M    0      64.0M    0%   /dev
tmpfs   tmpfs   909.8M   0      909.8M   0%   /sys/fs/cgroup
shm     tmpfs   64.0M    0      64.0M    0%   /dev/shm
tmpfs   tmpfs   909.8M   12.0K  909.8M   0%   /var/run/secrets/kubernetes.io/serviceaccount
/tmp    tmpfs   909.8M   0      909.8M   0%   /tmp
/ # ls /tmp/
```
As you can see, with privileged mode enabled the container can use the mount command to mount filesystems normally.
- Now, instead of using a privileged container, let's solve the same problem with just the SYS_ADMIN capability.
The SYS_ADMIN capability grants a broad set of administrative operations, including mount:
Deploy the Pod:
```shell
[root@k8s-master1 Capabilities]#cp pod1.yaml pod2.yaml
[root@k8s-master1 Capabilities]#vim pod2.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sc2
spec:
  containers:
  - image: busybox
    name: web
    command:
    - sleep
    - 24h
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
        # Note: either "SYS_ADMIN" or "CAP_SYS_ADMIN" works here; the CAP_ prefix may be omitted.
```

```shell
# Deploy:
[root@k8s-master1 Capabilities]#kubectl apply -f pod2.yaml
pod/pod-sc2 created
# Test:
[root@k8s-master1 Capabilities]#kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
pod-sc1   1/1     Running   0          9m9s
pod-sc2   1/1     Running   0          36s
[root@k8s-master1 Capabilities]#kubectl exec -it pod-sc2 -- sh
/ # mount -t tmpfs /tmp /tmp
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
/ # df -hT|grep tmp
/tmp   tmpfs   909.8M   0   909.8M   0%   /tmp
```
With the SYS_ADMIN capability configured, the container can mount as well, without being fully privileged. As expected. 😘
Case 3: Mount the Container Filesystem Read-only
==💘 Case: Mount the container filesystem read-only - 2023.5.30 (tested OK)==
- Lab environment
Lab environment: 1. win10, VMware Workstation VMs; 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.20.0; docker

By default, a process can create files anywhere in the container filesystem:

```shell
/ # touch 1
/ # ls
1    bin    dev    etc    home    lib    lib64    proc    root    sys    tmp    usr    var
```
- However, for security reasons we may not want the program to be able to change any file inside the container. How do we achieve that?
This is what the read-only root filesystem feature is for. Let's test it.
```shell
[root@k8s-master1 Capabilities]#cp pod2.yaml pod3.yaml
[root@k8s-master1 Capabilities]#vim pod3.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sc3
spec:
  containers:
  - image: busybox
    name: web
    command:
    - sleep
    - 24h
    securityContext:
      readOnlyRootFilesystem: true
```

```shell
# Deploy:
[root@k8s-master1 Capabilities]#kubectl apply -f pod3.yaml
pod/pod-sc3 created
# Test:
[root@k8s-master1 Capabilities]#kubectl get po |grep pod-sc
pod-sc1   1/1   Running   0   17m
pod-sc2   1/1   Running   0   9m5s
pod-sc3   1/1   Running   0   52s
[root@k8s-master1 Capabilities]#kubectl exec -it pod-sc3 -- sh
/ # ls
bin    dev    etc    home    lib    lib64    proc    root    sys    tmp    usr    var
/ # touch a
touch: a: Read-only file system
/ # mkdir a
mkdir: can't create directory 'a': Read-only file system
```
With the read-only root filesystem enabled, the container can no longer create or modify any files. The requirement above is met; test complete. 😘
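In practice an application usually still needs somewhere to write (logs, temp files). A common pattern, sketched below with illustrative names and paths, is to keep the root filesystem read-only while mounting an emptyDir volume at the specific paths that must be writable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sc3-writable-tmp     # illustrative name
spec:
  containers:
  - image: busybox
    name: web
    command: ["sleep", "24h"]
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp                  # only /tmp is writable
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
```

This keeps the attack surface of a writable filesystem limited to a single ephemeral volume.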
2. Pod Security Policy (PSP)
**PodSecurityPolicy (PSP):** an important admission-time security check in Kubernetes that effectively constrains the runtime behavior of workloads.
A PSP object defines a set of conditions (and default field values) that a Pod must satisfy at creation time; only Pods that satisfy them are accepted by K8s. (Note: PSP was deprecated in Kubernetes v1.21 and removed in v1.25; the labs below use v1.20.)
When a user creates a Pod via a ServiceAccount (SA), K8s first verifies that the SA has permission to use the PSP resource, then verifies that the Pod spec satisfies the PSP rules; if either step fails, the Pod is rejected. Implementation therefore requires:
• Create the SA.
• Grant the SA permission to create the relevant resources, e.g. Pods and Deployments.
• Grant the SA permission to use the PSP: create a Role with the PSP "use" permission, then bind the SA to that Role.
==💘 Lab: Pod Security Policy - 2023.5.31 (tested OK)==
- Lab environment
Lab environment: 1. win10, VMware Workstation VMs; 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.20.0; docker; extraction code: 0820; 2023.5.31-psp-code
Case 1: Forbid Creating Privileged Pods
```shell
# Example 1: forbid creating privileged Pods
# Create the SA
kubectl create serviceaccount aliang6
# Bind the SA to a built-in Role
kubectl create rolebinding aliang6 --clusterrole=edit --serviceaccount=default:aliang6
# Create a Role with the PSP "use" permission
kubectl create role psp:unprivileged --verb=use --resource=podsecuritypolicy --resource-name=psp-example
# Bind the SA to that Role
kubectl create rolebinding aliang6:psp:unprivileged --role=psp:unprivileged --serviceaccount=default:aliang6
```
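For reference, the two PSP-related commands above correspond to roughly the following RBAC objects (a sketch derived from those commands). The key detail is the `use` verb on the `podsecuritypolicies` resource in the `policy` API group, which is what allows Pods created by the SA to be admitted under that PSP:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp:unprivileged
  namespace: default
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["psp-example"]   # only this specific PSP
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: aliang6:psp:unprivileged
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: psp:unprivileged
subjects:
- kind: ServiceAccount
  name: aliang6
  namespace: default
```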
1. Enable Pod Security Policy in the k8s cluster
PSP is implemented as an admission controller that is disabled by default. Once enabled, it enforces the Pod security policies, and Pods that satisfy no policy cannot be created. It is therefore recommended to add and authorize policies before enabling PSP.
```shell
[root@k8s-master1 ~]#vim /etc/kubernetes/manifests/kube-apiserver.yaml
# change
--enable-admission-plugins=NodeRestriction
# to
--enable-admission-plugins=NodeRestriction,PodSecurityPolicy
```
2. Create the SA and bind it to a built-in Role
```shell
[root@k8s-master1 ~]#kubectl create serviceaccount aliang6^C
[root@k8s-master1 ~]#mkdir psp
[root@k8s-master1 ~]#cd psp/
[root@k8s-master1 psp]#kubectl create serviceaccount aliang6
serviceaccount/aliang6 created
[root@k8s-master1 psp]#kubectl create rolebinding aliang6 --clusterrole=edit --serviceaccount=default:aliang6
rolebinding.rbac.authorization.k8s.io/aliang6 created

# Note: the built-in "edit" ClusterRole carries most permissions, except modifying permissions themselves.
[root@k8s-master1 psp]#kubectl get clusterrole|grep -v system:
NAME                      CREATED AT
admin                     2022-10-22T02:34:47Z
calico-kube-controllers   2022-10-22T02:41:12Z
calico-node               2022-10-22T02:41:12Z
cluster-admin             2022-10-22T02:34:47Z
edit                      2022-10-22T02:34:47Z
ingress-nginx             2022-11-29T11:28:49Z
kubeadm:get-nodes         2022-10-22T02:34:48Z
kubernetes-dashboard      2022-10-22T02:42:46Z
view                      2022-10-22T02:34:47Z
```
3. Configure the PSP policy
```shell
[root@k8s-master1 psp]#vim psp.yaml
```

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-example
spec:
  privileged: false  # don't allow privileged Pods
  # the fields below are required
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```

```shell
# Deploy:
[root@k8s-master1 psp]#kubectl apply -f psp.yaml
podsecuritypolicy.policy/psp-example created
```
4. Create resources to test
First, create a Deployment:
```shell
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 create deployment web --image=nginx
deployment.apps/web created
[root@k8s-master1 psp]#kubectl get deploy
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    0/1     0            0           6s
[root@k8s-master1 psp]#kubectl get pod|grep web
# no output
```
The Deployment object was created, but its Pod was not.
Now create a Pod directly:
```shell
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 run web --image=nginx
Error from server (Forbidden): pods "web" is forbidden: PodSecurityPolicy: unable to admit pod: []
```
As you can see, Pod creation fails in both cases. This is because we still need to grant the aliang6 SA permission to access the PSP resource.
5. Create a Role with the PSP "use" permission, and bind the SA to it
```shell
# Create a Role with the PSP "use" permission
[root@k8s-master1 psp]#kubectl create role psp:unprivileged --verb=use --resource=podsecuritypolicy --resource-name=psp-example
role.rbac.authorization.k8s.io/psp:unprivileged created
# Bind the SA to the Role
[root@k8s-master1 psp]#kubectl create rolebinding aliang6:psp:unprivileged --role=psp:unprivileged --serviceaccount=default:aliang6
rolebinding.rbac.authorization.k8s.io/aliang6:psp:unprivileged created
```
6. Test again
Create a privileged Pod:
```shell
[root@k8s-master1 psp]#kubectl run pod-psp --image=busybox --dry-run=client -oyaml > pod.yaml
[root@k8s-master1 psp]#vim pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-psp
spec:
  containers:
  - image: busybox
    name: pod-psp
    command:
    - sleep
    - 24h
    securityContext:
      privileged: true
```

```shell
# Deploy
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "pod-psp" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
```
The PSP takes effect and blocks the Pod creation.
Now remove the privileged setting from the Pod and create it again:
```shell
[root@k8s-master1 psp]#vim pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-psp
spec:
  containers:
  - image: busybox
    name: pod-psp
    command:
    - sleep
    - 24h
```

```shell
# Deploy
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 apply -f pod.yaml
pod/pod-psp created
[root@k8s-master1 psp]#kubectl get po|grep pod-psp
pod-psp   1/1   Running   0   19s
```
An unprivileged Pod is created successfully.
Next, create a Pod and a Deployment again and observe:
```shell
# Create a Pod
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 run web10 --image=nginx
pod/web10 created
[root@k8s-master1 psp]#kubectl get po|grep web10
web10   1/1   Running   0   13s
# Create a Deployment
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 create deployment web11 --image=nginx
deployment.apps/web11 created
[root@k8s-master1 psp]#kubectl get deployment |grep web11
web11   0/1   0   0   38s
[root@k8s-master1 psp]#kubectl get events
```
The events log reveals why the Deployment fails: the ReplicaSet controller creates Pods under the namespace's default ServiceAccount, which does not have permission to use the PSP.
```shell
[root@k8s-master1 psp]#kubectl get sa
NAME      SECRETS   AGE
aliang6   1         5h15m
default   1         221d
```
What we need now is to give the default SA access to the PSP. There are two options:
1. Specify the aliang6 SA in the Deployment spec when creating it;
2. Grant the default SA permission to use the PSP with a command.
We'll use the second option:
```shell
[root@k8s-master1 psp]#kubectl create rolebinding default:psp:unprivileged --role=psp:unprivileged --serviceaccount=default:default
rolebinding.rbac.authorization.k8s.io/default:psp:unprivileged created
```
After binding, verify:
```shell
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 create deployment web12 --image=nginx
deployment.apps/web12 created
[root@k8s-master1 psp]#kubectl get deployment|grep web12
web12   1/1   1   1   11s
[root@k8s-master1 psp]#kubectl get po|grep web12
web12-7b88cfd55f-9z5hz   1/1   Running   0   20s
```
Now Deployments work normally.
Test complete. 😘
Example 2: Forbid containers that don't run as a specified non-root user
- Building on the PSP above, reconfigure the policy:
```shell
[root@k8s-master1 psp]#vim psp.yaml
```

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-example
spec:
  privileged: false  # don't allow privileged Pods
  # the fields below are required
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```

```shell
# Deploy
[root@k8s-master1 psp]#kubectl apply -f psp.yaml
podsecuritypolicy.policy/psp-example configured
```
- Deploy a test Pod
```shell
[root@k8s-master1 psp]#cp pod.yaml pod2.yaml
[root@k8s-master1 psp]#vim pod2.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-psp2
spec:
  containers:
  - image: lizhenliang/flask-demo:root
    name: web
    securityContext:
      runAsUser: 1000
```

```shell
# Deploy
[root@k8s-master1 psp]#kubectl apply -f pod2.yaml
pod/pod-psp2 created
# Verify
[root@k8s-master1 psp]#kubectl get po|grep pod-psp2
pod-psp2   1/1   Running   0   40s
[root@k8s-master1 psp]#kubectl exec -it pod-psp2 -- bash
python@pod-psp2:/data/www$ id
uid=1000(python) gid=1000(python) groups=1000(python)

# Create another Pod from the command line to test
[root@k8s-master1 psp]#kubectl --as=system:serviceaccount:default:aliang6 run web22 --image=nginx
pod/web22 created
[root@k8s-master1 psp]#kubectl get po|grep web22
web22   0/1   CreateContainerConfigError   0   15s
[root@k8s-master1 psp]#kubectl describe po web22
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  32s                default-scheduler  Successfully assigned default/web22 to k8s-node2
  Normal   Pulled     28s                kubelet            Successfully pulled image "nginx" in 1.756553796s
  Normal   Pulled     25s                kubelet            Successfully pulled image "nginx" in 2.394107372s
  Warning  Failed     11s (x3 over 28s)  kubelet            Error: container has runAsNonRoot and image will run as root (pod: "web22_default(0db376b0-0da9-4119-97a1-091ed8798159)", container: web22)
# The policy takes effect, as expected.
```
Test complete. 😘
3. OPA Gatekeeper Policy Engine
- Introduction to OPA
1. Disable the PSP policy (if it is enabled)
```shell
[root@k8s-master1 ~]#vim /etc/kubernetes/manifests/kube-apiserver.yaml
# change
--enable-admission-plugins=NodeRestriction,PodSecurityPolicy
# back to
--enable-admission-plugins=NodeRestriction
```
2. Install OPA Gatekeeper
```shell
[root@k8s-master1 ~]#mkdir opa
[root@k8s-master1 ~]#cd opa/
[root@k8s-master1 opa]#wget https:
[root@k8s-master1 opa]#kubectl apply -f gatekeeper.yaml
namespace/gatekeeper-system created
resourcequota/gatekeeper-critical-pods created
customresourcedefinition.apiextensions.k8s.io/assign.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignmetadata.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/modifyset.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/mutatorpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/providers.externaldata.gatekeeper.sh created
serviceaccount/gatekeeper-admin created
podsecuritypolicy.policy/gatekeeper-admin created
role.rbac.authorization.k8s.io/gatekeeper-manager-role created
clusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created
rolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
secret/gatekeeper-webhook-server-cert created
service/gatekeeper-webhook-service created
deployment.apps/gatekeeper-audit created
deployment.apps/gatekeeper-controller-manager created
poddisruptionbudget.policy/gatekeeper-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created

# Verify
[root@k8s-master1 opa]#kubectl get po -n gatekeeper-system
NAME                                             READY   STATUS    RESTARTS   AGE
gatekeeper-audit-5b869c66f9-6qvs5                1/1     Running   0          76s
gatekeeper-controller-manager-85498f495c-46ghm   1/1     Running   0          76s
gatekeeper-controller-manager-85498f495c-8hwkz   1/1     Running   0          76s
gatekeeper-controller-manager-85498f495c-xc8db   1/1     Running   0          76s
```
Once deployed, the two cases below can be used to test the effect. 😘
Case 1: Forbid Privileged Containers
==💘 Case 1: Forbid privileged containers - 2023.6.1 (tested OK)==
- Lab environment
Lab environment: 1. win10, VMware Workstation VMs; 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.20.0; docker: https:; extraction code: 0820; 2023.6.1-opa-code
Gatekeeper is already installed.
Deploy privileged_tpl.yaml and privileged_constraints.yaml
```shell
[root@k8s-master1 opa]#vim privileged_tpl.yaml
```

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: privileged
spec:
  crd:
    spec:
      names:
        kind: privileged
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package admission

      violation[{"msg": msg}] {   # if violation evaluates to true (the body passes), the constraint is violated
        containers = input.review.object.spec.template.spec.containers
        c_name := containers[0].name
        containers[0].securityContext.privileged   # true means the constraint is violated
        msg := sprintf("notice: privileged mode is not allowed for container '%v'!", [c_name])
      }
```

```shell
[root@k8s-master1 opa]#vim privileged_constraints.yaml
```

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: privileged
metadata:
  name: privileged
spec:
  match:   # resources to match
    kinds:
    - apiGroups: ["apps"]
      kinds:
      - "Deployment"
      - "DaemonSet"
      - "StatefulSet"
```

```shell
# Deploy
[root@k8s-master1 opa]#kubectl apply -f privileged_tpl.yaml
constrainttemplate.templates.gatekeeper.sh/privileged created
[root@k8s-master1 opa]#kubectl apply -f privileged_constraints.yaml
privileged.constraints.gatekeeper.sh/privileged created
# View the resources
[root@k8s-master1 opa]#kubectl get ConstraintTemplate
NAME         AGE
privileged   71s
[root@k8s-master1 opa]#kubectl get constraints
NAME         AGE
privileged   2m9s
```
- Deploy a test Deployment
```shell
[root@k8s-master1 opa]#kubectl create deployment web --image=nginx --dry-run=client -oyaml > deployment.yaml
[root@k8s-master1 opa]#vim deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web61
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        securityContext:
          privileged: true
```

```shell
# Deploy
[root@k8s-master1 opa]#kubectl apply -f deployment.yaml
Error from server ([privileged] notice: privileged mode is not allowed for container 'nginx'!): error when creating "deployment.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [privileged] notice: privileged mode is not allowed for container 'nginx'!

# The deployment is rejected. Now comment out privileged and deploy again:
[root@k8s-master1 opa]#vim deployment.yaml
#        securityContext:
#          privileged: true
[root@k8s-master1 opa]#kubectl apply -f deployment.yaml
deployment.apps/web61 created
[root@k8s-master1 opa]#kubectl get deployment|grep web61
web61   1/1   1   1   18s
# With privileged disabled, the Deployment is created normally.
```
Test complete. 😘
Case 2: Allow Images Only from Specific Registries
==💘 Case 2: Allow images only from specific registries - 2023.6.1 (tested OK)==
- Lab environment
Lab environment: 1. win10, VMware Workstation VMs; 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.20.0; docker: https:; extraction code: 0820; 2023.6.1-opa-code
Gatekeeper is already installed.
Deploy the resources
```shell
[root@k8s-master1 opa]#cp privileged_tpl.yaml image-check_tpl.yaml
[root@k8s-master1 opa]#cp privileged_constraints.yaml image-check_constraints.yaml
[root@k8s-master1 opa]#vim image-check_tpl.yaml
```

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: image-check
spec:
  crd:
    spec:
      names:
        kind: image-check
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            prefix:
              type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package image

      violation[{"msg": msg}] {
        containers = input.review.object.spec.template.spec.containers
        image := containers[0].image
        not startswith(image, input.parameters.prefix)   # true (prefix doesn't match, negated) means the constraint is violated
        msg := sprintf("notice: image '%v' is not from a trusted registry", [image])
      }
```

```shell
[root@k8s-master1 opa]#vim image-check_constraints.yaml.yaml
```

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: image-check
metadata:
  name: image-check
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds:
      - "Deployment"
      - "DaemonSet"
      - "StatefulSet"
  parameters:   # parameters passed to OPA
    prefix: "lizhenliang/"
```

```shell
# Deploy:
[root@k8s-master1 opa]#kubectl apply -f image-check_tpl.yaml
constrainttemplate.templates.gatekeeper.sh/image-check created
[root@k8s-master1 opa]#kubectl apply -f image-check_constraints.yaml.yaml
image-check.constraints.gatekeeper.sh/image-check created
# Verify
[root@k8s-master1 opa]#kubectl get constrainttemplate
NAME          AGE
image-check   71s
privileged    59m
[root@k8s-master1 opa]#kubectl get constraints
NAME                                                AGE
image-check.constraints.gatekeeper.sh/image-check   70s
NAME                                                AGE
privileged.constraints.gatekeeper.sh/privileged     59m
```
- Create test Deployments
```shell
[root@k8s-master1 opa]#kubectl create deployment web666 --image=nginx
error: failed to create deployment: admission webhook "validation.gatekeeper.sh" denied the request: [image-check] notice: image 'nginx' is not from a trusted registry
[root@k8s-master1 opa]#kubectl create deployment web666 --image=lizhenliang/nginx
deployment.apps/web666 created
```
As expected. 😘
4. Storing Sensitive Data in Secrets
Covered in a separate note.
5. Running Containers in a Secure Sandbox: gVisor
About gVisor
As we know, applications in a container make system calls directly against the host's Linux kernel, so container isolation is relatively weak. Although the kernel keeps strengthening its own security features, its codebase is extremely complex and CVE vulnerabilities appear constantly. The way to reduce this risk is better isolation: cut the dependency of in-container programs on the host kernel.
gVisor, a container sandbox technology open-sourced by Google, takes exactly this approach. It isolates the application in the container from the host kernel: it implements most of the Linux kernel's system calls itself and cleverly redirects the container processes' system calls to gVisor instead.
gVisor is OCI-compatible and integrates seamlessly with Docker and K8s, which makes it easy to adopt.
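As a sketch of that K8s integration (assuming a node whose container runtime is already configured with gVisor's `runsc` handler under the name `runsc`, as set up in the gVisor labs listed at the top of this section), a RuntimeClass lets individual Pods opt into the sandbox:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                   # must match the handler name configured in containerd
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx          # illustrative name
spec:
  runtimeClassName: gvisor       # run this Pod's containers under gVisor
  containers:
  - name: web
    image: nginx
```

Pods without `runtimeClassName` keep using the default runtime, so the sandbox can be rolled out selectively to untrusted workloads.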