k8s wordpress pod won't start

Apr 25 17:11:34 k8s_n1 kubelet: I0425 17:11:34.860041    2476 kuberuntime_manager.go:742] checking backoff for container "wordpress" in pod "wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)"

Apr 25 17:11:34 k8s_n1 kubelet: I0425 17:11:34.860221    2476 kuberuntime_manager.go:752] Back-off 20s restarting failed container=wordpress pod=wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)

Apr 25 17:11:34 k8s_n1 kubelet: E0425 17:11:34.860343    2476 pod_workers.go:182] Error syncing pod f63ddbae-2995-11e7-a7d0-5254000d6f84 ("wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)"), skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 20s restarting failed container=wordpress pod=wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)"

Apr 25 17:11:35 k8s_n1 kubelet: I0425 17:11:35.513133    2476 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/3a44f3e6-256b-11e7-8373-5254000d6f84-default-token-73np4" (spec.Name: "default-token-73np4") pod "3a44f3e6-256b-11e7-8373-5254000d6f84" (UID: "3a44f3e6-256b-11e7-8373-5254000d6f84").

Apr 25 17:11:35 k8s_n1 kubelet: W0425 17:11:35.572486    2476 docker_sandbox.go:263] Couldn't find network status for default/wordpress-1595585052-n89d8 through plugin: invalid network status for

Apr 25 17:11:42 k8s_n1 kubelet: I0425 17:11:42.703337    2476 helpers.go:101] Unable to get network stats from pid 25234: couldn't read network stats: failure opening /proc/25234/net/dev: open /proc/25234/net/dev: no such file or directory

Apr 25 17:11:47 k8s_n1 kubelet: I0425 17:11:47.208676    2476 helpers.go:101] Unable to get network stats from pid 24895: couldn't read network stats: failure opening /proc/24895/net/dev: open /proc/24895/net/dev: no such file or directory

Apr 25 17:11:48 k8s_n1 kubelet: I0425 17:11:48.457148    2476 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/f63ddbae-2995-11e7-a7d0-5254000d6f84-default-token-h49ds" (spec.Name: "default-token-h49ds") pod "f63ddbae-2995-11e7-a7d0-5254000d6f84" (UID: "f63ddbae-2995-11e7-a7d0-5254000d6f84").

Apr 25 17:11:48 k8s_n1 kubelet: I0425 17:11:48.736002    2476 kuberuntime_manager.go:458] Container {Name:wordpress Image:wordpress:4.7.3-apache Command:[] Args:[] WorkingDir: Ports:[{Name:wordpress HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:WORDPRESS_DB_HOST Value:wordpress-mysql ValueFrom:nil} {Name:WORDPRESS_DB_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:mysql-pass,},Key:password.txt,Optional:nil,},}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:wordpress-persistent-storage ReadOnly:false MountPath:/var/www/html SubPath:} {Name:default-token-h49ds ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.

Apr 25 17:11:48 k8s_n1 kubelet: I0425 17:11:48.736342    2476 kuberuntime_manager.go:742] checking backoff for container "wordpress" in pod "wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)"

Apr 25 17:11:48 k8s_n1 kubelet: I0425 17:11:48.736586    2476 kuberuntime_manager.go:752] Back-off 20s restarting failed container=wordpress pod=wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)

Apr 25 17:11:48 k8s_n1 kubelet: E0425 17:11:48.736660    2476 pod_workers.go:182] Error syncing pod f63ddbae-2995-11e7-a7d0-5254000d6f84 ("wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)"), skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 20s restarting failed container=wordpress pod=wordpress-1595585052-n89d8_default(f63ddbae-2995-11e7-a7d0-5254000d6f84)"

Apr 25 17:11:50 k8s_n1 kubelet: I0425 17:11:50.465307    2476 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/3ab8dbab-256b-11e7-8373-5254000d6f84-elasticsearch-token-szhsl" (spec.Name: "elasticsearch-token-szhsl") pod "3ab8dbab-256b-11e7-8373-5254000d6f84" (UID: "3ab8dbab-256b-11e7-8373-5254000d6f84").

Apr 25 17:11:56 k8s_n1 kubelet: E0425 17:11:56.687817    2476 fsHandler.go:121] failed to collect filesystem stats - rootDiskErr: <nil>, rootInodeErr: <nil>, extraDiskErr: du command failed on /var/lib/docker/containers/f1eb964f079a7e59503c013434c9503947043b83e12b817a062c3c10ae623c4d with output stdout: , stderr: du: cannot access ‘/var/lib/docker/containers/f1eb964f079a7e59503c013434c9503947043b83e12b817a062c3c10ae623c4d’: No such file or directory

Apr 25 17:11:56 k8s_n1 kubelet: - exit status 1

Date: 04-23

Related articles on "k8s wordpress pod won't start"

k8s: how to authenticate to the registry when a pod starts from a local image registry

Docker's own authentication against a local registry and Kubernetes's authentication against that registry are separate things; when k8s pulls an image it uses its own credentials for the registry, which is quite a pitfall. Configuring Docker's local registry authentication: with the old Docker version 1.13.1, authenticating against a local registry first needs a config file change: vim /etc/sysconfig/docker OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecur
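
For reference, the k8s side of that separate authentication is an imagePullSecret rather than the node's docker login; a minimal sketch, where registry.example.com, the admin user and the secret name regcred are all placeholder values:

# Store the registry credentials in a docker-registry type secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=admin \
  --docker-password='...' \
  --docker-email=admin@example.com

# Attach it to the default service account so pods in the namespace pull with it
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'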

How to fix a k8s cluster becoming unresponsive with server timeouts after starting tens of thousands of containers (a hundred containers per pod, so a hundred pods simulate tens of thousands of containers)

Problem description: put a hundred containers into one POD, then have the K8S cluster deploy a hundred such PODs, which gives the target of running tens of thousands of containers. Test environment: 3 bare-metal DELL servers, 16 cores and 64 GB RAM each; disk capacity is over 1 TB, easily enough. 1. At first, with about 5,000 containers running (that is, 50 PODs), the cluster came up ten-odd minutes after deployment, which felt fine. 2. Increased the load from 50 PODs to 100 PODs, expecting it would not take much longer; half an hour after quitting time it still had not come up, connections to the cluster were slow, kubectl commands took ages to return anything, and the UI reported server timeouts. I thought, ...
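
The scale-up step in that experiment is just a replica change on the controller; as a rough sketch (the deployment name big-pod is made up here):

# Go from 50 to 100 copies of the hundred-container pod
kubectl scale deployment big-pod --replicas=100

# Watch how many actually reach Running
kubectl get pods | grep -c Running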

K8s Pods, advanced topics

Note: this article follows on from the previous one, K8s Pod resource management and creating a Harbor private image registry, https://blog.51cto.com/14464303/2471369 1. Resource limits: resource requests and limits for pods and containers: spec.containers[].resources.limits.cpu # CPU upper limit spec.containers[].resources.limits.memory # memory upper limit spec.containers[].resources.requests.cpu # the base amount allocated at creation...
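
Those spec.containers[].resources fields can also be set from the command line; a sketch, assuming a deployment named wordpress and illustrative values:

# Requests are what the scheduler reserves, limits are the hard ceiling
kubectl set resources deployment wordpress \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi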

K8s Pod health checks

For Pod health checking, kubernetes provides two kinds of probes (Probes): LivenessProbe and ReadinessProbe. LivenessProbe: determines whether the container is alive, i.e. in the running state; if the LivenessProbe finds the container unhealthy, the kubelet kills the container and restarts it or not according to the container's restart policy. If a container does not define a LivenessProbe, the kubelet treats its LivenessProbe result as always successful. ReadinessPro...
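
A minimal sketch of both probes on the wordpress image from the log above; the probe types, paths and timings are illustrative, not taken from the article:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-probe-demo
spec:
  containers:
  - name: wordpress
    image: wordpress:4.7.3-apache
    ports:
    - containerPort: 80
    livenessProbe:            # kubelet kills and restarts the container when this fails
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:           # pod is removed from Service endpoints while this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
EOF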

Troubleshooting k8s pods stuck in the ContainerCreating state

Based on: https://blog.csdn.net/learner198461/article/details/78036854 https://liyang.pro/solve-k8s-pod-containercreating/ https://blog.csdn.net/golduty2/article/details/80625485 with some adjustments and notes for my actual situation. When creating the Dashboard, the status stayed at ContainerCreating [[email protected] k8...
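
When a pod sits in ContainerCreating there is no container log yet, so the reason is normally in the events; the usual first checks (pod name and namespace are placeholders):

# The Events section at the bottom shows image pull, volume mount or CNI errors
kubectl describe pod <pod-name> -n <namespace>

# The same events, cluster-wide and sorted by time
kubectl get events --sort-by=.metadata.creationTimestamp

# On the node the pod was scheduled to, the kubelet log has the details
journalctl -u kubelet -f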

Pod status and lifecycle management in k8s

Pod status and lifecycle management. 1. What is a Pod? 2. How does a Pod manage multiple containers? 3. Using Pods. 4. Pod durability and termination. 5. The Pause container. 6. Init containers. 7. The Pod lifecycle: (1) Pod phase (2) how a Pod is created (3) Pod status (4) Pod liveness probing (5) when to use livenessProbe and readinessProbe (6) the Pod restart policy (7) the lifetime of a Pod (8) livenessProbe in detail. 1. What is a Pod? In kubernetes, a Pod is what you can...
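
The phase and container state that outline talks about can be read straight from the pod's status; for example, with the pod from the log at the top of this page:

# Pod phase: Pending, Running, Succeeded, Failed or Unknown
kubectl get pod wordpress-1595585052-n89d8 -o jsonpath='{.status.phase}'

# Restart count of the first container, which is what drives the CrashLoopBackOff back-off
kubectl get pod wordpress-1595585052-n89d8 -o jsonpath='{.status.containerStatuses[0].restartCount}'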

Errors when a K8s POD connects to a database

[[email protected] xxxx]# ./showlog.sh dr iff-dr-1128668949-lb90g 2017-09-29 03:21:57,575 INFO [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction: Logging - STABLE org.wildfly.swarm:logging:2017.8.1 2017-09-29 03:21:57,612 INFO [org.wildfly.s

k8s Pods

1. Pod categories. Standalone Pods; controller-managed Pods: Kubernetes uses a higher-level abstraction called a Controller to manage Pod instances. Every Pod has a special "root container", the Pause container. The relationship between Pods and controllers: • controllers: objects that manage and run containers on the cluster • associated with Pods through a label-selector • through controllers, Pods get application operations such as scaling and upgrading. 2. Pod container categories: • Infrastructure Container: the base container • maintains the whole Pod's network...
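
With the docker runtime used in the logs above, each pod's pause ("root") container is visible on the node itself; a quick way to list them, assuming shell access to the node:

# One k8s_POD_* sandbox container per pod holds the pod's network namespace
docker ps --filter "name=k8s_POD" --format '{{.Names}}\t{{.Image}}'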

Updating a Pod's image in k8s

In practice with k8s, if pods are started through an RC you can upgrade the pod version directly with a rolling update; but in our case a stateful mysql service runs inside the pod and is not associated with any RC, so the only way to update is to replace the pod directly by updating its configuration. The script below is the simple one we use for updates: #!/bin/bash # namespace ns=$1 # pod name podname=$2 # fetch the pod's yaml config /root/k8s.sh th --namespace=$ns get pods $podname -o yaml > ...
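
The rest of that script is cut off above; a sketch of the same replace-the-pod idea using plain kubectl instead of the author's /root/k8s.sh wrapper, with the new image passed as a third argument (my addition, not part of the original script):

#!/bin/bash
ns=$1        # namespace
podname=$2   # pod name
newimage=$3  # image to switch to, e.g. wordpress:4.7.5-apache

# Dump the live pod spec, swap the image, then delete and recreate the pod in one step
kubectl --namespace="$ns" get pod "$podname" -o yaml \
  | sed "s|image: .*|image: $newimage|" \
  | kubectl --namespace="$ns" replace --force -f -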