CKAD - Application Design and Build (Volumes)

Room9_ 2022. 1. 18. 15:39

Volumes

Kubernetes supports many types of volumes, and a Pod can use several volume types at the same time. Ephemeral volumes share the lifecycle of the Pod: when the Pod dies, its ephemeral volumes are destroyed with it. A Persistent Volume (PV), on the other hand, exists beyond the lifecycle of any Pod.

Below, we create a Persistent Volume and then use a Persistent Volume Claim (PVC) to mount it into a Pod.
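
For contrast with the lab below, here is a minimal sketch of an ephemeral volume: an emptyDir (the Pod name and mount path are illustrative, not part of the lab) that is created with the Pod and deleted with it, which is exactly the behaviour the Persistent Volume setup avoids.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "date > /cache/started.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /cache            # anything written here ...
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}                   # ... is lost when the Pod is deleted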


KodeKloud

Q1. We have deployed a POD. Inspect the POD and wait for it to start running.

root@controlplane:~# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
webapp   1/1     Running   0          3m54s

Q2. The application stores logs at location /log/app.log. View the logs.

root@controlplane:~# kubectl exec webapp -- cat /log/app.log
[2022-01-18 04:44:53,286] INFO in event-simulator: USER4 is viewing page2
[2022-01-18 04:44:54,288] INFO in event-simulator: USER4 is viewing page1
[2022-01-18 04:44:55,289] INFO in event-simulator: USER4 is viewing page3
[2022-01-18 04:44:56,291] INFO in event-simulator: USER3 is viewing page3
[2022-01-18 04:44:57,291] INFO in event-simulator: USER3 is viewing page3
[2022-01-18 04:44:58,293] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2022-01-18 04:44:58,293] INFO in event-simulator: USER1 is viewing page1
[2022-01-18 04:44:59,294] INFO in event-simulator: USER4 is viewing page2
[2022-01-18 04:45:00,295] INFO in event-simulator: USER1 is viewing page3
[2022-01-18 04:45:01,296] WARNING in event-simulator: USER7 Order failed as the item is OUT OF STOCK.
[2022-01-18 04:45:01,301] INFO in event-simulator: USER3 logged in
[2022-01-18 04:45:02,302] INFO in event-simulator: USER4 is viewing page2
[2022-01-18 04:45:03,304] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.

Q3. If the POD was to get deleted now, would you be able to view these logs?

 > No. No volume is configured, so files written inside the container's filesystem are deleted along with the Pod.


Q4. Configure a volume to store these logs at /var/log/webapp on the host.

  • Name: webapp
  • Image Name: kodekloud/event-simulator
  • Volume HostPath: /var/log/webapp
  • Volume Mount: /log
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: default
spec:
  containers:
  - image: kodekloud/event-simulator
    name: event-simulator
    volumeMounts:
    - mountPath: /log
      name: my-volume
  volumes:
  - name: my-volume
    hostPath:
      path: /var/log/webapp
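
A Pod's volumes cannot be changed in place, so the running Pod has to be recreated for this to take effect. A minimal sketch, assuming the manifest above is saved as webapp.yaml (the file name is an assumption):

kubectl replace --force -f webapp.yaml
ls /var/log/webapp/    # on the node: app.log should appear once the Pod is Running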

Q5. Create a Persistent Volume with the given specification.

  • Volume Name: pv-log
  • Storage: 100Mi
  • Access Modes: ReadWriteMany
  • Host Path: /pv/log
  • Reclaim Policy: Retain
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /pv/log
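
Assuming the manifest is saved as pv.yaml (the file name is an assumption), the volume can be created and checked with:

kubectl create -f pv.yaml
kubectl get pv pv-log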

Q6. Let us claim some of that storage for our application. Create a Persistent Volume Claim with the given specification.

  • Volume Name: claim-log-1
  • Storage Request: 50Mi
  • Access Modes: ReadWriteOnce
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 50Mi
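
A minimal sketch of creating the claim, assuming the manifest is saved as pvc.yaml as in Q10 below:

kubectl create -f pvc.yaml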

Q7.  What is the state of the Persistent Volume Claim?

root@controlplane:~# kubectl get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim-log-1   Pending                                                     117s

 > PENDING


Q8. What is the state of the Persistent Volume?

root@controlplane:~# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-log   100Mi      RWX            Retain           Available                                   2m11s

 > Available


Q9. Why is the claim not bound to the available Persistent Volume?

 > Access mode mismatch: the PV offers ReadWriteMany (RWX) while the claim requests ReadWriteOnce (RWO), so the controller cannot bind them.
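
A quick way to confirm this from the cluster (output not reproduced here, since it varies) is to read the Events section of the claim, which records why it could not be bound:

kubectl describe pvc claim-log-1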


Q10. Update the Access Mode on the claim to bind it to the PV. Delete and recreate the claim-log-1.

  • Volume Name: claim-log-1
  • Storage Request: 50Mi
  • PVol: pv-log
  • Status: Bound
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 50Mi
root@controlplane:~# kubectl apply -f pvc.yaml 
persistentvolumeclaim/claim-log-1 created
root@controlplane:~# 
root@controlplane:~# kubectl get pvc
NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim-log-1   Bound    pv-log   100Mi      RWX                           3s

Q11. You requested for 50Mi, how much capacity is now available to the PVC?

root@controlplane:~# kubectl get pvc
NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim-log-1   Bound    pv-log   100Mi      RWX                           86s

> 100Mi. A PVC binds to an entire PV, so the full 100Mi capacity of pv-log is available to the claim even though only 50Mi was requested.


Q12. Update the webapp pod to use the persistent volume claim as its storage. Replace hostPath configured earlier with the newly created PersistentVolumeClaim.

  • Name: webapp
  • Image Name: kodekloud/event-simulator
  • Volume: PersistentVolumeClaim=claim-log-1
  • Volume Mount: /log
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: default
spec:
  containers:
  - env:
    - name: LOG_HANDLERS
      value: file
    image: kodekloud/event-simulator
    name: event-simulator
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-tcvnh
      readOnly: true
    - mountPath: /log
      name: log-volume
  volumes:
  - name: default-token-tcvnh
    secret:
      defaultMode: 420
      secretName: default-token-tcvnh
  - name: log-volume
    persistentVolumeClaim:
      claimName: claim-log-1
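
As with Q4, the volume change requires recreating the Pod; a sketch assuming the updated manifest is saved as webapp.yaml (the file name is an assumption):

kubectl replace --force -f webapp.yaml
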
root@controlplane:~# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pv-log   100Mi      RWX            Retain           Bound    default/claim-log-1                           10m

Q13. What is the Reclaim Policy set on the Persistent Volume pv-log?

 > Retain


Q14. What would happen to the PV if the PVC was destroyed?

 > The PV is not deleted, but it is not made available to any other claim either. With the Retain policy it moves to the Released state and waits for an administrator to reclaim it manually.
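
A hedged sketch of that manual reclamation: once the data on the volume has been dealt with, clearing the stale claimRef makes the Released volume Available again (a merge patch removes the field when it is set to null):

kubectl patch pv pv-log --type merge -p '{"spec":{"claimRef": null}}'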


Q15. Try deleting the PVC and notice what happens.

>  The PVC gets stuck in the Terminating state.


Q16. Why is the PVC stuck in the Terminating state?

> The PVC is being used by a POD
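
This is visible on the object itself: the StorageObjectInUseProtection feature adds a kubernetes.io/pvc-protection finalizer, and deletion only completes once no Pod references the claim. One way to check (the jsonpath query is just an example):

kubectl get pvc claim-log-1 -o jsonpath='{.metadata.finalizers}'
# typically shows ["kubernetes.io/pvc-protection"]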


Q17. Let us now delete the webapp Pod. Once deleted, wait for the pod to fully terminate.

root@controlplane:~# kubectl delete pod webapp 
pod "webapp" deleted

Q18. What is the state of the PVC now?

root@controlplane:~# kubectl get pvc
No resources found in default namespace.

 > Deleted. Once the Pod using the claim terminated, the pending deletion completed and the PVC was removed.


Q19. What is the state of the Persistent Volume now?

root@controlplane:~# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                 STORAGECLASS   REASON   AGE
pv-log   100Mi      RWX            Retain           Released   default/claim-log-1                           17m

 > Released


 
