<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>sunny-10.log</title>
        <link>https://velog.io/</link>
        <description>IT</description>
        <lastBuildDate>Sat, 03 May 2025 21:42:08 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>sunny-10.log</title>
            <url>https://images.velog.io/images/sunny-10/profile/31d52078-7d85-4d15-9187-c7b99fdf6760/social.png</url>
            <link>https://velog.io/</link>
        </image>
        <copyright>Copyright (C) 2019. sunny-10.log. All rights reserved.</copyright>
        <atom:link href="https://v2.velog.io/rss/sunny-10" rel="self" type="application/rss+xml"/>
        <item>
            <title><![CDATA[Building a Service Mesh Observability Platform]]></title>
            <link>https://velog.io/@sunny-10/%EC%84%9C%EB%B9%84%EC%8A%A4-%EB%A9%94%EC%89%AC-%EA%B4%80%EC%B8%A1-%ED%94%8C%EB%9E%AB%ED%8F%BC-%EA%B5%AC%EC%84%B1</link>
            <guid>https://velog.io/@sunny-10/%EC%84%9C%EB%B9%84%EC%8A%A4-%EB%A9%94%EC%89%AC-%EA%B4%80%EC%B8%A1-%ED%94%8C%EB%9E%AB%ED%8F%BC-%EA%B5%AC%EC%84%B1</guid>
            <pubDate>Sat, 03 May 2025 21:42:08 GMT</pubDate>
            <description><![CDATA[<h2 id="개념">[Concepts]</h2>
<h3 id="1-istio">1. Istio</h3>
<p>Role:
Builds a service mesh that provides traffic management, security, monitoring, and tracing.
Injects an Envoy proxy (sidecar) in front of each service to control network traffic.
Key features:
Ingress Gateway: entry point for external traffic
VirtualService &amp; DestinationRule: traffic routing, version-based splitting, etc.
Telemetry: the Envoy proxies collect metrics and traces for Prometheus and OpenTelemetry
Security: mTLS, authentication/authorization, JWT handling, etc.</p>
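<p>As a hedged illustration of the routing objects mentioned above (the <code>reviews</code> service and the 90/10 split are example values borrowed from the Bookinfo sample, not part of this setup), a DestinationRule declares the versions and a VirtualService splits traffic between them:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10</code></pre>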
<h3 id="2-opentelemetry-collector">2. OpenTelemetry Collector</h3>
<p>Role:
Receives the trace data (spans) collected by Istio/Envoy and forwards it to various backends.
Supports multi-backend export to Jaeger, Zipkin, New Relic, OTLP, and more.
Key features:
receivers: ingest data (e.g. OTLP, Zipkin)
processors: transform and filter (optional)
exporters: send to backends (e.g. Jaeger, Prometheus Remote Write)
service.pipelines: wire the components above together</p>
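<p>A minimal sketch of how those four sections fit together (the endpoints here are assumptions for illustration; the addon manifest applied later ships its own configuration):</p>
<pre><code>receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  otlp/jaeger:
    endpoint: jaeger-collector:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]</code></pre>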
<h3 id="3-kiali">3. Kiali</h3>
<p>Role:
Visualization tool for the Istio service mesh. Shows real-time traffic flow, routing policies, error rates, performance, and more.
Key features:
Real-time service map
Visualization of request counts, latency, error rates, etc.
Validation of Istio configuration (VirtualService, DestinationRule, etc.)
Inspection of service-to-service dependencies (very useful for debugging)</p>
<h2 id="구성">[Setup]</h2>
<h3 id="1-기본-istio-환경-구축--샘플-앱-배포">1. Basic Istio Environment + Sample App Deployment</h3>
<pre><code># Install istioctl with brew
brew install istioctl

# Check the istioctl version to confirm the install
istioctl version

# Install Istio with the demo profile
# → includes Prometheus, Kiali, Jaeger, and Grafana
istioctl install --set profile=demo -y
# Tech CS folks may hit an error here! Connect to the VPN and retry

# On KakaoCloud this fails with "failed to call Webhook"
# https://kko.kakao.com/NwPwmdCDcA
# Apply the change in the pod spec, not the deploy-wide spec.
kubectl edit deployment -n istio-system istiod
#***
#spec:
#    hostNetwork: true # add
#    dnsPolicy: ClusterFirstWithHostNet
#containers:
#***

# Add the istio-injection label to the default namespace
kubectl label namespace default istio-injection=enabled

# Deploy the Bookinfo sample app
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.21/samples/bookinfo/platform/kube/bookinfo.yaml

# Apply the Gateway + traffic routing definitions
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.21/samples/bookinfo/networking/bookinfo-gateway.yaml

# Check the EXTERNAL-IP of the Istio Ingress Gateway
# then open the address in a web browser
kubectl get svc istio-ingressgateway -n istio-system
# e.g. http://&lt;EXTERNAL-IP&gt;/productpage</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/b705567d-cd1b-4808-9f82-1aeecdfd6558/image.png" alt=""></p>
<table>
<thead>
<tr>
<th>Requirement</th>
<th>Recommendation</th>
</tr>
</thead>
<tbody><tr>
<td>Simple web traffic routing (e.g. ingress + path-based)</td>
<td>NGINX Ingress Controller</td>
</tr>
<tr>
<td>Complex traffic splitting, authentication (mTLS), policy control, a full service mesh</td>
<td>Istio Ingress Gateway</td>
</tr>
<tr>
<td>Environments that need A/B testing, canary deployments, header-based routing, and observability</td>
<td>Istio</td>
</tr>
<tr>
<td>When quick setup and simple operations matter most</td>
<td>NGINX</td>
</tr>
</tbody></table>
<p>An NGINX Ingress Controller is a &quot;simple front door&quot;;
Istio plus its Ingress Gateway is an &quot;advanced control center with security, monitoring, and smart traffic control&quot;</p>
<h3 id="2-kiali-prometheus-grafana로-메트릭-시각화">2. Metric Visualization with Kiali, Prometheus, and Grafana</h3>
<p>Prometheus: stores the metric data collected by Istio
Grafana: visualizes the Prometheus data (dashboards)
Kiali: visualizes the relationships and traffic flow between services in the Istio mesh</p>
<pre><code># Create an observability namespace (optional)
kubectl create namespace observability
# Create it if needed; when deploying the pods below you can optionally switch them to this namespace.
# By default everything is created in the istio-system namespace; changing it is not covered here.

# Apply the addon manifests shipped with Istio
# Includes Prometheus, Grafana, and Kiali
# Deploys the related pods into the istio-system namespace
# Prometheus may already be installed!
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/prometheus.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/grafana.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/kiali.yaml

# Check that everything is Running
kubectl get pods -n istio-system

# Access test (port forwarding)
# These are internal services, so rather than exposing them we port-forward!
# A similar setup is described in the KakaoCloud Grafana technical docs.

# Kiali (service map)
# http://localhost:20001
kubectl port-forward svc/kiali -n istio-system 20001:20001
# Grafana (dashboards)
# http://localhost:3000
kubectl port-forward svc/grafana -n istio-system 3000:3000
# Prometheus (metrics)
# http://localhost:9090
kubectl port-forward svc/prometheus -n istio-system 9090:9090</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/00108599-2ace-4d8f-a45b-bf69a5d97f17/image.png" alt=""></p>
<h3 id="3-트레이스-추적">3. Trace Collection</h3>
<p>OpenTelemetry Collector: collects, processes, and forwards the trace data gathered by Istio
Jaeger: stores the trace data and provides a visualization UI
Istio: the Envoy sidecars generate traces and pass them to OpenTelemetry</p>
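<p>For reference, a hedged sketch of how Istio can be pointed at the collector (the service address and 100% sampling rate are assumptions; the addon manifests applied below may already wire this up):</p>
<pre><code># In MeshConfig: register the collector as a tracing provider
meshConfig:
  extensionProviders:
    - name: otel
      opentelemetry:
        service: opentelemetry-collector.istio-system.svc.cluster.local
        port: 4317
---
# Telemetry resource: enable tracing through that provider
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
    - providers:
        - name: otel
      randomSamplingPercentage: 100</code></pre>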
<pre><code># Deploy Jaeger
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/jaeger.yaml

# Deploy the OpenTelemetry Collector
# This configuration has the OpenTelemetry Collector receive OTLP data on port 4317
# and forward the received traces to Jaeger
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/extras/opentelemetry.yaml

# Check the UI via port forwarding
# http://localhost:16686
kubectl port-forward svc/jaeger -n istio-system 16686:16686

# Test command -&gt; after a few runs, the Jaeger UI shows service call paths, request latency, etc.
kubectl exec &quot;$(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name})&quot; \
    -c productpage -- curl -s http://localhost:9080/productpage</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/ea021c27-7e69-4075-afcf-f1e75345d000/image.png" alt=""></p>
<hr>
<p>(References)</p>
<ul>
<li><a href="https://blog.serialexperiments.co.uk/posts/kubernetes-port-forward-already-in-use/">https://blog.serialexperiments.co.uk/posts/kubernetes-port-forward-already-in-use/</a></li>
<li><a href="https://nginxstore.com/blog/istio/istio-prometheus-%EB%A9%94%ED%8A%B8%EB%A6%AD-%EC%88%98%EC%A7%91-%EB%B0%8F-grafana-%EC%97%B0%EB%8F%99/">https://nginxstore.com/blog/istio/istio-prometheus-%EB%A9%94%ED%8A%B8%EB%A6%AD-%EC%88%98%EC%A7%91-%EB%B0%8F-grafana-%EC%97%B0%EB%8F%99/</a></li>
<li><a href="https://istio.io/latest/docs/ops/integrations/jaeger/#installation">https://istio.io/latest/docs/ops/integrations/jaeger/#installation</a></li>
<li><a href="https://istio.io/latest/docs/tasks/observability/distributed-tracing/opentelemetry/">https://istio.io/latest/docs/tasks/observability/distributed-tracing/opentelemetry/</a></li>
<li><a href="https://velog.io/@rudclthe/istio-kiali">https://velog.io/@rudclthe/istio-kiali</a></li>
</ul>
]]></description>
        </item>
        <item>
            <title><![CDATA[Docker & K8S offline Install]]></title>
            <link>https://velog.io/@sunny-10/Docker-K8S-offline-Install</link>
            <guid>https://velog.io/@sunny-10/Docker-K8S-offline-Install</guid>
            <pubDate>Sat, 03 May 2025 20:21:48 GMT</pubDate>
            <description><![CDATA[<p>[This post was written as of 2022]
Test goal: install Docker and K8S in an air-gapped network to understand how each service is structured</p>
<h2 id="1-환경-세팅">1. Environment Setup</h2>
<p>Prepare 3 VMs (1 master + 2 workers)
To simulate an offline environment, restrict the ports in the outbound policy</p>
<p>(In a KIC environment, open all outbound policy ports when creating the VMs, then change the SG once creation is finished)</p>
<p>6443 - API server
2379 - etcd
2380 - etcd
4443 - metrics server
179 - Calico
10250 - kubelet API
<img src="https://velog.velcdn.com/images/sunny-10/post/dc2c03a7-549b-40f3-99d0-7052d15cdcb9/image.png" alt=""></p>
<h2 id="2-docker-설치">2. Installing Docker</h2>
<h3 id="21-docker-설치에-필요한-패키지를-준비">2.1 Prepare the packages needed for the Docker install</h3>
<p>Download them via the URL below, or copy them from a VM that already has Docker installed </p>
<p>focal is Ubuntu 20.xx
bionic is Ubuntu 18.xx
URL : <a href="https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/">https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/</a></p>
<p>VM path: check /var/cache/apt/archives</p>
<p>Required packages - yohan guide (install matching versions)
containerd~
docker-ce-cli~
docker-ce-rootless-extra~
docker-ce~
docker-compose~
docker-scan-plugin~
libltdl~
pigz~</p>
<p>(This test uses only containerd, docker-ce, and docker-cli)</p>
<h3 id="22-scp-를-통한-패키지-이동">2.2 Moving the packages with scp</h3>
<p>If you downloaded the packages locally:
local → gateway (oliver) → offline VM
If you downloaded them on a VM with network access:
online VM → gateway (oliver) → offline VM
<img src="https://velog.velcdn.com/images/sunny-10/post/12f716c1-cb44-4deb-a8e2-34a3073c3bd0/image.png" alt=""></p>
<h3 id="23-dpkg-를-통한-설치">2.3 Installing with dpkg</h3>
<p>Install the packages with dpkg from the directory containing the .deb files</p>
<pre><code>$ sudo dpkg -i *.deb</code></pre><hr>
<h4 id="trouble-shooting❗️">Trouble Shooting❗️</h4>
<ul>
<li>Resolved a containerd version problem by installing 1.4.1 or later</li>
</ul>
<p>dpkg: error processing package docker-ce (--install): dependency problems - leaving unconfigured</p>
<p>Dependency errors appear endlessly whenever packages are missing</p>
<hr>
<h3 id="24-docker-설정-및-croup-변경">2.4 Docker configuration and cgroup change</h3>
<h4 id="241-docker-설정-선택적">2.4.1 Docker configuration (optional)</h4>
<pre><code>$ sudo usermod -aG docker $USER 
# takes effect after the session is restarted
# once the user is in the docker group, sudo is no longer required</code></pre><h4 id="242-cgroup-변경">2.4.2 cgroup change</h4>
<pre><code>cat &lt;&lt;EOF | sudo tee /etc/docker/daemon.json
{
  &quot;exec-opts&quot;: [&quot;native.cgroupdriver=systemd&quot;],
  &quot;log-driver&quot;: &quot;json-file&quot;,
  &quot;log-opts&quot;: {
    &quot;max-size&quot;: &quot;100m&quot;
  },
  &quot;storage-driver&quot;: &quot;overlay2&quot;
}
EOF

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker</code></pre><h3 id="25-docker-설치-확인">2.5 Verify the Docker install</h3>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/4518f44b-cddb-45eb-b246-bbea45b1f8af/image.png" alt=""></p>
<h2 id="3-k8s-설치">3. Installing K8S </h2>
<p>kube-apiserver instances may differ from each other by at most one minor version
e.g. if one kube-apiserver is 1.21, the others must be 1.20 or 1.21</p>
<p>kubelet must not be newer than kube-apiserver, and may be up to two minor versions older
e.g. if kube-apiserver is 1.21, kubelet can be 1.21, 1.20, or 1.19</p>
<p>kubectl may differ from kube-apiserver by at most one minor version
e.g. if kube-apiserver is 1.21, kubectl can be 1.22, 1.21, or 1.20</p>
<h3 id="31-환경-설정">3.1 Environment setup</h3>
<p>Prepare the required package files and image files
The package files can be obtained the same way as in 2.1
For the images, save them as .tar files with docker save</p>
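<p>Saving and loading the images can be sketched as follows (the image name and tag are examples only — match them to your cluster version):</p>
<pre><code># On the online VM: save each required image as a .tar
$ docker save -o kube-apiserver.tar k8s.gcr.io/kube-apiserver:v1.21.0

# Move the .tar files to the offline VM (with scp, as in 2.2), then load them
$ docker load -i kube-apiserver.tar
$ docker images</code></pre>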
<p>Required packages
kubeadm~
kubectl~
kubelet~
kubernetes-cni~
conntrack~
cri-tools~
socat~</p>
<p>Required images
kube-proxy
kube-apiserver
kube-controller-manager
kube-scheduler
etcd
coredns
pause
calico/cni (used when installing the CNI)
calico/node (used when installing the CNI)
calico/kube-controllers (used when installing the CNI)
<img src="https://velog.velcdn.com/images/sunny-10/post/ac18fd16-5a66-4190-be53-f6d29e46b132/image.png" alt=""></p>
<h3 id="32-k8s-설치-master-node">3.2 Installing K8S (master node)</h3>
<h4 id="321-설치">3.2.1 Install</h4>
<p>Load the prepared image files with docker load 
Install the prepared package files with dpkg</p>
<pre><code>$ sudo dpkg -i *.deb</code></pre><h4 id="322-설정">3.2.2 Configuration</h4>
<pre><code>$ sudo swapoff -a
$ sudo sed -i &#39;/ swap / s/^\(.*\)$/#\1/g&#39; /etc/fstab
$ sudo kubeadm init</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/8931c6c4-7387-4781-b8c3-c576ba5d300d/image.png" alt=""></p>
<pre><code>(master node)
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

(verify)
$ kubectl get nodes </code></pre><p>The node shows NotReady because the CNI is not installed yet</p>
<h3 id="33-cni-설치">3.3 Installing the CNI</h3>
<p>Calico is used here,
with the Calico images prepared earlier
Save the manifest at
<a href="https://projectcalico.docs.tigera.io/manifests/calico.yaml">https://projectcalico.docs.tigera.io/manifests/calico.yaml</a>
as a calico.yaml file</p>
<pre><code>$ kubectl apply -f calico.yaml

(verify)
$ kubectl get all -A
$ kubectl get nodes</code></pre><h3 id="34-worker-node-구축">3.4 Building the worker nodes</h3>
<p>Install by following section 2 (Docker install) and section 3 (K8S install) above (but do not run init)
Images used for the worker-node K8S install:</p>
<p>kube-proxy
coredns
pause
calico/cni (used when installing the CNI)
calico/node (used when installing the CNI)</p>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/2b939fc3-081d-4e52-b10a-a986ece20b4e/image.png" alt=""></p>
<pre><code>(worker node)
# join using the token printed by init on the master node
$ sudo kubeadm join 172.30.3.97:6443 --token gdpgce.j5mvth8gt7qu4u86 \
--discovery-token-ca-cert-hash sha256:ed4b205d09552dc4c957fb0e08c962560af5c9c93b0e3f222dfd7e29af7796e0 

(verify on the master node)
$ kubectl get all -A
$ kubectl get nodes</code></pre><h3 id="35-k8s-설치-결과">3.5 K8S install result</h3>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/d4d49603-fcb9-4002-b11a-f6367180526e/image.png" alt=""></p>
<h2 id="4-k8s-multi-master-구성">4. K8S Multi-Master Setup</h2>
<h3 id="41-vm-세팅">4.1 VM setup</h3>
<p>From the Docker &amp; K8S offline install above, 1 master node and 2 worker nodes should already exist (reused here)
Add 2 more master nodes the same way, and run kubeadm reset on the existing nodes
In total, 3 master nodes and 2 worker nodes will be joined</p>
<h3 id="42-main-master-token-얻기">4.2 Getting the Main Master Token</h3>
<pre><code>$ sudo kubeadm init --token-ttl 0 --control-plane-endpoint &quot;[Loadbalancer IP]:6443&quot; --upload-certs --pod-network-cidr=192.168.50.0/24 &gt;&gt; kubeadm-init-result.txt</code></pre>
<p>(If the pod network CIDR is not specified, later joins can fail because of overlapping ranges)
<img src="https://velog.velcdn.com/images/sunny-10/post/de46a531-7b47-4fe8-9253-874c66af68cb/image.png" alt=""></p>
<p>Red box = control-plane token
Yellow box = worker token</p>
<h3 id="43-각-노드-별-필요한-token으로-join">4.3 Joining each node with its token</h3>
<p>(Control-plane nodes)
 After finishing the VM setup in 4.1, run the control-plane join command instead of kubeadm init (see the red box above)</p>
<p>(Worker nodes)
 After finishing the VM setup in 4.1, run the worker join command (see the yellow box above)</p>
<h3 id="44-연결-확인">4.4 Verifying the connections</h3>
<p>Verify from the master node</p>
<pre><code>$ kubectl get all -A
$ kubectl get nodes</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/85f0ad11-dbca-4881-b735-bc41afa21e49/image.png" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/3e2e7c74-2fbd-4990-879c-4059455070c6/image.png" alt="">
Only run kubectl apply -f calico.yaml on the Main Master (do not run it on the Sub Masters)
Each node only needs to hold its required images</p>
<hr>
<p>(References)
Docker references</p>
<ul>
<li>get.docker.com</li>
<li><a href="https://choco-life.tistory.com/40">https://choco-life.tistory.com/40</a></li>
<li><a href="https://docs.cyberwatch.fr/deploy/en/2_deploy_cyberwatch/offline/swarm/install_docker.html">https://docs.cyberwatch.fr/deploy/en/2_deploy_cyberwatch/offline/swarm/install_docker.html</a></li>
</ul>
<p>K8S references</p>
<ul>
<li><a href="https://velog.io/@seokbin/Kubernetes-%ED%81%B4%EB%9F%AC%EC%8A%A4%ED%84%B0-%EC%84%A4%EC%B9%98-kubeadm-offline-%ED%99%98%EA%B2%BD">https://velog.io/@seokbin/Kubernetes-%ED%81%B4%EB%9F%AC%EC%8A%A4%ED%84%B0-%EC%84%A4%EC%B9%98-kubeadm-offline-%ED%99%98%EA%B2%BD</a></li>
<li><a href="https://github.com/tmax-cloud/install-k8s">https://github.com/tmax-cloud/install-k8s</a></li>
<li><a href="https://docs.genesys.com/Documentation/GCXI/latest/Dep/DockerOffline">https://docs.genesys.com/Documentation/GCXI/latest/Dep/DockerOffline</a></li>
<li><a href="https://www.sobyte.net/post/2022-06/k8s-intranet/">https://www.sobyte.net/post/2022-06/k8s-intranet/</a></li>
<li><a href="https://www.centlinux.com/2019/04/install-kubernetes-k8s-offline-on-centos-7.html">https://www.centlinux.com/2019/04/install-kubernetes-k8s-offline-on-centos-7.html</a></li>
</ul>
]]></description>
        </item>
        <item>
            <title><![CDATA[Building a Ceph Storage Cluster in k8s]]></title>
            <link>https://velog.io/@sunny-10/k8s-%EB%82%B4-Ceph-Storage-Cluster-%EA%B5%AC%EC%84%B1</link>
            <guid>https://velog.io/@sunny-10/k8s-%EB%82%B4-Ceph-Storage-Cluster-%EA%B5%AC%EC%84%B1</guid>
            <pubDate>Sat, 03 May 2025 19:52:21 GMT</pubDate>
            <description><![CDATA[<p>Kubernetes offers several storage options; 
among them, Ceph is easy to manage on-premises and delivers strong performance. </p>
<p>A Ceph Storage Cluster is stable and fast, and can be used flexibly as Block, File, and Object storage. 
This test builds a Ceph Storage Cluster inside a Kubernetes cluster and consumes it via PVCs.</p>
<h3 id="k8s-구성-환경">k8s environment</h3>
<p>k8s node 1 : 8vCPU / 32 GiB memory / 100GB + 50GB volumes
k8s node 2 : 8vCPU / 32 GiB memory / 100GB + 50GB volumes
k8s node 3 : 8vCPU / 32 GiB memory / 100GB + 50GB volumes
k8s node 4 : 8vCPU / 32 GiB memory / 100GB + 50GB volumes
The Ceph cluster needs 2 MGR daemons, 3 MON daemons, and one OSD daemon per disk.
<br></p>
<h3 id="1-rook-예제-및-소스파일-다운로드">1. Downloading the Rook examples and sources</h3>
<p>The Rook example files provide sample code that covers almost every environment</p>
<pre><code>$ git clone https://github.com/rook/rook.git/</code></pre><br>

<h3 id="2-helm-활용-rook-오퍼레이터-구성">2. Building the Rook operator with helm</h3>
<p>Set up rook-ceph simply with helm</p>
<pre><code>(add the repo) 
$ helm repo add rook-release https://charts.rook.io/release 
$ helm search repo rook-ceph </code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/304f50be-1544-4cfd-a9e0-35393118bf20/image.png" alt=""></p>
<pre><code>(install the rook operator) 
$ kubectl create namespace rook-ceph 
$ helm install --namespace rook-ceph rook-ceph rook-release/rook-ceph 
$ kubectl get all -n rook-ceph</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/9b8f02e3-d404-46d1-bda5-f34d035f9ede/image.png" alt="">
<br></p>
<h3 id="3-ceph-클러스터-구성">3. Building the Ceph cluster</h3>
<p>With the default configuration, 2 MGR / 3 MON / one OSD daemon per empty disk are created, 
and with at least 3 nodes the defaults are a reasonable install </p>
<p>The operator configured above must be running before the install below; 
if the operator is still installing, wait for it to finish first</p>
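<p>If the defaults need changing, the daemon counts above can be overridden through the chart values (a hedged sketch — check the chart's values.yaml for the exact keys in your Rook version):</p>
<pre><code># values-ceph-cluster.yaml (example values file)
cephClusterSpec:
  mon:
    count: 3
  mgr:
    count: 2

$ helm install --namespace rook-ceph rook-ceph-cluster \
    --set operatorNamespace=rook-ceph \
    -f values-ceph-cluster.yaml rook-release/rook-ceph-cluster</code></pre>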
<pre><code>$ helm install --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster 
$ kubectl get all -n rook-ceph</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/c9951c1f-6715-4c77-b885-73e388a07456/image.png" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/5dbf4211-246e-494a-934b-73ee0ccbc987/image.png" alt=""></p>
<hr>
<h4 id="trouble-shooting❗️">Trouble Shooting❗️</h4>
<p>The rook-ceph-osd-prepare-[hostname] pods reached Completed, but no OSD pods were created</p>
<pre><code>kubectl -n rook-ceph logs &lt;ceph-osd-prepare-pod&gt;</code></pre><p>Checking the logs with the command above showed 
&#39;cephosd: skipping OSD configuration as no devices matched the storage settings for this node&#39;</p>
<p>Resolved by adding a volume to each node</p>
<hr>
<br>

<h3 id="4-toolbox-설치">4. Installing the Toolbox</h3>
<p>Set this up with the toolbox.yaml file in rook/deploy/examples/ 
The toolbox provides access to the ceph CLI</p>
<pre><code>$ kubectl apply -f toolbox.yaml 
$ kubectl get deploy rook-ceph-tools -n rook-ceph 
$ kubectl -n rook-ceph exec -it \
$(kubectl -n rook-ceph get pod -l &quot;app=rook-ceph-tools&quot; -o jsonpath=&#39;{.items[0].metadata.name}&#39;) -- bash 
(toolbox)$ ceph -s 
(toolbox)$ ceph osd status</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/ea77fca0-8eda-437e-ab58-383071edae26/image.png" alt="">
<br></p>
<h3 id="5-dashboard-사용">5. Using the Dashboard</h3>
<p>Change the CephCluster Dashboard service to disable SSL and set a URL prefix</p>
<pre><code>$ kubectl get svc -n rook-ceph 
$ kubectl edit CephCluster rook-ceph -n rook-ceph 
(before) 
dashboard:
  enabled: true
  ssl: true

(after) 
dashboard:
  enabled: true
  ssl: false
  urlPrefix: /ceph-dashboard</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/bf4d282d-197c-4083-a0dd-9008852ebf8b/image.png" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/f7518306-3d0c-48e7-a3c4-d61d527b3635/image.png" alt=""></p>
<p>With SSL enabled the dashboard listens on port 8443, 
but with SSL disabled it is fixed on port 7000 
Edit the dashboard-loadbalancer.yaml example file from rook-ceph and apply it</p>
<pre><code>$ vi dashboard-loadbalancer.yaml 
(before) 
port: 8443
targetPort: 8443

(after) 
port: 7000
targetPort: 7000

$ kubectl apply -f dashboard-loadbalancer.yaml</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/f1333ae8-e4d9-46e3-930c-46fedb30dbbc/image.png" alt="">
A service named rook-ceph-mgr-dashboard-loadbalancer is now created with type LoadBalancer, and an external IP appears
(this test used a KakaoCloud LB for access)
<img src="https://velog.velcdn.com/images/sunny-10/post/bc8be0b5-7d11-4433-aeaf-efc6cc0d63aa/image.png" alt=""></p>
<p>Open <a href="http://PublicIP:7000/ceph-dashboard/">http://PublicIP:7000/ceph-dashboard/</a> in a browser 
The account is admin; the password can be retrieved with the command below</p>
<pre><code># retrieve the password 
$ kubectl get secret rook-ceph-dashboard-password -n rook-ceph -o yaml | grep &quot;password:&quot; | awk &#39;{print $2}&#39; | base64 --decode</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/92c3f009-c6fb-447b-adb0-f2b01d195dcd/image.png" alt="">
(the screen below was captured after creating the PVCs)
<img src="https://velog.velcdn.com/images/sunny-10/post/856d1d83-a4b7-4030-9495-e11d1f1de2c4/image.png" alt="">
<br></p>
<h3 id="6-ceph-filesystem-pvc-생성">6. Creating a (Ceph-filesystem) PVC</h3>
<p>Creating the PVC automatically creates and binds a matching PV</p>
<pre><code>$ vi cephfs-pvc01.yaml 
--- 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-filesystem

$ kubectl apply -f cephfs-pvc01.yaml 
$ kubectl get pvc  </code></pre><br>

<h3 id="7-ceph-block-pvc-생성">7. Creating a (ceph-block) PVC</h3>
<p>The biggest difference from CephFS is that this PVC sets accessModes to ReadWriteOnce; 
RBD cannot be configured as ReadWriteMany</p>
<pre><code>$ vi cephblock-pvc01.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ceph-block

$ kubectl apply -f cephblock-pvc01.yaml 
$ kubectl get pvc</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/c5ad34f2-5c82-4962-92f0-dcfa9682ff8c/image.png" alt="">
<br></p>
<h3 id="8-test-pod-생성-및-확인">8. Creating a test pod and verifying</h3>
<p>Create a pod to check that the volumes mount</p>
<pre><code>$ vi ceph-test-pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-test-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: ceph-filesystem-01
          mountPath: /data1
        - name: ceph-block-01
          mountPath: /data2
  volumes:
    - name: ceph-filesystem-01
      persistentVolumeClaim:
        claimName: cephfs-pvc01
        readOnly: false
    - name: ceph-block-01
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false

$ kubectl apply -f ceph-test-pod.yaml

(verify)                    
$ kubectl exec -ti ceph-test-pod -- /bin/bash                    
(inside the pod)$ df -h</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/49df5812-2cf4-44d0-8f4e-69b76b8d5e25/image.png" alt="">
The volumes are mounted exactly as configured</p>
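<p>A quick follow-up check (the commands are a hypothetical sketch) is to write through each mount and read it back:</p>
<pre><code># write a file into each mounted volume, then list them
$ kubectl exec -it ceph-test-pod -- sh -c &#39;echo hello &gt; /data1/t &amp;&amp; echo hello &gt; /data2/t&#39;
$ kubectl exec -it ceph-test-pod -- ls -l /data1 /data2</code></pre>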
<hr>
<p>(References)</p>
<ul>
<li><a href="https://itmaya.co.kr/wboard/view.php?wb=tech&amp;idx=35">https://itmaya.co.kr/wboard/view.php?wb=tech&amp;idx=35</a></li>
<li><a href="https://jay-chamber.tistory.com/entry/rook-ceph-trouble-shooting-OSD%EA%B0%80-%EC%83%9D%EC%84%B1%EB%90%98%EC%A7%80-%EC%95%8A%EC%95%84%EC%9A%94-Try-1">https://jay-chamber.tistory.com/entry/rook-ceph-trouble-shooting-OSD%EA%B0%80-%EC%83%9D%EC%84%B1%EB%90%98%EC%A7%80-%EC%95%8A%EC%95%84%EC%9A%94-Try-1</a></li>
<li><a href="https://rook.io/docs/rook/v1.9/ceph-teardown.html#zapping-devices">https://rook.io/docs/rook/v1.9/ceph-teardown.html#zapping-devices</a></li>
<li>Kubernetes in 24 Hands-on Steps (Rook-ceph) <a href="https://naver.me/I55O7bmc">https://naver.me/I55O7bmc</a></li>
<li><a href="https://computing-jhson.tistory.com/112">https://computing-jhson.tistory.com/112</a></li>
</ul>
]]></description>
        </item>
        <item>
            <title><![CDATA[Final Project Terraform Source]]></title>
            <link>https://velog.io/@sunny-10/%EC%B5%9C%EC%A2%85-%ED%94%84%EB%A1%9C%EC%A0%9D%ED%8A%B8-%ED%85%8C%EB%9D%BC%ED%8F%BC-%EC%86%8C%EC%8A%A4</link>
            <guid>https://velog.io/@sunny-10/%EC%B5%9C%EC%A2%85-%ED%94%84%EB%A1%9C%EC%A0%9D%ED%8A%B8-%ED%85%8C%EB%9D%BC%ED%8F%BC-%EC%86%8C%EC%8A%A4</guid>
            <pubDate>Fri, 23 Dec 2022 02:43:48 GMT</pubDate>
            <description><![CDATA[<p><a href="https://github.com/Sunny-1030/effective-eureka/tree/main/terraform">https://github.com/Sunny-1030/effective-eureka/tree/main/terraform</a></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[Miniproject(22.06.07~22.06.14)]]></title>
            <link>https://velog.io/@sunny-10/test</link>
            <guid>https://velog.io/@sunny-10/test</guid>
            <pubDate>Mon, 13 Jun 2022 19:14:23 GMT</pubDate>
            <description><![CDATA[<h1 id="miniproject-_4조">Miniproject _ Team 4</h1>
<p><code>Cloud infra orchestration
: tooling built on AWS</code></p>
<table>
<thead>
<tr>
<th>Team member</th>
<th>강재민</th>
<th>김효진</th>
<th>박민선</th>
<th>박지연</th>
<th>임재헌</th>
</tr>
</thead>
<tbody><tr>
<td>Role</td>
<td>Ansible setup</td>
<td>Jenkins setup</td>
<td>Terraform setup</td>
<td>Documentation</td>
<td>Ansible setup</td>
</tr>
</tbody></table>
<p><br/><br/><br/></p>
<h2 id="1-프로젝트-개요">1. Project Overview</h2>
<h3 id="1-1-workflow">1-1. WorkFlow</h3>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/239f2126-1bc3-4277-a19d-ef68b2484c6e/image.png" alt=""></p>
<h3 id="1-2-기술스택과-도구">1-2. Tech Stack and Tools</h3>
<table>
<thead>
<tr>
<th>Stack &amp; Tools</th>
<th></th>
</tr>
</thead>
<tbody><tr>
<td>jenkins</td>
<td><img src="https://velog.velcdn.com/images/sunny-10/post/fa7e8136-b5fc-4ba0-a2ff-2622c8ea3990/image.png" alt=""></td>
</tr>
<tr>
<td>Ansible</td>
<td><img src="https://velog.velcdn.com/images/sunny-10/post/692c7530-5745-4553-aac3-8b3ca4697c61/image.png" alt=""></td>
</tr>
<tr>
<td>Terraform</td>
<td><img src="https://velog.velcdn.com/images/sunny-10/post/26fbb57b-4cf9-43b5-b0ff-b6ca3f354416/image.png" alt=""></td>
</tr>
<tr>
<td>Docker</td>
<td><img src="https://velog.velcdn.com/images/sunny-10/post/7351466d-56c3-40b3-9515-108a04721761/image.png" alt=""></td>
</tr>
<tr>
<td>Kubernetes</td>
<td><img src="https://velog.velcdn.com/images/sunny-10/post/e499284c-bcf7-4471-8eee-b8bdf421f575/image.png" alt=""></td>
</tr>
<tr>
<td>Git</td>
<td><img src="https://velog.velcdn.com/images/sunny-10/post/8f3531a1-5c8b-4e95-9fc0-14e20f3ee89d/image.png" alt=""></td>
</tr>
</tbody></table>
<hr>
<h2 id="2-프로젝트-구성">2. Project Layout</h2>
<p><code>Initial architecture diagram</code>
<img src="https://velog.velcdn.com/images/sunny-10/post/192e1c6a-0863-469a-a713-a024c2674755/image.png" alt=""></p>
<table>
<thead>
<tr>
<th>Network</th>
<th>IPv4 CIDR</th>
</tr>
</thead>
<tbody><tr>
<td>VPC</td>
<td><code>10.0.0.0/16</code></td>
</tr>
<tr>
<td>public subnet</td>
<td><code>10.0.10.0/24</code></td>
</tr>
<tr>
<td>private subnet</td>
<td><code>10.0.30.0/24</code></td>
</tr>
</tbody></table>
<table>
<thead>
<tr>
<th></th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody><tr>
<td>Jenkins</td>
<td>13.125.140.70</td>
<td>10.0.10.206</td>
</tr>
<tr>
<td>Ansible</td>
<td>13.209.20.29</td>
<td>10.0.10.209</td>
</tr>
<tr>
<td>Docker</td>
<td>3.38.107.141</td>
<td>10.0.10.95</td>
</tr>
<tr>
<td>Kubernetes</td>
<td>3.38.192.209</td>
<td>10.0.10.248</td>
</tr>
</tbody></table>
<hr>
<h2 id="3-cicd를-위한-기본-인프라-구성">3. Base Infrastructure for CI/CD</h2>
<h3 id="3-1-kubernetes-cluster-구성">3-1. Building the Kubernetes Cluster</h3>
<table>
<thead>
<tr>
<th></th>
<th><code>kubeadm</code></th>
<th><code>kubelet</code></th>
<th><code>kubectl</code></th>
</tr>
</thead>
<tbody><tr>
<td><code>version</code></td>
<td><code>1.22.8</code></td>
<td><code>1.22.8</code></td>
<td><code>1.22.8</code></td>
</tr>
</tbody></table>
<br/>

<table>
<thead>
<tr>
<th>Item</th>
<th>Value</th>
</tr>
</thead>
<tbody><tr>
<td>Install method</td>
<td>Kubeadm</td>
</tr>
<tr>
<td>pod-network-cidr</td>
<td>172.16.0.0/16</td>
</tr>
</tbody></table>
<br/>

<h3 id="3-2-build-및-deployment-serverjenkins-ansible-docker-구성">3-2. Building the Build and Deployment Servers (Jenkins, Ansible, Docker)</h3>
<h4 id="3-2-1-기본-terraform-구성">3-2-1) Basic Terraform layout</h4>
<p><code>vi provider.tf</code> : the infrastructure provider type</p>
<pre><code>terraform {
  required_providers {
    aws = {
      source  = &quot;hashicorp/aws&quot;
      version = &quot;~&gt; 3.0&quot;
    }
  }
}

provider &quot;aws&quot; {
  region = &quot;ap-northeast-2&quot;
}</code></pre><br/>



<p><code>vi vpc.tf</code> : creates 1 availability zone, 2 public subnets, 1 private subnet</p>
<pre><code>module &quot;project1_vpc&quot; {
  source = &quot;terraform-aws-modules/vpc/aws&quot;

  name = &quot;project1_vpc&quot;

  cidr = &quot;10.0.0.0/16&quot;

  azs             = [&quot;ap-northeast-2a&quot;]
  public_subnets  = [&quot;10.0.10.0/24&quot;, &quot;10.0.20.0/24&quot;]
  private_subnets = [&quot;10.0.30.0/24&quot;]

  create_database_subnet_group = true

  create_igw = true

  enable_nat_gateway = true
  single_nat_gateway = true

}
</code></pre><br/>



<p><code>vi ec2.tf</code> : creates the 4 EC2 instances</p>
<pre><code>resource &quot;aws_key_pair&quot; &quot;project1_key&quot; {
  key_name   = &quot;project1_key&quot;
  public_key = file(&quot;/home/vagrant/.ssh/id_rsa.pub&quot;)
}

resource &quot;aws_instance&quot; &quot;jenkins&quot; {
  ami                    = &quot;ami-058165de3b7202099&quot;
  availability_zone      = module.project1_vpc.azs[0]
  instance_type          = &quot;t2.medium&quot;
  vpc_security_group_ids = [aws_security_group.all-sg.id]
  subnet_id              = module.project1_vpc.public_subnets[0]
  key_name               = aws_key_pair.project1_key.key_name

  tags = {
    Name = &quot;jenkins&quot;
  }
}


resource &quot;aws_instance&quot; &quot;ansible&quot; {
  ami                    = &quot;ami-058165de3b7202099&quot;
  availability_zone      = module.project1_vpc.azs[0]
  instance_type          = &quot;t2.micro&quot;
  vpc_security_group_ids = [aws_security_group.all-sg.id]
  subnet_id              = module.project1_vpc.public_subnets[0]
  key_name               = aws_key_pair.project1_key.key_name

  tags = {
    Name = &quot;ansible&quot;
  }
}

resource &quot;aws_instance&quot; &quot;docker&quot; {
  ami                    = &quot;ami-058165de3b7202099&quot;
  availability_zone      = module.project1_vpc.azs[0]
  instance_type          = &quot;t2.micro&quot;
  vpc_security_group_ids = [aws_security_group.all-sg.id]
  subnet_id              = module.project1_vpc.public_subnets[0]
  key_name               = aws_key_pair.project1_key.key_name

  tags = {
    Name = &quot;docker&quot;
  }
}


resource &quot;aws_instance&quot; &quot;k8s&quot; {
  ami                    = &quot;ami-058165de3b7202099&quot;
  availability_zone      = module.project1_vpc.azs[0]
  instance_type          = &quot;t2.medium&quot;
  vpc_security_group_ids = [aws_security_group.all-sg.id]
  subnet_id              = module.project1_vpc.public_subnets[0]
  key_name               = aws_key_pair.project1_key.key_name

  tags = {
    Name = &quot;k8s&quot;
  }
}
</code></pre><br/>



<p><code>vi sg.tf</code> : opens every port</p>
<pre><code>resource &quot;aws_security_group&quot; &quot;all-sg&quot; {
  name        = &quot;all-sg&quot;
  description = &quot;Allow all &quot;
  vpc_id      = module.project1_vpc.vpc_id

  ingress {
    cidr_blocks = [&quot;0.0.0.0/0&quot;]
    from_port   = 0
    to_port     = 0
    protocol    = &quot;-1&quot;
  }

  egress {
    cidr_blocks = [&quot;0.0.0.0/0&quot;]
    from_port   = 0
    protocol    = &quot;-1&quot;
    to_port     = 0
  }
}</code></pre><br/>



<h4 id="3-2-2-ansible구성">3-2-2) Ansible layout</h4>
<ul>
<li>bastionhost 
-- hosts.ini
-- playbook
   └ installJenkins.yml
   └ installDoker.yaml
   └ installk8s.yaml<br/>

</li>
</ul>
<h4 id="3-2-3-jenkins-build">3-2-3) Jenkins build</h4>
<pre><code>- hosts: jenkins_host

  tasks:
    - shell: sudo apt-get update
      ignore_errors: yes
    - shell: sudo apt install -y openjdk-11-jdk
    - shell: curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
    - shell: echo &quot;deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/&quot; | sudo tee /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
    - shell: sudo apt-get update
      ignore_errors: yes
    - command: apt install -y fontconfig jenkins
    - command: apt install -y maven</code></pre><br/>

<h4 id="3-2-4-docker-build">3-2-4) Docker build</h4>
<pre><code>- name: Docker VM Provisioning
  hosts: docker_host
  gather_facts: false

  tasks:
    - command: apt update
      # refresh the list of available packages and versions

    - command: apt install -y ca-certificates curl gnupg lsb-release
      # prerequisites for the docker install

    - command: apt install -y python3-pip
      # install pip for python3 and the dependencies needed to build python modules

    - shell: curl https://get.docker.com | sh
      # install docker

    - shell: usermod -aG docker ubuntu
      # add the ubuntu user to the docker group

    - pip:
        name:
          - docker
          - docker-compose
          # install docker and docker-compose via pip
</code></pre><br/>

<h4 id="3-2-5-ansible-build">3-2-5) ansible build</h4>
<pre><code>- name: Ansible VM Provisioning
  hosts: ansible_host
  gather_facts: false

  tasks:
    - command: apt update
    - command: apt install -y ca-certificates curl gnupg lsb-release
    - command: apt install -y python3-pip
    - shell: curl https://get.docker.com | sh
    - shell: usermod -aG docker ubuntu
    - pip:
        name:
          - docker
          - docker-compose
    - command: apt install -y ansible
    - command: apt install -y python3-pip
    - shell: sed -i &#39;s/PasswordAuthentication no/PasswordAuthentication yes/g&#39; /etc/ssh/sshd_config
    - shell: pip install openshift==0.11
    - shell: echo &#39;ubuntu:ubuntu&#39; | chpasswd
    - shell: sudo systemctl restart ssh
    - shell: mkdir /home/ubuntu/.kube
    - shell: curl -LO https://dl.k8s.io/release/v1.22.8/bin/linux/amd64/kubectl
    - shell: sudo install kubectl /usr/local/bin/
   #- shell: scp ~/.kube/config .kube/config</code></pre><br/>
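<p>The <code>sed</code> task above flips PasswordAuthentication in <code>/etc/ssh/sshd_config</code>. The substitution can be exercised in isolation on a canned config snippet (the two-line snippet is illustrative, not the real file):</p>
<pre><code># canned two-line stand-in for sshd_config
cfg='Port 22
PasswordAuthentication no'

# same substitution the playbook applies with sed -i on the real file
updated=$(printf '%s\n' "$cfg" | sed 's/PasswordAuthentication no/PasswordAuthentication yes/g')
echo "$updated"</code></pre>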

<h4 id="3-2-6-k8s-build">3-2-6) k8s build</h4>
<pre><code>- name: Control-Plane VM Provisioning
  hosts: controlplane_host
  gather_facts: false

  tasks:
    - command: apt update
    - command: apt install -y ca-certificates curl gnupg lsb-release
    - command: apt install -y python3-pip
    - shell: curl https://get.docker.com | sh
    - shell: usermod -aG docker ubuntu
    - pip:
        name:
          - docker
          - docker-compose
    - command: apt-get install -y apt-transport-https ca-certificates curl
    - shell: curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    - shell: echo &quot;deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main&quot; | sudo tee /etc/apt/sources.list.d/kubernetes.list
    - command: apt-get update
    - shell: sudo apt-get install kubeadm=1.22.8-00 kubelet=1.22.8-00 kubectl=1.22.8-00 -y
    - copy:
        src: &quot;/home/ubuntu/daemon.json&quot;
        dest: &quot;/etc/docker/&quot;
    - shell: sudo systemctl restart docker
    - shell: sudo systemctl daemon-reload &amp;&amp; sudo systemctl restart kubelet
    - shell: IPADDR=`ip addr | tail -n 8 | head -n 1 | cut -f 6 -d&#39; &#39; | cut -f 1 -d &#39;/&#39;`
    - shell: sudo kubeadm init --control-plane-endpoint &quot;{{ lookup(&#39;env&#39;, &#39;IPADDR&#39;) }}&quot; --pod-network-cidr 172.16.0.0/16 --apiserver-advertise-address &quot;{{ lookup(&#39;env&#39;, &#39;IPADDR&#39;) }}&quot;
    - shell: mkdir -p /home/ubuntu/.kube
    - shell: sudo cp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
    - shell: sudo chown ubuntu:ubuntu /home/ubuntu/.kube/config
    - fetch:
        src: &quot;/home/ubuntu/.kube/config&quot;
        dest: &quot;/home/ubuntu/.kube/config&quot;
        flat: yes
   #- shell: kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
    - shell: curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O
    - replace:
        path: /home/ubuntu/custom-resources.yaml
        regexp: 192.168
        replace: 172.16
   #- shell: kubectl create -f custom-resources.yaml</code></pre><h4 id="3-2-6-worker-node-build">3-2-7) worker node build</h4>
<pre><code>- name: Worker-Node VM Provisioning
  hosts: controlplane_host
  gather_facts: false

  tasks:
    - command: apt update
    - command: apt install -y ca-certificates curl gnupg lsb-release
    - command: apt install -y python3-pip
    - shell: curl https://get.docker.com | sh
    - shell: usermod -aG docker ubuntu
    - pip:
        name:
          - docker
          - docker-compose
    - command: apt-get install -y apt-transport-https ca-certificates curl
    - shell: curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    - shell: echo &quot;deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main&quot; | sudo tee /etc/apt/sources.list.d/kubernetes.list
    - command: apt-get update
    - shell: sudo apt-get install kubeadm=1.22.8-00 kubelet=1.22.8-00 kubectl=1.22.8-00 -y
    - copy:
        src: &quot;/home/ubuntu/daemon.json&quot;
        dest: &quot;/etc/docker/&quot;
    - shell: sudo systemctl restart docker
    - shell: sudo systemctl daemon-reload &amp;&amp; sudo systemctl restart kubelet</code></pre><pre><code>#! /bin/sh
sudo kubeadm join 10.0.10.248:6443 --token u3adz9.flbop6nslkaupqrq --discovery-token-ca-cert-hash sha256:70f23d516ea80a39c784d129bddb13d6f71a96865b97acac573054443183b355</code></pre><br/>
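<p>The control-plane play above derives IPADDR from <code>ip addr</code> with a tail/head/cut pipeline. Note that each <code>shell</code> task runs in its own shell, so a variable set in one task is not visible to the next; the later <code>lookup(&#39;env&#39;, &#39;IPADDR&#39;)</code> only resolves if IPADDR has been exported some other way. The cut steps themselves can be checked on a canned inet line (the address is illustrative):</p>
<pre><code># canned line from `ip addr`: four leading spaces, then the inet field
line='    inet 10.0.10.248/24 brd 10.0.10.255 scope global dynamic eth0'

# field 6 (space-delimited, counting the empty leading fields) is the CIDR;
# the second cut strips the /24 prefix length
IPADDR=$(printf '%s\n' "$line" | cut -f 6 -d' ' | cut -f 1 -d '/')
echo "$IPADDR"</code></pre>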

<hr>
<h2 id="4-cicd-구현">4. CI/CD implementation</h2>
<h3 id="4-1-kubernetes-cluster-pod-배포를-위한-ansible-playbook">4-1. Ansible Playbook for deploying Pods to the Kubernetes cluster</h3>
<h4 id="4-1-1-kubernetes-playbook">4-1-1) kubernetes playbook</h4>
<p><code>vi docker_build_and_push.yaml</code> : build and push an image to DockerHub</p>
<pre><code>- name: Docker Image Build and Push
  hosts: docker_host
  gather_facts: false

  tasks:
    - command: docker image build -t repush/cicdproject:&quot;{{ lookup(&#39;env&#39;, &#39;BUILD_NUMBER&#39;) }}&quot; ~/
    - command: docker login -u repush -p &quot;{{ lookup(&#39;env&#39;, &#39;TOKEN&#39;) }}&quot;
    - command: docker push repush/cicdproject:&quot;{{ lookup(&#39;env&#39;, &#39;BUILD_NUMBER&#39;) }}&quot;
    - command: docker logout</code></pre><ul>
<li>First command: builds the image; the version tag comes from a variable (the build number, so tags never collide)</li>
<li>Second command: logs in to DockerHub; the token comes from a variable</li>
<li>Third command: pushes the image with the same build-number tag</li>
<li>Fourth command: logs out
<br/><br/></li>
</ul>
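<p>The version tag used in the build and push commands is just the Jenkins build number substituted into the image name. A standalone sketch of that substitution (BUILD_NUMBER is normally injected by Jenkins into the job environment; it is set by hand here):</p>
<pre><code># Jenkins exports BUILD_NUMBER into the environment; simulate it
BUILD_NUMBER=7

# the lookup('env', 'BUILD_NUMBER') in the playbook resolves to the same value
IMAGE="repush/cicdproject:${BUILD_NUMBER}"
echo "$IMAGE"</code></pre>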
<p><code>vi kube_deploy.yaml</code> : k8s deployment and service</p>
<pre><code>- hosts: ansible_host
  gather_facts: no

  tasks:
    #- command: kubectl apply -f java-hello-world/kube_manifest/
    - name: Create Deployment                        # deploy the container
      k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: java-hello
            namespace: default
          spec:
            replicas: 6                              # number of pods
            selector:
              matchLabels:
                app: java-hello
            template:
              metadata:
                labels:
                  app: java-hello
              spec:
                containers:
                  - name: java-hello
                    image: &quot;repush/cicdproject:{{ lookup(&#39;env&#39;, &#39;BUILD_NUMBER&#39;) }}&quot;       # image pushed to DockerHub 
                    imagePullPolicy: Always
                    ports:
                      - containerPort: 8080           # containerPort
    - name: Create Service                            # deploy the service
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: java-hello-svc
            namespace: default
          spec:
            type: NodePort
            selector:
              app: java-hello
            ports:
              - port: 80
                targetPort: 8080
                nodePort: 31313                        # nodePort</code></pre><p><br/><br/></p>
<h3 id="4-2-docker-image-build를-위한-이미지용-dockerfile">4-2. Dockerfile for the Docker image build</h3>
<h4 id="4-2-1-dockerfile">4-2-1) Dockerfile</h4>
<p><code>vi Dockerfile</code></p>
<pre><code>FROM tomcat:9.0-jre11-openjdk                # base image: tomcat:9.0-jre11-openjdk

COPY webapp.war /usr/local/tomcat/webapps    # copy the war file from the current path into /usr/local/tomcat/webapps in the image</code></pre><br/>


<h3 id="4-3-cicd-구현을-위한-jenkins-job-구성">4-3. Jenkins Job configuration for CI/CD</h3>
<h4 id="4-3-1-사용된-플러그인">4-3-1) <strong>Plugins used</strong></h4>
<ul>
<li>Maven Integration plugin 3.19</li>
<li>Maven Invoker plugin 2.4</li>
<li>Publish Over SSH 1.24</li>
</ul>
<h4 id="4-3-2-java-project-build를-위한-jdk환경변수-세팅">4-3-2) <strong>JDK environment setup for the Java project build</strong></h4>
<p><em>Manage Jenkins - Global Tool Configuration - JDK</em>
<img src="https://velog.velcdn.com/images/skyvault05/post/fba8a09f-0b29-4bf4-88da-0cbde91497a3/image.png" alt=""></p>
<h4 id="4-3-3-java-project-build를-위한-maven환경변수-세팅">4-3-3) <strong>Maven environment setup for the Java project build</strong></h4>
<p><em>Manage Jenkins - Global Tool Configuration - Maven</em></p>
<p><img src="https://velog.velcdn.com/images/skyvault05/post/e372ae5a-f218-4f38-9777-081e28ac0f43/image.png" alt=""></p>
<h4 id="4-3-4-cicd-구현을-위한-junkins-job-설정">4-3-4) <strong>Jenkins Job settings for CI/CD</strong></h4>
<p><em>New Item - Pipeline</em></p>
<h5 id="1-dockerhub-로그인을-위한-로그인-토큰-설정"><strong>1) Login token setup for DockerHub login</strong></h5>
<p><em>General</em></p>
<ul>
<li>Default Value: the DockerHub account&#39;s token
<img src="https://velog.velcdn.com/images/skyvault05/post/9111f9a6-79cb-4ffa-921f-db10b3599626/image.png" alt=""></li>
</ul>
<h5 id="2-프로젝트의-브랜치-변화-감지를-위한-build-triggers-설정"><strong>2) Build Triggers setup to detect branch changes in the project</strong></h5>
<p><em>Build Triggers</em></p>
<ul>
<li>Polling setup that checks GitHub for push events every minute
<img src="https://velog.velcdn.com/images/skyvault05/post/5304c730-1b28-4ef8-9379-93b5a9e28745/image.png" alt=""></li>
</ul>
<h5 id="3-pipeline과-jenkins파일-구성"><strong>3) Pipeline and Jenkinsfile configuration</strong></h5>
<p><em>Pipeline - Definition - SCM</em></p>
<p><img src="https://velog.velcdn.com/images/skyvault05/post/f734cbf3-f65f-4cda-a1d0-a0ae31825f45/image.png" alt=""></p>
<p><em>Pipeline - Definition - Script Path</em></p>
<ul>
<li>Relative path of the jenkinsfile inside the GitHub repository</li>
</ul>
<p><img src="https://velog.velcdn.com/images/skyvault05/post/9a87b42d-1ed8-4346-b010-713fb62ed5c4/image.png" alt=""></p>
<p><em>Jenkinsfile layout</em></p>
<ul>
<li>Java Build stage: builds the Java project archive with Maven</li>
<li>Docker Image Build stage (via the remote Ansible and Docker servers, using the Publish Over SSH module): sends the files needed for the Docker image build to the Ansible server and on to the Docker server, then transfers and runs the Ansible playbooks that build the Docker image and deploy the project.</li>
</ul>
<p><code>jenkinsfile</code></p>
<pre><code>pipeline {
    agent any

    tools {
        // Install the Maven version configured as &quot;M2_HOME&quot; and add it to the path.
        maven &quot;M2_HOME&quot;
    }

    stages {
        stage(&#39;Java Build&#39;) {
            steps {
                // Run Maven on a Unix agent.
                sh &quot;mvn -Dmaven.test.failure.ignore=true clean package -f pom.xml&quot;
            }
        }
        stage(&#39;Docker Image Build With Remote Ansible Server AND Remote Docker Server Using Publish Over SSH Module&#39;) {
            steps {
                sshPublisher(publishers: [sshPublisherDesc(configName: &#39;ansible-host&#39;, transfers: [sshTransfer(cleanRemote: false, excludes: &#39;&#39;, execCommand: &#39;&#39;, execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: &#39;[, ]+&#39;, remoteDirectory: &#39;java-hello-world&#39;, remoteDirectorySDF: false, removePrefix: &#39;webapp/target/&#39;, sourceFiles: &#39;webapp/target/webapp.war&#39;), sshTransfer(cleanRemote: false, excludes: &#39;&#39;, execCommand: &#39;&#39;, execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: &#39;[, ]+&#39;, remoteDirectory: &#39;java-hello-world&#39;, remoteDirectorySDF: false, removePrefix: &#39;docker/&#39;, sourceFiles: &#39;docker/Dockerfile&#39;), sshTransfer(cleanRemote: false, excludes: &#39;&#39;, execCommand: &#39;&#39;&#39;scp java-hello-world/Dockerfile 13.125.234.12:~/
scp java-hello-world/webapp.war 13.125.234.12:~/
TOKEN=`echo $TOKEN` BUILD_NUMBER=`echo $BUILD_NUMBER` ansible-playbook java-hello-world/docker_build_and_push.yaml
BUILD_NUMBER=`echo $BUILD_NUMBER` ansible-playbook java-hello-world/kube_deploy.yaml&#39;&#39;&#39;, execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: &#39;[, ]+&#39;, remoteDirectory: &#39;java-hello-world&#39;, remoteDirectorySDF: false, removePrefix: &#39;playbook/&#39;, sourceFiles: &#39;playbook/*.yaml&#39;)], usePromotionTimestamp: false, useWorkspaceInPromotion: false, verbose: true)])
            }
        }
    }
}</code></pre><p><br/><br/></p>
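<p>The single-line <code>sshPublisher</code> call above is hard to read. Stripped to the arguments that matter, its shape is roughly the following (a simplified restatement for readability, not a drop-in replacement; defaults such as <code>execTimeout</code> are omitted):</p>
<pre><code>sshPublisher(publishers: [sshPublisherDesc(
    configName: 'ansible-host',
    transfers: [
        // war file: webapp/target/webapp.war -&gt; ~/java-hello-world/webapp.war
        sshTransfer(sourceFiles: 'webapp/target/webapp.war',
                    removePrefix: 'webapp/target/',
                    remoteDirectory: 'java-hello-world'),
        // Dockerfile: docker/Dockerfile -&gt; ~/java-hello-world/Dockerfile
        sshTransfer(sourceFiles: 'docker/Dockerfile',
                    removePrefix: 'docker/',
                    remoteDirectory: 'java-hello-world'),
        // playbooks, plus the remote commands that run them
        sshTransfer(sourceFiles: 'playbook/*.yaml',
                    removePrefix: 'playbook/',
                    remoteDirectory: 'java-hello-world',
                    execCommand: '''scp java-hello-world/Dockerfile 13.125.234.12:~/
scp java-hello-world/webapp.war 13.125.234.12:~/
TOKEN=`echo $TOKEN` BUILD_NUMBER=`echo $BUILD_NUMBER` ansible-playbook java-hello-world/docker_build_and_push.yaml
BUILD_NUMBER=`echo $BUILD_NUMBER` ansible-playbook java-hello-world/kube_deploy.yaml''')
    ])])</code></pre>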
<h2 id="5-구현-결과">5. Implementation results</h2>
<h3 id="5-1-jenkins-build">5-1. Jenkins build</h3>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/2bafaa74-b194-48a8-ac83-723e3466d022/image.png" alt=""></p>
<h3 id="5-2-로컬에서-접근">5-2. Access from local</h3>
<p><img src="https://velog.velcdn.com/images/repush/post/d05b8532-8703-41f9-9712-0f45b4d57360/image.png" alt=""></p>
<h3 id="5-3-웹사이트에서-접근">5-3. Access from the website</h3>
<p><img src="https://velog.velcdn.com/images/repush/post/67da4a67-d3d5-4810-9a41-87a3fe292efd/image.png" alt=""></p>
<h2 id="6-결론">6. Conclusion</h2>
<h3 id="6-1-as-is">6-1. As-is</h3>
<ul>
<li><p>In this project, <code>GitOps</code>-style <code>CI/CD</code> through <code>Jenkins</code> worked.</p>
</li>
<li><p>Build <code>automation</code> with <code>Terraform</code> and <code>Ansible</code> was set up to a reasonable degree.</p>
</li>
</ul>
<br/>

<h3 id="62-to-be">6.2. To-be</h3>
<ul>
<li>A few commands in the Ansible playbooks failed to run, which was disappointing.</li>
<li>The playbooks lean on <code>shell</code> and <code>command</code>; using proper <code>modules</code> would have produced a more polished result.</li>
<li>Deploying the web service with a load balancer and auto scaling worked in the <code>GUI</code>, but not on the infrastructure built with <code>IaC</code> at the end.</li>
<li>The security group is wide open for now; a more carefully scoped security group would be an improvement.</li>
<li>Auto-scaling worker nodes so they are added automatically also worked in the <code>GUI</code> but not on the <code>IaC</code> infrastructure.</li>
<li>Auto-scaling the bastion host behind a network load balancer so ssh goes through a domain-name endpoint also worked in the <code>GUI</code>, but there was not enough time to build it with <code>IaC</code>.</li>
</ul>
]]></description>
        </item>
        <item>
            <title><![CDATA[Building CI/CD to realize DevOps (22.06.07)]]></title>
            <link>https://velog.io/@sunny-10/22.06.07</link>
            <guid>https://velog.io/@sunny-10/22.06.07</guid>
            <pubDate>Tue, 07 Jun 2022 06:46:27 GMT</pubDate>
            <description><![CDATA[<h1 id="argocd">ArgoCD</h1>
<p>An implementation of GitOps</p>
<p>Argo official site:
<a href="https://argo-cd.readthedocs.io/en/stable/">https://argo-cd.readthedocs.io/en/stable/</a></p>
<h4 id="1-install-argo-cd">1) Install Argo CD</h4>
<p>Prerequisites for installing Argo (a kubernetes cluster is required):</p>
<ul>
<li>Installed kubectl command-line tool.</li>
<li>Have a kubeconfig file (default location is ~/.kube/config).<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></pre><img src="https://velog.velcdn.com/images/sunny-10/post/61767e06-aec4-4f05-a4b4-21067cb414b0/image.PNG" alt=""></li>
</ul>
<h4 id="2-download-argo-cd-cli">2) Download Argo CD CLI</h4>
<pre><code>curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64

chmod +x /usr/local/bin/argocd</code></pre><p>CLI install link: 
<a href="https://argo-cd.readthedocs.io/en/stable/cli_installation/">https://argo-cd.readthedocs.io/en/stable/cli_installation/</a></p>
<pre><code>kubectl get all -n argocd
# check the argocd-server Cluster-IP</code></pre><p>External access must be configured.
<img src="https://velog.velcdn.com/images/sunny-10/post/27f3305a-227c-4d79-8e6f-9bcc9df1820c/image.PNG" alt=""></p>
<h4 id="3-argo-cd-서비스-노출">3) Expose the Argo CD service</h4>
<ul>
<li>Change the service type to LoadBalancer<pre><code>kubectl patch svc argocd-server -n argocd -p &#39;{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}&#39;
</code></pre></li>
</ul>
<pre><code>kubectl describe svc argocd-server -n argocd</code></pre>
<h4 id="load-balancer-port-확인">Check the Load Balancer port</h4>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/4ec8b3c9-3615-4bc1-986f-7aae01182714/image.PNG" alt=""></p>
<h4 id="4-웹-브라우저에서-login">4) Log in from a web browser</h4>
<p>In a browser, open [master_IP]:[port], e.g. 192.168.59.10:31958. It redirects to https.</p>
<p>The web UI login account is admin; retrieve the password and enter it:</p>
<pre><code>kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo</code></pre>
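<p>The secret&#39;s <code>data.password</code> field is base64-encoded, which is why the command pipes through <code>base64 -d</code>. The decode step in isolation (the password value here is made up):</p>
<pre><code># simulate the encoded value kubernetes stores in the secret
encoded=$(printf 'P@ssw0rd' | base64)

# jsonpath returns that encoded string; base64 -d recovers the password
password=$(printf '%s' "$encoded" | base64 -d)
echo "$password"</code></pre>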
<p>The printed string is the password.</p>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/2f83bc7a-d04a-4689-a868-67008faeb379/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/ad26c7d7-1fe7-4b01-9a67-030bbe6b3f43/image.PNG" alt=""></p>
<h4 id="5-cli도-argocd-로그인">5) Log in with the argocd CLI as well</h4>
<p>Type <code>argocd</code> and press enter to check the CLI, then log in:</p>
<pre><code>argocd login --insecure &lt;ARGOCD_SERVER_DOMAIN&gt;:&lt;PORT&gt;
# e.g. 192.168.100.100:65001</code></pre>
<p>login id: admin
password: (the string retrieved above)</p>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/31d5e93c-21b1-4d6f-9a19-447fe0a8bbd7/image.PNG" alt=""></p>
<h4 id="6-application-생성하기---cli">6) Create an application - CLI</h4>
<pre><code>argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace default</code></pre>
<ul>
<li>source: (git) --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook</li>
<li>destination: the local kubernetes cluster, i.e. --dest-server https://kubernetes.default.svc --dest-namespace default</li>
</ul>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/5439a4bf-4e1b-4b34-aca6-b8f3f4aed58f/image.PNG" alt=""></p>
<h4 id="7-cli로-application-확인">7) Check the application from the CLI</h4>
<pre><code>argocd app get guestbook</code></pre>
<p>In the web UI the application shows yellow (out of sync). Synchronize from the CLI:</p>
<pre><code>argocd app sync guestbook
argocd app get guestbook</code></pre>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/d995350c-f1bd-430a-8aa4-e0c79994ee91/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/52925320-8fda-4b6c-b80e-fec7fdebb9be/image.PNG" alt=""></p>
<p>Try deleting the guestbook application in the ArgoCD web UI, then check kubernetes.</p>
<h4 id="8-guestbook-application을-web-ui에서-생성하기">8) Create the guestbook application in the web UI</h4>
<p>Synchronize from the web UI.</p>
<p>git ------------------ kubernetes cluster<br>(connected by the application)</p>
<p>Log in to git and create a repository.</p>
<p>guestbook-ui-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
      - image: gcr.io/heptio-images/ks-guestbook-demo:0.2
        name: guestbook-ui
        ports:
        - containerPort: 80</code></pre>
<p>guestbook-ui-svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: guestbook-ui</code></pre>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/46028ac5-29a5-4d09-a0e3-57333c7ca862/image.PNG" alt=""></p>
<p>Register them under /guestbook/ in the repository.</p>
<h4 id="9-git-repo와-kubernetes-cluster를-연동하는-application을-생성해보기">9) Create an application linking a git repo to the kubernetes cluster</h4>
<p>source: git URL<br>
https: <a href="https://github.com/soyoung-2020/argocd.git">https://github.com/soyoung-2020/argocd.git</a><br>
ssh: <a href="mailto:git@github.com">git@github.com</a>:Sunny-1030/argocd.git<br>
PATH: the subdirectory name under the repository, e.g. guestbook; use <code>.</code> when the yaml files sit directly under the repository root<br>
project: default</p>
<p>destination:
<a href="https://kubernetes.default.svc">https://kubernetes.default.svc</a><br>
namespace: default (a different name can be used, but it must be created in advance)</p>
<p>application name : myguestbook</p>
<p>Synchronize myguestbook with the kubernetes cluster</p>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/00fb5e3a-7a2e-4ca6-801b-03cd648c6eed/image.PNG" alt=""></p>
<h4 id="10-git-repository---guestbook-ui-deploymentyaml">10) git repository - guestbook-ui-deployment.yaml</h4>
<p>First change:
replicas: 1    ---&gt;   2<br>on a local PC: commit &gt; push</p>
<p>Here, edit directly on the git site</p>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/a8a57ca7-86c4-4607-a390-3e1cab7a8004/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/e4a17846-18d1-42cc-a94c-5118d323fff8/image.PNG" alt=""></p>
<h4 id="11-argocd-webui에서--git소스live소스를-diff-확인---동기화sync">11) In the argoCD web UI, diff the git source against the live source &gt; synchronize (sync)</h4>
<pre><code>kubectl get all -n default</code></pre><p>In the Argocd web UI, open &quot;HISTORY&quot; at the top of the application view</p>
<p>Besides the current revision, try rolling back to the past (2 replicas, 1 replica); syncing brings it back to the present (2 replicas).
<img src="https://velog.velcdn.com/images/sunny-10/post/39a65546-d107-452b-9383-bc424b79d2ac/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/a3c7b55d-1503-4884-be78-2d7b546d1b8b/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/f9a5b933-cc2a-4ace-9b66-920b52755600/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/46ea6ad1-727b-4efb-bb33-0c4f874b07ad/image.PNG" alt=""></p>
<p>Rolling back returns to the previous state (1 replica); syncing restores the original state (2 replicas)</p>
<h4 id="12-kubernetes-manifest-를-동기화-해보자">12) Let&#39;s synchronize a kubernetes manifest.</h4>
<p>Create a repository in your own git account</p>
<p>nginx-svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type:  LoadBalancer    # service type
  ports:
  - port: 8080       # service port
    targetPort: 80   # target, i.e. the pod&#39;s port
    protocol: TCP
    name: http
  selector:
    app: nginx</code></pre><p>nginx-deployment.yaml</p>
<pre><code>apiVersion: apps/v1           # kubernetes api version
kind: Deployment              # kind of object to create
metadata:                
  name: nginx-deployment      # name of the deployment
  labels:
    app: nginx                # label
spec:                         # deployment spec
  replicas: 3                 # 3 pods
  selector:                   # how the deployment finds the pods it manages
    matchLabels:     
      app: nginx
  template:
    metadata:
      labels:                 # pod labels
        app: nginx
    spec:
      containers:             # container settings
      - name: nginx          
        image: nginx:1.14.2
        ports:
        - containerPort: 80</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/267151a3-716f-47ab-a00c-4b9ca4b98554/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/c0b1559e-8d50-4213-beac-eb4eaf812b3a/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/179969d5-1d02-474f-a329-1ad6611d1704/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/7e47f2bd-6109-45dd-aa93-4f40cb52ea50/image.PNG" alt=""></p>
<h4 id="13-templates-이용하는-방법----helm-helm-chart">13) Using templates - helm (helm chart)</h4>
<p>Add an application:
repository: git -&gt; helm</p>
<p>repository: <a href="https://github.com/argoproj/argocd-example-apps.git">https://github.com/argoproj/argocd-example-apps.git</a>
path: apps
chart: once the repository is recognized, the list appears automatically. Select Chart.yaml</p>
<p>At the bottom there is a helm settings section: values: values.yaml (or copy values.yaml and adjust the values you want to change)</p>
<p>repository
-HTTPS git connection
-SSH git connection
    generate a key pair
    github: register the public key
    argocd: register the private key and the repository -&gt; used when connecting the application</p>
<hr>
]]></description>
        </item>
        <item>
            <title><![CDATA[Building CI/CD to realize DevOps (22.06.03)]]></title>
            <link>https://velog.io/@sunny-10/22.06.03</link>
            <guid>https://velog.io/@sunny-10/22.06.03</guid>
            <pubDate>Mon, 06 Jun 2022 06:16:10 GMT</pubDate>
            <description><![CDATA[<h1 id="cicd">CICD</h1>
<p>Code --&gt; Kubernetes</p>
<ul>
<li>Git/GitHub</li>
<li>Jenkins</li>
<li>Ansible</li>
<li>Docker Image</li>
<li>Code(Java) - Tomcat</li>
</ul>
<h2 id="jenkins-설치">Jenkins 설치</h2>
<pre><code>sudo apt install openjdk-11-jdk</code></pre><pre><code>curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
    /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null</code></pre><pre><code>echo &quot;deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
    https://pkg.jenkins.io/debian-stable binary/&quot; | sudo tee \
    /etc/apt/sources.list.d/jenkins.list &gt; /dev/null</code></pre><pre><code>sudo apt-get update
sudo apt-get install fontconfig jenkins</code></pre><pre><code>systemctl status jenkins</code></pre><pre><code>http://X.X.X.X:8080</code></pre><pre><code>sudo cat /var/lib/jenkins/secrets/initialAdminPassword</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/64d89438-43f5-47b3-8485-fa58e454229d/image.PNG" alt="">
<img src="https://velog.velcdn.com/images/sunny-10/post/2371b968-796f-49da-8cfc-e2df004d9586/image.PNG" alt=""></p>
<p>You can confirm the connection.</p>
<h4 id="수업-예제test">Class example (test)</h4>
<p>1) New item -&gt; Freestyle Project -&gt; build
<img src="https://velog.velcdn.com/images/sunny-10/post/be81dc6d-1cdc-41c4-a7a7-7cf35237ae32/image.PNG" alt="">
2) apply -&gt; save, then Build Now and check the result (console output)
<img src="https://velog.velcdn.com/images/sunny-10/post/b51f47e1-905d-483c-8cf6-df67c4560fb2/image.PNG" alt=""></p>
<h2 id="maven">Maven</h2>
<p>Maven: a build tool for Java projects
Build phases:</p>
<ol>
<li>validate: check that the required information is present</li>
<li>compile: compile the source code</li>
<li>test: unit-test the compiled code</li>
<li>package: produce a JAR/WAR file</li>
<li>verify: run integration tests</li>
<li>install: deploy to the local repository (~/.m2/repository)</li>
<li>deploy: deploy to a remote repository</li>
</ol>
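<p>Which archive the <code>package</code> phase produces is controlled by the <code>packaging</code> element in <code>pom.xml</code>. A minimal illustrative pom (the coordinates are made up; the real project&#39;s pom contains more):</p>
<pre><code>&lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot;&gt;
  &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;
  &lt;groupId&gt;com.example&lt;/groupId&gt;      &lt;!-- illustrative coordinates --&gt;
  &lt;artifactId&gt;webapp&lt;/artifactId&gt;
  &lt;version&gt;1.0-SNAPSHOT&lt;/version&gt;
  &lt;packaging&gt;war&lt;/packaging&gt;          &lt;!-- package phase emits webapp.war --&gt;
&lt;/project&gt;</code></pre>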
<p>Source code</p>
<pre><code>git clone https://github.com/Sunny-1030/source-java-maven-hello-world  

# forked from the instructor&#39;s GitHub</code></pre><pre><code>sudo apt install maven </code></pre><pre><code>mvn clean package  # produces the war file</code></pre><p>System environment configuration
<code>~/.zshrc</code> or <code>~/.bashrc</code></p>
<pre><code>JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
M2_HOME=/usr/share/maven
M2=$M2_HOME/bin

PATH=$PATH:$JAVA_HOME:$M2:$M2_HOME</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/7bc069ba-03ff-4429-9bdf-722bf1018c24/image.PNG" alt=""></p>
<pre><code>source ~/.zshrc</code></pre><p>or</p>
<pre><code>source ~/.bashrc</code></pre><h4 id="플러그인-설치">Plugin installation</h4>
<ul>
<li>Manage Jenkins -&gt; plugin manager</li>
<li>search for maven under &quot;available&quot;</li>
<li>select maven invoker &amp; integration, then</li>
<li>install without restart
<img src="https://velog.velcdn.com/images/sunny-10/post/a36ea36f-8e1b-4d68-9fb5-994071d4986f/image.PNG" alt="">
Under New Item you can now see the maven project type
<img src="https://velog.velcdn.com/images/sunny-10/post/943d1595-1dd5-415b-b812-07f0fea24d6f/image.PNG" alt=""></li>
</ul>
<h4 id="global-tool-설정">Global Tool settings</h4>
<p>Select Global Tool Configuration under Manage Jenkins</p>
<p>1) JDK settings
name: JAVA_HOME
location: /usr/lib/jvm/java-11-openjdk-amd64/
<img src="https://velog.velcdn.com/images/sunny-10/post/849e0632-0ff0-42ab-be22-7f5ea64919eb/image.PNG" alt="">
2) Maven settings
name: M2_HOME
location: /usr/share/maven
<img src="https://velog.velcdn.com/images/sunny-10/post/82629026-289b-46be-8b03-750d140b9381/image.PNG" alt=""></p>
<h4 id="maven-test">maven test</h4>
<p>1) Select git as the source code
<a href="https://github.com/Sunny-1030/source-java-maven-hello-world">https://github.com/Sunny-1030/source-java-maven-hello-world</a>
<img src="https://velog.velcdn.com/images/sunny-10/post/b263e9bf-2d4c-4ebe-9094-d4d955280c9a/image.PNG" alt=""></p>
<p>2) Define the branch 
*/main
<img src="https://velog.velcdn.com/images/sunny-10/post/7e5333af-cdd5-4608-a8a4-f30cb46d9ce2/image.PNG" alt=""></p>
<p>3) Set the build Goals
clean package
<img src="https://velog.velcdn.com/images/sunny-10/post/a8ad8605-40bb-46b9-bdf4-8a4d0541b20c/image.PNG" alt=""></p>
<h4 id="credentials-구성-가능">Credentials can be configured</h4>
<p>Select manage credentials, jenkins-global;
credentials can be added there
<img src="https://velog.velcdn.com/images/sunny-10/post/e5486a0d-1f78-4fc3-b665-07a402356342/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/cf3d74f8-a1da-484f-880c-366a7031e0be/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/f38cc4fd-9d1d-4b5e-a96b-9a1ed8ff8b86/image.PNG" alt=""></p>
<h2 id="tomcat">Tomcat</h2>
<pre><code>sudo apt install tomcat9 tomcat9-admin</code></pre><pre><code>systemctl status tomcat9</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/5ba32e27-92b4-41ed-9d1a-7bee54c354a0/image.PNG" alt="">
Access from a browser at 192.168.59.12:8080
<img src="https://velog.velcdn.com/images/sunny-10/post/8ce78f39-5b3b-47f3-99bc-713f45d841ac/image.PNG" alt=""></p>
<p>docBase</p>
<pre><code>/var/lib/tomcat9/webapps/ROOT</code></pre><h3 id="tomcat-admin-management">Tomcat Admin Management</h3>
<p>Access the administrator page:
an ID and password must be set before you can log in</p>
<pre><code>http://192.168.59.11:8080/manager/html</code></pre><p>Admin Mgmt account/password</p>
<p><code>/etc/tomcat9/tomcat-users.xml</code></p>
<pre><code class="language-xml">&lt;tomcat-users xmlns=&quot;http://tomcat.apache.org/xml&quot;
              xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;
              xsi:schemaLocation=&quot;http://tomcat.apache.org/xml tomcat-users.xsd&quot;
              version=&quot;1.0&quot;&gt;
        &lt;role rolename=&quot;manager-gui&quot;/&gt;
        &lt;role rolename=&quot;manager-script&quot;/&gt;
        &lt;role rolename=&quot;manager-jmx&quot;/&gt;
        &lt;role rolename=&quot;manager-status&quot;/&gt;
        &lt;user username=&quot;admin&quot; password=&quot;P@ssw0rd&quot; roles=&quot;manager-gui, manager-script, manager-jmx, manager-status&quot;/&gt;
&lt;/tomcat-users&gt;</code></pre>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/be1e5bf8-4b52-4769-9161-340f1709c6de/image.PNG" alt="">
After configuring admin, restart tomcat9</p>
<pre><code>sudo systemctl restart tomcat9</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/aef2e2fa-1918-401f-9d36-f71a072d2697/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/bc378d89-c3dd-47ca-8e19-77a4f1e0d338/image.PNG" alt="">
The admin page can now be accessed with the configured id and pw
<img src="https://velog.velcdn.com/images/sunny-10/post/660b929f-74dd-45dc-9d51-6f5c1916692b/image.PNG" alt=""></p>
<h2 id="ansible-with-docker">Ansible with Docker</h2>
<p>Manage Docker hosts from Ansible using the <code>docker_*</code> modules</p>
<pre><code>sudo apt install python3-pip</code></pre><pre><code>sudo pip3 install docker</code></pre><p>Install the docker engine.
Reference link: <a href="https://docs.docker.com/engine/install/ubuntu/">https://docs.docker.com/engine/install/ubuntu/</a>
Grant the user permissions as well.</p>
<pre><code>id -a 
docker ps

# check permissions; reconnect if they are not applied</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/96883697-34ea-4967-9c78-1e853da3f8ac/image.PNG" alt=""></p>
<p>Move the webapp.war file to the docker host</p>
<pre><code>cp webapp.war /tmp
scp /tmp/webapp.war 192.168.59.12:/tmp</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/e606ba5a-c0bb-4672-ab47-2761cb4328f5/image.PNG" alt=""></p>
<p>Deploy on the Docker (vm)</p>
<pre><code>cp /tmp/webapp.war .

vi Dockerfile   # create the Dockerfile; see the image below

docker build -t myapp .
docker run -d -p 8080:8080 myapp
</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/4daac39d-34af-47f4-a278-f471a2acf896/image.PNG" alt=""></p>
<p>Confirm access to the webapp</p>
<pre><code>curl localhost:808/webapp/  # set to port 808 because the port was already in use</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/893173b2-cb9b-4327-b21c-e4bb83986dbe/image.PNG" alt=""></p>
<p>Clean up the current docker containers/images for the jenkins automation</p>
<pre><code>docker ps
docker rm -f xx

docker images
docker rmi xx</code></pre><blockquote>
<p>If the jenkins and tomcat ports collide,
change the jenkins port:
sudo vi /etc/default/jenkins
sudo vi /etc/init.d/jenkins
the port must be changed in both files
sudo /etc/init.d/jenkins restart</p>
</blockquote>
<h3 id="jenkins를-통한-docker-자동화">docker automation through jenkins</h3>
<p>Manage Jenkins -&gt; plugin manager
available -&gt; search for artifact
install Publish Over SSH</p>
<p>Configure a new Item (maven project)
(set up in the same format as before)
post-build actions:
send build artifacts over SSH 
<code>save for a moment to set up the server</code></p>
<p>Manage Jenkins -&gt; Configure System
(at the very bottom) Publish over SSH
add an SSH Server</p>
<ul>
<li>name
docker-host</li>
<li>hostname</li>
</ul>
<p>192.168.59.12</p>
<ul>
<li>username
vagrant</li>
<li>Remote Directory
defaults if left unset
<img src="https://velog.velcdn.com/images/sunny-10/post/93846b9b-54e5-4f60-abd2-1f98bfcda0b3/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/0f65f439-1d48-43e1-9e11-8a2d49a2b9bc/image.PNG" alt=""></li>
</ul>
<p>Clicking Advanced reveals additional options;
here, simply enter the password.
<img src="https://velog.velcdn.com/images/sunny-10/post/128352f1-b63c-4fd0-9ec8-92e1c7d810e4/image.PNG" alt="">
Verify with Test Configuration.
<img src="https://velog.velcdn.com/images/sunny-10/post/855e1709-888c-44f8-9ac5-167b4ebfe78b/image.PNG" alt=""></p>
<p><code>Return to the job configuration</code>
The newly added server now appears in the server settings.
<img src="https://velog.velcdn.com/images/sunny-10/post/7f88098b-a034-4ea7-ad35-d7bc5fa98f31/image.PNG" alt=""></p>
<p>transfers
-source files
webapp/target/webapp.war</p>
<p>-remote directory
myweb
(the home directory if left empty)
<img src="https://velog.velcdn.com/images/sunny-10/post/78936a40-7937-410f-a3cf-716b92b6240d/image.PNG" alt=""></p>
<p>Apply -&gt; Save</p>
<p>Build Now</p>
<p>Check on the docker (VM):
webapp has arrived.
Note, though, that the full webapp/target/webapp.war path is copied as-is;
it works fine, but the nested path is tedious to type.
<img src="https://velog.velcdn.com/images/sunny-10/post/1875bb15-cfa7-473c-80fc-0d8e23178742/image.PNG" alt=""></p>
<p>This is where Remove prefix helps: enter the path to strip,
and it is removed from the destination path during the copy.
<img src="https://velog.velcdn.com/images/sunny-10/post/8d034b81-0a62-430e-950f-3134658104d9/image.PNG" alt="">
The webapp/target prefix disappears cleanly, leaving webapp.war right at the top.
<img src="https://velog.velcdn.com/images/sunny-10/post/7502e83e-2236-41e0-89e5-2b836c33024a/image.PNG" alt=""></p>
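<p>The Remove prefix behavior can be mimicked locally with plain cp (hypothetical paths; runnable anywhere GNU coreutils is available):</p>
<pre><code>cd &quot;$(mktemp -d)&quot;
mkdir -p webapp/target
touch webapp/target/webapp.war

# without Remove prefix: the full relative path is recreated at the destination
mkdir -p with-prefix
cp --parents webapp/target/webapp.war with-prefix/

# with &quot;Remove prefix: webapp/target&quot;: only the file itself lands there
mkdir -p no-prefix
cp webapp/target/webapp.war no-prefix/</code></pre>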
<h3 id="실습">(Hands-on)</h3>
<pre><code>cd source-java-maven-hello-world</code></pre><pre><code>vi Dockerfile

FROM tomcat:9.0-jre11-openjdk

COPY webapp.war /usr/local/tomcat/webapps</code></pre><pre><code>git add .

git commit -m &#39;add dockerfile&#39;</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/473a5c39-9bed-4c6b-8a50-f51bc222e1b2/image.PNG" alt=""></p>
<pre><code>git push

name: Sunny-1030
password: enter the issued token</code></pre><p>Check the Dockerfile on GitHub
<img src="https://velog.velcdn.com/images/sunny-10/post/ae81b040-419f-4dca-97ee-4ccdf0c22a13/image.PNG" alt=""></p>
<p>Back in the Deploy-to-docker configuration,
click Add Transfer Set</p>
<ul>
<li><p>source file
Dockerfile</p>
</li>
<li><p>remove</p>
</li>
</ul>
<ul>
<li>remote directory
myweb
Apply -&gt; Save -&gt; Build Now
<img src="https://velog.velcdn.com/images/sunny-10/post/eb392e84-0c60-4de9-8dcd-72ee542075da/image.PNG" alt="">
The file can be seen moving from the jenkins (VM) to the docker (VM).
<img src="https://velog.velcdn.com/images/sunny-10/post/45b1c3af-cf47-4620-93a8-c95399bc84f5/image.PNG" alt=""></li>
</ul>
<blockquote>
<p>When debugging a failure,
to see detailed information in the console output:
Configure -&gt; ssh server -&gt; Advanced
and select Verbose output
<img src="https://velog.velcdn.com/images/sunny-10/post/e3720553-9e22-4ff2-ad66-8276ce3c5166/image.PNG" alt=""></p>
</blockquote>
<p>exec command</p>
<pre><code>cd /home/vagrant/myweb
docker build -t myweb .
docker run -d -p 8080:8080 myweb</code></pre><p>Apply -&gt; Save
<img src="https://velog.velcdn.com/images/sunny-10/post/6a0ee3d6-3cd6-446c-b6a3-104ceb34633e/image.PNG" alt=""></p>
<p>Verify the docker build &amp; deploy</p>
<pre><code>docker image

docker images

curl localhost:8080/webapp/</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/a23af096-a4a3-4d4d-9c42-a6ae087a7f1c/image.PNG" alt=""></p>
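<p>Note that the exec command above fails on a rerun, because the old container still holds port 8080. A re-runnable sketch for the Docker host (the container name is illustrative) could be:</p>
<pre><code>cd /home/vagrant/myweb
docker rm -f myweb 2&gt;/dev/null || true   # ignore the error if no old container exists
docker build -t myweb .
docker run -d --name myweb -p 8080:8080 myweb</code></pre>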
<blockquote>
<p>When setting up build triggers,
Poll SCM with <code>* * * * *</code> (every minute) raises a warning;
Jenkins suggests <code>H * * * *</code> instead.
H stands for Hash and adds a small per-job offset,
so polling does not all fire at the same moment</p>
</blockquote>
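<p>A few Jenkins schedule patterns for reference (standard Jenkins cron syntax):</p>
<pre><code>H * * * *      # once an hour, at a per-job hashed minute
H/5 * * * *    # roughly every five minutes
H 2 * * 1-5    # once between 02:00 and 02:59 on weekdays
* * * * *      # every minute (Jenkins warns against this)</code></pre>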
<p>:
:
:</p>
<h3 id="jenkins--ansible-멱등성을-위해">jenkins + ansible, for idempotency</h3>
<p>Prepare one jenkins VM, one ansible VM, and one docker VM.
The ansible VM must have both Ansible and Docker installed.</p>
<pre><code>sudo apt update
sudo apt install ansible

# Docker install guide: https://docs.docker.com/engine/install/ubuntu/</code></pre><p>Connect from the ansible VM to the docker VMs over SSH</p>
<pre><code>ssh-keygen
ssh-copy-id vagrant@192.168.59.12
ssh-copy-id vagrant@192.168.59.13</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/30725165-8e35-48a0-aa46-f8dd3dfaa0a5/image.PNG" alt=""></p>
<pre><code># inventory configuration
vi .ansible.cfg
[defaults]
inventory = hosts.ini

# host configuration
vi hosts.ini
[ansible_host]
192.168.59.13
[docker_host]
192.168.59.12</code></pre><pre><code>ansible all -m ping   # verify connectivity</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/ba367e16-b598-46e7-b488-e93a860ee8d5/image.PNG" alt=""></p>
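<p>Once ping succeeds, ad-hoc commands can be run against the same inventory; a few illustrative examples (targets are the groups above, and live hosts are assumed):</p>
<pre><code>ansible docker_host -m ping           # target only the docker group
ansible docker_host -a &#39;docker ps&#39;    # -a runs an ad-hoc command
ansible all -m setup                  # gather facts from every host</code></pre>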
<p>Manage Jenkins -&gt; Configure System -&gt; (at the bottom) Publish over SSH [Add]</p>
<ul>
<li>name
ansible-host</li>
<li>host name</li>
</ul>
<p>192.168.59.13</p>
<ul>
<li>username
vagrant</li>
</ul>
<p>[Advanced] -&gt; use password (check)</p>
<ul>
<li>password
vagrant</li>
</ul>
<p>Click Test Configuration and confirm success
<img src="https://velog.velcdn.com/images/sunny-10/post/84efd503-8a24-4964-a64d-9e4cb18edfe7/image.PNG" alt=""></p>
<p>Apply -&gt; Save</p>
<h4 id="1-command로-배포하기">1. Deploying with commands</h4>
<p>jenkins: new Item -&gt; maven project
configured the same as before
<code>git</code>  -&gt; repository URL
<code>branch</code> -&gt; */main
<code>goals</code>  -&gt; clean package</p>
<p><code>Post-build Actions</code>
[send build artifacts over SSH]</p>
<pre><code>(server - name)
ansible-host

(source file)
webapp/target/webapp.war

(remove prefix)
webapp/target

(remote directory)
java-hello-world

add

(source file)
Dockerfile

(remote directory)
java-hello-world

add

(source file)
docker_build_and_push.yaml

(remove prefix)
playbook

(remote directory)
java-hello-world

(exec command)
ansible-playbook java-hello-world/docker_build_and_push.yaml</code></pre><p>Apply -&gt; Save</p>
<h4 id="2-jenkins-vm-에서">2. On the jenkins VM</h4>
<pre><code>mkdir playbook
cd playbook</code></pre><p><code>vi docker_build_and_push.yaml</code></p>
<pre><code>- name: Docker Image Build
  hosts: ansible_host
  gather_facts: false

  tasks:
    - command: docker image build -t java-hello-world java-hello-world/
    - command: docker container rm -f java-hello-world
      ignore_errors: yes
    - command: docker container run --name java-hello-world -d -p 8080:8080 java-hello-world</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/e8603626-50f3-4cdf-b33b-f332ae726cc6/image.PNG" alt=""></p>
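<p>Strictly speaking, the command module is not idempotent: every run rebuilds the image and recreates the container. A more idempotent sketch using the community.docker collection (assuming it is installed on the ansible VM) would be:</p>
<pre><code>- name: Docker Image Build (idempotent sketch)
  hosts: ansible_host
  gather_facts: false

  tasks:
    - community.docker.docker_image:
        name: java-hello-world
        source: build
        build:
          path: java-hello-world
        force_source: true
    - community.docker.docker_container:
        name: java-hello-world
        image: java-hello-world
        state: started
        recreate: true
        published_ports:
          - &quot;8080:8080&quot;</code></pre>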
<pre><code>git add .
git commit -m &#39;create docker&#39;
git push origin main


name -&gt; Sunny-1030
password -&gt; token</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/f74ce5e2-4ec4-4383-81f0-7df5910914e5/image.PNG" alt=""></p>
<blockquote>
<p>If git push fails
[error message]</p>
</blockquote>
<pre><code> ! [rejected]        main -&gt; main (non-fast-forward)
error: failed to push some refs to &#39;https://github.com/Sunny-1030/source-java-maven-hello-world&#39;</code></pre><pre><code>git pull origin main --allow-unrelated-histories</code></pre>
<p>Check on the ansible (VM)</p>
<pre><code>docker ps
docker images</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/8cac937e-bda5-49b3-9eca-371bbd233923/image.PNG" alt=""></p>
<blockquote>
<p>#tip
Parameterize the token:
put the token value in Default Value (it must be the Docker token!)
<img src="https://velog.velcdn.com/images/sunny-10/post/471f0650-0bf2-4f09-82a5-cf17605af727/image.PNG" alt=""><img src="https://velog.velcdn.com/images/sunny-10/post/4652b29b-5a18-4f66-8082-d2b5f85fc4fe/image.PNG" alt=""></p>
</blockquote>
<p>Create the Docker repository
-&gt; git push</p>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/94d454d5-9cc8-49f1-ae31-e5f57fe0afc8/image.PNG" alt=""></p>
<p>After verifying, delete the docker containers</p>
<pre><code>docker rm -f `docker ps -a -q`
docker ps</code></pre><h2 id="ansible-with-kubernetes">Ansible with Kubernetes</h2>
<blockquote>
<p>tip
If vi mangles pasted text,
:set paste + Enter fixes it temporarily</p>
</blockquote>
<p>Reference: [Jaemin&#39;s blog]
<a href="https://velog.io/@repush/CICD-%EA%B7%B8%EB%8C%80%EB%A1%9C-%EB%94%B0%EB%9D%BC%ED%95%98%EA%B8%B0-5%ED%8E%B8">https://velog.io/@repush/CICD-%EA%B7%B8%EB%8C%80%EB%A1%9C-%EB%94%B0%EB%9D%BC%ED%95%98%EA%B8%B0-5%ED%8E%B8</a></p>
<p>Managing Kubernetes hosts with Ansible&#39;s <code>k8s_*</code> modules</p>
<p>Prerequisites:</p>
<ul>
<li>the kubectl command</li>
<li>a kubeconfig file</li>
</ul>
<pre><code>sudo apt install python3-pip</code></pre><pre><code>sudo pip3 install openshift==0.11</code></pre><blockquote>
<p>The required version may differ depending on when you follow along</p>
</blockquote>
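<p>A minimal sketch of a playbook using the classic <code>k8s</code> module (assuming a kubeconfig at the default path and an existing pod.yaml manifest):</p>
<pre><code>- hosts: localhost
  gather_facts: false

  tasks:
    - name: Apply a manifest with the k8s module
      k8s:
        state: present
        src: ~/pod.yaml</code></pre>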
]]></description>
        </item>
        <item>
            <title><![CDATA[DevOps 실현을 위한 CI/CD 구축 (22.05.31)]]></title>
            <link>https://velog.io/@sunny-10/22.05.31</link>
            <guid>https://velog.io/@sunny-10/22.05.31</guid>
            <pubDate>Mon, 06 Jun 2022 06:11:13 GMT</pubDate>
            <description><![CDATA[<h1 id="git">Git</h1>
<h2 id="저장소-만들기">Creating a repository</h2>
<h3 id="버전관리를-하지-않는-로컬-디렉토리">A local directory not yet under version control</h3>
<pre><code>mkdir mygit
cd mygit</code></pre><pre><code>git init</code></pre><h3 id="기존-git-저장소-클론">Cloning an existing Git repository</h3>
<pre><code>git clone &lt;URL&gt;</code></pre><h2 id="git-프로젝트의-세-가지-상태">The three states of a Git project</h2>
<p><img src="https://git-scm.com/book/en/v2/images/areas.png" alt=""></p>
<h2 id="파일의-생명주기">The file lifecycle</h2>
<p><img src="https://git-scm.com/book/en/v2/images/lifecycle.png" alt=""></p>
<h2 id="상태-확인">Checking status</h2>
<pre><code>git status</code></pre><h2 id="스테이징">Staging</h2>
<pre><code>git add &lt;FILE&gt;
git add .</code></pre><h2 id="gitignore-파일">The .gitignore file</h2>
<ul>
<li>Blank lines and lines starting with <code>#</code> are ignored.</li>
<li>Standard glob patterns are used, applied across the entire project.</li>
<li>Patterns starting with a slash (/) do not apply recursively to subdirectories.</li>
<li>Directories are expressed with a trailing slash (/).</li>
<li>Patterns starting with an exclamation mark (!) are not ignored.</li>
</ul>
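<p>These rules can be checked quickly with git check-ignore in a throwaway repository:</p>
<pre><code>cd &quot;$(mktemp -d)&quot;
git init -q .
printf &#39;%s\n&#39; &#39;*.log&#39; &#39;!keep.log&#39; &#39;build/&#39; &gt; .gitignore
touch debug.log keep.log
mkdir build &amp;&amp; touch build/out.bin

git check-ignore debug.log      # matched by *.log, so its path is printed
git check-ignore build/out.bin  # matched by the build/ directory rule
git check-ignore keep.log || echo &#39;keep.log is NOT ignored&#39;  # negated by !keep.log</code></pre>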
<h2 id="staged와-unstaged-상태-변경-내용-보기">Viewing staged and unstaged changes</h2>
<p>Compare files that are not yet staged</p>
<pre><code>git diff</code></pre><p>Compare the staged state against the last commit</p>
<pre><code>git diff --staged</code></pre><h2 id="변경사항-커밋버저닝-스냅샷">Committing changes (versioning, snapshots)</h2>
<pre><code>git commit</code></pre><blockquote>
<p>Change the default editor:
<code>git config --global core.editor &lt;EDITOR&gt;</code></p>
</blockquote>
<p>Inline message</p>
<pre><code>git commit -m &lt;MESSAGE&gt;</code></pre><p>Stage and commit with an inline message</p>
<pre><code>git commit -a -m &lt;MESSAGE&gt;</code></pre><p>Writing good commit messages</p>
<blockquote>
<p><a href="https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53">https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53</a></p>
</blockquote>
<h2 id="파일-삭제">Deleting files</h2>
<pre><code>rm &lt;FILE&gt;</code></pre><p>Stage the deleted file</p>
<pre><code>git rm &lt;FILE&gt;</code></pre><h2 id="파일명-변경">Renaming files</h2>
<pre><code>git mv &lt;OLDNAME&gt; &lt;NEWNAME&gt;</code></pre><h2 id="커밋-히스토리로그">Commit history/log</h2>
<pre><code>git log</code></pre><blockquote>
<p>Change the log pager:
<code>git config --global core.pager &#39;less&#39;</code>
<code>git config --global core.pager &#39;&#39;</code></p>
</blockquote>
<pre><code>git log --oneline</code></pre><hr>
<h3 id="수업-중-예제">In-class example</h3>
<p>1) Create the file with vi pod.yaml, then
2) git add pod.yaml or . (add all files)
3) git status to check the state
<img src="https://velog.velcdn.com/images/sunny-10/post/61894559-5ba6-482f-9352-df2b86d346ad/image.PNG" alt="">
4) Register the git config details</p>
<pre><code>git config --global user.email &quot;ypjs09@gmail.com&quot;
git config --global user.name &quot;Suny-1030&quot;
git config --global --list</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/6b891e53-089b-4ac4-837b-3697f3c03ed6/image.PNG" alt="">
5) Commit and check the commit message</p>
<pre><code>git commit -m &quot;create config&quot;
  # -m writes the commit message inline without opening the editor

git log
git log --oneline     # view the log one line per commit</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/e2925e75-002d-4d5e-8177-8faa7b30dbef/image.PNG" alt=""></p>
<p>6) Go back to a previous version</p>
<pre><code>git checkout &lt;COMMIT&gt;</code></pre><p>7) Return to the latest state</p>
<pre><code>git checkout master</code></pre><p>8) Rename a file</p>
<pre><code>git mv pod.yaml mypod.yaml</code></pre><p>9) Reuse the existing commit message</p>
<pre><code>git commit --amend</code></pre><blockquote>
<p>HEAD -&gt; where you are currently looking</p>
</blockquote>
<p>10) Using branches</p>
<pre><code>git branch dev1 
git checkout dev1
git log </code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/4f464de8-6d66-4353-820d-2c50a49170e9/image.PNG" alt="">
11) On branch dev1, create a new yaml file and add/commit it</p>
<pre><code>git log --oneline --graph   # check where HEAD moved</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/540f8b8c-f7a0-4272-b076-3a6921b08db6/image.PNG" alt=""></p>
<p>12) Delete a branch</p>
<pre><code>git branch -d dev1
git branch -D dev1   # force delete</code></pre><p>13) Create a branch</p>
<pre><code>git checkout -b dev2   # create the missing branch and switch to it</code></pre><p>14) Merge</p>
<pre><code>git merge dev2</code></pre><blockquote>
<p>Merge conflicts:
modifying the same lines of the same file on different branches and then merging causes a conflict</p>
</blockquote>
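<p>A conflict can be reproduced in a throwaway repository (file name and contents are illustrative):</p>
<pre><code>cd &quot;$(mktemp -d)&quot;
git init -q .
git config user.email demo@example.com
git config user.name demo
echo &#39;replicas: 1&#39; &gt; pod.yaml
git add . &amp;&amp; git commit -qm &#39;base&#39;
git checkout -q -b dev1
echo &#39;replicas: 2&#39; &gt; pod.yaml &amp;&amp; git commit -qam &#39;dev1 change&#39;
git checkout -q -               # back to the original branch
echo &#39;replicas: 3&#39; &gt; pod.yaml &amp;&amp; git commit -qam &#39;main change&#39;
git merge dev1 || true          # CONFLICT: both branches changed the same line
cat pod.yaml                    # conflict markers now appear in the file</code></pre>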
<p>15) Rename a branch</p>
<pre><code>git branch -M main    # lowercase -m cannot be used while the name is in use</code></pre><p>16) Add a remote</p>
<pre><code>git remote add origin https://github.com/Sunny-1030/effective-eureka.git</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/7e40ac45-de31-4b8c-a707-efa74d4c20fd/image.PNG" alt=""></p>
<p>17) Push</p>
<pre><code>git push -u origin main

username sunnuy-1030
password *****
(a token must be issued if two-factor authentication is on)</code></pre><blockquote>
<p>If you edit directly on GitHub,
(fetch -&gt; fast-forward)
git fetch origin  </p>
</blockquote>
<p>18) fast-forward</p>
<pre><code>git merge origin/main
(fast-forward)</code></pre><blockquote>
<p>For collaboration on GitHub:
Settings -&gt; Collaborators -&gt; Add people
(not for large numbers of collaborators)
Fork -&gt; lets you bring in someone else&#39;s repository</p>
</blockquote>
<p>19) Create a tag</p>
<pre><code>git tag -a &#39;v0.1&#39; -m &#39;Version 0.1&#39;
git tag -l
git push origin v0.1</code></pre><p>20) Register an SSH key</p>
<pre><code>ssh-keygen

cat id_rsa.pub   # copy the output

# paste it into GitHub</code></pre><h3 id="git-간편안내서">git - the simple guide</h3>
<p>Reference: <a href="https://rogerdudler.github.io/git-guide/index.ko.html">https://rogerdudler.github.io/git-guide/index.ko.html</a></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[컨테이너 오케스트레이션을 위한 Kubernetes (22.05.30)]]></title>
            <link>https://velog.io/@sunny-10/22.05.30</link>
            <guid>https://velog.io/@sunny-10/22.05.30</guid>
            <pubDate>Mon, 30 May 2022 08:54:37 GMT</pubDate>
            <description><![CDATA[<h1 id="aws-eks">AWS EKS</h1>
<p>Reference: <a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/getting-started.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/getting-started.html</a></p>
<pre><code>choco install awscli aws-iam-authenticator eksctl kubernetes-helm</code></pre><p>Create a user in AWS (download the .csv file)</p>
<pre><code>aws configure</code></pre><pre><code>eksctl create cluster --name myeks --nodes=3 --region=ap-northeast-2</code></pre><blockquote>
<p>Things that do not work out of the box:
Load Balancer Service = classic lb -&gt; use nlb
Ingress: X
kubectl top: X -&gt; so no HPA</p>
</blockquote>
<p>Cluster networking reference: <a href="https://kubernetes.io/ko/docs/concepts/cluster-administration/networking/">https://kubernetes.io/ko/docs/concepts/cluster-administration/networking/</a></p>
<hr>
<h2 id="yaml-파일을-이용한-eks-배포">Deploying EKS with a YAML file</h2>
<pre><code>mkdir aws-eks
cd aws-eks</code></pre><p><code>myeks.yaml</code></p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: myeks-custom
  region: ap-northeast-2
  version: &quot;1.22&quot;

# AZ
availabilityZones: [&quot;ap-northeast-2a&quot;, &quot;ap-northeast-2b&quot;,  &quot;ap-northeast-2c&quot;]

# IAM OIDC &amp; Service Account
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
    - metadata:
        name: ebs-csi-controller-sa
        namespace: kube-system
      wellKnownPolicies:
        ebsCSIController: true
    - metadata:
        name: cluster-autoscaler
        namespace: kube-system
      wellKnownPolicies:
        autoScaler: true

# Managed Node Groups
managedNodeGroups:
  # On-Demand Instance
  - name: myeks-ng1
    instanceType: t3.medium
    minSize: 2
    desiredCapacity: 3
    maxSize: 4
    privateNetworking: true
    ssh:
      allow: true
      publicKeyPath: ./keypair/myeks.pub
    availabilityZones: [&quot;ap-northeast-2a&quot;, &quot;ap-northeast-2b&quot;, &quot;ap-northeast-2c&quot;]
    iam:
      withAddonPolicies:
        autoScaler: true
        albIngress: true
        cloudWatch: true
        ebs: true

# Fargate Profiles
fargateProfiles:
  - name: fg-1
    selectors:
    - namespace: dev
      labels:
        env: fargate


# CloudWatch Logging
cloudWatch:
  clusterLogging:
    enableTypes: [&quot;*&quot;]
</code></pre><pre><code>mkdir keypair
ssh-keygen -f keypair/myeks</code></pre><pre><code>eksctl create cluster -f myeks.yaml</code></pre><blockquote>
<p>The Classic Load Balancer only works with EC2</p>
</blockquote>
<h2 id="nlb-for-loadbalancer-service">NLB for LoadBalancer Service</h2>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/network-load-balancing.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/network-load-balancing.html</a>
<a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/alb-ingress.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/alb-ingress.html</a>
<a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/aws-load-balancer-controller.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/aws-load-balancer-controller.html</a></p>
</blockquote>
<h3 id="aws-load-balancer-controller-설치">AWS Load Balancer Controller 설치</h3>
<pre><code>helm repo add eks https://aws.github.io/eks-charts
helm repo update</code></pre><pre><code>helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=myeks-custom --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller --set image.repository=602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-load-balancer-controller</code></pre><h2 id="샘플-코드">Sample code</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080</code></pre><pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: &quot;external&quot;
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: &quot;instance&quot;
    service.beta.kubernetes.io/aws-load-balancer-scheme: &quot;internet-facing&quot;
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><ul>
<li>service.beta.kubernetes.io/aws-load-balancer-nlb-target-type<ul>
<li>instance: EC2 targets</li>
<li>ip: Pod targets (Fargate)</li>
</ul>
</li>
<li>service.beta.kubernetes.io/aws-load-balancer-scheme<ul>
<li>internal: internal</li>
<li>internet-facing: external</li>
</ul>
</li>
</ul>
<blockquote>
<p>Without the internet-facing setting, the LB is created in the private subnets</p>
</blockquote>
<h2 id="ingress-for-alb">Ingress for ALB</h2>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/alb-ingress.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/alb-ingress.html</a></p>
</blockquote>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ing
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myweb-svc-lb
                port:
                  number: 80</code></pre><ul>
<li>alb.ingress.kubernetes.io/target-type<ul>
<li>instance: EC2 targets</li>
<li>ip: Pod targets (Fargate)</li>
</ul>
</li>
<li>alb.ingress.kubernetes.io/scheme<ul>
<li>internal: internal</li>
<li>internet-facing: external</li>
</ul>
</li>
</ul>
<blockquote>
<p>Without the internet-facing setting, the LB is created in the private subnets</p>
</blockquote>
<h2 id="ebs-for-csi">EBS for CSI</h2>
<ul>
<li>EBS snapshots</li>
<li>EBS volume resizing</li>
</ul>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/managing-ebs-csi.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/managing-ebs-csi.html</a></p>
</blockquote>
<pre><code>eksctl get iamserviceaccount --cluster myeks-custom      # check the ARNs

NAMESPACE       NAME                            ROLE ARN
kube-system     aws-load-balancer-controller    arn:aws:iam::065144736597:role/eksctl-myeks-custom-addon-iamserviceaccount-Role1-11N0OKMVG2DYY
kube-system     aws-node                        arn:aws:iam::065144736597:role/eksctl-myeks-custom-addon-iamserviceaccount-Role1-CLMK7A6K5NL3
kube-system     cluster-autoscaler              arn:aws:iam::065144736597:role/eksctl-myeks-custom-addon-iamserviceaccount-Role1-1S02W28MZOSL4
kube-system     ebs-csi-controller-sa           arn:aws:iam::065144736597:role/eksctl-myeks-custom-addon-iamserviceaccount-Role1-15HLE8HBOD9CN</code></pre><pre><code>eksctl create addon --name aws-ebs-csi-driver --cluster myeks-custom --service-account-role-arn  arn:aws:iam::065144736597:role/eksctl-myeks-custom-addon-iamserviceaccount-Role1-15HLE8HBOD9CN --force
(replace with your own ARN)</code></pre><pre><code>kubectl get po -n kube-system

kubectl get sc</code></pre><h2 id="metrics-server">Metrics Server</h2>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/metrics-server.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/metrics-server.html</a></p>
</blockquote>
<pre><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml</code></pre><pre><code>kubectl get po -n kube-system

kubectl top nodes</code></pre><h2 id="cluster-autoscaler">Cluster Autoscaler</h2>
<h3 id="수동-스케일링">Manual scaling</h3>
<pre><code>eksctl scale nodegroup --name myeks-ng1 --cluster myeks-custom --nodes 2</code></pre><h3 id="자동-스케일링">Automatic scaling</h3>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/autoscaling.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/autoscaling.html</a></p>
</blockquote>
<pre><code>curl -o cluster-autoscaler-autodiscover.yaml https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml</code></pre><p><code>cluster-autoscaler-autodiscover.yaml</code></p>
<pre><code>163: - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/myeks-custom
(change the cluster name on line 163)</code></pre><pre><code>kubectl apply -f cluster-autoscaler-autodiscover.yaml</code></pre><pre><code>(use this if it applies)
kubectl patch deployment cluster-autoscaler -n kube-system -p &#39;{&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;false&quot;}}}}}&#39;</code></pre><pre><code>kubectl -n kube-system edit deployment.apps/cluster-autoscaler</code></pre><pre><code>      - command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/myeks-custom
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.22.6</code></pre><p>Edit the following:</p>
<ul>
<li>--node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/myeks-custom</li>
<li>--balance-similar-node-groups</li>
<li>--skip-nodes-with-system-pods=false</li>
<li>image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.22.2</li>
</ul>
<pre><code>kubectl set image deployment cluster-autoscaler -n kube-system cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.22.2</code></pre><p>View the cluster autoscaler logs</p>
<pre><code>kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler</code></pre><h3 id="샘플-코드-1">Sample code</h3>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb:alpine
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 200m
              memory: 200M
            limits:
              cpu: 200m
              memory: 200M</code></pre><h2 id="cloudwatch-container-insight">CloudWatch Container Insight</h2>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-quickstart.html">https://docs.aws.amazon.com/ko_kr/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-quickstart.html</a></p>
</blockquote>
<blockquote>
<p><a href="https://github.com/git-for-windows/git/releases/download/v2.36.1.windows.1/Git-2.36.1-64-bit.exe">https://github.com/git-for-windows/git/releases/download/v2.36.1.windows.1/Git-2.36.1-64-bit.exe</a>
(download the 64-bit installer, since there is no bash on Windows)</p>
</blockquote>
<pre><code>ClusterName=myeks-custom
RegionName=ap-northeast-2
FluentBitHttpPort=&#39;2020&#39;
FluentBitReadFromHead=&#39;Off&#39;
[[ ${FluentBitReadFromHead} = &#39;On&#39; ]] &amp;&amp; FluentBitReadFromTail=&#39;Off&#39;|| FluentBitReadFromTail=&#39;On&#39;
[[ -z ${FluentBitHttpPort} ]] &amp;&amp; FluentBitHttpServer=&#39;Off&#39; || FluentBitHttpServer=&#39;On&#39;
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed &#39;s/{{cluster_name}}/&#39;${ClusterName}&#39;/;s/{{region_name}}/&#39;${RegionName}&#39;/;s/{{http_server_toggle}}/&quot;&#39;${FluentBitHttpServer}&#39;&quot;/;s/{{http_server_port}}/&quot;&#39;${FluentBitHttpPort}&#39;&quot;/;s/{{read_from_head}}/&quot;&#39;${FluentBitReadFromHead}&#39;&quot;/;s/{{read_from_tail}}/&quot;&#39;${FluentBitReadFromTail}&#39;&quot;/&#39; | kubectl apply -f - </code></pre><h2 id="fargate">Fargate</h2>
<p>Runs pods without managing EC2 instances (serverless)</p>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/fargate.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/fargate.html</a></p>
</blockquote>
<pre><code>kubectl create ns dev</code></pre><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfg
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myfg
  template:
    metadata:
      labels:
        app: myfg
        env: fargate
    spec:
      containers:
      - name: myfg
        image: ghcr.io/c1t1d0s7/go-myweb
        resources:
          limits:
            memory: &quot;128Mi&quot;
            cpu: &quot;500m&quot;
        ports:
        - containerPort: 8080
</code></pre><pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysvc
  namespace: dev
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: &quot;external&quot;
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: &quot;ip&quot;
    service.beta.kubernetes.io/aws-load-balancer-scheme: &quot;internet-facing&quot;
spec:
  selector:
    app: myfg
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
</code></pre><h2 id="vpa">VPA</h2>
<blockquote>
<p><a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/vertical-pod-autoscaler.html">https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/vertical-pod-autoscaler.html</a></p>
</blockquote>
<p>Prerequisites</p>
<ul>
<li>openssl 1.1.1 or later</li>
<li>metrics-server</li>
</ul>
<pre><code>git clone https://github.com/kubernetes/autoscaler.git</code></pre><pre><code>cd autoscaler/vertical-pod-autoscaler/</code></pre><pre><code>./hack/vpa-up.sh</code></pre><pre><code>kubectl get pods -n kube-system</code></pre><h4 id="vpa-예제">VPA example</h4>
<pre><code>kubectl apply -f examples/hamster.yaml</code></pre><h2 id="클러스터-삭제">Deleting the cluster</h2>
<pre><code>eksctl delete cluster -f .\myeks.yaml --force --disable-nodegroup-eviction</code></pre><p><strong>Also delete the CloudWatch log resources</strong></p>
<hr>
<h1 id="tip">Tip</h1>
<h2 id="lens">lens</h2>
<p>A GUI interface to the cluster
Reference: <a href="https://k8slens.dev/">https://k8slens.dev/</a></p>
<pre><code>choco install lens
(install packages from an administrator shell)</code></pre><h2 id="k9s">k9s</h2>
<p>A TUI (text-based) interface to the cluster
Reference: <a href="https://k9scli.io/">https://k9scli.io/</a></p>
<pre><code>choco install k9s

k9s</code></pre><h2 id="visual-studio-code">visual studio code</h2>
<p>Extensions:
install the Kubernetes extension
(the Docker extension is also available)</p>
<pre><code>choco install kubernetes-helm</code></pre><blockquote>
<p>Ctrl + Shift + P
lets you run kubernetes create right from the command palette (no terminal needed)</p>
</blockquote>
<h2 id="minikube">minikube</h2>
<blockquote>
<p><a href="https://minikube.sigs.k8s.io/docs/start/">https://minikube.sigs.k8s.io/docs/start/</a></p>
</blockquote>
<pre><code>choco install minikube
(install from an administrator shell)</code></pre><pre><code>choco install kubernetes-cli --version=1.22.4</code></pre><p>Create/start a cluster</p>
<pre><code>minikube start
(installed with default settings)</code></pre><p>Stop the cluster</p>
<pre><code>minikube stop</code></pre><p>Cluster status</p>
<pre><code>minikube status</code></pre><p>Connect to the VM</p>
<pre><code>minikube ssh</code></pre><blockquote>
<p>No package manager
no kubectl command
docker commands work</p>
</blockquote>
<p>Using the Docker Engine inside the VM</p>
<pre><code>choco install docker-cli
(installs only the docker command, not the server)</code></pre><pre><code>minikube -p minikube docker-env --shell powershell | Invoke-Expression
(the environment variables are only valid in the current terminal)
</code></pre><pre><code>docker ps</code></pre><p>Delete the cluster</p>
<pre><code>minikube delete</code></pre><p>Create/start a cluster with extra options</p>
<pre><code>minikube start --cpus 4 --memory 4G --disk-size 30G --driver virtualbox --kubernetes-version v1.22.9</code></pre><p>Add nodes</p>
<pre><code>minikube node list
minikube node add     # joins the node automatically</code></pre><p>List services</p>
<pre><code>minikube service list</code></pre><p>Add-ons</p>
<pre><code>minikube addons list</code></pre><pre><code>minikube addons enable metrics-server
minikube addons enable ingress</code></pre><pre><code>minikube addons configure metallb

-- Enter Load Balancer Start IP: 192.168.X.200
-- Enter Load Balancer End IP: 192.168.X.209</code></pre><p>Set default cluster options</p>
<pre><code>minikube config set cpus 2
minikube config set memory 4G
minikube config set driver virtualbox
minikube config set kubernetes-version v1.22.9
minikube config view</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[컨테이너 오케스트레이션을 위한 Kubernetes (22.05.27)]]></title>
            <link>https://velog.io/@sunny-10/22.05.27</link>
            <guid>https://velog.io/@sunny-10/22.05.27</guid>
            <pubDate>Sun, 29 May 2022 07:13:10 GMT</pubDate>
            <description><![CDATA[<h1 id="rbac-role-based-access-control">RBAC: Role Based Access Control</h1>
<h2 id="kubeconfig">Kubeconfig</h2>
<p><code>~/.kube/config</code></p>
<pre><code>apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: cluster.local
  cluster:
    certificate-authority-data: LS0tLS1...
    server: https://127.0.0.1:6443
- name: mycluster
  cluster:
    server: https://1.2.3.4:6443
users:
- name: myadmin
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1...
    client-key-data: LS0tLS1...
contexts:
- context:
    cluster: mycluster
    user: myadmin
  name: myadmin@mycluster
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local</code></pre><pre><code>kubectl config view</code></pre><pre><code>kubectl config get-clusters
kubectl config get-contexts
kubectl config get-users</code></pre><pre><code>kubectl config use-context myadmin@mycluster</code></pre><h2 id="인증">인증</h2>
<p>쿠버네티스의 사용자</p>
<ul>
<li>Service Account(sa): 쿠버네티스가 관리하는 SA 사용자<ul>
<li>사용자 X</li>
<li>Pod 사용</li>
</ul>
</li>
<li>Normal User: 일반 사용자(쿠버네티스가 관리 X)<ul>
<li>사용자 O</li>
<li>Pod X</li>
</ul>
</li>
</ul>
<p>인증 방법:</p>
<ul>
<li>x509 인증서</li>
<li>토큰<ul>
<li>Bearer Token<ul>
<li>http 헤더: </li>
<li><code>Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269</code></li>
</ul>
</li>
<li>SA Token<ul>
<li>JSON Web Token: JWT<ul>
<li>OpenID Connect(OIDC)</li>
<li>외부 인증 표준화 인터페이스</li>
<li>okta, AWS IAM</li>
<li>OAuth2 Provider</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2 id="rbac">RBAC</h2>
<ul>
<li>Role: 권한(NS)</li>
<li>ClusterRole: 권한(Global)</li>
<li>RoleBinding<ul>
<li>Role &lt;-&gt; RoleBinding &lt;-&gt; SA/User</li>
</ul>
</li>
<li>ClusterRoleBinding<ul>
<li>ClusterRole &lt;-&gt; ClusterRoleBinding &lt;-&gt; SA/User</li>
</ul>
</li>
</ul>
<blockquote>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></p>
</blockquote>
<p>요청 동사</p>
<ul>
<li>create<ul>
<li>kubectl create, kubectl apply</li>
</ul>
</li>
<li>get<ul>
<li>kubectl get po myweb</li>
</ul>
</li>
<li>list<ul>
<li>kubectl get pods</li>
</ul>
</li>
<li>watch<ul>
<li>kubectl get po -w</li>
</ul>
</li>
<li>update<ul>
<li>kubectl edit, replace</li>
</ul>
</li>
<li>patch<ul>
<li>kubectl patch</li>
</ul>
</li>
<li>delete<ul>
<li>kubectl delete po myweb</li>
</ul>
</li>
<li>deletecollection<ul>
<li>kubectl delete po --all</li>
</ul>
</li>
</ul>
<p>ClusterRole</p>
<ul>
<li>view: 읽을 수 있는 권한</li>
<li>edit: 생성/삭제/변경 할 수 있는 권한</li>
<li>admin: 모든것 관리(-RBAC: ClusterRole 제외)</li>
<li>cluster-admin: 모든것 관리</li>
</ul>
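<p>네임스페이스 범위의 Role과 RoleBinding 예시 (이름과 네임스페이스는 임의의 예시, myuser에게 default 네임스페이스의 파드 조회 권한 부여):</p>
<pre><code># 예시용 이름(pod-reader)을 사용한 스케치
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: pod-reader-rb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: myuser</code></pre>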
<h2 id="sa">SA</h2>
<pre><code>kubectl create sa &lt;NAME&gt;</code></pre><h2 id="사용자-생성을-위한-x509-인증서">사용자 생성을 위한 x509 인증서</h2>
<p>Private Key</p>
<pre><code>openssl genrsa -out myuser.key 2048</code></pre><p>x509 인증서 요청 생성</p>
<pre><code>openssl req -new -key myuser.key -out myuser.csr -subj &quot;/CN=myuser&quot;</code></pre><pre><code>cat myuser.csr | base64 | tr -d &quot;\n&quot;</code></pre><p><code>csr.yaml</code></p>
<pre><code>apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser-csr
spec:
  usages:
  - client auth
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiB</code></pre><pre><code>kubectl create -f csr.yaml</code></pre><pre><code>kubectl get csr</code></pre><p>상태: Pending</p>
<pre><code>kubectl certificate approve myuser-csr</code></pre><pre><code>kubectl get csr</code></pre><p>상태: Approved, Issued</p>
<pre><code>kubectl get csr myuser-csr -o yaml</code></pre><p>status.certificates</p>
<pre><code>kubectl get csr myuser-csr -o jsonpath=&#39;{.status.certificate}&#39; | base64 -d &gt; myuser.crt</code></pre><p>Kubeconfig 사용자 생성</p>
<pre><code>kubectl config set-credentials myuser --client-certificate=myuser.crt --client-key=myuser.key --embed-certs=true</code></pre><p>Kubeconfig 컨텍스트 생성</p>
<pre><code>kubectl config set-context myuser@cluster.local --cluster=cluster.local --user=myuser --namespace=default</code></pre><pre><code>kubectl config get-users
kubectl config get-clusters
kubectl config get-contexts</code></pre><pre><code>kubectl config use-context myuser@cluster.local</code></pre><p>클러스터 롤 바인딩 생성</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myuser-view-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: myuser</code></pre><hr>
<h1 id="helm">Helm</h1>
<p>용어</p>
<ul>
<li>Chart: 차트, 패키지</li>
<li>Repository: 차트 저장소</li>
<li>Release: 쿠버네티스 오브젝트 리소스 (패키지 -&gt; 클러스터에 생성한 인스턴스)</li>
</ul>
<blockquote>
<p>helm v3는 tiller를 사용하지 않음</p>
</blockquote>
<h2 id="helm-client-설치">helm client 설치</h2>
<pre><code>curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg &gt; /dev/null
sudo apt-get install apt-transport-https --yes
echo &quot;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main&quot; | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm</code></pre><blockquote>
<p>Helm Chart 검색
<a href="https://artifacthub.io/">https://artifacthub.io/</a></p>
</blockquote>
<h2 id="차트-구조">차트 구조</h2>
<pre><code>&lt;Chart Name&gt;/
  Chart.yaml
  values.yaml
  templates/</code></pre><ul>
<li>Chart.yaml: 차트의 메타데이터</li>
<li>values.yaml: 패키지를 커스터마이즈/사용자화(벨류)</li>
<li>templates: YAML 오브젝트 파일</li>
</ul>
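<p>Chart.yaml 최소 예시 (값은 임의의 예시):</p>
<pre><code>apiVersion: v2          # helm v3 차트
name: mychart           # 예시용 이름
description: A sample chart
type: application
version: 0.1.0          # 차트 버전
appVersion: "1.0.0"     # 애플리케이션 버전</code></pre>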
<h2 id="helm-사용법">helm 사용법</h2>
<p>artifacthub 검색</p>
<pre><code>helm search hub &lt;PATTERN&gt;</code></pre><p>저장소 추가</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami</code></pre><p>저장소 검색</p>
<pre><code>helm search repo wordpress</code></pre><p>차트 설치</p>
<pre><code>helm install mywordpress bitnami/wordpress</code></pre><p>릴리즈 확인</p>
<pre><code>helm list</code></pre><p>릴리즈 삭제</p>
<pre><code>helm uninstall mywordpress</code></pre><p>차트 정보 확인</p>
<pre><code>helm show readme bitnami/wordpress
helm show chart bitnami/wordpress
helm show values bitnami/wordpress</code></pre><p>차트 사용자화</p>
<pre><code>helm install mywp bitnami/wordpress --set replicaCount=2
helm install mywp bitnami/wordpress --set replicaCount=2 --set service.type=NodePort</code></pre><p>릴리즈 업그레이드</p>
<pre><code>helm show values bitnami/wordpress &gt; wp-value.yaml
# wp-value.yaml 파일 수정</code></pre><pre><code>helm upgrade mywp bitnami/wordpress -f wp-value.yaml</code></pre><p>릴리즈 업그레이드 히스토리</p>
<pre><code>helm history mywp</code></pre><p>릴리즈 롤백</p>
<pre><code>helm rollback mywp 1</code></pre><p><code>wp-value2.yaml</code></p>
<pre><code>replicaCount: 1

service:
  type: LoadBalancer</code></pre><pre><code>helm upgrade mywp bitnami/wordpress -f wp-value2.yaml</code></pre><hr>
<h1 id="monitoring--logging">Monitoring &amp; Logging</h1>
<h2 id="prometheus-monitoring">Prometheus Monitoring</h2>
<p>CPU, Memory, Network IO, Disk IO</p>
<ul>
<li>Heapster + InfluxDB: X<ul>
<li>metrics-server: DB 없음, 실시간<ul>
<li>CPU, Memory</li>
</ul>
</li>
<li>Prometheus</li>
</ul>
</li>
</ul>
<p><img src="https://prometheus.io/assets/architecture.png" alt=""></p>
<blockquote>
<p><a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack">https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack</a></p>
</blockquote>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update</code></pre><p><code>prom-value.yaml</code></p>
<pre><code>grafana:
  service:
    type: LoadBalancer</code></pre><pre><code>kubectl create ns monitor</code></pre><pre><code>helm install prom prometheus-community/kube-prometheus-stack -f prom-value.yaml -n monitor</code></pre><p>웹브라우저
<a href="http://192.168.100.24X">http://192.168.100.24X</a>
ID: admin
PWD: prom-operator</p>
<h2 id="efk-logging">EFK Logging</h2>
<p>ELK Stack: Elasticsearch + Logstash + Kibana 
<strong>EFK Stack</strong>: Elasticsearch + Fluentd + Kibana
    Elasticsearch +  Fluent Bit + Kibana
Elastic Stack: Elasticsearch + Beat + Kibana</p>
<h3 id="elasticsearch">Elasticsearch</h3>
<pre><code>helm repo add elastic https://helm.elastic.co
helm repo update</code></pre><pre><code>helm show values elastic/elasticsearch &gt; es-value.yaml</code></pre><p><code>es-value.yaml</code></p>
<pre><code> 18 replicas: 1
 19 minimumMasterNodes: 1

 80 resources:
 81   requests:
 82     cpu: &quot;500m&quot;
 83     memory: &quot;1Gi&quot;
 84   limits:
 85     cpu: &quot;500m&quot;
 86     memory: &quot;1Gi&quot;</code></pre><pre><code>kubectl create ns logging</code></pre><pre><code>helm install elastic elastic/elasticsearch -f es-value.yaml -n logging</code></pre><h3 id="fluent-bit">Fluent Bit</h3>
<blockquote>
<p><a href="https://github.com/fluent/fluent-bit-kubernetes-logging">https://github.com/fluent/fluent-bit-kubernetes-logging</a></p>
</blockquote>
<pre><code>git clone https://github.com/fluent/fluent-bit-kubernetes-logging.git</code></pre><pre><code>cd  fluent-bit-kubernetes-logging</code></pre><pre><code>kubectl create -f fluent-bit-service-account.yaml
kubectl create -f fluent-bit-role-1.22.yaml
kubectl create -f fluent-bit-role-binding-1.22.yaml</code></pre><pre><code>kubectl create -f output/elasticsearch/fluent-bit-configmap.yaml</code></pre><p><code>output/elasticsearch/fluent-bit-ds.yaml</code></p>
<pre><code> 32         - name: FLUENT_ELASTICSEARCH_HOST
 33           value: &quot;elasticsearch-master&quot;</code></pre><pre><code>kubectl create -f output/elasticsearch/fluent-bit-ds.yaml</code></pre><h3 id="kibana">Kibana</h3>
<pre><code>helm show values elastic/kibana &gt; kibana-value.yaml</code></pre><p><code>kibana-value.yaml</code></p>
<pre><code> 49 resources:
 50   requests:
 51     cpu: &quot;500m&quot;
 52     memory: &quot;1Gi&quot;
 53   limits:
 54     cpu: &quot;500m&quot;
 55     memory: &quot;1Gi&quot;

119 service:
120   type: LoadBalancer</code></pre><pre><code>helm install kibana elastic/kibana -f kibana-value.yaml -n logging</code></pre><p><a href="http://192.168.100.X:5601">http://192.168.100.X:5601</a></p>
<ul>
<li>햄버거 -&gt; Management -&gt; Stack Management<ul>
<li>Kibana -&gt; Index Pattern<ul>
<li>Create Index Pattern 우상<ul>
<li>Name: logstash-*</li>
<li>Timestamp: @timestamp</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>햄버거 -&gt; Analytics -&gt; Discover</li>
</ul>
<hr>
<h1 id="tip">Tip</h1>
<h2 id="powerlevel10k">Powerlevel10k</h2>
<pre><code>git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k</code></pre><p><code>~/.zshrc</code></p>
<pre><code>ZSH_THEME=&quot;powerlevel10k/powerlevel10k&quot;</code></pre><pre><code>exec zsh</code></pre><pre><code>p10k configure</code></pre><h2 id="kubectx--kubens">kubectx &amp; kubens</h2>
<blockquote>
<p><a href="https://github.com/ahmetb/kubectx">https://github.com/ahmetb/kubectx</a></p>
</blockquote>
<pre><code>wget https://github.com/ahmetb/kubectx/releases/download/v0.9.4/kubectx</code></pre><pre><code>wget https://github.com/ahmetb/kubectx/releases/download/v0.9.4/kubens</code></pre><pre><code>sudo install kubectx /usr/local/bin
sudo install kubens /usr/local/bin</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[컨테이너 오케스트레이션을 위한 Kubernetes (22.05.26)]]></title>
            <link>https://velog.io/@sunny-10/22.05.26</link>
            <guid>https://velog.io/@sunny-10/22.05.26</guid>
            <pubDate>Sun, 29 May 2022 07:10:56 GMT</pubDate>
            <description><![CDATA[<h1 id="pod-scheduling">Pod Scheduling</h1>
<h2 id="nodename">nodeName</h2>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-nn
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeName: node2
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb</code></pre><h2 id="nodeselector">nodeSelector</h2>
<p>노드 레이블
node1</p>
<pre><code>beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node1
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=</code></pre><p>node2</p>
<pre><code>beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node2
kubernetes.io/os=linux</code></pre><p>node3</p>
<pre><code>beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node3
kubernetes.io/os=linux</code></pre><pre><code>kubectl label node node1 gpu=highend
kubectl label node node2 gpu=midrange
kubectl label node node3 gpu=lowend</code></pre><pre><code>kubectl get nodes -L gpu</code></pre><pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        gpu: lowend
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb</code></pre><h2 id="affinity">Affinity</h2>
<ul>
<li>affinity<ul>
<li>pod</li>
<li>node</li>
</ul>
</li>
<li>anti-affinty<ul>
<li>pod</li>
</ul>
</li>
</ul>
<p><code>myweb-a.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: a
  template:
    metadata:
      labels:
        app: a
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 10
              preference:
                matchExpressions:
                  - key: gpu
                    operator: Exists
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                 matchLabels:
                   app: a
              topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb</code></pre><p><code>myweb-b.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-b
spec:
  replicas: 2
  selector:
    matchLabels:
      app: b
  template:
    metadata:
      labels:
        app: b
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 10
              preference:
                matchExpressions:
                  - key: gpu
                    operator: Exists
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                 matchLabels:
                   app: b
              topologyKey: &quot;kubernetes.io/hostname&quot;
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                 matchLabels:
                   app: a
              topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb</code></pre><h2 id="cordon--drain">Cordon &amp; Drain</h2>
<p>Cordon: 
스케줄링 금지</p>
<pre><code>kubectl cordon &lt;NODENAME&gt;</code></pre><p>스케줄링 허용</p>
<pre><code>kubectl uncordon &lt;NODENAME&gt;</code></pre><p>Drain:
Cordon -&gt; 기존 파드를 제거</p>
<pre><code>kubectl drain &lt;NODENAME&gt; --ignore-daemonsets</code></pre><blockquote>
<p> <code>kubectl uncordon &lt;NODENAME&gt;</code></p>
</blockquote>
<h2 id="taint--toleration">Taint &amp; Toleration</h2>
<p>Control Plane
    Taint: &quot;node-role.kubernetes.io/master:NoSchedule&quot;</p>
<p>Taint: 특정 노드에 taint를 설정해 일반 파드의 스케줄링을 막음
Toleration: Taint 노드에 스케줄링 허용</p>
<pre><code>kubectl taint node node1 node-role.kubernetes.io/master:NoSchedule</code></pre><pre><code>      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule</code></pre><p><code>myweb-a.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: a
  template:
    metadata:
      labels:
        app: a
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 10
              preference:
                matchExpressions:
                  - key: gpu
                    operator: Exists
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                 matchLabels:
                   app: a
              topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb</code></pre><hr>
<pre><code>kubectl cordon node2</code></pre><pre><code>kubectl describe nodes | grep -i taint</code></pre><pre><code>kubectl uncordon node2</code></pre><pre><code>kubectl describe nodes | grep -i taint</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[컨테이너 오케스트레이션을 위한 Kubernetes (22.05.25)]]></title>
            <link>https://velog.io/@sunny-10/22.05.25</link>
            <guid>https://velog.io/@sunny-10/22.05.25</guid>
            <pubDate>Wed, 25 May 2022 08:21:07 GMT</pubDate>
            <description><![CDATA[<h1 id="statefulset">StatefulSet</h1>
<p>application의 상태(state)를 관리하는 데 사용하는 워크로드 API 오브젝트
파드들의 순서 및 고유성을 보장</p>
<p>pet vs cattle
고유성의 차이</p>
<p>headless service 와 StatefulSet을 같이 사용 (headless service 필수)</p>
<h2 id="headless-service">Headless Service</h2>
<p><code>myweb-svc.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><p><code>myweb-svc-headless.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-headless
spec:
  type: ClusterIP
  clusterIP: None # &lt;-- Headless Service
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><p><code>myweb-rs.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP</code></pre><pre><code>kubectl run nettool -it --image ghcr.io/c1t1d0s7/network-multitool --rm

&gt; host myweb-svc
&gt; host myweb-svc-headless </code></pre><h2 id="statefulset-1">StatefulSet</h2>
<h3 id="예제1">예제1</h3>
<p><code>myweb-svc-headless.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-headless
spec:
  type: ClusterIP
  clusterIP: None # &lt;-- Headless Service
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><p><code>myweb-sts.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myweb-sts
spec:
  replicas: 3
  serviceName: myweb-svc-headless
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP</code></pre><pre><code>kubectl run nettool -it --image ghcr.io/c1t1d0s7/network-multitool --rm

&gt; host myweb-svc-headless
&gt; host myweb-sts-0.myweb-svc-headless
&gt; host myweb-sts-1.myweb-svc-headless
&gt; host myweb-sts-2.myweb-svc-headless</code></pre><blockquote>
<p>pod가 삭제 되어도 똑같은 이름으로 다시 생성
이름이 고정적(이름뒤에 서수가 붙음 -예측가능)
순서대로 생성되고 삭제됨</p>
</blockquote>
<ul>
<li>template: pod생성</li>
<li>volumeClaimTemplates: 볼륨 pvc생성</li>
</ul>
<h3 id="예제2-pvc-템플릿">예제2: PVC 템플릿</h3>
<p><code>myweb-sts-vol.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myweb-sts-vol
spec:
  replicas: 3
  serviceName: myweb-svc-headless
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb:alpine
          ports:
            - containerPort: 8080
              protocol: TCP
          volumeMounts:
            - name: myweb-pvc
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: myweb-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1G
        storageClassName: nfs-client</code></pre><blockquote>
<p>pod가 삭제되어도 pv,pvc는 보존</p>
</blockquote>
<h3 id="예제3-mysql">예제3: mysql</h3>
<blockquote>
<p>참고링크
<a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a></p>
</blockquote>
<p>configMap</p>
<pre><code>wget https://k8s.io/examples/application/mysql/mysql-configmap.yaml
(바로 시작 안하고 받아서 수정 후 apply)

primary.cnf 와 replica.cnf의 
datadir 라인 제거 </code></pre><pre><code>kubectl create -f mysql-configmap.yaml</code></pre><p>service</p>
<pre><code>kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml</code></pre><p>statefulset</p>
<pre><code>kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml</code></pre><pre><code>kubectl get sts,po,pv,pvc</code></pre><p>넷툴로 접속 (--rm 을 사용하면 종료 시 자동으로 삭제됨)</p>
<pre><code>kubectl run nettool -it --image ghcr.io/c1t1d0s7/network-multitool --rm</code></pre><p>host확인</p>
<pre><code>host mysql
host mysql-read</code></pre><p>node0 master확인 (database 추가)</p>
<pre><code>mysql -h mysql-0.mysql -u root
(0번이 master )
show databases;
create database encore;
exit</code></pre><p>node1 (database 확인, 동기화)</p>
<pre><code>mysql -h mysql-1.mysql -u root
show databases;
(encore 존재 확인)
exit</code></pre><p>(database 삭제, 동기화)</p>
<pre><code>mysql -h mysql-0.mysql -u root
drop database encore;
show databases;

mysql -h mysql-1.mysql -u root
show databases;
(encore 삭제 확인)
exit</code></pre><pre><code>mysql -h mysql-0.mysql -u root
create database encore;
use encore;
create table encore.message (message VARCHAR(50));
show tables;
insert into encore.message values (&quot;hello mysql&quot;);
select * from message;
exit</code></pre><blockquote>
<p>다른 pod를 생성하여도 볼륨에 바로 동기화가 됨</p>
</blockquote>
<hr>
<h1 id="auto-scaling">Auto Scaling</h1>
<h2 id="resource-request--limit">Resource Request &amp; Limit</h2>
<p>요청: request
제한: limit</p>
<p>요청 &lt;= 제한</p>
<p>QoS(서비스 품질) Class:</p>
<ol>
<li>BestEffort: 가장 나쁨</li>
<li>Burstable</li>
<li>Guaranteed: 가장 좋음</li>
</ol>
<ul>
<li>요청/제한 설정되어 있지 않으면: BestEffort</li>
<li>요청 &lt; 제한: Burstable</li>
<li>요청 = 제한: Guaranteed</li>
</ul>
<p><code>pod.spec.containers.resources</code></p>
<ul>
<li>requests<ul>
<li>cpu</li>
<li>memory</li>
</ul>
</li>
<li>limits<ul>
<li>cpu</li>
<li>memory</li>
</ul>
</li>
</ul>
<p>CPU 요청 &amp; 제한: millicore
    ex) 1500m -&gt; 1.5개, 1000m -&gt; 1개
    ex) 1.5, 0.1
Memory 요청 &amp; 제한: M, G, T, Mi, Gi, Ti</p>
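<p>요청 &lt; 제한(Burstable)으로 설정한 파드 예시 (이름과 값은 임의의 예시):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-burstable   # 예시용 이름
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb
      resources:
        requests:         # 요청
          cpu: 100m
          memory: 100M
        limits:           # 제한 (요청보다 큼 -&gt; Burstable)
          cpu: 200m
          memory: 200M</code></pre>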
<p><code>myweb-reqlim.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-reqlim
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb
      resources:
        requests:
          cpu: 200m
          memory: 200M
        limits:
          cpu: 200m
          memory: 200M</code></pre><blockquote>
<p>replace는 resource 변경이 안되나  뒤에 --force를 붙여 강제로 가능 (삭제하고 재생성)
제한(limit)만 설정하면 요청(request)에 같은 값이 설정됨 
(요청만 설정하면 제한은 설정이 안됨)</p>
</blockquote>
<p>노드별 CPU/Memory 사용량 확인</p>
<pre><code>kubectl top nodes</code></pre><p>파드별 CPU/Memory 사용량 확인</p>
<pre><code>kubectl top pods
kubectl top pods -A</code></pre><p>리소스 모니터링(인프라 모니터링)
Heapster:
-&gt; metrics-server: 실시간 cpu/memory 모니터링
-&gt; prometheus: 실시간/이전 cpu/memory/network/disk 모니터링</p>
<p>노드별 요청/제한 양 확인</p>
<pre><code>kubectl describe nodes node1</code></pre><p>실행 할 수 없는 리소스
<code>myweb-big.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-big
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb
      resources:
        limits:
          cpu: 3000m
          memory: 4000M</code></pre><blockquote>
<p>Pending이 오래 걸리면 스케줄링이 되지 않는지,
볼륨이 연결되지 않는지, 이미지를 받지 못하는지 의심할 것</p>
</blockquote>
<h2 id="hpa-horisontal-pod-autoscaler">HPA: Horizontal Pod Autoscaler</h2>
<p>AutoScaling</p>
<ul>
<li>Pod<ul>
<li>HPA</li>
<li>VPA: Vertical Pod Autoscaler</li>
</ul>
</li>
<li>Node<ul>
<li>ClusterAutoScaler</li>
</ul>
</li>
</ul>
<p>HPA: Deployment, ReplicaSet, StatefulSet의 복제본 개수를 조정</p>
<blockquote>
<p>스케일 아웃: 180초
스케일 인: 300초</p>
</blockquote>
<p><code>myweb-deploy.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb:alpine
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 200m
            limits:
              cpu: 200m</code></pre><p>HPA를 위해 최소한 request는 설정되어 있어야 함</p>
<p><code>myweb-hpa.yaml</code></p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myweb-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myweb-deploy</code></pre><p>부하 (강제로 부하걸기)</p>
<pre><code>kubectl exec &lt;POD&gt; -- sha256sum /dev/zero
kubectl exec myweb-deploy-xx~ -- sha256sum /dev/zero</code></pre><blockquote>
<p>원하는 레플리카수 = ceil[ 현재 레플리카 수 * (현재 메트릭 값/원하는 메트릭 값)]
(ceil -&gt; ceiling, 천장 함수 -&gt; 올림)</p>
</blockquote>
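<p>위 공식의 간단한 계산 예시 (현재 2개 레플리카, 평균 사용률 90%, 목표 50%라고 가정):</p>
<pre><code># 원하는 레플리카 수 = ceil(현재 레플리카 수 * 현재 메트릭 / 원하는 메트릭)
current=2; cur_util=90; target=50
desired=$(( (current * cur_util + target - 1) / target ))  # 정수 연산으로 올림(ceil) 구현
echo $desired  # 4 = ceil(2 * 90 / 50) = ceil(3.6)</code></pre>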
<hr>
<p>autoscaling/v2beta2 버전
<code>myweb-hpa-v2beta2.yaml</code></p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myweb-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myweb-deploy</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[Nginx를 이용한 HTTPS 서버 구성 (22.05.24)]]></title>
            <link>https://velog.io/@sunny-10/22.05.241</link>
            <guid>https://velog.io/@sunny-10/22.05.241</guid>
            <pubDate>Tue, 24 May 2022 10:09:26 GMT</pubDate>
            <description><![CDATA[<h1 id="configmap-과-secret를-사용하여-nginx를-통한-http-구성">ConfigMap과 Secret을 사용하여 Nginx를 통한 HTTPS 구성</h1>
<h2 id="nginx-https-서버">Nginx HTTPS 서버</h2>
<p>Nginx</p>
<ul>
<li>Documentation Root: /usr/share/nginx/html/</li>
<li>Configuration File: /etc/nginx/conf.d</li>
</ul>
<h5 id="자체-서명-인증서-생성">자체 서명 인증서 생성</h5>
<p>ssc(self signed certificate 셀프인증) - 테스트용</p>
<p>Secret: </p>
<ul>
<li>Type: <code>kubernetes.io/tls</code></li>
</ul>
<pre><code>mkdir x509 &amp;&amp; cd x509</code></pre><p>Private Key</p>
<pre><code>openssl genrsa -out nginx-tls.key 2048</code></pre><p>Public Key</p>
<pre><code>openssl rsa -in nginx-tls.key -pubout -out nginx-tls</code></pre><p>CSR</p>
<pre><code>openssl req -new -key nginx-tls.key -out nginx-tls.csr</code></pre><blockquote>
<p>ssl 인증 시 필요한 정보 예시
:KR               (나라)
:Seoul            (주)
:Seoul            (시)
:Encore Inc.      (소속)
:IT               (전공)
:<a href="http://www.example.com">www.example.com</a>  (도메인)
:<a href="mailto:admin@encore.com">admin@encore.com</a> (이메일)</p>
</blockquote>
<p>인증서</p>
<pre><code>openssl req -x509 -days 3650 -key nginx-tls.key -in nginx-tls.csr -out nginx-tls.crt</code></pre><pre><code>rm nginx-tls nginx-tls.csr</code></pre><ul>
<li>nginx-tls.key</li>
<li>nginx-tls.crt</li>
</ul>
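<p>프롬프트 입력 없이 한 번에 자체 서명 인증서를 만들 수도 있음 (-subj 로 정보 지정, CN 값은 예시):</p>
<pre><code>openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=myapp.example.com" \
  -keyout nginx-tls.key -out nginx-tls.crt
openssl x509 -in nginx-tls.crt -noout -subject  # CN 확인</code></pre>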
<p><img src="https://velog.velcdn.com/images/sunny-10/post/5b7d3300-90f6-444e-b713-13bc1ed0e163/image.PNG" alt=""></p>
<h5 id="설정파일">설정파일</h5>
<p>ConfigMap</p>
<pre><code>mkdir conf &amp;&amp; cd conf</code></pre><p><code>nginx-tls.conf</code></p>
<pre><code>server {
    listen              80;
    listen              443 ssl;
    server_name         myapp.example.com;
    ssl_certificate     /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    location / {
        root   /usr/share/nginx/html;
        index  index.html;
    }
}</code></pre><h4 id="리소스-생성">리소스 생성</h4>
<p>CM 생성
<code>nginx-tls-config.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tls-config
data:
  nginx-tls.conf: |
    server {
      listen              80;
      listen              443 ssl;
      server_name         myapp.example.com;
      ssl_certificate     /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
      ssl_ciphers         HIGH:!aNULL:!MD5;
      location / {
        root   /usr/share/nginx/html;
        index  index.html;
      }
    }</code></pre><p>Secret 생성
<code>nginx-tls-secret.yaml</code></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: nginx-tls-secret
type: kubernetes.io/tls
data:
  # base64 x509/nginx-tls.crt -w 0
  tls.crt: |
    LS0tLS1C...
  # base64 x509/nginx-tls.key -w 0
  tls.key: |
    LS0tLS1C...</code></pre><p>Pod 생성
<code>nginx-https-pod.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-https-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
      - name: nginx-config
        mountPath: /etc/nginx/conf.d
      - name: nginx-certs
        mountPath: /etc/nginx/ssl
  volumes:
    - name: nginx-config
      configMap:
        name: nginx-tls-config
    - name: nginx-certs
      secret:
        secretName: nginx-tls-secret</code></pre><p>SVC 생성
<code>nginx-svc-lb.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/f05412dc-84b3-4a05-b070-fe0b4bf4f50c/image.PNG" alt="">
<img src="https://velog.velcdn.com/images/sunny-10/post/5fef70f6-cb0d-4d9c-80df-e5f7779fa39f/image.PNG" alt=""></p>
<p>Test</p>
<pre><code>curl -k https://192.168.100.X</code></pre><p><img src="https://velog.velcdn.com/images/sunny-10/post/ec46075a-c8df-458c-98f9-fc672bc8fb4f/image.PNG" alt=""></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[컨테이너 오케스트레이션을 위한 Kubernetes (22.05.24)]]></title>
            <link>https://velog.io/@sunny-10/22.05.24</link>
            <guid>https://velog.io/@sunny-10/22.05.24</guid>
            <pubDate>Tue, 24 May 2022 09:44:44 GMT</pubDate>
            <description><![CDATA[<h1 id="configmap--secret">ConfigMap &amp; Secret</h1>
<h2 id="환경변수">환경변수</h2>
<p><code>pods.spec.containers.env</code></p>
<ul>
<li>name</li>
<li>value</li>
</ul>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      env:
        - name: MESSAGE
          value: &quot;Customized Hello World&quot;</code></pre><h2 id="configmap">ConfigMap</h2>
<p>사용 용도:</p>
<ul>
<li>환경 변수</li>
<li>볼륨/파일<ul>
<li>설정파일</li>
<li>암호화 키/인증서</li>
</ul>
</li>
</ul>
<h4 id="환경-변수">환경 변수</h4>
<p><code>mymessage.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: mymessage
data:
  MESSAGE: Customized Hello ConfigMap</code></pre><p><code>myweb-env.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      envFrom:
        - configMapRef:
            name: mymessage</code></pre><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      env:
        - name: MESSAGE
          valueFrom:
            configMapKeyRef:
              name: mymessage
              key: MESSAGE</code></pre><blockquote>
<p>configMapRef 와 configMapKeyRef의 차이
configMapRef은 환경변수 전체를 읽음
configMapKeyRef에 지정한 키값만 읽음</p>
</blockquote>
<h4 id="파일">파일</h4>
<p><code>myweb-cm-vol.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-cm-vol
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      volumeMounts:
        - name: cmvol
          mountPath: /myvol

  volumes:
    - name: cmvol
      configMap:
        name: mymessage</code></pre><h2 id="secret">Secret</h2>
<p>value --base64--&gt; encoded data
(base64 is merely an encoding, not encryption, so Secret values are not actually protected)</p>
<blockquote>
<p>For real protection, use Secrets together with an external service:
HashiCorp Vault
AWS KMS
...</p>
</blockquote>
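<p>Secret values are base64-encoded, and base64 is a reversible encoding, not encryption — a quick sketch (note that <code>echo</code> without <code>-n</code> appends a newline, which is why encoded values often end in <code>K</code>):</p>

```shell
# Encode a value for a Secret manifest
echo -n 'admin' | base64        # YWRtaW4=
# Decoding it back shows nothing is hidden
echo 'YWRtaW4=' | base64 -d     # admin
```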
<h3 id="환경-변수-1">Environment Variables</h3>
<p><code>mydata.yaml</code></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mydata
type: Opaque
data:
  id: YWRtaW4K          # base64-encoded value (e.g. admin)
  pwd: UEBzc3cwcmQK     # base64-encoded value (e.g. password)</code></pre><pre><code>base64
admin
(output)  # copy this value into id and pwd</code></pre><pre><code>kubectl describe secret mydata      # values hidden
kubectl get secret mydata -o yaml   # values shown (encoded)</code></pre><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-secret
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      envFrom:
        - secretRef:
            name: mydata</code></pre><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      env:
        - name: ID    # variable name (required for valid syntax)
          valueFrom:
            secretKeyRef:
              name: mydata
              key: id</code></pre><h3 id="파일-1">Files</h3>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-sec-vol
spec:
  containers:
    - name: myweb
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      volumeMounts:
        - name: secvol
          mountPath: /secvol

  volumes:
    - name: secvol
      secret:
        secretName: mydata</code></pre><hr>
<h1 id="deployments-deploy">Deployments (deploy)</h1>
<p>Provides declarative updates for Pods and ReplicaSets.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb:v1
          ports:
            - containerPort: 8080</code></pre><hr>
<h2 id="deployments에서-사용-가능한-스토리지">Deployment Strategies</h2>
<p><img src="https://velog.velcdn.com/images/sunny-10/post/befdf42c-79d5-48bc-9d57-3cb7b34fb5f1/image.PNG" alt=""></p>
<h3 id="recreate"><strong>recreate</strong></h3>
<ul>
<li><p>Pros:
easy to set up,
the application is fully replaced</p>
</li>
<li><p>Cons:
causes downtime</p>
</li>
</ul>
<h3 id="rampedrolling-update"><strong>ramped(rolling-update)</strong></h3>
<ul>
<li><p>Pros:
no downtime (24/7),
the new version rolls out gradually,
databases are easy to migrate</p>
</li>
<li><p>Cons:
rollout/rollback takes time,
supporting multiple API versions is hard,
no way to control the traffic split</p>
</li>
</ul>
<h3 id="bluegreen">blue/green</h3>
<ul>
<li>Characteristics:
unlike recreate, it needs extra resources (two full environments),
minimizes downtime,
expensive</li>
</ul>
<h3 id="canary">canary</h3>
<ul>
<li>the LB must be able to send a configurable fraction of traffic to the new version</li>
</ul>
<h3 id="ab-testing">a/b testing</h3>
<ul>
<li>splits traffic per client using information such as browser or User-Agent (client identification)</li>
</ul>
<h3 id="shadow">shadow</h3>
<ul>
<li>requires advanced techniques</li>
</ul>
<hr>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><pre><code>kubectl rollout status deploy myweb-deploy</code></pre><pre><code>kubectl rollout history deploy myweb-deploy</code></pre><blockquote>
<p>To change the image, use one of:
replace, apply, edit, patch, set</p>
</blockquote>
<pre><code>kubectl set image deployments myweb-deploy myweb=ghcr.io/c1t1d0s7/go-myweb:v2.0 --record</code></pre><p><code>--record</code>: saves the command in the rollout history</p>
<blockquote>
<p>Only with --record is a change cause written into the history.</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
  annotations:
    kubernetes.io/change-cause: &quot;Change Go Myweb version from 3 to 4&quot;
    ...</code></pre><pre><code>kubectl apply -f myweb-deploy.yaml</code></pre><blockquote>
<p>(With apply alone, the history does not show what changed in the file; writing the change-cause annotation makes it visible.)
Note: when you set the annotation, do not also pass --record.</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
  annotations:
    kubernetes.io/change-cause: &quot;Change Go Myweb version from 3 to 4&quot;
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb:v4.0
          ports:
            - containerPort: 8080</code></pre><h4 id="max-surge">max surge</h4>
<p>(default: 25%) with the default 3 replicas and a surge of 1,
up to 3+1 = 4 pods may exist during a rollout;
the value can be given as an absolute number or a percentage</p>
<h4 id="max-unavailable">max unavailable</h4>
<p>(default: 25%) how many pods may be unavailable (deleted) during a rollout
(old ReplicaSets are kept, for rollback)</p>
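<p>The rounding rules can be checked with shell arithmetic — maxSurge rounds up, maxUnavailable rounds down (a sketch for the 25% defaults with 3 replicas):</p>

```shell
replicas=3
# maxSurge: 25% of replicas, rounded up -> 1
maxSurge=$(( (replicas * 25 + 99) / 100 ))
# maxUnavailable: 25% of replicas, rounded down -> 0
maxUnavailable=$(( replicas * 25 / 100 ))
echo "at most $((replicas + maxSurge)) pods, at least $((replicas - maxUnavailable)) available"
```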
<h4 id="minreadyseconds">minReadySeconds</h4>
<p>minimum wait time
(default: 0);
how long a new pod must stay Ready before being treated as available</p>
<h4 id="revisionhistorylimit">revisionHistoryLimit</h4>
<p>default: 10;
number of revisions kept in history</p>
<hr>
<h3 id="tlsssl-termination--with-ingress">TLS/SSL Termination  with Ingress</h3>
<p>SSL 3.0 and TLS 1.0 are compatible with each other.
SSL itself is no longer used because of known vulnerabilities.
<img src="https://velog.velcdn.com/images/sunny-10/post/db91b9fd-cfc5-42d0-9024-d90a602da042/image.PNG" alt=""></p>
<pre><code>client &lt;--&gt;LB&lt;--&gt;nginx pod https
      https   https
       &lt;---------&gt;
      end-to-end encryption</code></pre><pre><code>                  &lt; Private network &gt;
           Exposed SSL
client &lt;-&gt; Termination &lt;--&gt; webservice
             Proxy
       https           http
      &lt;----&gt;
    only this segment is encrypted</code></pre><p>Only the proxy needs a certificate;
the web service itself does not have to be certified.</p>
<p>An unencrypted segment is what makes attacks detectable:
inside an encrypted segment the attack traffic is encrypted too, so it is hard to detect.</p>
<blockquote>
<p>Encryption/decryption consumes a lot of memory.</p>
</blockquote>
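<p>The TLS Secret below needs a certificate/key pair. A self-signed pair can be generated and base64-encoded in one line each — a sketch assuming <code>openssl</code> is installed; the filenames mirror the comments in the manifest:</p>

```shell
# Self-signed certificate for *.nip.io (illustrative subject)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj '/CN=*.nip.io' -keyout nginx-tls.key -out nginx-tls.crt
# -w 0 emits a single line, ready to paste into tls.crt / tls.key
base64 -w 0 nginx-tls.crt
base64 -w 0 nginx-tls.key
```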
<p><code>ingress-tls-secret.yaml</code></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: ingress-tls-secret
type: kubernetes.io/tls
data:
  # base64 x509/nginx-tls.crt -w 0
  tls.crt: |
    LS0tLS1CRUd...
  # base64 x509/nginx-tls.key -w 0
  tls.key: |
    LS0tLS1CRUdJ...</code></pre><p><code>myweb-rs.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP</code></pre><p><code>myweb-svc-np.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-np
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><p><code>myweb-ing-tls.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ing-tls
spec:
  tls:
    - hosts:
        - &#39;*.nip.io&#39;
      secretName: ingress-tls-secret
  rules:
    - host: &#39;*.nip.io&#39;
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myweb-svc-np
                port:
                  number: 80</code></pre><pre><code>curl -k https://192-168-100-100.nip.io</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes for Container Orchestration (22.05.23)]]></title>
            <link>https://velog.io/@sunny-10/22.05.23</link>
            <guid>https://velog.io/@sunny-10/22.05.23</guid>
            <pubDate>Tue, 24 May 2022 09:23:09 GMT</pubDate>
            <description><![CDATA[<h1 id="volume">Volume</h1>
<p><code>spec.volumes.*</code>: volume types</p>
<h2 id="emptydir">emptyDir</h2>
<p>An empty temporary volume, deleted together with the pod.
Scratch space, e.g. for a disk-based merge sort.
Checkpointing a long computation so it can recover from a crash.
Holding files a content-manager container fetches while a web-server container serves the data.
Setting medium: Memory backs the volume with RAM instead of disk.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb-pod
spec:
  containers:
    - name: myweb1
      image: httpd
      volumeMounts:
        - name: emptyvol
          mountPath: /empty
    - name: myweb2
      image: ghcr.io/c1t1d0s7/go-myweb:alpine
      volumeMounts:
        - name: emptyvol
          mountPath: /empty
  volumes:
    - name: emptyvol
      emptyDir: {}</code></pre><pre><code>kubectl create -f myweb-pod.yaml</code></pre><pre><code>kubectl exec -it myweb-pod -c myweb1 -- bash

&gt; cd /empty
&gt; touch a b c</code></pre><pre><code>kubectl exec -it myweb-pod -c myweb2 -- sh

&gt; ls /empty</code></pre><pre><code>kubectl describe po myweb-pod  # the containers share the same volume</code></pre><h2 id="gitrepo사용-중지----initcontainer초기화-컨테이너">gitRepo (deprecated) --&gt; initContainer (init container)</h2>
<p>Runs exactly once when the pod is created, then exits.</p>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/concepts/workloads/pods/init-containers/">https://kubernetes.io/ko/docs/concepts/workloads/pods/init-containers/</a></p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init-pod
spec:
  initContainers:
    - name: gitpull
      image: alpine/git
      args:
        - clone
        - -b
        - v2.18.1
        - https://github.com/kubernetes-sigs/kubespray.git
        - /repo
      volumeMounts:
        - name: gitrepo
          mountPath: /repo
  containers:
    - name: gituse
      image: busybox
      args:
        - tail
        - -f
        - /dev/null
      volumeMounts:
        - name: gitrepo
          mountPath: /kube
  volumes:
    - name: gitrepo
      emptyDir: {}</code></pre><h2 id="hostpath">hostPath</h2>
<p><code>/mnt/web_contents/index.html</code></p>
<pre><code>&lt;h1&gt; Hello hostPath &lt;/h1&gt;</code></pre><blockquote>
<p>Note:
local storage cannot serve a storage volume to other hosts</p>
<ul>
<li>emptyDir</li>
<li>hostPath</li>
<li>gitRepo</li>
<li>local</li>
</ul>
</blockquote>
<p><code>vi myweb-rs-hp.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-hp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: httpd
          volumeMounts:
            - name: web-contents
              mountPath: /usr/local/apache2/htdocs/
      volumes:
        - name: web-contents
          hostPath:
            type: Directory
            path: /web_contents</code></pre><pre><code>sudo mkdir /web_contents</code></pre><pre><code>echo &quot;hello hostPath&quot; | sudo tee /web_contents/index.html</code></pre><pre><code>cat /web_contents/index.html</code></pre><pre><code>kubectl create -f myweb-rs-hp.yaml</code></pre><pre><code>kubectl get rs,po -o wide</code></pre><p>You can see the pod on node1 is created, but the pods on node2 and node3 are not.</p>
<p>emptyDir and hostPath are local storage,
not network storage.</p>
<pre><code>ssh node2 sudo mkdir /web_contents
ssh node2 echo &quot;hello hostPath&quot; | sudo tee /web_contents/index.html
ssh node3 sudo mkdir /web_contents
ssh node3 echo &quot;hello hostPath&quot; | sudo tee /web_contents/index.html</code></pre><pre><code>kubectl delete po myweb-   # delete the pods that failed to start</code></pre><pre><code>kubectl get po</code></pre><h2 id="pv--pvc">PV &amp; PVC</h2>
<ul>
<li>PersistentVolume: defines a storage volume</li>
<li>PersistentVolumeClaim: requests a PV</li>
</ul>
<h3 id="pv-pvc-예제">PV, PVC example</h3>
<p>Pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: httpd
      volumeMounts:
        - name: myvol
          mountPath: /tmp
  volumes:
    - name: myvol
      persistentVolumeClaim:
        claimName: mypvc</code></pre><p>PVC</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  volumeName: mypv
  ...</code></pre><p>PV</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  hostPath:
    path: /web_contents
    type: DirectoryOrCreate</code></pre><h3 id="pv-pvc-생명주기">PV and PVC Lifecycle</h3>
<p>PV &lt;--1:1--&gt; PVC</p>
<ol>
<li>Provisioning</li>
<li>Binding</li>
<li>Using</li>
<li>Reclaiming<ul>
<li>Retain: keep - the PV is not deleted (it becomes Released, and no PVC can bind it again)</li>
<li><strong>Delete</strong>: delete - the PV is deleted, along with the actual storage contents</li>
<li>Recycle: reuse (deprecated) - the storage contents are wiped and the PV returns to Available</li>
</ul>
</li>
</ol>
<h3 id="접근-모드access-mode">Access Modes</h3>
<ul>
<li>ReadWriteOnce: RWO</li>
<li>ReadWriteMany: RWX</li>
<li>ReadOnlyMany: ROX</li>
</ul>
<h3 id="nfs를-사용한-정적-프로비저닝static-provision">Static Provisioning with NFS</h3>
<p>node1: NFS server</p>
<pre><code>sudo apt install nfs-kernel-server -y</code></pre><pre><code>sudo mkdir /nfsvolume
echo &quot;&lt;h1&gt; Hello NFS Volume &lt;/h1&gt;&quot; | sudo tee /nfsvolume/index.html</code></pre><pre><code>sudo chown -R www-data:www-data /nfsvolume</code></pre><p><code>/etc/exports</code></p>
<pre><code>/nfsvolume 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash)</code></pre><pre><code>sudo systemctl restart nfs-kernel-server
systemctl status nfs-kernel-server</code></pre><blockquote>
<p>Confirm the status is active (exited).</p>
</blockquote>
<p>node1, node2, node3</p>
<pre><code>sudo apt install nfs-common -y </code></pre><p>or</p>
<pre><code>ansible all -i ~/kubespray/inventory/mycluster/inventory.ini -m apt -a &#39;name=nfs-common&#39; -b</code></pre><p>PV</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1G
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfsvolume
    server: 192.168.100.100</code></pre><p>PVC</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: &#39;&#39; # For Static Provisioning
  volumeName: mypv</code></pre><p>RS</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: httpd
          volumeMounts:
            - name: myvol
              mountPath: /usr/local/apache2/htdocs
      volumes:
        - name: myvol
          persistentVolumeClaim:
            claimName: mypvc</code></pre><p>SVC</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: web</code></pre><blockquote>
<p>If it does not work (fix the permissions),
on node1:
ls -ld /nfsvolume
sudo chown nobody:nogroup -R /nfsvolume
sudo chmod 770 -R /nfsvolume
sudo exportfs -arv</p>
</blockquote>
<blockquote>
<p>Even if a pod is deleted and recreated, it binds to the same PVC.
While a PVC is bound to a PV, the PV cannot be deleted.
A PV in the Released state cannot be bound again.</p>
</blockquote>
<h2 id="동적-프로비저닝">Dynamic Provisioning</h2>
<blockquote>
<p>Vagrant snapshots:
create: vagrant snapshot save before-rook
restore: vagrant snapshot restore before-rook</p>
</blockquote>
<h3 id="nfs-dynamic-provisioner-구성">NFS Dynamic Provisioner 구성</h3>
<blockquote>
<p><a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner">https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner</a></p>
</blockquote>
<pre><code>git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git</code></pre><pre><code>cd nfs-subdir-external-provisioner/deploy</code></pre><pre><code>kubectl create -f rbac.yaml</code></pre><p><code>deployment.yaml</code></p>
<pre><code>...
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.100.100
            - name: NFS_PATH
              value: /nfsvolume
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.100
            path: /nfsvolume</code></pre><pre><code>kubectl create -f deployment.yaml</code></pre><pre><code>kubectl create -f class.yaml</code></pre><hr>
<p><code>mypvc-dynamic.yaml</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
  storageClassName: &#39;nfs-client&#39;  # changed</code></pre><pre><code>kubectl create -f mypvc-dynamic.yaml</code></pre><pre><code>sudo ls -l /nfsvolume
sudo ls -l /nfsvolume/default~~</code></pre><pre><code>echo &quot;&lt;h1&gt; Hello NFS Dynamic Provision &lt;/h1&gt;&quot; | sudo tee /nfsvolume/XXX/index.html</code></pre><p><code>myweb-rs-dynamic.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: httpd
          volumeMounts:
            - name: myvol
              mountPath: /usr/local/apache2/htdocs
      volumes:
        - name: myvol
          persistentVolumeClaim:
            claimName: mypvc-dynamic</code></pre><pre><code>kubectl create -f myweb-rs-dynamic.yaml</code></pre><h3 id="기본-스토리지-클래스">Default Storage Class</h3>
<p><code>~/nfs-subdir-external-provisioner/deploy/class.yaml</code></p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;  # added
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment&#39;s env PROVISIONER_NAME&#39;
parameters:
  archiveOnDelete: &quot;false&quot;</code></pre><pre><code>kubectl apply -f class.yaml</code></pre><pre><code>kubectl get sc

NAME                   ...
nfs-client (default)   ...</code></pre><pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G</code></pre><blockquote>
<p>Once a default StorageClass is designated,
a PVC manifest without storageClassName automatically uses the default StorageClass.</p>
</blockquote>
]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes for Container Orchestration (22.05.20)]]></title>
            <link>https://velog.io/@sunny-10/22.05.20</link>
            <guid>https://velog.io/@sunny-10/22.05.20</guid>
            <pubDate>Fri, 20 May 2022 13:20:46 GMT</pubDate>
            <description><![CDATA[<h1 id="addons">Addons</h1>
<h2 id="metallb">Metallb</h2>
<p><code>~/kubespray/inventory/mycluster/group_vars/k8s-cluster/addons.yml</code></p>
<pre><code>...
139 metallb_enabled: true
140 metallb_speaker_enabled: true
141 metallb_ip_range:
142   - &quot;192.168.100.240-192.168.100.249&quot;
...
168 metallb_protocol: &quot;layer2&quot;
...</code></pre><p><code>~/kubespray/inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml</code></p>
<pre><code>129 kube_proxy_strict_arp: true</code></pre><h2 id="niginx-ingress-controller">Nginx Ingress Controller</h2>
<p><code>~/kubespray/inventory/mycluster/group_vars/k8s-cluster/addons.yml</code></p>
<pre><code> 93 ingress_nginx_enabled: true</code></pre><h2 id="metrics-server">metrics-server</h2>
<p><code>~/kubespray/inventory/mycluster/group_vars/k8s-cluster/addons.yml</code></p>
<pre><code> 16 metrics_server_enabled: true</code></pre><p>Apply</p>
<pre><code>ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes for Container Orchestration (22.05.19)]]></title>
            <link>https://velog.io/@sunny-10/22.05.19</link>
            <guid>https://velog.io/@sunny-10/22.05.19</guid>
            <pubDate>Fri, 20 May 2022 13:10:03 GMT</pubDate>
            <description><![CDATA[<h1 id="service--dns--ingress">Service &amp; DNS &amp; Ingress</h1>
<p>Service Type </p>
<p>1) NodePort - reachable from outside the cluster
2) LoadBalancer - reachable from outside the cluster
3) ClusterIP - internal to the cluster</p>
<h2 id="service---clusterip">Service - ClusterIP</h2>
<p><code>myweb-svc.yaml</code></p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
spec:
  selector: # pod selector
    app: web
  ports:
    - port: 80 # service port
      targetPort: 8080 # target (pod port)</code></pre>
<blockquote>
<p>port: the port clients connect to
targetPort: the container (pod) port</p>
</blockquote>
<pre><code>kubectl create -f .</code></pre><pre><code>kubectl get svc myweb-svc</code></pre><pre><code>kubectl describe svc myweb-svc</code></pre><pre><code>kubectl get endpoints myweb-svc</code></pre><pre><code>kubectl run nettool -it --image ghcr.io/c1t1d0s7/network-multitool

&gt; curl x.x.x.x (ClusterIP of the Service resource)
&gt; host myweb-svc
&gt; curl myweb-svc</code></pre><blockquote>
<p>To check with <strong>curl</strong> inside the pod:
apt update; apt install curl
curl x.x.x.x (10.233.28.39)</p>
</blockquote>
<h3 id="session-affinity">Session Affinity</h3>
<p>Pins a client session to a pod.
The default is None (can be changed to ClientIP).
<code>myweb-svc-ses.yaml</code></p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-ses
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre>
<h3 id="named-port">Named Port</h3>
<p><code>myweb-rs-named.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-named
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP
              name: web8080                # the container port can be given a name</code></pre><p><code>myweb-svc-named.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-named
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: web8080</code></pre><h3 id="multi-port">Multi Port</h3>
<p><code>myweb-rs-multi.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-multi
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP</code></pre><blockquote>
<p>http 80
https 443</p>
</blockquote>
<p><code>myweb-svc-multi.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-multi
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      name: http
    - port: 443
      targetPort: 8443
      name: https</code></pre><h2 id="service-discovery">Service Discovery</h2>
<h3 id="환경-변수를-이용한-sd">SD via Environment Variables</h3>
<p>When a pod starts, it is given the list of services existing at that moment as environment variables.</p>
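<p>The variable prefix is derived from the Service name: uppercased, with dashes turned into underscores — a sketch:</p>

```shell
# "myweb-svc" becomes the MYWEB_SVC prefix seen below
svc=myweb-svc
echo "$svc" | tr 'a-z' 'A-Z' | tr '-' '_'    # MYWEB_SVC
```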
<pre><code># env | grep MYWEB
MYWEB_SVC_PORT_80_TCP_PORT=80
MYWEB_SVC_PORT_80_TCP_PROTO=tcp
MYWEB_SVC_PORT_80_TCP=tcp://10.233.3.182:80
MYWEB_SVC_SERVICE_HOST=10.233.3.182
MYWEB_SVC_PORT=tcp://10.233.3.182:80
MYWEB_SVC_SERVICE_PORT=80
MYWEB_SVC_PORT_80_TCP_ADDR=10.233.3.182</code></pre><h3 id="dns를-이용한-sd">SD via DNS</h3>
<p>kube-dns(coredns-X 파드)</p>
<p>When a Service is created, its name is registered in the DNS server as an FQDN.</p>
<pre><code>[service name].[namespace].[object type].[domain]

myweb-svc.default.svc.cluster.local</code></pre><p>host myweb-svc
host myweb-svc.default
(including the namespace is recommended)</p>
<blockquote>
<p>The trailing dot (root hint):
myweb-svc.default.svc.cluster.local (.) &lt;-</p>
</blockquote>
<h4 id="nodelocal-dns">nodelocal DNS</h4>
<p>With the nodelocal DNS cache:
Pod --dns--&gt; 169.254.25.10 (node-cache, a DNS cache server) --&gt; coredns SVC (kube-system NS) -&gt; coredns Pod</p>
<p>Without the nodelocal DNS cache:
Pod --dns--&gt; coredns SVC (kube-system NS) -&gt; coredns Pod</p>
<h2 id="service---nodeport">Service - NodePort</h2>
<p><code>svc.spec.type</code></p>
<ul>
<li>ClusterIP: an LB used inside the cluster</li>
<li>NodePort: an access point from outside the cluster</li>
<li>LoadBalancer: an LB reachable from outside the cluster</li>
</ul>
<p>system ports 0~1023 (require root)
registered ports 1024~49151
dynamic/private ports 49152~65535 (used by clients)</p>
<p><strong>NodePort의 범위: 30000-32767</strong></p>
<p><code>myweb-svc-np.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-np
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 31313</code></pre><blockquote>
<p>NodePort = node port + ClusterIP
If nodePort is omitted, one is assigned automatically.</p>
</blockquote>
<h2 id="service---loadbalancer">Service - LoadBalancer</h2>
<p>LoadBalancer : L4 LB  (MetalLB)
ingress : L7 LB  (Nginx)</p>
<p><code>myweb-svc-lb.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 31313</code></pre><p>Kubernetes는 외부에 LB를 생성할 수 없음
but, METALLB 를 사용하면 가능
(내부 pod을 이용하여)</p>
<h4 id="metallb---addon">Metallb - Addon</h4>
<p><code>~/kubespray/inventory/mycluster/group_vars/k8s-cluster/addons.yml</code></p>
<pre><code>...
139 metallb_enabled: true
140 metallb_speaker_enabled: true
141 metallb_ip_range:
142   - &quot;192.168.100.240-192.168.100.249&quot;
...
168 metallb_protocol: &quot;layer2&quot;
...</code></pre><p><code>~/kubespray/inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml</code></p>
<pre><code>129 kube_proxy_strict_arp: true</code></pre><pre><code>ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b</code></pre><blockquote>
<p>LoadBalancer = (external) LB + NodePort + ClusterIP</p>
</blockquote>
<p>MetalLB has two modes:</p>
<ol>
<li>Layer2 (default) - small scale</li>
<li>BGP (uses an L3 switch) - large scale</li>
</ol>
<blockquote>
<p>(MetalLB reference link)
<a href="https://metallb.universe.tf/">https://metallb.universe.tf/</a></p>
</blockquote>
<h2 id="service---externalname">Service - ExternalName</h2>
<p>Sets up a DNS CNAME so that workloads inside the cluster can reach a specific service outside the cluster.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: weather-ext-svc
spec:
  type: ExternalName
  externalName: www.naver.com</code></pre><pre><code>kubectl replace -f weather-ext-svc.yaml   # when the external name (address) changes</code></pre><blockquote>
<p>curl -s &#39;wttr.in/Seoul&#39;
curl -s &#39;wttr.in/Seoul?format=1&#39;
curl -s &#39;wttr.in/Seoul?format=2&#39;
(shows the weather in Seoul!)</p>
</blockquote>
<hr>
<h2 id="ingress">Ingress</h2>
<p>L7 LB = ALB</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ing
spec:
  rules:
    - host: &#39;*.encore.xyz&#39;
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myweb-svc-np
                port:
                  number: 80</code></pre>
<p>How to test without a domain</p>
<p>Method 1</p>
<pre><code>curl --resolve www.encore.xyz:80:192.168.100.100 http://www.encore.xyz</code></pre><p>Method 2
<code>/etc/hosts</code></p>
<pre><code>...
192.168.100.100 www.encore.xyz  # add this line</code></pre><pre><code>curl http://www.encore.xyz</code></pre><p>Method 3</p>
<blockquote>
<p><a href="https://nip.io/">https://nip.io/</a>
<a href="https://sslip.io/">https://sslip.io/</a></p>
</blockquote>
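<p>nip.io is a wildcard DNS service: any hostname that embeds an IP address resolves to that address, so no /etc/hosts entry is needed. The dashed form used below can be derived from the IP:</p>

```shell
# Turn dots into dashes and append the nip.io zone
ip=192.168.100.100
echo "$(echo "$ip" | tr '.' '-').nip.io"    # 192-168-100-100.nip.io
```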
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myweb-ing
spec:
  rules:
    - host: &#39;*.nip.io&#39;
    ...</code></pre><pre><code>kubectl replace -f myweb-ing.yaml</code></pre><pre><code>curl http://192-168-100-100.nip.io</code></pre><h3 id="인그레스-예제">Ingress Example</h3>
<p>hello:one image
<code>Dockerfile</code></p>
<pre><code>FROM httpd
COPY index.html /usr/local/apache2/htdocs/index.html</code></pre><p><code>index.html</code></p>
<pre><code>&lt;h1&gt; Hello One &lt;/h1&gt;</code></pre><p>hello:two image
<code>Dockerfile</code></p>
<pre><code>FROM httpd
COPY index.html /usr/local/apache2/htdocs/index.html</code></pre><p><code>index.html</code></p>
<pre><code>&lt;h1&gt; Hello Two &lt;/h1&gt;</code></pre><pre><code>docker image build -t X/hello:one .
docker image build -t X/hello:two .</code></pre><pre><code>docker login</code></pre><pre><code>docker push X/hello:one
docker push X/hello:two</code></pre><p>RS
<code>one-rs.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: one-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-one
  template:
    metadata:
      labels:
        app: hello-one
    spec:
      containers:
        - name: hello-one
          image: c1t1d0s7/hello:one
          ports:
            - containerPort: 80
              protocol: TCP</code></pre><p><code>two-rs.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: two-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-two
  template:
    metadata:
      labels:
        app: hello-two
    spec:
      containers:
        - name: hello-two
          image: c1t1d0s7/hello:two
          ports:
            - containerPort: 80
              protocol: TCP</code></pre><p><code>one-svc-np.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: one-svc-np
spec:
  type: NodePort
  selector:
    app: hello-one
  ports:
    - port: 80
      targetPort: 80</code></pre><p><code>two-svc-np.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: two-svc-np
spec:
  type: NodePort
  selector:
    app: hello-two
  ports:
    - port: 80
      targetPort: 80</code></pre><p><code>hello-ing.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: / # URL rewrite: /one -&gt; /, /two -&gt; /
spec:
  rules:
    - host: &#39;*.nip.io&#39;
      http:
        paths:
          - path: /one
            pathType: Prefix
            backend:
              service:
                name: one-svc-np
                port:
                  number: 80
          - path: /two
            pathType: Prefix
            backend:
              service:
                name: two-svc-np
                port:
                  number: 80</code></pre><pre><code>kubectl create -f .</code></pre><h2 id="readiness-probe">Readiness Probe</h2>
<p>Through a pod health check, the pod is registered as a target in the Endpoints resource of the Service.</p>
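<p>The exec probe in the manifest below passes only while <code>ls /tmp/ready</code> exits 0, i.e. while the file exists; its mechanics can be simulated locally (using a temp directory in place of the container filesystem):</p>

```shell
dir=$(mktemp -d)
# File absent: ls exits non-zero -> probe fails -> pod not added to endpoints
ls "$dir/ready" >/dev/null 2>&1 && echo Ready || echo NotReady
# Creating the file (like `kubectl exec <POD> -- touch /tmp/ready`) flips the probe
touch "$dir/ready"
ls "$dir/ready" >/dev/null 2>&1 && echo Ready || echo NotReady
```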
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb:alpine
          ports:
            - containerPort: 8080
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - ls
                - /tmp/ready</code></pre><pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080</code></pre><pre><code>kubectl create -f .</code></pre><pre><code>watch -n1 -d kubectl get po,svc,ep</code></pre><pre><code>kubectl exec &lt;POD&gt; -- touch /tmp/ready</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes for Container Orchestration (22.05.18)]]></title>
            <link>https://velog.io/@sunny-10/22.05.18</link>
            <guid>https://velog.io/@sunny-10/22.05.18</guid>
            <pubDate>Wed, 18 May 2022 08:33:42 GMT</pubDate>
            <description><![CDATA[<h1 id="pod-lifecyclelifetime">Pod Lifecycle/Lifetime</h1>
<h2 id="pod-상태">Pod 상태</h2>
<ul>
<li>Pending: before being scheduled, before the image is pulled, before the containers are ready</li>
<li>Running: containers running, starting, or restarting</li>
<li>Succeeded: exited normally (0)</li>
<li>Failed: exited abnormally (non-zero)</li>
<li>Unknown: state unknown due to a node communication problem</li>
</ul>
<h2 id="container-상태">Container 상태</h2>
<ul>
<li>Waiting: before the image is pulled, before volumes are attached</li>
<li>Running: running</li>
<li>Terminated: exited</li>
</ul>
<h2 id="재시작-정책">Restart Policy</h2>
<pre><code>kubectl explain pod.spec.restartPolicy</code></pre><ul>
<li>pod.spec.restartPolicy<ul>
<li>Always (default)</li>
<li>OnFailure</li>
<li>Never</li>
</ul>
</li>
</ul>
<p>Watching in real time</p>
<pre><code># (updates only when state changes) - cannot combine multiple resource types
kubectl get pods --watch

# (refreshes on an interval) - can combine multiple resource types
watch -n1 -d kubectl get pods</code></pre><h2 id="지수-백오프">Exponential Back-off</h2>
<ul>
<li>When a pod fails, the restart policy restarts it<ul>
<li>with restart delays of 10, 20, 40, 80 ... seconds, capped at 300</li>
</ul>
</li>
</ul>
<h2 id="컨테이너-프로브">Container Probes</h2>
<h3 id="프로브-종류">Probe Types</h3>
<ul>
<li>liveness: whether the application is running and working</li>
<li>readiness: whether the application is ready to receive traffic</li>
<li>startup: whether the application has finished starting; until it succeeds, the other probes are disabled</li>
</ul>
<h3 id="프로브-메커니즘">Probe Mechanisms</h3>
<ul>
<li>httpGet<ul>
<li>Web, WebApp</li>
<li>response codes 2XX, 3XX = success</li>
</ul>
</li>
<li>tcpSocket<ul>
<li>TCP connection to the given port</li>
</ul>
</li>
<li>grpc<ul>
<li>connection over the gRPC protocol</li>
</ul>
</li>
<li>exec<ul>
<li>runs a command</li>
<li>exit code 0 = success</li>
</ul>
</li>
</ul>
<blockquote>
<p>HTTP response codes
1xx informational
2xx success
3xx redirection
4xx client error
5xx server error</p>
</blockquote>
<h3 id="프로브-결과">Probe Results</h3>
<ul>
<li>success</li>
<li>failure</li>
<li>unknown</li>
</ul>
<p><code>pods.spec.containers.livenessProbe</code></p>
<ul>
<li>exec</li>
<li>httpGet</li>
<li>tcpSocket</li>
<li>periodSeconds: probe interval</li>
<li>failureThreshold: failure threshold</li>
<li>successThreshold: success threshold</li>
<li>initialDelaySeconds: grace period before the first probe</li>
<li>timeoutSeconds: probe timeout</li>
</ul>
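<p>Putting the fields above together - a minimal livenessProbe sketch using httpGet (the path and timing values are illustrative, not from the original notes):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: myweb-liveness   # example name
spec:
  containers:
    - name: myweb
      image: httpd
      livenessProbe:
        httpGet:
          path: /                # 2XX/3XX responses count as success
          port: 80
        initialDelaySeconds: 5   # grace period before the first probe
        periodSeconds: 10        # probe interval
        timeoutSeconds: 1
        failureThreshold: 3
        successThreshold: 1</code></pre>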
<hr>
<h1 id="workload-resource--controller">Workload Resource = Controller</h1>
<h2 id="replicationcontroller">ReplicationController</h2>
<p>ReplicationController (rc)
: terminates surplus pods when there are too many
and starts new pods when there are too few</p>
<pre><code>kubectl explain rc.spec.template</code></pre><blockquote>
<p>pod.metadata.* = rc.spec.template.metadata.*
pod.spec.* = rc.spec.template.spec.*</p>
</blockquote>
<p><code>myweb-rc.yaml</code></p>
<pre><code class="language-yaml">apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb-rc
spec:
  replicas: 3
  selector:
    app: web
# Pod Configure
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP</code></pre>
<p>Create the rc from the file</p>
<pre><code>kubectl create -f myweb-rc.yaml</code></pre><p>Check the rc</p>
<pre><code>watch kubectl get rc,pods --show-labels -o wide

NAME                 READY   STATUS    RESTARTS   AGE   LABELS
pod/myweb-rc-7m4v7   1/1     Running   0          29m   app=web
pod/myweb-rc-7s4vp   1/1     Running   0          78m   app=web
pod/myweb-rc-jtq7d   1/1     Running   0          78m   app=web</code></pre><p>Change a pod label (and back)</p>
<pre><code>kubectl label pod myweb-rc-jtq7d app=db --overwrite</code></pre><pre><code>kubectl label pod myweb-rc-jtq7d app=web --overwrite</code></pre><h3 id="rc-스케일링">RC Scaling</h3>
<p>Imperative command</p>
<pre><code>kubectl scale rc myweb-rc --replicas=5</code></pre><p>Imperative object configuration</p>
<pre><code>kubectl replace -f myweb-rc.yaml</code></pre><p>Edit the YAML file and run replace instead of create.
(Note: changes to the template do not affect existing pods - the template is applied only when a pod is first created.)</p>
<pre><code>kubectl patch -f myweb-rc.yaml -p &#39;{&quot;spec&quot;: {&quot;replicas&quot;: 3}}&#39;
kubectl patch rc myweb-rc --patch-file replicas.json</code></pre><p><code>replicas.json</code></p>
<pre><code class="language-json">{&quot;spec&quot;: {&quot;replicas&quot;: 3}}</code></pre>
<pre><code>kubectl edit -f myweb-rc.yaml
kubectl edit rc myweb-rc
kubectl edit rc/myweb-rc</code></pre><p>Declarative object configuration</p>
<pre><code>kubectl apply -f myweb-rc.yaml</code></pre><h2 id="replicasets">ReplicaSets</h2>
<p>ReplicationController -&gt; ReplicaSet
rs is nearly identical to rc, but adds <strong>matchExpressions</strong></p>
<p>Using matchLabels (similar to rc)</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      env: dev
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP
</code></pre>
<p>Using matchExpressions (selects and manages exactly the pods it owns)</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs-set
spec:
  replicas: 3
  selector:
    matchExpressions:
      - key: app
        operator: In
        values: 
          - web
      - key: env
        operator: Exists
  template:
    metadata:
      labels:
        app: web
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP
</code></pre>
<h2 id="daemonsets">DaemonSets</h2>
<p>: runs a pod on every node
when a node is added, a pod is added on it
when a node is removed, its pod is garbage collected
deleting a DaemonSet cleans up the pods it created</p>
<blockquote>
<p>Main use: agents
that support an app or maintain and manage infrastructure</p>
</blockquote>
<p>Place one pod on each node</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myweb-ds
spec:
  selector:
    matchExpressions:
      - key: app
        operator: In
        values:
          - myweb
      - key: env
        operator: Exists
  template:
    metadata:
      labels:
        app: myweb
        env: dev
    spec:
      containers:
        - name: myweb
          image: ghcr.io/c1t1d0s7/go-myweb
          ports:
            - containerPort: 8080
              protocol: TCP</code></pre>
<pre><code>cd ~/kubespray</code></pre><p>Remove a node</p>
<pre><code>ansible-playbook -i inventory/mycluster/inventory.ini remove-node.yml -b --extra-vars=&quot;node=node3&quot; --extra-vars reset_nodes=true</code></pre><p>Add a node</p>
<pre><code>ansible-playbook -i inventory/mycluster/inventory.ini scale.yml -b</code></pre><blockquote>
<p>Reference for adding and removing nodes
<a href="https://kubespray.io/#/docs/getting-started">https://kubespray.io/#/docs/getting-started</a></p>
</blockquote>
<h2 id="jobs">Jobs</h2>
<p>: keeps running pods until they terminate successfully
Start --&gt; Complete</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: mypi
spec:
  template:
    spec:
      containers:
        - image: perl  
          name: mypi
          command: [&quot;perl&quot;,  &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: OnFailure</code></pre><blockquote>
<p>tip. when typing a command, pressing Esc+. recalls the last argument or option</p>
</blockquote>
<h3 id="잡-컨트롤러의-레이블">잡 컨트롤러의 레이블</h3>
<p>파드 템플릿의 레이블 / 잡 컨트롤러의 레이블 셀렉터는 지정하지 않는다.
-&gt; 잘못된 매핑으로 기존의 파드를 종료하지 않게 하기 위함</p>
<h3 id="파드의-종료-및-삭제">Pod Termination and Deletion</h3>
<p><code>job.spec.activeDeadlineSeconds</code>: limits how long the application may run
<code>job.spec.ttlSecondsAfterFinished</code>: deletes the controller and its pods after completion</p>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: Job
metadata:
  name: mypi
spec:
  template:
    spec:
      containers:
        - image: perl
          name: mypi
          command: [&quot;perl&quot;,  &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: OnFailure
  ttlSecondsAfterFinished: 10</code></pre>
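<p>activeDeadlineSeconds can be sketched the same way (the 60-second deadline is an arbitrary example): once the deadline passes, the running pods are terminated and the Job is marked failed:</p>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: Job
metadata:
  name: mypi-deadline   # example name
spec:
  activeDeadlineSeconds: 60   # the Job may run for at most 60 seconds
  template:
    spec:
      containers:
        - image: perl
          name: mypi
          command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: OnFailure</code></pre>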
<h3 id="작업의-병렬-처리">Parallel Job Processing</h3>
<p><code>job.spec.completions</code>: number of successful completions required
<code>job.spec.parallelism</code>: number of pods run in parallel
completions &gt;= parallelism </p>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: Job
metadata:
  name: mypi-para
spec:
  completions: 3
  parallelism: 3
  template:
    spec:
      containers:
        - image: perl
          name: mypi
          command: [&quot;perl&quot;,  &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(1500)&quot;]
      restartPolicy: OnFailure</code></pre>
<h3 id="일시중지">Suspending</h3>
<pre><code>kubectl edit job mypi</code></pre><p>Set <strong>suspend</strong> to true to pause the job (the default is false)</p>
<blockquote>
<p>In the list from kubectl get po -A,
pods with a -node suffix are static pods, managed by the kubelet</p>
</blockquote>
<h2 id="cronjob">CronJob</h2>
<p>: runs a job on a recurring schedule</p>
<blockquote>
<p> kubectl explain cj --api-version=batch/v1beta1 (inspect details for another API version)
1.8 ~ 1.20 -&gt; v1beta1
1.21 -&gt; v1</p>
</blockquote>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: CronJob
metadata:
  name: sleep-cj
spec:
  schedule: &quot;* * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: sleep
              image: ubuntu
              command: [&quot;sleep&quot;, &quot;80&quot;]
          restartPolicy: OnFailure
  #concurrencyPolicy: ( Allow | Forbid | Replace )</code></pre>
<p><code>cj.spec.concurrencyPolicy</code></p>
<ul>
<li>Allow: concurrent runs are allowed</li>
<li>Forbid: concurrent runs are forbidden (the previous run keeps going)</li>
<li>Replace: the previous run is terminated and a new one starts</li>
</ul>
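<p>For example, setting concurrencyPolicy explicitly (a sketch based on the sleep-cj manifest above; the name is illustrative):</p>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: CronJob
metadata:
  name: sleep-cj-forbid   # example name
spec:
  schedule: &quot;* * * * *&quot;
  concurrencyPolicy: Forbid   # skip a new run while the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: sleep
              image: ubuntu
              command: [&quot;sleep&quot;, &quot;80&quot;]
          restartPolicy: OnFailure</code></pre>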
<blockquote>
<p>If more than 100 scheduled runs are missed, the job is not started and an error is logged</p>
</blockquote>
]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes for Container Orchestration (22.05.17)]]></title>
            <link>https://velog.io/@sunny-10/22.05.17</link>
            <guid>https://velog.io/@sunny-10/22.05.17</guid>
            <pubDate>Tue, 17 May 2022 14:20:15 GMT</pubDate>
            <description><![CDATA[<h1 id="workload---pod">Workload - Pod</h1>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/concepts/workloads/pods/">https://kubernetes.io/ko/docs/concepts/workloads/pods/</a></p>
</blockquote>
<p>Pod: a group of containers
the smallest workload Kubernetes can manage is the Pod</p>
<h2 id="파드-생성-및-관리">Creating and Managing Pods</h2>
<p>Create a pod with an imperative command</p>
<pre><code>kubectl run myweb --image httpd</code></pre><p>List pods</p>
<pre><code>kubectl get pods</code></pre><p>Get a specific pod</p>
<pre><code>kubectl get pods myweb</code></pre><p>Pod details</p>
<pre><code>kubectl get pods -o wide</code></pre><pre><code>kubectl get pods -o yaml</code></pre><pre><code>kubectl get pods -o json</code></pre><pre><code>kubectl describe pods myweb</code></pre><p>Application logs</p>
<pre><code>kubectl logs myweb</code></pre><p>Delete the pod</p>
<pre><code>kubectl delete pods myweb</code></pre><h2 id="yaml-파일로-파드-정의">Defining a Pod in a YAML File</h2>
<p><code>myweb.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod 
metadata:
  name: myweb
spec:
  containers:
    - name: myweb
      image: httpd
      ports:
        - containerPort: 80
          protocol: TCP</code></pre><blockquote>
<p>kubectl explain pods</p>
</blockquote>
<p>Create the pod from the file</p>
<pre><code>kubectl create -f myweb.yaml</code></pre><p>Inspect the pod via the file</p>
<pre><code>kubectl get -f myweb.yaml</code></pre><pre><code>kubectl describe -f myweb.yaml</code></pre><p>Delete the pod via the file</p>
<pre><code>kubectl delete -f myweb.yaml</code></pre><h2 id="kubectl-명령의-서브-명령">kubectl Subcommands</h2>
<ul>
<li>create</li>
<li>get</li>
<li>describe</li>
<li>logs</li>
<li>delete</li>
<li>replace</li>
<li>patch</li>
<li>apply</li>
<li>diff</li>
</ul>
<h2 id="파드-디자인">Pod Design</h2>
<p><img src="https://d33wubrfki0l68.cloudfront.net/aecab1f649bc640ebef1f05581bfcc91a48038c4/728d6/images/docs/pod.svg" alt=""></p>
<ul>
<li>Single container: the common form</li>
<li>Multi container: a main application exists, plus containers placed alongside it to extend its functionality</li>
</ul>
<blockquote>
<p>Sidecar pattern
<a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/">https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/</a></p>
</blockquote>
<ul>
<li>sidecar: extends functionality</li>
<li>ambassador: proxy/LB</li>
<li>adapter: standardizes output</li>
</ul>
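<p>A multi-container pod in the sidecar style can be sketched like this (both container names, images, and paths are illustrative - the helper shares the pod volumes and network with the main app):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: myweb-sidecar   # example name
spec:
  containers:
    - name: myweb          # main application
      image: httpd
      volumeMounts:
        - name: logs
          mountPath: /usr/local/apache2/logs
    - name: log-agent      # sidecar: extends the main app (e.g. ships its logs)
      image: busybox
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;tail -F /logs/access_log&quot;]
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs
      emptyDir: {}</code></pre>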
<h2 id="포트-및-포트-포워딩">Ports and Port Forwarding</h2>
<p>For testing &amp; debugging</p>
<pre><code>kubectl port-forward pods/myweb 8080:80</code></pre><h2 id="이름--uid">Name &amp; UID</h2>
<p>Name: unique within a namespace
UID: unique within the cluster</p>
<hr>
<h1 id="namespace">Namespace</h1>
<p>Separates resources</p>
<ul>
<li>Per service</li>
<li>Per user</li>
<li>Per environment: development, staging, production</li>
</ul>
<blockquote>
<p>Service: DNS names are separated per namespace
RBAC: permissions are set on the namespace</p>
</blockquote>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/namespaces/">https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/namespaces/</a></p>
</blockquote>
<pre><code>kubectl get namespaces</code></pre><ul>
<li>kube-system: core Kubernetes components</li>
<li>kube-public: readable by all users</li>
<li>kube-node-lease: holds the Lease resources used for node heartbeats</li>
<li>default: the default workspace
Create a namespace (ns)<pre><code>kubectl create ns developments</code></pre>Delete a ns<pre><code>kubectl delete ns developments</code></pre>List pods across all namespaces (-A is short for --all-namespaces)<pre><code>kubectl get pods -A</code></pre>Use the -n option to target a namespace<pre><code>kubectl get pods -n kube-system</code></pre></li>
</ul>
<p><code>ns-dev.yaml</code></p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: dev</code></pre><pre><code>kubectl create -f ns-dev.yaml</code></pre><p><code>myweb-dev.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myweb
  namespace: dev
spec:
  containers:
    - name: myweb
      image: httpd
      ports:
        - containerPort: 80
          protocol: TCP          </code></pre><pre><code>kubectl create -f myweb-dev.yaml</code></pre><pre><code>kubectl delete -f myweb-dev.yaml</code></pre><hr>
<h1 id="label--labelselector">Label &amp; LabelSelector</h1>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/labels/">https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/labels/</a>
<a href="https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/common-labels/">https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/common-labels/</a></p>
</blockquote>
<h2 id="label">Label</h2>
<p>View labels</p>
<pre><code>kubectl get pods --show-labels</code></pre><pre><code>kubectl get pods X -o yaml</code></pre><pre><code>kubectl describe pods X</code></pre><p>Manage labels</p>
<pre><code>kubectl label pods myweb APP=apache</code></pre><pre><code>kubectl label pods myweb ENV=developments</code></pre><pre><code>kubectl label pods myweb ENV=staging</code></pre><p>Overwrite with the --overwrite option; remove with a trailing -</p>
<pre><code>kubectl label pods myweb ENV=staging --overwrite</code></pre><pre><code>kubectl label pods myweb ENV-</code></pre><h2 id="labelselector">LabelSelector</h2>
<ul>
<li><strong>Searching</strong></li>
<li>Linking resources together</li>
</ul>
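<p>&quot;Linking resources&quot; is how a Service finds its pods - its selector must match the pod labels (a sketch; the Service name is illustrative and the APP=apache label follows the examples above):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: myweb-svc   # example name
spec:
  selector:
    APP: apache     # links this Service to every pod carrying APP=apache
  ports:
    - port: 80
      targetPort: 80</code></pre>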
<h3 id="일치성equality-base">Equality-based</h3>
<ul>
<li><code>=</code></li>
<li><code>==</code></li>
<li><code>!=</code></li>
</ul>
<pre><code>kubectl get pods -l APP=nginx
kubectl get pods -l APP==nginx</code></pre><pre><code>kubectl get pods -l &#39;APP!=nginx&#39;</code></pre><h3 id="집합성set-base">Set-based</h3>
<ul>
<li><code>in</code></li>
<li><code>notin</code></li>
<li><code>exists</code>: matches on the key alone<ul>
<li><code>kubectl get pods -l &#39;APP&#39;</code></li>
</ul>
</li>
<li><code>doesnotexist</code>: matches when the key is absent<ul>
<li><code>kubectl get pods -l &#39;!APP&#39;</code></li>
</ul>
</li>
</ul>
<h1 id="annotations">Annotations</h1>
<p>Similar to labels, but
<strong>non-identifying metadata</strong>
an application can read this metadata -&gt; and change its behavior accordingly</p>
<p>Imperative commands</p>
<pre><code>kubectl annotate pods myweb created-by=Jang</code></pre><pre><code>kubectl annotate pods myweb created-by=Kim --overwrite</code></pre><pre><code>kubectl annotate pods myweb created-by-</code></pre><p>YAML</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: myweb-label-anno
  labels:
    APP: apache
    ENV: staging
  annotations:
    Created-by: Jang
spec:
  containers:
    - name: myweb
      image: httpd
      ports:
        - containerPort: 80
          protocol: TCP</code></pre>
<hr>
]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes for Container Orchestration (22.05.16)]]></title>
            <link>https://velog.io/@sunny-10/22.05.16</link>
            <guid>https://velog.io/@sunny-10/22.05.16</guid>
            <pubDate>Mon, 16 May 2022 08:24:35 GMT</pubDate>
            <description><![CDATA[<h1 id="k8s-클러스터-업그레이드">Upgrading a k8s Cluster</h1>
<blockquote>
<p>Change the Ubuntu package repositories
sed -i &#39;s/security.ubuntu.com/mirror.kakao.com/g&#39; /etc/apt/sources.list
sed -i &#39;s/archive.ubuntu.com/mirror.kakao.com/g&#39; /etc/apt/sources.list
sudo apt update</p>
</blockquote>
<blockquote>
<p><a href="https://kubernetes.io/ko/releases/version-skew-policy/">https://kubernetes.io/ko/releases/version-skew-policy/</a></p>
</blockquote>
<ol>
<li>kube-apiserver</li>
<li>kube-controller-manager, kube-cloud-controller-manager, kube-scheduler</li>
<li>kubelet(Control Plane -&gt; Worker Node)</li>
<li>kube-proxy(Control Plane -&gt; Worker Node)</li>
</ol>
<p>Control Plane (api -&gt; cm, ccm, sched -&gt; let, proxy) --&gt; Worker Node (let, proxy)</p>
<h2 id="kubeadm-업그레이드">kubeadm Upgrade</h2>
<ol>
<li>Upgrade kubeadm on the Control Plane</li>
<li>Upgrade api, cm, sched with kubeadm on the Control Plane</li>
<li>Upgrade kubelet and kubectl on the Control Plane</li>
<li>Upgrade kubeadm on the Worker Node</li>
<li>Upgrade the Worker Node with kubeadm</li>
<li>Upgrade kubelet and kubectl on the Worker Node</li>
</ol>
<p>Control Plane</p>
<pre><code>sudo apt-mark unhold kubeadm</code></pre><pre><code>sudo apt update</code></pre><pre><code>sudo apt upgrade kubeadm=1.22.9-00 -y</code></pre><pre><code>kubeadm version</code></pre><pre><code>sudo apt-mark hold kubeadm</code></pre><pre><code>sudo kubeadm upgrade plan</code></pre><pre><code>sudo kubeadm upgrade apply v1.22.9</code></pre><pre><code>sudo apt-mark unhold kubelet kubectl</code></pre><pre><code>sudo apt upgrade kubectl=1.22.9-00 kubelet=1.22.9-00 -y</code></pre><pre><code>sudo apt-mark hold kubelet kubectl</code></pre><pre><code>kubelet --version
kubectl version</code></pre><blockquote>
<p>drain the node</p>
</blockquote>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet</code></pre><blockquote>
<p>uncordon the node</p>
</blockquote>
<pre><code>systemctl status kubelet</code></pre><p>Worker Node</p>
<pre><code>sudo apt-mark unhold kubeadm</code></pre><pre><code>sudo apt update</code></pre><pre><code>sudo apt upgrade kubeadm=1.22.9-00 -y</code></pre><pre><code>kubeadm version</code></pre><pre><code>sudo apt-mark hold kubeadm</code></pre>
<pre><code>sudo kubeadm upgrade node</code></pre><blockquote>
<p>drain the node</p>
</blockquote>
<pre><code>sudo apt-mark unhold kubelet kubectl</code></pre><pre><code>sudo apt upgrade kubectl=1.22.9-00 kubelet=1.22.9-00 -y</code></pre><pre><code>sudo apt-mark hold kubelet kubectl</code></pre><pre><code>kubelet --version
kubectl version</code></pre><pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet</code></pre><blockquote>
<p>uncordon the node</p>
</blockquote>
<hr>
<h1 id="kubespray">Kubespray</h1>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/setup/production-environment/tools/kubespray/">https://kubernetes.io/ko/docs/setup/production-environment/tools/kubespray/</a>
<a href="https://kubespray.io/#/">https://kubespray.io/#/</a>
<a href="https://github.com/kubernetes-sigs/kubespray">https://github.com/kubernetes-sigs/kubespray</a></p>
</blockquote>
<p>Control Plane 1
Worker Node 3 (1 Control Plane + 2 Worker Nodes)</p>
<p>CPU: 2, Memory 3GB</p>
<p><code>~/vagrant/k8s</code></p>
<pre><code class="language-ruby">Vagrant.configure(&quot;2&quot;) do |config|
    # Define VM
    config.vm.define &quot;k8s-node1&quot; do |ubuntu|
        ubuntu.vm.box = &quot;ubuntu/focal64&quot;
        ubuntu.vm.hostname = &quot;k8s-node1&quot;
        ubuntu.vm.network &quot;private_network&quot;, ip: &quot;192.168.100.100&quot;
        ubuntu.vm.provider &quot;virtualbox&quot; do |vb|
            vb.name = &quot;k8s-node1&quot;
            vb.cpus = 2
            vb.memory = 3000
        end
    end
    config.vm.define &quot;k8s-node2&quot; do |ubuntu|
        ubuntu.vm.box = &quot;ubuntu/focal64&quot;
        ubuntu.vm.hostname = &quot;k8s-node2&quot;
        ubuntu.vm.network &quot;private_network&quot;, ip: &quot;192.168.100.101&quot;
        ubuntu.vm.provider &quot;virtualbox&quot; do |vb|
            vb.name = &quot;k8s-node2&quot;
            vb.cpus = 2
            vb.memory = 3000
        end
    end
    config.vm.define &quot;k8s-node3&quot; do |ubuntu|
        ubuntu.vm.box = &quot;ubuntu/focal64&quot;
        ubuntu.vm.hostname = &quot;k8s-node3&quot;
        ubuntu.vm.network &quot;private_network&quot;, ip: &quot;192.168.100.102&quot;
        ubuntu.vm.provider &quot;virtualbox&quot; do |vb|
            vb.name = &quot;k8s-node3&quot;
            vb.cpus = 2
            vb.memory = 3000
        end
    end

    config.vm.provision &quot;shell&quot;, inline: &lt;&lt;-SHELL
      sed -i &#39;s/PasswordAuthentication no/PasswordAuthentication yes/g&#39; /etc/ssh/sshd_config
      sed -i &#39;s/archive.ubuntu.com/mirror.kakao.com/g&#39; /etc/apt/sources.list
      sed -i &#39;s/security.ubuntu.com/mirror.kakao.com/g&#39; /etc/apt/sources.list
      systemctl restart ssh
    SHELL
end</code></pre>
<h2 id="1-ssh-키-생성-및-복사">1. Generate and Copy SSH Keys</h2>
<pre><code>ssh-keygen</code></pre><pre><code>ssh-copy-id vagrant@192.168.100.100
ssh-copy-id vagrant@192.168.100.101
ssh-copy-id vagrant@192.168.100.102</code></pre><h2 id="2-kubespray-소스-다운로드">2. Download the kubespray Source</h2>
<pre><code>cd ~</code></pre><pre><code>git clone -b v2.18.1 https://github.com/kubernetes-sigs/kubespray.git</code></pre><pre><code>cd kubespray</code></pre><h2 id="3-ansible-netaddr-jinja-등-패키지-설치">3. Install ansible, netaddr, jinja, and Other Packages</h2>
<pre><code>sudo apt update
sudo apt install python3-pip -y</code></pre><pre><code>sudo pip3 install -r requirements.txt</code></pre><h2 id="4-인벤토리-구성">4. Configure the Inventory</h2>
<pre><code>cp -rpf inventory/sample/ inventory/mycluster</code></pre><p><code>inventory/mycluster/inventory.ini</code></p>
<pre><code class="language-ini">[all]
node1 ansible_host=192.168.100.100 ip=192.168.100.100
node2 ansible_host=192.168.100.101 ip=192.168.100.101
node3 ansible_host=192.168.100.102 ip=192.168.100.102

[kube_control_plane]
node1

[etcd]
node1

[kube_node]
node1
node2
node3

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr</code></pre>
<h2 id="5-변수-설정">5. Set Variables</h2>
<p>(kubespray variable definitions and reference - <a href="https://kubespray.io/#/docs/vars">https://kubespray.io/#/docs/vars</a>)</p>
<p><code>inventory/mycluster/group_vars</code></p>
<h2 id="6-플레이북-실행">6. Run the Playbook</h2>
<pre><code>ansible all -m ping -i inventory/mycluster/inventory.ini</code></pre><pre><code>ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b </code></pre><h2 id="7-검증">7. Verify</h2>
<pre><code>mkdir ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown vagrant:vagrant ~/.kube/config</code></pre><pre><code>kubectl get nodes</code></pre><pre><code>kubectl get pods -A</code></pre><hr>
<h1 id="kubernetes-objects">Kubernetes Objects</h1>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/">https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/</a></p>
</blockquote>
<h2 id="오브젝트-종류">Object Types</h2>
<pre><code>kubectl api-resources</code></pre><ul>
<li><p>Label/LabelSelector</p>
</li>
<li><p>Workload</p>
<ul>
<li>Pod</li>
<li>Controller<ul>
<li>ReplicationController</li>
<li>ReplicaSets</li>
<li>DaemonSets</li>
<li>Jobs</li>
<li>CronJobs</li>
<li>Deployments</li>
<li>StatefulSets</li>
<li>HorizontalPodAutoscaler</li>
</ul>
</li>
</ul>
</li>
<li><p>Network</p>
<ul>
<li>Service</li>
<li>Endpoints</li>
<li>Ingress</li>
</ul>
</li>
<li><p>Storage</p>
<ul>
<li>PersistentVolume</li>
<li>PersistentVolumeClaim</li>
<li>ConfigMap</li>
<li>Secret</li>
</ul>
</li>
<li><p>Authentication</p>
<ul>
<li>ServiceAccount</li>
<li>RBAC<ul>
<li>Role</li>
<li>ClusterRole</li>
<li>RoleBinding</li>
<li>ClusterRoleBinding</li>
</ul>
</li>
</ul>
</li>
<li><p>Resource Isolation</p>
<ul>
<li>Namespaces</li>
</ul>
</li>
<li><p>Resource Limits</p>
<ul>
<li>Limits</li>
<li>Requests</li>
<li>ResourceQuota</li>
<li>LimitRange</li>
</ul>
</li>
<li><p>Scheduling</p>
<ul>
<li>NodeName</li>
<li>NodeSelector</li>
<li>Affinity<ul>
<li>Node Affinity</li>
<li>Pod Affinity</li>
<li>Pod Anti Affinity</li>
</ul>
</li>
<li>Taints/Tolerations</li>
<li>Drain/Cordon</li>
</ul>
</li>
</ul>
<h3 id="오브젝트의-버전">Object Versions</h3>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/reference/using-api/#api-%EA%B7%B8%EB%A3%B9">https://kubernetes.io/ko/docs/reference/using-api/#api-%EA%B7%B8%EB%A3%B9</a></p>
</blockquote>
<pre><code>kubectl api-versions</code></pre><p>apps/v1</p>
<ul>
<li>apps: group</li>
<li>v1: version</li>
</ul>
<blockquote>
<p>APIs without a group belong to the core group</p>
</blockquote>
<ul>
<li>Stable<ul>
<li>vX</li>
<li>v1, v2</li>
<li>stabilized versions</li>
</ul>
</li>
<li>Beta<ul>
<li>v1betaX, v2betaX</li>
<li>sufficiently tested, but not guaranteed error-free</li>
<li>features may change as the version advances<ul>
<li>downtime can occur: a restart may be needed to use certain features</li>
</ul>
</li>
<li>use with caution for mission-critical workloads</li>
</ul>
</li>
<li>Alpha<ul>
<li>v1alphaX, v2alphaX</li>
<li>disabled by default</li>
<li>API under development</li>
</ul>
</li>
</ul>
<p>Alpha -&gt; Beta -&gt; Stable</p>
<ul>
<li>v1alpha1 -&gt; v1alpha2 -&gt; v1alpha3 -&gt; v1beta1 -&gt; v1beta2 -&gt; v1</li>
</ul>
<h2 id="오브젝트-정의">Object Definition</h2>
<pre><code class="language-yaml">apiVersion:
kind:
metadata:
spec:</code></pre>
<ul>
<li>kind: the type of object</li>
<li>apiVersion: the supported version of the object</li>
<li>metadata: metadata for the object<ul>
<li>name, namespace, labels, annotations</li>
</ul>
</li>
<li>spec: the declared desired state of the object</li>
</ul>
<pre><code>kubectl explain pods
kubectl explain pods.metadata
kubectl explain pods.spec
kubectl explain pods.spec.containers
kubectl explain pods.spec.containers.image
kubectl explain pods.spec --recursive</code></pre><h2 id="오브젝트-관리">Object Management</h2>
<blockquote>
<p><a href="https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/object-management/">https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/object-management/</a></p>
</blockquote>
<ul>
<li>Imperative commands: kubectl commands only<ul>
<li><code>kubectl create</code></li>
<li><code>kubectl run</code></li>
<li><code>kubectl expose</code></li>
</ul>
</li>
<li><strong>Imperative object configuration</strong>: run YAML files one at a time, in order<ul>
<li><code>kubectl create -f a.yaml</code></li>
<li><code>kubectl replace -f a.yaml</code></li>
<li><code>kubectl patch -f a.yaml</code></li>
<li><code>kubectl delete -f a.yaml</code></li>
</ul>
</li>
<li>Declarative object configuration: apply a set of YAML files at once<ul>
<li><code>kubectl apply -f resources/</code></li>
</ul>
</li>
</ul>
]]></description>
        </item>
    </channel>
</rss>