<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>jupiter</title>
        <link>https://velog.io/</link>
        <description>개발기록</description>
        <lastBuildDate>Thu, 09 Oct 2025 07:11:03 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>jupiter</title>
            <url>https://velog.velcdn.com/images/jupiter-j/profile/dbd69c60-a14b-43b6-9dd9-a40badcfae4b/image.png</url>
            <link>https://velog.io/</link>
        </image>
        <copyright>Copyright (C) 2019. jupiter. All rights reserved.</copyright>
        <atom:link href="https://v2.velog.io/rss/jupiter-j" rel="self" type="application/rss+xml"/>
        <item>
            <title><![CDATA[ETCD 백업 및 복구 테스트]]></title>
            <link>https://velog.io/@jupiter-j/ETCD-%EB%B0%B1%EC%97%85-%EB%B0%8F-%EB%B3%B5%EA%B5%AC-%ED%85%8C%EC%8A%A4%ED%8A%B8</link>
            <guid>https://velog.io/@jupiter-j/ETCD-%EB%B0%B1%EC%97%85-%EB%B0%8F-%EB%B3%B5%EA%B5%AC-%ED%85%8C%EC%8A%A4%ED%8A%B8</guid>
            <pubDate>Thu, 09 Oct 2025 07:11:03 GMT</pubDate>
            <description><![CDATA[<blockquote>
<ul>
<li>Backup configuration for restoring an etcd cluster when it breaks</li>
</ul>
</blockquote>
<br>

<h2 id="1-etcd-ctl-명령어-전에-알아야-할-정보">1. What to know before running etcdctl commands</h2>
<table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
<th>How to check</th>
</tr>
</thead>
<tbody><tr>
<td>① <strong>etcd endpoint address (IP:Port)</strong></td>
<td>The address and port etcd exposes for client/cluster communication</td>
<td><code>cat /etc/etcd/etcd.conf</code> or <code>ps -ef | grep etcd</code></td>
</tr>
<tr>
<td>② <strong>Certificate paths (CA / admin cert / admin key)</strong></td>
<td>The pem files needed for TLS authentication</td>
<td><code>/etc/ssl/etcd/ssl/</code> or <code>/etc/kubernetes/pki/etcd/</code></td>
</tr>
<tr>
<td>③ <strong>etcdctl API version</strong></td>
<td>The API version etcdctl uses</td>
<td><code>ETCDCTL_API=3</code> (always set to v3)</td>
</tr>
<tr>
<td>④ <strong>etcd version</strong></td>
<td>Installed etcd version (to check command option compatibility)</td>
<td><code>etcdctl version</code> or <code>etcd --version</code></td>
</tr>
</tbody></table>
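<p>Everything in the table above can be gathered in one quick pass before touching etcdctl; a minimal sketch, assuming the paths used throughout this post:</p>
<pre><code class="language-bash"># collect the basics needed before using etcdctl (paths assume this cluster's layout)
ps -ef | grep [e]tcd            # confirm the etcd process and the flags/ports it runs with
ls /etc/ssl/etcd/ssl/           # CA / admin cert / admin key pem files
etcd --version                  # installed etcd version
ETCDCTL_API=3 etcdctl version   # etcdctl client and API version</code></pre>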
<br>

<h2 id="2-etcd-확인">2. Checking etcd</h2>
<ul>
<li>Check the etcd service</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~# systemctl status etcd.service 
● etcd.service - etcd
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-10-03 06:09:13 UTC; 4 days ago
   Main PID: 9085 (etcd)
      Tasks: 12 (limit: 19147)
     Memory: 667.9M (peak: 779.4M)
        CPU: 5h 42min 3.431s
     CGroup: /system.slice/etcd.service
             └─9085 /usr/local/bin/etcd</code></pre>
<ul>
<li>Check the etcd process</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~# ps -ef | grep etcd
root        9085       1  4 Oct03 ?        05:42:03 /usr/local/bin/etcd
root       16125   15986  7 Oct03 ?        08:25:58 kube-apiserver --advertise-address=172.10.10.200 --allow-privileged=true --anonymous-auth=True --apiserver-count=3 --authorization-mode=Node,RBAC --bind-address=:: --client-ca-file=/etc/kubernetes/ssl/ca.crt --default-not-ready-toleration-seconds=300 --default-unreachable-toleration-seconds=300 --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=False --enable-bootstrap-token-auth=true --endpoint-reconciler-type=lease --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem --etcd-certfile=/etc/ssl/etcd/ssl/node-thk-master-1.pem --etcd-compaction-interval=5m0s --etcd-keyfile=/etc/ssl/etcd/ssl/node-thk-master-1-key.pem --etcd-servers=https://172.10.10.200:2379,https://172.10.10.123:2379,https://172.10.10.202:2379 --event-ttl=1h0m0s --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt </code></pre>
<ul>
<li>Check the etcd service unit file</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~# cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
After=network.target

[Service]
Type=notify
User=root
EnvironmentFile=/etc/etcd.env
ExecStart=/usr/local/bin/etcd
NotifyAccess=all
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target</code></pre>
<ul>
<li>Check the etcd configuration file</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~#  cat /etc/etcd.env
# Environment file for etcd 3.5.16
ETCD_DATA_DIR=/var/lib/etcd
ETCD_ADVERTISE_CLIENT_URLS=https://172.10.10.200:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.10.10.200:2380
ETCD_INITIAL_CLUSTER_STATE=existing
ETCD_METRICS=basic
ETCD_LISTEN_CLIENT_URLS=https://172.10.10.200:2379,https://127.0.0.1:2379
ETCD_ELECTION_TIMEOUT=5000
ETCD_HEARTBEAT_INTERVAL=250
ETCD_INITIAL_CLUSTER_TOKEN=k8s_etcd
ETCD_LISTEN_PEER_URLS=https://172.10.10.200:2380
ETCD_NAME=thk-master-1
ETCD_PROXY=off
ETCD_INITIAL_CLUSTER=thk-master-1=https://172.10.10.200:2380,thk-master-2=https://172.10.10.123:2380,thk-master-3=https://172.10.10.202:2380
ETCD_AUTO_COMPACTION_RETENTION=8
ETCD_SNAPSHOT_COUNT=100000
# Flannel need etcd v2 API
ETCD_ENABLE_V2=true

# TLS settings
ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-thk-master-1.pem
ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-thk-master-1-key.pem
ETCD_CLIENT_CERT_AUTH=true

ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
ETCD_PEER_CERT_FILE=/etc/ssl/etcd/ssl/member-thk-master-1.pem
ETCD_PEER_KEY_FILE=/etc/ssl/etcd/ssl/member-thk-master-1-key.pem
ETCD_PEER_CLIENT_CERT_AUTH=True

# CLI settings
ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem
ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-thk-master-1.pem

# ETCD 3.5.x issue
# https://groups.google.com/a/kubernetes.io/g/dev/c/B7gJs88XtQc/m/rSgNOzV2BwAJ?utm_medium=email&amp;utm_source=footer
ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK=True

ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s</code></pre>
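<p>Since <code>/etc/etcd.env</code> already defines the ETCDCTL_* variables, one way to avoid repeating the <code>--cacert/--cert/--key</code> flags is to export them into the current shell first; a small sketch, assuming the file format shown above:</p>
<pre><code class="language-bash"># export the ETCDCTL_* variables from /etc/etcd.env, then call etcdctl without TLS flags
set -a
. /etc/etcd.env
set +a
ETCDCTL_API=3 etcdctl endpoint status -w table</code></pre>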
<ul>
<li>Check existing backup snapshots</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~# cd /var/backups/
root@thk-master-1:/var/backups# ll
total 772
drwxr-xr-x  3 root root   4096 Oct  4 00:00 ./
drwxr-xr-x 13 root root   4096 Oct  3 03:45 ../
-rw-r--r--  1 root root  40960 Oct  4 00:00 alternatives.tar.0
-rw-r--r--  1 root root  37736 Oct  3 05:37 apt.extended_states.0
-rw-r--r--  1 root root      0 Oct  4 00:00 dpkg.arch.0
-rw-r--r--  1 root root   1518 Jun 26 12:55 dpkg.diversions.0
-rw-r--r--  1 root root    100 Jun 26 12:52 dpkg.statoverride.0
-rw-r--r--  1 root root 685461 Oct  3 05:37 dpkg.status.0
drw-------  3 root root   4096 Oct  3 06:09 etcd-2025-10-03_06:09:10/
root@thk-master-1:/var/backups# cd etcd-2025-10-03_06\:09\:10/
root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# ls
member  snapshot.db

root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# cat snapshot.db 
��
  ���������
          }2���0���
                   ������������59�_�
� �D�            CY l �� %�alarmauth
                                   authRevisionauthRolesauthUsersclusterclusterVersion3.5.0keyleasemembers0Z�Z�Z165748cefbffad9d{&quot;id&quot;:1609835445636541853,&quot;peerURLs&quot;:[&quot;https://172.10.10.200:2380&quot;],&quot;name&quot;:&quot;thk-master-1&quot;}58c4e282fe1a3b29{&quot;id&quot;:6396486423009704745,&quot;peerURLs&quot;:[&quot;https://172.10.10.123:2380&quot;],&quot;name&quot;:&quot;thk-master-2&quot;}7c7b975a758bd6b9{&quot;id&quot;:8969929497613424313,&quot;peerURLs&quot;:[&quot;https://172.10.10.202:2380&quot;],&quot;name&quot;:&quot;thk-master-3&quot;}members_removedmeta0        [confState{&quot;voters&quot;:[1609835445636541853,6396486423009704745,8969929497613424313],&quot;auto_leave&quot;:false}consistent_index
                                                                                                       term
� �D�            CY l �� %�alarmauth
                                   authRevisionauthRolesauthUsersclusterclusterVersion3.5.0keyleasemembers0Z�Z�Z165748cefbffad9d{&quot;id&quot;:1609835445636541853,&quot;peerURLs&quot;:[&quot;https://172.10.10.200:2380&quot;],&quot;name&quot;:&quot;thk-master-1&quot;}58c4e282fe1a3b29{&quot;id&quot;:6396486423009704745,&quot;peerURLs&quot;:[&quot;https://172.10.10.123:2380&quot;],&quot;name&quot;:&quot;thk-master-2&quot;}7c7b975a758bd6b9{&quot;id&quot;:8969929497613424313,&quot;peerURLs&quot;:[&quot;https://172.10.10.202:2380&quot;],&quot;name&quot;:&quot;thk-master-3&quot;}members_removedmeta0        [confState{&quot;voters&quot;:[1609835445636541853,6396486423009704745,8969929497613424313],&quot;auto_leave&quot;:false}consistent_indeterm
� �D�             6 I ^ u � alarmauth
��@��\��8zU9mH~��f�nroot@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# ^Csemembersmembers_removedmeta�@$�m

root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# etcdutl snapshot status /var/backups/etcd-2025-10-03_06\:09\:10/snapshot.db  -w table
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 84c36aa6 |        0 |          8 |      20 kB |
+----------+----------+------------+------------+
root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# 
</code></pre>
<ul>
<li>Check etcd status</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
&gt; --cacert=/etc/ssl/etcd/ssl/ca.pem \
&gt; --cert=/etc/ssl/etcd/ssl/admin-thk-master-1.pem \
&gt; --key=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem \
&gt; endpoint status -w table</code></pre>
<pre><code class="language-bash">root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/ssl/etcd/ssl/ca.pem --cert=/etc/ssl/etcd/ssl/admin-thk-master-1.pem --key=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem endpoint status -w table
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://127.0.0.1:2379 | 165748cefbffad9d |  3.5.16 |   59 MB |     false |      false |         5 |    3494579 |            3494579 |        |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
root@thk-master-1:/var/backups/etcd-2025-10-03_06:09:10# </code></pre>
<pre><code class="language-bash">ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.10.10.200:2379,https://172.10.10.123:2379,https://172.10.10.202:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-thk-master-1.pem \
  --key=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem \
  endpoint status -w table
</code></pre>
<pre><code class="language-bash">+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://172.10.10.200:2379 | 165748cefbffad9d |  3.5.16 |   59 MB |     false |      false |         5 |    4334275 |            4334275 |        |
| https://172.10.10.123:2379 | 58c4e282fe1a3b29 |  3.5.16 |   59 MB |     false |      false |         5 |    4334275 |            4334275 |        |
| https://172.10.10.202:2379 | 7c7b975a758bd6b9 |  3.5.16 |   59 MB |      true |      false |         5 |    4334275 |            4334275 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
root@thk-master-1:~# </code></pre>
<h3 id="etcd-리더-확인">Checking the etcd leader</h3>
<pre><code class="language-bash">root@thk-master-1:~# ETCDCTL_API=3 etcdctl   --endpoints=https://172.10.10.200:2379,https://172.10.10.123:2379,https://172.10.10.202:2379   --cacert=/etc/ssl/etcd/ssl/ca.pem   --cert=/etc/ssl/etcd/ssl/admin-thk-master-1.pem   --key=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem   endpoint status -w table
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://172.10.10.200:2379 | 165748cefbffad9d |  3.5.16 |   59 MB |     false |      false |         5 |    4335173 |            4335173 |        |
| https://172.10.10.123:2379 | 58c4e282fe1a3b29 |  3.5.16 |   59 MB |     false |      false |         5 |    4335173 |            4335173 |        |
| https://172.10.10.202:2379 | 7c7b975a758bd6b9 |  3.5.16 |   59 MB |      true |      false |         5 |    4335173 |            4335173 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+</code></pre>
<h3 id="etcd-클러스터-확인">Checking etcd cluster health</h3>
<pre><code class="language-bash">root@thk-master-1:~# ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.10.10.200:2379,https://172.10.10.123:2379,https://172.10.10.202:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-thk-master-1.pem \
  --key=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem \
  endpoint health -w table
+----------------------------+--------+------------+-------+
|          ENDPOINT          | HEALTH |    TOOK    | ERROR |
+----------------------------+--------+------------+-------+
| https://172.10.10.202:2379 |   true | 8.443392ms |       |
| https://172.10.10.123:2379 |   true | 8.440635ms |       |
| https://172.10.10.200:2379 |   true | 8.528177ms |       |
+----------------------------+--------+------------+-------+
root@thk-master-1:~# </code></pre>
<br>

<br>


<h2 id="3-백업-실행">3. Running a backup</h2>
<blockquote>
<p>“Save the snapshot on a specific master (leader - master1). For the endpoints, it is safer to specify a cluster-internal IP (e.g. 172.20.10.122) instead of the local 127.0.0.1:2379, and because every node in the etcd cluster holds the same data, the backup can be taken even on a node that is not the leader.”</p>
</blockquote>
<ul>
<li>Taking the backup on the leader node (the node currently driving etcd) is the most intuitive choice, but it does not have to be the leader; even if leadership moves for a moment, etcd keeps the data synchronized internally.</li>
<li><code>127.0.0.1</code> means “this machine itself”, but the etcd certificates usually contain only the real IP (e.g. 172.20.10.122), not <code>127.0.0.1</code>. Connecting through <code>127.0.0.1</code> can therefore fail with a certificate name mismatch, and a backup script run from another server cannot reach <code>127.0.0.1</code> at all. That is why specifying the real address is safer.</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~# ETCDCTL_API=3 etcdctl --endpoints=https://172.10.10.200:2379 --cacert=/etc/ssl/etcd/ssl/ca.pem --cert=/etc/ssl/etcd/ssl/node-thk-master-1.pem --key=/etc/ssl/etcd/ssl/node-thk-master-1-key.pem snapshot save /var/backups/etcd-stanpshot-$(date +%Y%m%d%H%M).db
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-10-09T05:29:56.349161Z&quot;,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:65&quot;,&quot;msg&quot;:&quot;created temporary db file&quot;,&quot;path&quot;:&quot;/var/backups/etcd-stanpshot-202510090529.db.part&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-10-09T05:29:56.355473Z&quot;,&quot;logger&quot;:&quot;client&quot;,&quot;caller&quot;:&quot;v3@v3.5.16/maintenance.go:212&quot;,&quot;msg&quot;:&quot;opened snapshot stream; downloading&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-10-09T05:29:56.355540Z&quot;,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:73&quot;,&quot;msg&quot;:&quot;fetching snapshot&quot;,&quot;endpoint&quot;:&quot;https://172.10.10.200:2379&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-10-09T05:29:56.808742Z&quot;,&quot;logger&quot;:&quot;client&quot;,&quot;caller&quot;:&quot;v3@v3.5.16/maintenance.go:220&quot;,&quot;msg&quot;:&quot;completed snapshot read; closing&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-10-09T05:29:57.503369Z&quot;,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:88&quot;,&quot;msg&quot;:&quot;fetched snapshot&quot;,&quot;endpoint&quot;:&quot;https://172.10.10.200:2379&quot;,&quot;size&quot;:&quot;59 MB&quot;,&quot;took&quot;:&quot;1 second ago&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-10-09T05:29:57.503496Z&quot;,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:97&quot;,&quot;msg&quot;:&quot;saved&quot;,&quot;path&quot;:&quot;/var/backups/etcd-stanpshot-202510090529.db&quot;}
Snapshot saved at /var/backups/etcd-stanpshot-202510090529.db</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/515b3b52-5064-45c5-bd79-790bf84d1e12/image.png" alt=""></p>
<br>


<h3 id="정상-복제인지-확인">Verifying the snapshot is healthy</h3>
<pre><code class="language-bash">root@thk-master-1:~# etcdctl --endpoints=$ETCDCTL_ENDPOINTS snapshot status /var/backups/etcd-stanpshot-202510090529.db -w table
Deprecated: Use `etcdutl snapshot status` instead.

+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 10f4598d |  3999901 |       6693 |      59 MB |
+----------+----------+------------+------------+</code></pre>
<table>
<thead>
<tr>
<th>Field</th>
<th>Meaning</th>
<th>In plain terms</th>
</tr>
</thead>
<tbody><tr>
<td><strong>HASH</strong></td>
<td>Checksum unique to the file</td>
<td>A number used to verify the backup is not corrupted (the file's fingerprint)</td>
</tr>
<tr>
<td><strong>REVISION</strong></td>
<td>etcd's internal data version</td>
<td>The etcd “last saved number” at backup time; higher means more recent</td>
</tr>
<tr>
<td><strong>TOTAL KEYS</strong></td>
<td>Number of stored keys (data entries)</td>
<td>Roughly the number of Kubernetes resources (pods, services, secrets, etc.)</td>
</tr>
<tr>
<td><strong>TOTAL SIZE</strong></td>
<td>File size</td>
<td>Total etcd data volume (a few tens of MB is normal)</td>
</tr>
</tbody></table>
<br>

<h3 id="스냅샷-임시-복원">Temporarily restoring the snapshot</h3>
<ul>
<li>The <code>etcdctl snapshot restore</code> command <strong>unpacks the backed-up snapshot.db file and rebuilds it into a state etcd can run from again</strong></li>
</ul>
<pre><code class="language-bash">ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-stanpshot-202510090529.db --data-dir /tmp/etcd-from-snapshot</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/1f46de8f-84e5-47af-9a65-55d929c63dad/image.png" alt=""></p>
<ul>
<li>The data is converted into the internal layout etcd actually uses at runtime</li>
</ul>
<pre><code class="language-bash">/tmp/etcd-from-snapshot/
└── member/
    ├── snap/    ← etcd snapshot (data storage)
    └── wal/     ← WAL (write-ahead / transaction log)
</code></pre>
<br>

<h3 id="임시-etcd-실행">Running a temporary etcd</h3>
<ul>
<li>Start a standalone etcd from the backed-up data (with the port changed to 23790)</li>
<li>The /tmp/etcd-from-snapshot directory is a completely separate data directory created by unpacking the snapshot file (snapshot.db)</li>
</ul>
<pre><code class="language-bash">&quot;msg&quot;:&quot;serving client traffic insecurely&quot;,&quot;address&quot;:&quot;127.0.0.1:23790&quot;
&quot;msg&quot;:&quot;skipped leadership transfer for single voting member cluster&quot;</code></pre>
<ul>
<li><code>&quot;127.0.0.1:23790&quot;</code> → open only locally (no external access)</li>
<li><code>&quot;single voting member cluster&quot;</code> → a single-node cluster consisting of itself only</li>
</ul>
<p><strong>Why this has no impact on the production environment</strong></p>
<ol>
<li><strong>Runs on different ports</strong> → does not collide with the production etcd's 2379/2380</li>
<li><strong>Uses a different data directory</strong> → never reads or writes the original files</li>
<li><strong>Has a different cluster ID</strong> → cannot talk to the production raft cluster</li>
<li><strong>Listens only on localhost (127.0.0.1)</strong> → unreachable from the outside network</li>
</ol>
<pre><code class="language-bash">etcd --data-dir /tmp/etcd-from-snapshot \
  --listen-client-urls http://127.0.0.1:23790 \
  --advertise-client-urls http://127.0.0.1:23790
</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/c73e9f3d-7c32-4cc1-b82c-d03d61749ebf/image.png" alt=""></p>
<br>

<h3 id="전체-키-확인">Checking all keys</h3>
<pre><code class="language-bash">root@thk-master-1:~# ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:23790 get &quot;&quot; --prefix --keys-only</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/09f1d34a-4110-48dc-9f7c-e1c96434a4a2/image.png" alt=""></p>
<pre><code class="language-bash">ETCDCTL_API=3 etcdctl \
  --endpoints=http://127.0.0.1:23790 \
  get /registry/ --prefix --keys-only
</code></pre>
<ul>
<li>A practical verification step: check whether the etcd backup actually contains real Kubernetes resources (especially <strong>Pod</strong> data)</li>
<li>etcd is the database that stores <strong>all Kubernetes state as key-value pairs.</strong> Every entry is stored under the /registry/ prefix</li>
</ul>
<table>
<thead>
<tr>
<th>Kubernetes resource</th>
<th>Example etcd key path</th>
</tr>
</thead>
<tbody><tr>
<td>Namespace</td>
<td><code>/registry/namespaces/default</code></td>
</tr>
<tr>
<td>Pod</td>
<td><code>/registry/pods/&lt;namespace&gt;/&lt;pod-name&gt;</code></td>
</tr>
<tr>
<td>Service</td>
<td><code>/registry/services/specs/&lt;namespace&gt;/&lt;service-name&gt;</code></td>
</tr>
<tr>
<td>ConfigMap</td>
<td><code>/registry/configmaps/&lt;namespace&gt;/&lt;configmap-name&gt;</code></td>
</tr>
<tr>
<td>Secret</td>
<td><code>/registry/secrets/&lt;namespace&gt;/&lt;secret-name&gt;</code></td>
</tr>
</tbody></table>
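<p>The values under /registry/ are stored by the API server in protobuf form, so printing one mostly shows binary data; still, fetching a well-known key is a quick sanity check against the temporary etcd. A sketch (the namespace key is only an example):</p>
<pre><code class="language-bash"># read one well-known key from the temporary etcd (values are protobuf, so output is partly binary)
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:23790 \
  get /registry/namespaces/default --print-value-only | head -c 200</code></pre>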
<pre><code class="language-bash">root@thk-master-1:~# ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:23790   get /registry/ --prefix --keys-only &gt; etcd-keys.txt
root@thk-master-1:~# ls
certs_new.sh  etcd-keys.txt  kube-manifests
root@thk-master-1:~# grep &quot;/registry/pods/&quot; etcd-keys.txt
/registry/pods/auth/keycloak-694fc9d848-2h2zl
/registry/pods/auth/keycloak-694fc9d848-xn6d9
/registry/pods/auth/mariadb-keycloak-0
/registry/pods/auth/oauth2-proxy-admin-76db4f4f7d-d4ll5
/registry/pods/auth/oauth2-proxy-admin-76db4f4f7d-lk54g
/registry/pods/auth/oauth2-proxy-user-55bbf8579b-5cxm7
/registry/pods/auth/oauth2-proxy-user-55bbf8579b-fbtjt
/registry/pods/auth/oauth2-redis-admin-0
/registry/pods/auth/oauth2-redis-admin-1
/registry/pods/auth/oauth2-redis-admin-2
/registry/pods/auth/oauth2-redis-admin-3
/registry/pods/auth/oauth2-redis-admin-4
/registry/pods/auth/oauth2-redis-admin-5
/registry/pods/auth/oauth2-redis-user-0
/registry/pods/auth/oauth2-redis-user-1
/registry/pods/auth/oauth2-redis-user-2
/registry/pods/auth/oauth2-redis-user-3
/registry/pods/auth/oauth2-redis-user-4</code></pre>
<ul>
<li>This matches the actual Pod list</li>
</ul>
<pre><code class="language-bash">root@thk-deploy:~# k get po -n auth
NAME                                                    READY   STATUS      RESTARTS       AGE
keycloak-694fc9d848-2h2zl                               1/1     Running     0              5d2h
keycloak-694fc9d848-xn6d9                               1/1     Running     0              5d2h
mariadb-keycloak-0                                      1/1     Running     0              5d2h
oauth2-proxy-admin-76db4f4f7d-d4ll5                     1/1     Running     5 (5d2h ago)   5d2h
oauth2-proxy-admin-76db4f4f7d-lk54g                     1/1     Running     4 (5d2h ago)   5d2h
oauth2-proxy-user-55bbf8579b-5cxm7                      1/1     Running     5 (5d2h ago)   5d2h
oauth2-proxy-user-55bbf8579b-fbtjt                      1/1     Running     5 (5d2h ago)   5d2h
oauth2-redis-admin-0                                    1/1     Running     0              5d2h
oauth2-redis-admin-1                                    1/1     Running     0              5d2h
oauth2-redis-admin-2                                    1/1     Running     0              5d2h
oauth2-redis-admin-3                                    1/1     Running     0              5d2h
oauth2-redis-admin-4                                    1/1     Running     0              5d2h</code></pre>
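<p>Once the comparison is done, the temporary etcd can be stopped and its data directory removed so nothing is left behind; a small cleanup sketch, assuming it was started in the foreground as above:</p>
<pre><code class="language-bash"># stop the temporary etcd (Ctrl+C if it is still in the foreground) and remove its data dir
pkill -f &quot;etcd --data-dir /tmp/etcd-from-snapshot&quot; || true
rm -rf /tmp/etcd-from-snapshot</code></pre>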
<br>

<h2 id="4백업자동화">4. Backup automation</h2>
<h3 id="스크립트">Script</h3>
<pre><code class="language-bash">#!/bin/bash
# ===============================================
# etcd automatic backup script
# -----------------------------------------------
# 1. Set etcdctl environment variables (TLS auth)
# 2. Create a snapshot file (date-based name)
# 3. Automatically delete backups older than 7 days
# -----------------------------------------------
# Location   : /usr/local/bin/etcd_backup.sh
# Run with   : sudo bash /usr/local/bin/etcd_backup.sh
# Cron sample: automatic backup every day at 03:00
# 0 3 * * * /usr/local/bin/etcd_backup.sh
# ===============================================

set -e  # exit immediately on any error

# etcdctl environment variables
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-thk-master-1.pem  # ★ change per node
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem   # ★ change per node
export ETCDCTL_ENDPOINTS=&quot;https://172.10.10.200:2379,https://172.10.10.123:2379,https://172.10.10.202:2379&quot;

# Backup directory and file name
BACKUP_DIR=&quot;/var/backups&quot;
TIMESTAMP=$(date +%Y%m%d%H%M)
SNAPSHOT_FILE=&quot;$BACKUP_DIR/etcd-snapshot-$TIMESTAMP.db&quot;

# Run the backup
echo &quot;[$(date &#39;+%Y-%m-%d %H:%M:%S&#39;)] Starting etcd snapshot backup...&quot; &gt;&gt; /var/log/etcd_backup.log
etcdctl snapshot save &quot;$SNAPSHOT_FILE&quot; &gt;&gt; /var/log/etcd_backup.log 2&gt;&amp;1
echo &quot;[$(date &#39;+%Y-%m-%d %H:%M:%S&#39;)] Backup completed: $SNAPSHOT_FILE&quot; &gt;&gt; /var/log/etcd_backup.log

# Delete old backups (older than 7 days)
find &quot;$BACKUP_DIR&quot; -maxdepth 1 -type f -name &quot;etcd-snapshot-*.db&quot; -mtime +7 -exec rm -f {} \;
echo &quot;[$(date &#39;+%Y-%m-%d %H:%M:%S&#39;)] Old backups deleted (older than 7 days)&quot; &gt;&gt; /var/log/etcd_backup.log
</code></pre>
<ul>
<li>If you want to apply it on only one node (no real need to do it on all three…)</li>
</ul>
<pre><code class="language-bash">set -e  # exit immediately on any error

# etcdctl environment variables
export PATH=/usr/local/bin:/usr/bin:/bin
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-thk-master-1.pem  # ★ change per node
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-thk-master-1-key.pem   # ★ change per node
export ETCDCTL_ENDPOINTS=&quot;https://172.10.10.200:2379&quot;

# Backup directory and file name
BACKUP_DIR=&quot;/var/backups&quot;
TIMESTAMP=$(date +%Y%m%d%H%M)
SNAPSHOT_FILE=&quot;$BACKUP_DIR/etcd-snapshot-$TIMESTAMP.db&quot;

# Run the backup
echo &quot;[$(date &#39;+%Y-%m-%d %H:%M:%S&#39;)] Starting etcd snapshot backup...&quot; &gt;&gt; /var/log/etcd_backup.log
etcdctl snapshot save &quot;$SNAPSHOT_FILE&quot; &gt;&gt; /var/log/etcd_backup.log 2&gt;&amp;1
echo &quot;[$(date &#39;+%Y-%m-%d %H:%M:%S&#39;)] Backup completed: $SNAPSHOT_FILE&quot; &gt;&gt; /var/log/etcd_backup.log

# Delete old backups (older than 7 days)
find &quot;$BACKUP_DIR&quot; -maxdepth 1 -type f -name &quot;etcd-snapshot-*.db&quot; -mtime +7 -exec rm -f {} \;
echo &quot;[$(date &#39;+%Y-%m-%d %H:%M:%S&#39;)] Old backups deleted (older than 7 days)&quot; &gt;&gt; /var/log/etcd_backup.log
</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/8f36084a-5a5c-403b-8c6b-786ac1650e83/image.png" alt=""></p>
<h3 id="서비스-파일-생성">Creating the service files</h3>
<ul>
<li><p>systemd follows a “separation of responsibilities” principle</p>
<ul>
<li><p><code>.service</code> = <em>defines what to run</em></p>
</li>
<li><p><code>.timer</code> = <em>defines when to run it</em></p>
<p>For the timer to actually run the job, it always has to trigger a service unit</p>
<pre><code class="language-bash">[timer]
OnCalendar=*-*-* 00:00
↓
[systemd triggers automatically]
systemctl start etcd-backup.service
↓
[service]
ExecStart=/usr/local/bin/etcd_backup.sh</code></pre>
</li>
</ul>
</li>
<li><p>Unit 1: <strong><code>/etc/systemd/system/etcd-backup.service</code></strong></p>
</li>
</ul>
<pre><code class="language-bash">[Unit]
Description=etcd Snapshot Backup Service
After=network-online.target

[Service]
Type=oneshot
# path to the etcd backup script (adjust to the actual location)
ExecStart=/usr/local/bin/etcd_backup.sh
# make the log destination explicit (optional) -- the log can fill up &gt;&gt;&gt; extra rotation config is added below
StandardOutput=append:/var/log/etcd_backup.log
StandardError=append:/var/log/etcd_backup.log
</code></pre>
<ul>
<li>Unit 2: <strong><code>/etc/systemd/system/etcd-backup.timer</code></strong></li>
</ul>
<pre><code class="language-bash">[Unit]
Description=Timer for etcd Snapshot Backup (runs at 00:00 and 12:00)

[Timer]
# run every day at 00:00:00 and at 12:00:00
OnCalendar=*-*-* 00:00:00
OnCalendar=*-*-* 12:00:00
# if the system was powered off, run the missed backup right after boot
Persistent=true
# link to the service unit (detected automatically, but can be stated explicitly)
Unit=etcd-backup.service

[Install]
WantedBy=timers.target
</code></pre>
<ul>
<li>Apply</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:~# vi /etc/systemd/system/etcd-backup.service
root@thk-master-1:~# vi /etc/systemd/system/etcd-backup.timer
root@thk-master-1:~# systemctl daemon-reload
root@thk-master-1:~# systemctl enable etcd-backup.timer 
Created symlink /etc/systemd/system/timers.target.wants/etcd-backup.timer → /etc/systemd/system/etcd-backup.timer.
root@thk-master-1:~# systemctl start etcd-backup.timer 
root@thk-master-1:~# systemctl status etcd
etcd-backup.service  etcd-backup.timer    etcd.service         
root@thk-master-1:~# systemctl status etcd-backup.timer 
● etcd-backup.timer - Timer for etcd Snapshot Backup (runs at 00:00 and 12:00)
     Loaded: loaded (/etc/systemd/system/etcd-backup.timer; enabled; preset: enabled)
     Active: active (waiting) since Thu 2025-10-09 06:34:44 UTC; 14s ago
    Trigger: Thu 2025-10-09 12:00:00 UTC; 5h 25min left
   Triggers: ● etcd-backup.service

Oct 09 06:34:44 thk-master-1 systemd[1]: Started etcd-backup.timer - Timer for etcd Snapshot Backup (runs at&gt;
lines 1-7/7 (END)

root@thk-master-1:~# systemctl daemon-reload
root@thk-master-1:~# systemctl start etcd-backup.service
root@thk-master-1:~# systemctl status etcd-backup.service
○ etcd-backup.service - etcd Snapshot Backup Service
     Loaded: loaded (/etc/systemd/system/etcd-backup.service; static)
     Active: inactive (dead) since Thu 2025-10-09 06:45:23 UTC; 3s ago
TriggeredBy: ● etcd-backup.timer
    Process: 3450601 ExecStart=/usr/local/bin/etcd_backup.sh (code=exited, status=0/SUCCESS)
   Main PID: 3450601 (code=exited, status=0/SUCCESS)
        CPU: 418ms

Oct 09 06:45:22 thk-master-1 systemd[1]: Starting etcd-backup.service - etcd Snapshot Backup Service...
Oct 09 06:45:23 thk-master-1 systemd[1]: etcd-backup.service: Deactivated successfully.
Oct 09 06:45:23 thk-master-1 systemd[1]: Finished etcd-backup.service - etcd Snapshot Backup Service.</code></pre>
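<p>The timer's last and next run times can also be checked directly:</p>
<pre><code class="language-bash"># show when etcd-backup.timer last fired and when it fires next
systemctl list-timers etcd-backup.timer</code></pre>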
<ul>
<li>If a run fails, check the detailed error log: <code>cat /var/log/etcd_backup.log | tail -n 50</code></li>
</ul>
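<p>The systemd journal for the service is also worth checking when the log file itself gives no hint:</p>
<pre><code class="language-bash"># recent service-level logs (script errors, exit codes)
journalctl -u etcd-backup.service -n 50 --no-pager</code></pre>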
<br>

<h2 id="5-백업-로그-삭제-주기-설정">5. Setting a rotation policy for the backup log</h2>
<ul>
<li>Configure a rotation policy so the <code>/var/log/etcd_backup.log</code> file does not grow without bound.
The <code>/etc/logrotate.d/</code> directory is where per-log rotation policies (period, retention, compression, etc.) are defined<ul>
<li>Keep only the most recent N days of logs (e.g. 7 days)</li>
<li>Old logs are deleted automatically</li>
<li>Keeps things manageable even though every <code>systemd</code> timer run appends to the log</li>
</ul>
</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:/etc/logrotate.d# pwd
/etc/logrotate.d
root@thk-master-1:/etc/logrotate.d# ll
total 56
drwxr-xr-x   2 root root 4096 Jun 26 12:54 ./
drwxr-xr-x 111 root root 4096 Oct  5 05:24 ../
-rw-r--r--   1 root root  120 Feb  5  2024 alternatives
-rw-r--r--   1 root root  126 Apr 22  2022 apport
-rw-r--r--   1 root root  173 Mar 22  2024 apt
-rw-r--r--   1 root root   91 Jan  4  2024 bootlog
-rw-r--r--   1 root root  130 Oct 14  2019 btmp
-rw-r--r--   1 root root  144 May 19 20:00 cloud-init
-rw-r--r--   1 root root  112 Feb  5  2024 dpkg
-rw-r--r--   1 root root  248 Mar 22  2024 rsyslog
-rw-r--r--   1 root root  270 Apr  2  2024 ubuntu-pro-client
-rw-r--r--   1 root root  209 May 16  2023 ufw
-rw-r--r--   1 root root  235 Feb 12  2024 unattended-upgrades
-rw-r--r--   1 root root  145 Oct 14  2019 wtmp</code></pre>
<ul>
<li><code>vi /etc/logrotate.d/etcd_backup</code></li>
</ul>
<pre><code class="language-bash">/var/log/etcd_backup.log {
    daily                  # rotate the log every day
    rotate 7               # keep at most 7 files (one week's worth)
    compress               # gzip old logs
    delaycompress          # do not compress the most recently rotated log (easy to read)
    missingok              # no error if the file is missing
    notifempty             # do not rotate an empty log
    create 640 root root   # permissions for the new log file
    postrotate
        systemctl reload etcd-backup.timer &gt; /dev/null 2&gt;&amp;1 || true
    endscript
}</code></pre>
<ul>
<li>Debugging and testing</li>
</ul>
<pre><code class="language-bash">root@thk-master-1:/etc/logrotate.d# logrotate -fv /etc/logrotate.d/etcd_backup
reading config file /etc/logrotate.d/etcd_backup
acquired lock on state file /var/lib/logrotate/statusReading state from file: /var/lib/logrotate/status
Allocating hash table for state file, size 64 entries
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state

Handling 1 logs

rotating pattern: /var/log/etcd_backup.log  forced from command line (7 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/etcd_backup.log
error: skipping &quot;/var/log/etcd_backup.log&quot; because parent directory has insecure permissions (It&#39;s world writable or writable by group which is not &quot;root&quot;) Set &quot;su&quot; directive in config file to tell logrotate which user/group should be used for rotation.
Creating new state</code></pre>
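<p>The error above means logrotate refuses to rotate the file because a parent directory is writable by a non-root group. One possible fix, as the message itself suggests, is to add an <code>su</code> directive to the same config (assuming root:root ownership is what you want here; the postrotate block stays as written earlier and only the su line is new):</p>
<pre><code class="language-bash">/var/log/etcd_backup.log {
    su root root           # run rotation as root:root, as the error message suggests
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 640 root root
}</code></pre>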
<table>
<thead>
<tr>
<th>Rotation point</th>
<th>Remaining files</th>
<th>Notes</th>
</tr>
</thead>
<tbody><tr>
<td>Day 1</td>
<td><code>etcd_backup.log</code>, <code>etcd_backup.log.1</code></td>
<td>New log created</td>
</tr>
<tr>
<td>Day 2</td>
<td><code>.log</code>, <code>.1</code>, <code>.2.gz</code></td>
<td>Previous log compressed</td>
</tr>
<tr>
<td>Day 3</td>
<td><code>.log</code>, <code>.1</code>, <code>.2.gz</code>, <code>.3.gz</code></td>
<td></td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>Day 7</td>
<td><code>.log</code>, <code>.1</code>, <code>.2.gz</code>, <code>.3.gz</code>, <code>.4.gz</code>, <code>.5.gz</code>, <code>.6.gz</code>, <code>.7.gz</code></td>
<td>7 files kept</td>
</tr>
<tr>
<td>Day 8</td>
<td><code>.log</code>, <code>.1</code>, <code>.2.gz</code>, <code>.3.gz</code>, <code>.4.gz</code>, <code>.5.gz</code>, <code>.6.gz</code>, <code>.7.gz</code> → <strong>oldest <code>.7.gz</code> deleted</strong></td>
<td>Oldest file removed automatically</td>
</tr>
</tbody></table>
<br>
<br>
<br>


]]></description>
        </item>
        <item>
            <title><![CDATA[harbor debug]]></title>
            <link>https://velog.io/@jupiter-j/harbor-debug</link>
            <guid>https://velog.io/@jupiter-j/harbor-debug</guid>
            <pubDate>Thu, 03 Jul 2025 05:40:58 GMT</pubDate>
            <description><![CDATA[<h2 id="서버에서-harbor가-있는지-확인">Check whether Harbor is present on the server</h2>
<ul>
<li>Check the Harbor processes<pre><code>[root@k8s-worker ~]# ps -ef | grep harbor
10000     218032  217960  0 15:46 ?        00:00:03 /home/harbor/harbor_registryctl -c /etc/registryctl/config.yml
10000     218334  218314  2 15:46 ?        00:00:21 /harbor/harbor_core
10000     218835  218806  2 15:46 ?        00:00:27 /harbor/harbor_jobservice -c /etc/jobservice/config.yml
root      232919   93404  0 16:02 pts/0    00:00:00 grep --color=auto harbor</code></pre></li>
<li>Check the Harbor domain
  <code>vi /etc/hosts</code><pre><code>[root@k8s-worker ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.68 k8s-master
192.168.0.69 k8s-worker harbor.test.com</code></pre><code>find / -name &quot;*harbor.yml*&quot;</code></li>
<li>Check the Harbor port
  (most likely one of the Docker-related ports)<pre><code>[root@k8s-worker ~]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      775/sshd
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      823/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      1355/kube-proxy
tcp        0      0 127.0.0.1:1514          0.0.0.0:*               LISTEN      217653/docker-proxy
tcp        0      0 127.0.0.1:39243         0.0.0.0:*               LISTEN      216752/containerd
tcp        0      0 127.0.0.1:9099          0.0.0.0:*               LISTEN      2059/calico-node
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      218487/docker-proxy
tcp        0      0 0.0.0.0:179             0.0.0.0:*               LISTEN      2384/bird
tcp6       0      0 :::22                   :::*                    LISTEN      775/sshd
tcp6       0      0 :::10250                :::*                    LISTEN      823/kubelet
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::80                   :::*                    LISTEN      218502/docker-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      1355/kube-proxy</code></pre>You need to confirm the port before <code>docker login</code> will work.
(If it was not registered in <code>/etc/docker/daemon.json</code>, checking that file is another way to find it)
<img src="https://velog.velcdn.com/images/jupiter-j/post/a88a9d3e-bf55-4ca6-8152-1d72f4c7751a/image.png" alt=""> <img src="https://velog.velcdn.com/images/jupiter-j/post/bd9f39ac-adf9-4715-a589-c28187748837/image.png" alt=""><br>

</li>
</ul>
<h2 id="이미지-pull에러가-났을경우-테스트">Testing when an image pull error occurs</h2>
<ul>
<li><p>Ask <strong>containerd on the Kubernetes worker node</strong> to pull the jupiter/nginx:250702 image from Harbor: <code>ctr --debug images pull &lt;registry domain / project / image:tag&gt;</code> </p>
<pre><code>[root@k8s-worker certs]# ctr --debug images pull harbor.test.com/jupiter/nginx:250702
DEBU[0000] fetching                                      image=&quot;harbor.test.com/jupiter/nginx:250702&quot;
DEBU[0000] resolving                                     host=harbor.test.com
DEBU[0000] do request                                    host=harbor.test.com request.header.accept=&quot;application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*&quot; request.header.user-agent=containerd/v1.7.13 request.method=HEAD url=&quot;https://harbor.test.com/v2/jupiter/nginx/manifests/250702&quot;
INFO[0000] trying next host                              error=&quot;failed to do request: Head \&quot;https://harbor.test.com/v2/jupiter/nginx/manifests/250702\&quot;: tls: failed to verify certificate: x509: certificate signed by unknown authority&quot; host=harbor.test.com
ctr: failed to resolve reference &quot;harbor.test.com/jupiter/nginx:250702&quot;: failed to do request: Head &quot;https://harbor.test.com/v2/jupiter/nginx/manifests/250702&quot;: tls: failed to verify certificate: x509: certificate signed by unknown authority</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/91e84358-c7b1-4507-b506-5cca12c144d0/image.png" alt=""></p>
</li>
<li><p>The certificate presented by the Harbor server was issued by an untrusted authority (self-signed). Since the certificate cannot be trusted, the connection is aborted</p>
<pre><code>tls: failed to verify certificate: x509: certificate signed by unknown authority</code></pre><blockquote>
<h3 id="요청-과정">Request flow</h3>
</blockquote>
</li>
</ul>
<ol>
<li>Who is harbor.test.com? → find the IP via DNS </li>
<li>Send the request to <a href="https://harbor.test.com/">https://harbor.test.com/</a>... (HTTPS) </li>
<li>The server (Harbor) hands over its certificate → something like harbor.test.com.crt </li>
<li>containerd checks the certificate → &quot;can I trust whoever issued this?&quot; </li>
<li>&quot;No, I can't trust it. The image will not be pulled!&quot; </li>
</ol>
<p>In other words, the certificate is correctly applied on the Harbor side, but it has not been registered with containerd, which is what causes this error</p>
<br>
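<p>Before touching containerd, the trust chain itself can be checked from the node; this is a sketch, not from the original walkthrough: if the self-signed CA verifies the server certificate here, only the containerd-side registration is missing.</p>
<pre><code># check Harbor's TLS certificate against the self-signed CA from the node
openssl s_client -connect harbor.test.com:443 -CAfile /etc/harbor/ssl/ca.crt &lt;/dev/null 2&gt;/dev/null | grep &quot;Verify return code&quot;
# expected when the CA matches:  Verify return code: 0 (ok)</code></pre>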

<h2 id="인증서가-적용되어야-하는-장소">Where the certificate must be applied</h2>
<p>For Harbor's self-signed certificate to be trusted, the certificate has to be applied in at least two places: Harbor and containerd.</p>
<h3 id="harbor">harbor</h3>
<table>
<thead>
<tr>
<th>Location</th>
<th>Meaning</th>
</tr>
</thead>
<tbody><tr>
<td><code>/etc/harbor/ssl/harbor.test.com.crt</code></td>
<td>Harbor HTTPS server certificate (what the server presents)</td>
</tr>
<tr>
<td><code>/etc/harbor/ssl/harbor.test.com.key</code></td>
<td>Harbor server private key</td>
</tr>
<tr>
<td><code>/etc/harbor/ssl/ca.crt</code></td>
<td>(Optional) CA included when presenting the certificate chain to clients</td>
</tr>
<tr>
<td><img src="https://velog.velcdn.com/images/jupiter-j/post/7fe649dd-d6bf-4877-846e-5ea76768858b/image.png" alt=""></td>
<td></td>
</tr>
</tbody></table>
<h3 id="containerd">containerd</h3>
<p>For containerd on the Kubernetes nodes to trust the Harbor certificate, it has to be registered here</p>
<table>
<thead>
<tr>
<th>Location</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr>
<td><code>/etc/containerd/certs.d/harbor.test.com:443/ca.crt</code></td>
<td>Where the Harbor server certificate is registered so <code>containerd</code> trusts it</td>
</tr>
<tr>
<td>The subdirectory name under containerd's <code>/etc/containerd/certs.d/</code> must exactly match the <code>&lt;registry address (dns):port&gt;</code> format.</td>
<td></td>
</tr>
</tbody></table>
<h3 id="cacrt-방식-vs-hoststoml-방식">ca.crt method vs hosts.toml method</h3>
<p>A trust configuration has to exist under <code>/etc/containerd/certs.d/&lt;registry&gt;</code>, and there are two ways to handle the certificate: the ca.crt method and the hosts.toml method. </p>
<h3 id="1-cacrt-방식">1. ca.crt method</h3>
<ul>
<li>Higher level of security</li>
<li>The ca.crt method needs a folder name that matches the registry down to the port → cumbersome to manage.
It also has to be registered on every node one by one, which is tedious and error-prone</li>
<li>How to set it up<pre><code># create the directory (exact domain:port)
sudo mkdir -p /etc/containerd/certs.d/harbor.test.com:443/

# copy the certificate
sudo cp /etc/harbor/ssl/ca.crt /etc/containerd/certs.d/harbor.test.com:443/ca.crt

# restart containerd
sudo systemctl restart containerd</code></pre></li>
</ul>
<h3 id="2-hoststoml-방식">2. hosts.toml method</h3>
<ul>
<li>Works with the domain name alone</li>
<li>Writing one configuration file is all it takes</li>
<li>No certificate registration/trust handling required</li>
</ul>
<pre><code>[root@k8s-worker ~]# sudo mkdir -p /etc/containerd/certs.d/harbor.test.com/
[root@k8s-worker ~]# cat &lt;&lt;EOF | sudo tee /etc/containerd/certs.d/harbor.test.com/hosts.toml
server = &quot;https://harbor.test.com&quot;

[host.&quot;https://harbor.test.com&quot;]
  capabilities = [&quot;pull&quot;, &quot;resolve&quot;, &quot;push&quot;]
  skip_verify = true
EOF</code></pre>
<ul>
<li>Edit <code>/etc/containerd/config.toml</code></li>
</ul>
<pre><code>vi /etc/containerd/config.toml</code></pre>
<pre><code>        [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.untrusted_workload_runtime.options]

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.image_decryption]
      key_model = &quot;node&quot;

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry]
      config_path = &quot;/etc/containerd/certs.d&quot;
</code></pre>
161</code></pre><pre><code>- 설정 확인: `grep -A 5 &quot;registry\]&quot; /etc/containerd/config.toml`
![](https://velog.velcdn.com/images/jupiter-j/post/48301b6a-f983-4039-9acc-43973ee9d69f/image.png)

적용후 에러 </code></pre><p>[host.&quot;<a href="https://harbor.test.com&quot;%5D">https://harbor.test.com&quot;]</a>
  capabilities = [&quot;pull&quot;, &quot;resolve&quot;, &quot;push&quot;]
  skip_verify = true
[root@k8s-worker ~]# sudo systemctl restart containerd
[root@k8s-worker ~]# ctr images pull harbor.test.com/jupiter/nginx:250702
harbor.test.com/jupiter/nginx:250702: resolving      |--------------------------------------|
elapsed: 0.3 s                        total:   0.0 B (0.0 B/s)
harbor.test.com/jupiter/nginx:250702: resolving      |--------------------------------------|
elapsed: 0.5 s                        total:   0.0 B (0.0 B/s)
ctr: failed to resolve reference &quot;harbor.test.com/jupiter/nginx:250702&quot;: failed to do request: Head &quot;<a href="https://harbor.test.com/v2/jupiter/nginx/manifests/250702&quot;">https://harbor.test.com/v2/jupiter/nginx/manifests/250702&quot;</a>: tls: failed to verify certificate: x509: certificate signed by unknown authority</p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/13bbcfc1-7250-4a3c-9d9d-68e27c20218f/image.png" alt=""></p>
<ul>
<li>Modify hosts.toml</li>
</ul>
<pre><code>[root@k8s-worker harbor.test.com]# cat hosts.toml
server = &quot;https://harbor.test.com&quot;

[host.&quot;https://harbor.test.com&quot;]
  capabilities = [&quot;pull&quot;, &quot;resolve&quot;, &quot;push&quot;]
  skip_verify = true
  override_path = true ## important on containerd 1.7.x</code></pre>
<p>Possibly due to the containerd version, hosts.toml's skip_verify was not recognized, so in the end the certificate was added to the system trust store.</p>
<ul>
<li>Apply to the system trust store: <code>cp /etc/harbor/ssl/ca.crt /etc/pki/ca-trust/source/anchors/harbor-ca.crt</code>
<code>cp /etc/harbor/ssl/ca-chain.crt /etc/pki/ca-trust/source/anchors/harbor-ca-chain.cr</code>
<code>update-ca-trust</code></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/994bd441-ca9f-4c62-8778-2c9cfcacef4f/image.png" alt=""></p>
<p>Pull over https succeeded.</p>
<h3 id="마스터-노드에서-테스트">Testing on the master node</h3>
<pre><code>[root@k8s-master containerd]# sed -i &#39;s/config_path = &quot;&quot;/config_path = &quot;/etc/containerd/certs.d&quot;/&#39; /etc/containerd/config.toml
[root@k8s-master containerd]# grep config_path /etc/containerd/config.toml
      config_path = &quot;/etc/containerd/certs.d&quot;
    plugin_config_path = &quot;/etc/nri/conf.d&quot;
    config_path = &quot;/etc/containerd/certs.d&quot;
[root@k8s-master containerd]#
[root@k8s-master containerd]# mkdir -p /etc/containerd/certs.d/harbor.test.com
[root@k8s-master containerd]#
[root@k8s-master containerd]# cat &gt; /etc/containerd/certs.d/harbor.test.com/hosts.toml &lt;&lt; &#39;EOF&#39;
server = &quot;https://harbor.test.com&quot;

[host.&quot;https://harbor.test.com&quot;]
  capabilities = [&quot;pull&quot;, &quot;resolve&quot;, &quot;push&quot;]
  skip_verify = true
EOF
[root@k8s-master containerd]# systemctl restart containerd
[root@k8s-master containerd]# sleep 5
[root@k8s-master containerd]# nerdctl pull harbor.test.com/jupiter/nginx:250702
harbor.test.com/jupiter/nginx:250702:                                             resolved       |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:ccde53834eab53e85b35526a647cdb714ea4521b1ddf5a07b5c8787298d13087: exists         |++++++++++++++++++++++++++++++++++++++|
config-sha256:9592f5595f2b12c2ede5d2ce9ec936b33fc328225a00b3901b96019e3dd83528:   exists         |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.9 s                                                                    total:   0.0 B (0.0 B/s)</code></pre>
<p>Pull with nerdctl succeeded.</p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/f21c767c-8c44-4246-a972-b36c270d94ae/image.png" alt=""></p>
<p>If pulling with ctr still fails, point ctr at the hosts directory explicitly:</p>
<pre><code>ctr --debug images pull --hosts-dir /etc/containerd/certs.d harbor.test.com/jupiter/nginx:250702</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/a112b24c-1549-4b97-957f-bb3adde8089c/image.png" alt=""></p>]]></description>
        </item>
        <item>
            <title><![CDATA[harbor]]></title>
            <link>https://velog.io/@jupiter-j/harbor</link>
            <guid>https://velog.io/@jupiter-j/harbor</guid>
            <pubDate>Wed, 02 Jul 2025 08:01:59 GMT</pubDate>
            <description><![CDATA[<h1 id="harbor">harbor</h1>
<blockquote>
<p>Build the HTTP setup first, then switch it to HTTPS </p>
</blockquote>
<br>


<h2 id="docker-설치">Install Docker</h2>
<pre><code class="language-bash">sudo dnf -y install dnf-utils
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable --now docker
docker version
docker compose version
</code></pre>
<h2 id="harbor-설치">Install Harbor</h2>
<pre><code class="language-bash"># Download the Harbor installer
wget https://github.com/goharbor/harbor/releases/download/v2.11.0/harbor-online-installer-v2.11.0.tgz
tar xzvf harbor-online-installer-v2.11.0.tgz
cd harbor

# Copy and edit the configuration file
cp harbor.yml.tmpl harbor.yml
vim harbor.yml
</code></pre>
<pre><code class="language-bash">      3 # The IP address or hostname to access admin UI and registry service.
      4 # DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
      5 hostname: harbor.test.com ## changed
...
      7 # http related config
      8 http:
      9   # port for http, default is 80. If https enabled, this port will redirect to https port
     10   port: 80
     11
     12 # https related config  ## commented out
     13 #https:
     14 #  # https port for harbor, default is 443
     15 #  port: 443
     16   # The path of cert and key files for nginx
     17 #  certificate: /your/certificate/path
     18 #  private_key: /your/private/key/path
     19   # enable strong ssl ciphers (default: false)
     20   # strong_ssl_ciphers: false

     44 # The initial password of Harbor admin
     45 # It only works in first time to install harbor
     46 # Remember Change the admin password from UI after launching Harbor.
     47 harbor_admin_password: cloud1234 ## changed
     48

     ## also check the log location</code></pre>
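<p>After editing harbor.yml, the installer bundled in the same directory is what actually generates the runtime config and starts the containers (standard Harbor procedure; run it from wherever the tarball was extracted):</p>
<pre><code class="language-bash"># generate the runtime config and bring Harbor up with docker compose
cd /opt/harbor/harbor   # the directory the installer was extracted to
sudo ./prepare
sudo ./install.sh</code></pre>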
<ul>
<li>DNS configuration</li>
</ul>
<pre><code class="language-bash">vi /etc/hosts
192.168.0.68 k8s-master
192.168.0.69 k8s-worker harbor.test.com</code></pre>
<ul>
<li>Link Docker to the Harbor registry: <code>/etc/docker/daemon.json</code><ul>
<li>If the “dns/ip” used for docker login is not registered here, login fails with a connect: connection refused error!</li>
</ul>
</li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/acd4493e-e72b-45fe-86cd-78f8ff07bfd5/image.png" alt=""></p>
<pre><code class="language-bash">vi /etc/docker/daemon.json

[root@k8s-worker ~]# cat /etc/docker/daemon.json
{
  &quot;insecure-registries&quot;: [
    &quot;harbor.test.com:80&quot;,
    &quot;192.168.0.69:80&quot;
  ],
  &quot;log-driver&quot;: &quot;json-file&quot;,
  &quot;log-opts&quot;: {
    &quot;max-size&quot;: &quot;10m&quot;,
    &quot;max-file&quot;: &quot;3&quot;
  }
}

sudo systemctl restart docker

[root@k8s-worker docker]# docker info | grep -i insecure
 Insecure Registries:
[root@k8s-worker docker]#</code></pre>
<ul>
<li>On the local PC, register the ip:dns mapping with <code>vi /etc/hosts</code>, then connect.
Access by IP
<img src="https://velog.velcdn.com/images/jupiter-j/post/a8065504-c334-4160-b1f3-969c0ebfbfb1/image.png" alt="">
Access by DNS
<img src="https://velog.velcdn.com/images/jupiter-j/post/1cd223e2-f7fc-45df-a149-674e60d74b83/image.png" alt=""></li>
</ul>
<br>

<h2 id="harbor-system서비스-등록">Registering Harbor as a systemd service</h2>
<p>Running <code>./install.sh</code> by hand every time Harbor needs to start is tedious, so let systemd manage it instead</p>
<pre><code class="language-bash">sudo curl -L &quot;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&quot; -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
</code></pre>
<ul>
<li>Create the service file</li>
</ul>
<pre><code class="language-bash">[root@k8s-worker ~]# which docker-compose
/usr/local/bin/docker-compose</code></pre>
<pre><code class="language-bash">[root@k8s-worker ~]# cat /etc/systemd/system/harbor.service
[Unit]
Description=Harbor Container Registry
Documentation=https://goharbor.io/
Requires=docker.service
After=docker.service
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/harbor/harbor
ExecStartPre=/usr/local/bin/docker-compose down
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
ExecReload=/usr/local/bin/docker-compose restart
TimeoutStartSec=0
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target</code></pre>
<pre><code class="language-bash">sudo systemctl daemon-reload
sudo systemctl enable harbor
sudo systemctl start harbor
sudo systemctl status harbor</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/0492486a-1eee-4a25-83e3-364bd20c2a38/image.png" alt=""></p>
<br>

<h2 id="프로젝트-생성-및-파일-업로드-다운">Creating a project and uploading/downloading images</h2>
<h3 id="프로젝트-생성">Creating a project</h3>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/dd19fd24-3b51-46f7-bccc-16c58bd9d13b/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/f11bfae2-5dbf-41f0-9b13-5147ad3c3625/image.png" alt=""></p>
<h3 id="파일-업로드">Uploading an image</h3>
<ul>
<li>Log in from the CLI<pre><code>## cat /etc/docker/daemon.json
[root@k8s-worker ~]# cat /etc/docker/daemon.json
{
&quot;insecure-registries&quot;: [
  &quot;harbor.test.com:80&quot;,
  &quot;192.168.0.69:80&quot;
],
&quot;log-driver&quot;: &quot;json-file&quot;,
&quot;log-opts&quot;: {
  &quot;max-size&quot;: &quot;10m&quot;,
  &quot;max-file&quot;: &quot;3&quot;
}
}
[root@k8s-worker ~]# docker login &quot;http://harbor.test.com:80&quot;
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
</code></pre></li>
</ul>
<pre><code>Login Succeeded
[root@k8s-worker ~]# docker login &quot;http://192.168.0.69:80&quot;
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/bccc7a98-9193-4f11-a741-26b26a19646a/image.png" alt=""></p>
<h3 id="테스트-파일-생성">Creating a test image</h3>
<ul>
<li>Pull an image</li>
</ul>
<pre><code>[root@k8s-worker ~]# docker pull nginx:latest
latest: Pulling from library/nginx
3da95a905ed5: Pull complete
6c8e51cf0087: Pull complete
9bbbd7ee45b7: Pull complete
48670a58a68f: Pull complete
ce7132063a56: Pull complete
23e05839d684: Pull complete
ee95256df030: Pull complete
Digest: sha256:93230cd54060f497430c7a120e2347894846a81b6a5dd2110f7362c5423b4abc
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest</code></pre>
<ul>
<li>Check the images</li>
</ul>
<pre><code>[root@k8s-worker ~]# docker image list
REPOSITORY                    TAG       IMAGE ID       CREATED         SIZE
nginx                         latest    9592f5595f2b   7 days ago      192MB
goharbor/redis-photon         v2.11.0   184984d263c2   13 months ago   165MB
goharbor/harbor-registryctl   v2.11.0   f1220f69df90   13 months ago   162MB
goharbor/registry-photon      v2.11.0   95046ed33f52   13 months ago   84.5MB
goharbor/nginx-photon         v2.11.0   681ba9915791   13 months ago   153MB
goharbor/harbor-log           v2.11.0   a0a812a07568   13 months ago   163MB
goharbor/harbor-jobservice    v2.11.0   bba862a3784a   13 months ago   159MB
goharbor/harbor-core          v2.11.0   2cf11c05e0e2   13 months ago   185MB
goharbor/harbor-portal        v2.11.0   ea8fda08df5b   13 months ago   162MB
goharbor/harbor-db            v2.11.0   9bd788ea0df6   13 months ago   271MB
goharbor/prepare              v2.11.0   2baf15fbf5e2   13 months ago   207MB</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/09df2c5c-314a-444a-b747-3001cb4ade08/image.png" alt=""></p>
<ul>
<li>Save as a tar file: <code>docker save -o nginx_250702.tar nginx:latest</code></li>
</ul>
<pre><code>[root@k8s-worker ~]# docker save -o nginx_250702.tar nginx:latest
[root@k8s-worker ~]# ls -al
total 547180
dr-xr-x---.  7 root root      4096 Jul  2 16:42 .
-rw-------   1 root root 196392960 Jul  2 16:42 nginx_250702.tar</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/210a0156-7f1a-44ca-8990-7c969a1ceb60/image.png" alt=""></p>
<ul>
<li>Load the tar file back: <code>docker load -i nginx_250702.tar</code></li>
</ul>
<h3 id="이미지-tag-변경후-하버-업로드">Retag the image and push it to Harbor</h3>
<ul>
<li>Change the tag: <code>docker tag nginx:latest harbor.test.com:80/jupiter/nginx:250702</code></li>
</ul>
<pre><code>[root@k8s-worker ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.68 k8s-master
192.168.0.69 k8s-worker harbor.test.com
[root@k8s-worker ~]# docker tag nginx:latest harbor.test.com:80/jupiter/nginx:250702
[root@k8s-worker ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
nginx                              latest    9592f5595f2b   7 days ago      192MB
harbor.test.com:80/jupiter/nginx   250702    9592f5595f2b   7 days ago      192MB
goharbor/redis-photon              v2.11.0   184984d263c2   13 months ago   165MB
goharbor/harbor-registryctl        v2.11.0   f1220f69df90   13 months ago   162MB
goharbor/registry-photon           v2.11.0   95046ed33f52   13 months ago   84.5MB
goharbor/nginx-photon              v2.11.0   681ba9915791   13 months ago   153MB
goharbor/harbor-log                v2.11.0   a0a812a07568   13 months ago   163MB
goharbor/harbor-jobservice         v2.11.0   bba862a3784a   13 months ago   159MB
goharbor/harbor-core               v2.11.0   2cf11c05e0e2   13 months ago   185MB
goharbor/harbor-portal             v2.11.0   ea8fda08df5b   13 months ago   162MB
goharbor/harbor-db                 v2.11.0   9bd788ea0df6   13 months ago   271MB
goharbor/prepare                   v2.11.0   2baf15fbf5e2   13 months ago   207MB</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/7e9ae437-3397-4df7-a2f8-00c7096fc1bd/image.png" alt=""></p>
<ul>
<li>Push the image: <code>docker push harbor.test.com:80/jupiter/nginx:250702</code></li>
</ul>
<pre><code>[root@k8s-worker ~]# docker push harbor.test.com:80/jupiter/nginx:250702
The push refers to repository [harbor.test.com:80/jupiter/nginx]
07eaefc6ebf2: Pushed
de2ef8ceb76a: Pushed
e6c40b7bdc83: Pushed
f941308035cf: Pushed
81a9d30670ec: Pushed
1bf33238ab09: Pushed
1bb35e8b4de1: Pushed
250702: digest: sha256:ccde53834eab53e85b35526a647cdb714ea4521b1ddf5a07b5c8787298d13087 size: 1778</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/505affa9-0e8f-4c78-85ff-0666396b6e15/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/a04e5749-4509-430d-8a0b-3bd9380156f2/image.png" alt=""></p>
<br>

<h2 id="harbor-https적용">Enabling HTTPS on Harbor</h2>
<pre><code>[root@k8s-worker ssl]# mkdir -p /etc/harbor/ssl
[root@k8s-worker ssl]# cd /etc/harbor/ssl

[root@k8s-worker ssl]# openssl genrsa -out ca.key 4096
Generating RSA private key, 4096 bit long modulus (2 primes)
.............................................++++
..............................++++
e is 65537 (0x010001)
[root@k8s-worker ssl]# openssl req -x509 -new -nodes -key ca.key -subj &quot;/CN=Harbor-CA&quot; -days 3650 -out ca.crt
[root@k8s-worker ssl]# openssl genrsa -out harbor.test.com.key 4096
Generating RSA private key, 4096 bit long modulus (2 primes)
..............................................................++++
...............................................................................++++
e is 65537 (0x010001)
[root@k8s-worker ssl]# ls
ca.crt  ca.key  harbor.test.com.key

[root@k8s-worker ssl]# openssl req -new -key harbor.test.com.key -subj &quot;/CN=harbor.test.com&quot; -out harbor.test.com.csr
[root@k8s-worker ssl]# ls
ca.crt  ca.key  harbor.test.com.csr  harbor.test.com.key

[root@k8s-worker ssl]# openssl x509 -req -in harbor.test.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
&gt; -out harbor.test.com.crt -days 3650 -extensions v3_req -extfile &lt;(cat &lt;&lt;EOF
[ v3_req ]
subjectAltName = @alt_names

[alt_names]
DNS.1 = harbor.test.com
EOF
)
Signature ok
subject=CN = harbor.test.com
Getting CA Private Key

[root@k8s-worker ssl]# ls
ca.crt  ca.key  ca.srl  harbor.test.com.crt  harbor.test.com.csr  harbor.test.com.key</code></pre>
<table>
<thead>
<tr>
<th>File name</th>
<th>Description</th>
<th>In plain terms</th>
</tr>
</thead>
<tbody><tr>
<td><code>ca.key</code></td>
<td>Private key of the certificate authority (CA)</td>
<td>&quot;the boss&#39;s personal seal&quot;</td>
</tr>
<tr>
<td><code>ca.crt</code></td>
<td>Certificate of the CA</td>
<td>&quot;proof that this seal really belongs to the boss&quot;</td>
</tr>
<tr>
<td><code>harbor.test.com.key</code></td>
<td>Private key of the Harbor server</td>
<td>&quot;Harbor&#39;s own seal&quot;</td>
</tr>
<tr>
<td><code>harbor.test.com.csr</code></td>
<td>Certificate signing request</td>
<td>&quot;Boss, please certify my seal&quot;</td>
</tr>
<tr>
<td><code>harbor.test.com.crt</code></td>
<td>Harbor&#39;s certificate</td>
<td>&quot;the boss confirmed it is genuine&quot;</td>
</tr>
</tbody></table>
<ul>
<li>Apply the configuration</li>
</ul>
<p>After changing the Harbor configuration you must run the <code>./prepare</code> script.</p>
<pre><code class="language-bash">[root@k8s-worker harbor]# cd harbor/
[root@k8s-worker harbor]# ls
LICENSE  common  common.sh  docker-compose.yml  harbor.yml  harbor.yml.tmpl  install.sh  prepare
[root@k8s-worker harbor]# ./prepare
prepare base dir is set to /opt/harbor/harbor

systemctl daemon-reload
systemctl restart containerd
systemctl restart docker
systemctl restart harbor</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/d7c8b67a-559f-48e5-b53d-240c3d399f25/image.png" alt=""></p>
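<p>For reference, <code>./prepare</code> reads the TLS settings from <code>harbor.yml</code>; the relevant part looks roughly like this (a sketch, assuming the certificate paths created above — adjust to your installation):</p>
<pre><code class="language-bash"># harbor.yml (excerpt)
hostname: harbor.test.com

https:
  port: 443
  certificate: /etc/harbor/ssl/harbor.test.com.crt
  private_key: /etc/harbor/ssl/harbor.test.com.key</code></pre>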
<pre><code>[root@k8s-worker harbor]# docker login https://harbor.test.com:80
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded</code></pre><ul>
<li>Change the tag and test another upload<pre><code>[root@k8s-worker harbor]# docker tag harbor.test.com:80/jupiter/nginx:250702 harbor.test.com:80/jupiter/nginx:250702v
[root@k8s-worker harbor]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
nginx                              latest    9592f5595f2b   7 days ago      192MB
harbor.test.com:80/jupiter/nginx   250702    9592f5595f2b   7 days ago      192MB
harbor.test.com:80/jupiter/nginx   250702v   9592f5595f2b   7 days ago      192MB
goharbor/redis-photon              v2.11.0   184984d263c2   13 months ago   165MB
goharbor/harbor-registryctl        v2.11.0   f1220f69df90   13 months ago   162MB
goharbor/registry-photon           v2.11.0   95046ed33f52   13 months ago   84.5MB
goharbor/nginx-photon              v2.11.0   681ba9915791   13 months ago   153MB
goharbor/harbor-log                v2.11.0   a0a812a07568   13 months ago   163MB
goharbor/harbor-jobservice         v2.11.0   bba862a3784a   13 months ago   159MB
goharbor/harbor-core               v2.11.0   2cf11c05e0e2   13 months ago   185MB
goharbor/harbor-portal             v2.11.0   ea8fda08df5b   13 months ago   162MB
goharbor/harbor-db                 v2.11.0   9bd788ea0df6   13 months ago   271MB
goharbor/prepare                   v2.11.0   2baf15fbf5e2   13 months ago   207MB
[root@k8s-worker harbor]# docker push harbor.test.com:80/jupiter/nginx:250702v
The push refers to repository [harbor.test.com:80/jupiter/nginx]
07eaefc6ebf2: Layer already exists
de2ef8ceb76a: Layer already exists
e6c40b7bdc83: Layer already exists
f941308035cf: Layer already exists
81a9d30670ec: Layer already exists
1bf33238ab09: Layer already exists
1bb35e8b4de1: Layer already exists
250702v: digest: sha256:ccde53834eab53e85b35526a647cdb714ea4521b1ddf5a07b5c8787298d13087 size: 1778</code></pre><img src="https://velog.velcdn.com/images/jupiter-j/post/8379e3ec-6df8-4759-8e49-665361b25f2a/image.png" alt=""> <img src="https://velog.velcdn.com/images/jupiter-j/post/7404b968-dd83-4786-8fee-7c851a933812/image.png" alt=""></li>
</ul>
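<p>Instead of keeping the registry in <code>insecure-registries</code>, the self-signed CA created above can be trusted by the Docker client. A sketch (the directory name must match the registry address used for login):</p>
<pre><code class="language-bash"># copy the CA certificate into Docker&#39;s per-registry trust directory
mkdir -p /etc/docker/certs.d/harbor.test.com
cp /etc/harbor/ssl/ca.crt /etc/docker/certs.d/harbor.test.com/ca.crt

# log in again over HTTPS
docker login https://harbor.test.com</code></pre>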
<br>
<br>


<h2 id="harbor-설정파일-확인">harbor 설정파일 확인</h2>
<p>타인의 pc에서 harbor환경을 확인하는 방법</p>
<ol>
<li>하버가 존재하는지 확인 <code>ps -ef | grep harbor</code></li>
</ol>
<pre><code class="language-bash">[root@k8s-worker ~]# ps -ef | grep harbor
10000     218032  217960  0 15:46 ?        00:00:03 /home/harbor/harbor_registryctl -c /etc/registryctl/config.yml
10000     218334  218314  2 15:46 ?        00:00:21 /harbor/harbor_core
10000     218835  218806  2 15:46 ?        00:00:27 /harbor/harbor_jobservice -c /etc/jobservice/config.yml
root      232919   93404  0 16:02 pts/0    00:00:00 grep --color=auto harbor
</code></pre>
<ol start="2">
<li>Check the Harbor DNS and port: <code>cat /etc/hosts</code> , <code>cat /etc/docker/daemon.json</code> , <code>find / -name &quot;*harbor.yml*&quot;</code> , <code>netstat -nltp</code></li>
</ol>
<pre><code class="language-bash">## cat /etc/hosts
192.168.0.68 k8s-master
192.168.0.69 k8s-worker harbor.test.com

## cat /etc/docker/daemon.json
[root@k8s-worker ~]# cat /etc/docker/daemon.json
{
  &quot;insecure-registries&quot;: [
    &quot;harbor.test.com:80&quot;,
    &quot;192.168.0.69:80&quot;
  ],
  &quot;log-driver&quot;: &quot;json-file&quot;,
  &quot;log-opts&quot;: {
    &quot;max-size&quot;: &quot;10m&quot;,
    &quot;max-file&quot;: &quot;3&quot;
  }
}

## find / -name &quot;*harbor.yml*&quot;

## netstat -nltp
tcp6       0      0 :::22                   :::*                    LISTEN      775/sshd
tcp6       0      0 :::10250                :::*                    LISTEN      823/kubelet
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::80                   :::*                    LISTEN      218502/docker-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      1355/kube-proxy</code></pre>
<ol start="3">
<li><p>docker login test</p>
<p> If the registry is not listed in <code>/etc/docker/daemon.json</code>, the login may fail, so check this first. </p>
<pre><code class="language-bash"> [root@k8s-worker ~]# cat /etc/docker/daemon.json
 {
   &quot;insecure-registries&quot;: [
     &quot;harbor.test.com:80&quot;,
     &quot;192.168.0.69:80&quot;
   ],
   &quot;log-driver&quot;: &quot;json-file&quot;,
   &quot;log-opts&quot;: {
     &quot;max-size&quot;: &quot;10m&quot;,
     &quot;max-file&quot;: &quot;3&quot;
   }
 }</code></pre>
</li>
</ol>
<pre><code class="language-bash">## 없는것
[root@k8s-worker ~]# docker login &quot;http://harbor.test.com&quot;
Authenticating with existing credentials...
Login did not succeed, error: Error response from daemon: Get &quot;https://harbor.test.com/v2/&quot;: dial tcp 192.168.0.69:443: connect: connection refused
Username (admin): admin
Password:
Error response from daemon: Get &quot;https://harbor.test.com/v2/&quot;: dial tcp 192.168.0.69:443: connect: connection refused

## registry listed in daemon.json
[root@k8s-worker ~]# docker login &quot;http://harbor.test.com:80&quot;
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
## registry listed in daemon.json
[root@k8s-worker ~]# docker login &quot;http://192.168.0.69:80&quot;
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded</code></pre>
]]></description>
        </item>
        <item>
            <title><![CDATA[[k8s] ETCD외부 설치 & 인증서 갱신]]></title>
            <link>https://velog.io/@jupiter-j/k8s-ETCD%EC%99%B8%EB%B6%80-%EC%84%A4%EC%B9%98-%EC%9D%B8%EC%A6%9D%EC%84%9C-%EA%B0%B1%EC%8B%A0</link>
            <guid>https://velog.io/@jupiter-j/k8s-ETCD%EC%99%B8%EB%B6%80-%EC%84%A4%EC%B9%98-%EC%9D%B8%EC%A6%9D%EC%84%9C-%EA%B0%B1%EC%8B%A0</guid>
            <pubDate>Wed, 11 Jun 2025 08:10:42 GMT</pubDate>
<description><![CDATA[<h3 id="외부-etcd를-설치하는-이유">Why install etcd externally</h3>
<ul>
<li>Better stability: separating etcd from the Kubernetes masters keeps a failure from spreading</li>
<li>Better performance: etcd gets dedicated resources, spreading the load</li>
<li>Easier operations and security: etcd can be managed and secured independently</li>
<li>Scalability: etcd can be scaled independently of the cluster</li>
<li>Flexibility: convenient in mixed on-premise/cloud environments</li>
</ul>
<br>

<blockquote>
<h3 id="vm-환경">VM 환경</h3>
</blockquote>
<ul>
<li>rocky 8.8 minimal </li>
<li>k8s version: 1.29.7v</li>
<li>etcd 3.5.9버전</li>
</ul>
<br>

<h2 id="외부-etcd-설치">외부 ETCD 설치</h2>
<h3 id="1-외부-etcd설치-및-tls-구성">1. 외부 etcd설치 및 TLS 구성</h3>
<ul>
<li>사용자 생성</li>
</ul>
<pre><code class="language-bash">[root@k8s-master ~]# useradd -r -s /sbin/nologin etcd
[root@k8s-master ~]# mkdir -p /etc/etcd /var/lib/etcd
[root@k8s-master ~]# chown -R etcd:etcd /etc/etcd /var/lib/etcd
[root@k8s-master ~]# ls -ld /etc/etcd
drwxr-xr-x 2 etcd etcd 6  6월 11 14:37 /etc/etcd
[root@k8s-master ~]# ls -ld /var/lib/etcd
drwxr-xr-x 2 etcd etcd 6  6월 11 14:37 /var/lib/etcd</code></pre>
<ul>
<li>Install etcd</li>
</ul>
<pre><code class="language-bash">ETCD_VER=v3.5.9
curl -LO https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz
mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/
</code></pre>
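<p>A quick check that the binaries were installed correctly (sketch):</p>
<pre><code class="language-bash"># confirm etcd and etcdctl are on the PATH and report the expected version
etcd --version
etcdctl version</code></pre>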
<ul>
<li>Register the systemd service</li>
</ul>
<pre><code class="language-bash">[root@k8s-master kubernetes]# cat /etc/systemd/system/etcd.service
[Unit]
Description=etcd
After=network.target

[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name etcd \
  --data-dir /var/lib/etcd \
  --listen-client-urls https://0.0.0.0:2379 \
  --advertise-client-urls https://192.168.0.66:2379 \
  --cert-file=/etc/etcd/pki/etcd-server.crt \
  --key-file=/etc/etcd/pki/etcd-server.key \
  --client-cert-auth=true \
  --trusted-ca-file=/etc/etcd/pki/ca.crt

Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
</code></pre>
<pre><code class="language-bash">[root@k8s-master ~]# vi /etc/systemd/system/etcd.service
[root@k8s-master ~]# systemctl daemon-reexec
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable --now etcd
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.

[root@k8s-master ~]# systemctl status etcd
● etcd.service - etcd
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: activating (start) since Wed 2025-06-11 14:40:11 KST; 1s ago
 Main PID: 32502 (etcd)
    Tasks: 6 (limit: 50014)
   Memory: 10.7M
   CGroup: /system.slice/etcd.service
           └─32502 /usr/local/bin/etcd --name etcd --data-dir /var/lib/etcd --listen-client-urls https://0.0.0.0:2379 --advertise-&gt;
</code></pre>
<ul>
<li>Create the kubeadm config (ClusterConfiguration) pointing at the external etcd</li>
</ul>
<pre><code class="language-bash">apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.7
controlPlaneEndpoint: &quot;192.168.0.66:6443&quot;

etcd:
  external:
    endpoints:
    - https://192.168.0.66:2379
    caFile: /etc/etcd/pki/ca.crt
    certFile: /etc/etcd/pki/etcd-client.crt
    keyFile: /etc/etcd/pki/etcd-client.key

networking:
  podSubnet: 10.244.0.0/16
</code></pre>
<ul>
<li>Generate the certificates</li>
</ul>
<pre><code class="language-bash">mkdir -p /root/etcd-certs
cd /root/etcd-certs

# create the CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj &quot;/CN=etcd-ca&quot; -days 3650 -out ca.crt

# create the etcd server certificate
openssl genrsa -out etcd-server.key 2048
openssl req -new -key etcd-server.key -subj &quot;/CN=etcd-server&quot; -out etcd-server.csr

openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out etcd-server.crt -days 3650 -extensions v3_req -extfile &lt;(printf &quot;[v3_req]\nsubjectAltName=IP:192.168.0.66&quot;)

# the client certificate is created the same way (see the next step)

# when finished, copy the certificates to their final location
mkdir -p /etc/etcd/pki
cp *.crt *.key ca.key /etc/etcd/pki/
chown -R etcd:etcd /etc/etcd/pki
chmod 600 /etc/etcd/pki/*.key
chmod 644 /etc/etcd/pki/*.crt

======================================================================
[root@k8s-master etcd-certs]# mkdir -p /etc/etcd/pki
[root@k8s-master etcd-certs]# cp *.crt *.key ca.key /etc/etcd/pki/
cp: warning: source file &#39;ca.key&#39; specified more than once
[root@k8s-master etcd-certs]# chown -R etcd:etcd /etc/etcd/pki
[root@k8s-master etcd-certs]# chmod 644 /etc/etcd/pki/*.crt
[root@k8s-master etcd-certs]# cd /etc/etcd
[root@k8s-master etcd]# ls
pki
[root@k8s-master etcd]# cd pki/
[root@k8s-master pki]# ls
ca.crt  ca.key  etcd-server.crt  etcd-server.key
[root@k8s-master pki]# ls -ld /etc/etcd/pki
drwxr-xr-x 2 etcd etcd 80  6월 11 14:52 /etc/etcd/pki</code></pre>
<ul>
<li>Generate the client certificates: <code>etcd-client.crt</code>, <code>etcd-client.key</code></li>
</ul>
<pre><code class="language-bash">[root@k8s-master kubernetes]# cd /etc/etcd/pki
[root@k8s-master pki]# ls
ca.crt  ca.key  etcd-server.crt  etcd-server.key
[root@k8s-master pki]#
[root@k8s-master pki]#
[root@k8s-master pki]# openssl genrsa -out /etc/etcd/pki/etcd-client.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
..........................................................+++++
.............................................................+++++
e is 65537 (0x010001)
[root@k8s-master pki]# ls
ca.crt  ca.key  etcd-client.key  etcd-server.crt  etcd-server.key
[root@k8s-master pki]# openssl req -new -key /etc/etcd/pki/etcd-client.key -out /etc/etcd/pki/etcd-client.csr -subj &quot;/CN=etcd-client&quot;
[root@k8s-master pki]# openssl x509 -req -in /etc/etcd/pki/etcd-client.csr -CA /etc/etcd/pki/ca.crt -CAkey /etc/etcd/pki/ca.key -CAcreateserial -out /etc/etcd/pki/etcd-client.crt -days 3650 -sha256
Signature ok
subject=CN = etcd-client
Getting CA Private Key
[root@k8s-master pki]# ls
ca.crt  ca.key  ca.srl  etcd-client.crt  etcd-client.csr  etcd-client.key  etcd-server.crt  etcd-server.key</code></pre>
<br>
<br>

<h2 id="k8s설치">k8s설치</h2>
<h3 id="시스템-설정">시스템 설정</h3>
<ul>
<li>os version</li>
</ul>
<pre><code class="language-bash">[root@k8s-master ~]# cat /etc/os-release
NAME=&quot;Rocky Linux&quot;
VERSION=&quot;8.8 (Green Obsidian)&quot;
ID=&quot;rocky&quot;
ID_LIKE=&quot;rhel centos fedora&quot;
VERSION_ID=&quot;8.8&quot;
PLATFORM_ID=&quot;platform:el8&quot;
PRETTY_NAME=&quot;Rocky Linux 8.8 (Green Obsidian)&quot;
ANSI_COLOR=&quot;0;32&quot;
LOGO=&quot;fedora-logo-icon&quot;
CPE_NAME=&quot;cpe:/o:rocky:rocky:8:GA&quot;
HOME_URL=&quot;https://rockylinux.org/&quot;
BUG_REPORT_URL=&quot;https://bugs.rockylinux.org/&quot;
SUPPORT_END=&quot;2029-05-31&quot;
ROCKY_SUPPORT_PRODUCT=&quot;Rocky-Linux-8&quot;
ROCKY_SUPPORT_PRODUCT_VERSION=&quot;8.8&quot;
REDHAT_SUPPORT_PRODUCT=&quot;Rocky Linux&quot;
REDHAT_SUPPORT_PRODUCT_VERSION=&quot;8.8&quot;</code></pre>
<pre><code class="language-bash">[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           7853         139        7538          12         175        7482
Swap:             0           0           0

[root@k8s-master ~]# systemctl disable firewalld &amp;&amp; systemctl stop firewalld
[root@k8s-master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

[root@k8s-master ~]# findmnt /sys/fs/cgroup
TARGET         SOURCE FSTYPE OPTIONS
/sys/fs/cgroup tmpfs  tmpfs  ro,nosuid,nodev,noexec,mode=755

[root@k8s-master ~]# stat -fc %T /sys/fs/cgroup
tmpfs

## switch to cgroup v2
[root@k8s-master ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=&quot;$(sed &#39;s, release .*$,,g&#39; /etc/system-release)&quot;
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT=&quot;console&quot;
GRUB_CMDLINE_LINUX=&quot;resume=/dev/mapper/rl-swap rd.lvm.lv=rl/root rd.lvm.lv=rl/swap selinux=0 systemd.unified_cgroup_hierarchy=1&quot;
GRUB_DISABLE_RECOVERY=&quot;true&quot;
GRUB_ENABLE_BLSCFG=true

## 부팅시 옵션 적용
[root@k8s-master ~]# sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Adding boot menu entry for EFI firmware configuration
done

## add the cgroup v2 option to the current kernel
sudo grubby --args=&quot;systemd.unified_cgroup_hierarchy=1&quot; --update-kernel=ALL
sudo reboot

## 적용 확인
[root@k8s-master ~]# stat -fc %T /sys/fs/cgroup
cgroup2fs
[root@k8s-master ~]# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)

## 커널 모듈 설정
[root@k8s-worker ~]# sudo vi /etc/modules-load.d/k8s.conf
[root@k8s-worker ~]# cat /etc/modules-load.d/k8s.conf
overlay
br_netfilter

[root@k8s-master ~]# sudo modprobe br_netfilter
[root@k8s-master ~]# lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                290816  1 br_netfilter

[root@k8s-worker ~]# vi /etc/sysctl.d/k8s.conf
[root@k8s-worker ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward=1

# 변경사항 적용
sudo sysctl --system

## 방화벽 
[root@k8s-master ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

## raise the file descriptor limits
cat &lt;&lt; EOF | sudo tee -a /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
root soft nofile 65536
root hard nofile 65536
EOF
</code></pre>
<h3 id="필요한-패키지-설치">필요한 패키지 설치</h3>
<pre><code class="language-bash">## runC  ======================================
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

## containerd 1.7.13  ======================================
sudo dnf install -y wget
wget https://github.com/containerd/containerd/releases/download/v1.7.13/containerd-1.7.13-linux-amd64.tar.gz
tar -C /usr/local -xzf containerd-1.7.13-linux-amd64.tar.gz

sudo mkdir -p /usr/lib/systemd/system

cat &lt;&lt;EOF | sudo tee /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStart=/usr/local/bin/containerd
Restart=always
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

# 서비스 시작
sudo systemctl daemon-reexec
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

## containerd cgroup2적용
# 기본 설정 생성
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# systemd cgroup 드라이버 설정
sudo sed -i &#39;s/SystemdCgroup = false/SystemdCgroup = true/&#39; /etc/containerd/config.toml

# 재시작
sudo systemctl restart containerd

## CNI설치 ======================================
# 기본 CNI 플러그인 다운로드 (필요 시)
sudo mkdir -p /opt/cni/bin
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
sudo tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v1.3.0.tgz

[root@k8s-master ~]# containerd --version
containerd github.com/containerd/containerd v1.7.13 7c3aca7a610df76212171d200ca3811ff6096eb8
[root@k8s-master ~]# runc --version
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4
[root@k8s-master ~]# which runc

## nerdctl설치 =====================================
cd /root
curl -LO https://github.com/containerd/nerdctl/releases/download/v1.7.7/nerdctl-full-1.7.7-linux-amd64.tar.gz
tar -xvf nerdctl-full-1.7.7-linux-amd64.tar.gz
nerdctl --version

sudo yum install -y iproute-tc
</code></pre>
<h3 id="k8s-패키지-설치">k8s 패키지 설치</h3>
<p>rocky8용으로 설정 </p>
<pre><code class="language-bash">cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
EOF

sudo dnf clean all
sudo dnf install -y kubelet-1.29.7 kubeadm-1.29.7 kubectl-1.29.7
sudo systemctl enable --now kubelet

[root@k8s-master ~]# rpm -qa | grep kube
kubernetes-cni-1.3.0-150500.1.1.x86_64
kubelet-1.29.7-150500.1.1.x86_64
kubectl-1.29.7-150500.1.1.x86_64
kubeadm-1.29.7-150500.1.1.x86_64

[root@k8s-master ~]# kubectl version --client
Client Version: v1.29.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
[root@k8s-master ~]# kubeadm version
kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;29&quot;, GitVersion:&quot;v1.29.7&quot;, GitCommit:&quot;4e4a18878ce330fefda1dc46acca88ba355e9ce7&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2024-07-17T00:04:38Z&quot;, GoVersion:&quot;go1.22.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
[root@k8s-master ~]# kubelet --version
Kubernetes v1.29.7

## set the kubelet cgroup driver
[root@k8s-master ~]# mkdir -p /etc/default
[root@k8s-master ~]# echo &#39;KUBELET_EXTRA_ARGS=&quot;--cgroup-driver=systemd&quot;&#39; | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=&quot;--cgroup-driver=systemd&quot;
[root@k8s-master ~]# cd /etc/default
[root@k8s-master default]# ls
grub  kubelet  useradd
</code></pre>
<ul>
<li>init</li>
</ul>
<pre><code class="language-bash">kubeadm init --config=kubeadm-config.yaml --v=5</code></pre>
<pre><code class="language-bash">[root@k8s-master pki]# kubectl get po -A
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-76f75df574-97g57             0/1     ContainerCreating   0          4m25s
kube-system   coredns-76f75df574-x9jk5             0/1     ContainerCreating   0          4m25s
kube-system   kube-apiserver-k8s-master            1/1     Running             0          4m32s
kube-system   kube-controller-manager-k8s-master   1/1     Running             0          4m32s
kube-system   kube-proxy-47ghm                     1/1     Running             0          4m25s
kube-system   kube-proxy-855m2                     1/1     Running             0          23s
kube-system   kube-scheduler-k8s-master            1/1     Running             0          4m32s</code></pre>
<br>

<h2 id="etcd-인증서-갱신">ETCD 인증서 갱신</h2>
<h3 id="인증서-체크">인증서 체크</h3>
<p>etcd 외부 설치 된 경우 (<code>/etc/etcd</code> 경로)</p>
<ul>
<li>인증서 체크를 해보면 etcd가 누락되어있는 것이 보임</li>
</ul>
<pre><code class="language-bash">[root@k8s-master pki]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with &#39;kubectl -n kube-system get cm kubeadm-config -o yaml&#39;

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 11, 2026 06:06 UTC   364d            ca                      no
apiserver                  Jun 11, 2026 06:06 UTC   364d            ca                      no
apiserver-kubelet-client   Jun 11, 2026 06:06 UTC   364d            ca                      no
controller-manager.conf    Jun 11, 2026 06:06 UTC   364d            ca                      no
front-proxy-client         Jun 11, 2026 06:06 UTC   364d            front-proxy-ca          no
scheduler.conf             Jun 11, 2026 06:07 UTC   364d            ca                      no
super-admin.conf           Jun 11, 2026 06:06 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jun 09, 2035 06:06 UTC   9y              no
front-proxy-ca          Jun 09, 2035 06:06 UTC   9y              no</code></pre>
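<p>kubeadm does not track the externally managed etcd certificates, so etcd itself can be checked directly with etcdctl; a sketch using the client certificate generated earlier:</p>
<pre><code class="language-bash"># query the external etcd over TLS with the client cert/key
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.0.66:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/etcd-client.crt \
  --key=/etc/etcd/pki/etcd-client.key \
  endpoint health</code></pre>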
<ul>
<li>Renew the certificates (everything except etcd)<blockquote>
<p>Cluster certificate renewal:
<a href="https://somaz.tistory.com/148">https://somaz.tistory.com/148</a></p>
</blockquote>
</li>
</ul>
<pre><code class="language-bash">./update-kubeadm-cert.sh  master # excludes etcd</code></pre>
<ul>
<li>Check the etcd certificate dates</li>
</ul>
<pre><code class="language-bash">[root@k8s-master pki]# ls
ca.crt  ca.key  ca.srl  etcd-client.crt  etcd-client.csr  etcd-client.key  etcd-server.crt  etcd-server.key

[root@k8s-master pki]# openssl x509 -in etcd-server.crt -noout -dates
notBefore=Jun 11 05:52:15 2025 GMT
notAfter=Jun  9 05:52:15 2035 GMT
[root@k8s-master pki]# openssl x509 -in etcd-client.crt -noout -dates
notBefore=Jun 11 05:58:56 2025 GMT
notAfter=Jun  9 05:58:56 2035 GMT
[root@k8s-master pki]#</code></pre>
<table>
<thead>
<tr>
<th>File</th>
<th>Role</th>
<th>Description</th>
<th>Check command</th>
</tr>
</thead>
<tbody><tr>
<td><code>ca.crt</code></td>
<td>CA certificate</td>
<td>Acts as the issuer; the trust root for every etcd-related certificate.</td>
<td><code>openssl x509 -in ca.crt -noout -subject -dates</code></td>
</tr>
<tr>
<td><code>ca.key</code></td>
<td>CA private key</td>
<td>Private key used to sign certificates. Extremely sensitive; never expose it.</td>
<td>🔒 keep private</td>
</tr>
<tr>
<td><code>ca.srl</code></td>
<td>CA serial tracking</td>
<td>Serial-number bookkeeping file, created automatically. Do not delete.</td>
<td>-</td>
</tr>
<tr>
<td><code>etcd-server.crt</code></td>
<td>etcd server certificate</td>
<td>Used by etcd itself when talking to clients (the API server, etc.). The SAN must contain the node IP.</td>
<td><code>openssl x509 -in etcd-server.crt -noout -subject -issuer -dates -text</code></td>
</tr>
<tr>
<td><code>etcd-server.key</code></td>
<td>etcd server private key</td>
<td>Private key matching the server certificate.</td>
<td>🔒 keep private</td>
</tr>
<tr>
<td><code>etcd-client.crt</code></td>
<td>etcd client certificate</td>
<td>Used by the Kubernetes API server when it connects to etcd. The CN is typically <code>kube-apiserver</code>.</td>
<td><code>openssl x509 -in etcd-client.crt -noout -subject -dates</code></td>
</tr>
<tr>
<td><code>etcd-client.key</code></td>
<td>client private key</td>
<td>Private key matching the client certificate.</td>
<td>🔒 keep private</td>
</tr>
<tr>
<td><code>etcd-client.csr</code></td>
<td>certificate signing request</td>
<td>The request file used to create the client certificate (can be reused).</td>
<td><code>openssl req -in etcd-client.csr -noout -subject</code></td>
</tr>
</tbody></table>
<ul>
<li>Check whether the SAN is included</li>
</ul>
<p>Besides the CN, a certificate can carry an extension field listing several domain names or IP addresses for which it is valid.</p>
<p>The SAN lists the reachable IPs/DNS names explicitly, so one certificate can cover several names.</p>
<pre><code class="language-bash">openssl x509 -in etcd-server.crt -noout -text | grep -A1 &quot;Subject Alternative Name&quot;
</code></pre>
<pre><code class="language-bash">[root@k8s-master pki]# openssl x509 -in etcd-server.crt -noout -text | grep -A1 &quot;Subject Alternative Name&quot;
            X509v3 Subject Alternative Name:
                IP Address:192.168.0.66 ### 출력</code></pre>
<ul>
<li>Check the CN: the subject name of the certificate</li>
</ul>
<pre><code class="language-bash">[root@k8s-master pki]# openssl x509 -in etcd-server.crt -noout -subject
subject=CN = etcd-server
[root@k8s-master pki]# openssl x509 -in etcd-client.crt -noout -subject
subject=CN = etcd-client</code></pre>
<br>

<h3 id="etcd-외부-인증서-갱신">etcd 외부 인증서 갱신</h3>
<aside>

<p>조건</p>
<ul>
<li><p>인증서가 셀프사인이고,</p>
</li>
<li><p>기존 CA (<code>etcd-ca.key</code>, <code>etcd-ca.crt</code>)가 보존되어 있으며,</p>
</li>
<li><p>기존 SAN, CN 정보를 유지하면서 유효기간만 연장하고 싶을 경우</p>
</aside>
</li>
<li><p>config파일</p>
</li>
</ul>
<pre><code class="language-bash">[root@k8s-master pki]# cat openssl-etcd.cnf
[ req ]
default_bits       = 2048
prompt             = no
default_md         = sha256
distinguished_name = req_distinguished_name
req_extensions     = v3_ext

[ req_distinguished_name ]
CN = etcd-server  ## adjust the CN here

[ v3_ext ]
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.0.66 ## adjust the SAN here</code></pre>
<ul>
<li>Apply the config (regenerate the CSRs)</li>
</ul>
<pre><code class="language-bash"># etcd-server
openssl req -new -key etcd-server.key -out etcd-server.csr -config openssl-etcd.cnf
# etcd-client
openssl req -new -key etcd-client.key -out etcd-client.csr -config openssl-etcd.cnf</code></pre>
<ul>
<li>Issue new certificates from the CSRs</li>
</ul>
<pre><code class="language-bash">[root@k8s-master pki]# openssl x509 -req -in etcd-client.csr \
&gt;   -CA ca.crt -CAkey ca.key -CAcreateserial \
&gt;   -out etcd-client.crt -days 7300 -extensions v3_ext \
&gt;   -extfile openssl-etcd.cnf
Signature ok
subject=CN = etcd-client
Getting CA Private Key

[root@k8s-master pki]# openssl x509 -req -in etcd-server.csr \
&gt;   -CA ca.crt -CAkey ca.key -CAcreateserial \
&gt;   -out etcd-server.crt -days 7300 -extensions v3_ext \
&gt;   -extfile openssl-etcd.cnf
Signature ok
subject=CN = etcd-server
Getting CA Private Key</code></pre>
<ul>
<li>Generated certificates: <code>etcd-server.crt</code>, <code>etcd-client.crt</code></li>
</ul>
<pre><code class="language-bash">systemctl restart etcd

## verify the renewal
[root@k8s-master pki]# openssl x509 -in etcd-server.crt -noout -dates
notBefore=Jun 11 07:31:43 2025 GMT
notAfter=Jun  6 07:31:43 2045 GMT
[root@k8s-master pki]# openssl x509 -in etcd-client.crt -noout -dates
notBefore=Jun 11 07:31:16 2025 GMT
notAfter=Jun  6 07:31:16 2045 GMT</code></pre>
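<p>To check the expiry of every certificate under <code>/etc/etcd/pki</code> in one go (a small sketch):</p>
<pre><code class="language-bash"># print the expiry date of each certificate in the directory
for crt in /etc/etcd/pki/*.crt; do
  echo &quot;== ${crt}&quot;
  openssl x509 -in &quot;${crt}&quot; -noout -enddate
done</code></pre>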
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/596964f7-5f46-47c0-a496-414f802b0774/image.png" alt=""></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[[CCCR] 프라이빗 오픈 클라우드를 위한 오픈스택 구축 및 운영 (4)]]></title>
            <link>https://velog.io/@jupiter-j/CCCR-%ED%94%84%EB%9D%BC%EC%9D%B4%EB%B9%97-%EC%98%A4%ED%94%88-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%98%A4%ED%94%88%EC%8A%A4%ED%83%9D-%EA%B5%AC%EC%B6%95-%EB%B0%8F-%EC%9A%B4%EC%98%81-4</link>
            <guid>https://velog.io/@jupiter-j/CCCR-%ED%94%84%EB%9D%BC%EC%9D%B4%EB%B9%97-%EC%98%A4%ED%94%88-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%98%A4%ED%94%88%EC%8A%A4%ED%83%9D-%EA%B5%AC%EC%B6%95-%EB%B0%8F-%EC%9A%B4%EC%98%81-4</guid>
            <pubDate>Fri, 30 May 2025 05:13:32 GMT</pubDate>
            <description><![CDATA[<br>

<h2 id="인스턴스-생성">인스턴스 생성</h2>
<ol>
<li>이미지(Glance): 부팅 가능한 운영체제가 설치된 디스크 파일<ul>
<li>이미지 -&gt; 인스턴스: 인스턴스의 root-disk의 모든 데이터는 임시</li>
<li>이미지 -&gt; 볼륨 - &gt; 인스턴스: 인스턴스의 root-disk의 모든 데이터는 볼륨에 영구 저장됨</li>
<li>이미지 -&gt; 인스턴스 -&gt; 스냅샷 : 스냅샷으로 인스턴스 생성시 기존 상태와 동일한 인스턴스 생성</li>
</ul>
</li>
<li>플레이버 (Nova): 인스턴스 생성 시 할당한 리소스를 지정 </li>
<li>내부 네트워크(Neutron): 같은 네트워크에 연결한 인스턴스 간의 통신 
일반사용자도 설정 가능(프로젝트 범위로만 생성)
서브넷 생성시 IP대역/게이트웨이 등을 자유롭게 설정 가능
네트워크 유형을 설정X</li>
</ol>
<p>--&gt; 인스턴스 생성 가능(내부통신만 가능)
4. 외부 네트워크: 외부와의 통신을 위해 사용
관리자로만 설정 가능
물리적인 환경에 맞게 IP대역 및 게이트웨이 등 설정
네트워크 유형을 설정 O -&gt; 실제 환경에 맞게 
5. 라우터 : 서로 다른 네트워크를 연결
    -&gt; 외부 네트워크는 게이트웨이로, 내부 네트워크는 서브넷 하나의 프로젝트 안에서 유효</p>
<pre><code>**위의 과정을 통해 인스턴스에서 외부로 통신 가능**</code></pre><ol start="6">
<li>유동IP (FloatingIP): 외부에서 인스턴스에 직접 접속할 수 있게 설정.
외부 네트워크의 주소 범위 안에서 할당 
인스턴스와 1:1로 연결(연결/해제가 자유로움)
기본적으로는 랜덤IP (관리자는 지정 가능)
인스턴스 생성 후에만 연결 가능 </li>
<li>보안그룹: 규칙에 따라 인스턴스에 대한 네트워크 트래픽을 제어 
하나의 인스턴스에 여러 개의 보안그룹 연결 가능
보안그룹 안에도 여러개의 규칙을 설정 가능 
동일한 그룹을 여러 인스턴스에 연결 가능 
인스턴스 생성 시 혹은 생성 후 모두 연결 가능
프로젝트 생성 시 default 보안 그룹이 자동 생성 -&gt; 동일한 보안그룹을 가진 인스턴스 간의 모든 통신 허용</li>
</ol>
<p>*<em>위의 과정을 통해 외부에서 인스턴스로 접속 가능 *</em>
8. ssh키페어: 원격 접속을 위해 사용
클라우드 이미지 대다수는 사용자 패스워드는 공유X
인스턴스 생성시에만 설정 가능함
보유한 키페어 중 공개키만 저장
키페어를 새로 생성해서 공개키는 저장하고 개인키는 다운로드
인스턴스 생성 시에만 설정 가능 </p>
<blockquote>
<p>*<em>cloud-init *</em></p>
</blockquote>
<p>1) 플레이버에 따른 마운트/스왑설정
2) SSH키페어 복사
3) user-data (스크립트)를 통한 초기 구성 가능</p>
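<p>Put together, the flow above maps to a handful of CLI calls; a sketch (names such as <code>internal</code>, <code>demo-rt</code> and <code>demo-vm</code> are placeholders):</p>
<pre><code class="language-bash"># internal network + subnet (project scope)
openstack network create internal
openstack subnet create --network internal --subnet-range 192.168.100.0/24 internal-subnet

# router: external network as gateway, internal subnet as an interface
openstack router create demo-rt
openstack router set --external-gateway external demo-rt
openstack router add subnet demo-rt internal-subnet

# floating IP from the external range, attached to a running instance
openstack floating ip create external
openstack server add floating ip demo-vm 192.168.0.150</code></pre>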
<br>

<h2 id="스토리지">스토리지</h2>
<ul>
<li>임시 스토리지: 인스턴스 생성 시 함께 할당해서 인스턴스 삭제 시 데이터까지 모두 삭제 가능</li>
<li><blockquote>
<p>플레이버의 설정에 따라 할당</p>
</blockquote>
</li>
<li>영구 스토리지: 인스턴스와 별개로 생성 및 삭제 가능<ol>
<li>블록 스토리지 (Cinder)</li>
</ol>
<ul>
<li>인스턴스에 직접 연결해서 사용</li>
<li>볼륨은 인스턴스 하나에만 연결 가능</li>
<li>인스턴스 하나에 볼륨 여러개 연결은 가능 </li>
<li>블록 단위로 관리 (포멧/마운트)</li>
<li>인스턴스 내부 작업으로 발생한 데이터를 저장</li>
<li>스냅샷 기능 지원 : 데이터 복제 용도 </li>
<li>백업 기능 지원: 별도의 스토리지에 저장(설정파일)</li>
<li>하나의 프로젝트 안에서만사용</li>
<li>볼륨 전송기능으로 다른 프로젝트로 전달 가능 </li>
<li>기본은 LVM (권장X)
1) 설정파일에서 백엔드 스토리지 설정
2) 설정에 따라 볼륨타입을 생성
3) 볼륨 생성 시 볼륨 타입을 지정 <ol start="2">
<li>오브젝트 스토리지 (Swift)</li>
</ol>
<ul>
<li>인스턴스와 별개로 파일 서버와 유사한 형태로 사용</li>
</ul>
</li>
<li>대시보드나 명령어를 통해 컨테이너에 데이터(오브젝트) 저장</li>
<li>컨테이너를 통해 사용자의 접근 제어, 오브젝트의 목록화를 위해 사용</li>
<li>설정에 따라서 URL을 통한 접속도 가능 </li>
<li>범용적으로 사용 (데이터 유형 및 크기는 제약이 거의 없음)<ol start="3">
<li>공유 스토리지 (Manila)</li>
</ol>
<ul>
<li>인스턴스에 직ㅈ버 연결해서 사용</li>
</ul>
</li>
<li>파일 단위로 작업</li>
<li>여러 인스턴스에 동시 연결 가능 </li>
</ul>
</li>
</ul>
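<p>The block-storage lifecycle described above maps to a few CLI calls; a sketch (volume and instance names are placeholders):</p>
<pre><code class="language-bash"># create a volume, attach it, snapshot it, back it up, then offer it to another project
openstack volume create --size 1 demo-volume
openstack server add volume demo-vm demo-volume
openstack volume snapshot create --volume demo-volume demo-snap --force
openstack volume backup create --name demo-backup demo-volume --force
openstack volume transfer request create demo-volume</code></pre>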
<br>

<h1 id="manila">Manila</h1>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/b0f5f970-32d5-449d-b096-3abda1b69b28/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/6e34bcfd-2610-4ba9-911a-c2b448e681e2/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/65699020-fb60-4083-b5c3-45c0dd201869/image.png" alt=""></p>
<pre><code>
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack flavor list
+--------------------------------------+-------------+------+------+-----------+-------+-----------+
| ID                                   | Name        |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-------------+------+------+-----------+-------+-----------+
| 59d58e86-4f3c-4bb4-b5a9-9a38bbc241c7 | demo-disk   | 1024 |   10 |         1 |     1 | True      |
| 6c4f3d8a-799b-44e1-8035-7e56966b086f | mini-flavor |  512 |    1 |         0 |     1 | True      |
| 76df590a-6e24-465b-98f6-a500b6ff4355 | demo-flavor | 2028 |   10 |         0 |     1 | True      |
+--------------------------------------+-------------+------+------+-----------+-------+-----------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack flavor create --id 100 --vcpus 1 --ram 256 --disk 10 manila-flavor
+----------------------------+---------------+
| Field                      | Value         |
+----------------------------+---------------+
| OS-FLV-DISABLED:disabled   | False         |
| OS-FLV-EXT-DATA:ephemeral  | 0             |
| description                | None          |
| disk                       | 10            |
| id                         | 100           |
| name                       | manila-flavor |
| os-flavor-access:is_public | True          |
| properties                 |               |
| ram                        | 256           |
| rxtx_factor                | 1.0           |
| swap                       | 0             |
| vcpus                      | 1             |
+----------------------------+---------------+

(os-venv) vagrant@openstack-aio:~$ openstack image create --file manila-service-image-master.qcow2 --disk-format qcow2 --public manila-service-image
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                    |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                                     |
| created_at       | 2025-05-30T02:26:31Z                                                                                                                                     |
| disk_format      | qcow2                                                                                                                                                    |
| file             | /v2/images/fa8e3e6f-1ad9-4f1c-8f81-0a3146a92d82/file                                                                                                     |
| id               | fa8e3e6f-1ad9-4f1c-8f81-0a3146a92d82                                                                                                                     |
| min_disk         | 0                                                                                                                                                        |
| min_ram          | 0                                                                                                                                                        |
| name             | manila-service-image                                                                                                                                     |
| owner            | 00855a5cafa646478a16f350df1f00f6                                                                                                                         |
| properties       | os_hidden=&#39;False&#39;, owner_specified.openstack.md5=&#39;&#39;, owner_specified.openstack.object=&#39;images/manila-service-image&#39;, owner_specified.openstack.sha256=&#39;&#39; |
| protected        | False                                                                                                                                                    |
| schema           | /v2/schemas/image                                                                                                                                        |
| status           | queued                                                                                                                                                   |
| tags             |                                                                                                                                                          |
| updated_at       | 2025-05-30T02:26:31Z                                                                                                                                     |
| visibility       | public                                                                                                                                                   |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/8a2ff294-34e1-4713-a9c5-27bc7d6ae169/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/b6de42a0-e371-4a50-b02b-d10364988972/image.png" alt="">
<img src="https://velog.velcdn.com/images/jupiter-j/post/65c1d080-127e-4b79-93e5-5c06d57e13a6/image.png" alt=""></p>
<br>
<br>
<br>



<p><img src="https://velog.velcdn.com/images/jupiter-j/post/3529e127-f696-4cc0-9299-961a0d8b9e59/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/8fbc780e-c94a-44ea-9aa8-12c5df80ed9f/image.png" alt=""></p>
<p>Course completed.</p>
]]></description>
        </item>
        <item>
            <title><![CDATA[[CCCR] 프라이빗 오픈 클라우드를 위한 오픈스택 구축 및 운영 (3)
]]></title>
            <link>https://velog.io/@jupiter-j/CCCR-%ED%94%84%EB%9D%BC%EC%9D%B4%EB%B9%97-%EC%98%A4%ED%94%88-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%98%A4%ED%94%88%EC%8A%A4%ED%83%9D-%EA%B5%AC%EC%B6%95-%EB%B0%8F-%EC%9A%B4%EC%98%81-3</link>
            <guid>https://velog.io/@jupiter-j/CCCR-%ED%94%84%EB%9D%BC%EC%9D%B4%EB%B9%97-%EC%98%A4%ED%94%88-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%98%A4%ED%94%88%EC%8A%A4%ED%83%9D-%EA%B5%AC%EC%B6%95-%EB%B0%8F-%EC%9A%B4%EC%98%81-3</guid>
            <pubDate>Thu, 29 May 2025 01:04:25 GMT</pubDate>
<description><![CDATA[<p>To manage instances, first check that nova_api is in a healthy state.
<img src="https://velog.velcdn.com/images/jupiter-j/post/c2205be6-0eb1-48d9-94de-3f3f8bd18c17/image.png" alt=""></p>
<ul>
<li>Check the logs</li>
</ul>
<pre><code>(os-venv) vagrant@openstack-aio:/etc/kolla$ sudo ls /var/log/kolla/nova
apache-access.log    nova-api-error.log  nova-conductor.log        nova-metadata-error.log  privsep-helper.log
apache-error.log     nova-api.log        nova-manage.log           nova-novncproxy.log
nova-api-access.log  nova-compute.log    nova-metadata-access.log  nova-scheduler.log
(os-venv) vagrant@openstack-aio:/etc/kolla$</code></pre>
<br>

<h2 id="오픈스택-네트워크">OpenStack networking</h2>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/09e325e7-d82b-44f3-b724-424440705b5d/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/6e76e0df-1d45-4082-9afc-9c8b6c756f52/image.png" alt=""></p>
<ul>
<li>flat: all instances share one physical network with no VLANs; no network isolation.</li>
<li>vlan: networks are separated with VLAN tagging; the physical network must be configured for VLANs; good isolation and performance.</li>
<li>vxlan: an overlay that creates logically isolated networks; very scalable; uses tunnelling.</li>
<li>gre: an overlay similar to VXLAN, but rarely used today because of performance issues.</li>
</ul>
<pre><code>(os-venv) vagrant@openstack-aio:/etc/kolla$ sudo cat /etc/kolla/neutron-server/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_vlan]
network_vlan_ranges =

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vxlan]
vni_ranges = 1:1000</code></pre>
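<p>With this ML2 configuration, tenant networks get their type from <code>tenant_network_types</code> (vxlan) automatically, while provider networks must reference <code>physnet1</code> explicitly; a sketch:</p>
<pre><code class="language-bash"># tenant network: no provider options needed, it becomes a vxlan network
openstack network create internal

# provider network: must name a physical network defined in flat_networks
openstack network create --external --provider-network-type flat --provider-physical-network physnet1 external</code></pre>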
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/e1525363-28ed-41d6-90c6-4f2b17fc92b4/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/a346b649-dd3d-4855-9bda-58af7f1c0179/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/f1d8ed3e-443e-451d-b2bd-470ee6f205f0/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/74d7172e-6146-4aca-8bb8-f3fb96f90011/image.png" alt=""></p>
<p>Instances attach to internal networks that users create themselves.
The external network is configured by an administrator under Admin &gt; Network &gt; Networks,
and must be created to match the physical network configuration (<code>sudo cat /etc/kolla/neutron-server/ml2_conf.ini</code>).
The external network is attached to a router separately as its gateway, while the router&#39;s interfaces are used to connect internal networks.</p>
<ul>
<li>Create the network: <code>openstack network create --external --share --provider-network-type flat --provider-physical-network physnet1 external</code></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/6bea8faa-944a-491a-bc64-accacb15dfcd/image.png" alt=""></p>
<pre><code>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack network create --external --share --provider-network-type flat --provider-physical-network physnet1 external
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2025-05-29T02:39:10Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 8a1128a8-6a73-4d3c-978e-a46fb08236a9 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | external                             |
| port_security_enabled     | True                                 |
| project_id                | 00855a5cafa646478a16f350df1f00f6     |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 00855a5cafa646478a16f350df1f00f6     |
| updated_at                | 2025-05-29T02:39:10Z                 |
+---------------------------+--------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router create new-rt
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2025-05-29T02:40:37Z                 |
| description             |                                      |
| distributed             | False                                |
| enable_ndp_proxy        | None                                 |
| external_gateway_info   | null                                 |
| flavor_id               | None                                 |
| ha                      | False                                |
| id                      | f4a0cb02-ee72-491d-8383-9ba3aef6e55b |
| name                    | new-rt                               |
| project_id              | 00855a5cafa646478a16f350df1f00f6     |
| revision_number         | 1                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    |                                      |
| tenant_id               | 00855a5cafa646478a16f350df1f00f6     |
| updated_at              | 2025-05-29T02:40:37Z                 |
+-------------------------+--------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router set --external-gateway external new-rt
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack subnet create --no-dhcp --subnet-range 192.168.0.0/24 --gateway 192.168.0.254 --allocation-pool start=192.168.0.100,end=192.168.0.200 --network external external-subnet
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 192.168.0.100-192.168.0.200          |
| cidr                 | 192.168.0.0/24                       |
| created_at           | 2025-05-29T02:44:01Z                 |
| description          |                                      |
| dns_nameservers      |                                      |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | False                                |
| gateway_ip           | 192.168.0.254                        |
| host_routes          |                                      |
| id                   | 09589f62-aba5-466b-862f-550975d75799 |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | external-subnet                      |
| network_id           | 8a1128a8-6a73-4d3c-978e-a46fb08236a9 |
| project_id           | 00855a5cafa646478a16f350df1f00f6     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2025-05-29T02:44:01Z                 |
+----------------------+--------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router create new-rt
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2025-05-29T02:44:27Z                 |
| description             |                                      |
| distributed             | False                                |
| enable_ndp_proxy        | None                                 |
| external_gateway_info   | null                                 |
| flavor_id               | None                                 |
| ha                      | False                                |
| id                      | e955f531-66a1-4c1d-80a8-539ddf765beb |
| name                    | new-rt                               |
| project_id              | 00855a5cafa646478a16f350df1f00f6     |
| revision_number         | 1                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    |                                      |
| tenant_id               | 00855a5cafa646478a16f350df1f00f6     |
| updated_at              | 2025-05-29T02:44:27Z                 |
+-------------------------+--------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router delete e955f531-66a1-4c1d-80a8-539ddf765beb
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router set --external-gateway external new-rt
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router add subnet net-work subnet new-rt ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack subnet list --network net-work
No Network found for net-work
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack subnet list
+--------------------------------------+-----------------+--------------------------------------+------------------+
| ID                                   | Name            | Network                              | Subnet           |
+--------------------------------------+-----------------+--------------------------------------+------------------+
| 09589f62-aba5-466b-862f-550975d75799 | external-subnet | 8a1128a8-6a73-4d3c-978e-a46fb08236a9 | 192.168.0.0/24   |
| a2413f79-f267-4f09-af80-dfa00071bc90 | internal-subnet | 0929ac61-e38a-4f18-8cc2-08daf4d61397 | 192.168.100.0/24 |
| e485a912-594b-4c5d-a7a3-46bee0139c2a | lb-mgmt-subnet  | 1b3dc678-3fc3-435c-9d4e-642100de4763 | 10.1.0.0/24      |
+--------------------------------------+-----------------+--------------------------------------+------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack router add subnet new-rt internal-subnet</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/985f3e8e-9fdb-47f9-99ef-62fd831af4ba/image.png" alt=""></p>
<ul>
<li><code>openstack network create --external --share --provider-network-type flat --provider-physical-network physnet1 external</code></li>
<li><code>openstack router create new-rt</code></li>
<li><code>openstack router set --external-gateway external new-rt</code></li>
<li><code>openstack subnet create --no-dhcp --subnet-range 192.168.0.0/24 --gateway 192.168.0.254 --allocation-pool start=192.168.0.100,end=192.168.0.200 --network external external-subnet</code></li>
</ul>
<ul>
<li><code>openstack router create new-rt</code></li>
<li><code>openstack router set --external-gateway external new-rt</code></li>
<li><code>openstack router add subnet new-rt internal-subnet</code></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/e05bad08-004f-4bd2-849d-8a20a16aebe3/image.png" alt=""></p>
<h2 id="시큐리티-그룹">Security groups</h2>
<pre><code>openstack security group create new-sg
openstack security group rule list new-sg
openstack security group rule create --protocol tcp --dst-port 80 new-sg
openstack server add security group test-vm new-sg
openstack server show test-vm

(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack server remove security group test-vm new-sg
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack security group delete new-sg</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/ed71befa-d02e-4b17-a992-2cb58cbcfd15/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/4705dcdd-89b4-4d8c-b7f8-a775defae778/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/b756133d-443b-4774-b46a-a31f479868e8/image.png" alt=""></p>
<h2 id="keypair-생성">Creating a key pair</h2>
<pre><code>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack keypair create --private-key new-key.pem new-key
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | None                                            |
| fingerprint | 7a:5a:62:27:c4:ca:d8:42:e4:72:1d:62:06:58:7b:e3 |
| id          | new-key                                         |
| is_deleted  | None                                            |
| name        | new-key                                         |
| type        | ssh                                             |
| user_id     | 58d8e1d0c87143aaad968509ea167b17                |
+-------------+-------------------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ chmod 400 new-key.pem
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack server create --key-name ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack security group create new -sg          ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack security group rule create --protocol tcp --dst-port 2 --remote-ip 0.0.0.0/0 new-sg          ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack server add security htoup new-sg ssh-vm           ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack floating ip create external     ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack server add floating ip ssh-vm 192.168.0.150            ^C
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ ssh -i new-key.pem cirrors@192.19.0.10^C
(os-venv) vagrant@openstack-aio:/etc/kolla$</code></pre>
<pre><code>

## 스토리지 
&gt;* 블록 스토리지 (Cinder) :
VM에 디스크처럼 attach해서 사용하는 가상 디스크를 제공
* 오브젝트 스토리지 (Swift) :
대용량 비정형 데이터를 객체 형태로 저장. Amazon S3와 유사한 구조
* 이미지 스토리지 (Glance) :
VM을 생성하기 위한 운영체제 이미지를 저장하고 관리
* 파일 스토리지 (Manila) :
여러 VM 간 공유 가능한 파일 시스템(NFS 등)을 제공


&lt;br&gt;


* 볼륨 생성 : `openstack volume create --size 1 new-volume`
![](https://velog.velcdn.com/images/jupiter-j/post/505612ff-df1b-4e18-9866-296c33a0c6d3/image.png)
* 볼륨 연결 : ` openstack server add volume test-vm new-volume`
![](https://velog.velcdn.com/images/jupiter-j/post/cc901df5-7fb6-425d-9311-3768c048ad64/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/c04377f2-0c94-4114-9781-01986ebc26ab/image.png)</code></pre><h3 id="볼륨-삭제">볼륨 삭제</h3>
<p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack server remove volume test-vm new-volume
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack volume delete new-volume</p>
<pre><code>
&lt;br&gt;

&gt; ## 실습
1. 1G 크기의 볼륨을 생성, 이름:practice-volume
2. test-vm인스턴스에 만든 볼륨을 연결
3. 인스턴스의 콘솔에 접속
4. lsblk등의 명령어로 연결 확인
5. xfs/ext4 파일시스템으로 포맷하고 /mnt 디렉토리에 마운트
6. /etc/hosts 파일을 /mnt 디렉토리에 복사
7. 마운트 해제 및 볼륨 연결 해제 
8. new-vm에 볼륨 연결 후 확인
9. /dirA 디렉토리를 만들어서 마운트한 후 확인
10. 볼륨은 연결만 해제 (삭제하지 않음)

</code></pre><p>openstack volume create --size 1 practice-volume
openstack server add volume test-vm practice-volume</p>
<p>login: cirros
password: gocubsgo</p>
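<p>인스턴스 안에서의 작업 흐름은 대략 아래와 같다. (연결된 볼륨의 디바이스 이름은 /dev/vdb 라고 가정, cirros 이미지에 포함된 파일시스템 도구는 버전에 따라 다를 수 있다)</p>
<pre><code># test-vm 콘솔에서: 포맷, 마운트, 파일 복사, 마운트 해제
sudo mkfs.ext4 /dev/vdb        # ext4 도구가 없으면 mkfs.ext3 등으로 대체
sudo mount /dev/vdb /mnt
sudo cp /etc/hosts /mnt/
sudo umount /mnt

# 컨트롤 노드에서: test-vm에서 분리 후 new-vm에 연결
openstack server remove volume test-vm practice-volume
openstack server add volume new-vm practice-volume

# new-vm 콘솔에서: /dirA 에 마운트해서 복사한 파일 확인
sudo mkdir /dirA
sudo mount /dev/vdb /dirA
ls /dirA
sudo umount /dirA

# 컨트롤 노드에서: 볼륨은 연결만 해제 (삭제하지 않음)
openstack server remove volume new-vm practice-volume</code></pre>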
<pre><code>![](https://velog.velcdn.com/images/jupiter-j/post/8cc9ab98-ed17-41b3-b47f-9be83fa713bb/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/d48e62dc-8377-4555-8bfb-8d24db3cf221/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/bdceec4a-bbca-448a-b947-61f2b189ba96/image.png)

&lt;br&gt;
&lt;br&gt;


## 오픈스택에서의 스토리지
1. 임시 스토리지
* 플레이버에 의해 할당되는 저장장치
* 인스턴스 생성시 함께 생성, 삭제 시 함께 삭제
* 데이터도 함께 삭제
2. 영구 스토리지
    1) 블록 스토리지(Cinder 서비스로 관리)
    * 인스턴스에 직접 연결해서 사용(한번에 하나의 인스턴스에만가능)
    * 인스턴스 내부에서 포맷/마운트 후 데이터 저장 
    * 관리 단위는 볼륨
    * 특정 시점의 상태를 저장할 때에는 스냅샷
    * 볼륨의 데이터를 안전하게 백업할 때에는 백업
    * 모든 작업들은 기본적으로 특정 프로젝트 안에서만 사용
    * 볼륨 전송 기능을 통해 다른 프로젝트에 전달

   2) 오브젝트 스토리지(swift)
   * 인스턴스와 별개로 사용하는 스토리지 
   * 대시보드 / openstack 명령어로 접근 가능
   * URL 주소로 접근가능 (설정에 따라)
   * 파일서버처럼 원하는 데이터(파일) 저장/ 다운로드 

&lt;br&gt;

* 볼륨 스냅샷 생성: `openstack volume snapshot create --volume practice-volume new-snap --force`
![](https://velog.velcdn.com/images/jupiter-j/post/78070e1f-34cd-41c7-9951-8719d17f5cc1/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/888bd6f0-337f-40ab-b5a1-5648e217123a/image.png)

* 볼륨 백업 생성 : `openstack volume backup create --name backup-cli practice-volume --force`

![](https://velog.velcdn.com/images/jupiter-j/post/71bc609d-c536-4082-a13f-e99a55e9bc1a/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/277ce1ea-cbf1-4064-ad1c-cb0142dba390/image.png)

볼륨 백업 생성 → 인스턴스에서 분리 → 전송 요청 생성
* transfer 요청 생성: `openstack volume transfer request create practice-volume`
* transfer 수락: `openstack volume transfer request accept &lt;transfer-request-id&gt; --auth-key &lt;auth-key&gt;`

* volume transfer란?
볼륨을 다른 사용자나 프로젝트에 넘기기 위한 기능. 보안상 auth-key가 반드시 필요
transfer 요청은 생성한 쪽과 수락하는 쪽이 다를 때 사용된다.
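어느 쪽에서 어떤 명령을 실행하는지로 정리하면 대략 아래와 같다. (&lt;transfer-id&gt;, &lt;auth-key&gt;는 create 출력에서 확인한 값)

# 볼륨을 보내는 쪽 프로젝트에서
openstack volume transfer request create practice-volume
# 출력된 id / auth_key 를 받는 쪽에 전달

# 받는 쪽 프로젝트 계정(openrc)으로 전환한 뒤
openstack volume transfer request accept &lt;transfer-id&gt; --auth-key &lt;auth-key&gt;
openstack volume list   # 볼륨이 넘어왔는지 확인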

![](https://velog.velcdn.com/images/jupiter-j/post/ff59038d-bb26-4fd0-9459-841563e64e16/image.png)

&lt;br&gt;

## 오브젝트 스토리지
오브젝트 스토리지는 파일을 저장하고 불러오는 저장소
* 파일 하나하나를 **오브젝트(object)**라고 부름
* 박스 같은 공간인 **컨테이너(container)**에 넣어서 정리
* 하드디스크처럼 디렉토리 구조는 없음, 대신 메타데이터로 관리
* 보통 읽기/쓰기만 하고, 수정은 거의 안 함

</code></pre><h2 id="컨테이너-생성">컨테이너 생성</h2>
<p>openstack container create new-con</p>
<h2 id="오브젝트파일-업로드">오브젝트(파일) 업로드</h2>
<p>openstack object create new-con all-in-one</p>
<h2 id="오브젝트-목록-확인">오브젝트 목록 확인</h2>
<p>openstack object list new-con</p>
<pre><code>![](https://velog.velcdn.com/images/jupiter-j/post/37a700fb-9b8a-42db-b25e-39b95280dce1/image.png)![](https://velog.velcdn.com/images/jupiter-j/post/f4cb7456-58bf-43df-abf8-7c07844fee18/image.png)
컨테이너를 public으로 변경 
해당 설정변경으로 wget명령어를 사용하여 파일을 다운받을 수 있음 
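명령어로 정리하면 대략 아래와 같은 흐름이다. (swift CLI가 설치되어 있다는 가정이고, object-store 엔드포인트 주소와 프로젝트 ID는 환경마다 다르다)

# 컨테이너 읽기 ACL을 public으로 설정
swift post -r '.r:*,.rlistings' new-con

# object-store 엔드포인트 확인 후 URL로 다운로드
openstack catalog list
wget http://192.168.56.250:8080/v1/AUTH_&lt;project-id&gt;/new-con/all-in-one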
![](https://velog.velcdn.com/images/jupiter-j/post/9ac1a992-3efd-469e-80c7-6a9ba6e405ae/image.png)

&lt;br&gt;
&lt;br&gt;
&lt;br&gt;



&gt; https://daaa0555.tistory.com/420
</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[[CCCR] 프라이빗 오픈 클라우드를 위한 오픈스택 구축 및 운영 (2)]]></title>
            <link>https://velog.io/@jupiter-j/CCCR-%ED%94%84%EB%9D%BC%EC%9D%B4%EB%B9%97-%EC%98%A4%ED%94%88-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%98%A4%ED%94%88%EC%8A%A4%ED%83%9D-%EA%B5%AC%EC%B6%95-%EB%B0%8F-%EC%9A%B4%EC%98%81-2</link>
            <guid>https://velog.io/@jupiter-j/CCCR-%ED%94%84%EB%9D%BC%EC%9D%B4%EB%B9%97-%EC%98%A4%ED%94%88-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%98%A4%ED%94%88%EC%8A%A4%ED%83%9D-%EA%B5%AC%EC%B6%95-%EB%B0%8F-%EC%9A%B4%EC%98%81-2</guid>
            <pubDate>Wed, 28 May 2025 00:49:05 GMT</pubDate>
            <description><![CDATA[<h1 id="개념">개념</h1>
<h2 id="가상화">가상화</h2>
<ol>
<li>개념: 물리적인 리소스를 논리적으로 추상화<ul>
<li>스토리지 - RAID / LVM, Thin Provisioning, SDS</li>
<li>네트워크 - VLAN / Bonding, SDN / NFV</li>
<li>시스템 - 가상머신(VM), 컨테이너</li>
</ul>
</li>
</ol>
<h2 id="클라우드">클라우드</h2>
<ol>
<li>개념: 네트워크(인터넷)를 통해 언제/어디서든 접근 가능한 온디맨드 방식으로 서비스를 제공하는 형태</li>
<li>특징: 비용/ 시간/ 가시성/ 사전지식</li>
<li>종류: 퍼블릭/ 프라이빗/ 하이브리드 클라우드의 종류로 나뉜다. </li>
</ol>
<h2 id="오픈스택">오픈스택</h2>
<ol>
<li>개념: 프라이빗 클라우드를 구성하고 운영하는 도구 중 하나 </li>
<li>배포방식: 구성 서비스를 프로세스 형태로 배포하는지, 컨테이너 형태로 배포하는지에 따라 나뉜다 </li>
</ol>
<br>

<h3 id="오픈스택-아키텍쳐">오픈스택 아키텍쳐</h3>
<ul>
<li><p>컨트롤러 노드 - 오픈스택 환경 구성 및 관리</p>
</li>
<li><blockquote>
<p>클러스터 구성후 사용(3대 이상)</p>
</blockquote>
</li>
<li><p>컴퓨트 노드 - 생성하는 인스턴스에 리소스 제공</p>
</li>
<li><p>네트워크 노드 - 외부 통신과 관련한 부분 담당, 네트워크 구성 </p>
</li>
<li><p>스토리지 노드(HCI) - 스토리지 서비스를 통한 스토리지 제공</p>
</li>
</ul>
<h3 id="오픈스택-컴포넌트">오픈스택 컴포넌트</h3>
<p>keystone(ID관리), horizon(대시보드), nova(컴퓨트-인스턴스 스케줄링), glance(이미지 관리), neutron(네트워크), cinder(블록스토리지), manila(공유 스토리지), heat(오케스트레이션-배포관리), ceilometer/gnocchi/aodh(사용량 측정)  </p>
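<p>설치된 환경에 어떤 컴포넌트가 올라와 있는지는 대략 아래 명령으로 확인해볼 수 있다.</p>
<pre><code># keystone에 등록된 서비스와 엔드포인트 확인
openstack service list
openstack endpoint list</code></pre>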
<h3 id="대시보드">대시보드</h3>
<ul>
<li>도메인 - 논리적인 단위 중 최상위 개체(프로젝트 및 사용자/그룹 포함)</li>
<li>프로젝트 - 리소스들이 격리, 할당량 제한, 접근제어</li>
<li>사용자(그룹) - 접근제어를 위한 대상  </li>
<li>역할(정책) - 개체 별 동작에 대한 가능 여부 결정 </li>
</ul>
<h3 id="역할">역할</h3>
<ul>
<li><p>역할 확인 : <code>openstack role list</code>
<img src="https://velog.velcdn.com/images/jupiter-j/post/a2370148-d29a-404c-9632-b2690ddf2b31/image.png" alt=""></p>
</li>
<li><p>역할 할당 목록: <code>openstack role assignment list --names</code>
<img src="https://velog.velcdn.com/images/jupiter-j/post/4ccf5d32-bd8e-4a60-bfff-7c528a0f34e9/image.png" alt=""></p>
</li>
<li><p>역할 할당 추가 : <code>openstack role add --project demo-project --user demo-user member</code> 
<img src="https://velog.velcdn.com/images/jupiter-j/post/0baccc53-8360-411d-8dc2-f5ef3cbb4d52/image.png" alt=""></p>
</li>
<li><p>역할 할당 제거 : <code>openstack role remove --project demo-project --user demo-user member</code>
<img src="https://velog.velcdn.com/images/jupiter-j/post/74aaae1e-c52e-4f57-9b94-750a5608c75b/image.png" alt=""></p>
</li>
</ul>
<h2 id="인스턴스-생성">인스턴스 생성</h2>
<ul>
<li>메모리 고려 하여 컨테이너 stop : <code>sudo docker stop $(sudo docker ps | grep -e heat -e gnocchi -e aodh -e ceilo -e octavia | awk &#39;{print $1}&#39;)</code></li>
</ul>
<pre><code>
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack role create vm-list-viewer
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | a6b6e454884d46dbbe03a0a3a191df92 |
| name        | vm-list-viewer                   |
| options     | {}                               |
+-------------+----------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack role add --user test-user --project demo-project vm-list-viewer
No user with a name or ID of &#39;test-user&#39; exists.
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack role assignment list --names --user -test-user
usage: openstack role assignment list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}]
                                      [--noindent] [--max-width &lt;integer&gt;] [--fit-width] [--print-empty]
                                      [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--effective]
                                      [--role &lt;role&gt;] [--role-domain &lt;role-domain&gt;] [--names] [--user &lt;user&gt;]
                                      [--user-domain &lt;user-domain&gt;] [--group &lt;group&gt;] [--group-domain &lt;group-domain&gt;]
                                      [--domain &lt;domain&gt; | --project &lt;project&gt; | --system &lt;system&gt;]
                                      [--project-domain &lt;project-domain&gt;] [--inherited] [--auth-user] [--auth-project]
openstack role assignment list: error: argument --user: expected one argument
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack role assignment list --names --user test-user
No user with a name or ID of &#39;test-user&#39; exists.
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack user create --password 123 test-user
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d3feb4147e2247939ba5c87aa692b2e4 |
| name                | test-user                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack role assignment list --names --user test-user

(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack role assignment list --names --user test-user

(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ ㅣㄴ
-bash: ㅣㄴ: command not found
(os-venv) vagrant@openstack-aio:/etc/kolla$ ls
admin-openrc.sh          haproxy                    neutron-server          redis-sentinel
aodh-api                 heat-api                   nova-api                swift
aodh-evaluator           heat-api-cfn               nova-api-bootstrap      swift-account-auditor
aodh-listener            heat-engine                nova-cell-bootstrap     swift-account-reaper
aodh-notifier            horizon                    nova-compute            swift-account-replication-server
ceilometer-central       iscsid                     nova-conductor          swift-account-replicator
ceilometer-compute       keepalived                 nova-libvirt            swift-account-server
ceilometer-notification  keystone                   nova-novncproxy         swift-container-auditor
cinder-api               keystone-fernet            nova-scheduler          swift-container-replication-server
cinder-backup            keystone-ssh               nova-ssh                swift-container-replicator
cinder-scheduler         kolla-toolbox              octavia-api             swift-container-server
cinder-volume            manila-api                 octavia-certificates    swift-container-updater
clouds.yaml              manila-data                octavia-health-manager  swift-object-auditor
config                   manila-scheduler           octavia-housekeeping    swift-object-expirer
cron                     manila-share               octavia-openrc.sh       swift-object-replication-server
demo-user_admin.sh       mariadb                    octavia-worker          swift-object-replicator
fluentd                  mariadb-clustercheck       openvswitch-db-server   swift-object-server
glance-api               memcached                  openvswitch-vswitchd    swift-object-updater
globals.d                neutron-dhcp-agent         passwords.yml           swift-proxy-server
globals.yml              neutron-l3-agent           placement-api           swift-rsyncd
gnocchi-api              neutron-metadata-agent     rabbitmq                tgtd
gnocchi-metricd          neutron-openvswitch-agent  redis
(os-venv) vagrant@openstack-aio:/etc/kolla$ cp admin-openrc.sh test-openrc.sh
</code></pre><pre><code>
(os-venv) vagrant@openstack-aio:/etc/kolla$ cat test-openrc.sh
# Ansible managed

# Clear any old environment that may conflict.
for key in $( set | awk &#39;{FS=&quot;=&quot;}  /^OS_/ {print $1}&#39; ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME=&#39;Default&#39;
export OS_USER_DOMAIN_NAME=&#39;Default&#39;
export OS_PROJECT_NAME=&#39;demo-project&#39;
export OS_TENANT_NAME=&#39;admin&#39;
export OS_USERNAME=&#39;test-user&#39;
export OS_PASSWORD=&#39;123&#39;
export OS_AUTH_URL=&#39;http://192.168.56.250:5000&#39;
export OS_INTERFACE=&#39;internal&#39;
export OS_ENDPOINT_TYPE=&#39;internalURL&#39;
export OS_MANILA_ENDPOINT_TYPE=&#39;internalURL&#39;
export OS_IDENTITY_API_VERSION=&#39;3&#39;
export OS_REGION_NAME=&#39;Seoul&#39;
</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/b75c65ff-b070-404b-b705-b2bef9201289/image.png" alt=""></p>
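<p>복사해서 수정한 test-openrc.sh 가 실제로 동작하는지는 대략 아래처럼 확인할 수 있다. 다만 test-user에 해당 프로젝트의 역할이 할당되어 있지 않으면 인증 단계에서 에러가 난다.</p>
<pre><code># 새 사용자 인증 정보 적용 후 토큰 발급과 목록 조회로 확인
source /etc/kolla/test-openrc.sh
openstack token issue
openstack server list</code></pre>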
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/715f05d5-d517-45b0-b808-ee6827639fd7/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/a227f8ec-6152-4cd4-bbbf-eb23f07f3312/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/b46ed21b-22a9-4017-b08b-8d9d279c61d1/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/066f0bf5-7782-4fa1-9fcb-433795eb2581/image.png" alt=""></p>
<br>
<br>

<blockquote>
<h2 id="실습-설정">실습 설정</h2>
</blockquote>
<ul>
<li>프로젝트 1개 - practice-project</li>
<li>사용자 2개 - operator(권한:admin역할) , normal-user(권한:member역할)</li>
<li>인증파일 준비 - operator.sh , normal.sh</li>
</ul>
<pre><code>openstack user list
openstack role assignment list --project practice-project --names
openstack user create --domain default --password &#39;123&#39; operator
openstack user create --domain default --password &#39;123&#39; normal-user
openstack project create practice-project
openstack role add --project practice-project --user operator admin
openstack role add --project practice-project --user normal-user member


(os-venv) vagrant@openstack-aio:/etc/kolla$ cat operator.sh
# Ansible managed

# Clear any old environment that may conflict.
for key in $( set | awk &#39;{FS=&quot;=&quot;}  /^OS_/ {print $1}&#39; ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME=&#39;Default&#39;
export OS_USER_DOMAIN_NAME=&#39;Default&#39;
export OS_PROJECT_NAME=&#39;practice-project&#39;
export OS_TENANT_NAME=&#39;operator&#39;
export OS_USERNAME=&#39;operator&#39;
export OS_PASSWORD=&#39;123&#39;
export OS_AUTH_URL=&#39;http://192.168.56.250:5000&#39;
export OS_INTERFACE=&#39;internal&#39;
export OS_ENDPOINT_TYPE=&#39;internalURL&#39;
export OS_MANILA_ENDPOINT_TYPE=&#39;internalURL&#39;
export OS_IDENTITY_API_VERSION=&#39;3&#39;
export OS_REGION_NAME=&#39;Seoul&#39;
export OS_AUTH_PLUGIN=&#39;password&#39;
(os-venv) vagrant@openstack-aio:/etc/kolla$ cat normal.sh
# Ansible managed

# Clear any old environment that may conflict.
for key in $( set | awk &#39;{FS=&quot;=&quot;}  /^OS_/ {print $1}&#39; ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME=&#39;Default&#39;
export OS_USER_DOMAIN_NAME=&#39;Default&#39;
export OS_PROJECT_NAME=&#39;practice-project&#39;
export OS_TENANT_NAME=&#39;admin&#39;
export OS_USERNAME=&#39;normal-user&#39;
export OS_PASSWORD=&#39;123&#39;
export OS_AUTH_URL=&#39;http://192.168.56.250:5000&#39;
export OS_INTERFACE=&#39;internal&#39;
export OS_ENDPOINT_TYPE=&#39;internalURL&#39;
export OS_MANILA_ENDPOINT_TYPE=&#39;internalURL&#39;
export OS_IDENTITY_API_VERSION=&#39;3&#39;
export OS_REGION_NAME=&#39;Seoul&#39;
</code></pre><br>
<br>

<blockquote>
<h2 id="실습">실습</h2>
</blockquote>
<ol>
<li>cirros 이미지를 my-image라는 이름으로 생성</li>
<li>보호 설정을 활성화</li>
<li>삭제 시도</li>
<li>test-project 라는 프로젝트 생성</li>
<li>test-user 사용자 생성</li>
<li>test-user 사용자에 test-project에 대한 member 역할 설정</li>
<li>test-user 사용자로 이미지 목록 확인</li>
<li>가시성 설정을 public으로 변경</li>
<li>test-user 사용자로 다시 확인</li>
</ol>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/7fbf4f20-cd42-4038-a017-5d1570b60961/image.png" alt=""></p>
<h3 id="이미지-생성">이미지 생성</h3>
<ul>
<li>이미지 생성 : <code>openstack image create --file cirros-0.6.2-x86_64-disk.img --disk-format qcow2 demo-img</code></li>
<li>이미지 목록 확인 : <code>openstack image list</code><pre><code></code></pre></li>
</ul>
<p>(os-venv) vagrant@openstack-aio:~$ ls
all-in-one                 cirros-0.6.2-x86_64-disk.img        octavia
amphora-x64-haproxy.d      debian-12-genericcloud-amd64.qcow2  os-venv
amphora-x64-haproxy.qcow2  manila-service-image-master.qcow2   VBoxGuestAdditions_7.1.6.iso
(os-venv) vagrant@openstack-aio:~$ openstack image create --file cirros-0.6.2-x86_64-disk.img --disk-format qcow2 demo-img
+------------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                    |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                                              |
| created_at       | 2025-05-28T06:33:33Z                                                                                                                                              |
| disk_format      | qcow2                                                                                                                                                             |
| file             | /v2/images/fd0fec44-42e3-4925-bb31-f9412798797f/file                                                                                                              |
| id               | fd0fec44-42e3-4925-bb31-f9412798797f                                                                                                                              |
| min_disk         | 0                                                                                                                                                                 |
| min_ram          | 0                                                                                                                                                                 |
| name             | demo-img                                                                                                                                                          |
| owner            | 00855a5cafa646478a16f350df1f00f6                                                                                                                                  |
| properties       | os_hidden=&#39;False&#39;, owner_specified.openstack.md5=&#39;&#39;, owner_specified.openstack.object=&#39;images/demo-img&#39;, owner_specified.openstack.sha256=&#39;&#39; |
| protected        | False                                                                                                                                                             |
| schema           | /v2/schemas/image                                                                                                                                                 |
| status           | queued                                                                                                                                                            |
| tags             |                                                                                                                                                                   |
| updated_at       | 2025-05-28T06:33:33Z                                                                                                                                              |
| visibility       | shared                                                                                                                                                            |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------+
(os-venv) vagrant@openstack-aio:~$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 6906bd8f-7956-4d85-ad43-87af6da739b7 | amphora-x64-haproxy | active |
| fd0fec44-42e3-4925-bb31-f9412798797f | demo-img            | active |
| 4d9abb87-9dbb-4a4a-a945-6fbe87dc090c | my-image            | queued |
| 0eb99ac7-03d5-4d35-858f-8a541f9af77b | my-image1           | queued |
| 2d88772b-3fd4-4558-be12-b430b5a00519 | my-image1           | queued |
+--------------------------------------+---------------------+--------+
(os-venv) vagrant@openstack-aio:~$</p>
<pre><code>- 보호 해제 후 이미지 삭제 : `openstack image set --unprotected 4d9abb87-9dbb-4a4a-a945-6fbe87dc090c` 실행 후 `openstack image delete 4d9abb87-9dbb-4a4a-a945-6fbe87dc090c`

![](https://velog.velcdn.com/images/jupiter-j/post/0975594d-9433-451c-8507-693a42fed4ba/image.png)


&lt;br&gt;

### flavor 생성
![](https://velog.velcdn.com/images/jupiter-j/post/ccc7f586-ea8d-4ec9-b33a-f52413b87a10/image.png)

![](https://velog.velcdn.com/images/jupiter-j/post/d9377843-f2b5-4678-9405-4b6c8af7402b/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/080431f9-ed32-4e4d-a0d5-09219da908ad/image.png)</code></pre><p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack flavor list
+--------------------------------------+------+-----+------+-----------+-------+-----------+
| ID                                   | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------+-----+------+-----------+-------+-----------+
| da5dc3b9-9177-4341-b94d-c1299008f590 | mini | 512 |    1 |         0 |     1 | True      |
+--------------------------------------+------+-----+------+-----------+-------+-----------+</p>
<p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack flavor list --all
+--------------------------------------+---------+-----+------+-----------+-------+-----------+
| ID                                   | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------+-----+------+-----------+-------+-----------+
| 200                                  | amphora | 512 |    5 |         0 |     1 | False     |
| da5dc3b9-9177-4341-b94d-c1299008f590 | mini    | 512 |    1 |         0 |     1 | True      |
+--------------------------------------+---------+-----+------+-----------+-------+-----------+</p>
<pre><code>

- flavor 생성: `openstack flavor create --ram 512 --disk 1 --vcpus 1 mini-flavor`
- 임시 디스크/스왑이 포함된 flavor 생성: `openstack flavor create --ram 1024 --disk 10 --vcpus 1 --ephemeral 1 --swap 12 demo-disk`

![](https://velog.velcdn.com/images/jupiter-j/post/88b22f22-9b46-46bf-a77f-8ad71d5dcf76/image.png)![](https://velog.velcdn.com/images/jupiter-j/post/6c077d6f-9118-4e15-af95-2fd6811077ef/image.png)

&lt;br&gt;

### 네트워크 생성 

- 일반 사용자는 내부용 네트워크만 생성 가능
- 관리자 네트워크는 외부, 내부 네트워크 생성 가능

![](https://velog.velcdn.com/images/jupiter-j/post/d9ca1f39-953b-48e6-854d-a8d3fcfb041f/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/26e3aa68-65bb-474f-aa0c-82d56a400f79/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/0394f943-5b8b-4a63-905a-f4d86b0c6432/image.png)

![](https://velog.velcdn.com/images/jupiter-j/post/a86ba2f9-177a-4aad-9750-e574cfd44fe8/image.png)
</code></pre><p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack flavor show demo-flavor
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| access_project_ids         | None                                 |
| description                | None                                 |
| disk                       | 10                                   |
| id                         | 76df590a-6e24-465b-98f6-a500b6ff4355 |
| name                       | demo-flavor                          |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 2028                                 |
| rxtx_factor                | 1.0                                  |
| swap                       | 0                                    |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack network</p>
<pre><code>![](https://velog.velcdn.com/images/jupiter-j/post/2bf269cb-2dbe-4d87-b729-1a3c0010f7aa/image.png)

* 네트워크 생성 : ` openstack network create new-int`
* 네트워크 목록 확인: `openstack network list`
</code></pre><p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack network list
+--------------------------------------+------------------------+--------------------------------------+
| ID                                   | Name                   | Subnets                              |
+--------------------------------------+------------------------+--------------------------------------+
| 1b3dc678-3fc3-435c-9d4e-642100de4763 | lb-mgmt-net            | e485a912-594b-4c5d-a7a3-46bee0139c2a |
| b026697c-fb06-4536-b7a1-e58b1dc5f402 | manila_service_network |                                      |
| d52a595d-d6e5-41a0-b01d-1115a4550c42 | demo-network           | 281028ff-1f61-4ce9-824a-a3eb13e01e1a |
+--------------------------------------+------------------------+--------------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$</p>
<pre><code>- 서브넷 생성: `openstack subnet create --network demo-network --subnet-range 172.17.0.0/24 --gateway 172.17.0.1 --dhcp --dns-nameserver 8.8.8.8 --allocation-pool start=172.17.0.2,end=172.17.0.254 new-subnet
`</code></pre><p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack subnet create --network demo-network --subnet-range 172.17.0.0/24 --gateway 172.17.0.1 --dhcp --dns-nameserver 8.8.8.8 --allocation-pool start=172.17.0.2,end=172.17.0.254 new-subnet
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 172.17.0.2-172.17.0.254              |
| cidr                 | 172.17.0.0/24                        |
| created_at           | 2025-05-28T07:23:13Z                 |
| description          |                                      |
| dns_nameservers      | 8.8.8.8                              |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | True                                 |
| gateway_ip           | 172.17.0.1                           |
| host_routes          |                                      |
| id                   | 25bc9fb4-aa59-4450-a922-cc9ebb58d25f |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | new-subnet                           |
| network_id           | d52a595d-d6e5-41a0-b01d-1115a4550c42 |
| project_id           | 00855a5cafa646478a16f350df1f00f6     |
| revision_number      | 0                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | None                                 |
| tags                 |                                      |
| updated_at           | 2025-05-28T07:23:13Z                 |
+----------------------+--------------------------------------+</p>
<p>(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack subnet list
+--------------------------------------+----------------+--------------------------------------+----------------+
| ID                                   | Name           | Network                              | Subnet         |
+--------------------------------------+----------------+--------------------------------------+----------------+
| 25bc9fb4-aa59-4450-a922-cc9ebb58d25f | new-subnet     | d52a595d-d6e5-41a0-b01d-1115a4550c42 | 172.17.0.0/24  |
| 281028ff-1f61-4ce9-824a-a3eb13e01e1a | demo-subnet    | d52a595d-d6e5-41a0-b01d-1115a4550c42 | 192.168.0.0/24 |
| e485a912-594b-4c5d-a7a3-46bee0139c2a | lb-mgmt-subnet | 1b3dc678-3fc3-435c-9d4e-642100de4763 | 10.1.0.0/24    |
+--------------------------------------+----------------+--------------------------------------+----------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$</p>
<pre><code>![](https://velog.velcdn.com/images/jupiter-j/post/f3ec74b8-523b-479a-a2ea-8ac2b1e992a0/image.png)

- 네트워크 삭제: `openstack network delete demo-network`
네트워크를 지우면 서브넷은 자동으로 삭제가 된다. 


&gt; ## 실습
1. 이미지 생성 
* 이름: test-img / 파일:cirros / 유형: qcow2/ 가시성:shared/ 보호설정은 필요없음
* 이름: debian-img / 파일:debian / 유형: qcow2/ 가시성:public/ 보호설정은 활성화 
2. 플레이버 생성 
* 이름: mini / vcpu:1, ram:512M, root-disk:1G 
* 이름: demo-flavor, vcpu:1, ram:2048M, root-disk:10G
3. 네트워크 생성
* 네트워크 이름: internal, 
* subnet이름: internal-subnet, ip대역: 192.168.100.0/24, gateway:192.168.100.1, DNS:8.8.8.8  


### 이미지 생성 
* test-image 생성 : `openstack image create &quot;test-img&quot; --file cirros-0.6.2-x86_64-disk.img --disk-format qcow2 --container-format bare --shared`
* debian-image 생성: `openstack image create &quot;debian-img&quot; --file debian-12-genericcloud-amd64.qcow2 --disk-format qcow2 --container-format bare --public --protected`
![](https://velog.velcdn.com/images/jupiter-j/post/1995d4f4-2c26-45df-8ec1-6d25dc72a3ca/image.png)![](https://velog.velcdn.com/images/jupiter-j/post/d2829601-c7af-4921-934b-75fc1329da20/image.png)![](https://velog.velcdn.com/images/jupiter-j/post/6bf626c5-7e96-4cf8-a1a8-19e4fccc5382/image.png)

&lt;br&gt;

### 플레이버 생성 
* mini-flavor: `openstack flavor create --ram 512 --disk 1 --vcpus 1 mini-flavor`
* demo-flavor: `openstack flavor create --ram 2028 --disk 10 --vcpus 1 demo-flavor`
![](https://velog.velcdn.com/images/jupiter-j/post/00cfec04-d775-42eb-b8eb-f8eaad8eac49/image.png)

&lt;br&gt;

### 네트워크 생성
* 네트워크 생성 : ` openstack network create internal &amp;&amp; openstack subnet create internal-subnet --network internal --subnet-range 192.168.100.0/24 --gateway 192.168.100.1 --dns-nameserver 8.8.8.8`

![](https://velog.velcdn.com/images/jupiter-j/post/8a6649da-6c86-4aed-ba1b-3a1d1192ff2c/image.png)![](https://velog.velcdn.com/images/jupiter-j/post/d84db6bd-4fef-490b-9168-7fbbeddf38bc/image.png)![](https://velog.velcdn.com/images/jupiter-j/post/7f1b1ffd-4513-4e23-bb06-7c3f45f70fb1/image.png)

&lt;br&gt;


## 인스턴스 생성
![](https://velog.velcdn.com/images/jupiter-j/post/a9664c86-ea8e-4ebd-a7cd-10a36dd038d3/image.png)

![](https://velog.velcdn.com/images/jupiter-j/post/ff72d248-8317-4f76-8ff3-da35f6dc8fe0/image.png)
![](https://velog.velcdn.com/images/jupiter-j/post/c3bc0b89-213f-4817-871d-24db7e29c1c2/image.png)
</code></pre><p>(os-venv) vagrant@openstack-aio:~$ openstack server create --image test-img --flavor mini-flavor --network internal test-vm
+-------------------------------------+----------------------------------------------------+
| Field                               | Value                                              |
+-------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                             |
| OS-EXT-AZ:availability_zone         |                                                    |
| OS-EXT-SRV-ATTR:host                | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                               |
| OS-EXT-SRV-ATTR:instance_name       |                                                    |
| OS-EXT-STS:power_state              | NOSTATE                                            |
| OS-EXT-STS:task_state               | scheduling                                         |
| OS-EXT-STS:vm_state                 | building                                           |
| OS-SRV-USG:launched_at              | None                                               |
| OS-SRV-USG:terminated_at            | None                                               |
| accessIPv4                          |                                                    |
| accessIPv6                          |                                                    |
| addresses                           |                                                    |
| adminPass                           | vy5RpQnmvzmg                                       |
| config_drive                        |                                                    |
| created                             | 2025-05-28T08:35:49Z                               |
| flavor                              | mini-flavor (6c4f3d8a-799b-44e1-8035-7e56966b086f) |
| hostId                              |                                                    |
| id                                  | e84dd025-363c-41bf-bdf6-ad639ec4cd33               |
| image                               | test-img (2ee4f418-d721-4fe4-aebf-0920206ce358)    |
| key_name                            | None                                               |
| name                                | test-vm                                            |
| progress                            | 0                                                  |
| project_id                          | 00855a5cafa646478a16f350df1f00f6                   |
| properties                          |                                                    |
| security_groups                     | name=&#39;default&#39;                                     |
| status                              | BUILD                                              |
| updated                             | 2025-05-28T08:35:50Z                               |
| user_id                             | 58d8e1d0c87143aaad968509ea167b17                   |
| volumes_attached                    |                                                    |
+-------------------------------------+----------------------------------------------------+</p>
<pre><code>- 인스턴스 생성 : ` openstack server create --image test-img --flavor mini-flavor --network internal test-vm`


![](https://velog.velcdn.com/images/jupiter-j/post/e580628e-4d45-4a00-8e56-fc888268e02b/image.png)

![](https://velog.velcdn.com/images/jupiter-j/post/643d1f5a-d107-4c34-a53c-0a622792b1c3/image.png)

![](https://velog.velcdn.com/images/jupiter-j/post/721387c7-8124-482a-a5a5-d93a11f92a1f/image.png)

![](https://velog.velcdn.com/images/jupiter-j/post/30cafbcd-a6cb-4105-a5ad-635db5ac46aa/image.png)






### 오픈스택 인스턴스 생성과정
1. horizon 서비스에서 인스턴스 생성 작업 지시
2. keystone 서비스를 통해 인증
3. nova-api 서비스로 인스턴스 생성 요청
4. nova-conductor 서비스로 요청 내용 전달
5. nova-scheduler 서비스로 요청해서 사용 가능한 노드를 확인
6. placement 서비스를 통해 리소스 사용에 대한 정보 확인
7. 정보를 바탕으로 사용 가능한 노드 목록을 선정
8. nova-scheduler에서 노드를 선택
9. nova-conductor 최종으로 선택해서 배치
10. glance-api에 이미지를 요청
11. 백엔드 스토리지에서 선택된 컴퓨트 노드로 이미지를 복사
12. 컴퓨트 노드에 있는 nova-compute 서비스와 hypervisor로 인스턴스를 생성함 
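생성이 끝난 뒤 어느 컴퓨트 노드에 배치됐는지와 진행 이벤트는 대략 아래 명령으로 확인해볼 수 있다. (OS-EXT-SRV-ATTR:host 필드는 관리자 권한으로 조회할 때 보인다)

openstack server show test-vm -c OS-EXT-SRV-ATTR:host -c status
openstack server event list test-vm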



</code></pre>]]></description>
        </item>
        <item>
            <title><![CDATA[[CCCR] 프라이빗 오픈 클라우드를 위한 오픈스택 구축 및 운영 (1)]]></title>
            <link>https://velog.io/@jupiter-j/openstack</link>
            <guid>https://velog.io/@jupiter-j/openstack</guid>
            <pubDate>Tue, 27 May 2025 02:34:27 GMT</pubDate>
            <description><![CDATA[<ol>
<li><p>파일 &gt; 가상시스템 가져오기 선택하여 vm 생성
<img src="https://velog.velcdn.com/images/jupiter-j/post/239fc948-33ec-43f3-8e46-d6eabbef6788/image.png" alt=""></p>
</li>
<li><p>네트워크 설정 , 이름 설정, 공유폴더 설정 </p>
<blockquote>
<p>파일&gt;도구&gt;네트워크 관리자
hostonly 네트워크 2개 확인</p>
</blockquote>
</li>
<li><p>192.168.56.1/24</p>
</li>
<li><p>192.168.57.1/24</p>
</li>
</ol>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/4e98cd7b-1596-48aa-9c32-e2d6a7196780/image.png" alt=""></p>
<ol start="3">
<li>네트워크 어댑터 2, 3은 hostonly로 설정, 만들어둔 네트워크 어댑터 이름을 선택하기</li>
</ol>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/31171e2d-1ddd-47f1-86e9-d5b3e838a3e3/image.png" alt=""></p>
<ol start="4">
<li>putty 접속<blockquote>
<p>사용자 vagrant / vagrant
ssh vagrant@192.168.56.200</p>
</blockquote>
</li>
</ol>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/8a823492-ac65-474f-b40f-7a03842e89a1/image.png" alt=""></p>
<blockquote>
<h2 id="오픈스택-설치">오픈스택 설치</h2>
<p>해당 실습은 이미 kolla-ansible이 설치되었다는 가정하에 진행하므로, 환경이 없다면 아래 링크를 참고해 설치한다.
<a href="https://docs.openstack.org/kolla-ansible/2024.1/user/quickstart.html">https://docs.openstack.org/kolla-ansible/2024.1/user/quickstart.html</a></p>
</blockquote>
<h3 id="파이썬-가상환경-활성화">파이썬 가상환경 활성화</h3>
<pre><code>
(os-venv) vagrant@openstack-aio:~$ tail ~/.bashrc
# this, if it&#39;s already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi
source os-venv/bin/activate ## 추가
</code></pre><br>


<h3 id="kolla-ansible-깃허브">kolla-ansible 깃허브</h3>
<p><a href="https://opendev.org/openstack/kolla-ansible">https://opendev.org/openstack/kolla-ansible</a></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/5a20944f-5b34-4aac-b0ee-507a1c4efee0/image.png" alt=""></p>
<br>

<h2 id="도메인">도메인</h2>
<p>오픈스택은 물리적인 구분 단위로 리전(Region)을 사용하고, 논리적인 구분 단위로 도메인과 프로젝트라는 개념을 사용한다. </p>
<p>도메인은 하나의 리전에 포함되는 가장 큰 논리적 단위이다. 도메인 안에는 프로젝트와 사용자 및 그룹이 포함되며 이를 통해 리소스에 대한 격리 및 접근제어 역할을 수행한다. </p>
<p>관리자 권한의 계정으로만 작업 가능함.
<img src="https://velog.velcdn.com/images/jupiter-j/post/1b86bd26-f5a4-40f2-8d99-be95637d6519/image.png" alt=""></p>
<ul>
<li>도메인 생성: <code>openstack domain create new-domain</code></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/fed9d938-c8c1-479a-8076-6217c69759b2/image.png" alt=""></p>
<ul>
<li>도메인 삭제
최상위 단위이기 때문에 비활성화 후 삭제를 할수있다. </li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/19410a36-1f24-4dde-9824-9eecaa08bda5/image.png" alt=""></p>
<ul>
<li>비활성화: <code>openstack domain set --disable new-domain</code></li>
<li>삭제: <code>openstack domain delete new-domain</code><pre><code>(os-venv) vagrant@openstack-aio:~$ openstack domain set --disable new-domain
(os-venv) vagrant@openstack-aio:~$ openstack domain delete new-domain</code></pre></li>
</ul>
<h3 id="대시보드-접속">대시보드 접속</h3>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/35a9ba88-287b-438e-a1be-7073a539e17d/image.png" alt=""></p>
<ul>
<li>pw/id를 모를경우
kolla ansible기준 /etc/kolla/admin-openrc.sh 파일에 해당 값이 존재함 확인하기 
<img src="https://velog.velcdn.com/images/jupiter-j/post/b848e0e5-3402-4137-8ace-739d6f53985b/image.png" alt=""></li>
</ul>
<ul>
<li>프로젝트 확인시 왼쪽 상단 잘보기 
<img src="https://velog.velcdn.com/images/jupiter-j/post/f96264bb-2d7e-4347-baeb-045111f8b430/image.png" alt=""></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/2962fe29-e5ad-4dd9-aa83-4b72c4a9a014/image.png" alt="">
인증&gt; 프로젝트의 위치에 있는 값 2개는 오픈스택 어느버전이든 default로 설치되어있음. 오픈스택 서비스가 동작하기 위해 사용됨</p>
<h3 id="프로젝트-생성---gui">프로젝트 생성 - GUI</h3>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/3fb92396-a752-4839-8b3e-ec034d06ac96/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/73429a17-6227-4a8b-91fa-be20f0a5bade/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/ac6ed057-6e56-4364-aa2c-423fc82dfb8c/image.png" alt="">
멤버관리 &gt; 프로젝트 편집: 유저 권한/그룹권한을 부여할 수 있음. 
멤버관리 &gt; Quotas : cpu,memory등 수정 가능</p>
<p>도메인은 활성화 상태에서 바로 삭제할수 없으나 프로젝트는 활성화 상태에서도 바로 삭제가 가능하다. </p>
<h3 id="프로젝트-생성---cli">프로젝트 생성 - CLI</h3>
<pre><code>
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 00855a5cafa646478a16f350df1f00f6 | admin   |
| e1cbe4b32b1744bf8f78d964db3215fc | service |
+----------------------------------+---------+
(os-venv) vagrant@openstack-aio:/etc/kolla$
</code></pre><ul>
<li>서브 프로젝트 생성: <code>openstack project create --parent new-project sub-project</code>
프로젝트 명은 중복으로 만들 수 없다. 
삭제시 프로젝트가 서브와 중첩되어있기 때문에 하위 프로젝트인 서브 프로젝트부터 삭제해야한다. </li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/8df4b731-463c-4804-85e4-46cd30ce8b59/image.png" alt=""></p>
<h3 id="할당량-관리">할당량 관리</h3>
<p>관리&gt;시스템&gt;기본 : 앞으로 생성할 모든 프로젝트의 기본값을 수정
인증&gt;멤버관리 &gt; Quotas : 단일 프로젝트 기본값 수정 
<img src="https://velog.velcdn.com/images/jupiter-j/post/eea358ac-e807-4a53-a56e-7c7b3ca6ad93/image.png" alt=""><img src="https://velog.velcdn.com/images/jupiter-j/post/1e397de4-6ca2-407a-93ef-38501a26b258/image.png" alt="">
--옵션을 사용하여 compute, volume등을 확인 가능 </p>
<br>

<h1 id="실습">실습</h1>
<blockquote>
</blockquote>
<ol>
<li>도메인 생성 : practice-domain</li>
<li>도메인 안에 프로젝트 생성 : upper-project</li>
<li>리소스 기본 할당량을 조정 및 확인</li>
<li>프로젝트 중첩해서 생성 : lower-project</li>
<li>리소스 할당량 조정 및 확인: lower-project의 리소스 할당량 변경</li>
<li>프로젝트 및 도메인 삭제
lower-project 삭제 &gt; upper-project 삭제 &gt; practice-domain 비활성화 &gt; 삭제 </li>
</ol>
<pre><code>
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack domain create practice-domain
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| enabled     | True                             |
| id          | 200f3f3aeffc4ed7870b10a9250d0cf5 |
| name        | practice-domain                  |
| options     | {}                               |
| tags        | []                               |
+-------------+----------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack domain list
+----------------------------------+------------------+---------+--------------------+
| ID                               | Name             | Enabled | Description        |
+----------------------------------+------------------+---------+--------------------+
| 200f3f3aeffc4ed7870b10a9250d0cf5 | practice-domain  | True    |                    |
| 678ffb142e184d89bc5aaeebccbc86cd | heat_user_domain | True    |                    |
| default                          | Default          | True    | The default domain |
+----------------------------------+------------------+---------+--------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project create --domain practice-domain upper-project
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | 200f3f3aeffc4ed7870b10a9250d0cf5 |
| enabled     | True                             |
| id          | 09a54747b7ba4ec18d92bc440c3eca63 |
| is_domain   | False                            |
| name        | upper-project                    |
| options     | {}                               |
| parent_id   | 200f3f3aeffc4ed7870b10a9250d0cf5 |
| tags        | []                               |
+-------------+----------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project list
+----------------------------------+---------------+
| ID                               | Name          |
+----------------------------------+---------------+
| 00855a5cafa646478a16f350df1f00f6 | admin         |
| 09a54747b7ba4ec18d92bc440c3eca63 | upper-project |
| e1cbe4b32b1744bf8f78d964db3215fc | service       |
+----------------------------------+---------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project create lower-project --domain practice-domain --parent 09a54747b7ba4ec18d92bc440c3eca63
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | 200f3f3aeffc4ed7870b10a9250d0cf5 |
| enabled     | True                             |
| id          | e1279735f0ae459b93ff5323526c87cc |
| is_domain   | False                            |
| name        | lower-project                    |
| options     | {}                               |
| parent_id   | 09a54747b7ba4ec18d92bc440c3eca63 |
| tags        | []                               |
+-------------+----------------------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project list
+----------------------------------+---------------+
| ID                               | Name          |
+----------------------------------+---------------+
| 00855a5cafa646478a16f350df1f00f6 | admin         |
| 09a54747b7ba4ec18d92bc440c3eca63 | upper-project |
| e1279735f0ae459b93ff5323526c87cc | lower-project |
| e1cbe4b32b1744bf8f78d964db3215fc | service       |
+----------------------------------+---------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$

## 할당량

(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack quota show --compute lower-project
+----------------------+-------+
| Resource             | Limit |
+----------------------+-------+
| cores                |    20 |
| instances            |    10 |
| ram                  | 51200 |
| fixed-ips            |    -1 |
| injected-file-size   | 10240 |
| injected-path-size   |   255 |
| injected-files       |     5 |
| key-pairs            |   100 |
| properties           |   128 |
| server-groups        |    10 |
| server-group-members |    10 |
| floating-ips         |    -1 |
| secgroup-rules       |    -1 |
| secgroups            |    -1 |
+----------------------+-------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack quota show --compute upper-project
+----------------------+-------+
| Resource             | Limit |
+----------------------+-------+
| cores                |    20 |
| instances            |    10 |
| ram                  | 51200 |
| fixed-ips            |    -1 |
| injected-file-size   | 10240 |
| injected-path-size   |   255 |
| injected-files       |     5 |
| key-pairs            |   100 |
| properties           |   128 |
| server-groups        |    10 |
| server-group-members |    10 |
| floating-ips         |    -1 |
| secgroup-rules       |    -1 |
| secgroups            |    -1 |
+----------------------+-------+

(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack quota set --ram 512 lower-project --force
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack quota show --compute lower-project
+----------------------+-------+
| Resource             | Limit |
+----------------------+-------+
| cores                |    40 |
| instances            |    20 |
| ram                  |   512 |
| fixed-ips            |    -1 |
| injected-file-size   | 10240 |
| injected-path-size   |   255 |
| injected-files       |     5 |
| key-pairs            |   100 |
| properties           |   128 |
| server-groups        |    10 |
| server-group-members |    10 |
| floating-ips         |    -1 |
| secgroup-rules       |    -1 |
| secgroups            |    -1 |
+----------------------+-------+
## 삭제

(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project delete lower-project
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project delete upper-project
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack domain set --disable practice-domain
(os-venv) vagrant@openstack-aio:/etc/kolla$
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 00855a5cafa646478a16f350df1f00f6 | admin   |
| e1cbe4b32b1744bf8f78d964db3215fc | service |
+----------------------------------+---------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack domain list
+----------------------------------+------------------+---------+--------------------+
| ID                               | Name             | Enabled | Description        |
+----------------------------------+------------------+---------+--------------------+
| 200f3f3aeffc4ed7870b10a9250d0cf5 | practice-domain  | False   |                    |
| 678ffb142e184d89bc5aaeebccbc86cd | heat_user_domain | True    |                    |
| default                          | Default          | True    | The default domain |
+----------------------------------+------------------+---------+--------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack domain delete practice-domain
(os-venv) vagrant@openstack-aio:/etc/kolla$ openstack domain list
+----------------------------------+------------------+---------+--------------------+
| ID                               | Name             | Enabled | Description        |
+----------------------------------+------------------+---------+--------------------+
| 678ffb142e184d89bc5aaeebccbc86cd | heat_user_domain | True    |                    |
| default                          | Default          | True    | The default domain |
+----------------------------------+------------------+---------+--------------------+
(os-venv) vagrant@openstack-aio:/etc/kolla$
</code></pre><blockquote>
<p>도메인 생성 : practice-domain
<img src="https://velog.velcdn.com/images/jupiter-j/post/d7ccd1d6-9fee-42fd-acfc-8fea600e0146/image.png" alt="">
도메인 안에 프로젝트 생성 : upper-project
<img src="https://velog.velcdn.com/images/jupiter-j/post/4d791650-4740-487b-a25d-54e87d2d9bd4/image.png" alt="">
프로젝트 중첩해서 생성 : lower-project
<img src="https://velog.velcdn.com/images/jupiter-j/post/db5e6f98-e040-4c62-b799-1ba039a65d24/image.png" alt="">
리소스 기본 할당량을 조정 및 확인
<img src="https://velog.velcdn.com/images/jupiter-j/post/28c94ff6-08c5-4997-8ac6-91795ec534b8/image.png" alt="">
프로젝트 할당량 변경
<img src="https://velog.velcdn.com/images/jupiter-j/post/d01c918f-8536-4cf9-8ba6-953b3c740a81/image.png" alt="">
도메인, 프로젝트 삭제
<img src="https://velog.velcdn.com/images/jupiter-j/post/6d44a4c3-954e-4610-a367-9997401f03c2/image.png" alt=""></p>
</blockquote>
]]></description>
        </item>
        <item>
            <title><![CDATA[[k8s]쿠버네티스 인증서 만료, 갱신 & 스크립트 정리]]></title>
            <link>https://velog.io/@jupiter-j/k8s%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4-%EC%9D%B8%EC%A6%9D%EC%84%9C-%EB%A7%8C%EB%A3%8C-%EA%B0%B1%EC%8B%A0-%EC%8A%A4%ED%81%AC%EB%A6%BD%ED%8A%B8-%EC%A0%95%EB%A6%AC</link>
            <guid>https://velog.io/@jupiter-j/k8s%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4-%EC%9D%B8%EC%A6%9D%EC%84%9C-%EB%A7%8C%EB%A3%8C-%EA%B0%B1%EC%8B%A0-%EC%8A%A4%ED%81%AC%EB%A6%BD%ED%8A%B8-%EC%A0%95%EB%A6%AC</guid>
            <pubDate>Sun, 11 May 2025 14:54:54 GMT</pubDate>
            <description><![CDATA[<blockquote>
<h2 id="kubernetes의-인증서란">kubernetes의 인증서란?</h2>
<p>쿠버네티스는 여러 컴포넌트가 통신하면서 동작하는 분산 시스템이다.
이때, 서로를 신뢰하고 보안 연결을 하려면 인증서를 써야한다.</p>
</blockquote>
<br>


<h2 id="인증서는-어디에-어떻게-만들어질까">인증서는 어디에, 어떻게 만들어질까?</h2>
<p>kubeadm init은 <strong>마스터용 인증서를 자동</strong>으로 만든다.
kubelet은 <strong>bootstrap 인증서를 사용해서 스스로 요청(CSR)</strong>을 보내서
<strong>정식 인증서(kubelet.crt)</strong>를 받고 이걸로 클러스터에 참여한다.</p>
<h3 id="kubeadm의-kubernetes-마스터-인증서-자동-생성">kubeadm의 Kubernetes 마스터 인증서 자동 생성</h3>
<ol>
<li><code>kubeadm init</code> 명령을 실행하면, Kubernetes의 핵심 컴포넌트(API 서버, etcd 등)에 필요한 인증서들이 자동 생성된다</li>
<li><code>/etc/kubernetes/pki/</code> 경로에 저장<pre><code>/etc/kubernetes/pki/
├── ca.crt / ca.key                      # 클러스터의 루트 인증서
├── apiserver.crt / apiserver.key        # API 서버 HTTPS 인증
├── apiserver-kubelet-client.crt         # kubelet 접근용 인증서
├── etcd/*.crt                           # etcd 보안 통신용 인증서
├── front-proxy-*.crt                    # Aggregation Layer용</code></pre><img src="https://velog.velcdn.com/images/jupiter-j/post/fa193257-6bc5-42c9-9732-78241ce6bb27/image.png" alt=""><h3 id="kubelet-인증서-생성-과정">kubelet 인증서 생성 과정</h3>
</li>
<li>kubeadm init 명령을 실행하면,<code>/etc/kubernetes/pki/</code>에 클러스터용 CA 인증서와 apiserver용 인증서가 생성되고,
kubelet이 초기에 사용할 임시 인증 정보<code>(bootstrap-kubelet.conf)</code>도 함께 생성된다</li>
<li>kubelet은 부팅 시 <code>bootstrap-kubelet.conf</code>를 사용해 API 서버에 접근하고, 자신의 인증서를 발급받기 위해 CSR(Certificate Signing Request)을 제출한다</li>
<li>쿠버네티스 컨트롤러가 CSR을 자동으로 승인하면,
kubelet은 <code>/var/lib/kubelet/pki/</code> 경로에 정식 인증서(kubelet.crt, kubelet.key)를 저장하고,
이후에는 kubelet.conf를 사용하여 정식 구성원으로서 API 서버와 통신한다.
<img src="https://velog.velcdn.com/images/jupiter-j/post/5211af80-6b89-4af8-92ac-a4ea3e02778b/image.png" alt=""></li>
</ol>
<ul>
<li>k8s와 kubelet에 pki 폴더가 자동으로 생성된것을 볼 수 있다.
<img src="https://velog.velcdn.com/images/jupiter-j/post/53b1e943-2b8c-4077-8f7b-3e6a45bf306a/image.png" alt=""></li>
</ul>
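<p>CSR이 실제로 제출되고 승인됐는지는 대략 아래처럼 확인해볼 수 있다.</p>
<pre><code># 제출된 CSR과 승인 여부 확인
kubectl get csr

# kubelet이 발급받은 인증서의 유효기간 확인
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -noout -dates</code></pre>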
<pre><code>[ kubeadm init 실행 ]
        │
        ├── /etc/kubernetes/pki/ 인증서 생성 (API 서버, etcd 등)
        └── bootstrap-kubelet.conf 생성 (kubelet 초기용)
               │
               ▼
[ kubelet 시작 ]
        │
        ├── bootstrap-kubelet.conf 사용해 API 서버에 접속
        └── CSR 요청 제출 → 자동 승인됨
               │
               ▼
[ 인증서 발급 완료 ]
        ├── /var/lib/kubelet/pki/kubelet.crt 저장
        └── kubelet.conf 로 전환하여 정식 통신 시작
</code></pre><br>



<h3 id="인증서-갱신이-필요한-이유">인증서 갱신이 필요한 이유</h3>
<p><code>kubeadm certs check-expiration</code> 명령어로 kubernetes 인증서의 유효기간을 확인해보면, 크게 두 그룹으로 기간이 다른 것을 확인할 수 있다.</p>
<pre><code>kubeadm certs check-expiration</code></pre><pre><code>[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with &#39;kubectl -n kube-system get cm kubeadm-config -o yaml&#39;

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 25, 2024 02:01 UTC   202d            ca                      no
apiserver                 Oct 25, 2024 02:01 UTC   202d            ca                      no
apiserver-etcd-client     Oct 25, 2024 02:01 UTC   202d            etcd-ca                 no
apiserver-kubelet-client  Oct 25, 2024 02:01 UTC   202d            ca                      no
controller-manager.conf   Oct 25, 2024 02:01 UTC   202d            ca                      no
etcd-healthcheck-client   Oct 25, 2024 02:01 UTC   202d            etcd-ca                 no
etcd-peer                 Oct 25, 2024 02:01 UTC   202d            etcd-ca                 no
etcd-server               Oct 25, 2024 02:01 UTC   202d            etcd-ca                 no
front-proxy-client        Oct 25, 2024 02:01 UTC   202d            front-proxy-ca          no
scheduler.conf            Oct 25, 2024 02:01 UTC   202d            ca                      no
super-admin.conf          Oct 25, 2024 02:01 UTC   202d            ca                      no

CERTIFICATE AUTHORITY     EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                        Mar 30, 2035 01:32 UTC   9y              no
etcd-ca                   Mar 30, 2035 01:32 UTC   9y              no
front-proxy-ca            Mar 30, 2035 01:32 UTC   9y              no
</code></pre><p>인증서는 컴포넌트 간에 <strong>보안 통신(TLS)</strong>을 위해 필요하다.
만료되면 신뢰할 수 없는 인증서가 되기 때문에, 연결이 끊기게 된다. </p>
<ul>
<li><p>인증서 만료 시 생기는 증상 예시</p>
<table>
<thead>
<tr>
<th>만료된 인증서</th>
<th>증상</th>
</tr>
</thead>
<tbody><tr>
<td><code>admin.conf</code></td>
<td><code>kubectl</code> 명령어 에러 발생 (<code>x509: certificate has expired</code>)</td>
</tr>
<tr>
<td><code>apiserver.crt</code></td>
<td>API 서버가 <strong>부팅 실패</strong>, 클러스터 전체 작동 불가</td>
</tr>
<tr>
<td><code>etcd/server.crt</code></td>
<td>etcd 간 통신 실패, 클러스터 데이터 저장/읽기 오류</td>
</tr>
<tr>
<td><code>kubelet.crt</code></td>
<td>노드 상태 보고 불가 (<code>NotReady</code>), 파드 스케줄링 안 됨</td>
</tr>
</tbody></table>
</li>
</ul>
<br>

<h3 id="인증서-기간이-차이나는-이유">인증서 기간이 차이나는 이유</h3>
<p>기본적으로 발급되는 모든 인증서의 유효기간은 1년 (365일)이다. ca.crt만 10년(3650일)로 기본 설정되어 있다.</p>
<table>
<thead>
<tr>
<th>인증서 종류</th>
<th>유효기간</th>
<th>이유</th>
</tr>
</thead>
<tbody><tr>
<td><strong>ca.crt (루트 CA)</strong></td>
<td><strong>10년</strong></td>
<td>이 인증서가 모든 다른 인증서를 서명하므로, 자주 바꾸면 전체 재발급이 필요해 <strong>오래 유지</strong></td>
</tr>
<tr>
<td><strong>apiserver, etcd, admin.conf 등</strong></td>
<td><strong>1년</strong></td>
<td>외부와 연결되거나 클러스터 내부에서 많이 쓰이므로 <strong>주기적으로 갱신이 안전</strong></td>
</tr>
<tr>
<td><strong>kubelet.crt</strong></td>
<td>1년 (자동 갱신됨)</td>
<td>워커 노드의 상태를 지속적으로 보장해야 하므로 <strong>자동 갱신 처리</strong></td>
</tr>
</tbody></table>
<ul>
<li>인증서의 분류</li>
</ul>
<table>
<thead>
<tr>
<th>분류</th>
<th>설명</th>
<th>예시</th>
<th>기본 유효기간</th>
</tr>
</thead>
<tbody><tr>
<td><strong>1. 루트 인증서 (Root CA Certificates)</strong></td>
<td>다른 인증서를 <strong>서명해주는 최상위 인증서</strong></td>
<td><code>ca.crt</code>, <code>etcd-ca.crt</code>, <code>front-proxy-ca.crt</code></td>
<td>보통 <strong>10년</strong></td>
</tr>
<tr>
<td><strong>2. 클러스터 인증서 (Component/Leaf Certificates)</strong></td>
<td>실제 <strong>각 컴포넌트가 사용하는 인증서</strong></td>
<td><code>apiserver.crt</code>, <code>admin.conf</code>, <code>kubelet.crt</code>, <code>etcd/server.crt</code> 등</td>
<td>보통 <strong>1년</strong></td>
</tr>
</tbody></table>
<br>

<h3 id="인증서-사용-흐름">인증서 사용 흐름</h3>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/fc6b125c-3b9b-4877-a190-552c6372a099/image.png" alt=""></p>
<table>
<thead>
<tr>
<th>통신 대상</th>
<th>흐름 예시</th>
<th>누가 어떤 인증서를 사용하나?</th>
</tr>
</thead>
<tbody><tr>
<td>👤 사용자 ↔ apiserver</td>
<td><code>kubectl get pods</code></td>
<td><code>admin.conf</code> 안의 인증서로 사용자 인증</td>
</tr>
<tr>
<td>kubelet ↔ apiserver</td>
<td>워커 노드가 상태 보고</td>
<td>kubelet은 <code>kubelet.crt</code>로 인증, apiserver는 <code>apiserver-kubelet-client.crt</code>로 인증</td>
</tr>
<tr>
<td>apiserver ↔ etcd</td>
<td>apiserver가 etcd에 데이터 요청</td>
<td>apiserver는 <code>apiserver-etcd-client.crt</code> 사용, etcd는 <code>etcd/server.crt</code>로 응답</td>
</tr>
<tr>
<td>controller-manager ↔ apiserver</td>
<td>새 파드 생성 명령</td>
<td><code>controller-manager.conf</code>의 인증서로 인증</td>
</tr>
<tr>
<td>scheduler ↔ apiserver</td>
<td>어디에 파드를 배치할지 결정</td>
<td><code>scheduler.conf</code>의 인증서로 인증</td>
</tr>
</tbody></table>
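<p>어떤 인증서가 어떤 CA로 서명됐는지는 openssl verify로도 확인해볼 수 있다(kubeadm 기본 경로를 가정한 예시).</p>
<pre><code class="language-bash"># apiserver 인증서는 ca.crt가, etcd 서버 인증서는 etcd/ca.crt가 서명했는지 확인
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/server.crt</code></pre>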
<br>

<h2 id="인증서-갱신-방법">인증서 갱신 방법</h2>
<pre><code>sudo kubeadm certs renew all</code></pre><p>위의 명령어는 전체 클러스터 인증서를 자동으로 갱신하는 명령어다. 문제는 만료일이 1년만 연장된다는 점이다. 즉 10년짜리 인증서로 갱신하려면 이 명령어만으로는 되지 않는다.</p>
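<p>1년 갱신만 필요한 경우의 일반적인 흐름은 아래 스케치 정도로 볼 수 있다. 갱신 후에는 컨트롤 플레인 static Pod들이 새 인증서를 읽도록 재기동이 필요하다. manifest 백업 경로나 대기 시간은 환경에 맞게 조정해야 하는 가정값이다.</p>
<pre><code class="language-bash"># 1) 전체 인증서 1년 갱신
sudo kubeadm certs renew all
sudo kubeadm certs check-expiration

# 2) 컨트롤 플레인 static Pod 재기동 (manifest를 잠시 옮겼다가 복구하는 방식)
sudo mkdir -p /tmp/manifests-backup
sudo mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-backup/
sleep 30
sudo mv /tmp/manifests-backup/*.yaml /etc/kubernetes/manifests/

# 3) 갱신된 admin.conf를 kubectl 설정에 반영
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config</code></pre>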
<h3 id="스크립트">스크립트</h3>
<p>에러 
<img src="https://velog.velcdn.com/images/jupiter-j/post/1cfa18d8-246a-4db1-83a5-e8b85ba357ca/image.png" alt=""></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[pcsd vip error]]></title>
            <link>https://velog.io/@jupiter-j/pcsd-vip-error</link>
            <guid>https://velog.io/@jupiter-j/pcsd-vip-error</guid>
            <pubDate>Mon, 28 Apr 2025 02:08:06 GMT</pubDate>
            <description><![CDATA[<blockquote>
<h3 id="사건발단">사건발단</h3>
<p>k8s 인프라 구성 중에 haproxy와 pcsd를 설치했다.
이유는 VIP를 사용하여 다중 클러스터 구성을 하기 위해서다. </p>
</blockquote>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/c0012e28-8a73-49d2-a590-d62b12b1904d/image.png" alt=""></p>
<ul>
<li><p>haproxy가 running일때
<img src="https://velog.velcdn.com/images/jupiter-j/post/d049a422-e6ca-49e7-9262-774badb04daa/image.png" alt=""></p>
</li>
<li><p>haproxy가 stop일때
<img src="https://velog.velcdn.com/images/jupiter-j/post/32ced841-6f8f-4796-b1d7-eedc7548f6f5/image.png" alt=""></p>
</li>
</ul>
<p>나는 당연히 pacemaker가 health체크를 하고 해당 부분을 반영시키는 줄 알았음</p>
<blockquote>
<p>haproxy를 systemctl stop haproxy로 멈추면 haproxy 서비스(프로세스)만 죽는 거야. 하지만 VIP 리소스는 여전히 살아 있어.
VIP(MAESTRO-VIP, HAPROXY-VIP)는 haproxy랑은 별개로 pacemaker가 관리하거든. VIP는 그냥 &quot;IP주소를 이 서버에 붙여주는 것&quot;이야. haproxy랑 직접 연결된 건 아님.
<br>
즉, haproxy 프로세스는 죽어도, pcs status 보면 VIP는 여전히 Started 상태로 있을 거야. 클러스터 입장에서는 VIP를 모니터링할 뿐, haproxy 프로세스를 모니터링하지는 않아.</p>
</blockquote>
<h3 id="haproxy-프로세스까지-모니터링하고-싶다면">haproxy 프로세스까지 모니터링하고 싶다면?</h3>
<p>그냥 VIP 모니터링만 하면 haproxy 죽은 걸 감지 못하잖아?
그래서 haproxy 프로세스를 감시하는 리소스를 별도로 추가할 수도 있어.
<code>예시: ocf:heartbeat:haproxy 에이전트 사용</code>
<code>pcs resource create HAPROXY ocf:heartbeat:haproxy op monitor interval=30s</code> 이런 식으로 등록하면 돼.
이렇게 하면 haproxy 죽으면 자동 failover(다른 노드로 VIP 이동)도 가능해.</p>
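<p>더 나아가 아래처럼 colocation/order 제약까지 걸어두면 VIP가 항상 haproxy가 살아있는 노드를 따라가게 만들 수 있다. 리소스 이름은 위 예시를 그대로 가정한 스케치다.</p>
<pre><code class="language-bash"># haproxy 리소스 등록 후, VIP가 haproxy와 같은 노드에서만 뜨도록 + haproxy가 먼저 기동되도록 제약 추가
pcs resource create HAPROXY ocf:heartbeat:haproxy op monitor interval=30s
pcs constraint colocation add HAPROXY-VIP with HAPROXY INFINITY
pcs constraint order start HAPROXY then start HAPROXY-VIP</code></pre>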
<br>
<br>

<h2 id="안전하게-vip를-삭제하는-법">안전하게 vip를 삭제하는 법</h2>
<pre><code>[root@k8s-master ~]# pcs resource disable MAESTRO-VIP
[root@k8s-master ~]#

[root@k8s-master ~]# pcs resource delete MAESTRO-VIP
Removing Constraint - location-k8s-master-mavip
Deleting Resource - MAESTRO-VIP

[root@k8s-master ~]# pcs status
Cluster name: MAESTRO_CLUSTER
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: k8s-master (version 2.1.7-5.el9_4-0f7f88312) - partition with quorum
  * Last updated: Mon Apr 28 11:27:05 2025 on k8s-master
  * Last change:  Mon Apr 28 11:20:57 2025 by root via root on k8s-master
  * 1 node configured
  * 1 resource instance configured

Node List:
  * Online: [ k8s-master ]

Full List of Resources:
  * HAPROXY-VIP    (ocf:heartbeat:IPaddr2):     Started k8s-master

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled</code></pre><br>

<p>안전하게 삭제됨! 
<img src="https://velog.velcdn.com/images/jupiter-j/post/d55ef1fe-9c03-4f4b-bcce-35df5ce20bc8/image.png" alt=""></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[[k8s] UTM-VM-RHEL9.4 Setting하기 (1)]]></title>
            <link>https://velog.io/@jupiter-j/RHEL9.4</link>
            <guid>https://velog.io/@jupiter-j/RHEL9.4</guid>
            <pubDate>Mon, 31 Mar 2025 00:37:33 GMT</pubDate>
            <description><![CDATA[<h1 id="vm-생성">VM 생성</h1>
<blockquote>
<p>맥에서 arm이 아닌 amd로 k8s를 설치하기 
UTM을 사용해서 설치함 </p>
</blockquote>
<br>
<br>


<h2 id="1-iso-다운">1. Iso 다운</h2>
<hr>
<blockquote>
<p>RHEL 9.4다운  <a href="https://developers.redhat.com/products/rhel/download#rhelforsap896?source=sso">https://developers.redhat.com/products/rhel/download#rhelforsap896?source=sso</a>
<img src="https://velog.velcdn.com/images/jupiter-j/post/1f04a9c4-ffd0-421b-bc0c-cbc8ef99ea64/image.png" alt=""></p>
</blockquote>
<ul>
<li>RHEL 9.4v</li>
<li>x86_64 DVD iso 를 받을것 ! (amd를 사용하기 위해)</li>
</ul>
<br>
<br>
<br>

<h1 id="2-utm-setting">2. UTM setting</h1>
<hr>
<blockquote>
<p>VM 생성시 <strong>Emulate</strong>를 선택해서 해야함</p>
</blockquote>
<h3 id="✅-setting">✅ Setting</h3>
<blockquote>
</blockquote>
<ul>
<li>4096 MB</li>
<li>CPU 2</li>
<li>5 GB</li>
</ul>
<h3 id="✅-파티션-설정">✅ 파티션 설정</h3>
<blockquote>
<p>/boot : 1024mib / ext4
/ : 나머지 영역 / ext4 / VG00 / lv_root 로 생성
swap : 일반적으로 메모리와 같거나 최대로 32Gib / swap / VG00 / lv_swap</p>
</blockquote>
<ul>
<li>UEFI로 설치한 경우 /boot/efi : 200mib / EFI System Partition
볼륨 그룹(V) : 수정(M) &gt; 이름 : VG00 &gt; 크기 정책(z) : 가능한 크게 &gt; 저장(s)</li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/6b28e2e7-8cd5-4be2-890a-92262a0a8808/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/3d2b319b-5f1a-4b03-aec9-72a7674adb6e/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/c6c07107-3cd8-430a-971d-884bea4cb318/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/921f9756-5346-48a4-8808-265693e6a367/image.png" alt=""></p>
<br>
<br>
<br>


<h1 id="3-redhat">3. Redhat</h1>
<hr>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/581ebdc5-a159-4969-8f93-1e7eccc822a0/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/b227eeea-f3d0-4d6e-8518-d098c66fba88/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/4bbba691-ce27-418c-995b-3c8eb1148b8b/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/a36ae850-c1f9-4185-aafb-282dd0d43a8e/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/5fa856f6-66d0-4631-b15a-c8fe33082aea/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/17ec5b77-83fc-43e7-8ab3-29f1d4015927/image.png" alt=""></p>
<ul>
<li>볼륨 그룹을 VG01로 수정함, '가능한 크게'를 설정 안 했음.. </li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/4433fd29-8afe-4e11-915b-246f83173857/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/8da29f37-4701-4c84-bac0-6470e647b8c3/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/d2dc6c05-46ef-4fef-bde6-2a61d457f596/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/d3b1f2bf-278b-4001-ba74-0aec9b652f19/image.png" alt="">ssh로그인 하도록 허용
<img src="https://velog.velcdn.com/images/jupiter-j/post/0c18e270-ebba-406c-98b5-dff6fed15950/image.png" alt=""></p>
<p>오래걸림주의
<img src="https://velog.velcdn.com/images/jupiter-j/post/08d4bd67-3ac6-4cd2-a212-f4a334f9f3ac/image.png" alt="">
설치 완료되면 멈추고 공유폴더 밑의 Cd/DVD 초기화</p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/1eb17e58-d5cc-4a39-9e53-2386ad31f419/image.png" alt=""></p>
<p>재부팅전에 CD/DVD초기화 했는지 확인하고 재부팅 시작 </p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/7bbcfdb5-a232-418c-bbd0-2fcd8e0141de/image.png" alt="">
목록에 없습니까? 를 클릭하여 root접속 
<img src="https://velog.velcdn.com/images/jupiter-j/post/de5ae72a-e09e-458c-9cd4-95073b03876a/image.png" alt=""></p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/fda22f62-62c7-425f-a002-2222e35bb862/image.png" alt=""></p>
<br>
<br>
<br>

<h1 id="4-rhel-94-초기-설정">4. RHEL 9.4 초기 설정</h1>
<hr>
<h2 id="root-ssh접속-허용-설정">root ssh접속 허용 설정</h2>
<ul>
<li><p>sshd가 제대로 실행중인지 확인 
<code>systemctl status sshd</code>
<img src="https://velog.velcdn.com/images/jupiter-j/post/0a74b90b-920e-4e8a-bbed-919f9682043f/image.png" alt=""></p>
</li>
<li><p>root접속 허용 설정
<code>vi /etc/ssh/sshd_config</code></p>
<pre><code>62 #IgnoreRhosts yes
63
64 # To disable tunneled clear text passwords, change to no here!
65 PasswordAuthentication yes ##변경
66 #PermitEmptyPasswords no
67
68 # Change to no to disable s/key passwords
69 #KbdInteractiveAuthentication yes</code></pre></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/05eb52f3-4d96-44b8-926f-1fd6a4f30379/image.png" alt=""></p>
<blockquote>
<p>여기서 부터는 터미널에서 SSH root접속이 되는지 확인해보기
<code>ssh root@[vm-ip]</code> 접속 </p>
</blockquote>
<br>


<h2 id="selinux-off">SElinux Off</h2>
<blockquote>
<h3 id="selinux-off-1">Selinux OFF</h3>
<p>셀리눅스의 보안정책 강화로 일부 기능들을 사용하지 못하거나 에러를 제대로 보기 힘듬. 그래서 해당 기능을 끈다.</p>
</blockquote>
<ul>
<li>Enforcing (기본값) – SELinux가 켜져있고 정책에 위반된 모든 작업을 차단함</li>
<li>Permissive – SELinux가 켜져있지만 정책에 위반된 사항에 대해 경고만 하도록 함 (audit 로그에 기록만 하는 상태)</li>
<li>Disabled – SELinux가 완전히 꺼진 상태</li>
</ul>
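<p>현재 어떤 모드로 동작 중인지는 아래 명령으로 확인할 수 있다.</p>
<pre><code class="language-bash"># 현재 SELinux 동작 모드 확인
getenforce
sestatus</code></pre>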
<pre><code>[root@localhost ~]#
[root@localhost ~]# vi /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
...
#
#    grubby --update-kernel ALL --remove-args selinux
#
SELINUX=disabled ## 추가
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/d7df49c5-0562-4955-902f-12cc8da2875a/image.png" alt=""></p>
<br>

<ul>
<li>selinux 비활성화</li>
</ul>
<pre><code>[root@localhost ~]# grubby --update-kernel ALL --args selinux=0
[root@localhost ~]#
[root@localhost ~]# reboot</code></pre><pre><code>[root@localhost ~]# setenforce 0
setenforce: SELinux is disabled</code></pre><br>


<h2 id="firewall-off">firewall Off</h2>
<pre><code>[root@localhost ~]# systemctl disable --now firewalld
Removed &quot;/etc/systemd/system/multi-user.target.wants/firewalld.service&quot;.
Removed &quot;/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service&quot;.</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/49f612d5-85ca-4e1f-8eb3-3f91419e0320/image.png" alt=""></p>
<h2 id="불필요한-서비스-off">불필요한 서비스 off</h2>
<pre><code>[root@localhost ~]# systemctl disable --now cups.service bluetooth.service
Removed &quot;/etc/systemd/system/printer.target.wants/cups.service&quot;.
Removed &quot;/etc/systemd/system/multi-user.target.wants/cups.service&quot;.
Removed &quot;/etc/systemd/system/multi-user.target.wants/cups.path&quot;.
Removed &quot;/etc/systemd/system/bluetooth.target.wants/bluetooth.service&quot;.
Removed &quot;/etc/systemd/system/sockets.target.wants/cups.socket&quot;.
Removed &quot;/etc/systemd/system/dbus-org.bluez.service&quot;.</code></pre><br>

<h2 id="이미지-추가--디스크-마운트">이미지 추가 &amp; 디스크 마운트</h2>
<blockquote>
<p>원래는 로컬 레포를 사용하기 위해 이미지를 삽입(디스크추가) &gt; 디스크 마운트를 해야함. 나의 경우 이미 이미지가 있어서 scp로 VM에 전송함  </p>
</blockquote>
<ul>
<li>scp로 베이스 이미지를 모두 전송함 (AppStream / BaseOS)<pre><code>[root@localhost /]# ll
합계 76
drwxr-xr-x    4 root root  4096  3월 31 10:00 AppStream
drwxr-xr-x    4 root root  4096  3월 31 10:39 BaseOS
dr-xr-xr-x.   2 root root  4096  8월 10  2021 afs</code></pre></li>
<li>/rhel94 폴더 하위에 이미지들을 넣음<pre><code>[root@localhost /]# cd rhel94/
[root@localhost rhel94]# ll
합계 8
drwxr-xr-x 4 root root 4096  3월 31 10:00 AppStream
drwxr-xr-x 4 root root 4096  3월 31 10:39 BaseOS</code></pre></li>
<li>레파지토리 생성: <code>vi /etc/yum.repos.d/local.repo</code><pre><code>[root@localhost rhel94]# cat /etc/yum.repos.d/local.repo
[LocalRepo_BaseOS]
name=BaseOS
baseurl=file:///rhel94/BaseOS/
enabled=1
gpgcheck=0

[LocalRepo_AppStream]
name=AppStream
baseurl=file:///rhel94/AppStream/
enabled=1
gpgcheck=0</code></pre></li>
<li>레파지토리 적용<pre><code># dnf clean all
# dnf repolist all</code></pre></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/ad198169-f008-4733-9724-6230801a7ed5/image.png" alt=""></p>
<br>

<h2 id="시간동기화">시간동기화</h2>
<pre><code># chrony 설치 : dnf install chrony
# chrony 데몬 시작 : systemctl enable --now chronyd
# chrony 데몬 확인 : systemctl status chronyd
# 시간 확인 : chronyc sources</code></pre>
<br>

<blockquote>
<p><strong>Master1, worker1, worker2 구조를 만들기 위해 VM 복제.
이후 내용부터는 각각의 VM에서 수행</strong></p>
</blockquote>
<br>

<h2 id="vm-ip-고정">VM ip 고정</h2>
<h3 id="할당된-ip--gw-확인">할당된 ip &amp; GW 확인</h3>
<ul>
<li>gateway 확인 : <code>ip route show default</code> (default 뒤의 ip가 gw)</li>
</ul>
<pre><code>[root@localhost cloud]# ip route show default
default via 192.168.64.1 dev enp0s1 proto dhcp src 192.168.64.15 metric 100</code></pre>
<ul>
<li>ip 확인 : <code>ip a</code></li>
</ul>
<pre><code>[root@localhost cloud]# ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether da:ed:e9:75:94:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.15/24 brd 192.168.64.255 scope global dynamic noprefixroute enp0s1
       valid_lft 2928sec preferred_lft 2928sec
    inet6 fd84:450:23ec:c85a:d8ed:e9ff:fe75:94ae/64 scope global dynamic noprefixroute
       valid_lft 2591932sec preferred_lft 604732sec
    inet6 fe80::d8ed:e9ff:fe75:94ae/64 scope link noprefixroute
       valid_lft forever preferred_lft forever</code></pre>
<h3 id="고정-ip-설정">고정 ip 설정</h3>
<ul>
<li><code>vi /etc/NetworkManager/system-connections/enp0s1.nmconnection</code></li>
</ul>
<pre><code>[root@localhost cloud]# cat /etc/NetworkManager/system-connections/enp0s1.nmconnection
[connection]
id=enp0s1
uuid=62564758-41ca-3a3b-9269-fb6ac68157ba
type=ethernet
autoconnect-priority=-999
interface-name=enp0s1
timestamp=1743142416

[ethernet]

[ipv4]
method=manual
addresses=192.168.64.15/24 ## 변경
gateway=192.168.64.1 ## 변경
dns=8.8.8.8 ## 추가

[ipv6]
addr-gen-mode=eui64
method=auto

[proxy]</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/12db366a-09aa-4a0b-94c3-121ef8c3cada/image.png" alt=""></p>
<ul>
<li><code>systemctl restart NetworkManager</code> : 적용</li>
</ul>
<h3 id="적용확인">적용확인</h3>
<ul>
<li><code>ip addr show enp0s1</code></li>
<li><code>ip route show</code></li>
</ul>
<pre><code>[root@localhost cloud]# ip addr show enp0s1
2: enp0s1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether da:ed:e9:75:94:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.15/24 brd 192.168.64.255 scope global noprefixroute enp0s1
       valid_lft forever preferred_lft forever
    inet6 fd84:450:23ec:c85a:d8ed:e9ff:fe75:94ae/64 scope global dynamic noprefixroute
       valid_lft 2591993sec preferred_lft 604793sec
    inet6 fe80::d8ed:e9ff:fe75:94ae/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@localhost cloud]# ip route show
default via 192.168.64.1 dev enp0s1 proto dhcp src 192.168.64.15 metric 100
default via 192.168.64.1 dev enp0s1 proto static metric 100
192.168.64.0/24 dev enp0s1 proto kernel scope link src 192.168.64.15 metric 100</code></pre>
<h3 id="dhcp-설정-비활성화">DHCP 설정 비활성화</h3>
<ul>
<li><code>sudo nmcli connection modify enp0s1 ipv4.method manual</code></li>
<li><code>systemctl restart NetworkManager</code></li>
<li><code>nmcli connection down enp0s1 &amp;&amp; nmcli connection up enp0s1</code></li>
</ul>
<pre><code>[root@localhost cloud]# sudo nmcli connection modify enp0s1 ipv4.method manual
[root@localhost cloud]# sudo systemctl restart NetworkManager
[root@localhost cloud]# nmcli connection down enp0s1 &amp;&amp; nmcli connection up enp0s1
&#39;enp0s1&#39; 연결이 성공적으로 비활성화되었습니다 (D-Bus 활성 경로: /org/freedesktop/NetworkManager/ActiveConnection/2)
연결이 성공적으로 활성화되었습니다 (D-버스 활성 경로: /org/freedesktop/NetworkManager/ActiveConnection/3)</code></pre>
<ul>
<li>확인: <code>ip route show</code> , <code>ip addr show enp0s1</code><br>
dynamic이 아닌 static인지 확인</li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/55da3d15-685d-45ed-bbbd-098e8e1a61f6/image.png" alt=""></p>
<br>

<blockquote>
<h2 id="enp0s1이-안될경우">enp0s1이 안될경우</h2>
</blockquote>
<pre><code class="language-bash">nmcli connection show</code></pre>
<pre><code class="language-bash">nmcli connection modify &quot;유선 연결 1&quot; ipv4.addresses 192.168.64.19/24 #ip주의! 
nmcli connection modify &quot;유선 연결 1&quot; ipv4.gateway 192.168.64.1
nmcli connection modify &quot;유선 연결 1&quot; ipv4.dns &quot;8.8.8.8&quot;
nmcli connection modify &quot;유선 연결 1&quot; ipv4.method manual
sudo nmcli connection down &quot;유선 연결 1&quot; &amp;&amp; sudo nmcli connection up &quot;유선 연결 1&quot;
systemctl restart NetworkManager</code></pre>
<br>

<h2 id="dns-hostname-설정">DNS, hostname 설정</h2>
<ul>
<li><code>hostnamectl set-hostname &lt;변경하고싶은이름&gt;</code><pre><code>[root@localhost ~]# hostnamectl set-hostname k8s-master
[root@localhost ~]# hostname
k8s-master</code></pre></li>
<li><code>vi /etc/hosts</code><pre><code>[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.64.15 k8s-master ## 추가
192.168.64.17 k8s-worker1 ## 다른 vm hostname 추가</code></pre></li>
</ul>
<br>

<blockquote>
<ul>
<li>최종적으로 서로의 VM ip에 ping이 가는지 확인하기</li>
<li>해당 ip로 SSH 접속이 되는지 확인하기</li>
</ul>
</blockquote>
]]></description>
        </item>
        <item>
            <title><![CDATA[kubeadm-init-log 정리]]></title>
            <link>https://velog.io/@jupiter-j/kubeadm-init-log-%EC%A0%95%EB%A6%AC</link>
            <guid>https://velog.io/@jupiter-j/kubeadm-init-log-%EC%A0%95%EB%A6%AC</guid>
            <pubDate>Sun, 16 Mar 2025 12:11:14 GMT</pubDate>
            <description><![CDATA[<pre><code>[root@k8s-master-2 ~]# sudo kubeadm init --image-repository=10.0.16.62:8080/new --control-plane-endpoint=10.0.16.71:6443 --upload-certs --v=5
## 1. CRI 소켓 탐색 -- kubeadm이 컨테이너 런타임 인터페이스 CRI를 감지함 
I0316 08:09:18.502656  737621 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0316 08:09:18.503554  737621 interface.go:432] Looking for default routes with IPv4 addresses
I0316 08:09:18.503679  737621 interface.go:437] Default route transits interface &quot;eth0&quot;
I0316 08:09:18.503969  737621 interface.go:209] Interface eth0 is up
I0316 08:09:18.504218  737621 interface.go:257] Interface &quot;eth0&quot; has 2 addresses :[10.0.16.71/24 fe80::f816:3eff:fefc:5459/64].
I0316 08:09:18.504371  737621 interface.go:224] Checking addr  10.0.16.71/24.
I0316 08:09:18.504456  737621 interface.go:231] IP found 10.0.16.71

## 2. 네트워크 인터페이스 및 노드 IP 확인
I0316 08:09:18.504543  737621 interface.go:263] Found valid IPv4 address 10.0.16.71 for interface &quot;eth0&quot;.
I0316 08:09:18.504662  737621 interface.go:443] Found active IP 10.0.16.71

## 3. Kubelet 설정 확인
I0316 08:09:18.504969  737621 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to &quot;systemd&quot;
I0316 08:09:18.528738  737621 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
W0316 08:09:18.552813  737621 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL &quot;https://dl.k8s.io/release/stable-1.txt&quot;: Get &quot;https://dl.k8s.io/release/stable-1.txt&quot;: dial tcp: lookup dl.k8s.io on 10.0.16.2:53: server misbehaving
W0316 08:09:18.553123  737621 version.go:105] falling back to the local client version: v1.29.11
[init] Using Kubernetes version: v1.29.11
[preflight] Running pre-flight checks

## 4. 사전 점검 (Pre-flight Checks) - Kubernetes 클러스터를 정상적으로 초기화할 수 있는지 사전 점검 수행
I0316 08:09:18.554538  737621 checks.go:563] validating Kubernetes and kubeadm version
I0316 08:09:18.554831  737621 checks.go:168] validating if the firewall is enabled and active
I0316 08:09:18.602915  737621 checks.go:203] validating availability of port 6443
I0316 08:09:18.603716  737621 checks.go:203] validating availability of port 10259
I0316 08:09:18.603816  737621 checks.go:203] validating availability of port 10257
I0316 08:09:18.604011  737621 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0316 08:09:18.604063  737621 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0316 08:09:18.604107  737621 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0316 08:09:18.604164  737621 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0316 08:09:18.604300  737621 checks.go:430] validating if the connectivity type is via proxy or direct
I0316 08:09:18.604429  737621 checks.go:469] validating http connectivity to first IP address in the CIDR
I0316 08:09:18.604475  737621 checks.go:469] validating http connectivity to first IP address in the CIDR
I0316 08:09:18.604509  737621 checks.go:104] validating the container runtime
I0316 08:09:18.730215  737621 checks.go:639] validating whether swap is enabled or not
I0316 08:09:18.730521  737621 checks.go:370] validating the presence of executable crictl
I0316 08:09:18.730601  737621 checks.go:370] validating the presence of executable conntrack
I0316 08:09:18.730654  737621 checks.go:370] validating the presence of executable ip
I0316 08:09:18.730701  737621 checks.go:370] validating the presence of executable iptables
I0316 08:09:18.730762  737621 checks.go:370] validating the presence of executable mount
I0316 08:09:18.730838  737621 checks.go:370] validating the presence of executable nsenter
I0316 08:09:18.730906  737621 checks.go:370] validating the presence of executable ethtool
I0316 08:09:18.730939  737621 checks.go:370] validating the presence of executable tc
I0316 08:09:18.730997  737621 checks.go:370] validating the presence of executable touch
I0316 08:09:18.731039  737621 checks.go:516] running all checks
I0316 08:09:18.761905  737621 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0316 08:09:18.762099  737621 checks.go:605] validating kubelet version
I0316 08:09:18.902113  737621 checks.go:130] validating if the &quot;kubelet&quot; service is enabled and active
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run &#39;systemctl enable kubelet.service&#39;
I0316 08:09:18.962511  737621 checks.go:203] validating availability of port 10250
I0316 08:09:18.962922  737621 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0316 08:09:18.963310  737621 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0316 08:09:18.963381  737621 checks.go:203] validating availability of port 2379
I0316 08:09:18.963512  737621 checks.go:203] validating availability of port 2380
I0316 08:09:18.963706  737621 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using &#39;kubeadm config images pull&#39;

## 5. 이미지 다운로드 확인
I0316 08:09:18.964216  737621 checks.go:828] using image pull policy: IfNotPresent
I0316 08:09:19.023967  737621 checks.go:846] image exists: 10.0.16.62:8080/new/kube-apiserver:v1.29.11
I0316 08:09:19.067897  737621 checks.go:846] image exists: 10.0.16.62:8080/new/kube-controller-manager:v1.29.11
I0316 08:09:19.121899  737621 checks.go:846] image exists: 10.0.16.62:8080/new/kube-scheduler:v1.29.11
I0316 08:09:19.166452  737621 checks.go:846] image exists: 10.0.16.62:8080/new/kube-proxy:v1.29.11
I0316 08:09:19.227060  737621 checks.go:846] image exists: 10.0.16.62:8080/new/coredns:v1.11.1
I0316 08:09:19.346328  737621 checks.go:846] image exists: 10.0.16.62:8080/new/pause:3.9
I0316 08:09:19.404943  737621 checks.go:846] image exists: 10.0.16.62:8080/new/etcd:3.5.16-0

## 6. 인증서 생성
[certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot;
I0316 08:09:19.405351  737621 certs.go:112] creating a new certificate authority for ca
[certs] Generating &quot;ca&quot; certificate and key
I0316 08:09:19.705783  737621 certs.go:519] validating certificate period for ca certificate
[certs] Generating &quot;apiserver&quot; certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.16.71]
[certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key
I0316 08:09:20.334959  737621 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating &quot;front-proxy-ca&quot; certificate and key
I0316 08:09:20.612167  737621 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating &quot;front-proxy-client&quot; certificate and key
I0316 08:09:20.901589  737621 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating &quot;etcd/ca&quot; certificate and key
I0316 08:09:21.065271  737621 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating &quot;etcd/server&quot; certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [10.0.16.71 127.0.0.1 ::1]
[certs] Generating &quot;etcd/peer&quot; certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [10.0.16.71 127.0.0.1 ::1]
[certs] Generating &quot;etcd/healthcheck-client&quot; certificate and key
[certs] Generating &quot;apiserver-etcd-client&quot; certificate and key
I0316 08:09:22.239465  737621 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating &quot;sa&quot; key and public key
[kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot;
I0316 08:09:22.372382  737621 kubeconfig.go:112] creating kubeconfig file for admin.conf

## 7. Kubeconfig 파일 생성
[kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file
I0316 08:09:22.496756  737621 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing &quot;super-admin.conf&quot; kubeconfig file
I0316 08:09:22.792774  737621 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing &quot;kubelet.conf&quot; kubeconfig file
I0316 08:09:22.902729  737621 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file
I0316 08:09:23.377538  737621 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file

## 8. Etcd 및 Control Plane 구성
[etcd] Creating static Pod manifest for local etcd in &quot;/etc/kubernetes/manifests&quot;
I0316 08:09:23.577173  737621 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to &quot;/etc/kubernetes/manifests/etcd.yaml&quot;
[control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot;
[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;
I0316 08:09:23.577465  737621 manifests.go:102] [control-plane] getting StaticPodSpecs
I0316 08:09:23.577920  737621 certs.go:519] validating certificate period for CA certificate
I0316 08:09:23.578028  737621 manifests.go:128] [control-plane] adding volume &quot;ca-certs&quot; for component &quot;kube-apiserver&quot;
I0316 08:09:23.578045  737621 manifests.go:128] [control-plane] adding volume &quot;etc-pki&quot; for component &quot;kube-apiserver&quot;
I0316 08:09:23.578057  737621 manifests.go:128] [control-plane] adding volume &quot;k8s-certs&quot; for component &quot;kube-apiserver&quot;
I0316 08:09:23.578069  737621 manifests.go:128] [control-plane] adding volume &quot;usr-share-ca-certificates&quot; for component &quot;kube-apiserver&quot;
I0316 08:09:23.579347  737621 manifests.go:157] [control-plane] wrote static Pod manifest for component &quot;kube-apiserver&quot; to &quot;/etc/kubernetes/manifests/kube-apiserver.yaml&quot;
[control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot;
I0316 08:09:23.579416  737621 manifests.go:102] [control-plane] getting StaticPodSpecs
I0316 08:09:23.579692  737621 manifests.go:128] [control-plane] adding volume &quot;ca-certs&quot; for component &quot;kube-controller-manager&quot;
I0316 08:09:23.579728  737621 manifests.go:128] [control-plane] adding volume &quot;etc-pki&quot; for component &quot;kube-controller-manager&quot;
I0316 08:09:23.579741  737621 manifests.go:128] [control-plane] adding volume &quot;flexvolume-dir&quot; for component &quot;kube-controller-manager&quot;
I0316 08:09:23.579753  737621 manifests.go:128] [control-plane] adding volume &quot;k8s-certs&quot; for component &quot;kube-controller-manager&quot;
I0316 08:09:23.579764  737621 manifests.go:128] [control-plane] adding volume &quot;kubeconfig&quot; for component &quot;kube-controller-manager&quot;
I0316 08:09:23.579779  737621 manifests.go:128] [control-plane] adding volume &quot;usr-share-ca-certificates&quot; for component &quot;kube-controller-manager&quot;
I0316 08:09:23.581881  737621 manifests.go:157] [control-plane] wrote static Pod manifest for component &quot;kube-controller-manager&quot; to &quot;/etc/kubernetes/manifests/kube-controller-manager.yaml&quot;
[control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot;
I0316 08:09:23.581936  737621 manifests.go:102] [control-plane] getting StaticPodSpecs
I0316 08:09:23.582247  737621 manifests.go:128] [control-plane] adding volume &quot;kubeconfig&quot; for component &quot;kube-scheduler&quot;
I0316 08:09:23.583044  737621 manifests.go:157] [control-plane] wrote static Pod manifest for component &quot;kube-scheduler&quot; to &quot;/etc/kubernetes/manifests/kube-scheduler.yaml&quot;
I0316 08:09:23.583145  737621 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Starting the kubelet
I0316 08:09:24.001747  737621 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy

## 9. kubeadm init을 실행 -- 에러가 날 경우: kubelet이 static Pod를 실행하지 못하고 있을 가능성이 있음.
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;. This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.004837 seconds
I0316 08:09:34.015390  737621 kubeconfig.go:606] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0316 08:09:34.020403  737621 kubeconfig.go:682] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I0316 08:09:34.058422  737621 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap &quot;kubeadm-config&quot; in the &quot;kube-system&quot; Namespace
I0316 08:09:34.107281  737621 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap &quot;kubelet-config&quot; in namespace kube-system with the configuration for the kubelets in the cluster
I0316 08:09:34.188089  737621 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I0316 08:09:34.188347  737621 patchnode.go:31] [patchnode] Uploading the CRI Socket information &quot;unix:///var/run/containerd/containerd.sock&quot; to the Node API object &quot;k8s-master-2&quot; as an annotation
[upload-certs] Storing the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace
[upload-certs] Using certificate key:
23911e0dbad42ca9c82419f332ffc9367fa1b3ca5c5ab74011026f96240b4e12
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: sewekg.e85zilfzi0v3vxf3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the &quot;cluster-info&quot; ConfigMap in the &quot;kube-public&quot; namespace
I0316 08:09:35.526354  737621 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0316 08:09:35.527022  737621 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0316 08:09:35.527270  737621 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0316 08:09:35.538323  737621 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0316 08:09:35.606931  737621 kubeletfinalize.go:91] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found &quot;/var/lib/kubelet/pki/kubelet-client-current.pem&quot;
[kubelet-finalize] Updating &quot;/etc/kubernetes/kubelet.conf&quot; to point to a rotatable kubelet client certificate and key
I0316 08:09:35.611075  737621 kubeletfinalize.go:135] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run &quot;kubectl apply -f [podnetwork].yaml&quot; with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.0.16.71:6443 --token sewekg.e85zilfzi0v3vxf3 \
    --discovery-token-ca-cert-hash sha256:cd8b1e4f1b3311b5160c036cda22168247d30983f41702ed0f1e4b64a117eae1 \
    --control-plane --certificate-key 23911e0dbad42ca9c82419f332ffc9367fa1b3ca5c5ab74011026f96240b4e12

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
&quot;kubeadm init phase upload-certs --upload-certs&quot; to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.16.71:6443 --token sewekg.e85zilfzi0v3vxf3 \
    --discovery-token-ca-cert-hash sha256:cd8b1e4f1b3311b5160c036cda22168247d30983f41702ed0f1e4b64a117eae1</code></pre><br>

<h2 id="에러-종류">에러 종류</h2>
<p>보통 5, 6, 9번 단계에서 에러가 난다. 
특히 private 환경에서 k8s를 구성할 때 이미지 관련 문제라면 5번, 
인증서 부분의 문제라면 6번, kubelet이 control plane static Pod를 띄우지 못하는 문제라면 9번에서 에러가 난다. </p>
<p>그래서 로그를 확인하기 위해 init 명령어에 <code>--v=5</code> 옵션을 추가함!!</p>
<pre><code>sudo kubeadm init --image-repository=10.0.16.62:8080/new --control-plane-endpoint=10.0.16.71:6443 --upload-certs --v=5</code></pre><br>
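<p>5번(이미지) 단계가 의심될 때는 init 전에 필요한 이미지 목록을 확인하고 미리 pull 해보는 것으로 빠르게 점검할 수 있다. 레지스트리 주소는 위 init 명령어의 값을 그대로 가정한 예시다.</p>
<pre><code class="language-bash"># init 전에 필요한 이미지 목록 확인 및 사전 pull
sudo kubeadm config images list --image-repository=10.0.16.62:8080/new
sudo kubeadm config images pull --image-repository=10.0.16.62:8080/new</code></pre>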


<h2 id="실시간-로그-확인-명령어">실시간 로그 확인 명령어</h2>
<ul>
<li>실시간 kubelet 로그 확인: <code>journalctl -u kubelet -f</code></li>
<li>실시간 시스템 전체 로그 확인: <code>journalctl -xe -f</code>
<img src="https://velog.velcdn.com/images/jupiter-j/post/5aa38ded-74a8-45e7-8cb0-f1d80154b934/image.png" alt=""></li>
</ul>
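<p>9번 단계에서 멈추는 경우에는 kubelet 로그와 함께, 컨테이너 런타임 쪽에서 control plane static Pod 컨테이너가 실제로 떠 있는지도 확인해보면 원인을 좁히기 쉽다. 컨테이너 ID는 환경마다 다른 예시 값이다.</p>
<pre><code class="language-bash"># control plane static Pod 컨테이너 상태 확인
crictl ps -a | grep -e kube-apiserver -e etcd
# 특정 컨테이너의 로그 확인
crictl logs &lt;container-id&gt;</code></pre>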
]]></description>
        </item>
        <item>
            <title><![CDATA[[Github] The requested URL returned error: 403]]></title>
            <link>https://velog.io/@jupiter-j/Github-The-requested-URL-returned-error-403</link>
            <guid>https://velog.io/@jupiter-j/Github-The-requested-URL-returned-error-403</guid>
            <pubDate>Sun, 16 Mar 2025 09:49:19 GMT</pubDate>
            <description><![CDATA[<p>깃허브로 레파지토리를 만들어 문서들을 저장하려고 했는데 에러가 떴다. 오랜만이라서 다 잊어버림 ;; </p>
<p>에러로그</p>
<blockquote>
<p><strong>The requested URL returned error: 403</strong></p>
</blockquote>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/57954222-bb45-4d93-86aa-9cf0d07552d7/image.png" alt=""></p>
<p>github에서 토큰도 발급하고 제대로 붙여 넣었는데 왜 이런 에러가 뜰까?</p>
<br>

<ul>
<li>저장소 확인 : <code>git remote -v</code>
내가 만든 저장소와 제대로 동기화 되어있는지 확인하자
<img src="https://velog.velcdn.com/images/jupiter-j/post/61cb5405-1403-46eb-afa7-af92b80c986f/image.png" alt=""></li>
</ul>
<br>

<p><strong>제대로 동기화되어있는데 403 에러가 뜬거면 토큰이 잘못 생성된 문제다.</strong></p>
<p>Setting &gt; Developer settings &gt; Personal access tokens &gt; Tokens (classic)에서 만든 토큰 설정에 repo 권한을 추가하여 만들어야 한다.</p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/358ce787-b88f-4eeb-a94d-f8cafeb2eaf0/image.png" alt=""></p>
<ul>
<li>유저 등록: <code>git push --set-upstream origin master</code>
userid , token 값을 넣어 다시 해보자</li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/b4ee8f54-767b-494d-bd05-39c0e70a5edb/image.png" alt=""></p>
]]></description>
        </item>
        <item>
            <title><![CDATA[k8s 삭제 & 초기화]]></title>
            <link>https://velog.io/@jupiter-j/k8s-%EC%82%AD%EC%A0%9C</link>
            <guid>https://velog.io/@jupiter-j/k8s-%EC%82%AD%EC%A0%9C</guid>
            <pubDate>Mon, 03 Mar 2025 11:27:05 GMT</pubDate>
            <description><![CDATA[<p>k8s를 재설치할때 삭제하는 과정이 가장 중요하다. 이전의 설정파일이나 etcd 등 파일들이 잔존해있다면 새로운 에러들이 발생하기 때문이다. 테스트용 서버면 상관없지만 다중 마스터 + 다중 노드 구성 혹은 실제 서버에서 이런일이 일어나면 끔찍하니까... </p>
<p>지난 글의 public 환경에서 설치한 k8s를 삭제하는 과정을 정리함 </p>
<blockquote>
<h3 id="삭제되어야-하는-것들">삭제되어야 하는 것들</h3>
</blockquote>
<ol>
<li>잔존한 pv,pvc -&gt; deploy -&gt; pod 삭제하기</li>
<li>cni 네트워크 구성 삭제 </li>
<li>kubeadm reset (여기서부터 시작)</li>
<li>containerd 삭제 </li>
<li>k8s 데이터 삭제 </li>
</ol>
<br>


<ul>
<li>디스크 공간 사용량 확인</li>
</ul>
<pre><code>[root@k8s-master-2 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                      4.0M     0  4.0M   0% /dev
tmpfs                         3.8G     0  3.8G   0% /dev/shm
tmpfs                         1.6G  9.4M  1.5G   1% /run
/dev/vda4                      49G   34G   16G  68% /
/dev/vda3                     960M  170M  791M  18% /boot
/dev/vda2                     200M  7.1M  193M   4% /boot/efi
10.0.16.71:/data/cmp-nas/k8s   49G   34G   16G  68% /data/cmp-storage-k8s
tmpfs                         769M     0  769M   0% /run/user/0
shm                            64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/8c9d69891fb277dac8fe349389509ef37fae3de224d88d1451a5f3ade20cd7e2/shm
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/8c9d69891fb277dac8fe349389509ef37fae3de224d88d1451a5f3ade20cd7e2/rootfs
shm                            64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/305d1a64a736c9963ca6c7ea2b2f7f2bf2d03b03604050a349fe37048e7f9e6f/shm
shm                            64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/5fd79c42421880a8a32382f79444b079848a7c6c9c4275eced65b283e22667ab/shm
shm                            64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/78481a4c241d58864340b7b44750745b93b5667b6d7e9bfb5c8125ab19dfff11/shm
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/305d1a64a736c9963ca6c7ea2b2f7f2bf2d03b03604050a349fe37048e7f9e6f/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/5fd79c42421880a8a32382f79444b079848a7c6c9c4275eced65b283e22667ab/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/78481a4c241d58864340b7b44750745b93b5667b6d7e9bfb5c8125ab19dfff11/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/0d21036db7d10b2562c4282dc0e8dba98f75da5b2a9a54becd8c091ca77a2d7c/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/a2314e21319fcf4032ea08153b734d72a0ec7df87194b3a9b886e2f49cef6b19/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/ebb6de4659547ac76a8db7ee359a199a7f185b9871c06a86d3839d3eeaf5f402/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/89d8260f9f4a33608574baac4c4d11adc44c126df0a7f84784fd48e4cba56098/rootfs
tmpfs                         7.5G   12K  7.5G   1% /var/lib/kubelet/pods/08d6b196-6ba4-4547-a73b-660bfe91080e/volumes/kubernetes.io~projected/kube-api-access-jbwv5
shm                            64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/04a16e875f57322ec34f52a1dc36374b0ed73e03ec432d9bbf78c94fc1909809/shm
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/04a16e875f57322ec34f52a1dc36374b0ed73e03ec432d9bbf78c94fc1909809/rootfs
overlay                        49G   34G   16G  68% /run/containerd/io.containerd.runtime.v2.task/k8s.io/f15a69c8e52e8556492eaeb4da4fda7d8a35925cb835c69aed185c93368d48da/rootfs</code></pre>
<ul>
<li>k8s reset</li>
</ul>
<pre><code>sudo kubeadm reset -f</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/bb0b7e2d-1dc7-41a1-b1db-76c6c5fe91c4/image.png" alt=""></p>
<ul>
<li>컨트롤 플레인 및 worker 노드의 Kubernetes 구성 제거</li>
</ul>
<pre><code>systemctl stop kubelet
kubeadm reset --cri-socket unix:///var/run/containerd/containerd.sock
rm -rf /etc/cni/net.d $HOME/.kube/config</code></pre>
<br>

<ul>
<li>iptables, 네트워크 브릿지 설정 초기화</li>
</ul>
<pre><code>[root@k8s-worker-1 lib]# iptables -F &amp;&amp; iptables -X &amp;&amp; iptables -t nat -F &amp;&amp; iptables -t nat -X
[root@k8s-worker-1 lib]# iptables -t mangle -F &amp;&amp; iptables -t mangle -X &amp;&amp; iptables -P FORWARD ACCEPT</code></pre>
<br>

<ul>
<li>Kubernetes 노드에서 불필요한 이미지 및 컨테이너 정리<br>
containerd 및 kubelet 재시작으로 클린 상태 유지<br>
디스크 공간 확보 및 환경 초기화</li>
</ul>
<pre><code>nerdctl rmi $(nerdctl images -a -q) -f &amp;&amp; nerdctl container prune &amp;&amp; systemctl restart kubelet &amp;&amp; systemctl restart containerd &amp;&amp; nerdctl system prune -a</code></pre>
<blockquote>
<ul>
<li><code>nerdctl rmi $(nerdctl images -a -q) -f</code> : 모든 이미지 목록(-a)에서 이미지 ID만 출력(-q)하고, 얻은 모든 이미지 ID를 nerdctl rmi 명령어에 전달하여 강제 삭제(-f)</li>
<li><code>nerdctl container prune</code> : 실행 중이지 않은 모든 컨테이너를 정리</li>
<li><code>systemctl restart kubelet &amp;&amp; systemctl restart containerd</code> : Kubelet과 containerd를 재시작하여 변경 사항을 반영하고 정상 동작하도록 함</li>
<li><code>nerdctl system prune -a</code> : 불필요한 리소스(네트워크, 볼륨 등)까지 모두 정리. -a 옵션을 붙이면 사용하지 않는 이미지, 컨테이너, 볼륨, 네트워크 등 전부 삭제</li>
</ul>
<p><strong>현재 실행 중인 컨테이너는 정리되지 않지만, 정지된 컨테이너와 사용되지 않는 리소스는 모두 삭제됨.</strong></p>
</blockquote>
<pre><code>[root@k8s-master-2 lib]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                      4.0M     0  4.0M   0% /dev
tmpfs                         3.8G     0  3.8G   0% /dev/shm
tmpfs                         1.6G  8.7M  1.5G   1% /run
/dev/vda4                      49G   33G   17G  67% /
/dev/vda3                     960M  170M  791M  18% /boot
/dev/vda2                     200M  7.1M  193M   4% /boot/efi
10.0.16.71:/data/cmp-nas/k8s   49G   33G   17G  67% /data/cmp-storage-k8s
tmpfs                         769M     0  769M   0% /run/user/0</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/cdbffc56-ac5d-42d2-97c4-a85cb852894c/image.png" alt=""></p>
<ul>
<li>kubelet, kubeadm, kubectl 삭제</li>
</ul>
<pre><code>sudo dnf remove -y kubelet kubeadm kubectl
sudo rm -rf /etc/yum.repos.d/kubernetes.repo</code></pre>
<ul>
<li>containerd 삭제</li>
</ul>
<pre><code>[root@k8s-master-2 lib]# sudo systemctl stop containerd
[root@k8s-master-2 lib]# sudo systemctl disable containerd
Removed &quot;/etc/systemd/system/multi-user.target.wants/containerd.service&quot;.
[root@k8s-master-2 lib]# sudo rm -rf /usr/local/bin/containerd /usr/local/bin/containerd-shim* /usr/local/sbin/runc
[root@k8s-master-2 lib]# sudo rm -rf /etc/containerd /var/lib/containerd
[root@k8s-master-2 lib]# sudo rm -rf /opt/cni/bin /etc/cni /var/lib/cni</code></pre>
<ul>
<li>k8s 데이터 삭제</li>
</ul>
<pre><code>sudo rm -rf ~/.kube
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/lib/etcd</code></pre>
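<p>삭제가 끝난 뒤에는 아래처럼 잔존 프로세스나 디렉토리가 남아있지 않은지 간단히 점검해볼 수 있다(점검용 예시).</p>
<pre><code class="language-bash"># 관련 프로세스가 남아있는지 확인
ps -ef | grep -e kubelet -e containerd
# 설정/데이터 디렉토리가 비었는지 확인
ls /etc/kubernetes /var/lib/kubelet /var/lib/etcd</code></pre>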
]]></description>
        </item>
        <item>
            <title><![CDATA[Public 환경에서 k8s 구성하기]]></title>
            <link>https://velog.io/@jupiter-j/Public-%ED%99%98%EA%B2%BD%EC%97%90%EC%84%9C-k8s-%EA%B5%AC%EC%84%B1%ED%95%98%EA%B8%B0</link>
            <guid>https://velog.io/@jupiter-j/Public-%ED%99%98%EA%B2%BD%EC%97%90%EC%84%9C-k8s-%EA%B5%AC%EC%84%B1%ED%95%98%EA%B8%B0</guid>
            <pubDate>Mon, 03 Mar 2025 11:22:01 GMT</pubDate>
            <description><![CDATA[<h1 id="public-환경에서-k8s-구성하기">Public 환경에서 k8s 구성하기</h1>
<blockquote>
<p>외부망 환경에서 k8s를 설치하는 경우, 별도의 레지스트리를 구성하거나 이미지를 미리 다운받아서 사용할 필요가 없다.</p>
</blockquote>
<blockquote>
<ul>
<li>설치 버전 정보
Containerd(1.7.13v)/ CNI-Plugin(1.3.0v)/ runC(1.1.12v)/ k8s(1.30v) 으로 설치</li>
</ul>
</blockquote>
<ul>
<li>운영체제 : RHEL 9.4</li>
<li>Master2 / Worker1을 사용하여 실습</li>
</ul>
<h2 id="1-운영체제-준비">1. 운영체제 준비</h2>
<ul>
<li>k8s는 리눅스 기반 컨테이너 환경을 지원하기 때문에 RHEL, RockyLinux, Ubuntu 등을 사용한다.
나의 경우는 RHEL 9.4v을 사용</li>
</ul>
<h2 id="2-패키지-및-시스템-설정">2. 패키지 및 시스템 설정</h2>
<h3 id="외부접속-확인">외부접속 확인</h3>
<ul>
<li><p>서버가 외부접속이 가능한지 확인 <code>vi /etc/resolv.conf</code></p>
<p>  /etc/resolv.conf 파일은 어떤 DNS 서버를 사용할지 적혀있는 설정파일이다. nameserver는 DNS 서버의 주소를 정해주는 설정이다. 수정후 인터넷 외부접속이 가능한지 ping으로 확인한다. </p>
<pre><code class="language-bash">  [root@k8s-master-2 ~]# cat /etc/resolv.conf
  # Generated by NetworkManager
  search openstacklocal
  nameserver 8.8.8.8 # 추가 
  nameserver 10.0.16.3
  nameserver 10.0.16.4
  nameserver 10.0.16.2

  [root@k8s-master-2 ~]# ping 8.8.8.8
  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
  64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=39.7 ms
  64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=35.8 ms
  64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=35.4 ms</code></pre>
</li>
</ul>
<h3 id="메모리-확인">메모리 확인</h3>
<ul>
<li><p>메모리 공간 확인 : <code>free -h</code></p>
<p>  시스템의 메모리(RAM)사용량을 확인한다. </p>
<pre><code>  [root@k8s-master-2 ~]# free -h
                 total        used        free      shared  buff/cache   available
  Mem:           7.5Gi       2.4Gi       4.0Gi        11Mi       1.3Gi       5.1Gi
  Swap:             0B          0B          0B</code></pre></li>
</ul>
<h3 id="스왑-메모리-비활성화">스왑 메모리 비활성화</h3>
<ul>
<li><p>스왑 메모리 비활성화: <code>swapoff -a</code></p>
</li>
<li><p>스왑 메모리가 0인지 확인: <code>free -m</code></p>
<p>  SwapMemory: RAM이 부족할때 디스크의 일부를 가상 메모리로 사용하는 기능이다. 스왑 메모리가 있으면 실제 메모리 사용량을 정확하게 파악하지 못하기 때문에 k8s의 리소스 관리와 스케줄링에 혼란을 줄 수 있다. </p>
<ul>
<li><p>비활성화가 되어있지 않은경우:  <code>sudo sed -i &#39;/swap/s/^/#/&#39; /etc/fstab</code></p>
<p>  /etc/fstab 파일에서 스왑 관련 항목을 찾아 그앞에 # 주석처리를 하여 비활성화 하는 명령어 </p>
</li>
</ul>
</li>
</ul>
<pre><code class="language-bash">[root@k8s-master-2 ~]# swapoff -a
[root@k8s-master-2 ~]# free -m
               total        used        free      shared  buff/cache   available
Mem:            7683        2500        4071          11        1363        5182
Swap:              0           0           0</code></pre><h3 id="방화벽-비활성화">방화벽 비활성화</h3>
<ul>
<li><p>방화벽 비활성화 설정</p>
<p>  k8s를 구성하기 위해서는 방화벽을 비활성화 해야한다. k8s는 서로 통신하기 위해 여러 포트를 개방해야한다. 방화벽이 활성화된 상태에서는 포트들을 차단할 가능성이 높기때문에 네트워크 문제가 발생할 수 있다.</p>
<table>
<thead>
<tr>
<th>서비스</th>
<th>포트 번호</th>
</tr>
</thead>
<tbody><tr>
<td><strong>api-server</strong></td>
<td>6443</td>
</tr>
<tr>
<td><strong>etcd</strong></td>
<td>2379-2380</td>
</tr>
<tr>
<td><strong>kubelet</strong></td>
<td>10250</td>
</tr>
<tr>
<td><strong>kube-scheduler</strong></td>
<td>10259</td>
</tr>
<tr>
<td><strong>kube-controller-manager</strong></td>
<td>10257</td>
</tr>
</tbody></table>
<p>방화벽이 설치되어 있는 경우:</p>
<pre><code class="language-bash">  # 방화벽 비활성화
  # systemctl disable firewalld &amp;&amp; systemctl stop firewalld

  # SELinux 일시적으로 비활성화
  # setenforce 0

  # SELinux 영구적으로 비활성화
  # sed -i &#39;s@SELINUX=.*@SELINUX=disabled@g&#39; /etc/selinux/config</code></pre>
<p>  나의 경우 방화벽 자체를 설치하지 않았음 </p>
<pre><code class="language-bash">  [root@k8s-master-2 ~]# systemctl status firewalld
  Unit firewalld.service could not be found.</code></pre>
</li>
</ul>
<h3 id="cgroup-설정">Cgroup 설정</h3>
<ul>
<li>파일 시스템 확인: <code>findmnt /sys/fs/cgroup</code><ul>
<li>파일시스템: 운영체제가 데이터를 저장하고, 읽고, 관리하는 방식이다. 데이터 저장 및 관리, 파일 디렉터리 구조 제공, 접근제어 및 권한 관리등 역할을 한다.</li>
</ul>
</li>
</ul>
<table>
<thead>
<tr>
<th>파일 시스템</th>
<th>설명</th>
</tr>
</thead>
<tbody><tr>
<td><strong>ext4</strong></td>
<td>리눅스에서 가장 많이 쓰이는 파일 시스템 (ext2 → ext3 → ext4)</td>
</tr>
<tr>
<td><strong>XFS</strong></td>
<td>대용량 파일과 고성능을 지원하는 파일 시스템</td>
</tr>
<tr>
<td><strong>Btrfs</strong></td>
<td>고급 기능(Snapshot, RAID 지원)을 제공하는 차세대 파일 시스템</td>
</tr>
<tr>
<td><strong>tmpfs</strong></td>
<td>RAM(메모리)에 데이터를 저장하는 파일 시스템 (재부팅 시 삭제됨)</td>
</tr>
<tr>
<td><strong>cgroup2</strong></td>
<td>컨테이너 리소스 제한을 위한 파일 시스템</td>
</tr>
</tbody></table>
<ul>
<li>여기서 cgroup2는 컨테이너 리소스 cpu, 메모리 등을 관리하는 역할을 함</li>
<li>findmnt는 Linux에서 마운트된 파일 시스템을 확인하는 명령어</li>
</ul>
<pre><code class="language-bash">[root@k8s-master-2 ~]# findmnt /sys/fs/cgroup
TARGET         SOURCE  FSTYPE  OPTIONS
/sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot</code></pre><ul>
<li><p>Cgroup 확인</p>
<ul>
<li><p>Cgroup이란 control group 리눅스 커널에서 제공하는 리소스 관리 기능으로 프로세스 그룹에 대해 cpu, 메모리,  네트워크 등 리소스를 제한 및 할당이 가능 하다.</p>
</li>
<li><p>k8s는 왜 cgroup을 사용하는가: k8s의 kubelet과 컨테이너 런타임(containerd, cri-o)은 Cgroup을 사용하여 리소스를 제한하고 관리한다. k8s는 기본적으로 cgroup v1 또는 cgroup v2를 사용함.</p>
</li>
<li><p><em>k8s 1.24버전 이상부터는 cgroup v2를 권장한다.</em></p>
<pre><code class="language-bash"># /sys/fs/cgroup 디렉토리의 파일 시스템 타입을 확인하는 명령어
[root@k8s-master-2 ~]# stat -fc %T /sys/fs/cgroup
cgroup2fs

# mount -l | grep cgroup
[root@k8s-master-2 ~]# mount -l | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)</code></pre>
</li>
</ul>
</li>
<li><p>cgroup v2가 설정되어 있지 않은 경우</p>
<p>  GRUB: GRand Unified Bootloader, <strong>리눅스 시스템을 부팅할때 실행되는 부트로더</strong>이다. </p>
<ul>
<li>systemd.unified_cgroup_hierarchy=1 → cgroup v2를 활성화하겠다는 의미</li>
</ul>
<pre><code class="language-bash"># cgroup v2가 설정되어 있지 않을때
## GRUB 설정 변경
k8s_all.hcpkube ~]$ sudo vi /etc/default/grub

# ------------CHANGE-------------
# GRUB_CMDLINE_LINUX 항목에 &quot;systemd.unified_cgroup_hierarchy=1&quot; 추가
GRUB_CMDLINE_LINUX=&quot;crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap systemd.unified_cgroup_hierarchy=1&quot;
# ------------CHANGE-------------END

## GRUB 설정을 업데이트 하여 부팅시 새로운 옵션을 적용
k8s_all.hcpkube ~]$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Adding boot menu entry for UEFI Firmware Settings ...
done

## 커널 버전을 확인
k8s_all.hcpkube ~]$ sudo uname -r
5.14.0-427.13.1.el9_4.x86_64

## 현재 실행 중인 커널이 GRUB에서 cgroup v2를 사용하도록 설정
k8s_all.hcpkube ~]$ sudo grubby --update-kernel=/boot/vmlinuz-5.14.0-427.13.1.el9_4.x86_64 --args=&quot;systemd.unified_cgroup_hierarchy=1&quot;
k8s_all.hcpkube ~]$ sudo reboot</code></pre></li>
</ul>
<h3 id="커널-모듈-설정">커널 모듈 설정</h3>
<ul>
<li>커널: 운영체제의 핵심 부분으로 하드웨어와 소프트웨어에서 다리 역할을 하는 프로그램</li>
<li><code>overlay</code> 모듈: 컨테이너가 이미지 계층을 공유할 수 있도록 지원(Containerd, Docker 등)</li>
<li><code>br_netfilter</code> 모듈: iptables가 브릿지 네트워크의 트래픽을 필터링할 수 있도록 설정하여, Kubernetes 네트워크 정책 및 Pod 간 통신을 관리할 수 있도록 함.</li>
</ul>
<pre><code class="language-bash">k8s_all.hcpkube ~]$ sudo vi /etc/modules-load.d/k8s.conf
-------------ADD-------------
overlay
br_netfilter
-------------ADD-------------END
k8s_all.hcpkube ~]$ sudo vi /etc/sysctl.conf
-------------ADD-------------
fs.file-max=66536
-------------ADD-------------END

# 커널 모듈 적용
sysctl --system</code></pre>
<table>
<thead>
<tr>
<th>개념</th>
<th>설명</th>
<th>위 설정 중 해당하는 부분</th>
</tr>
</thead>
<tbody><tr>
<td><strong>커널 모듈</strong></td>
<td>리눅스 커널의 기능을 확장하는 플러그인 같은 것</td>
<td><code>/etc/modules-load.d/k8s.conf</code> (→ <code>overlay</code>, <code>br_netfilter</code> 모듈 로드)</td>
</tr>
<tr>
<td><strong>커널 파라미터</strong></td>
<td>리눅스 커널이 동작하는 방식을 조정하는 값</td>
<td><code>/etc/sysctl.conf</code>, <code>/etc/sysctl.d/k8s.conf</code> (→ 네트워크 설정, IP 포워딩 등)</td>
</tr>
<tr>
<td><strong>파일 시스템</strong></td>
<td>데이터를 저장하는 구조 및 방식</td>
<td>(해당 없음, 하지만 <code>overlay</code> 모듈은 파일 시스템 관련)</td>
</tr>
</tbody></table>
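<p>/etc/modules-load.d/k8s.conf는 부팅 시에 적용되므로, 재부팅 없이 바로 쓰려면 아래처럼 수동으로 로드하고 확인할 수 있다.</p>
<pre><code class="language-bash"># 모듈 즉시 로드 및 로드 여부 확인
modprobe overlay
modprobe br_netfilter
lsmod | grep -e overlay -e br_netfilter</code></pre>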
<br>

<h3 id="iptables-추가-네트워크-패킷-설정--nat-테이블-초기화">iptables 추가 (네트워크 패킷 설정) &amp; NAT 테이블 초기화</h3>
<pre><code class="language-bash">vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward=1</code></pre>
<ul>
<li>iptables: 리눅스 방화벽 및 패킷 필터링 도구. 네트워크에서 들어오고 나가는 패킷을 필터링하고 제어하는 역할을 한다.</li>
<li><code>net.bridge.bridge-nf-call-iptables = 1</code> 브리지 네트워크를 통해 전달되는 IPv4 패킷이 iptables를 통해 필터링 되도록한다. 기본적으로 리눅스 시스템에서는 0으로 설정되어 iptables규칙 적용을 받지 않는다. 1로 설정하면서 k8s의 네트워크 구성요소들이 iptables를 통해 트래픽을 제어하고 관리할 수 있게 한다.</li>
<li><code>net.bridge.bridge-nf-call-ip6tables = 1</code> IPv6 패킷이 ip6tables를 통해 필터링 되도록한다</li>
<li><code>net.ipv4.ip_forward = 1</code> 시스템이 패킷 포워딩을 허용하도록 한다.</li>
</ul>
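<p>설정을 추가한 뒤에는 아래처럼 적용하고 각 값이 1로 보이는지 확인한다(bridge 관련 값은 br_netfilter 모듈이 로드되어 있어야 보인다).</p>
<pre><code class="language-bash"># 커널 파라미터 적용 및 확인
sudo sysctl --system
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward</code></pre>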
<pre><code class="language-bash"># 방화벽 규칙확인 - ACCEPT : 현재 특별한 방화벽 규칙이 적용되지 않음
[root@k8s-master-2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

## 모든 방화벽 규칙 삭제 F / 사용자 정의 체인 삭제 X / 패킷 및 바이트 카운터 초기화 Z
[root@k8s-master-2 ~]# sudo iptables -F &amp;&amp; sudo iptables -X &amp;&amp; sudo iptables -Z
[root@k8s-master-2 ~]# iptables --table nat --flush
--------------------------------------------------------------------- 아래는 추가
k8s_all.hcpkube ~]$ sudo iptables -L
k8s_all.hcpkube ~]$ sudo iptables -F &amp;&amp; sudo iptables -X &amp;&amp; sudo iptables -Z  (또는 iptables --flush; iptables --delete-chain; iptables --zero )
k8s_all.hcpkube ~]$ sudo iptables --table nat --flush   (또는 iptables -t nat -F)
k8s_all.hcpkube ~]$ sudo systemctl disable firewalld --now &amp;&amp; sudo systemctl stop firewalld
k8s_all.hcpkube ~]$ sudo sed -i &#39;s@SELINUX=.*@SELINUX=disabled@g&#39; /etc/selinux/config &amp;&amp; cat /etc/selinux/config | grep SELINUX
k8s_all.hcpkube ~]$ sudo setenforce 0 &amp;&amp; getenforce</code></pre>
<table>
<thead>
<tr>
<th><strong>구분</strong></th>
<th><strong>iptables (기본 방화벽 기능)</strong></th>
<th><strong>NAT 테이블 (IP 주소 변환 기능)</strong></th>
</tr>
</thead>
<tbody><tr>
<td>역할</td>
<td>패킷을 허용, 차단, 로깅하는 방화벽 역할</td>
<td>IP 주소를 변환하여 내부/외부 네트워크 간 통신을 중계</td>
</tr>
<tr>
<td>주요 체인</td>
<td><code>INPUT</code>, <code>OUTPUT</code>, <code>FORWARD</code></td>
<td><code>PREROUTING</code>, <code>POSTROUTING</code>, <code>OUTPUT</code></td>
</tr>
<tr>
<td>사용 목적</td>
<td>특정 포트/프로토콜을 제한하여 보안 강화</td>
<td>사설 네트워크와 외부 네트워크 간 통신 가능하게 함</td>
</tr>
</tbody></table>
<br>

<table>
<thead>
<tr>
<th><strong>명령어</strong></th>
<th><strong>의도</strong></th>
<th><strong>이유</strong></th>
</tr>
</thead>
<tbody><tr>
<td><code>iptables -L</code></td>
<td>현재 방화벽 규칙 확인</td>
<td>기존 설정 점검</td>
</tr>
<tr>
<td><code>iptables -F</code></td>
<td>기존 방화벽 규칙(룰) 삭제</td>
<td>Kubernetes 네트워크 충돌 방지</td>
</tr>
<tr>
<td><code>iptables -X</code></td>
<td>사용자 정의 체인 삭제</td>
<td>커스텀 방화벽 규칙 제거</td>
</tr>
<tr>
<td><code>iptables -Z</code></td>
<td>패킷 및 바이트 카운터 초기화</td>
<td>새로운 트래픽 모니터링 가능하게 함</td>
</tr>
<tr>
<td><code>iptables -t nat -F</code></td>
<td>NAT 테이블 초기화</td>
<td>Kubernetes의 CNI 플러그인과 충돌 방지</td>
</tr>
</tbody></table>
<br>

<h3 id="파일-디스크립터-제한-늘리기">파일 디스크립터 제한 늘리기</h3>
<p>기본적으로 리눅스 시스템은 사용자가 열 수 있는 파일(파일 디스크립터) 수에 제한을 두고 있다. k8s와 같은 대규모 오케스트레이션 플랫폼에서는 다수의 컨테이너와 프로세스가 동시에 시작되기 때문에 소프트 및 하드 제한을 늘려준다. </p>
<pre><code class="language-bash">cat &lt;&lt; EOF | sudo tee -a /etc/security/limits.conf
*               soft    nofile          65536
*               hard    nofile          65536
root            soft    nofile          65536
root            hard    nofile          65536
EOF</code></pre>
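<p>변경한 제한이 반영됐는지는 새로 로그인한 셸에서 확인할 수 있다 (확인용 예시):</p>
<pre><code class="language-bash"># soft limit / hard limit 확인 (65536이면 반영된 것)
ulimit -Sn
ulimit -Hn</code></pre>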
<br>
<br>

<h2 id="3-containerd-cni-plugin-runc-설치">3. Containerd, CNI-Plugin, RunC 설치</h2>
<p>k8s나 docker같은 컨테이너 오케스트레이션 도구는 직접 컨테이너를 실행하지 않기 때문에 <strong>컨테이너 실행을 담당하는 컨테이너 런타임이 필요</strong>하다.</p>
<ul>
<li><p>containerd는 k8s가 컨테이너를 실행하고 관리하는데 필요한 엔진이다</p>
<p>  <strong>kubelet</strong>(파드실행해줘 요청) → <strong>container runtime</strong>(containerd 요청을 <strong>runc</strong>에게 전달) → <strong>OCI</strong> runtime(runc 실제로 실행) → 컨테이너 실행 </p>
</li>
<li><p>Docker는 불필요한 기능이 많고, dockershim을 거쳐서 사용해야 한다는 불편함이 있다</p>
</li>
<li><p>containerd를 설치할 때 CNI 플러그인을 함께 설치하는 이유는 <strong>기본적인 네트워크 설정을 하기 위해서</strong></p>
<ul>
<li><strong>Flannel</strong>: 가벼운 오버레이 네트워크 제공 (기본 선택)</li>
<li><strong>Calico</strong>: 네트워크 정책 지원 (보안 강화)</li>
<li><strong>Weave</strong>: 자동 피어링 지원</li>
<li><strong>Bridge (기본 CNI)</strong>: 단순한 브리지 네트워크 (테스트용)</li>
</ul>
</li>
<li><p>기본 플러그인 말고 Cilium 을 추가적으로 하는 이유는?</p>
<p>  CNI 플러그인으로도 Pod 간 통신이 가능하지만, <strong>보안, 성능, 정책 관리가 부족하기 때문에 Cilium을 추가로 설치</strong></p>
</li>
<li><p><strong>k8s에서 containerd, runc가 필요한 이유</strong></p>
</li>
</ul>
<table>
<thead>
<tr>
<th>구성 요소</th>
<th>역할</th>
</tr>
</thead>
<tbody><tr>
<td><strong>kubelet</strong></td>
<td>Kubernetes 노드에서 컨테이너를 실행 및 관리</td>
</tr>
<tr>
<td><strong>containerd</strong></td>
<td>컨테이너 런타임으로서 컨테이너 생성 및 이미지 관리</td>
</tr>
<tr>
<td><strong>runc (OCI)</strong></td>
<td>리눅스 <strong>네임스페이스와 cgroup을 사용하여 컨테이너 실행</strong></td>
</tr>
</tbody></table>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/aa33dc54-b164-4372-9756-7f65124b1301/image.png" alt=""></p>
<br>

<h3 id="containerd-설치">Containerd 설치</h3>
<pre><code class="language-bash">dnf install wget
wget https://github.com/containerd/containerd/releases/download/v1.7.13/containerd-1.7.13-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.7.13-linux-amd64.tar.gz

[root@k8s-master-2 ~]# cd /usr/local/bin
[root@k8s-master-2 bin]# ls
...
buildkit-qemu-aarch64    containerd               nerdctl
buildkit-qemu-arm        containerd-shim          rootlessctl
buildkit-qemu-i386       containerd-shim-runc-v1  rootlesskit
buildkit-qemu-mips64     containerd-shim-runc-v2  rootlesskit-docker-proxy
buildkit-qemu-mips64el   containerd-stress
buildkit-qemu-ppc64le    crictl</code></pre>
<ul>
<li><p>containerd 파일 시스템 등록</p>
<p>  Kubernetes에서 <strong>컨테이너 런타임</strong>을 설정하고, 시스템 부팅 시 자동으로 실행되도록 하여 <strong>클러스터 관리</strong> 및 <strong>컨테이너 실행</strong>의 기반을 설정한다. </p>
</li>
</ul>
<pre><code class="language-bash">sudo mkdir -p /usr/local/lib/systemd/system
curl -o /usr/local/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

# 서비스 파일 인식
sudo systemctl daemon-reload
# 서비스 시작 및 자동 실행 설정 
sudo systemctl enable --now containerd
systemctl status containerd</code></pre>
<ul>
<li><p>containerd의 기본 설정 파일(<code>config.toml</code>)을 생성 및 초기화</p>
<p>  <code>SystemdCgroup = true</code> 설정은 <code>containerd</code>가 <strong>systemd의 cgroup 관리 방식을 사용</strong>하도록 하여, Kubernetes와 <code>containerd</code>의 리소스 관리가 <strong>일관성 있게 통합</strong>되도록 한다. </p>
</li>
</ul>
<pre><code class="language-bash">sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

[root@k8s-master-2 bin]# vi /etc/containerd/config.toml
135             Root = &quot;&quot;
136             ShimCgroup = &quot;&quot;
137             SystemdCgroup = true ## 변경 
138

------------------------------------------------------------------
[root@k8s-master-2 bin]# cat /etc/containerd/config.toml | grep -E &quot;root =|sandbox_image|config_path|SystemdCgroup&quot;
root = &quot;/var/lib/containerd&quot;
    sandbox_image = &quot;registry.k8s.io/pause:3.8&quot;
        runtime_root = &quot;&quot;
          runtime_root = &quot;&quot;
            SystemdCgroup = true ## 확인하기 
        runtime_root = &quot;&quot;
      config_path = &quot;&quot;
    plugin_config_path = &quot;/etc/nri/conf.d&quot;
    runtime_root = &quot;&quot;
    config_path = &quot;&quot;

systemctl restart containerd
systemctl status containerd    </code></pre>
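<p>참고로, <code>vi</code>로 직접 고치는 대신 <code>sed</code>로 한 번에 바꿀 수도 있다 (기본 <code>config.toml</code>을 방금 생성한 상태를 가정한 예시):</p>
<pre><code class="language-bash"># SystemdCgroup = false -&gt; true 로 변경 후 확인
sudo sed -i &#39;s/SystemdCgroup = false/SystemdCgroup = true/&#39; /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml

sudo systemctl restart containerd</code></pre>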
<br>

<h3 id="cni---plugin-설치">CNI - plugin 설치</h3>
<pre><code class="language-bash">wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz

systemctl daemon-reload
systemctl restart containerd</code></pre>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/4978d1b6-6f82-42d2-9dbe-67831f59e9db/image.png" alt=""></p>
<h3 id="runc-설치">runC 설치</h3>
<pre><code class="language-bash">wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc</code></pre>
<br>

<h3 id="모든-설치-확인">모든 설치 확인</h3>
<pre><code class="language-bash">[root@k8s-master-2 k8s_lv1]# containerd --version
containerd github.com/containerd/containerd v1.7.13 7c3aca7a610df76212171d200ca3811ff6096eb8
[root@k8s-master-2 k8s_lv1]# which containerd
/usr/local/bin/containerd
[root@k8s-master-2 k8s_lv1]# runc --version
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4
[root@k8s-master-2 k8s_lv1]# which runc
/usr/local/sbin/runc

## 만약 이 가이드 기준 runc version이 뜨지 않는다면
echo &quot;export PATH=$PATH:/usr/local/sbin&quot; &gt;&gt; ~/.bashrc
source ~/.bashrc
</code></pre>
<br>

<h2 id="4-k8s-설치">4. k8s 설치</h2>
<pre><code class="language-bash">[root@k8s-master-2 k8s_lv1]# cd /etc/yum.repos.d/
[root@k8s-master-2 yum.repos.d]# ls
local.repo  nexus.repo
[root@k8s-master-2 yum.repos.d]# cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
[root@k8s-master-2 yum.repos.d]# ls
kubernetes.repo  local.repo  nexus.repo

sudo dnf install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet

[root@k8s-master-2 default]# rpm -qa | grep kube
kubernetes-cni-1.4.0-150500.1.1.x86_64
kubelet-1.30.10-150500.1.1.x86_64
kubeadm-1.30.10-150500.1.1.x86_64
kubectl-1.30.10-150500.1.1.x86_64

## 버전확인
[root@k8s-master-2 yum.repos.d]# kubectl version --client
Client Version: v1.30.10
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
[root@k8s-master-2 yum.repos.d]# kubeadm version
kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;30&quot;, GitVersion:&quot;v1.30.10&quot;, GitCommit:&quot;ccc69071da5040a2bafc1ba9c4775782e0f4e55c&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2025-02-12T21:32:03Z&quot;, GoVersion:&quot;go1.22.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
[root@k8s-master-2 yum.repos.d]# kubelet --version
Kubernetes v1.30.10</code></pre>
<ul>
<li>cgroup 설정 (Kubelet에 systemd 사용). 아래 상태 출력처럼 kubelet이 <code>activating (auto-restart)</code>로 실패를 반복하는 것은 <code>kubeadm init</code> 전이라 <code>/var/lib/kubelet/config.yaml</code>이 아직 없기 때문이며, init 이후 정상 기동된다.</li>
</ul>
<pre><code class="language-bash">[root@k8s-master-2 yum.repos.d]# sudo mkdir -p /etc/default
[root@k8s-master-2 yum.repos.d]# echo &#39;KUBELET_EXTRA_ARGS=&quot;--cgroup-driver=systemd&quot;&#39; | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=&quot;--cgroup-driver=systemd&quot;

[root@k8s-master-2 yum.repos.d]# cd /etc/default
[root@k8s-master-2 default]# ls
grub  kubelet  useradd

[root@k8s-master-2 default]# pwd
/etc/default
[root@k8s-master-2 default]# cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=&quot;--cgroup-driver=systemd&quot;

[root@k8s-master-2 default]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Wed 2025-02-26 00:28:31 EST; 7s ago
       Docs: https://kubernetes.io/docs/
    Process: 139741 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_E&gt;
   Main PID: 139741 (code=exited, status=1/FAILURE)
        CPU: 137ms</code></pre>
<ul>
<li>init전 필요한 이미지 당겨오기</li>
</ul>
<pre><code class="language-bash"># 필요한 이미지 목록 확인
[root@k8s-master-2 manifests]# kubeadm config images list
I0303 05:09:45.913718    7380 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.30
registry.k8s.io/kube-apiserver:v1.30.10
registry.k8s.io/kube-controller-manager:v1.30.10
registry.k8s.io/kube-scheduler:v1.30.10
registry.k8s.io/kube-proxy:v1.30.10
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.15-0

# 필요한 이미지 pull
[root@k8s-master-2 kubeadm_v]# kubeadm config images pull
I0226 00:35:34.543630  140145 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.30
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.30.10
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.30.10
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.30.10
[config/images] Pulled registry.k8s.io/kube-proxy:v1.30.10
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.3
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.15-0</code></pre>
<ul>
<li>마스터 노드에서 init 명령어 수행</li>
</ul>
<pre><code class="language-bash">sudo kubeadm init --apiserver-advertise-address=10.0.16.71 --v=5</code></pre>
<pre><code class="language-bash">Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run &quot;kubectl apply -f [podnetwork].yaml&quot; with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.16.71:6443 --token zbgtjh.d14vq735od97url0 \
    --discovery-token-ca-cert-hash sha256:28f7a78b3b913ceec0f039f9f8a38ff0770f473d34698c4b2f055df822f0d9fe</code></pre>
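<p>참고로, 일반 사용자 계정에서 kubectl을 사용하려면 <code>kubeadm init</code> 출력에 함께 안내되는 다음 단계를 수행한다:</p>
<pre><code class="language-bash">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</code></pre>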
<ul>
<li><p>마스터 노드 조인시</p>
<pre><code class="language-bash">  ## 기존 마스터에서 certificate-key 생성 (제어 플레인 인증서를 업로드하고 키를 출력)
  kubeadm init phase upload-certs --upload-certs

  ## join할 마스터 노드에서 certificate-key 추가
  kubeadm join 10.0.16.71:6443 --token zbgtjh.d14vq735od97url0 \
      --discovery-token-ca-cert-hash sha256:28f7a78b3b913ceec0f039f9f8a38ff0770f473d34698c4b2f055df822f0d9fe --control-plane --certificate-key &lt;certificate-key&gt;</code></pre>
</li>
<li><p>워커 노드 조인시</p>
<pre><code class="language-bash">  kubeadm join 10.0.16.71:6443 --token zbgtjh.d14vq735od97url0 \
      --discovery-token-ca-cert-hash sha256:28f7a78b3b913ceec0f039f9f8a38ff0770f473d34698c4b2f055df822f0d9fe</code></pre>
</li>
</ul>
<pre><code class="language-bash">[root@k8s-master-2 ~]# k get no
NAME           STATUS     ROLES           AGE     VERSION
k8s-master-2   NotReady   control-plane   8m35s   v1.30.10

[root@k8s-master-2 ~]# k get po -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-55cb58b774-87sng               0/1     Pending   0          8m29s
kube-system   coredns-55cb58b774-n8zcw               0/1     Pending   0          8m29s
kube-system   etcd-k8s-master-2                      1/1     Running   3          8m37s
kube-system   kube-apiserver-k8s-master-2            1/1     Running   3          8m35s
kube-system   kube-controller-manager-k8s-master-2   1/1     Running   3          8m36s
kube-system   kube-proxy-qhls9                       1/1     Running   0          8m29s
kube-system   kube-scheduler-k8s-master-2            1/1     Running   3          8m39s

[root@k8s-master-2 ~]# k get no
NAME           STATUS     ROLES           AGE     VERSION
k8s-master-2   NotReady   control-plane   9m18s   v1.30.10
k8s-worker-1   NotReady   &lt;none&gt;          6s      v1.30.10
</code></pre>
<ul>
<li>레이블 추가</li>
</ul>
<pre><code class="language-bash">[root@k8s-master-2 ~]# k label node k8s-worker-1 node-role.kubernetes.io/worker-1=worker-1
node/k8s-worker-1 labeled
[root@k8s-master-2 ~]# k get no
NAME           STATUS     ROLES           AGE     VERSION
k8s-master-2   NotReady   control-plane   13m     v1.30.10
k8s-worker-1   NotReady   worker-1        3m56s   v1.30.10 # 반영됨</code></pre>
<ul>
<li><code>CoreDNS</code>가 <code>Pending</code> 상태인 이유는 네트워크 플러그인(예: Calico 또는 Flannel 등)이 제대로 설치되지 않았기 때문</li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/41e0af54-7361-49ee-a83f-0224cd7ec9a7/image.png" alt=""></p>
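<p>예를 들어 Flannel을 설치하면 CoreDNS가 <code>Running</code> 상태로 바뀐다 (아래는 공식 매니페스트를 적용하는 예시이며, URL과 버전은 환경에 맞게 확인이 필요하다):</p>
<pre><code class="language-bash">kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# CNI 적용 후 CoreDNS 상태가 Running으로 바뀌는지 확인
kubectl get po -n kube-system -w</code></pre>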
<br>


<h3 id="디렉토리--파일-정리">디렉토리 &amp; 파일 정리</h3>
<blockquote>
<ul>
<li><code>/etc/resolv.conf</code> : 시스템에서 <strong>DNS(Domain Name System) 설정</strong>을 관리하는 <strong>파일</strong></li>
</ul>
</blockquote>
<ul>
<li><code>/sys/fs/cgroup</code> : <strong>cgroup</strong>(control group) 관련 파일들이 위치한 디렉토리로, 리소스 제한, 계층 구조 등을 설정하는 파일 시스템을 제공하는 <strong>디렉토리</strong></li>
<li><code>/etc/default/grub</code> : 시스템 부팅 시 운영 체제를 로드하는 <strong>GRUB 부트 로더</strong>의 동작(커널 부팅 옵션 등)을 설정하는 <strong>파일</strong></li>
<li><code>/etc/modules-load.d/k8s.conf</code>: <strong>리눅스 커널 모듈을 자동으로 로드</strong>하는 설정 <strong>파일</strong><br>  → Kubernetes가 정상적으로 동작하기 위해 필요한 커널 모듈을 자동으로 로드하는 <strong>파일</strong></li>
<li><code>/etc/sysctl.d/k8s.conf</code> : Kubernetes 클러스터를 운영할 때 <strong>네트워크와 관련된 커널 설정</strong>을 최적화하는 역할의 <strong>파일</strong>
  → Kubernetes가 정상적으로 동작하도록 <strong>커널 매개변수(sysctl 설정)를 적용하는 파일</strong></li>
<li><code>/etc/security/limits.conf</code> : 리눅스에서 사용자별 또는 그룹별로 시스템 리소스 제한을 설정하는 <strong>파일</strong></li>
<li><code>/usr/local/lib/systemd/system</code> : <code>systemd</code>의 사용자 정의 서비스 파일을 저장하는 <strong>디렉토리</strong></li>
<li><code>/etc/containerd/config.toml</code> : <code>containerd</code>의 주요 설정 파일로, <code>containerd</code>의 동작 방식을 제어하는 여러 가지 설정을 포함</li>
</ul>
<br>


<h1 id="번외--ca인증서-추가-설치">번외 : CA인증서 추가 설치</h1>
<h3 id="k8s가-ca인증서를-사용하는-이유">k8s가 CA인증서를 사용하는 이유</h3>
<ul>
<li>k8s는 내부적으로 TLS인증을 사용해서 클러스터 내에서 통신을 보호한다. kubeadm init을 하면 /etc/kubernetes/pki/ 경로에 자동으로 CA인증서가 생성된다.</li>
<li>API Server, Kubelet, Controller Manager, Scheduler등의 구성요소간 TLS인증을 제공한다.</li>
</ul>
<h3 id="따로-ca인증서를-적용해야-하는-이유">따로 CA인증서를 적용해야 하는 이유</h3>
<ul>
<li><p>보통 Kubeadm이 자동으로 CA를 생성하지만 기업 환경이나 보안 요구사항이 높은 경우에는 직접 CA를 생성해야하는 경우가 있다.</p>
<ol>
<li><p>외부 CA 사용</p>
<p> 기업에서 이미 신뢰하는 CA를 사용해서 보안정책을 통제하는 경우 </p>
</li>
<li><p>고가용성 HA 클러스터 구축</p>
<p> 각 마스터 노드가 자체적으로 CA를 생성하면 인증이 일관되지 않기 때문에 공통된 CA를 만들어서 마스터에게 동일하게 적용시켜야하기 때문</p>
</li>
<li><p>클러스터 재설치 혹은 마스터 노드 교체를 해야할때</p>
<p> 클러스터를 재설치하거나 마스터 노드를 교체할때 인증서가 필요함</p>
</li>
<li><p>사용자 인증</p>
<p> RBAC정책을 적용하는 경우 사용자 또는 서비스 계정에 맞는 인증서를 발급해야함</p>
</li>
<li><p>서비스간 TLS 보안 통신</p>
<p> Pod간의 보안 통신을 암호화하기 위해 TLS를 적용할때 별도의 CA를 사용할 수 있다. </p>
</li>
</ol>
</li>
</ul>
<pre><code class="language-bash">/etc/kubernetes/pki/
├── ca.crt  # 클러스터의 루트 CA 인증서
├── ca.key  # 클러스터의 루트 CA 개인 키
├── apiserver.crt  # Kube API 서버 인증서
├── apiserver.key  # Kube API 서버 개인 키
├── apiserver-kubelet-client.crt  # Kubelet이 API 서버와 통신할 때 사용하는 인증서
├── apiserver-kubelet-client.key
├── front-proxy-ca.crt  # 프록시 CA 인증서
├── front-proxy-ca.key
├── front-proxy-client.crt  # 프록시 클라이언트 인증서
├── front-proxy-client.key
├── etcd/
│   ├── ca.crt  # etcd CA 인증서
│   ├── ca.key
│   ├── server.crt  # etcd 서버 인증서
│   ├── server.key
│   ├── peer.crt  # etcd 노드 간 통신 인증서
│   ├── peer.key
</code></pre>
<ul>
<li>인증서 생성방법</li>
</ul>
<pre><code class="language-bash">[root@k8s-master-2 yum.repos.d]# mkdir -p /etc/kubernetes/pki
[root@k8s-master-2 yum.repos.d]# cd /etc/kubernetes/pki
[root@k8s-master-2 pki]# openssl genrsa -out ca.key 2048
[root@k8s-master-2 pki]# openssl req -x509 -new -nodes -key ca.key -subj &quot;/CN=kubernetes-ca&quot; -days 10000 -out ca.crt

[root@k8s-master-2 pki]# cd /etc/kubernetes/pki
[root@k8s-master-2 pki]# ls
ca.crt  ca.key

/etc/kubernetes/pki/ca.crt  # CA 인증서
/etc/kubernetes/pki/ca.key  # CA 개인 키</code></pre>
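<p>생성된 CA 인증서의 내용을 확인해 보는 예시:</p>
<pre><code class="language-bash">openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -dates
# subject=CN = kubernetes-ca 와 유효 기간(notBefore/notAfter)이 출력되면 정상</code></pre>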
<p>수동으로 ca.key와 ca.crt를 생성한 후 kubeadm init을 실행하면 k8s의 기본 동작 방식이 달라진다. 원래는 <strong>init 시에 CA 인증서가 자동 생성</strong>되지만, 해당 경로에 CA 파일이 이미 존재하면 init은 기존 CA 파일을 그대로 사용한다.</p>
<p>CA는 클러스터내 모든 컴포넌트가 서로 신뢰하도록 하기위해 사용됨! 
(kube-apiserver, kubelet, controller-manager, scheduler 등)</p>
]]></description>
        </item>
        <item>
            <title><![CDATA[k8s Pod 생성 과정 정리]]></title>
            <link>https://velog.io/@jupiter-j/k8s-Pod-%EC%83%9D%EC%84%B1-%EA%B3%BC%EC%A0%95-%EC%A0%95%EB%A6%AC</link>
            <guid>https://velog.io/@jupiter-j/k8s-Pod-%EC%83%9D%EC%84%B1-%EA%B3%BC%EC%A0%95-%EC%A0%95%EB%A6%AC</guid>
            <pubDate>Mon, 03 Mar 2025 11:17:59 GMT</pubDate>
            <description><![CDATA[<h1 id="k8s-pod-생성-과정-정리">k8s Pod 생성 과정 정리</h1>
<br>

<p><img src="https://velog.velcdn.com/images/jupiter-j/post/a27fd464-d88d-44ae-8f80-d4194784e0f9/image.png" alt=""></p>
<ol>
<li><p><strong>사용자가 Deployment, Pod 생성 요청</strong> (kubectl, API call)</p>
</li>
<li><p><strong>API Server가 요청을 받아 etcd에 저장</strong> (클러스터 상태 반영)</p>
</li>
<li><p>Scheduler가 적절한 노드를 선택하여 Pod 배치</p>
</li>
<li><p>kubelet이 해당 노드에서 Pod 실행 요청 처리 (이 요청은 gRPC 프로토콜을 통해 전달됨)</p>
</li>
<li><p>CRI(Container Runtime Interface)를 통해 containerd와 통신</p>
<p> <strong>CRI</strong>는 Kubernetes가 <strong>Container Runtime</strong>(여기서는 <strong>containerd</strong>)와 통신할 수 있도록 해주는 인터페이스. <code>CRI-containerd</code>는 이 인터페이스를 통해 <strong>containerd</strong>에 요청을 전달하여 컨테이너 실행을 요청</p>
</li>
<li><p>containerd는 먼저 <strong>sandbox container</strong>(보통 <code>pause</code> 컨테이너)를 생성하여 Pod의 네트워크 및 기본 환경을 설정</p>
<ul>
<li><strong>Pod 내부 컨테이너들이 동일한 네트워크를 쓰게 하려면, 네트워크 네임스페이스를 공유해야 함. 이 역할을 하는 것이 바로 <code>sandbox</code> 컨테이너</strong></li>
<li><strong>Pod 내부 컨테이너들은 이 sandbox 컨테이너의 네트워크를 공유</strong>하게됨</li>
</ul>
</li>
<li><p>CNI(Container Network Interface)를 통해 네트워크 설정 (sandbox container에 적용)</p>
<ul>
<li><strong>CNI</strong>는 네트워크 설정을 담당하는 플러그인. <strong>sandbox 컨테이너</strong>에 IP 할당, 네트워크 라우팅 등을 설정하여 <strong>Pod 내부의 컨테이너들이 동일한 네트워크를 공유하도록 함</strong></li>
</ul>
</li>
<li><p>containerd가 OCI Runtime(runc)을 사용하여 애플리케이션 컨테이너 실행</p>
<ul>
<li><strong>runc</strong>는 컨테이너를 실제로 실행하고 관리하는 역할을 함</li>
</ul>
</li>
<li><p>cgroup, namespace를 사용해 컨테이너 격리</p>
</li>
<li><p>모든 컨테이너가 정상적으로 실행되면 Pod가 Ready 상태가 됨</p>
</li>
</ol>
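<p>노드에서 containerd를 직접 조회해 sandbox(pause) 컨테이너와 애플리케이션 컨테이너를 확인해 볼 수도 있다 (crictl이 containerd 소켓을 바라보도록 설정되어 있다고 가정한 예시):</p>
<pre><code class="language-bash"># Pod 단위(sandbox) 목록
crictl pods

# 개별 컨테이너 목록
crictl ps

# pause(sandbox) 이미지 확인
crictl images | grep pause</code></pre>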
<br>

<table>
<thead>
<tr>
<th><strong>컴포넌트</strong></th>
<th><strong>역할 및 기능</strong></th>
</tr>
</thead>
<tbody><tr>
<td><strong>kubelet</strong></td>
<td>- 해당 노드에서 Pod 실행을 관리하는 Kubernetes 에이전트- API Server로부터 Pod 실행 요청을 받고, 컨테이너 런타임(containerd 등)에게 컨테이너 실행 요청을 전달</td>
</tr>
<tr>
<td><strong>CRI (Container Runtime Interface)</strong></td>
<td>- kubelet과 컨테이너 런타임(containerd, CRI-O 등) 간의 표준 API 인터페이스- Kubernetes가 다양한 컨테이너 런타임을 지원할 수 있도록 설계됨</td>
</tr>
<tr>
<td><strong>CRI-containerd</strong></td>
<td>- containerd가 CRI 요청을 처리할 수 있도록 지원하는 플러그인- kubelet이 CRI를 통해 컨테이너를 실행할 수 있도록 함</td>
</tr>
<tr>
<td><strong>OCI (Open Container Initiative)</strong></td>
<td>- 컨테이너 런타임 및 이미지 포맷을 표준화하는 조직- runc, crun 같은 실행 환경을 제공</td>
</tr>
<tr>
<td><strong>CNI (Container Network Interface)</strong></td>
<td>- 컨테이너의 네트워크 설정을 담당하여 Pod 간 통신을 가능하게 함- <code>Calico</code>, <code>Flannel</code> 같은 네트워크 플러그인 사용</td>
</tr>
<tr>
<td><strong>gRPC</strong></td>
<td>- kubelet과 containerd가 통신할 때 사용하는 RPC(Remote Procedure Call) 프로토콜</td>
</tr>
<tr>
<td><strong>containerd client</strong></td>
<td>- kubelet이 containerd에게 컨테이너 생성/삭제 요청을 하는 클라이언트</td>
</tr>
<tr>
<td><strong>cgroup</strong></td>
<td>- 컨테이너의 CPU, 메모리 등 리소스를 제한 및 관리</td>
</tr>
<tr>
<td><strong>namespace</strong></td>
<td>- 컨테이너마다 격리된 실행 환경을 제공하여 다른 컨테이너와 분리</td>
</tr>
<tr>
<td><strong>sandbox container</strong></td>
<td>- Pod 내 네트워크 및 보안 환경을 설정하는 컨테이너 (<code>pause</code> 컨테이너)</td>
</tr>
</tbody></table>
]]></description>
        </item>
        <item>
            <title><![CDATA[Virtualbox-SSH접속-포트포워딩]]></title>
            <link>https://velog.io/@jupiter-j/Virtualbox-SSH%EC%A0%91%EC%86%8D-%ED%8F%AC%ED%8A%B8%ED%8F%AC%EC%9B%8C%EB%94%A9</link>
            <guid>https://velog.io/@jupiter-j/Virtualbox-SSH%EC%A0%91%EC%86%8D-%ED%8F%AC%ED%8A%B8%ED%8F%AC%EC%9B%8C%EB%94%A9</guid>
            <pubDate>Sun, 02 Feb 2025 08:21:24 GMT</pubDate>
            <description><![CDATA[<blockquote>
<ul>
<li>사용하고 있는 가상머신: Virtualbox</li>
</ul>
</blockquote>
<ul>
<li>RedHat9.4v</li>
</ul>
<br>
<br>

<h1 id="1-ssh-설치">1. SSH 설치</h1>
<hr>
<blockquote>
<p>로컬 PC와 VM이 SSH로 통신하려면 22번포트가 열려있어야 한다.</p>
</blockquote>
<ul>
<li><p>설치 명령어</p>
<pre><code># RedHat 계열(dnf) 기준
dnf install openssh-server
dnf install net-tools

# (Debian/Ubuntu라면 apt-get update &amp;&amp; apt-get install openssh-server net-tools)</code></pre><br>
</li>
<li><p>설치되어있는지 확인 <strong>rpm -qa | grep ssh</strong></p>
<pre><code>[root@vbox ~]# rpm -qa | grep ssh
libssh-config-0.10.4-13.el9.noarch
libssh-0.10.4-13.el9.x86_64
openssh-8.7p1-43.el9.x86_64
openssh-clients-8.7p1-43.el9.x86_64
openssh-server-8.7p1-43.el9.x86_64</code></pre><br>


</li>
</ul>
<ul>
<li>설치된 ssh 위치 확인 <strong>which ssh</strong><pre><code>[root@vbox ~]# which ssh
/usr/bin/ssh</code></pre></li>
</ul>
<br>


<ul>
<li><p>ssh config파일에서 설정 추가 &amp; 주석 해제시키기</p>
</li>
<li><p><em>vi /etc/ssh/sshd_config*</em></p>
<pre><code>[root@vbox ~]# vi /etc/ssh/sshd_config
21 Port 22  ##주석 해제
22 #AddressFamily any
23 #ListenAddress 0.0.0.0
..
39 #LoginGraceTime 2m
40 PermitRootLogin prohibit-password  ##주석해제
41 #StrictModes yes
42 #MaxAuthTries 6</code></pre></li>
<li><p>22포트 Listen 중인지 확인하기 <strong>netstat -na | grep tcp | grep 22</strong></p>
<pre><code>[root@vbox ~]# netstat -na | grep tcp | grep 22
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     </code></pre></li>
</ul>
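<p>sshd_config를 수정한 뒤에는 서비스를 재시작하고 부팅 시 자동 시작되도록 설정해 둔다 (예시):</p>
<pre><code>systemctl enable --now sshd
systemctl restart sshd</code></pre>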
<br>

<ul>
<li>sshd 상태 확인 <strong>systemctl status sshd</strong>
아래 server listening on port 22가 보여야함 <pre><code>[root@vbox ~]# systemctl status sshd
</code></pre></li>
</ul>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/61e2ce59-5db9-41b3-8767-e95978401ede/image.png" alt=""></p>
<br>
<br>

<h1 id="2-가상머신-포트포워딩">2. 가상머신 포트포워딩</h1>
<hr>
<blockquote>
<h3 id="왜-포트-포워딩이-필요할까">왜 포트 포워딩이 필요할까?</h3>
</blockquote>
<ul>
<li><strong>NAT 모드의 네트워크 제한:</strong><br>NAT 모드에서는 VM이 내부 네트워크에 속해 있고, 호스트 시스템(로컬 PC)이 인터넷 또는 외부 네트워크와 통신할 수 있다. 하지만 외부 시스템에서 VM에 직접 접근할 수는 없다. NAT 모드는 VM에 고유한 공인 IP 주소를 할당하지 않고, 호스트 시스템의 IP 주소를 사용하여 네트워크 통신을 하기 때문이다.</li>
<li><strong>외부 접속을 위한 포트 연결:</strong><br>포트 포워딩을 사용하면, 호스트 시스템의 특정 포트(예: 2222번)를 통해 들어오는 트래픽을 VM의 22번 포트(SSH 포트)로 전달할 수 있다. 이렇게 하면 로컬 PC에서 VM의 SSH 포트에 접근할 수 있게 된다.</li>
</ul>
<br>

<p>VM 설정에서 Settings -&gt; Expert 모드를 선택해야 네트워크에서 포트포워딩을 추가할 수 있다.</p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/5e24907d-cf63-4da5-aee1-32d566dd91e5/image.png" alt=""></p>
<ul>
<li>호스트 IP: VM이 실행되는 실제 컴퓨터의 IP. 예를 들어, 로컬 PC 또는 서버에서 VirtualBox, VMware 등의 가상화 소프트웨어가 실행되는 경우 이 PC의 IP 주소가 호스트 IP이다.<ul>
<li>확인 방법: 로컬 PC에서 <code>ipconfig</code>(Windows) 또는 <code>ifconfig</code>, <code>ip a</code>(Linux/macOS) 명령어를 통해 호스트 IP를 확인</li>
</ul>
</li>
</ul>
<blockquote>
<p>명령 프롬프트 -&gt; ipconfig -&gt; 무선 LAN - IPv4 주소</p>
</blockquote>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/eb2fddf0-982f-4f01-9484-2be10890112d/image.png" alt=""></p>
<br>

<h2 id="xterm-접속">Xterm 접속</h2>
<hr>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/a78ff516-661f-4dd0-9121-58378db4f0a3/image.png" alt=""></p>
<br>

<p>가상머신과 Xterm 간 SSH 접속이 잘 된 것을 확인할 수 있다.</p>
<p><img src="https://velog.velcdn.com/images/jupiter-j/post/e68f57f8-c224-4023-adf3-79aad776b9f8/image.png" alt=""></p>]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes Udemy MOCK -2]]></title>
            <link>https://velog.io/@jupiter-j/Kubernetes-Udemy-MOCK-2</link>
            <guid>https://velog.io/@jupiter-j/Kubernetes-Udemy-MOCK-2</guid>
            <pubDate>Sat, 10 Aug 2024 14:28:00 GMT</pubDate>
            <description><![CDATA[<h1 id="기본설정">기본설정</h1>
<pre><code>source &lt;(kubectl completion bash) # bash-completion 패키지를 먼저 설치한 후, bash의 자동 완성을 현재 셸에 설정한다
echo &quot;source &lt;(kubectl completion bash)&quot; &gt;&gt; ~/.bashrc # 자동 완성을 bash 셸에 영구적으로 추가한다


alias k=kubectl
complete -o default -F __start_kubectl k</code></pre><h2 id="backup">backup</h2>
<h4 id="take-a-backup-of-the-etcd-cluster-and-save-it-to-optetcd-backupdb">Take a backup of the etcd cluster and save it to /opt/etcd-backup.db.</h4>
<blockquote>
<p>kubernetes doc - etcdctl backup
<a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/">https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/</a></p>
</blockquote>
<ul>
<li>built-in snapshot<pre><code>ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db</code></pre></li>
</ul>
<pre><code>controlplane ~ ➜  #ETCDCTL_API=3 etcdctl etcdctl --endpoints $ENDPOINT snapshot save snapshot.db

controlplane ~ ➜  ETCDCTL_API=3 etcdctl snapshot save -h
NAME:
        snapshot save - Stores an etcd node backend snapshot to a given file

USAGE:
        etcdctl snapshot save &lt;filename&gt;

GLOBAL OPTIONS:
      --cacert=&quot;&quot;                               verify certificates of TLS-enabled secure servers using this CA bundle
      --cert=&quot;&quot;                                 identify secure client using this TLS certificate file
      --command-timeout=5s                      timeout for short running command (excluding dial timeout)
      --debug[=false]                           enable client-side debug logging
      --dial-timeout=2s                         dial timeout for client connections
  -d, --discovery-srv=&quot;&quot;                        domain name to query for SRV records describing cluster endpoints
      --endpoints=[127.0.0.1:2379]              gRPC endpoints
      --hex[=false]                             print byte strings as hex encoded strings
      --insecure-discovery[=true]               accept insecure SRV records describing cluster endpoints
      --insecure-skip-tls-verify[=false]        skip server certificate verification
      --insecure-transport[=true]               disable transport security for client connections
      --keepalive-time=2s                       keepalive time for client connections
      --keepalive-timeout=6s                    keepalive timeout for client connections
      --key=&quot;&quot;                                  identify secure client using this TLS key file
      --user=&quot;&quot;                                 username[:password] for authentication (prompt if password is not supplied)
  -w, --write-out=&quot;simple&quot;                      set the output format (fields, json, protobuf, simple, table)

</code></pre><p>매개변수를 파악하기 위해 서버 매니페스트 파일을 살펴본다 </p>
<pre><code>controlplane ~ ➜  cat /etc/kubernetes/manifests/etcd.yaml | grep file
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    seccompProfile:

✅ 엔드포인트를 확인
controlplane ~ ➜  vi /etc/kubernetes/manifests/etcd.yaml
</code></pre><p><img src="https://velog.velcdn.com/images/jupiter-j/post/fb362c13-748d-4384-805d-ca60324a56a1/image.png" alt=""></p>
<pre><code>controlplane ~ ✖ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 snapshot save snapshot.db --cacert=/etc/kubernetes/pki/etcd/ca.crt \
&gt; --cert=/etc/kubernetes/pki/etcd/server.crt \
&gt; --key=/etc/kubernetes/pki/etcd/server.key
Snapshot saved at snapshot.db


## 정답
export ETCDCTL_API=3
etcdctl snapshot save --endpoints https://[127.0.0.1]:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key  /opt/etcd-backup.db</code></pre><h2 id="2-pod-pv-생성">2. Pod Pv 생성</h2>
<blockquote>
<p>emptyDir
<a href="https://kubernetes.io/docs/concepts/storage/volumes/">https://kubernetes.io/docs/concepts/storage/volumes/</a></p>
</blockquote>
<pre><code>volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi</code></pre><br>
<p>emptyDir은 Kubernetes에서 사용하는 Volume 타입으로, Pod이 생성될 때 빈 디렉토리로 시작하고, Pod의 생명 주기 동안만 존재합니다. Pod이 삭제되면 emptyDir Volume에 저장된 데이터도 함께 삭제됩니다. 이 Volume은 컨테이너 간 데이터 공유 및 임시 데이터 저장에 유용합니다. 데이터는 노드의 로컬 디스크에 저장됩니다.</p>


<h4 id="create-a-pod-called-redis-storage-with-image-redisalpine-with-a-volume-of-type-emptydir-that-lasts-for-the-life-of-the-pod">Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir that lasts for the life of the Pod.</h4>
<pre><code>## pod yaml 생성
controlplane ~ ✖ k run redis-storage --image=redis:alpine --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis-storage
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

controlplane ~ ➜  k run redis-storage --image=redis:alpine --dry-run=client -o yaml &gt; redis.yaml

## pv 추가
controlplane ~ ➜  vi redis.yaml

controlplane ~ ➜  cat redis.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis-storage
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    resources: {}
    volumeMounts: ⭐️
    - mountPath: /data/redis
      name: cache-volume  
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes: ⭐️
  - name: cache-volume
    emptyDir: {} 
status: {}

controlplane ~ ➜  k create -f redis.yaml
pod/redis-storage created

controlplane ~ ➜  k get po
NAME            READY   STATUS              RESTARTS   AGE
redis-storage   0/1     ContainerCreating   0          3s

controlplane ~ ➜  k describe po redis-storage 
Name:             redis-storage
Namespace:        default
Priority:         0
Service Account:  default
Node:             node01/192.21.81.12
Start Time:       Sat, 10 Aug 2024 14:23:34 +0000
Labels:           run=redis-storage
Annotations:      &lt;none&gt;
Status:           Running
IP:               10.244.192.1
IPs:
  IP:  10.244.192.1
Containers:
  redis-storage:
    Container ID:   containerd://4ed7364200a01b46a5464056b5856c3c33deda24c2fb0afd1f368db98031cd34
    Image:          redis:alpine
    Image ID:       docker.io/library/redis@sha256:eaea8264f74a95ea9a0767c794da50788cbd9cf5223951674d491fa1b3f4f2d2
    Port:           &lt;none&gt;
    Host Port:      &lt;none&gt;
    State:          Running
      Started:      Sat, 10 Aug 2024 14:23:37 +0000
    Ready:          True
    Restart Count:  0
    Environment:    &lt;none&gt;
    Mounts:
      /data/redis from cache-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbplj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  cache-volume:
    Type:      ✅ EmptyDir (a temporary directory that shares a pod&#39;s lifetime)
    Medium:     
    SizeLimit:  &lt;unset&gt;
  kube-api-access-pbplj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       &lt;nil&gt;
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              &lt;none&gt;
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  80s   default-scheduler  Successfully assigned default/redis-storage to node01
  Normal  Pulling    79s   kubelet            Pulling image &quot;redis:alpine&quot;
  Normal  Pulled     77s   kubelet            Successfully pulled image &quot;redis:alpine&quot; in 1.388s (1.388s including waiting). Image size: 17173585 bytes.
  Normal  Created    77s   kubelet            Created container redis-storage
  Normal  Started    77s   kubelet            Started container redis-storage
</code></pre><h2 id="3-pv-보안-설정">3. PV 보안 설정</h2>
<blockquote>
<p>docs: security capability
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/hello-app:2.0
    securityContext:
      capabilities:
        add: [&quot;NET_ADMIN&quot;, &quot;SYS_TIME&quot;]</code></pre><pre><code>controlplane ~ ➜  k run super-user-pod --image=busybox:1.28 --command --dry-run=client -o yaml -- sleep 4800
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: super-user-pod
  name: super-user-pod
spec:
  containers:
  - command:
    - sleep
    - &quot;4800&quot;
    image: busybox:1.28
    name: super-user-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
controlplane ~ ➜  k run super-user-pod --image=busybox:1.28 --command --dry-run=client -o yaml -- sleep 4800 &gt; busy.yaml
controlplane ~ ➜  vi busy.yaml
controlplane ~ ➜  cat busy.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: super-user-pod
  name: super-user-pod
spec:
  containers:
  - command:
    - sleep
    - &quot;4800&quot;
    image: busybox:1.28
    name: super-user-pod
    resources: {}
    securityContext: ✅ 추가 Sys_time
      capabilities:
        add: [&quot;SYS_TIME&quot;]  
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
controlplane ~ ➜  k create -f busy.yaml
pod/super-user-pod created
controlplane ~ ➜  k get po
NAME             READY   STATUS    RESTARTS   AGE
redis-storage    1/1     Running   0          8m12s
super-user-pod   1/1     Running   0          3s</code></pre><br>

<h2 id="4-pv-mount">4. PV mount</h2>
<h4 id="a-pod-definition-file-is-created-at-rootckause-pvyaml-make-use-of-this-manifest-file-and-mount-the-persistent-volume-called-pv-1-ensure-the-pod-is-running-and-the-pv-is-bound">A pod definition file is created at /root/CKA/use-pv.yaml. Make use of this manifest file and mount the persistent volume called pv-1. Ensure the pod is running and the PV is bound.</h4>
<p>mountPath: /data
persistentVolumeClaim Name: my-pvc </p>
<blockquote>
<p>이 작업은 /root/CKA/use-pv.yaml 파일을 사용하여 파드를 정의하고, pv-1이라는 퍼시스턴트 볼륨을 /data 경로에 마운트하도록 하는 것. 또한, 파드가 정상적으로 실행 중인지, PVC가 퍼시스턴트 볼륨에 바인딩되었는지 확인하는 것이 중요함
<br></p>
</blockquote>
<h3 id="pv-pvc">PV, PVC</h3>
<ul>
<li>Persistent Volume (PV): 관리자가 사전에 프로비저닝한 실제 스토리지를 나타냅니다. Kubernetes 클러스터에서 사용할 수 있는 스토리지 리소스.</li>
<li>Persistent Volume Claim (PVC): 사용자가 필요로 하는 스토리지 요구 사항을 정의한 요청서. PVC가 제출되면, Kubernetes는 요구 사항에 맞는 PV를 할당(바인딩)한다.</li>
<li>따라서, PV는 실제 스토리지 리소스이고, PVC는 그 스토리지를 요청하는 선언이다</li>
</ul>
<ol>
<li>pvc 생성</li>
<li>포드 내의 볼륨 마운트로 설정 </li>
<li>볼륨을 볼륨 마운트로 설정</li>
</ol>
<blockquote>
<h3 id="pvc-doc-persistentbolumeclaims">PVC doc: PersistentBolumeClaims</h3>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims</a></p>
</blockquote>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi</code></pre><pre><code>1. pvc 생성
controlplane ~ ➜  cat /root/CKA/use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

controlplane ~ ➜  k get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-1   10Mi       RWO            Retain           Available                          &lt;unset&gt;                          4m51s

controlplane ~ ➜  k get pvc
No resources found in default namespace.

controlplane ~ ➜  vi pvc.yaml

controlplane ~ ➜  cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

controlplane ~ ➜  k create -f pvc.yaml
persistentvolumeclaim/my-pvc created

controlplane ~ ➜  k get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
my-pvc   Bound    pv-1     10Mi       RWO                           &lt;unset&gt;                 3s
</code></pre><blockquote>
<p>PV 사양 추가</p>
</blockquote>
<pre><code>      volumeMounts:
      - mountPath: &quot;/var/www/html&quot;
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim</code></pre><pre><code>2. 포드 내의 볼륨 마운트로 설정 
controlplane ~ ➜  cat /root/CKA/use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

controlplane ~ ➜  vi /root/CKA/use-pv.yaml

controlplane ~ ➜  vi /root/CKA/use-pv.yaml

controlplane ~ ➜  cat /root/CKA/use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
    volumeMounts:
      - mountPath: &quot;/data&quot;
        name: mypd  
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: my-pvc  
status: {}

controlplane ~ ➜  k create -f /root/CKA/use-pv.yaml
pod/use-pv created

controlplane ~ ➜  k get po
NAME             READY   STATUS    RESTARTS   AGE
redis-storage    1/1     Running   0          28m
super-user-pod   1/1     Running   0          20m
use-pv           1/1     Running   0          5s</code></pre><h2 id="5-deploy-업데이트">5. Deploy 업데이트</h2>
<h4 id="create-a-new-deployment-called-nginx-deploy-with-image-nginx116-and-1-replica-next-upgrade-the-deployment-to-version-117-using-rolling-update">Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update.</h4>
<pre><code>Usage:
  kubectl create deployment NAME --image=image -- [COMMAND] [args...] [options]

Use &quot;kubectl options&quot; for a list of global command-line options (applies to all
commands).

controlplane ~ ➜  k create deploy nginx-deploy --image=nginx:1.16 --replicas=1
deployment.apps/nginx-deploy created

controlplane ~ ➜  k get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   0/1     1            0           5s
</code></pre><blockquote>
<p>이미지 버전 업데이트  <code>k set image --help</code></p>
</blockquote>
<pre><code>controlplane ~ ➜  k set image --help
Update existing container image(s) of resources.
 Possible resources include (case insensitive):
        pod (po), replicationcontroller (rc), deployment (deploy), daemonset
(ds), statefulset (sts), cronjob (cj), replicaset (rs)
Examples:
  # Set a deployment&#39;s nginx container image to &#39;nginx:1.9.1&#39;, and its busybox
container image to &#39;busybox&#39;
  kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1</code></pre><p>도움말중  <code>kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1</code> 명령어를 사용</p>
<pre><code>controlplane ~ ➜  # kubectl set image deployment/nginx-deploy nginx=nginx:1.9.1

controlplane ~ ➜  k describe deploy nginx-deploy 
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Sun, 11 Aug 2024 08:19:00 +0000
Labels:                 app=nginx-deploy
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx-deploy
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-deploy
  Containers:
   nginx:
    Image:         nginx:1.16 ✅
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Environment:   &lt;none&gt;
    Mounts:        &lt;none&gt;
  Volumes:         &lt;none&gt;
  Node-Selectors:  &lt;none&gt;
  Tolerations:     &lt;none&gt;
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  &lt;none&gt;
NewReplicaSet:   nginx-deploy-858fb84d4b (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m36s  deployment-controller  Scaled up replica set nginx-deploy-858fb84d4b to 1

## 🛑 명령어를 맞게 수정
controlplane ~ ➜  kubectl set image deployment/nginx-deploy nginx=nginx:1.17
deployment.apps/nginx-deploy image updated

controlplane ~ ➜  k describe deploy nginx-deploy 
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Sun, 11 Aug 2024 08:19:00 +0000
Labels:                 app=nginx-deploy
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx-deploy
Replicas:               1 desired | 1 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-deploy
  Containers:
   nginx:
    Image:         nginx:1.17 ✅
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Environment:   &lt;none&gt;
    Mounts:        &lt;none&gt;
  Volumes:         &lt;none&gt;
  Node-Selectors:  &lt;none&gt;
  Tolerations:     &lt;none&gt;
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  nginx-deploy-858fb84d4b (1/1 replicas created)
NewReplicaSet:   nginx-deploy-58f87d49 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m25s  deployment-controller  Scaled up replica set nginx-deploy-858fb84d4b to 1
  Normal  ScalingReplicaSet  4s     deployment-controller  Scaled up replica set nginx-deploy-58f87d49 to 1</code></pre><h2 id="6-사용자-인증서-생성">6. 사용자 인증서 생성</h2>
<h4 id="create-a-new-user-called-john-grant-him-access-to-the-cluster-john-should-have-permission-to-create-list-get-update-and-delete-pods-in-the-development-namespace--the-private-key-exists-in-the-location-rootckajohnkey-and-csr-at-rootckajohncsr">Create a new user called john. Grant him access to the cluster. John should have permission to create, list, get, update and delete pods in the development namespace . The private key exists in the location: /root/CKA/john.key and csr at /root/CKA/john.csr.</h4>
<p>Important Note: As of kubernetes 1.19, the CertificateSigningRequest object expects a signerName.
Please refer the documentation to see an example. The documentation tab is available at the top right of terminal.</p>
<blockquote>
<p>요약</p>
</blockquote>
<ul>
<li>CSR 생성 및 승인: signerName을 포함한 CSR을 생성하고 승인하여 john 사용자의 클라이언트 인증서를 발급받습니다.</li>
<li>사용자 추가: 발급된 인증서를 사용해 kubectl에 john 사용자 정보를 추가합니다.</li>
<li>Role 및 RoleBinding 설정: john 사용자가 development 네임스페이스에서 파드를 관리할 수 있도록 Role 및 RoleBinding을 설정합니다.</li>
<li>이 작업을 통해 john 사용자는 development 네임스페이스에서 파드를 생성, 조회, 가져오기, 업데이트, 삭제할 수 있는 권한을 가지게 됩니다.</li>
</ul>
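<p>위 요약을 실제 명령으로 옮기면 대략 아래와 같은 흐름이 된다 (CSR 오브젝트 이름 <code>john-developer</code>, Role 이름 <code>developer</code> 등은 예시로 정한 값이며, 환경에 맞게 조정이 필요하다):</p>
<pre><code>## 1) signerName을 포함한 CSR 오브젝트 생성 및 승인
cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  request: $(cat /root/CKA/john.csr | base64 | tr -d &#39;\n&#39;)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

kubectl certificate approve john-developer

## 2) development 네임스페이스에 Role / RoleBinding 생성
kubectl create role developer -n development --verb=create,list,get,update,delete --resource=pods
kubectl create rolebinding john-developer-binding -n development --role=developer --user=john

## 3) 권한 확인
kubectl auth can-i create pods -n development --as=john</code></pre>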
]]></description>
        </item>
        <item>
            <title><![CDATA[Kubernetes Udemy MOCK -1]]></title>
            <link>https://velog.io/@jupiter-j/Kubernetes-Udemy-MOCK-1</link>
            <guid>https://velog.io/@jupiter-j/Kubernetes-Udemy-MOCK-1</guid>
            <pubDate>Tue, 09 Jul 2024 07:10:09 GMT</pubDate>
            <description><![CDATA[<h1 id="kubernetes-udemy-mock-1">Kubernetes Udemy MOCK-1</h1>
<h3 id="pod-생성">Pod 생성</h3>
<h4 id="deploy-a-pod-named-nginx-pod-using-the-nginxalpine-image-once-done-click-on-the-next">Deploy a pod named nginx-pod using the nginx:alpine image. Once done, click on the Next</h4>
<hr>
<pre><code>controlplane ~ ➜  k run nginx-pod --image=nginx:alpine
pod/nginx-pod created

controlplane ~ ➜  k get po
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          3s

controlplane ~ ➜  k describe po nginx-pod 
Name:             nginx-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             controlplane/192.4.145.9
Start Time:       Mon, 08 Jul 2024 04:09:06 +0000
Labels:           run=nginx-pod
Annotations:      &lt;none&gt;
Status:           Running
IP:               10.244.0.4
IPs:
  IP:  10.244.0.4
Containers:
  nginx-pod:
    Container ID:   containerd://248e912896a33ff427a18acb84f167b4b094a8900f335b5005bd8222bf2362cf
    Image:          nginx:alpine</code></pre><br>

<h3 id="pod-상세-설정-생성">Pod 상세 설정 생성</h3>
<h4 id="deploy-a-messaging-pod-using-the-redisalpine-image-with-the-labels-set-to-tiermsg">Deploy a messaging pod using the redis:alpine image with the labels set to tier=msg.</h4>
<hr>
<pre><code>&lt;k run --help 명령어 사용&gt;
Use &quot;kubectl options&quot; for a list of global command-line options (applies to all commands).

controlplane ~ ➜   kubectl run hazelcast --image=hazelcast/hazelcast --labels=&quot;app=hazelcast,env=prod&quot;^C

controlplane ~ ✖ k run messaging --image=redis:alpine --labels=tier=msg
pod/messaging created

controlplane ~ ➜  k get po
NAME        READY   STATUS              RESTARTS   AGE
messaging   0/1     ContainerCreating   0          6s
nginx-pod   1/1     Running             0          2m36s

controlplane ~ ➜  k describe po messaging 
Name:             messaging
Namespace:        default
Priority:         0
Service Account:  default
Node:             controlplane/192.4.145.9
Start Time:       Mon, 08 Jul 2024 04:11:36 +0000
Labels:           tier=msg
Annotations:      &lt;none&gt;
Status:           Running
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  messaging:</code></pre><br>

<h3 id="ns-생성">NS 생성</h3>
<h4 id="create-a-namespace-named-apx-x9984574">Create a namespace named apx-x9984574.</h4>
<hr>
<pre><code>controlplane ~ ✖ k get ns
NAME              STATUS   AGE
default           Active   109m
kube-flannel      Active   109m
kube-node-lease   Active   109m
kube-public       Active   109m
kube-system       Active   109m

controlplane ~ ➜  k create ns apx-x9984574
namespace/apx-x9984574 created

controlplane ~ ➜  k get ns
NAME              STATUS   AGE
apx-x9984574      Active   4s
default           Active   109m
kube-flannel      Active   109m
kube-node-lease   Active   109m
kube-public       Active   109m
kube-system       Active   109m
</code></pre><br>

<h3 id="json-파일-복사">json 파일 복사</h3>
<h4 id="get-the-list-of-nodes-in-json-format-and-store-it-in-a-file-at-optoutputsnodes-z3444kd9json">Get the list of nodes in JSON format and store it in a file at /opt/outputs/nodes-z3444kd9.json.</h4>
<hr>
<pre><code>controlplane ~ ➜  k get no
NAME           STATUS   ROLES           AGE    VERSION
controlplane   Ready    control-plane   110m   v1.30.0

controlplane ~ ➜  k get no -o json
{
    &quot;apiVersion&quot;: &quot;v1&quot;,
    &quot;items&quot;: [
        {
            &quot;apiVersion&quot;: &quot;v1&quot;,
            &quot;kind&quot;: &quot;Node&quot;,
            &quot;metadata&quot;: {
                &quot;annotations&quot;: {
                    &quot;flannel.alpha.coreos.com/backend-data&quot;: &quot;{\&quot;VNI\&quot;:1,\&quot;VtepMAC\&quot;:\&quot;b6:66:c3:d9:d9:27\&quot;}&quot;,
                    &quot;flannel.alpha.coreos.com/backend-type&quot;: &quot;vxlan&quot;,
                    &quot;flannel.alpha.coreos.com/kube-subnet-manager&quot;: &quot;true&quot;,
.
.
.

    ],
    &quot;kind&quot;: &quot;List&quot;,
    &quot;metadata&quot;: {
        &quot;resourceVersion&quot;: &quot;&quot;
    }
}

⭐️ json 파일을 해당 경로에 복사 
controlplane ~ ➜  k get no -o json &gt; /opt/outputs/nodes-z3444kd9.json.

controlplane ~ ➜  cat  /opt/outputs/nodes-z3444kd9.json.
{
    &quot;apiVersion&quot;: &quot;v1&quot;,
    &quot;items&quot;: [
        {
            &quot;apiVersion&quot;: &quot;v1&quot;,
            &quot;kind&quot;: &quot;Node&quot;,
            &quot;metadata&quot;: {
                &quot;annotations&quot;: {
                    &quot;flannel.alpha.coreos.com/backend-data&quot;: &quot;{\&quot;VNI\&quot;:1,\&quot;VtepMAC\&quot;:\&quot;b6:66:c3:d9:d9:27\&quot;}&quot;,
                    &quot;flannel.alpha.coreos.com/backend-type&quot;: &quot;vxlan&quot;,
                    &quot;flannel.alpha.coreos.com/kube-subnet-manager&quot;: &quot;true&quot;,
                    &quot;flannel.alpha.coreos.com/public-ip&quot;: &quot;192.4.145.9&quot;,
                    &quot;kubeadm.alpha.kubernetes.io/cri-socket&quot;: &quot;unix:///var/run/containerd/containerd.sock&quot;,
                    &quot;node.alpha.kubernetes.io/ttl&quot;: &quot;0&quot;,
                    &quot;volumes.kubernetes.io/controller-managed-attach-detach&quot;: &quot;true&quot;
                },
                &quot;creationTimestamp&quot;: &quot;2024-07-08T02:24:59Z&quot;,</code></pre><br>

<h3 id="service-생성">Service 생성</h3>
<h4 id="create-a-service-messaging-service-to-expose-the-messaging-application-within-the-cluster-on-port-6379">Create a service messaging-service to expose the messaging application within the cluster on port 6379.</h4>
<hr>
<blockquote>
<p><code>k expose --help</code> 사용</p>
</blockquote>
<p>메세지 어플리케이션 접속을 위해 메세지-서비스를 만들어 클러스터 안에 있는 메세지 파드 IP와 서비스 연결</p>
<pre><code>controlplane ~ ➜  k expose po messaging --port 6379 --name messaging-service 
service/messaging-service exposed

controlplane ~ ➜  k get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP    120m
messaging-service   ClusterIP   10.108.125.130   &lt;none&gt;        6379/TCP   5s

controlplane ~ ➜  k describe svc messaging-service 
Name:              messaging-service
Namespace:         default
Labels:            tier=msg
Annotations:       &lt;none&gt;
Selector:          tier=msg
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.108.125.130
IPs:               10.108.125.130
Port:              &lt;unset&gt;  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.244.0.5:6379 ⭐️ endppoint 확인
Session Affinity:  None
Events:            &lt;none&gt;

controlplane ~ ➜  k get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
messaging   1/1     Running   0          14m   ⭐️10.244.0.5   controlplane   &lt;none&gt;           &lt;none&gt;
nginx-pod   1/1     Running   0          16m   10.244.0.4   controlplane   &lt;none&gt;           &lt;none&gt;

controlplane ~ ➜  </code></pre><br>

<h3 id="레플리카셋-활용">레플리카셋 활용</h3>
<h4 id="create-a-deployment-named-hr-web-app-using-the-image-kodekloudwebapp-color-with-2-replicas">Create a deployment named hr-web-app using the image kodekloud/webapp-color with 2 replicas.</h4>
<hr>
<pre><code>controlplane ~ ➜  k create deployment hr-web-app --image=kodekloud/webapp-color --replicas=2
deployment.apps/hr-web-app created

controlplane ~ ➜  k get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hr-web-app   0/2     2            0           7s

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-tblmg   1/1     Running   0          17s
hr-web-app-5d6b77db78-w8f64   1/1     Running   0          17s
messaging                     1/1     Running   0          16m
nginx-pod                     1/1     Running   0          19m

controlplane ~ ➜  k describe deploy hr-web-app
Name:                   hr-web-app
Namespace:              default
CreationTimestamp:      Mon, 08 Jul 2024 04:28:08 +0000
Labels:                 app=hr-web-app
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=hr-web-app
Replicas:            확인 ⭐️  2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge</code></pre><br>

<h3 id="pod-생성---상세설정">Pod 생성 - 상세설정</h3>
<h4 id="create-a-static-pod-named-static-busybox-on-the-controlplane-node-that-uses-the-busybox-image-and-the-command-sleep-1000">Create a static pod named static-busybox on the controlplane node that uses the busybox image and the command sleep 1000.</h4>
<hr>
<pre><code>
controlplane ~ ➜  k run static-busybox --image=busybox --dry-run=client -o yaml --comand -- sleep, 1000
error: unknown flag: --comand
See &#39;kubectl run --help&#39; for usage.

controlplane ~ ✖ k run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - command:
    - sleep
    - &quot;1000&quot;
    image: busybox
    name: static-busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

controlplane ~ ➜  k run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 &gt; static-busybox.yaml

controlplane ~ ➜  ls
sample.yaml  static-busybox.yaml

controlplane ~ ➜  cat static-busybox.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - command:
    - sleep
    - &quot;1000&quot;
    image: busybox
    name: static-busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

controlplane ~ ➜  k create -f static-busybox.yaml 
pod/static-busybox created

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-tblmg   1/1     Running   0          16m
hr-web-app-5d6b77db78-w8f64   1/1     Running   0          16m
messaging                     1/1     Running   0          33m
nginx-pod                     1/1     Running   0          35m
static-busybox                1/1     Running   0          5s

###### ⭐️ static 경로에 두지 않으면 삭제되어도 다시 생기지 않음
controlplane ~ ➜  k delete po static-busybox 
pod &quot;static-busybox&quot; deleted

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-tblmg   1/1     Running   0          17m
hr-web-app-5d6b77db78-w8f64   1/1     Running   0          17m
messaging                     1/1     Running   0          34m
nginx-pod                     1/1     Running   0          36m

controlplane ~ ➜  </code></pre><blockquote>
<p>mv static-busybox.yaml <code>/etc/kubernetes/manifests/</code> 경로 암기 </p>
</blockquote>
<pre><code>controlplane ~ ➜  ls
sample.yaml  static-busybox.yaml

controlplane ~ ➜  mv static-busybox.yaml /etc/kubernetes/manifests/

controlplane ~ ➜  k create -f /etc/kubernetes/manifests/static-busybox.yaml
pod/static-busybox created

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-tblmg   1/1     Running   0          20m
hr-web-app-5d6b77db78-w8f64   1/1     Running   0          20m
messaging                     1/1     Running   0          37m
nginx-pod                     1/1     Running   0          39m
static-busybox                1/1     Running   0          11s
static-busybox-controlplane   1/1     Running   0          48s

controlplane ~ ➜  k describe po static-busybox-controlplane
Name:         static-busybox-controlplane
Namespace:    default
Priority:     0
Node:         controlplane/192.4.145.9
Start Time:   Mon, 08 Jul 2024 04:47:58 +0000
Labels:       run=static-busybox
Annotations:  kubernetes.io/config.hash: 61f6c4844f8c32c1b793af0a920431c9
              kubernetes.io/config.mirror: 61f6c4844f8c32c1b793af0a920431c9
              kubernetes.io/config.seen: 2024-07-08T04:47:58.193630488Z
              kubernetes.io/config.source: file
Status:       Running
IP:           10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  Node/controlplane
Containers:
  static-busybox:
    Container ID:  containerd://a9cfbe573713fefdd98e0b8360468ba966c031350cee7690dad40e196a3bdbaf
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      sleep
      1000
    State:          Running</code></pre><br>
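<p>A shorter route (a sketch using the same pod name and image as above) is to write the generated manifest directly into the kubelet static pod directory; the kubelet then creates the mirror pod on its own, so no separate kubectl create step is needed.</p>
<pre><code class="language-bash"># assumes the default staticPodPath of /etc/kubernetes/manifests on the controlplane node
kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 &gt; /etc/kubernetes/manifests/static-busybox.yaml

# the kubelet picks up the file and creates the mirror pod automatically
kubectl get pod static-busybox-controlplane</code></pre><br>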

<h4 id="create-a-pod-in-the-finance-namespace-named-temp-bus-with-the-image-redisalpine">Create a POD in the finance namespace named temp-bus with the image redis:alpine.</h4>
<pre><code>controlplane ~ ➜  k run temp-bus --image=redis:alpine -n finance 
pod/temp-bus created

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-tblmg   1/1     Running   0          24m
hr-web-app-5d6b77db78-w8f64   1/1     Running   0          24m
messaging                     1/1     Running   0          40m
nginx-pod                     1/1     Running   0          43m
static-busybox                1/1     Running   0          3m53s
static-busybox-controlplane   1/1     Running   0          4m30s

controlplane ~ ➜  k get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
hr-web-app-5d6b77db78-tblmg   1/1     Running   0          24m     10.244.0.6    controlplane   &lt;none&gt;           &lt;none&gt;
hr-web-app-5d6b77db78-w8f64   1/1     Running   0          24m     10.244.0.7    controlplane   &lt;none&gt;           &lt;none&gt;
messaging                     1/1     Running   0          41m     10.244.0.5    controlplane   &lt;none&gt;           &lt;none&gt;
nginx-pod                     1/1     Running   0          43m     10.244.0.4    controlplane   &lt;none&gt;           &lt;none&gt;
static-busybox                1/1     Running   0          4m12s   10.244.0.11   controlplane   &lt;none&gt;           &lt;none&gt;
static-busybox-controlplane   1/1     Running   0          4m49s   10.244.0.10   controlplane   &lt;none&gt;           &lt;none&gt;

controlplane ~ ➜  k get po -n finance  ⭐️ only visible when the namespace is specified
NAME       READY   STATUS    RESTARTS   AGE
temp-bus   1/1     Running   0       

controlplane ~ ✖ k describe po temp-bus -n finance 
Name:             temp-bus
Namespace:        finance
Priority:         0
Service Account:  default
Node:             controlplane/192.4.145.9
Start Time:       Mon, 08 Jul 2024 04:51:51 +0000
Labels:           run=temp-bus
Annotations:      &lt;none&gt;
Status:           Running
IP:               10.244.0.12
IPs:
  IP:  10.244.0.12
Containers:
  temp-bus:
    Container ID:   containerd://1232ad33519acb1be66aad503d54ffae50663e0afb9de52263809ed9e2a272ac
    Image:          redis:alpine</code></pre><br>
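<p>For completeness, if the finance namespace did not already exist it would have to be created first; a minimal sketch (in this lab the namespace already exists):</p>
<pre><code class="language-bash"># create the namespace, then the pod inside it
kubectl create namespace finance
kubectl run temp-bus --image=redis:alpine -n finance
kubectl get pod temp-bus -n finance</code></pre><br>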

<h3 id="트러블슈팅">트러블슈팅</h3>
<h4 id="a-new-application-orange-is-deployed-there-is-something-wrong-with-it-identify-and-fix-the-issue">A new application orange is deployed. There is something wrong with it. Identify and fix the issue.</h4>
<pre><code>controlplane ~ ➜  k get po
NAME                          READY   STATUS                  RESTARTS      AGE
hr-web-app-5d6b77db78-nvvfq   1/1     Running                 0             4m17s
hr-web-app-5d6b77db78-q4tg2   1/1     Running                 0             4m17s
messaging                     1/1     Running                 0             8m24s
nginx-pod                     1/1     Running                 0             15m
orange                        0/1     Init:CrashLoopBackOff   3 (16s ago)   59s
static-busybox                1/1     Running                 0             119s
static-busybox-controlplane   1/1     Running                 0             2m21s

controlplane ~ ➜  k describe po orange 
Name:             orange
Namespace:        default
Priority:         0
Service Account:  default
Node:             controlplane/192.10.141.6
Start Time:       Tue, 09 Jul 2024 06:30:42 +0000
Labels:           &lt;none&gt;
Annotations:      &lt;none&gt;
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:  10.244.0.13
📌Init Containers:
  init-myservice:
    Container ID:  containerd://6a57d42900025961c790d0a31c84555b3f1ca72eec2433b4a49fe8e5e23c3615
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      sh
      -c
      sleeeep 2;
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    127
      Started:      Tue, 09 Jul 2024 06:31:25 +0000
      Finished:     Tue, 09 Jul 2024 06:31:25 +0000
    Ready:          False
    Restart Count:  3
    Environment:    &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6bgcl (ro)
Containers:
📌  orange-container:
    Container ID:  
    Image:         busybox:1.28
    Image ID:      
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      sh
      -c
      echo The app is running! &amp;&amp; sleep 3600
    State:          Waiting
      Reason:       PodInitializing 🛑 pod is still initializing
    Ready:          False
    Restart Count:  0
    Environment:    &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6bgcl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 False 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6bgcl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       &lt;nil&gt;
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              &lt;none&gt;
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  64s                default-scheduler  Successfully assigned default/orange to controlplane
  Normal   Pulled     63s                kubelet            Successfully pulled image &quot;busybox&quot; in 372ms (372ms including waiting). Image size: 2160406 bytes.
  Normal   Pulled     62s                kubelet            Successfully pulled image &quot;busybox&quot; in 293ms (293ms including waiting). Image size: 2160406 bytes.
  Normal   Pulled     48s                kubelet            Successfully pulled image &quot;busybox&quot; in 284ms (284ms including waiting). Image size: 2160406 bytes.
  Normal   Pulling    21s (x4 over 64s)  kubelet            Pulling image &quot;busybox&quot;
  Normal   Created    21s (x4 over 63s)  kubelet            Created container init-myservice
  Normal   Started    21s (x4 over 63s)  kubelet            Started container init-myservice
  Normal   Pulled     21s                kubelet            Successfully pulled image &quot;busybox&quot; in 295ms (295ms including waiting). Image size: 2160406 bytes.
  Warning  BackOff    6s (x6 over 62s)   kubelet            Back-off restarting failed container init-myservice in pod orange_default(7df5e0c3-879c-44b2-a73c-b8ab542ce760)


🛑 Fix the typo in the init container command of the running pod: sleeeep -&gt; sleep
controlplane ~ ➜  k edit pod orange
error: pods &quot;orange&quot; is invalid
A copy of your changes has been stored to &quot;/tmp/kubectl-edit-2975896844.yaml&quot;
error: Edit cancelled, no valid changes were saved.

✅ Apply the corrected yaml to the running pod with a force replace
controlplane ~ ✖ k replace --force -f /tmp/kubectl-edit-2975896844.yaml
pod &quot;orange&quot; deleted
pod/orange replaced

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-nvvfq   1/1     Running   0          11m
hr-web-app-5d6b77db78-q4tg2   1/1     Running   0          11m
messaging                     1/1     Running   0          16m
nginx-pod                     1/1     Running   0          22m
orange                        1/1     Running   0          69s
static-busybox                1/1     Running   0          9m39s
static-busybox-controlplane   1/1     Running   0          10m</code></pre><br>
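<p>The same fix can be scripted end to end; a sketch assuming the only problem is the sleeeep typo in the init container (the file name /tmp/orange-fixed.yaml is just an example):</p>
<pre><code class="language-bash"># export the broken pod spec, correct the typo, and force-recreate the pod
kubectl get pod orange -o yaml | sed &#39;s/sleeeep/sleep/&#39; &gt; /tmp/orange-fixed.yaml
kubectl replace --force -f /tmp/orange-fixed.yaml

# the init container should now exit cleanly and the main container reaches Running
kubectl get pod orange -w</code></pre><br>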

<h4 id="expose-the-hr-web-app-as-service-hr-web-app-service-application-on-port-30082-on-the-nodes-on-the-cluster">Expose the hr-web-app as service hr-web-app-service application on port 30082 on the nodes on the cluster.</h4>
<p>The web application listens on port 8080.</p>
<pre><code>
Usage:
  kubectl expose (-f FILENAME | TYPE NAME) [--port=port]
[--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name]
[--external-ip=external-ip-of-service] [--type=type] [options]

Use &quot;kubectl options&quot; for a list of global command-line options (applies to all
commands).

controlplane ~ ➜  k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-5d6b77db78-nvvfq   1/1     Running   0          13m
hr-web-app-5d6b77db78-q4tg2   1/1     Running   0          13m
messaging                     1/1     Running   0          17m
nginx-pod                     1/1     Running   0          24m
orange                        1/1     Running   0          2m22s
static-busybox                1/1     Running   0          10m
static-busybox-controlplane   1/1     Running   0          11m

controlplane ~ ➜  k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   64m   v1.30.0

controlplane ~ ➜  k get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hr-web-app   2/2     2            2           13m

# expose target, service name, service type, and service port
controlplane ~ ➜  k expose deploy hr-web-app --name=hr-web-app-service --type NodePort --port 8080
service/hr-web-app-service exposed

controlplane ~ ➜  k get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hr-web-app-service   NodePort    10.101.22.212    &lt;none&gt;     🛑 8080:30893/TCP   14s
kubernetes           ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP          68m
messaging-service    ClusterIP   10.110.161.220   &lt;none&gt;        6379/TCP         17m

controlplane ~ ➜  k describe svc hr-web-app-service 
Name:                     hr-web-app-service
Namespace:                default
Labels:                   app=hr-web-app
Annotations:              &lt;none&gt;
Selector:                 app=hr-web-app
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.22.212
IPs:                      10.101.22.212
Port:                     &lt;unset&gt;  8080/TCP
TargetPort:               8080/TCP
NodePort:                 &lt;unset&gt;  30893/TCP
Endpoints:                10.244.0.8:8080,10.244.0.9:8080 ⭐️ check that both endpoints are listed
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;

⭐️ Edit the service nodePort: 30893 -&gt; 30892 (the task asks for 30082)
controlplane ~ ➜  k edit svc hr-web-app-service 
service/hr-web-app-service edited

controlplane ~ ➜  k get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hr-web-app-service   NodePort    10.101.22.212    &lt;none&gt;        8080:30892/TCP   100s
kubernetes           ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP          69m
messaging-service    ClusterIP   10.110.161.220   &lt;none&gt;        6379/TCP         19m</code></pre><blockquote>
<pre><code>kubectl expose deployment hr-web-app --type=NodePort --port=8080 --name=hr-web-app-service --dry-run=client -o yaml &gt; hr-web-app-service.yaml</code></pre></blockquote>
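<p>Since the question asks for nodePort 30082, the generated yaml can be edited to pin that value before it is applied, instead of editing the live service afterwards; a sketch of the resulting manifest (the selector comes from the describe output above):</p>
<pre><code class="language-bash"># hypothetical hr-web-app-service.yaml with the nodePort already set to 30082
kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: hr-web-app-service
spec:
  type: NodePort
  selector:
    app: hr-web-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30082
EOF</code></pre>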
<br>

<h4 id="use-json-path-query-to-retrieve-the-osimages-of-all-the-nodes-and-store-it-in-a-file-optoutputsnodesosx43kj56txt">Use JSON PATH query to retrieve the osImages of all the nodes and store it in a file /opt/outputs/nodes_os_x43kj56.txt.</h4>
<p>The osImages are under the nodeInfo section under status of each node.</p>
<blockquote>
<p>kubernetes doc: cheat sheet<br>https://kubernetes.io/ko/docs/reference/kubectl/cheatsheet/</p>
</blockquote>
<p>Cheat-sheet example: get the version label of every pod carrying the label app=cassandra</p>
<pre><code>kubectl get pods --selector=app=cassandra -o jsonpath=&#39;{.items[*].metadata.labels.version}&#39;</code></pre>
<pre><code>controlplane ~ ➜  k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   74m   v1.30.0

controlplane ~ ➜  k get no -o json
{
    &quot;apiVersion&quot;: &quot;v1&quot;,
  📌 &quot;items&quot;: [
        {
            &quot;apiVersion&quot;: &quot;v1&quot;,
            &quot;kind&quot;: &quot;Node&quot;,
            &quot;metadata&quot;: {
                &quot;annotations&quot;: {
                    &quot;flannel.alpha.coreos.com/backend-data&quot;: &quot;{&quot;VNI&quot;:1,&quot;VtepMAC&quot;:&quot;3a:69:1b:47:d3:28&quot;}&quot;,
                    &quot;flannel.alpha.coreos.com/backend-type&quot;: &quot;vxlan&quot;,
                    &quot;flannel.alpha.coreos.com/kube-subnet-manager&quot;: &quot;true&quot;,
                    &quot;flannel.alpha.coreos.com/public-ip&quot;: &quot;192.10.141.6&quot;,
                    &quot;kubeadm.alpha.kubernetes.io/cri-socket&quot;: &quot;unix:///var/run/containerd/containerd.sock&quot;,
                    &quot;node.alpha.kubernetes.io/ttl&quot;: &quot;0&quot;,
                    &quot;volumes.kubernetes.io/controller-managed-attach-detach&quot;: &quot;true&quot;
                },
                &quot;creationTimestamp&quot;: &quot;2024-07-09T05:35:51Z&quot;,
                &quot;labels&quot;: {
                    &quot;beta.kubernetes.io/arch&quot;: &quot;amd64&quot;,
                    &quot;beta.kubernetes.io/os&quot;: &quot;linux&quot;,
                    &quot;kubernetes.io/arch&quot;: &quot;amd64&quot;,
                    &quot;kubernetes.io/hostname&quot;: &quot;controlplane&quot;,
                    &quot;kubernetes.io/os&quot;: &quot;linux&quot;,
                    &quot;node-role.kubernetes.io/control-plane&quot;: &quot;&quot;,
                    &quot;node.kubernetes.io/exclude-from-external-load-balancers&quot;: &quot;&quot;
                },
                &quot;name&quot;: &quot;controlplane&quot;,
                &quot;resourceVersion&quot;: &quot;6358&quot;,
                &quot;uid&quot;: &quot;60363d3c-c9a6-4a35-b79e-4637cbc864f7&quot;
            },
            &quot;spec&quot;: {
                &quot;podCIDR&quot;: &quot;10.244.0.0/24&quot;,
                &quot;podCIDRs&quot;: [
                    &quot;10.244.0.0/24&quot;
                ]
            },
            &quot;status&quot;: {
                &quot;addresses&quot;: [
                    {
                        &quot;address&quot;: &quot;192.10.141.6&quot;,
                        &quot;type&quot;: &quot;InternalIP&quot;
                    },
                    {
                        &quot;address&quot;: &quot;controlplane&quot;,
                        &quot;type&quot;: &quot;Hostname&quot;
                    }
                ],
                &quot;allocatable&quot;: {
                    &quot;cpu&quot;: &quot;36&quot;,
                    &quot;ephemeral-storage&quot;: &quot;936398358207&quot;,
                    &quot;hugepages-1Gi&quot;: &quot;0&quot;,
                    &quot;hugepages-2Mi&quot;: &quot;0&quot;,
                    &quot;memory&quot;: &quot;214484656Ki&quot;,
                    &quot;pods&quot;: &quot;110&quot;
                },
                &quot;capacity&quot;: {
                    &quot;cpu&quot;: &quot;36&quot;,
                    &quot;ephemeral-storage&quot;: &quot;1016057248Ki&quot;,
                    &quot;hugepages-1Gi&quot;: &quot;0&quot;,
                    &quot;hugepages-2Mi&quot;: &quot;0&quot;,
                    &quot;memory&quot;: &quot;214587056Ki&quot;,
                    &quot;pods&quot;: &quot;110&quot;
                },
                &quot;conditions&quot;: [
                    {
                        &quot;lastHeartbeatTime&quot;: &quot;2024-07-09T05:36:13Z&quot;,
                        &quot;lastTransitionTime&quot;: &quot;2024-07-09T05:36:13Z&quot;,
                        &quot;message&quot;: &quot;Flannel is running on this node&quot;,
                        &quot;reason&quot;: &quot;FlannelIsUp&quot;,
                        &quot;status&quot;: &quot;False&quot;,
                        &quot;type&quot;: &quot;NetworkUnavailable&quot;
                    },
                    {
                        &quot;lastHeartbeatTime&quot;: &quot;2024-07-09T06:48:43Z&quot;,
                        &quot;lastTransitionTime&quot;: &quot;2024-07-09T05:35:47Z&quot;,
                        &quot;message&quot;: &quot;kubelet has sufficient memory available&quot;,
                        &quot;reason&quot;: &quot;KubeletHasSufficientMemory&quot;,
                        &quot;status&quot;: &quot;False&quot;,
                        &quot;type&quot;: &quot;MemoryPressure&quot;
                    },
                    {
                        &quot;lastHeartbeatTime&quot;: &quot;2024-07-09T06:48:43Z&quot;,
                        &quot;lastTransitionTime&quot;: &quot;2024-07-09T05:35:47Z&quot;,
                        &quot;message&quot;: &quot;kubelet has no disk pressure&quot;,
                        &quot;reason&quot;: &quot;KubeletHasNoDiskPressure&quot;,
                        &quot;status&quot;: &quot;False&quot;,
                        &quot;type&quot;: &quot;DiskPressure&quot;
                    },
                    {
                        &quot;lastHeartbeatTime&quot;: &quot;2024-07-09T06:48:43Z&quot;,
                        &quot;lastTransitionTime&quot;: &quot;2024-07-09T05:35:47Z&quot;,
                        &quot;message&quot;: &quot;kubelet has sufficient PID available&quot;,
                        &quot;reason&quot;: &quot;KubeletHasSufficientPID&quot;,
                        &quot;status&quot;: &quot;False&quot;,
                        &quot;type&quot;: &quot;PIDPressure&quot;
                    },
                    {
                        &quot;lastHeartbeatTime&quot;: &quot;2024-07-09T06:48:43Z&quot;,
                        &quot;lastTransitionTime&quot;: &quot;2024-07-09T05:36:11Z&quot;,
                        &quot;message&quot;: &quot;kubelet is posting ready status&quot;,
                        &quot;reason&quot;: &quot;KubeletReady&quot;,
                        &quot;status&quot;: &quot;True&quot;,
                        &quot;type&quot;: &quot;Ready&quot;
                    }
                ],
                &quot;daemonEndpoints&quot;: {
                    &quot;kubeletEndpoint&quot;: {
                        &quot;Port&quot;: 10250
                    }
                },
                &quot;images&quot;: [
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/kodekloud/fluent-ui-running@sha256:78fd68ba8a79adcd3e58897a933492886200be513076ba37f843008cc0168f81&quot;,
                            &quot;docker.io/kodekloud/fluent-ui-running:latest&quot;
                        ],
                        &quot;sizeBytes&quot;: 389734636
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee&quot;,
                            &quot;docker.io/library/nginx:latest&quot;
                        ],
                        &quot;sizeBytes&quot;: 70991807
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b&quot;,
                            &quot;registry.k8s.io/etcd:3.5.12-0&quot;
                        ],
                        &quot;sizeBytes&quot;: 57236178
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3&quot;,
                            &quot;registry.k8s.io/kube-apiserver:v1.30.0&quot;
                        ],
                        &quot;sizeBytes&quot;: 32663599
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/kodekloud/webapp-color@sha256:99c3821ea49b89c7a22d3eebab5c2e1ec651452e7675af243485034a72eb1423&quot;,
                            &quot;docker.io/kodekloud/webapp-color:latest&quot;
                        ],
                        &quot;sizeBytes&quot;: 31777918
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe&quot;,
                            &quot;registry.k8s.io/kube-controller-manager:v1.30.0&quot;
                        ],
                        &quot;sizeBytes&quot;: 31030110
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/weaveworks/weave-kube@sha256:d797338e7beb17222e10757b71400d8471bdbd9be13b5da38ce2ebf597fb4e63&quot;,
                            &quot;docker.io/weaveworks/weave-kube:2.8.1&quot;
                        ],
                        &quot;sizeBytes&quot;: 30924173
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210&quot;,
                            &quot;registry.k8s.io/kube-proxy:v1.30.0&quot;
                        ],
                        &quot;sizeBytes&quot;: 29020473
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/flannel/flannel@sha256:c951947891d7811a4da6bf6f2f4dcd09e33c6e1eb6a95022f3f621d00ed4615e&quot;,
                            &quot;docker.io/flannel/flannel:v0.23.0&quot;
                        ],
                        &quot;sizeBytes&quot;: 28051548
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/library/nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42&quot;,
                            &quot;docker.io/library/nginx:alpine&quot;
                        ],
                        &quot;sizeBytes&quot;: 20461204
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67&quot;,
                            &quot;registry.k8s.io/kube-scheduler:v1.30.0&quot;
                        ],
                        &quot;sizeBytes&quot;: 19208660
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1&quot;,
                            &quot;registry.k8s.io/coredns/coredns:v1.11.1&quot;
                        ],
                        &quot;sizeBytes&quot;: 18182961
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/library/redis@sha256:de14eedfbd1fc871d0f5aa1773fd80743930e45354d035b6f3b551e7ffa44df8&quot;,
                            &quot;docker.io/library/redis:alpine&quot;
                        ],
                        &quot;sizeBytes&quot;: 16801716
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e&quot;,
                            &quot;registry.k8s.io/coredns/coredns:v1.10.1&quot;
                        ],
                        &quot;sizeBytes&quot;: 16190758
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/weaveworks/weave-npc@sha256:38d3e30a97a2260558f8deb0fc4c079442f7347f27c86660dbfc8ca91674f14c&quot;,
                            &quot;docker.io/weaveworks/weave-npc:2.8.1&quot;
                        ],
                        &quot;sizeBytes&quot;: 12814131
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/flannel/flannel-cni-plugin@sha256:ca6779c6ad63b77af8a00151cefc08578241197b9a6fe144b0e55484bc52b852&quot;,
                            &quot;docker.io/flannel/flannel-cni-plugin:v1.2.0&quot;
                        ],
                        &quot;sizeBytes&quot;: 3879095
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/library/busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7&quot;,
                            &quot;docker.io/library/busybox:latest&quot;
                        ],
                        &quot;sizeBytes&quot;: 2160406
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/library/busybox@sha256:c3839dd800b9eb7603340509769c43e146a74c63dca3045a8e7dc8ee07e53966&quot;
                        ],
                        &quot;sizeBytes&quot;: 2160005
                    },
                    {
                        &quot;names&quot;: [
                            &quot;docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47&quot;,
                            &quot;docker.io/library/busybox:1.28&quot;
                        ],
                        &quot;sizeBytes&quot;: 727869
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097&quot;,
                            &quot;registry.k8s.io/pause:3.9&quot;
                        ],
                        &quot;sizeBytes&quot;: 321520
                    },
                    {
                        &quot;names&quot;: [
                            &quot;registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db&quot;,
                            &quot;registry.k8s.io/pause:3.6&quot;
                        ],
                        &quot;sizeBytes&quot;: 301773
                    }
                ],
              📌 &quot;nodeInfo&quot;: {
                    &quot;architecture&quot;: &quot;amd64&quot;,
                    &quot;bootID&quot;: &quot;aabdead6-7ad3-48f4-9e4e-e7d012582f1e&quot;,
                    &quot;containerRuntimeVersion&quot;: &quot;containerd://1.6.26&quot;,
                    &quot;kernelVersion&quot;: &quot;5.4.0-1106-gcp&quot;,
                    &quot;kubeProxyVersion&quot;: &quot;v1.30.0&quot;,
                    &quot;kubeletVersion&quot;: &quot;v1.30.0&quot;,
                    &quot;machineID&quot;: &quot;19d93cf879df4a7dbff7fb9eabd1279f&quot;,
                    &quot;operatingSystem&quot;: &quot;linux&quot;,
              📌    &quot;osImage&quot;: &quot;Ubuntu 22.04.4 LTS&quot;,
                    &quot;systemUUID&quot;: &quot;1a5d0c79-cc3c-637d-7715-012ff9847f27&quot;
                }
            }
        }
    ],
    &quot;kind&quot;: &quot;List&quot;,
    &quot;metadata&quot;: {
        &quot;resourceVersion&quot;: &quot;&quot;
    }
}

controlplane ~ ➜  k get nodes -o jsonpath=&#39;{.items[*].status.nodeInfo.osImage}&#39;
Ubuntu 22.04.4 LTS

controlplane ~ ➜  k get nodes -o jsonpath=&#39;{.items[*].status.nodeInfo.osImage}&#39; &gt; /opt/outputs/nodes_os_x43kj56.txt

controlplane ~ ➜  cat /opt/outputs/nodes_os_x43kj56.txt
Ubuntu 22.04.4 LTS</code></pre>
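<p>A couple of equivalent queries, sketched for reference (the range form prints one osImage per line, which matters on multi-node clusters):</p>
<pre><code class="language-bash"># one osImage per line for every node, written to the file from the task
kubectl get nodes -o jsonpath=&#39;{range .items[*]}{.status.nodeInfo.osImage}{&quot;\n&quot;}{end}&#39; &gt; /opt/outputs/nodes_os_x43kj56.txt

# the same field shown with custom columns (adds a header row)
kubectl get nodes -o custom-columns=&#39;OS_IMAGE:.status.nodeInfo.osImage&#39;</code></pre>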
<br>

<h4 id="create-a-persistent-volume-with-the-given-specification">Create a Persistent Volume with the given specification:</h4>
<p>Volume name: pv-analytics<br>Storage: 100Mi<br>Access mode: ReadWriteMany<br>Host path: /pv/data-analytics</p>
<blockquote>
<p>Search the docs for pv:<br>https://kubernetes.io/docs/concepts/storage/persistent-volumes/</p>
</blockquote>
<p>Reference example from the docs:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow</code></pre>
<pre><code>controlplane ~ ➜  ls
sample.yaml

controlplane ~ ➜  vi pv.yaml

controlplane ~ ➜  cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /pv/data-analytics

controlplane ~ ➜  k create -f pv.yaml 
persistentvolume/pv-analytics created

controlplane ~ ➜  k get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-analytics   100Mi      RWX            Retain           Available                          &lt;unset&gt;                          3s

controlplane ~ ➜  k describe pv
Name:            pv-analytics
Labels:          &lt;none&gt;
Annotations:     &lt;none&gt;
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        100Mi
Node Affinity:   &lt;none&gt;
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /pv/data-analytics
    HostPathType:  
Events:            &lt;none&gt;</code></pre>
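<p>To check that the volume is usable, a PersistentVolumeClaim could be bound to it; a minimal sketch (the claim name pvc-analytics is an example, and storageClassName is left empty so the claim matches this pre-provisioned, class-less PV):</p>
<pre><code class="language-bash"># hypothetical claim matching the access mode and capacity of pv-analytics
kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-analytics
spec:
  storageClassName: &quot;&quot;
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

# pv-analytics should move from Available to Bound
kubectl get pv,pvc</code></pre>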
<br>
<br>
<br>
<br>
<br>
<br>
<br>]]></description>
        </item>
        <item>
            <title><![CDATA[[파알문] 7. 깊이,넓이 우선탐색 활용]]></title>
            <link>https://velog.io/@jupiter-j/%ED%8C%8C%EC%95%8C%EB%AC%B8-7</link>
            <guid>https://velog.io/@jupiter-j/%ED%8C%8C%EC%95%8C%EB%AC%B8-7</guid>
            <pubDate>Tue, 05 Sep 2023 07:52:06 GMT</pubDate>
            <description><![CDATA[<h1 id="230830">23.08.30</h1>
<h2 id="7-1-최대-점수-구하기">7-1. 최대 점수 구하기</h2>
<p>I did not think of splitting the input into separate time and score lists; trying to store everything as tuples and handle it in one go made both my implementation and my way of attacking the problem limited. </p>
<ul>
<li>에러<pre><code class="language-py">def DFS(L, saveScore, saveTime):
  if L==n-1: #에러
      if saveTime==m:
          return saveScore
  else:
      for i in range(n):
          DFS(L+1, saveScore+score[i], saveTime+time[i])
          DFS(L+1, saveScore, saveTime)

n, m = map(int, input().split())
score = []
time = []
for _ in range(n):
    a, b = map(int, input().split())
    score.append(a)
    time.append(b)
print(DFS(0,0,0))</code></pre>
</li>
<li>정답</li>
</ul>
<pre><code class="language-py">&quot;&quot;&quot;
1. I got the termination condition wrong
2. I did not think about where to compare and store the result
&quot;&quot;&quot;

def DFS(L, saveScore, saveTime):
    global res
    if saveTime&gt;m:
        return
    if L==n:
        if saveScore&gt;res:
            res=saveScore
    else:
        for i in range(n):
            DFS(L+1, saveScore+score[L], saveTime+time[L])
            DFS(L+1, saveScore, saveTime)

n, m = map(int, input().split())
score = []
time = []
for _ in range(n):
    a, b = map(int, input().split())
    score.append(a)
    time.append(b)
res = -1e9
DFS(0, 0, 0)
print(res)</code></pre><br>


<h2 id="7-2-휴가">7-2. 휴가</h2>
<p>This was a DP problem from 이코테, and it is a bad sign that I could not solve it.
I still cannot handle BFS comfortably either. </p>
<pre><code class="language-py">def DFS(L, sum):
    global res
    if L==n+1:
        if sum&gt;res:
            res=sum
    else:
        if L+t[L]&lt;=n+1: #상담기간 제한
            DFS(L+t[L], sum+p[L]) #(현재날짜+다음상담기간, 금액 누적)
        DFS(L+1, sum) 

n = int(input())
t = []
p = []
for i in range(n):
    a, b= map(int, input().split())
    t.append(a)
    p.append(b)

res = -1e9
t.insert(0,0) # insert a dummy 0 so the list index matches the day number
p.insert(0,0)
DFS(1, 0)   # start from day 1 with no accumulated profit
print(res)</code></pre>
<br>

<h2 id="7-3-양팔저울">7-3. 양팔저울</h2>
<blockquote>
<p>Implement DFS with three branches for each weight: put it on the left pan, put it on the right pan, or leave it unused.
<img src="https://velog.velcdn.com/images/jupiter-j/post/0d733269-51c4-462d-ba58-f2460e1ca630/image.png" alt=""></p>
</blockquote>
<pre><code class="language-py">def DFS(L, sum):
    global res
    if L==n:
        if 0&lt;sum&lt;=s: # only positive sums up to the total weight; by symmetry the negative sums are mirror images
            res.add(sum)
    else:
        DFS(L+1, sum+g[L]) # put the weight on the left pan
        DFS(L+1, sum-g[L]) # put the weight on the right pan
        DFS(L+1, sum) # do not use the weight

n = int(input())
g= list(map(int, input().split()))
s = sum(g)
res = set() # 중복제거
DFS(0,0)
print(s-len(res))</code></pre>
<br>

<h2 id="7-4-동전-바꿔주기">7-4. 동전 바꿔주기</h2>
<pre><code class="language-py">&quot;&quot;&quot;
20
3
5 3
10 2
1 5
&quot;&quot;&quot;
def DFS(L, sum):
    global cnt
    if sum&gt;t:
        return
    if L==k:
        if sum==t:
            cnt+=1
    else:
        for i in range(cn[L]+1):
            DFS(L+1, sum+(cv[L]*i))

t = int(input())
k = int(input())
cv = []
cn = []
for i in range(k):
    a, b = map(int, input().split())
    cv.append(a)
    cn.append(b)
cnt = 0
DFS(0,0)
print(cnt)</code></pre>
<br>

<h2 id="7-5-동전-분배하기">7-5. 동전 분배하기</h2>
<p>The idea is to run DFS over the coins, handing each coin to one of the three people and backtracking, but I could not implement it on my own. </p>
<pre><code class="language-py">&quot;&quot;&quot;
동전 분배 하기
7
8
9
11
12
23
15
17

coin = [8,9,11,12,23,15,17]
&quot;&quot;&quot;

def DFS(L):
    global res
    if L==n:
        cha=max(money)-min(money)
        if cha&lt;res:
            tmp=set()
            for x in money:
                tmp.add(x)
            if len(tmp)==3:
                res=cha
    else:
        for i in range(3):
            money[i]+=coin[L]
            DFS(L+1)
            money[i]-=coin[L]


n = int(input())
money = [0]*3
res = 1e9
coin = [int(input()) for _ in range(n)]
DFS(0)
print(res)</code></pre>
<br>

<h2 id="7-6-알파코드">7-6. 알파코드</h2>
<pre><code class="language-py">&quot;&quot;&quot;
알파코드
# 한 글자씩 쪼개기
# 26 이하 까지 두 자릿 수로 쪼개기
&quot;&quot;&quot;

def DFS(L,P):
    global cnt
    if L==n:
        cnt+=1
        for j in range(P):
            print(chr(res[j]+64), end=&#39;&#39;)
        print()
    else:
        for i in range(1,27):
            if code[L]==i:
                res[P]=i
                DFS(L+1, P+1)
            elif i&gt;=10 and code[L]==i//10 and code[L+1]==i%10:
                res[P]=i
                DFS(L+2, P+1)

code=list(map(int, input()))
n = len(code) #종착점
code.insert(n, -1) # sentinel so that code[L+1] never goes out of range
res=[0]*(n+3)
cnt=0
DFS(0,0)
print(cnt)</code></pre>
<h2 id="7-7-송아지-찾기">7-7. 송아지 찾기</h2>
<pre><code class="language-py">from collections import deque
MAX=100000
s, e = map(int, input().split())
ch=[0]*(MAX+1)
dis=[0]*(MAX+1)
ch[s]=1
dis[s]=0
dQ = deque()
dQ.append(s)

while dQ:
    now = dQ.popleft()
    if now == e:
        break
    for x in (now-1, now+1, now+5):
        if 0&lt;x&lt;=MAX:
            if ch[x]==0:
                dQ.append(x)
                ch[x]=1
                dis[x]=dis[now]+1
print(dis[e])</code></pre>
<h2 id="7-8-사과나무">7-8. 사과나무</h2>
<pre><code class="language-py">from collections import deque
n = int(input())
a = [list(map(int, input().split())) for _ in range(n)]
ch=[[0]*n for _ in range(n)]
sum=0
dx = [-1, 0, 1, 0]
dy = [0, 1, 0, -1]
Q = deque()
Q.append((n//2,n//2))
ch[n//2][n//2]=1
sum+=a[n//2][n//2]
L=0

while True:
    if L==n//2:
        break
    h = len(Q)
    for i in range(h):
        tmp= Q.popleft()
        for j in range(4):
            x=tmp[0]+dx[j]
            y=tmp[1]+dy[j]
            if ch[x][y]==0:
                sum+=a[x][y]
                ch[x][y]=1
                Q.append((x,y))
    L+=1
print(sum)</code></pre>
<br>
<br>

<h2 id="7-9-미로의-최단거리">7-9. 미로의 최단거리</h2>
<ul>
<li>정답<pre><code class="language-py">&quot;&quot;&quot;
미로의 최단거리
0 0 0 0 0 0 0 
0 1 1 1 1 1 0 
0 0 0 1 0 0 0 
1 1 0 1 0 1 1 
1 1 0 1 0 0 0 
1 0 0 0 1 0 0 
1 0 1 0 0 0 0
&quot;&quot;&quot;
from collections import deque
dx = [-1,0,1,0]
dy= [0,1,0,-1]
board= [list(map(int, input().split())) for _ in range(7)]
dis = [[0]*7 for _ in range(7)]
Q = deque()
Q.append((0,0))
board[0][0]=1

while Q:
    tmp = Q.popleft()
    for i in range(4):
        x=tmp[0]+dx[i]
        y=tmp[1]+dy[i]
        if 0&lt;=x&lt;=6 and 0&lt;=y&lt;=6 and board[x][y]==0:
            board[x][y]=1
            dis[x][y]=dis[tmp[0]][tmp[1]]+1
            Q.append((x,y))
if dis[6][6]==0:
    print(-1)
else:
    print(dis[6][6])</code></pre>
</li>
</ul>
<br>

<h2 id="7-10-미로탐색">7-10. 미로탐색</h2>
<pre><code class="language-py">&quot;&quot;&quot;
미로의 최단거리
0 0 0 0 0 0 0
0 1 1 1 1 1 0
0 0 0 1 0 0 0
1 1 0 1 0 1 1
1 1 0 1 0 0 0
1 0 0 0 1 0 0
1 0 1 0 0 0 0

&quot;&quot;&quot;

dx=[-1,0,1,0]
dy=[0,1,0,-1]
def DFS(x,y):
    global cnt
    if x==6 and y==6:
        cnt+=1
    else:
        for i in range(4):
            xx=x+dx[i]
            yy=y+dy[i]
            if 0&lt;=xx&lt;=6 and 0&lt;=yy&lt;=6 and board[xx][yy]==0:
                board[xx][yy]=1
                DFS(xx,yy)
                board[xx][yy]=0

board=[list(map(int, input().split())) for _ in range(7)]
cnt=0
board[0][0]=1
DFS(0,0)
print(cnt)  # number of paths from (0,0) to (6,6)</code></pre><br>

<h2 id="7-11-등산-경로">7-11. 등산 경로</h2>
<pre><code class="language-py">&quot;&quot;&quot;
등산경로
값을 변경되는 방식은 BFS
&quot;&quot;&quot;

def DFS(x,y):
    global cnt
    if x == ex and y == ey:
        cnt+=1
    else:
        for k in range(4):
            xx=x+dx[k]
            yy=y+dy[k]
            if 0&lt;=xx&lt;n and 0&lt;=yy&lt;n and ch[xx][yy]==0 and board[xx][yy]&gt;board[x][y]:
                ch[xx][yy]=1
                DFS(xx,yy)
                ch[xx][yy]=0

n = int(input())
board = [[0]*n for _ in range(n)]
ch = [[0]*n for _ in range(n)]
dx = [-1, 0, 1, 0]
dy = [0, 1, 0, -1]
max = -1e9
min = 1e9
for i in range(n):
    tmp = list(map(int, input().split()))
    for j in range(n):
        if tmp[j]&lt;min:
            min=tmp[j]
            sx=i
            sy=j
        if tmp[j]&gt;max:
            max=tmp[j]
            ex=i
            ey=j
        board[i][j]=tmp[j]
ch[sx][sy]=1
cnt=0
DFS(sx, sy)
print(cnt)</code></pre>
<br>


<h2 id="7-12-단지-번호-붙이기">7-12. 단지 번호 붙이기</h2>
<pre><code class="language-py">&quot;&quot;&quot;
7
0110100
0110101
1110101
0000111
0100000
0111110
0111000
&quot;&quot;&quot;
dx = [-1, 0, 1, 0]
dy = [0, 1, 0, -1]
def DFS(x,y):
    global cnt
    cnt+=1
    board[x][y]=0
    for i in range(4):
        xx=x+dx[i]
        yy=y+dy[i]
        if 0&lt;=xx&lt;n and 0&lt;=yy&lt;n and board[xx][yy]==1:
            DFS(xx,yy)

n = int(input())
board = [list(map(int, input())) for _ in range(n)]
res=[]
for i in range(n):
    for j in range(n):
        if board[i][j]==1:
            cnt=0
            DFS(i, j)
            res.append(cnt)
print(len(res))
res.sort()
for x in res:
    print(x)</code></pre>
<p>7-13. 섬나라 아일랜드
7-14. 안전영역</p>
]]></description>
        </item>
    </channel>
</rss>