
[OpenShift] RHOCP 4.14.15 Installation - 4. HAProxy Configuration

RHOCP 4.14.15 Installation (OpenShift Installation) - HAProxy Configuration


Test Environment


Red Hat Enterprise Linux 9.2 (Plow)


HAProxy Installation and Configuration


Configure HAProxy to provide load balancing that distributes traffic evenly across multiple instances.

This configuration is performed on the Bastion node.


Install haproxy, back up the default haproxy.cfg file, and create a new file with the contents below.

yum -y install haproxy

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_default

vi /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 4000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
#frontend main
#    bind *:5000
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js
#
#    use_backend static          if url_static
#    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
##backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog

backend openshift-api-server
    balance source
    mode tcp
    server bootstrap 172.16.0.172:6443 check
    server master01 172.16.0.173:6443 check
    server master02 172.16.0.174:6443 check
    server master03 172.16.0.175:6443 check

frontend machine-config-server
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog

backend machine-config-server
    balance source
    mode tcp
    server bootstrap 172.16.0.172:22623 check
    server master01 172.16.0.173:22623 check
    server master02 172.16.0.174:22623 check
    server master03 172.16.0.175:22623 check

frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog

backend ingress-http
    balance source
    mode tcp
    server infra01 172.16.0.179:80 check
    server infra02 172.16.0.180:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog

backend ingress-https
    balance source
    mode tcp
    server infra01 172.16.0.179:443 check
    server infra02 172.16.0.180:443 check

The node and IP information here follows the plan laid out in the initial 1. Overview post.
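
Before starting the service, it is worth syntax-checking the new configuration with haproxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg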


Now enable the haproxy service so that it starts automatically with the configuration above.

systemctl enable haproxy --now
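
If the service fails to start or the ports are unreachable, keep in mind that a default RHEL 9 bastion with firewalld and SELinux enforcing may also need the load balancer ports opened and haproxy allowed to bind to non-standard ports. A minimal sketch, assuming firewalld and SELinux are both enabled:

firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-service=http --add-service=https

firewall-cmd --reload

setsebool -P haproxy_connect_any 1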


You can see that it is running in the Active state.

systemctl status haproxy

Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@bastion client]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; preset: disa>
     Active: active (running) since Tue 2024-07-16 14:39:37 KST; 9s ago
    Process: 69126 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -f $CFGDIR -c -q $OPTI>
   Main PID: 69128 (haproxy)
      Tasks: 5 (limit: 100314)
     Memory: 5.0M
        CPU: 46ms
     CGroup: /system.slice/haproxy.service
             ├─69128 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -f /etc/hapr>
             └─69130 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -f /etc/hapr>
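
As an additional check, you can verify that haproxy is listening on all four frontend ports (6443, 22623, 80, 443):

ss -ltnp | grep haproxy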


References


https://docs.openshift.com/container-platform/4.14/installing/installing_bare_metal/installing-bare-metal-network-customizations.html

https://docs.openshift.com/container-platform/4.14/installing/installing_bare_metal/installing-bare-metal.html


