[gpfsug-discuss] Enabling SSL/HTTPS/ on Object S3.

Jan-Frode Myklebust janfrode at tanso.net
Thu May 7 21:02:12 BST 2020


(almost verbatim copy of my previous email — in case anybody else needs it,
or has ideas for improvements :-)

The way I would do this is to install "haproxy" on all these nodes, and
have haproxy terminate SSL and balance incoming requests over the 3
CES addresses. For S3 we only need to provide access to the Swift port
at 8080.

First install haproxy:

# yum install haproxy

Put your cert and key into /etc/haproxy/ssl.pem:


# cat server.key server.crt cert-chain.crt > /etc/haproxy/ssl.pem
# chmod 400 /etc/haproxy/ssl.pem
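
Before pointing haproxy at the bundle, it's worth checking that it parses. A quick sketch, using a throwaway self-signed pair (your real server.key/server.crt names will differ):

```shell
# Generate a throwaway self-signed key+cert pair as a stand-in for the
# real server.key/server.crt, and bundle them the same way as above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=s3.example.com" -keyout demo.key -out demo.crt 2>/dev/null
cat demo.key demo.crt > demo-ssl.pem
chmod 400 demo-ssl.pem

# openssl x509 skips the leading key block and prints the first
# certificate's subject and expiry -- a quick check that the bundle
# is readable the way haproxy will read it.
openssl x509 -in demo-ssl.pem -noout -subject -enddate
```

Run the same `openssl x509` command against /etc/haproxy/ssl.pem to check your real bundle.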

Then create a /etc/haproxy/haproxy.cfg:

# cat <<'EOF' > /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
tune.ssl.default-dh-param 2048

# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

listen stats *:80
maxconn 10
timeout client 100s
timeout server 100s
timeout connect 100s
timeout queue 100s

stats enable
stats hide-version
stats refresh 30s
stats show-node
stats auth admin:password
stats uri /haproxy?stats

frontend s3-frontend
bind *:443 ssl crt /etc/haproxy/ssl.pem
default_backend s3-backend

backend s3-backend
balance roundrobin
server ces1 10.33.23.167:8080 check
server ces2 10.33.23.168:8080 check
server ces3 10.33.23.169:8080 check
EOF

# systemctl enable haproxy
# systemctl start haproxy


You only need to modify the IP addresses in the s3-backend (make sure they
point at your floating CES addresses, not the static ones), and maybe set
a better username/password for the stats page at
http://hostname/haproxy?stats.


This setup does two layers of load balancing: first DNS round-robin across
the haproxy nodes, then haproxy's own round-robin over the backends. I
don't think this is a negative thing, and it makes it possible to tune the
load balancing further with more advanced algorithms for selecting
backends. For example, we can do weighting if some backends are more
powerful than others, or use "leastconn" to send new requests to the
backends with the fewest active connections.
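
For example, a weighted "leastconn" variant of the backend section could look like this (the weights here are purely illustrative):

```
backend s3-backend
balance leastconn
server ces1 10.33.23.167:8080 check weight 2
server ces2 10.33.23.168:8080 check weight 1
server ces3 10.33.23.169:8080 check weight 1
```

With "balance leastconn", each new request goes to the server with the fewest active connections, scaled by its weight.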



  -jf



On Thu, 7 May 2020 at 14:59, Andi Christiansen <andi at christiansen.xxx> wrote:

> Hi Christian,
>
> Thanks for answering!
>
> We solved this with Lab Services a while back, and ended up setting up
> haproxys in front of the CES nodes; they handle the SSL encryption
> to the S3 API.
>
> Thanks
> Andi Christiansen
>
> Sent from my iPhone
>
> On 7 May 2020 at 12:08, Christian Vieser <
> christian.vieser at 1und1.de> wrote:
>
>
> Hi Andi,
>
> Up to now there are no instructions available on how to enable SSL on the Swift/S3 endpoints.
>
> The only thing you can do is enable SSL on the authentication path. Your connection to Swift authentication on port 35357 will then be secured, and S3 authentication arriving at http port 8080 will internally take the SSL path, if configured properly. We have successfully done that in a test environment. Be sure to use the --pwd-file option with "mmuserauth service create ..." and verify the proxy settings afterwards. It should look like this:
>
> # mmobj config list --ccrfile proxy-server.conf --section filter:s3token
>
> [filter:s3token]
> auth_uri = https://127.0.0.1:35357/
> use = egg:swift3#s3token
> insecure = true
>
> You can correct wrong settings with:
>
> # mmobj config change --ccrfile proxy-server.conf --section filter:s3token --property insecure --value true
> # mmobj config change --ccrfile proxy-server.conf --section filter:s3token --property auth_uri --value 'https://127.0.0.1:35357/'
>
> Regards,
> Christian
>
>
> > i have tried what you suggested. mmobj swift base ran fine. but after i have
> > deleted the userauth and try to set it up again with ks-ssl enabled it just
> > hangs:
> >
> > # mmuserauth service create --data-access-method object --type local
> > --enable-ks-ssl
> >
> > still waiting for it to finish, 15 mins now.. :)
>
>
> >>     Basically all i need is this:
> >>
> >>     https://s3.something.com:8080, which points
> >> to the WAN IP of the CES cluster (already configured and ready)
> >>
> >>     and endpoints like this:
> >>
> >>     None | keystone | identity | True | public | https://cluster_domain:5000/
> >>     RegionOne | swift | object-store | True | public |
> >> https://cluster_domain:443/v1/AUTH_%(tenant_id)s
> >>     RegionOne | swift | object-store | True | public |
> >> https://cluster_domain:8080/v1/AUTH_%(tenant_id)s
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
