[gpfsug-discuss] Mounting GPFS data on OpenStack VM
Jonathan Mills
jonathan.b.mills at nasa.gov
Wed Jan 18 16:10:51 GMT 2017
On 1/18/17 3:46 AM, Simon Thompson (Research Computing - IT Services) wrote:
>
>> Another option might be to NFS/CIFS export the
>> filesystems from the hypervisor to the guests via the 169.254.169.254
>> metadata address although I don't know how feasible that may or may not
>
> Doesn't the metadata IP sit on the network nodes though, and not the
> hypervisor?
Not when Neutron is in DVR mode. The request is intercepted at the
hypervisor and redirected to the neutron-ns-metadata-proxy. See below:
[root@gpcc003 ~]# ip netns exec qrouter-bc4aa217-5128-4eec-b9af-67923dae319a \
    iptables -t nat -nvL neutron-l3-agent-PREROUTING
Chain neutron-l3-agent-PREROUTING (1 references)
 pkts bytes target   prot opt in             out  source     destination
   19  1140 REDIRECT tcp  --  qr-+           *    0.0.0.0/0  169.254.169.254  tcp dpt:80 redir ports 9697
  281 12650 DNAT     all  --  rfp-bc4aa217-5 *    0.0.0.0/0  169.154.180.32   to:10.0.4.22

[root@gpcc003 ~]# ip netns exec qrouter-bc4aa217-5128-4eec-b9af-67923dae319a \
    netstat -tulpn | grep 9697
tcp   0   0 0.0.0.0:9697   0.0.0.0:*   LISTEN   28130/python2

[root@gpcc003 ~]# ps aux | grep 28130
neutron  28130  0.0  0.0 286508 41364 ?     S   Jan04  0:02 /usr/bin/python2 /bin/neutron-ns-metadata-proxy
    --pid_file=/var/lib/neutron/external/pids/bc4aa217-5128-4eec-b9af-67923dae319a.pid
    --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
    --router_id=bc4aa217-5128-4eec-b9af-67923dae319a
    --state_path=/var/lib/neutron --metadata_port=9697
    --metadata_proxy_user=989 --metadata_proxy_group=986 --verbose
    --log-file=neutron-ns-metadata-proxy-bc4aa217-5128-4eec-b9af-67923dae319a.log
    --log-dir=/var/log/neutron
root     31220  0.0  0.0 112652   972 pts/1 S+  11:08  0:00 grep --color=auto 28130
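
If one did want to try the NFS-from-the-hypervisor idea mentioned above, the
export side might look roughly like this. This is purely a sketch -- the
filesystem path and guest subnet here are made up:

```
# /etc/exports on each hypervisor -- path and subnet are hypothetical
/gpfs/fs1  10.0.4.0/24(ro,sync,root_squash,no_subtree_check)
```

Whether guests could actually reach an NFS port via 169.254.169.254 is the
open question, though: the DVR REDIRECT rule above only catches tcp dpt:80,
so you'd likely need a separate hypervisor-local address for the mounts.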
>
> We currently have created interfaces on our net nodes attached to the
> appropriate VLAN/VXLAN and then run CES on top of that.
>
> The problem with this approach is that if the same subnet exists in two
> networks, you have a conflict.
>
> I had some discussion with some of the IBM guys about the possibility of
> using a different CES protocol group and running multiple ganesha servers
> (maybe a container attached to the net?) so you could then have different
> NFS configs on different ganesha instances with CES managing a floating IP
> that could exist multiple times.
>
> There were some potential issues in the way the CES HA bits work though
> with this approach.
>
> Simon
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
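
For what it's worth, the per-instance Ganesha configs Simon describes might
look something like this -- a sketch only, with made-up paths and IPs, and
setting aside the CES HA questions he raises:

```
# /etc/ganesha/ganesha-netA.conf -- one instance per tenant network
# (file name, bind address, and export path are all hypothetical)
NFS_CORE_PARAM {
    Bind_Addr = 10.0.4.100;    # floating IP for network A (made up)
}
EXPORT {
    Export_Id = 1;
    Path = /gpfs/fs1/projA;
    Pseudo = /projA;
    Access_Type = RO;
    FSAL { Name = GPFS; }
}
```

Each instance would then serve only its own network's exports, with CES (or
something else) moving the corresponding floating IP around on failure.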
--
Jonathan Mills / jonathan.mills at nasa.gov
NASA GSFC / NCCS HPC (606.2)
Bldg 28, Rm. S230 / c. 252-412-5710