[gpfsug-discuss] How to: Show clients actively connected to a given NFS export (CES)

Alec anacreo at gmail.com
Thu Mar 23 16:47:48 GMT 2023


Whoa whoa whoa... Easy on the fighting words...

Korn shell was a shell language eventually replaced by bash???

ksh is alive and well and I believe (ksh93) superior to bash in many ways...

On Thu, Mar 23, 2023, 6:26 AM Kidger, Daniel <daniel.kidger at hpe.com> wrote:

> It is simple really….
>
>
>
> The product is called “Storage Scale”.
>
>
>
> It used to be called “Spectrum Scale” but that being abbreviated to “SS”
> was a problem, hence the recent change. :-)
>
> As an appliance it is called ESS = “Elastic Storage System”, since before
> being called Spectrum Scale the product was briefly called “Elastic Storage”.
>
> All the software RPMs, though, still start with “gpfs”, and that is what
> every techie person still calls the software.
>
> The software unpacks to /usr/lpp/mmfs/bin since the product was once the
> “MultiMedia FileSystem”.
>
> The ‘lpp’ comes from AIX. It stands for “Licensed Program Product”.
> This term is lost on everyone who comes from a Linux background.
>
> Likewise, no one from the Linux world knows what ‘adm’ is, as in
> /var/adm/ras  (or ras?)
>
>
>
> There are c. 333 ‘mm’ commands, like say ‘mmdeldisk’. None are actual
> binaries – all are Korn shell scripts (Korn was a shell language that Bash
> eventually replaced).*
>
> *(Actually, a few mm commands are written in Python, not Korn.)
>
> Most mm commands call underlying binaries, which all start with ‘ts’, e.g.
> ‘tsdeldisk’. This is, as we all know, because the product was once known as
> TigerShark, but no one could be bothered renaming stuff when “TigerShark” as
> a name was dropped 25 years ago.
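>
> A quick way to see this for yourself (a minimal sketch; the path assumes a
> default install, and mmdeldisk/tsdeldisk are just the example pair from
> above):
>
>     # Confirm an mm command is a Korn shell script, not a compiled binary
>     file /usr/lpp/mmfs/bin/mmdeldisk
>
>     # Show where the script hands off to its underlying ‘ts’ binary
>     grep -n 'tsdeldisk' /usr/lpp/mmfs/bin/mmdeldisk | head -3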
>
>
>
> Simple?
>
> *Daniel Kidger* HPC Storage Solutions Architect, EMEA
> daniel.kidger at hpe.com
>
> +44 (0)7818 522266
>
> *hpe.com* <http://www.hpe.com/>
>
>
>
>
> *From:* gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> *On Behalf Of *Lyle
> Gayne
> *Sent:* 23 March 2023 12:29
> *To:* gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> *Subject:* Re: [gpfsug-discuss] How to: Show clients actively connected
> to a given NFS export (CES)
>
>
>
> Yes, MMFS was the internal name for an offering that I believe was to be
> called VideoCharger/MediaStreamer, and it was ahead of its time and its
> market.  When MMFS was cancelled, all four of its developers at the time
> joined the GPFS effort; we had collaborated with them to sequence changes
> in the same code base for a few years by that point and knew them all well.
> ------------------------------
>
> *From:* gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> on behalf of
> Ryan Novosielski <novosirj at rutgers.edu>
> *Sent:* Wednesday, March 22, 2023 8:12 PM
> *To:* gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> *Subject:* [EXTERNAL] Re: [gpfsug-discuss] How to: Show clients actively
> connected to a given NFS export (CES)
>
>
>
>
> The product formerly known as MMFS?
>
>
>
> --
> #BlackLivesMatter
>
> ____
> || \\UTGERS,    |---------------------------*O*---------------------------
> ||_// the State  |         Ryan Novosielski - novosirj at rutgers.edu
> || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
> ||  \\    of NJ  | Office of Advanced Research Computing - MSB
> A555B, Newark
>      `'
>
>
>
> On Mar 22, 2023, at 17:30, Alec <anacreo at gmail.com> wrote:
>
>
>
> Thanks for the correction... been using GPFS so long I forgot my basic NFS
> commands.
>
>
>
> Or that it's now IBM Storage Scale and no longer Spectrum Scale or GPFS...
>
>
>
> As a note, that info is a little unreliable, but if you take daily
> snapshots and throw them all together it should give you something.
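>
> Something along these lines (a rough sketch; ‘nfsserver’ and the snapshot
> directory are placeholders, and I haven't verified it against CES) could
> run daily from cron, with the snapshots merged whenever you need the
> picture:
>
>     #!/bin/ksh
>     # Snapshot showmount output once a day; merged over time, the
>     # snapshots show which clients actually mount each export.
>     SNAPDIR=/var/tmp/showmount-snaps
>     mkdir -p "$SNAPDIR"
>     showmount -a --no-headers nfsserver > "$SNAPDIR/$(date +%Y%m%d).txt"
>
>     # Later: unique client:directory pairs seen across all snapshots
>     sort -u "$SNAPDIR"/*.txt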
>
>
>
> Alternatively, you can have most nfsd daemons log mounts and then scan the
> logs, which is a more reliable method.
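>
> For example (assuming rpc.mountd is logging mount requests to syslog; the
> message text, log path, and export path below are illustrative and vary by
> distribution):
>
>     # Clients that have issued mount requests for a given export
>     grep 'authenticated mount request' /var/log/messages |
>         grep '/path/to/export'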
>
>
>
> Alec
>
>
>
> On Wed, Mar 22, 2023, 2:00 PM Markus Stoeber <
> M.Stoeber at rz.uni-frankfurt.de> wrote:
>
> On 22.03.2023 at 21:45, Beckman, Daniel D wrote:
>
>
>
> Hi,
>
>
>
> showmount -a should do the trick; however, the manpage notes that:
>
>
>
> -a or --all
>
>               List both the client hostname or IP address and mounted
> directory in host:dir format. This info should not be considered reliable.
> See the notes on rmtab in rpc.mountd(8).
>
>
>
> showmount -d could also be an option:
>
>
>
> -d or --directories
>               List only the directories mounted by some client.
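>
> For reference, both forms take the server name as an argument (‘nfsserver’
> being a placeholder here):
>
>     showmount -a nfsserver   # host:dir pairs, best effort per the caveat above
>     showmount -d nfsserver   # only the directories mounted by some client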
>
>
>
> Best regards,
>
>
>
> Markus
>
>
>
> Thanks, but that shows the export list: a list of shares and the hosts /
> networks that have access. It does not show which of those clients are
> currently connected to a given share, i.e. actually have it mounted.
>
>
>
> *From:* gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org>
> <gpfsug-discuss-bounces at gpfsug.org> *On Behalf Of *Alec
> *Sent:* Wednesday, March 22, 2023 4:23 PM
> *To:* gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> <gpfsug-discuss at gpfsug.org>
> *Subject:* Re: [gpfsug-discuss] How to: Show clients actively connected
> to a given NFS export (CES)
>
>
>
>
> showmount -e nfsserver
>
>
>
> That's the normal way to see that for an NFS server.
>
>
>
> On Wed, Mar 22, 2023, 1:13 PM Beckman, Daniel D <dbec at loc.gov> wrote:
>
> This is probably a dumb question, but I could not find it in the
> documentation. We have certain NFS exports that we suspect are no longer
> used or needed, but before removing them we’d like to first check whether
> any clients currently have them mounted.  (Not just which clients have
> access to those exports.) Are there commands that will show this?
>
>
>
> Thanks,
>
> Daniel
>
>
>
>
>
>
>
>
> --
>
> Markus Stoeber
>
> Systems Management AIX, Linux / Storage Management / Platform Integration
>
> Hochschulrechenzentrum der Johann Wolfgang Goethe-Universitaet
>
> Dept. of Basic Services (Abt. Basisdienste)
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>

