[gpfsug-discuss] How to: Show clients actively connected to a given NFS export (CES)

Daniel McNabb dlmcnabb at gmail.com
Thu Mar 23 04:52:28 GMT 2023


The earliest name was Shark, because sharks cannot stop swimming. It
developed wide striping across many disks and became TigerShark (hey,
stripes).

When the first SP systems came out, it became MMFS (MultiMedia FS) and went
into a few trials with Tokyo Island and one of the baby Bells to stream
many videos.

VideoCharger was a quick attempt to make some hardware for things like
small hotels to stream digital content to rooms, but I think it got killed
right before launch.

After those attempts it moved from being just a research project to being a
real product and was named GPFS (General Parallel FS). This launched us
into the High Performance Computing sphere, with some of the largest SP
systems (on AIX) going into national labs and other weather and research
systems in 1998. This was the first time it had a full POSIX interface
instead of a specialized video streaming interface. It also became popular
with things like banking systems for all its replication possibilities. It
supported Oracle for many years, until they worked out their own redundancy
capabilities. The ESS hardware systems were sold as hardware embodiments
of the GPFS software. The addition of a Linux implementation expanded the
systems GPFS could run on. Many innovations were added to GPFS over the
years for handling AFM replication, NFS exporting, and distributed striping
for fast reconstruction of lost disks. (There are several other things I
cannot remember at the moment.)

After many years of having no marketing division (i.e., only word of
mouth), IBM Storage decided to get into the game and marketed it as IBM
Spectrum Scale, which has just been renamed IBM Storage Scale.

Cheers all,
Daniel McNabb.


On Wed, Mar 22, 2023 at 6:19 PM Glen Corneau <gcorneau at us.ibm.com> wrote:

> I think that product was Videocharger, but it used the multimedia file
> system striped across multiple SCSI-1 disks for streaming video performance
> IIRC!
>
> Sure we're not talking about Tigershark?  😉
>
> (and can't forget the cousin, PIOFS or Parallel I/O File System)
> ---
> Glen Corneau
> Senior, Power Partner Technical Specialist (PTS-P)
> IBM Technology, North America
> Email: gcorneau at us.ibm.com
> Cell: 512-420-7988
>
> ------------------------------
> *From:* gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> on behalf of
> Ryan Novosielski <novosirj at rutgers.edu>
> *Sent:* Wednesday, March 22, 2023 19:12
> *To:* gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> *Subject:* [EXTERNAL] Re: [gpfsug-discuss] How to: Show clients actively
> connected to a given NFS export (CES)
>
> The product formerly known as MMFS?
>
> --
> #BlackLivesMatter
> ____
> || \\UTGERS,    |---------------------------*O*---------------------------
> ||_// the State  |         Ryan Novosielski - novosirj at rutgers.edu
> || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
> ||  \\    of NJ  | Office of Advanced Research Computing - MSB A555B, Newark
>      `'
>
> On Mar 22, 2023, at 17:30, Alec <anacreo at gmail.com> wrote:
>
> Thanks for the correction... been using GPFS so long I forgot my basic NFS
> commands.
>
> Or that it's now IBM Storage Scale and no longer Spectrum Scale or GPFS...
>
> As a note, that info is a little unreliable, but if you take daily
> snapshots and throw them all together, it should give you something.
>
> Alternatively, you can have most nfsd daemons log mounts and then scan the
> logs for a more reliable method.
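>
> A minimal sketch of that snapshot approach (the script path and log file
> below are hypothetical; --no-headers is the nfs-utils showmount option)
> could run from cron:
>
>   # /etc/cron.hourly/nfs-client-snapshot  (hypothetical collector)
>   # Append timestamped client:directory pairs from showmount -a; per
>   # rpc.mountd(8) this data is best-effort, so accumulate it over time.
>   showmount -a --no-headers | \
>       sed "s/^/$(date +%F.%H%M) /" >> /var/log/nfs-client-snapshots.log
>
>   # Later, list every client:export pair ever observed:
>   awk '{print $2}' /var/log/nfs-client-snapshots.log | sort -u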
>
> Alec
>
> On Wed, Mar 22, 2023, 2:00 PM Markus Stoeber <
> M.Stoeber at rz.uni-frankfurt.de> wrote:
>
> On 22.03.2023 at 21:45, Beckman, Daniel D wrote:
>
> Hi,
>
> showmount -a should do the trick; however, the manpage notes that:
>
> -a or --all
>               List both the client hostname or IP address and mounted
> directory in host:dir format. This info should not be considered reliable.
> See the notes on rmtab in rpc.mountd(8).
>
> showmount -d could also be an option:
>
> -d or --directories
>               List only the directories mounted by some client.
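>
> For example, showmount -a output looks like this (the hostnames and
> export paths below are made up):
>
>   # showmount -a ces-node1
>   All mount points on ces-node1:
>   client1.example.com:/gpfs/fs1/export1
>   client2.example.com:/gpfs/fs1/export1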
>
> Best regards,
>
> Markus
>
> Thanks, but that shows the export list: a list of shares and the hosts /
> networks that have access. It does not show which of those clients are
> currently connected to a given share, as in have it mounted.
>
>
> *From:* gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org>
> <gpfsug-discuss-bounces at gpfsug.org> *On Behalf Of *Alec
> *Sent:* Wednesday, March 22, 2023 4:23 PM
> *To:* gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> <gpfsug-discuss at gpfsug.org>
> *Subject:* Re: [gpfsug-discuss] How to: Show clients actively connected
> to a given NFS export (CES)
>
>
> showmount -e nfsserver
>
>
> That's the normal way to see that for an NFS server.
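>
> For example, with a made-up server and export list:
>
>   # showmount -e nfsserver
>   Export list for nfsserver:
>   /gpfs/fs1/export1 10.0.0.0/24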
>
>
> On Wed, Mar 22, 2023, 1:13 PM Beckman, Daniel D <dbec at loc.gov> wrote:
>
> This is probably a dumb question, but I could not find it in the
> documentation. We have certain NFS exports that we suspect are no longer
> used or needed but before removing we’d like to first check if any clients
> are currently mounting them.  (Not just what has access to those exports.)
> Are there commands that will show this?
>
>
> Thanks,
>
> Daniel
>
>
>
>
> --
> Markus Stoeber
> Systems Management AIX, Linux / Storage Management / Platform Integration
> University Computing Center (Hochschulrechenzentrum), Johann Wolfgang Goethe-Universitaet
> Basic Services Dept.
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>