<div dir="ltr"><div dir="ltr">I'd try running mmdelnode from a different node; we use the cluster manager (shown by mmlsmgr). Make sure the target node is unreachable from the system where you run mmdelnode, either by shutting the host down or by setting up a null route on the cluster manager. After mmdelnode'ing, fix the networking and run mmaddnode.<br><div>Re CPU compatibility: I'm unsure about Zen 5, but we have GPFS 5.1.x.x running on AMD Zen 4.</div><div><br></div><div>Best,</div><div>Chris</div></div><br><div class="gmail_quote gmail_quote_container" style=""><div dir="ltr" class="gmail_attr">On Tue, Jun 24, 2025 at 12:56 PM Jonathan Buzzard <<a href="mailto:jonathan.buzzard@strath.ac.uk">jonathan.buzzard@strath.ac.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 24/06/2025 16:59, Achim Rehor wrote:<br>
><br>
> I think the node was not added to the cluster, but some changes on the <br>
> node itself took place already (like creating /var/mmfs/gen ... and <br>
> possibly others)<br>
> <br>
> There is/was a chapter/appendix in the GPFS Administration Guide telling <br>
> how to permanently remove GPFS from a node; that might help with getting <br>
> the mmaddnode command to cope with a node that already appears to belong <br>
> to a cluster ...<br>
> <br>
<br>
I removed the GPFS packages from the node, nuked /var/mmfs and /usr/lpp, and <br>
reinstalled GPFS. That cleared the issue, but it is stuck again.<br>
<br>
There is a tsgskkm process running at 100% CPU; it has been running for 15 minutes now :-(<br>
<br>
Before I go any further, can I presume GPFS is fine on Zen 5 CPUs?<br>
Specifically dual EPYC 9555. The node is running 5.2.2-1.<br>
<br>
<br>
JAB.<br>
<br>
-- <br>
Jonathan A. Buzzard Tel: +44141-5483420<br>
HPC System Administrator, ARCHIE-WeSt.<br>
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG<br>
<br>
<br>
_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at <a href="http://gpfsug.org" rel="noreferrer" target="_blank">gpfsug.org</a><br>
<a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org" rel="noreferrer" target="_blank">http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org</a><br>
</blockquote></div></div>
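<div dir="ltr">The removal procedure described at the top of this message could be sketched roughly as follows. This is only an outline, not a tested recipe: the node name is a placeholder, the commands assume root on the cluster manager (check with mmlsmgr), and a blackhole route is just one way to make the node unreachable (powering it off works too).<br>

```shell
#!/bin/sh
# Hypothetical sketch of the mmdelnode-from-another-node procedure.
# NODE is a placeholder; substitute the node you want to remove.
NODE=badnode01
NODE_IP=$(getent hosts "$NODE" | awk '{print $1}')

# Run on the cluster manager. First make the target node unreachable,
# e.g. with a blackhole route (alternative: shut the host down).
ip route add blackhole "$NODE_IP"

# Remove the node from the cluster configuration.
mmdelnode -N "$NODE"

# Later, once GPFS/networking is cleaned up on the node, undo the
# route and add the node back.
ip route del blackhole "$NODE_IP"
mmaddnode -N "$NODE"
```
</div>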
<br>