From ghemingtsai at gmail.com Fri Jun 1 01:29:38 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Thu, 31 May 2012 17:29:38 -0700 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 In-Reply-To: References: Message-ID: Hi, Jez, Sorry to bother you again. I ran "dsmmigfs add -ht=85 -lt=75 /gpfs_directory1", and restarted all the related dsm,hsm,mm processes, so finally, dsmsmj shows the directory "/gpfs_directory1" is in status "active". So I added a lot of files into directory "/gpfs_directory1" until it is 100% full. But the files in "/gpfs_directory1" just dont migrate to TSM. The "dsmerror.log" just keeps showing the following errors every 20 seconds: 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened! 05/31/12 17:25:02 ANS9590E The SOAP error information: HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused I checked the URL Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488 Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853 and followed the steps in these two pages, but I still got the same errors in dsmerror.log. Could you help please? Thanks. Grace Date: Wed, 30 May 2012 16:14:17 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS -error message: > ANS9085E > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > So. My hunch from looking at my system here is that you haven't actually > told dsm that the filesystem is to be space managed. 
> > You do that here: > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html > > Then re-run dsmmigfs query -Detail and hopefully you should see something > similar to this: > > > [root at tsm01 ~]# dsmmigfs query -Detail > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 2, Level 4.1 > Client date/time: 30-05-2012 17:13:00 > (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights > Reserved. > > > The local node has Node ID: 3 > The failover environment is deactivated on the local node. > > File System Name: /mnt/gpfs > High Threshold: 100 > Low Threshold: 80 > Premig Percentage: 20 > Quota: 999999999999999 > Stub Size: 0 > Server Name: TSM01 > Max Candidates: 100 > Max Files: 0 > Min Partial Rec Size: 0 > Min Stream File Size: 0 > MinMigFileSize: 0 > Preferred Node: tsm01 Node ID: 3 > Owner Node: tsm01 Node ID: 3 > Source Nodes: tsm01 > > Then see if your HSM GUI works properly. > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 30 May 2012 17:06 > To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E > > Hi, Jez, > > Thanks to reply my questions. > > Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". > > On GPFS server, > dsmmigfs query -Detail > => > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 3, Level 0.0 > Client date/time: 05/30/12 08:51:55 > (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights > Reserved. > > > The local node has Node ID: 1 > The failover environment is active on the local node. > The recall distribution is enabled. > > > > On GPFS server, > ps -ef | grep dsm > => > root 6157 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrootd > root 6158 1 0 May29 ? 
00:00:03 > /opt/tivoli/tsm/client/hsm/bin/dsmmonitord > root 6159 1 0 May29 ? 00:00:14 > /opt/tivoli/tsm/client/hsm/bin/dsmscoutd > root 6163 1 0 May29 ? 00:00:37 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6165 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm > root 9034 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 9035 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 14278 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/ba/bin/dsmcad > root 22236 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 22237 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj > root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= > -DDSM_LOG=/ -DDSM_DIR= > -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar > > Thanks. > > Grace > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120530/ee7a6e0b/attachment-0001.html > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri Jun 1 08:31:45 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 1 Jun 2012 07:31:45 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059ED53@WARVWEXC1.uk.deluxe-eu.com> Try this: https://www-304.ibm.com/support/docview.wss?uid=swg21459480 Though; you're evaluating GPFS + TSM with a view to purchase. Are you not receiving the level of pre-sales technical support to enable you to perform your evaluation successfully? 
(Only that we're in different time zones and your local IBM support should be much faster) From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 01 June 2012 01:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 Hi, Jez, Sorry to bother you again. I ran "dsmmigfs add -ht=85 -lt=75 /gpfs_directory1", and restarted all the related dsm,hsm,mm processes, so finally, dsmsmj shows the directory "/gpfs_directory1" is in status "active". So I added a lot of files into directory "/gpfs_directory1" until it is 100% full. But the files in "/gpfs_directory1" just dont migrate to TSM. The "dsmerror.log" just keeps showing the following errors every 20 seconds: 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened! 05/31/12 17:25:02 ANS9590E The SOAP error information: HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused I checked the URL Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488 Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853 and followed the steps in these two pages, but I still got the same errors in dsmerror.log. Could you help please? Thanks. Grace Date: Wed, 30 May 2012 16:14:17 +0000 From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: <39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com> Content-Type: text/plain; charset="windows-1252" So. My hunch from looking at my system here is that you haven't actually told dsm that the filesystem is to be space managed. 
You do that here: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html Then re-run dsmmigfs query -Detail and hopefully you should see something similar to this: [root at tsm01 ~]# dsmmigfs query -Detail IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 2, Level 4.1 Client date/time: 30-05-2012 17:13:00 (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved. The local node has Node ID: 3 The failover environment is deactivated on the local node. File System Name: /mnt/gpfs High Threshold: 100 Low Threshold: 80 Premig Percentage: 20 Quota: 999999999999999 Stub Size: 0 Server Name: TSM01 Max Candidates: 100 Max Files: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 MinMigFileSize: 0 Preferred Node: tsm01 Node ID: 3 Owner Node: tsm01 Node ID: 3 Source Nodes: tsm01 Then see if your HSM GUI works properly. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 
00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Mon Jun 4 18:36:18 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Mon, 4 Jun 2012 10:36:18 -0700 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 6, Issue 1 In-Reply-To: References: Message-ID: Hi, Jez, Sorry, we dont have the pre-sales technical support, I have tried several things, now the dsmerror.log gives me different error messages. Could you send me the following files and the commands output which work for your GPFS/HSM on your GPFS server please? dsm.opt dsm.sys /etc/inittab /etc/adsm/SpaceMan/config/DSMNodeSet /etc/adsm/SpaceMan/config/instance /etc/adsm/SpaceMan/config/DSMSDRVersion ps -ef | grep dsm ps -ef | grep hsm Thanks. 
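For anyone debugging a similar HSM setup, the files and process listings requested above can be gathered in one pass with a sketch like the following. The paths are the usual TSM client defaults (dsm.opt and dsm.sys under the backup-archive client directory) and may well differ on your install, so treat them as assumptions to adjust:

```shell
# Collect TSM/HSM client config files and daemon state for comparison.
# File locations are the common defaults -- verify them on your node.
for f in /opt/tivoli/tsm/client/ba/bin/dsm.opt \
         /opt/tivoli/tsm/client/ba/bin/dsm.sys \
         /etc/inittab \
         /etc/adsm/SpaceMan/config/DSMNodeSet \
         /etc/adsm/SpaceMan/config/instance \
         /etc/adsm/SpaceMan/config/DSMSDRVersion; do
    echo "=== $f ==="
    if [ -r "$f" ]; then cat "$f"; else echo "(missing or unreadable)"; fi
done
echo "=== dsm/hsm processes ==="
# Bracketed first letter stops grep matching its own process entry.
ps -ef | egrep '[d]sm|[h]sm'
```

Running this on both a working and a broken node and diffing the two outputs is usually the quickest way to spot a configuration mismatch.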
Grace Message: 3 > Date: Fri, 1 Jun 2012 07:31:45 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059ED53 at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > Try this: https://www-304.ibm.com/support/docview.wss?uid=swg21459480 > > Though; you're evaluating GPFS + TSM with a view to purchase. > > Are you not receiving the level of pre-sales technical support to enable > you to perform your evaluation successfully? (Only that we're in different > time zones and your local IBM support should be much faster) > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 01 June 2012 01:30 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 > > Hi, Jez, > > Sorry to bother you again. > > I ran "dsmmigfs add -ht=85 -lt=75 /gpfs_directory1", > and restarted all the related dsm,hsm,mm processes, so finally, > dsmsmj shows the directory "/gpfs_directory1" is in status "active". > > So I added a lot of files into directory "/gpfs_directory1" until it is > 100% full. > > But the files in "/gpfs_directory1" just dont migrate to TSM. > The "dsmerror.log" just keeps showing the following errors every 20 > seconds: > > 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened! > 05/31/12 17:25:02 ANS9590E The SOAP error information: > HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused > > > I checked the URL > > Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488 > Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853 > > and followed the steps in these two pages, but I still got the same errors > in dsmerror.log. > > Could you help please? Thanks. 
> > Grace > Date: Wed, 30 May 2012 16:14:17 +0000 > From: Jez Tucker > > To: gpfsug main discussion list gpfsug-discuss at gpfsug.org>> > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS -error message: > ANS9085E > Message-ID: > <39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com > 39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com>> > Content-Type: text/plain; charset="windows-1252" > > So. My hunch from looking at my system here is that you haven't actually > told dsm that the filesystem is to be space managed. > > You do that here: > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html > > Then re-run dsmmigfs query -Detail and hopefully you should see something > similar to this: > > > [root at tsm01 ~]# dsmmigfs query -Detail > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 2, Level 4.1 > Client date/time: 30-05-2012 17:13:00 > (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights > Reserved. > > > The local node has Node ID: 3 > The failover environment is deactivated on the local node. > > File System Name: /mnt/gpfs > High Threshold: 100 > Low Threshold: 80 > Premig Percentage: 20 > Quota: 999999999999999 > Stub Size: 0 > Server Name: TSM01 > Max Candidates: 100 > Max Files: 0 > Min Partial Rec Size: 0 > Min Stream File Size: 0 > MinMigFileSize: 0 > Preferred Node: tsm01 Node ID: 3 > Owner Node: tsm01 Node ID: 3 > Source Nodes: tsm01 > > Then see if your HSM GUI works properly. > > From: gpfsug-discuss-bounces at gpfsug.org gpfsug-discuss-bounces at gpfsug.org> [mailto: > gpfsug-discuss-bounces at gpfsug.org] > On Behalf Of Grace Tsai > Sent: 30 May 2012 17:06 > To: gpfsug-discuss at gpfsug.org; > Jez.Tucker at rushes.co.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E > > Hi, Jez, > > Thanks to reply my questions. 
> > Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". > > On GPFS server, > dsmmigfs query -Detail > => > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 3, Level 0.0 > Client date/time: 05/30/12 08:51:55 > (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights > Reserved. > > > The local node has Node ID: 1 > The failover environment is active on the local node. > The recall distribution is enabled. > > > > On GPFS server, > ps -ef | grep dsm > => > root 6157 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrootd > root 6158 1 0 May29 ? 00:00:03 > /opt/tivoli/tsm/client/hsm/bin/dsmmonitord > root 6159 1 0 May29 ? 00:00:14 > /opt/tivoli/tsm/client/hsm/bin/dsmscoutd > root 6163 1 0 May29 ? 00:00:37 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6165 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm > root 9034 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 9035 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 14278 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/ba/bin/dsmcad > root 22236 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 22237 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj > root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= > -DDSM_LOG=/ -DDSM_DIR= > -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar > > Thanks. > > Grace > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120530/ee7a6e0b/attachment-0001.html > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120601/4b4dd980/attachment.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 6, Issue 1 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Tue Jun 5 16:03:09 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Tue, 5 Jun 2012 16:03:09 +0100 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 12/06/2012) Message-ID: I am out of the office until 12/06/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 6, Issue 2" sent on 5/6/2012 12:00:01. This is the only notification you will receive while this person is away. From Jez.Tucker at rushes.co.uk Tue Jun 19 17:06:51 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 19 Jun 2012 16:06:51 +0000 Subject: [gpfsug-discuss] Storage Decisions Chicago 2012 Message-ID: <39571EA9316BE44899D59C7A640C13F5305A49E1@WARVWEXC1.uk.deluxe-eu.com> Is anyone going to this? Interested to note any IBM & GPFS related information. http://storagedecisions.techtarget.com/chicago/ https://twitter.com/#!/search/%23SDCHI12 --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From orlando.richards at ed.ac.uk Fri Jun 22 15:52:34 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Fri, 22 Jun 2012 15:52:34 +0100 Subject: [gpfsug-discuss] Samba mapping of "special" SID entries Message-ID: <4FE486B2.1050501@ed.ac.uk> Hi all, Has anyone bumped up against the "nfs4: special" option in GPFS/Samba deployments which manipulates how the "owner" and "group owner" (and "everybody") behaviour is mapped to ACLs when accessed via the samba stack? In particular, with the "default" setting (if one blindly follows the worked examples on this) of nfs4: special, if a user adds themselves specifically to an ACL, this creates an entry: special:@owner rather than: user:username which has the knock-on effect that if a file/folder is created under this ACL by a different owner (or if ownership changes), the person who put said ACL on to the file/folder no longer has access. Most people find this confusing (which is putting it politely). To further complicate matters, the "special" windows SID's*[1] - such as "CREATOR/OWNER" - don't seem to work properly in the ctdb/samba/gpfs stack (I don't know if they do in "normal" samba though). IBM don't support CREATOR/OWNER in SONAS*[2] - so it's not just me! So my question is - has anyone else been looking into this at all, and if so, do you have any sage words of wisdom to offer? Cheers, Orlando. *[1] http://support.microsoft.com/kb/163846 *[2] http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fadm_authorization_limitations.html -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
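For readers who have not run into the two ACL entry forms being compared here, both are visible with mmgetacl on a GPFS node. The sketch below uses a hypothetical path and username, and the exact permission-detail lines vary by GPFS version:

```shell
# Inspect the NFSv4 ACL that GPFS actually stores on a file.
mmgetacl /gpfs/projects/shared/report.doc
# An owner-relative entry -- it follows whoever owns the file:
#   special:owner@:rwxc:allow
# versus an entry tied to one named user:
#   user:orlando:rwxc:allow
# The first form stops granting that user access the moment file
# ownership changes; the second survives ownership changes, which is
# the confusing behaviour described above when Samba writes the
# owner-relative form on a user's behalf.
```

Comparing mmgetacl output before and after setting a permission from a Windows client shows which form the Samba stack is actually writing.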
From luke.raimbach at oerc.ox.ac.uk Fri Jun 22 17:33:10 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Fri, 22 Jun 2012 16:33:10 +0000 Subject: [gpfsug-discuss] Samba mapping of "special" SID entries In-Reply-To: <4FE486B2.1050501@ed.ac.uk> References: <4FE486B2.1050501@ed.ac.uk> Message-ID: Hi Orlando, I've been having success using Centrify to manage UID/GID mappings for our very small mixed cluster (7 x Linux, 1 x Windows 2008R2). I've created a map for "CREATOR / OWNER", "SYSTEM", "Domain Admins", etc. group SIDs and use the Windows node to manage ACLs. When the windows node applies the ACLs, these seem to translate successfully in to GPFS ACLs and work nicely for the mixed environment allowing users on both Linux and Windows systems to manipulate each other's files. People are mounting the FS via NFS (exported via the NSD Linux servers) and CIFS (shared from Win2k8R2). The permissions don't look friendly when you run ls -l on a Linux system over NFS but the ACLs do their job in preserving inheritable permissions, etc. If people want to see the 'real' ACL, they need to use mmgetacl on a GPFS attached node (or windows users simply click on the security tab under properties of a file). Drop me a line off-list if you want to take a look at what we've got remotely. I can run a webex session from the Windows node if you want to have a good poke around. Luke. 
-- Luke Raimbach IT Manager Oxford e-Research Centre 7 Keble Road, Oxford, OX1 3QG +44(0)1865 610639 > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Orlando Richards > Sent: 22 June 2012 15:53 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] Samba mapping of "special" SID entries > > Hi all, > > Has anyone bumped up against the "nfs4: special" option in GPFS/Samba > deployments which manipulates how the "owner" and "group owner" (and > "everybody") behaviour is mapped to ACLs when accessed via the samba > stack? > > In particular, with the "default" setting (if one blindly follows the worked > examples on this) of nfs4: special, if a user adds themselves specifically to > an ACL, this creates an entry: > > special:@owner > > rather than: > > user:username > > which has the knock-on effect that if a file/folder is created under this ACL > by a different owner (or if ownership changes), the person who put said ACL > on to the file/folder no longer has access. Most people find this confusing > (which is putting it politely). > > To further complicate matters, the "special" windows SID's*[1] - such as > "CREATOR/OWNER" - don't seem to work properly in the ctdb/samba/gpfs > stack (I don't know if they do in "normal" samba though). IBM don't support > CREATOR/OWNER in SONAS*[2] - so it's not just me! > > So my question is - has anyone else been looking into this at all, and if so, > do you have any sage words of wisdom to offer? > > Cheers, > Orlando. > > > *[1] http://support.microsoft.com/kb/163846 > *[2] > http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fc > om.ibm.sonas.doc%2Fadm_authorization_limitations.html > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Jez.Tucker at rushes.co.uk Mon Jun 25 14:52:47 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 25 Jun 2012 13:52:47 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? Message-ID: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Curiosity... How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration? Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:08:14 2012 From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson) Date: Mon, 25 Jun 2012 17:08:14 +0300 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Message-ID: On Mon, Jun 25, 2012 at 4:52 PM, Jez Tucker wrote: > ? How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in > any configuration? Native GPFS on Linux @KAUST. No Windows or OS X as far as I know. ~jonathon From Jez.Tucker at rushes.co.uk Mon Jun 25 15:08:14 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 25 Jun 2012 14:08:14 +0000 Subject: [gpfsug-discuss] HPC people - interconnects Message-ID: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> Do you all use IB? Has anyone tried RDMA over 10G via the OFED stack? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:09:13 2012 From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson) Date: Mon, 25 Jun 2012 17:09:13 +0300 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Message-ID: On Mon, Jun 25, 2012 at 5:08 PM, Jonathon Anderson wrote: > No Windows or OS X as far as I know. Though, to be completely accurate, a number of our users mount their GPFS homedirs via SSHFS. So that's kind of like having Windows and OS X clients... ~jonathon From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:11:25 2012 From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson) Date: Mon, 25 Jun 2012 17:11:25 +0300 Subject: [gpfsug-discuss] HPC people - interconnects In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> Message-ID: On Mon, Jun 25, 2012 at 5:08 PM, Jez Tucker wrote: > Do you all use IB? We're all 1/10GbE here. ~jonathon From arifali1 at gmail.com Mon Jun 25 15:13:32 2012 From: arifali1 at gmail.com (Arif Ali) Date: Mon, 25 Jun 2012 15:13:32 +0100 Subject: [gpfsug-discuss] HPC people - interconnects In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FE8720C.7040007@gmail.com> On 25/06/12 15:08, Jez Tucker wrote: > > Do you all use IB? > > Has anyone tried RDMA over 10G via the OFED stack? > > Most of our customers we use RDMA over verbs Is this the same thing you mentioned a few weeks ago with respect to ROCE. Does gpfs even support this? -- regards, Arif -------------- next part -------------- An HTML attachment was scrubbed... 
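For reference, the verbs RDMA support mentioned above is switched on per cluster with mmchconfig. The sketch below is illustrative only: the HCA device/port string is an example, not a recommendation, and nodes typically need GPFS restarted before the change takes effect:

```shell
# Enable GPFS RDMA over InfiniBand verbs (assumes OFED is installed
# and the GPFS build on these nodes includes verbs support).
mmchconfig verbsRdma=enable
# Tell GPFS which HCA device/port pairs to use -- example device name:
mmchconfig verbsPorts="mlx4_0/1"
# After restarting GPFS on the affected nodes, confirm the settings:
mmlsconfig | grep verbs
```

Whether the same path works over 10GbE (RoCE) depends on the GPFS release and OFED stack in use, so the question raised above is worth putting to IBM support before relying on it.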
URL: From luke.raimbach at oerc.ox.ac.uk Mon Jun 25 15:22:20 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Mon, 25 Jun 2012 14:22:20 +0000 Subject: Re: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Message-ID: We have: 2 Physical Linux GPFS NSD servers (IBM), 4 Physical VMware ESXi servers (target the cNFS GPFS shares as datastores) (HP) 5 Virtual Machines with native GPFS clients, 1 Virtual Windows 2008R2 with native GPFS client, Approx 30 Linux machines targeting cNFS re-exported from 4 of the Linux VMs is a separate cNFS failure group, Approx 20 Desktops using the Windows 2008R2 server shares of the same file system, Centrify stitches the whole lot together and keeps UID/GID maps neat and tidy and consistent with our Active Directory, 10GbE at the back end for GPFS node communications, 1GbE for exporting to other servers / desktops in our building. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 25 June 2012 14:53 To: gpfsug main discussion list Subject: [gpfsug-discuss] Your GPFS O/S support? Curiosity... How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration? Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL:
From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... 
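Condensing the setup discussed in this thread, a rough sketch of registering
a filesystem for space management and the checks to run afterwards. The
thresholds and filesystem path are illustrative only, and the block is
guarded so it is a no-op on hosts without the TSM HSM client installed:

```shell
# Register the GPFS filesystem for TSM space management.
# -ht/-lt are illustrative: start threshold migration at 85% full,
# migrate down to 75%. Skipped entirely if dsmmigfs is not installed.
if command -v dsmmigfs >/dev/null 2>&1; then
    dsmmigfs add -ht=85 -lt=75 /gpfs_directory1

    # The filesystem should now be listed with its thresholds.
    dsmmigfs query -Detail
fi

# Threshold migration only fires if the HSM daemons are running.
ps -ef | grep -E 'dsm(rootd|monitord|scoutd|recalld)' | grep -v grep \
    || echo "no HSM daemons running"
```

On a healthy node the process check should show at least dsmmonitord,
dsmscoutd and dsmrecalld, matching the ps listing Grace posted above.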
From ghemingtsai at gmail.com Mon Jun 4 18:36:18 2012
From: ghemingtsai at gmail.com (Grace Tsai)
Date: Mon, 4 Jun 2012 10:36:18 -0700
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 6, Issue 1
In-Reply-To: References:
Message-ID:

Hi, Jez,

Sorry, we don't have the pre-sales technical support. I have tried several
things, and now the dsmerror.log gives me different error messages.

Could you send me the following files, and the output of the following
commands, from a GPFS server where GPFS/HSM works please?

dsm.opt
dsm.sys
/etc/inittab
/etc/adsm/SpaceMan/config/DSMNodeSet
/etc/adsm/SpaceMan/config/instance
/etc/adsm/SpaceMan/config/DSMSDRVersion
ps -ef | grep dsm
ps -ef | grep hsm

Thanks.

Grace

Message: 3
> Date: Fri, 1 Jun 2012 07:31:45 +0000
> From: Jez Tucker
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9
> Message-ID: <39571EA9316BE44899D59C7A640C13F53059ED53 at WARVWEXC1.uk.deluxe-eu.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Try this: https://www-304.ibm.com/support/docview.wss?uid=swg21459480
>
> Though; you're evaluating GPFS + TSM with a view to purchase.
>
> Are you not receiving the level of pre-sales technical support to enable
> you to perform your evaluation successfully? (Only that we're in different
> time zones and your local IBM support should be much faster)

------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

End of gpfsug-discuss Digest, Vol 6, Issue 1
********************************************

From ANDREWD at uk.ibm.com Tue Jun 5 16:03:09 2012
From: ANDREWD at uk.ibm.com (Andrew Downes1)
Date: Tue, 5 Jun 2012 16:03:09 +0100
Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 12/06/2012)
Message-ID:

I am out of the office until 12/06/2012.

In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com
07710-981527

In case of urgency, please contact our manager Andy Jenkins
mailto:JENKINSA at uk.ibm.com 07921-108940

Note: This is an automated response to your message "gpfsug-discuss
Digest, Vol 6, Issue 2" sent on 5/6/2012 12:00:01.

This is the only notification you will receive while this person is away.
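The per-node diagnostics Grace asks for can be gathered in a single pass.
A hedged sketch: the dsm.opt/dsm.sys paths below are the common defaults for
a Linux TSM 6.x client and may differ on other installs, and missing files
are simply reported rather than treated as fatal:

```shell
# Bundle the HSM configuration and process state into one directory,
# suitable for posting to the list or sending to support.
outdir="/tmp/hsm-diag.$$"
mkdir -p "$outdir"

# Common TSM 6.x client locations on Linux (adjust if installed elsewhere).
for f in /opt/tivoli/tsm/client/ba/bin/dsm.opt \
         /opt/tivoli/tsm/client/ba/bin/dsm.sys \
         /etc/inittab \
         /etc/adsm/SpaceMan/config/DSMNodeSet \
         /etc/adsm/SpaceMan/config/instance \
         /etc/adsm/SpaceMan/config/DSMSDRVersion; do
    if [ -r "$f" ]; then
        cp "$f" "$outdir/"
    else
        echo "missing: $f" >> "$outdir/missing.txt"
    fi
done

# Capture the dsm/hsm process listings.
ps -ef | grep -E 'dsm|hsm' | grep -v grep > "$outdir/processes.txt" || true

echo "diagnostics collected in $outdir"
```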
From Jez.Tucker at rushes.co.uk Tue Jun 19 17:06:51 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Tue, 19 Jun 2012 16:06:51 +0000
Subject: [gpfsug-discuss] Storage Decisions Chicago 2012
Message-ID: <39571EA9316BE44899D59C7A640C13F5305A49E1@WARVWEXC1.uk.deluxe-eu.com>

Is anyone going to this? Interested to note any IBM & GPFS related
information.

http://storagedecisions.techtarget.com/chicago/
https://twitter.com/#!/search/%23SDCHI12

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

From orlando.richards at ed.ac.uk Fri Jun 22 15:52:34 2012
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Fri, 22 Jun 2012 15:52:34 +0100
Subject: [gpfsug-discuss] Samba mapping of "special" SID entries
Message-ID: <4FE486B2.1050501@ed.ac.uk>

Hi all,

Has anyone bumped up against the "nfs4: special" option in GPFS/Samba
deployments, which manipulates how the "owner", "group owner" (and
"everybody") behaviour is mapped to ACLs when accessed via the Samba stack?

In particular, with the "default" setting (if one blindly follows the
worked examples on this) of nfs4: special, if a user adds themselves
specifically to an ACL, this creates an entry:

special:@owner

rather than:

user:username

which has the knock-on effect that if a file/folder is created under this
ACL by a different owner (or if ownership changes), the person who put
said ACL on to the file/folder no longer has access. Most people find this
confusing (which is putting it politely).

To further complicate matters, the "special" Windows SIDs*[1] - such as
"CREATOR/OWNER" - don't seem to work properly in the ctdb/samba/gpfs stack
(I don't know if they do in "normal" Samba though). IBM don't support
CREATOR/OWNER in SONAS*[2] - so it's not just me!

So my question is - has anyone else been looking into this at all, and if
so, do you have any sage words of wisdom to offer?

Cheers,
Orlando.
*[1] http://support.microsoft.com/kb/163846
*[2] http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fadm_authorization_limitations.html

--
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

From luke.raimbach at oerc.ox.ac.uk Fri Jun 22 17:33:10 2012
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Fri, 22 Jun 2012 16:33:10 +0000
Subject: [gpfsug-discuss] Samba mapping of "special" SID entries
In-Reply-To: <4FE486B2.1050501@ed.ac.uk>
References: <4FE486B2.1050501@ed.ac.uk>
Message-ID:

Hi Orlando,

I've been having success using Centrify to manage UID/GID mappings for our
very small mixed cluster (7 x Linux, 1 x Windows 2008R2). I've created a
map for the "CREATOR / OWNER", "SYSTEM", "Domain Admins", etc. group SIDs
and use the Windows node to manage ACLs.

When the Windows node applies the ACLs, these seem to translate
successfully into GPFS ACLs and work nicely for the mixed environment,
allowing users on both Linux and Windows systems to manipulate each
other's files. People are mounting the FS via NFS (exported via the NSD
Linux servers) and CIFS (shared from Win2k8R2).

The permissions don't look friendly when you run ls -l on a Linux system
over NFS, but the ACLs do their job in preserving inheritable permissions,
etc. If people want to see the 'real' ACL, they need to use mmgetacl on a
GPFS attached node (or Windows users simply click on the security tab
under properties of a file).

Drop me a line off-list if you want to take a look at what we've got
remotely. I can run a webex session from the Windows node if you want to
have a good poke around.

Luke.
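To make the behaviour Orlando describes concrete, here is an illustrative
pair of GPFS NFSv4 ACL entries (mmgetacl-style syntax, abbreviated - the
per-permission flag lines are omitted, and the username is made up):

```
# What gets stored under "nfs4: special" when a user adds themselves
# to an ACL - the grant lands in the generic owner slot:
special:owner@:rwxc:allow
# ...so it follows whoever *currently* owns the file; if ownership
# changes, the user who made the edit silently loses access.

# What most people expect - a named-user entry, which survives
# ownership changes:
user:orlando:rwxc:allow
```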
--
Luke Raimbach
IT Manager
Oxford e-Research Centre
7 Keble Road, Oxford, OX1 3QG
+44(0)1865 610639

> -----Original Message-----
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Orlando Richards
> Sent: 22 June 2012 15:53
> To: gpfsug-discuss at gpfsug.org
> Subject: [gpfsug-discuss] Samba mapping of "special" SID entries
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From Jez.Tucker at rushes.co.uk Mon Jun 25 14:52:47 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Mon, 25 Jun 2012 13:52:47 +0000
Subject: [gpfsug-discuss] Your GPFS O/S support?
Message-ID: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>

Curiosity...

How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS),
in any configuration?

Jez

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:08:14 2012
From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson)
Date: Mon, 25 Jun 2012 17:08:14 +0300
Subject: [gpfsug-discuss] Your GPFS O/S support?
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

On Mon, Jun 25, 2012 at 4:52 PM, Jez Tucker wrote:
> How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS),
> in any configuration?

Native GPFS on Linux @KAUST. No Windows or OS X as far as I know.

~jonathon

From Jez.Tucker at rushes.co.uk Mon Jun 25 15:08:14 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Mon, 25 Jun 2012 14:08:14 +0000
Subject: [gpfsug-discuss] HPC people - interconnects
Message-ID: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>

Do you all use IB?

Has anyone tried RDMA over 10G via the OFED stack?

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)
From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:09:13 2012
From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson)
Date: Mon, 25 Jun 2012 17:09:13 +0300
Subject: [gpfsug-discuss] Your GPFS O/S support?
In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

On Mon, Jun 25, 2012 at 5:08 PM, Jonathon Anderson wrote:
> No Windows or OS X as far as I know.

Though, to be completely accurate, a number of our users mount their GPFS
homedirs via SSHFS. So that's kind of like having Windows and OS X
clients...

~jonathon

From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:11:25 2012
From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson)
Date: Mon, 25 Jun 2012 17:11:25 +0300
Subject: [gpfsug-discuss] HPC people - interconnects
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

On Mon, Jun 25, 2012 at 5:08 PM, Jez Tucker wrote:
> Do you all use IB?

We're all 1/10GbE here.

~jonathon

From arifali1 at gmail.com Mon Jun 25 15:13:32 2012
From: arifali1 at gmail.com (Arif Ali)
Date: Mon, 25 Jun 2012 15:13:32 +0100
Subject: [gpfsug-discuss] HPC people - interconnects
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
Message-ID: <4FE8720C.7040007@gmail.com>

On 25/06/12 15:08, Jez Tucker wrote:
> Do you all use IB?
>
> Has anyone tried RDMA over 10G via the OFED stack?

For most of our customers we use RDMA over verbs. Is this the same thing
you mentioned a few weeks ago with respect to RoCE? Does GPFS even support
this?

--
regards,
Arif
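For reference on the verbs side of Jez's question: GPFS switches its RDMA
transport on with the verbsRdma and verbsPorts configuration options. A
hedged sketch only - the device/port name is illustrative, and whether this
works over 10GbE (RoCE) rather than InfiniBand is exactly the open question
in this thread, so treat it as an IB example. Guarded so it is a no-op on
hosts without GPFS installed:

```shell
# Enable the RDMA (verbs) transport in GPFS and point it at an HCA port.
# mlx4_0/1 is an illustrative device/port pair; list yours with ibv_devinfo.
if command -v mmchconfig >/dev/null 2>&1; then
    mmchconfig verbsRdma=enable
    mmchconfig verbsPorts="mlx4_0/1"

    # Confirm the settings landed in the cluster configuration.
    mmlsconfig | grep -i verbs
fi
```

The settings take effect when the GPFS daemon is restarted on the
affected nodes.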
From luke.raimbach at oerc.ox.ac.uk Mon Jun 25 15:22:20 2012
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Mon, 25 Jun 2012 14:22:20 +0000
Subject: [gpfsug-discuss] Your GPFS O/S support?
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

We have:

2 Physical Linux GPFS NSD servers (IBM),
4 Physical VMware ESXi servers (target the cNFS GPFS shares as datastores) (HP),
5 Virtual Machines with native GPFS clients,
1 Virtual Windows 2008R2 with native GPFS client,
Approx 30 Linux machines targeting cNFS re-exported from 4 of the Linux
VMs in a separate cNFS failure group,
Approx 20 Desktops using the Windows 2008R2 server shares of the same
file system.

Centrify stitches the whole lot together and keeps UID/GID maps neat and
tidy and consistent with our Active Directory.

10GbE at the back end for GPFS node communications, 1GbE for exporting to
other servers / desktops in our building.

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker
Sent: 25 June 2012 14:53
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Your GPFS O/S support?

Curiosity...

How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS),
in any configuration?

Jez
> > But the files in "/gpfs_directory1" just dont migrate to TSM. > The "dsmerror.log" just keeps showing the following errors every 20 > seconds: > > 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened! > 05/31/12 17:25:02 ANS9590E The SOAP error information: > HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused > > > I checked the URL > > Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488 > Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853 > > and followed the steps in these two pages, but I still got the same errors > in dsmerror.log. > > Could you help please? Thanks. > > Grace > Date: Wed, 30 May 2012 16:14:17 +0000 > From: Jez Tucker > > To: gpfsug main discussion list gpfsug-discuss at gpfsug.org>> > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS -error message: > ANS9085E > Message-ID: > <39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com > 39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com>> > Content-Type: text/plain; charset="windows-1252" > > So. My hunch from looking at my system here is that you haven't actually > told dsm that the filesystem is to be space managed. > > You do that here: > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html > > Then re-run dsmmigfs query -Detail and hopefully you should see something > similar to this: > > > [root at tsm01 ~]# dsmmigfs query -Detail > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 2, Level 4.1 > Client date/time: 30-05-2012 17:13:00 > (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights > Reserved. > > > The local node has Node ID: 3 > The failover environment is deactivated on the local node. 
> > File System Name: /mnt/gpfs > High Threshold: 100 > Low Threshold: 80 > Premig Percentage: 20 > Quota: 999999999999999 > Stub Size: 0 > Server Name: TSM01 > Max Candidates: 100 > Max Files: 0 > Min Partial Rec Size: 0 > Min Stream File Size: 0 > MinMigFileSize: 0 > Preferred Node: tsm01 Node ID: 3 > Owner Node: tsm01 Node ID: 3 > Source Nodes: tsm01 > > Then see if your HSM GUI works properly. > > From: gpfsug-discuss-bounces at gpfsug.org gpfsug-discuss-bounces at gpfsug.org> [mailto: > gpfsug-discuss-bounces at gpfsug.org] > On Behalf Of Grace Tsai > Sent: 30 May 2012 17:06 > To: gpfsug-discuss at gpfsug.org; > Jez.Tucker at rushes.co.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E > > Hi, Jez, > > Thanks to reply my questions. > > Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". > > On GPFS server, > dsmmigfs query -Detail > => > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 3, Level 0.0 > Client date/time: 05/30/12 08:51:55 > (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights > Reserved. > > > The local node has Node ID: 1 > The failover environment is active on the local node. > The recall distribution is enabled. > > > > On GPFS server, > ps -ef | grep dsm > => > root 6157 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrootd > root 6158 1 0 May29 ? 00:00:03 > /opt/tivoli/tsm/client/hsm/bin/dsmmonitord > root 6159 1 0 May29 ? 00:00:14 > /opt/tivoli/tsm/client/hsm/bin/dsmscoutd > root 6163 1 0 May29 ? 00:00:37 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6165 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm > root 9034 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 9035 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 14278 1 0 May29 ? 
00:00:00 > /opt/tivoli/tsm/client/ba/bin/dsmcad > root 22236 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 22237 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj > root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= > -DDSM_LOG=/ -DDSM_DIR= > -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar > > Thanks. > > Grace > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120530/ee7a6e0b/attachment-0001.html > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120601/4b4dd980/attachment.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 6, Issue 1 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Tue Jun 5 16:03:09 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Tue, 5 Jun 2012 16:03:09 +0100 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 12/06/2012) Message-ID: I am out of the office until 12/06/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 6, Issue 2" sent on 5/6/2012 12:00:01. This is the only notification you will receive while this person is away. 
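[Editor's note] For scripted checking, the `dsmmigfs query -Detail` output quoted in the thread above can be parsed to confirm that a filesystem is actually space-managed and that its thresholds match what was set with `dsmmigfs add -ht/-lt`. This is a minimal illustrative sketch, not part of TSM: the field names are taken from the sample output quoted in the thread, while the helper functions themselves are hypothetical.

```python
# Parse the "Key: Value" stanzas printed by `dsmmigfs query -Detail`
# (field names as in the sample output quoted in the thread) and report
# whether a given filesystem is space-managed with expected thresholds.

def parse_dsmmigfs_detail(output):
    """Return {fs_name: {field: value}} for each 'File System Name:' stanza."""
    filesystems = {}
    current = None
    for line in output.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "File System Name":
            # A new stanza starts; subsequent fields belong to this filesystem.
            current = filesystems.setdefault(value, {})
        elif current is not None:
            current[key] = value
    return filesystems

def is_space_managed(filesystems, fs, high=None, low=None):
    """True if fs has a stanza and (optionally) the expected thresholds."""
    info = filesystems.get(fs)
    if info is None:
        return False  # no stanza: dsm was never told to manage this fs
    if high is not None and info.get("High Threshold") != str(high):
        return False
    if low is not None and info.get("Low Threshold") != str(low):
        return False
    return True

# Abbreviated sample, copied from the output shown earlier in the thread.
SAMPLE = """\
File System Name: /mnt/gpfs
High Threshold: 100
Low Threshold: 80
Premig Percentage: 20
Server Name: TSM01
"""

fs_map = parse_dsmmigfs_detail(SAMPLE)
print(is_space_managed(fs_map, "/mnt/gpfs", high=100, low=80))  # True
print(is_space_managed(fs_map, "/gpfs_directory1"))             # False
```

A missing stanza (as in Grace's first `dsmmigfs query -Detail` output, which listed no filesystem at all) is exactly the symptom Jez diagnosed: the filesystem was never added for space management.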
From Jez.Tucker at rushes.co.uk Tue Jun 19 17:06:51 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 19 Jun 2012 16:06:51 +0000 Subject: [gpfsug-discuss] Storage Decisions Chicago 2012 Message-ID: <39571EA9316BE44899D59C7A640C13F5305A49E1@WARVWEXC1.uk.deluxe-eu.com> Is anyone going to this? Interested to note any IBM & GPFS related information. http://storagedecisions.techtarget.com/chicago/ https://twitter.com/#!/search/%23SDCHI12 --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Fri Jun 22 15:52:34 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Fri, 22 Jun 2012 15:52:34 +0100 Subject: [gpfsug-discuss] Samba mapping of "special" SID entries Message-ID: <4FE486B2.1050501@ed.ac.uk> Hi all, Has anyone bumped up against the "nfs4: special" option in GPFS/Samba deployments which manipulates how the "owner" and "group owner" (and "everybody") behaviour is mapped to ACLs when accessed via the samba stack? In particular, with the "default" setting (if one blindly follows the worked examples on this) of nfs4: special, if a user adds themselves specifically to an ACL, this creates an entry: special:@owner rather than: user:username which has the knock-on effect that if a file/folder is created under this ACL by a different owner (or if ownership changes), the person who put said ACL on to the file/folder no longer has access. Most people find this confusing (which is putting it politely). To further complicate matters, the "special" windows SID's*[1] - such as "CREATOR/OWNER" - don't seem to work properly in the ctdb/samba/gpfs stack (I don't know if they do in "normal" samba though). IBM don't support CREATOR/OWNER in SONAS*[2] - so it's not just me! So my question is - has anyone else been looking into this at all, and if so, do you have any sage words of wisdom to offer? Cheers, Orlando. 
*[1] http://support.microsoft.com/kb/163846 *[2] http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fadm_authorization_limitations.html -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From luke.raimbach at oerc.ox.ac.uk Fri Jun 22 17:33:10 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Fri, 22 Jun 2012 16:33:10 +0000 Subject: [gpfsug-discuss] Samba mapping of "special" SID entries In-Reply-To: <4FE486B2.1050501@ed.ac.uk> References: <4FE486B2.1050501@ed.ac.uk> Message-ID: Hi Orlando, I've been having success using Centrify to manage UID/GID mappings for our very small mixed cluster (7 x Linux, 1 x Windows 2008R2). I've created a map for "CREATOR / OWNER", "SYSTEM", "Domain Admins", etc. group SIDs and use the Windows node to manage ACLs. When the windows node applies the ACLs, these seem to translate successfully in to GPFS ACLs and work nicely for the mixed environment allowing users on both Linux and Windows systems to manipulate each other's files. People are mounting the FS via NFS (exported via the NSD Linux servers) and CIFS (shared from Win2k8R2). The permissions don't look friendly when you run ls -l on a Linux system over NFS but the ACLs do their job in preserving inheritable permissions, etc. If people want to see the 'real' ACL, they need to use mmgetacl on a GPFS attached node (or windows users simply click on the security tab under properties of a file). Drop me a line off-list if you want to take a look at what we've got remotely. I can run a webex session from the Windows node if you want to have a good poke around. Luke. 
-- Luke Raimbach IT Manager Oxford e-Research Centre 7 Keble Road, Oxford, OX1 3QG +44(0)1865 610639 > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Orlando Richards > Sent: 22 June 2012 15:53 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] Samba mapping of "special" SID entries > > Hi all, > > Has anyone bumped up against the "nfs4: special" option in GPFS/Samba > deployments which manipulates how the "owner" and "group owner" (and > "everybody") behaviour is mapped to ACLs when accessed via the samba > stack? > > In particular, with the "default" setting (if one blindly follows the worked > examples on this) of nfs4: special, if a user adds themselves specifically to > an ACL, this creates an entry: > > special:@owner > > rather than: > > user:username > > which has the knock-on effect that if a file/folder is created under this ACL > by a different owner (or if ownership changes), the person who put said ACL > on to the file/folder no longer has access. Most people find this confusing > (which is putting it politely). > > To further complicate matters, the "special" windows SID's*[1] - such as > "CREATOR/OWNER" - don't seem to work properly in the ctdb/samba/gpfs > stack (I don't know if they do in "normal" samba though). IBM don't support > CREATOR/OWNER in SONAS*[2] - so it's not just me! > > So my question is - has anyone else been looking into this at all, and if so, > do you have any sage words of wisdom to offer? > > Cheers, > Orlando. > > > *[1] http://support.microsoft.com/kb/163846 > *[2] > http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fc > om.ibm.sonas.doc%2Fadm_authorization_limitations.html > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Jez.Tucker at rushes.co.uk Mon Jun 25 14:52:47 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 25 Jun 2012 13:52:47 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? Message-ID: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Curiosity... How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration? Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:08:14 2012 From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson) Date: Mon, 25 Jun 2012 17:08:14 +0300 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Message-ID: On Mon, Jun 25, 2012 at 4:52 PM, Jez Tucker wrote: > ? How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in > any configuration? Native GPFS on Linux @KAUST. No Windows or OS X as far as I know. ~jonathon From Jez.Tucker at rushes.co.uk Mon Jun 25 15:08:14 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 25 Jun 2012 14:08:14 +0000 Subject: [gpfsug-discuss] HPC people - interconnects Message-ID: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> Do you all use IB? Has anyone tried RDMA over 10G via the OFED stack? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:09:13 2012 From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson) Date: Mon, 25 Jun 2012 17:09:13 +0300 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Message-ID: On Mon, Jun 25, 2012 at 5:08 PM, Jonathon Anderson wrote: > No Windows or OS X as far as I know. Though, to be completely accurate, a number of our users mount their GPFS homedirs via SSHFS. So that's kind of like having Windows and OS X clients... ~jonathon From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:11:25 2012 From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson) Date: Mon, 25 Jun 2012 17:11:25 +0300 Subject: [gpfsug-discuss] HPC people - interconnects In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> Message-ID: On Mon, Jun 25, 2012 at 5:08 PM, Jez Tucker wrote: > Do you all use IB? We're all 1/10GbE here. ~jonathon From arifali1 at gmail.com Mon Jun 25 15:13:32 2012 From: arifali1 at gmail.com (Arif Ali) Date: Mon, 25 Jun 2012 15:13:32 +0100 Subject: [gpfsug-discuss] HPC people - interconnects In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FE8720C.7040007@gmail.com> On 25/06/12 15:08, Jez Tucker wrote: > > Do you all use IB? > > Has anyone tried RDMA over 10G via the OFED stack? > > Most of our customers we use RDMA over verbs Is this the same thing you mentioned a few weeks ago with respect to ROCE. Does gpfs even support this? -- regards, Arif -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luke.raimbach at oerc.ox.ac.uk Mon Jun 25 15:22:20 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Mon, 25 Jun 2012 14:22:20 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com> Message-ID: We have: 2 Physical Linux GPFS NSD servers (IBM), 4 Physical VMware ESXi servers (target the cNFS GPFS shares as datastores) (HP) 5 Virtual Machines with native GPFS clients, 1 Virtual Windows 2008R2 with native GPFS client, Approx 30 Linux machines targeting cNFS re-exported from 4 of the Linux VMs is a separate cNFS failure group, Approx 20 Desktops using the Windows 2008R2 server shares of the same file system, Centrify stitches the whole lot together and keeps UID/GID maps neat and tidy and consistent with our Active Directory, 10GbE at the back end for GPFS node communications, 1GbE for exporting to other servers / desktops in our building. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 25 June 2012 14:53 To: gpfsug main discussion list Subject: [gpfsug-discuss] Your GPFS O/S support? Curiosity... How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration? Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL:
So I added a lot of files into directory "/gpfs_directory1" until it is 100% full. But the files in "/gpfs_directory1" just dont migrate to TSM. The "dsmerror.log" just keeps showing the following errors every 20 seconds: 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened! 05/31/12 17:25:02 ANS9590E The SOAP error information: HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused I checked the URL Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488 Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853 and followed the steps in these two pages, but I still got the same errors in dsmerror.log. Could you help please? Thanks. Grace Date: Wed, 30 May 2012 16:14:17 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS -error message: > ANS9085E > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > So. My hunch from looking at my system here is that you haven't actually > told dsm that the filesystem is to be space managed. > > You do that here: > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html > > Then re-run dsmmigfs query -Detail and hopefully you should see something > similar to this: > > > [root at tsm01 ~]# dsmmigfs query -Detail > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 2, Level 4.1 > Client date/time: 30-05-2012 17:13:00 > (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights > Reserved. > > > The local node has Node ID: 3 > The failover environment is deactivated on the local node. 
> > File System Name: /mnt/gpfs > High Threshold: 100 > Low Threshold: 80 > Premig Percentage: 20 > Quota: 999999999999999 > Stub Size: 0 > Server Name: TSM01 > Max Candidates: 100 > Max Files: 0 > Min Partial Rec Size: 0 > Min Stream File Size: 0 > MinMigFileSize: 0 > Preferred Node: tsm01 Node ID: 3 > Owner Node: tsm01 Node ID: 3 > Source Nodes: tsm01 > > Then see if your HSM GUI works properly. > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 30 May 2012 17:06 > To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E > > Hi, Jez, > > Thanks to reply my questions. > > Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". > > On GPFS server, > dsmmigfs query -Detail > => > IBM Tivoli Storage Manager > Command Line Space Management Client Interface > Client Version 6, Release 3, Level 0.0 > Client date/time: 05/30/12 08:51:55 > (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights > Reserved. > > > The local node has Node ID: 1 > The failover environment is active on the local node. > The recall distribution is enabled. > > > > On GPFS server, > ps -ef | grep dsm > => > root 6157 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrootd > root 6158 1 0 May29 ? 00:00:03 > /opt/tivoli/tsm/client/hsm/bin/dsmmonitord > root 6159 1 0 May29 ? 00:00:14 > /opt/tivoli/tsm/client/hsm/bin/dsmscoutd > root 6163 1 0 May29 ? 00:00:37 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6165 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm > root 9034 1 0 May29 ? 00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 9035 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 14278 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/ba/bin/dsmcad > root 22236 1 0 May29 ? 
00:00:35 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 22237 1 0 May29 ? 00:00:00 > /opt/tivoli/tsm/client/hsm/bin/dsmrecalld > root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj > root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= > -DDSM_LOG=/ -DDSM_DIR= > -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar > > Thanks. > > Grace > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120530/ee7a6e0b/attachment-0001.html > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri Jun 1 08:31:45 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 1 Jun 2012 07:31:45 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059ED53@WARVWEXC1.uk.deluxe-eu.com> Try this: https://www-304.ibm.com/support/docview.wss?uid=swg21459480 Though; you're evaluating GPFS + TSM with a view to purchase. Are you not receiving the level of pre-sales technical support to enable you to perform your evaluation successfully? (Only that we're in different time zones and your local IBM support should be much faster) From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 01 June 2012 01:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 Hi, Jez, Sorry to bother you again. I ran "dsmmigfs add -ht=85 -lt=75 /gpfs_directory1", and restarted all the related dsm,hsm,mm processes, so finally, dsmsmj shows the directory "/gpfs_directory1" is in status "active". So I added a lot of files into directory "/gpfs_directory1" until it is 100% full. But the files in "/gpfs_directory1" just dont migrate to TSM. 
The "dsmerror.log" just keeps showing the following errors every 20 seconds: 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened! 05/31/12 17:25:02 ANS9590E The SOAP error information: HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused I checked the URL Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488 Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853 and followed the steps in these two pages, but I still got the same errors in dsmerror.log. Could you help please? Thanks. Grace Date: Wed, 30 May 2012 16:14:17 +0000 From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: <39571EA9316BE44899D59C7A640C13F53059E38F at WARVWEXC1.uk.deluxe-eu.com> Content-Type: text/plain; charset="windows-1252" So. My hunch from looking at my system here is that you haven't actually told dsm that the filesystem is to be space managed. You do that here: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html Then re-run dsmmigfs query -Detail and hopefully you should see something similar to this: [root at tsm01 ~]# dsmmigfs query -Detail IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 2, Level 4.1 Client date/time: 30-05-2012 17:13:00 (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved. The local node has Node ID: 3 The failover environment is deactivated on the local node. File System Name: /mnt/gpfs High Threshold: 100 Low Threshold: 80 Premig Percentage: 20 Quota: 999999999999999 Stub Size: 0 Server Name: TSM01 Max Candidates: 100 Max Files: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 MinMigFileSize: 0 Preferred Node: tsm01 Node ID: 3 Owner Node: tsm01 Node ID: 3 Source Nodes: tsm01 Then see if your HSM GUI works properly. 
From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ghemingtsai at gmail.com Mon Jun 4 18:36:18 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Mon, 4 Jun 2012 10:36:18 -0700 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 6, Issue 1 In-Reply-To: References: Message-ID: Hi, Jez, Sorry, we dont have the pre-sales technical support, I have tried several things, now the dsmerror.log gives me different error messages. Could you send me the following files and the commands output which work for your GPFS/HSM on your GPFS server please? dsm.opt dsm.sys /etc/inittab /etc/adsm/SpaceMan/config/DSMNodeSet /etc/adsm/SpaceMan/config/instance /etc/adsm/SpaceMan/config/DSMSDRVersion ps -ef | grep dsm ps -ef | grep hsm Thanks. Grace Message: 3 > Date: Fri, 1 Jun 2012 07:31:45 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059ED53 at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > Try this: https://www-304.ibm.com/support/docview.wss?uid=swg21459480 > > Though; you're evaluating GPFS + TSM with a view to purchase. > > Are you not receiving the level of pre-sales technical support to enable > you to perform your evaluation successfully? (Only that we're in different > time zones and your local IBM support should be much faster) > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 01 June 2012 01:30 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 9 > > Hi, Jez, > > Sorry to bother you again. > > I ran "dsmmigfs add -ht=85 -lt=75 /gpfs_directory1", > and restarted all the related dsm,hsm,mm processes, so finally, > dsmsmj shows the directory "/gpfs_directory1" is in status "active". > > So I added a lot of files into directory "/gpfs_directory1" until it is > 100% full. 
> But the files in "/gpfs_directory1" just don't migrate to TSM.
> The "dsmerror.log" keeps showing the following errors every 20 seconds:
>
> 05/31/12 17:25:02 ANS9592E A SOAP TCP connection error has happened!
> 05/31/12 17:25:02 ANS9590E The SOAP error information:
> HSM_Comm_ResponsivenessServiceJoin failed, reason: Connection refused
>
> I checked the URLs
>
> Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21358488
> Ref: http://www-304.ibm.com/support/docview.wss?uid=swg21416853
>
> and followed the steps on these two pages, but I still get the same errors
> in dsmerror.log.
>
> Could you help please? Thanks.
>
> Grace
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
> End of gpfsug-discuss Digest, Vol 6, Issue 1
> ********************************************

From ANDREWD at uk.ibm.com Tue Jun 5 16:03:09 2012
From: ANDREWD at uk.ibm.com (Andrew Downes1)
Date: Tue, 5 Jun 2012 16:03:09 +0100
Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 12/06/2012)
Message-ID:

I am out of the office until 12/06/2012.

In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527

In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940

Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 6, Issue 2" sent on 5/6/2012 12:00:01. This is the only notification you will receive while this person is away.
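For readers following the thread: the "dsmmigfs add -ht=85 -lt=75" command above configures a classic high/low watermark pair, where migration is expected to start once filesystem utilisation reaches the high threshold and to continue until it falls back to the low threshold. A minimal sketch of that watermark logic (purely illustrative Python, not TSM code):

```python
def migration_state(utilisation, high=85, low=75, migrating=False):
    """Return whether the migration daemon should be migrating files.

    utilisation: current filesystem usage in percent.
    high/low:    the -ht / -lt thresholds from "dsmmigfs add".
    migrating:   whether migration is currently in progress.
    """
    if utilisation >= high:
        return True      # crossed the high-water mark: start migrating
    if utilisation <= low:
        return False     # reached the low-water mark: stop
    return migrating     # between the two: keep current state (hysteresis)


# A 100%-full filesystem (Grace's case) should trigger migration:
print(migration_state(100))                      # -> True
# Between the thresholds, the current state is preserved:
print(migration_state(80, migrating=False))      # -> False
```

The gap between the two thresholds provides hysteresis, so the daemon does not thrash on or off when utilisation hovers near a single cut-off value.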
From Jez.Tucker at rushes.co.uk Tue Jun 19 17:06:51 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Tue, 19 Jun 2012 16:06:51 +0000
Subject: [gpfsug-discuss] Storage Decisions Chicago 2012
Message-ID: <39571EA9316BE44899D59C7A640C13F5305A49E1@WARVWEXC1.uk.deluxe-eu.com>

Is anyone going to this? Interested to note any IBM & GPFS related information.

http://storagedecisions.techtarget.com/chicago/
https://twitter.com/#!/search/%23SDCHI12

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

From orlando.richards at ed.ac.uk Fri Jun 22 15:52:34 2012
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Fri, 22 Jun 2012 15:52:34 +0100
Subject: [gpfsug-discuss] Samba mapping of "special" SID entries
Message-ID: <4FE486B2.1050501@ed.ac.uk>

Hi all,

Has anyone bumped up against the "nfs4: special" option in GPFS/Samba deployments, which manipulates how the "owner", "group owner" and "everybody" behaviour is mapped to ACLs when accessed via the Samba stack?

In particular, with the "default" setting of nfs4: special (if one blindly follows the worked examples on this), when a user adds themselves specifically to an ACL, this creates an entry:

special:@owner

rather than:

user:username

which has the knock-on effect that if a file/folder is created under this ACL by a different owner (or if ownership changes), the person who put said ACL on the file/folder no longer has access. Most people find this confusing (which is putting it politely).

To further complicate matters, the "special" Windows SIDs*[1] - such as "CREATOR/OWNER" - don't seem to work properly in the ctdb/samba/gpfs stack (I don't know if they do in "normal" Samba, though). IBM don't support CREATOR/OWNER in SONAS*[2] - so it's not just me!

So my question is - has anyone else been looking into this at all, and if so, do you have any sage words of wisdom to offer?

Cheers,
Orlando.
*[1] http://support.microsoft.com/kb/163846
*[2] http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fadm_authorization_limitations.html

--
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

From luke.raimbach at oerc.ox.ac.uk Fri Jun 22 17:33:10 2012
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Fri, 22 Jun 2012 16:33:10 +0000
Subject: [gpfsug-discuss] Samba mapping of "special" SID entries
In-Reply-To: <4FE486B2.1050501@ed.ac.uk>
References: <4FE486B2.1050501@ed.ac.uk>
Message-ID:

Hi Orlando,

I've been having success using Centrify to manage UID/GID mappings for our very small mixed cluster (7 x Linux, 1 x Windows 2008R2). I've created a map for the "CREATOR / OWNER", "SYSTEM", "Domain Admins", etc. group SIDs and use the Windows node to manage ACLs. When the Windows node applies the ACLs, these seem to translate successfully into GPFS ACLs and work nicely for the mixed environment, allowing users on both Linux and Windows systems to manipulate each other's files.

People are mounting the FS via NFS (exported via the Linux NSD servers) and CIFS (shared from Win2k8R2). The permissions don't look friendly when you run ls -l on a Linux system over NFS, but the ACLs do their job in preserving inheritable permissions, etc. If people want to see the 'real' ACL, they need to use mmgetacl on a GPFS-attached node (or Windows users simply click on the Security tab under Properties of a file).

Drop me a line off-list if you want to take a look at what we've got remotely. I can run a WebEx session from the Windows node if you want to have a good poke around.

Luke.
--
Luke Raimbach
IT Manager
Oxford e-Research Centre
7 Keble Road, Oxford, OX1 3QG
+44(0)1865 610639

> -----Original Message-----
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Orlando Richards
> Sent: 22 June 2012 15:53
> To: gpfsug-discuss at gpfsug.org
> Subject: [gpfsug-discuss] Samba mapping of "special" SID entries
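For reference, the "nfs4: special" behaviour Orlando describes is controlled by the parametric nfs4 options of Samba's vfs_gpfs module. A minimal smb.conf sketch is below; the share name and path are illustrative, and the exact option set should be checked against the vfs_gpfs manual page for the Samba version in use. Setting nfs4:mode = simple makes Samba write explicit user:/group: ACEs instead of the special:@owner-style entries discussed above:

```ini
[gpfs-share]
    path = /mnt/gpfs/share
    vfs objects = gpfs
    ; "special" maps owner/group/everyone to special:@owner-style ACEs;
    ; "simple" writes explicit user:/group: entries instead
    nfs4:mode = simple
    ; merge duplicate ACEs rather than leaving them in place
    nfs4:acedup = merge
    ; let chown work through the NFSv4 ACL emulation
    nfs4:chown = yes
```

Whether "simple" mode resolves the CREATOR/OWNER inheritance issue in a given ctdb/samba/gpfs stack would still need testing, as the thread notes.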
From Jez.Tucker at rushes.co.uk Mon Jun 25 14:52:47 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Mon, 25 Jun 2012 13:52:47 +0000
Subject: [gpfsug-discuss] Your GPFS O/S support?
Message-ID: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>

Curiosity...

How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration?

Jez
---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:08:14 2012
From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson)
Date: Mon, 25 Jun 2012 17:08:14 +0300
Subject: [gpfsug-discuss] Your GPFS O/S support?
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

On Mon, Jun 25, 2012 at 4:52 PM, Jez Tucker wrote:
> How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration?

Native GPFS on Linux @KAUST. No Windows or OS X as far as I know.

~jonathon

From Jez.Tucker at rushes.co.uk Mon Jun 25 15:08:14 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Mon, 25 Jun 2012 14:08:14 +0000
Subject: [gpfsug-discuss] HPC people - interconnects
Message-ID: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>

Do you all use IB?

Has anyone tried RDMA over 10G via the OFED stack?

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)
From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:09:13 2012
From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson)
Date: Mon, 25 Jun 2012 17:09:13 +0300
Subject: [gpfsug-discuss] Your GPFS O/S support?
In-Reply-To:
References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

On Mon, Jun 25, 2012 at 5:08 PM, Jonathon Anderson wrote:
> No Windows or OS X as far as I know.

Though, to be completely accurate, a number of our users mount their GPFS home directories via SSHFS. So that's kind of like having Windows and OS X clients...

~jonathon

From jonathon.anderson at kaust.edu.sa Mon Jun 25 15:11:25 2012
From: jonathon.anderson at kaust.edu.sa (Jonathon Anderson)
Date: Mon, 25 Jun 2012 17:11:25 +0300
Subject: [gpfsug-discuss] HPC people - interconnects
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

On Mon, Jun 25, 2012 at 5:08 PM, Jez Tucker wrote:
> Do you all use IB?

We're all 1/10GbE here.

~jonathon

From arifali1 at gmail.com Mon Jun 25 15:13:32 2012
From: arifali1 at gmail.com (Arif Ali)
Date: Mon, 25 Jun 2012 15:13:32 +0100
Subject: [gpfsug-discuss] HPC people - interconnects
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A65AD@WARVWEXC1.uk.deluxe-eu.com>
Message-ID: <4FE8720C.7040007@gmail.com>

On 25/06/12 15:08, Jez Tucker wrote:
> Do you all use IB?
>
> Has anyone tried RDMA over 10G via the OFED stack?

For most of our customers we use RDMA over verbs.

Is this the same thing you mentioned a few weeks ago with respect to RoCE? Does GPFS even support this?

--
regards,

Arif
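For anyone following up on the RDMA question: GPFS exposes its verbs RDMA support through cluster configuration parameters set with mmchconfig. A hedged sketch follows; the parameter names (verbsRdma, verbsPorts, verbsRdmaCm) are from the GPFS administration documentation, but the device/port value shown is illustrative and whether RoCE is supported via verbsRdmaCm depends on the GPFS release in use, so check the documentation for your version:

```shell
# Enable verbs RDMA for GPFS daemon-to-daemon data transfer
mmchconfig verbsRdma=enable

# Tell GPFS which HCA devices/ports to use
# ("mlx4_0/1" is an illustrative device/port name)
mmchconfig verbsPorts="mlx4_0/1"

# For RDMA over Ethernet (RoCE), the RDMA connection manager is
# typically also required, where the GPFS release supports it
mmchconfig verbsRdmaCm=enable

# Verify the settings
mmlsconfig | grep -i verbs
```

The GPFS daemon usually needs a restart (mmshutdown/mmstartup) on the affected nodes before RDMA settings take effect.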
From luke.raimbach at oerc.ox.ac.uk Mon Jun 25 15:22:20 2012
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Mon, 25 Jun 2012 14:22:20 +0000
Subject: [gpfsug-discuss] Your GPFS O/S support?
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305A6563@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

We have:

2 physical Linux GPFS NSD servers (IBM),
4 physical VMware ESXi servers (target the cNFS GPFS shares as datastores) (HP),
5 virtual machines with native GPFS clients,
1 virtual Windows 2008R2 machine with a native GPFS client,
Approx 30 Linux machines targeting cNFS re-exported from 4 of the Linux VMs in a separate cNFS failure group,
Approx 20 desktops using the Windows 2008R2 server shares of the same file system.

Centrify stitches the whole lot together and keeps UID/GID maps neat, tidy and consistent with our Active Directory.

10GbE at the back end for GPFS node communications; 1GbE for exporting to other servers / desktops in our building.

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker
Sent: 25 June 2012 14:53
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Your GPFS O/S support?

Curiosity...

How many of you run Windows, Linux and OS X as clients (GPFS/NFS/CIFS), in any configuration?

Jez
---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)