Hi Jez,

I tried what you suggested with the command:

mmchfs -z yes /dev/fs1

and the output of "mmlsfs" is as follows:

-sh-4.1# ./mmlsfs /dev/fs1
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 32768                    Minimum fragment size in bytes
 -i                 512                      Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 1                        Default number of metadata replicas
 -M                 2                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 2                        Maximum number of data replicas
 -j                 cluster                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 10                       Estimated number of nodes that will mount file system
 -B                 1048576                  Block size
 -Q                 none                     Quotas enforced
                    none                     Default quotas enabled
 --filesetdf        no                       Fileset df enabled?
 -V                 12.10 (3.4.0.7)          File system version
 --create-time      Thu Feb 23 16:13:28 2012 File system creation time
 -u                 yes                      Support for large LUNs?
 -z                 yes                      Is DMAPI enabled?
 -L                 4194304                  Logfile size
 -E                 yes                      Exact mtime mount option
 -S                 no                       Suppress atime mount option
 -K                 whenpossible             Strict replica allocation option
 --fastea           yes                      Fast external attributes enabled?
 --inode-limit      571392                   Maximum number of inodes
 -P                 system                   Disk storage pools in file system
 -d                 scratch_DL1;scratch_MDL1 Disks in file system
 -A                 no                       Automatic mount option
 -o                 none                     Additional mount options
 -T                 /gpfs_directory1/        Default mount point
 --mount-priority   0                        Mount priority

But I still get the error message in dsmsmj when I click "Manage" on /gpfs_directory1:

"A conflicting Space Management process is already running in the /gpfs_directory1 file system.
Please wait until the Space Management process is ready and try again."
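In case it helps, here is what I was planning to check next from the command line. I am not sure these are the right commands for our setup, so please correct me if not (I am assuming the dsmmigfs command and the daemons that ship with the TSM HSM client):

# Are the HSM daemons running on this node?
ps -ef | egrep 'dsmwatchd|dsmrecalld'

# Is /gpfs_directory1 actually added to space management?
dsmmigfs query /gpfs_directory1

# If it is not listed, add it and then retry "Manage" in dsmsmj
dsmmigfs add /gpfs_directory1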
Could you help, please? Do you have any more suggestions?

Thanks.

Grace

On Tue, May 29, 2012 at 4:00 AM, gpfsug-discuss-request@gpfsug.org wrote:
Message: 1
Date: Mon, 28 May 2012 15:55:54 +0000
From: Jez Tucker <Jez.Tucker@rushes.co.uk>
To: gpfsug main discussion list <gpfsug-discuss@gpfsug.org>
Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E
Hello Grace,

This is most likely because the file system that you're trying to manage via Space Management isn't configured as such, i.e. check the -z flag in mmlsfs:

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html
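For example, checking the flag and (if needed) enabling it from the command line might look something like the following. Treat this as a sketch: it assumes the default GPFS install path, and as far as I recall the -z setting can only be changed while the file system is unmounted on all nodes:

# Show the current DMAPI setting for the file system
/usr/lpp/mmfs/bin/mmlsfs /dev/fs1 -z

# If it reports "no": unmount everywhere, enable DMAPI, then remount
/usr/lpp/mmfs/bin/mmumount /dev/fs1 -a
/usr/lpp/mmfs/bin/mmchfs /dev/fs1 -z yes
/usr/lpp/mmfs/bin/mmmount /dev/fs1 -a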
Also:

This IBM Redbook should be a good starting point and includes the information you need should you wish to set up GPFS-driven TSM migration (using THRESHOLD).

http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1

I suggest you read the Redbook first and decide which method you'd like.
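The THRESHOLD approach comes down to a GPFS policy that defines TSM HSM as an external pool and migrates data to it when the system pool fills up. The sketch below is illustrative only; the exact external-pool exec script, its options, and the callback setup are covered in the Redbook, so treat the script path and the 90/70 thresholds as placeholders:

# Write a minimal policy file (script path and thresholds are examples only)
cat > /tmp/hsm.policy <<'EOF'
/* TSM HSM as an external pool; the exec script named here is the sample
   shipped with GPFS and may differ on your install */
RULE EXTERNAL POOL 'hsm'
     EXEC '/var/mmfs/etc/mmpolicyExec-hsm.sample'
/* Start migrating when the system pool is 90% full, stop at 70% */
RULE 'migrate_to_hsm' MIGRATE
     FROM POOL 'system'
     THRESHOLD(90,70)
     TO POOL 'hsm'
EOF

# Validate the policy first, then install it for the file system
/usr/lpp/mmfs/bin/mmchpolicy /dev/fs1 /tmp/hsm.policy -I test
/usr/lpp/mmfs/bin/mmchpolicy /dev/fs1 /tmp/hsm.policy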
Regards,

Jez

---
Jez Tucker
Senior Sysadmin
Rushes

GPFSUG Chairman (chair@gpfsug.org)
From: gpfsug-discuss-bounces@gpfsug.org [mailto:gpfsug-discuss-bounces@gpfsug.org] On Behalf Of Grace Tsai
Sent: 26 May 2012 01:10
To: gpfsug-discuss@gpfsug.org
Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E
Hi,

I have a GPFS system, version 3.4, which includes the following two GPFS file systems with the directories:

/gpfs_directory1
/gpfs_directory2

I would like to use HSM to back up these GPFS files to the tapes in our TSM server (Red Hat 6.2, TSM 6.3).
I run the HSM GUI on this GPFS server; the list of the file systems on this GPFS server is as follows:

File System            State            Size(KB)    Free(KB)   ...
-------------------------------------------------------------------
/                      Not Manageable
/boot                  Not Manageable
...
/gpfs_directory1       Not Managed
/gpfs_directory2       Not Managed
I click "gpfs_directory1", and then click "Manage"
=>
I get the error:
"""
A conflicting Space Management process is already running in the /gpfs_directory1 file system.
Please wait until the Space Management process is ready and try again.
"""

The dsmerror.log shows the message:
"ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management"

Is there anything on GPFS, or on the HSM or TSM server, that I didn't configure correctly? Please help. Thanks.

Grace