AFS

Every user has a dedicated AFS quota for the home directory. Use the following command to check the current usage and the assigned quota:

fs listquota ~

In addition, every ATLAS user has an AFS scratch directory under ~/scratch with a dedicated quota. AFS scratch has no backup. For more AFS home or scratch space, or for additional group/project AFS volumes, contact the NAF ATLAS support.
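The scratch quota can be checked in the same way, for example:

fs listquota ~/scratch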

For more details see NAF: working with AFS.

dCache

The NAF has access to both the DESY-HH and DESY-ZN Tier2 storage elements, which run dCache. On the work group servers the DESY-HH PNFS filesystem is mounted directly under /pnfs/desy.de/atlas. At the moment three different access protocols are available: plain dcache, dcap and gsidcap. For gsidcap you need a valid grid proxy. gsidcap is the preferred one.
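As a minimal sketch, a proxy with ATLAS VOMS extensions can be created with voms-proxy-init once a grid UI is set up (see ini gliteatlas below):

# create an ATLAS VOMS proxy, needed for gsidcap access
voms-proxy-init -voms atlas

# check lifetime and extensions of the current proxy
voms-proxy-info -all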

User Space

At the DESY-HH and DESY-ZN SEs there are two types of user space:

SCRATCHDISK: All ATLAS users can write to the SCRATCHDISK using the standard tools (dq2 or ganga/pathena). As the name implies, this is scratch space: datasets will be automatically deleted after 30 days.

LOCALGROUPDISK: This space is only for German ATLAS users. Use the /atlas/de VOMS extension to be allowed to write to it. It is managed by the ATLAS NAF admins.

Use dq2-put to write your data onto the SEs. See dq2 tools for more details.
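A minimal sketch of writing to LOCALGROUPDISK at DESY-HH; the dataset name and the local directory are placeholders, and the exact options should be checked with dq2-put -h:

# proxy with the German ATLAS extension, required for LOCALGROUPDISK
voms-proxy-init -voms atlas:/atlas/de

# upload all files from ./mydata and register them as a new user dataset
dq2-put -L DESY-HH_LOCALGROUPDISK -s ./mydata user.yourname.test.mydata/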

dCache Tools

Besides the ATLAS tools, the preferred access to the dCache system is via the standard grid tools such as the lcg or srm tools; access via the local /pnfs mount is not supported for various reasons. The dctools have been developed to mimic local file system access while hiding the underlying grid tools.

In order to use them, you need to set up the PATH using ini dctools or add the following directory to your PATH: /afs/desy.de/group/it/services/dcache/bin.
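For example:

# either use the ini command ...
ini dctools

# ... or extend the PATH by hand
export PATH=$PATH:/afs/desy.de/group/it/services/dcache/bin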

The new tools include among others:

For further documentation see http://naf.desy.de/general_naf_docu/naf_storage/working_with_dcache/

Writing to/Reading from dCache directly using /pnfs

Writing to dCache should only be done with dq2-put.

You can read/copy data using the dccp command:

Examples:
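The following is a sketch; the file paths are placeholders and the dcap door host/port depend on the site setup:

# copy a file from dCache to the current directory via the mounted /pnfs path
dccp /pnfs/desy.de/atlas/some/path/file.root .

# the same copy via an explicit dcap URL (replace <dcap-door> with the dcap door of the site)
dccp dcap://<dcap-door>:22125/pnfs/desy.de/atlas/some/path/file.root .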

Reading using ROOT

ROOT file access is done via the TFile class. ROOT can and will handle different access protocols if the file is opened using the static Open() method.

Examples:
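A sketch of opening a file via the dcap protocol at the ROOT prompt; the path is a placeholder and <dcap-door> has to be replaced by the dcap door of the site:

root -l

# then, at the ROOT prompt: TFile::Open() selects the protocol handler from the URL prefix (dcap, gsidcap, ...)
TFile *f = TFile::Open("dcap://<dcap-door>:22125/pnfs/desy.de/atlas/some/path/file.root");
f->ls();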

Reading using Athena

Athena uses ROOT for file access, hence the same syntax can be used. However, you need to enable dCache access using the useDCACHE tag (release 14.2.20 or higher) when setting up your Athena environment.

The tool dq2-poolFCjobO.py can create the PoolFileCatalog in XML format or job option files for a given dataset, directory or file pattern. By default it will use the dcap access protocol, but gsidcap can also be chosen. The tool can be found in /afs/naf.desy.de/group/atlas/software/scripts or ~efeld/public/dq2. If a DQ2 dataset is used as input, DQ2 needs to be set up (ini dq2); for the other input types a UI (ini gliteatlas) is sufficient. Do NOT set up an ATLAS software kit in the same session! Use the flag -h for more options and some explanation.

Examples for PoolFileCatalog and JO files:
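As a minimal sketch of the workflow (the tool documents its own options via -h, as noted above):

# DQ2 dataset as input: set up DQ2 first (do NOT set up an ATLAS software kit in the same session)
ini dq2

# list all options of the tool with some explanation
dq2-poolFCjobO.py -h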

This is based on the lcg-gt command, which translates a SURL into a TURL for a specific transfer protocol, e.g.:

lcg-gt srm://dcache-se-atlas.desy.de:8443/pnfs/desy.de/atlas/dq2/yvestest4.bash dcap

dCache library

If you are missing the libdcap.so library, there are different ways to get it:
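One possible check, with the library location as a placeholder since it depends on the installation:

# see whether libdcap.so is already known to the dynamic linker
ldconfig -p | grep libdcap

# if it lives in a non-standard location, add that directory to the library path
export LD_LIBRARY_PATH=/path/to/dcap/lib:$LD_LIBRARY_PATH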

UID/GID Problems

Your Linux user and group IDs differ depending on whether you work on the NAF or on the ATLAS DESY T2s. If you also have a normal DESY account, you have yet another set of UID/GID.

When you are reading from dCache this is usually not a problem, because no strict read access control is applied: everybody who has access to the ATLAS dCache can read files.

If you want to write to dCache this is different. If you always write with the same UID/GID using the dcap protocol, it should work out of the box. If not, e.g. if you run both on the NAF and on the Grid, or use gsidcap, you should change the ownership of your target directory to the user 40101, which is the default ATLAS user on the DESY T2. Please be aware that everybody can then read, write and delete files in your directory.
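To see which numeric IDs are actually in play, compare your session with the ownership of your dCache directory (the path is a placeholder):

# numeric user and group ID of the current session
id -u; id -g

# numeric ownership of a directory on dCache
ls -ln /pnfs/desy.de/atlas/some/path/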

Lustre

Lustre is mounted under /scratch. This is scratch space, meaning there is NO backup; further, it is only temporary space. The Lustre space at the Hamburg site is at /scratch/hh/lustre/atlas; the Lustre space at the Zeuthen site is at /scratch/zn/lustre/atlas and /scratch/zn/lustre/atlas2. Usually the space is divided between the users directory and the monthly directory. Files in the monthly directory will be deleted regularly if not used for a while; on the other hand, much more space is available below monthly. Create your files and directories within a sub directory named after your user name.
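For example, assuming the monthly area sits directly below the Hamburg site path:

mkdir -p /scratch/hh/lustre/atlas/monthly/$USER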

For more details see http://naf.desy.de/general_naf_docu/naf_storage/working_with_lustre.

Scratch Space

Every workgroup server has around 100 GB of free scratch space under /tmp. This is local to the machine.
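To check how much of it is currently free on a given machine:

df -h /tmp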
