## page was renamed from WorkBook/NAF/Grid Tools
#acl EditorsGroup:read,write All:read

----

Here you can find help on the setup and the usage of Grid tools on the NAF.

== DQ2 ==

To initialise DQ2 enter
{{{
ini dq2
}}}
Use `dq2-ls` to find dataset names and to list the content of a dataset (option `-f`). With `dq2-get` you can download datasets from the Grid, and with `dq2-put` you can put files into a dataset and onto a Grid SE. Every `dq2` command supports the switch `-h` for getting help.

For more information see [[https://twiki.cern.ch/twiki/bin/view/Atlas/DistributedDataManagement|ATLAS TWiki:DistributedDataManagement]]. The section `User Help` contains links to the full documentation, a !HowTo and a tutorial. See also the [[WorkBook/NAF/ADT09DQ2|DQ2 tutorial]] from the ATLAS-D09 tutorials.

=== dq2-get-naf ===

`dq2-get-naf` is a wrapper around `dq2-get` that executes `dq2-get` on a batch node. All options of `dq2-get` are also available for `dq2-get-naf`. `dq2-get-naf` is added to your PATH after executing {{{ini atlas}}}. Here is an example:
{{{
ini dq2
ini autoproxy
ini atlas
dq2-get-naf user09.WolfgangEhrenfeld.test_for_tutorial.test.TEST.v0
}}}
`dq2-get-naf` should start after a few seconds at the latest. If not, complain to the [[WorkBook/NAF/Support|NAF ATLAS support]]. The output from the batch system is buffered, so it does not appear as fluently as when running directly on the work group server. This has no effect on the download speed, however.

`dq2-get-naf` was introduced to move dataset downloads from the interactive ATLAS NAF work group servers to the batch nodes. The network traffic caused by `dq2-get` on a work group server can congest its network card and hence affect all other users on the same work group server whose work is network based (AFS, Lustre, ...). `dq2-get-naf` checks whether the download goes to /scratch/zn and in that case redirects `dq2-get` to a batch node at the Zeuthen site. In all other cases a batch node at the Hamburg site is requested.
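The site-selection rule described above can be sketched in shell. This is a simplified illustration only, not the actual wrapper code; the function name `pick_site` is a hypothetical choice:

```shell
# Simplified sketch of the dq2-get-naf site-selection rule
# (illustration only, not the actual wrapper code).
pick_site() {
    case "$1" in
        /scratch/zn*) echo "zeuthen" ;;   # downloads to /scratch/zn run at Zeuthen
        *)            echo "hamburg" ;;   # everything else runs at Hamburg
    esac
}

pick_site /scratch/zn/user/data       # prints: zeuthen
pick_site /afs/naf.desy.de/user/data  # prints: hamburg
```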
=== dq2-put ===

If you want to store data on the Grid, you have two options. Every site has an `xxx_SCRATCHDISK` and an `xxx_LOCALGROUPDISK`, where `xxx` is the site name. For the NAF these are `DESY-HH` and `DESY-ZN`. Everything on `xxx_SCRATCHDISK` will be deleted automatically by ATLAS after 30 days; before deletion, the dataset will be frozen. `xxx_LOCALGROUPDISK` is for local users, in the case of the NAF for German ATLAS users. Use this location for long term storage.

By default, `dq2-put` will write to `xxx_SCRATCHDISK`. Use the option `-L` to specify a different location:
{{{
dq2-put -L DESY-HH_LOCALGROUPDISK -s files.d mydatasetname
}}}
This will write all files in the subdirectory `files.d` to the `DESY-HH_LOCALGROUPDISK` location and register them in the dataset `mydatasetname`. User dataset names should start with `user08.FirstnameLastname` and should include a tag for the format, e. g. `EVNT`, `HITS`, `RDO`, `ESD`, `AOD` or `DPD`.

== Ganga ==

To set up the latest version of Ganga enter
{{{
ini ganga
}}}
For older versions of Ganga check the output of
{{{
ini
}}}

=== Initial Ganga customisation ===

If you start Ganga for the first time, you need to create the `$HOME/.gangarc` file:
{{{
ganga -g
}}}
Some defaults for ATLAS and the NAF are set up automatically. See the first file listed in `$GANGA_CONFIG_PATH`.
For completeness they are listed here:
{{{
[Configuration]
RUNTIME_PATH = GangaAtlas:GangaPanda:GangaNG:GangaGUI:GangaRobot

[Athena]
ATLAS_SOFTWARE = /afs/naf.desy.de/group/atlas/software/kits

[SGE]
submit_str = cd %s; qsub -w e -cwd -P atlas -l h_vmem=1G -l h_stack=10M -l h_fsize=10G -l h_cpu=11:59:00 -v DCACHE_CLIENT_ACTIVE=1 -S /usr/bin/python %s %s %s %s

[defaults_GridProxy]
voms = atlas
init_opts = -voms atlas:/atlas/de

[LCG]
EDG_ENABLE = True
EDG_SETUP = /afs/naf.desy.de/project/ganga/grid-athena.sh
GLITE_ENABLE = True
GLITE_SETUP = /afs/naf.desy.de/project/ganga/grid-athena.sh
SubmissionThread = 5
VirtualOrganisation = atlas

[defaults_LCG]
middleware = GLITE

[defaults_AtlasLCGRequirements]
dq2client_version = 0.1.28
}}}
If you want different defaults, change the corresponding value in your own Ganga configuration file `$HOME/.gangarc`.

=== Ganga and VOMS Roles, e. g. /atlas/de ===

Ganga tries to extend your Grid proxy if it is valid for less than a certain time. This time is configurable. By default Ganga uses plain atlas as the VOMS extension, which is not always desired. The NAF Ganga installation is configured to use the /atlas/de VOMS role via the `init_opts` property in the `defaults_GridProxy` section of the `.gangarc` config file:
{{{
[defaults_GridProxy]
# String of options to be passed to command for proxy creation
init_opts = -voms atlas:/atlas/de
}}}
If you want the plain atlas VOMS role, add the following to your `$HOME/.gangarc` file:
{{{
[defaults_GridProxy]
# String of options to be passed to command for proxy creation
init_opts = -voms atlas
}}}
Note: You need the same VOMS role for retrieving the output as for submitting the job!

If you do not want to rely on Ganga for creating your Grid proxy, you can always create it yourself before working with Ganga. Please set up your Grid UI in a different shell and use the `voms-proxy-init` command.
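Ganga's proxy-extension behaviour can also be mimicked by hand. The sketch below is illustrative only: the one-hour threshold and the helper name `needs_renewal` are assumptions, not Ganga's actual values. It decides, from the remaining lifetime in seconds as reported by `voms-proxy-info --timeleft`, whether `voms-proxy-init` should be run again:

```shell
# Sketch: renew the Grid proxy when less than one hour remains.
# Threshold and function name are illustrative assumptions; Ganga's
# configurable validity check works along the same lines.
needs_renewal() {
    local timeleft=$1          # remaining lifetime in seconds,
                               # e.g. from: voms-proxy-info --timeleft
    [ "$timeleft" -lt 3600 ]
}

if needs_renewal 1800; then
    echo "renew"               # would run: voms-proxy-init -voms atlas:/atlas/de
else
    echo "proxy ok"
fi
```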
=== Ganga and the Grid UI ===

Since Ganga version 5.0.9 the Grid UI is not set up explicitly anymore. This is done to avoid Python conflicts between the Grid UI and Athena (release 14). This behaviour is the same as for the Ganga setup at CERN. It means that, for example, `voms-proxy-init` is not available. If you need the Grid UI, use {{{ini gliteatlas}}} in a new shell. There you can create your Grid proxy with the `/atlas/de` role.

=== Ganga and Athena ===

Since Ganga version 5.0.9 the Grid UI is not set up explicitly anymore, and hence no further problems with Athena are expected.

=== Missing dCache library (`libdcap.so`) ===

Since Athena release 14.2.20 `libdcap.so` is not part of the runtime setup anymore. If dCache access is needed, e. g. for reading data from the local SE, you need to add `libdcap.so` explicitly to the input sandbox for the local and SGE backends:
{{{
--inputsandbox=/afs/naf.desy.de/project/ganga/5.0.9/install/5.0.9/python/GangaAtlas/Lib/Athena/libdcap.so
}}}

=== Further Documentation ===

For more details see https://twiki.cern.ch/twiki/bin/view/Atlas/DistributedAnalysisUsingGanga.

== Panda Client Tools ==

The latest Panda client tools are automatically installed on the NAF into the directory {{{/afs/naf.desy.de/group/atlas/software/panda}}}. The link {{{current}}} always points to the latest version. Use either
{{{
source /afs/naf.desy.de/group/atlas/software/panda/panda_setup.sh
}}}
or
{{{
ini pandaclient
}}}
to set up the Panda client tools. They provide

 * [[https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaAthena|pathena]]: submit Athena jobs to Panda
 * [[https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaRun|prun]]: submit ROOT/general jobs to Panda
 * [[https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaBook|pbook]]: bookkeeping for Panda analysis jobs
 * [[https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaSequencer|psequencer]]: sequential jobs/operations in Panda

No Grid UI setup is needed; the Panda client tools will do this automatically.
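As a sketch of typical `pathena` usage, the snippet below composes a submission command from an input and an output dataset. All names here (`some.input.dataset.AOD`, the user dataset, `MyJobOptions.py`) are placeholders for illustration; see the pathena TWiki page above for the full option list:

```shell
# Compose a typical pathena command line (all names are placeholders,
# shown here only to illustrate the --inDS/--outDS convention).
IN_DS="some.input.dataset.AOD"
OUT_DS="user08.FirstnameLastname.myanalysis.AOD.v1"
CMD="pathena --inDS ${IN_DS} --outDS ${OUT_DS} MyJobOptions.py"
echo "${CMD}"
```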
If you need to create a new Grid proxy, please do this in a different shell, so that the Grid UI setup used for `voms-proxy-init` does not interfere with the Panda client tools.

=== Further Documentation ===

For more details see https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaTools.