Getting Started at the NAF
This page collects instructions and examples on how to get started at the NAF.
Login to the NAF
To log into the NAF you need a valid Grid proxy and hence a recent Grid User Interface (UI). For more details on the login procedure and the UI requirements see http://naf.desy.de/general_naf_docu/interactive_login. There are different options to get a UI:
- if you have a UI on your local machine, set it up
- if you have AFS on your local machine, get the UI from
- DESY: source /afs/desy.de/project/glite/UI/etc/profile.d/grid-env.sh
- CERN: source /afs/cern.ch/project/gd/LCG-share/current_3.2/external/etc/profile.d/grid-env.sh
- if your institute's cluster has a UI, log into it and start from there
- last, but not least: log into lxplus at CERN and set up the CERN UI (see above)
Create a Grid proxy with the -rfc option (proxy version 4).
voms-proxy-init -voms atlas -rfc
The created Grid proxy is of version 4, which is not recognized by all Grid applications; for example, the Globus job manager has problems with it.
Log into the ATLAS NAF login server:
gsissh -Y atlas.naf.desy.de
You will automatically be forwarded to one of the ATLAS NAF work group servers; load balancing is applied. The option -Y enables trusted X11 forwarding; otherwise no new windows will be exported to your local machine.
Setup Grid
Although you came into the NAF with a valid Grid proxy, it is lost after login. Hence you need to be able to create a Grid proxy at the NAF.
First, copy your Grid certificate from your local or institutes machine to the NAF:
If you are working from a local session, you can copy using gsiscp from your local machine to the NAF (do this only for small files):
gsiscp -pr $HOME/.globus atlas.naf.desy.de:
or if you are working from a NAF session, you can copy using normal scp from your local machine to the NAF:
scp -pr YOURHOSTNAME:.globus $HOME
Depending on the firewall settings of your local machine, you might prefer the first option, but it is not suitable for large data volumes. Please also pay attention to how you copy files between the NAF and an outside machine.
Now, set up the Grid UI and check out your Grid certificate:
ini gliteatlas
This sources the Grid UI and sets some ATLAS-specific variables, e.g. the LFC.
voms-proxy-init -voms atlas:/atlas/de -valid 96:00
- -voms atlas is short for -voms atlas:/atlas
- -voms atlas:/atlas/de creates a proxy with the ATLAS-D role
- the maximum time for the voms extension is 96 hours
voms-proxy-info -all
Look at the different attributes and create a Grid proxy with the plain ATLAS role.
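For example, to go back to the plain ATLAS role, reuse the command from the login section:
voms-proxy-init -voms atlas -rfc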
Now that you have a valid proxy, we will set up the autoproxy service. You have to create a valid proxy once a month, which will be uploaded to a myproxy server. The proxy on the myproxy server will be used to automatically extend your current proxy. Your proxy will have a lifetime of at least 72 hours and the /atlas/de VOMS group as its active group.
Set up the autoproxy service and create a Grid proxy on the myproxy server:
ini autoproxy
ap_gen.sh
Turn on the autoproxy service by creating an empty file:
touch .globus/.autoproxy
Now log off and on again, and test whether you have a valid proxy (well, this test makes much more sense after at least 24 hours):
ini gliteatlas
ini autoproxy
voms-proxy-info --all
Please note that this also sets up the Grid UI. If you only need the proxy but not the Grid UI (e.g. for ganga or athena), you can leave out ini gliteatlas. So, every time you start a new session and need a valid proxy, you get it via ini autoproxy.
Finally, you will get a warning message at login time when your proxy on the myproxy server is about to expire. To renew it, use the following two commands:
ini autoproxy
ap_gen.sh
AFS
As at CERN, your home directory is in the AFS filesystem, and hence you can see it from every computer in the world with an AFS client. Well, with some restrictions. The root of the home directories is /afs/naf.desy.de/user, followed by the first letter of your username and the username, e.g. /afs/naf.desy.de/user/e/efeld. Every user has a directory called public in the home directory, which is world readable. Try
ls /afs/naf.desy.de/user/e/efeld/public
from any AFS client, e.g. lxplus at CERN.
If you want to access other directories within your home directory, you can either change the AFS ACLs (be careful with them; see the sketch below) or get a NAF AFS token. Because a Grid proxy is used for authentication at the NAF, a Grid proxy is also needed to get a NAF AFS token. Remember, you do not have a password for your NAF account. If you have AFS access on your local computer, you can create a NAF AFS token with the following script:
/afs/naf.desy.de/products/scripts/naf_token USERNAME
Please try it out: go to your computer at your institute, get a Grid proxy (version 4), get the NAF AFS token and list the contents of your home directory at the NAF.
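Put together, the test looks like this (USERNAME is a placeholder and the efeld directory just the example from above):
voms-proxy-init -voms atlas -rfc
/afs/naf.desy.de/products/scripts/naf_token USERNAME
ls /afs/naf.desy.de/user/e/efeld
For the ACL route mentioned above, a minimal sketch using the standard AFS fs command (directory name and rights are examples; rl grants read and lookup):
fs listacl $HOME/somedir
fs setacl $HOME/somedir USERNAME rl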
ATLAS Software
The ATLAS software is set up via the AtlasSetup approach. We propose to use the directory $HOME/atlas/testarea for your testareas. The following script will create these directories and make them group readable:
ini atlas
atlas_setup.py --create
The ATLAS software environment is set up in the following way:
asetup 17.2.2 --multi --testarea ~/atlas/testarea
This will use $HOME/atlas/testarea/17.2.2 as your testarea. asetup will also warn you if the testarea does not exist. Please create the missing directory:
mkdir $HOME/atlas/testarea/17.2.2
Alternatively, you can first switch to your testarea and then set up the ATLAS software environment, as by default the current directory is used as the testarea:
mkdir $HOME/atlas/testarea/17.2.2
cd $HOME/atlas/testarea/17.2.2
asetup 17.2.2,here
Read the AtlasSetup TWiki for more options and how to store defaults in the $HOME/.asetup file.
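As an illustration only, such a defaults file is INI-style and could look like the following sketch (verify the exact option names against the AtlasSetup TWiki):
# $HOME/.asetup (sketch; option names are assumptions)
[defaults]
testarea = ~/atlas/testarea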
Check if your setup is working:
athena.py AthExHelloWorld/HelloWorldOptions.py
As the software is distributed via CVMFS, all releases, production and analysis caches (32 and 64 bit) are available.
The atlas_test_config.sh script dumps some ATLAS-related variables, which are useful for debugging the ATLAS environment (use ini atlas at the NAF):
ini atlas
atlas_test_config.sh
If you want to check out a package from CERN, you need a Kerberos ticket for CERN:
kinit usernameonlxplus@CERN.CH
If your CERN username is different from your NAF username, tell ssh about it. An example is on the ATLAS Software wiki page.
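A sketch of such an ssh configuration (the host pattern is an assumption; adapt it to the machines you actually contact):
# $HOME/.ssh/config
Host *.cern.ch
    User usernameonlxplus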
In order to test it you should check out the UserAnalysis package:
cd $HOME/atlas/testarea/17.2.2
asetup 17.2.2,here
pkgco.py UserAnalysis
pkgco.py will retrieve the correct tag for the release setup and check it out. It can also check out dedicated tags or the head of a package. Use the -h option for more instructions. Alternatively, you can use cmt to check out packages. The corresponding line is
cmt co -r UserAnalysis-00-15-06 PhysicsAnalysis/AnalysisCommon/UserAnalysis
If you want to know the official tag of a package for a certain release, use the get_tag command (either on lxplus or via ini atlas at the NAF). This script queries AMI and needs authentication to the AMI server. If you have set up a release, you can instead use cmt show versions to get the tag for that release.
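For example, with release 17.2.2 set up:
cmt show versions PhysicsAnalysis/AnalysisCommon/UserAnalysis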
Just compile the package for later use:
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
cmt make
Sometimes you want to use a production or analysis cache, which is used for official production. You can set it up in the following way, e.g. for AtlasProduction 17.2.2.4:
asetup AtlasProduction,17.2.2.4,here
Official ntuple production is usually done with an AtlasPhysics analysis cache, which has five digits. You can set it up in the following way, e.g. for AtlasPhysics 17.2.2.4.2:
asetup AtlasPhysics,17.2.2.4.2,here
Batch System
SGE is used for the local batch system. qstat will list your jobs (submitted and running). Jobs are submitted with qsub. Here is a very simple example, which needs to be pasted into the terminal as two parts:
First:
qsub -l h_cpu=0:10:00
Second:
echo "Hello World!" hostname pwd
End the input with ^D (hold the control key and press D). Output (separated into stdout (STDIN.oID) and stderr (STDIN.eID)) is written to your home directory. The script is executed in your home directory. This can be changed with arguments to qsub; see the man pages for more details.
Get the example script from Batch System and run the athena HelloWorld example on the batch. By default qsub will not export any variables from your current shell into the batch job. Hence the ATLAS software environment must be set up again. Check also the special comments (#$) in the script, which pass arguments to the qsub command at submission time. Check what resources are used for this job.
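For orientation, a minimal job script in this spirit could look like the following sketch (not the actual example script from the Batch System page; the file name hello.sh is a placeholder):
#!/bin/bash
#$ -l h_cpu=0:10:00    # request 10 minutes of CPU time
#$ -cwd                # run in the submission directory instead of $HOME
echo "Hello World!"
hostname
pwd
Submit it with qsub hello.sh.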
Note that specifying resources, e.g. CPU time and memory consumption, is very important and has an impact on the time between submitting and running your job.
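For example, a submission requesting one hour of CPU time and 2 GB of memory could look like this (h_vmem as the memory resource name is an assumption; check the NAF batch documentation):
qsub -l h_cpu=1:00:00 -l h_vmem=2G hello.sh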
DQ2
The latest version of the DDM client tools, called dq2, is installed at the NAF. The setup is done via the ini command:
ini dq2
The most used commands are dq2-ls, which lists datasets, and dq2-get, which downloads a dataset to the local disk.
Try to look for your datasets:
dq2-ls user.efeld.*/
or for Egamma stream data from 2012:
dq2-ls data12_8TeV.*physics_Egamma.PhysCont.AOD*t0pro13_v01*/
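To actually download one of the listed datasets to the local disk (the dataset name here is a placeholder):
dq2-get user.efeld.mydataset/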
Ganga
If you have done the dq2 section, start from a fresh login shell.
Recent versions of Ganga are installed and can be set up using the ini script. The most recent is set up with ganga as argument:
ini ganga
For older versions check the list of available modules within ini:
ini
This will also set up the GANGA_CONFIG_PATH for ATLAS.
If you run ganga for the first time at the NAF, create a config file with
ganga -g
For more details about the configuration see the Ganga section of Grid Tools.
Here are the instructions to run athena HelloWorld on the NAF batch system and on the DESY-HH Grid site using ganga:
cd $HOME/atlas/testarea/17.2.2
asetup 17.2.2,here
mkdir run
cd run
get_files -jo AthExHelloWorld/HelloWorldOptions.py
ini ganga
ganga athena --sge HelloWorldOptions.py
ganga athena --nocompile --nobuild --panda --site ANALY_DESY-HH HelloWorldOptions.py
The last two lines can also be written as jobs within Ganga:
j = Job()
j.name='HelloWorld_SGE'
j.application=Athena()
j.application.atlas_release = '17.2.2'
j.application.option_file='$HOME/atlas/testarea/17.2.2/run/HelloWorldOptions.py'
j.backend=SGE()
j.submit()
j = Job()
j.name='HelloWorld_Panda'
j.application=Athena()
j.application.option_file='$HOME/atlas/testarea/17.2.2/run/HelloWorldOptions.py'
j.application.athena_compile=False
j.application.atlas_dbrelease = ''
j.application.prepare()
j.outputdata = DQ2OutputDataset()
j.backend=Panda()
j.backend.site = 'ANALY_DESY-HH'
j.submit()
Start ganga or ganga --gui to monitor the status of your jobs.
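Within the ganga prompt, the job registry gives the same information (a sketch; the job id 0 is an example):
jobs       # overview of all jobs and their status
jobs(0)    # details of the job with id 0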
Athena with Ganga
Most of the time you want to run real-life jobs using Ganga, and this involves event processing using Athena. The following two examples produce ntuples using the analysis skeleton from the UserAnalysis package. The first runs on SGE and the second on the Grid, using the Panda backend. The setup for both jobs is the same:
cd $HOME/atlas/testarea/17.2.2
asetup 17.2.2,here
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis
mkdir run
cd run
get_files -jo AnalysisSkeleton_topOptions.py
ln -s /afs/cern.ch/atlas/maxidisk/d33/referencefiles/aod/AOD-17.2.2/AOD-17.2.2-full.pool.root AOD.pool.root
ini ganga
ganga
Please note that you need a working setup and working job options. Ganga will run the job options to figure out some important run parameters, e.g. the output name of your ntuples. Therefore it is important to have a small AOD file around. This example uses a reference AOD from release 17.2.2 stored at CERN.
SGE:
j = Job()
j.application=Athena()
j.application.option_file=['AnalysisSkeleton_topOptions.py']
j.application.max_events=100
j.application.prepare()
j.inputdata=DQ2Dataset()
j.inputdata.dataset="data11_7TeV.00184130.physics_Egamma.merge.AOD.r2603_p659/"
j.outputdata=ATLASOutputDataset()
j.outputdata.outputdata=['AnalysisSkeleton.aan.root']
j.outputdata.location = '$HOME/atlas/'
j.backend=SGE()
j.submit()
Panda system:
j = Job()
j.application=Athena()
j.application.option_file=['AnalysisSkeleton_topOptions.py']
j.application.max_events=100
j.application.prepare()
j.inputdata=DQ2Dataset()
j.inputdata.dataset="data11_7TeV.00184130.physics_Egamma.merge.AOD.r2603_p659/"
j.outputdata=DQ2OutputDataset()
j.splitter=DQ2JobSplitter()
j.splitter.numfiles=2
j.backend=Panda()
j.backend.requirements.cloud='DE'
j.submit()
You can monitor your jobs at ATLAS: Panda Monitor
Pathena
The Panda analysis client tools are installed at the NAF. You get the latest version via the ini command:
ini pandaclient
This will give you pathena, prun and pbook.
As in the case of ganga, you do not have to set up the Grid UI. Also, the client tools will check for a valid proxy.
You can simply run the HelloWorld program like:
cd $HOME/atlas/testarea/17.2.2
asetup 17.2.2,here
cd run
get_files -jo AthExHelloWorld/HelloWorldOptions.py
ini pandaclient
pathena --noBuild --site ANALY_DESY-HH --outDS=user.efeld.test_20120912 --extOutFile=dummy HelloWorldOptions.py
The --extOutFile and --outDS options are needed as pathena requires output to be written into a dataset.
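pbook can then be used for bookkeeping of the submitted jobs. A sketch of a typical session (the job id 42 is a placeholder; use show() to find your own ids):
pbook
and inside the pbook prompt:
show()       # list your pathena/prun jobs and their status
retry(42)    # resubmit the failed subjobs of job 42
kill(42)     # kill job 42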