These are the ATLAS NAF instructions for the Detector Understanding with First LHC Data Workshop (29.6. - 3.7.2009).
Login into NAF
To log into the NAF you need a valid Grid proxy and hence a recent Grid User Interface (UI). For more details on the login procedure and the UI requirements see http://naf.desy.de/general_naf_docu/login_into_workgroupserver. There are several options for getting a UI:
- If you have a UI on your laptop, set it up and log into the NAF from there.
- If you have AFS on your local machine, get the UI from DESY or CERN and log into the NAF from there.
- DESY: source /afs/desy.de/project/glite/UI/etc/profile.d/grid-env.sh
- CERN: source /afs/cern.ch/project/gd/LCG-share/current/external/etc/profile.d/grid-env.sh
- If you have a DESY account, log into your favourite DESY work group server, set up the Grid UI using ini glite and log into the NAF from there.
- If you have none of the previous requirements, log into the school cluster detector09-ui.desy.de using your school account. Copy your Grid certificate to your school account and log into the NAF from there. The Grid UI is set up by default in the school accounts.
- Last, but not recommended: log into your home institute or CERN, and log into the NAF from there.
Create a grid proxy with the -rfc option (proxy version 4):
voms-proxy-init -voms atlas -rfc
The created grid proxy is of version 4, which is not recognized by all Grid applications; for example, the globus job manager has problems with it.
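If you run into such a problem, you can fall back to a legacy proxy by simply omitting the -rfc option (for the NAF login itself, stick to the version 4 proxy created above):

voms-proxy-init -voms atlas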
Log into the ATLAS NAF login server:
gsissh -Y atlas.naf.desy.de
You will be automatically forwarded to one of the ATLAS NAF work group servers; load balancing is applied. The option -Y enables trusted X11 forwarding; without it, no new windows will be exported to your local machine.
The available work group servers for ATLAS can be listed using the command wgsinfo. During the workshop the following hosts are available: tcx030, tcx040, tcx050, tcx052, tcx053, tcx054, tcx060, tcx063.
Grid Setup
Although you logged into the NAF with a valid Grid proxy, it is lost after login. Hence you need to be able to create a Grid proxy at the NAF.
First, copy your grid certificate from your local or institute's machine to the NAF:
If your session is on your local machine:
gsiscp -pr $HOME/.globus atlas.naf.desy.de:
If your session is on the NAF:
scp -pr YOURHOSTNAME:.globus $HOME
Depending on the firewall settings of your local machine, you might prefer the first option.
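Whichever way you copy the certificate, make sure the private key keeps restrictive permissions; voms-proxy-init will refuse a key that is readable by others:

chmod 700 $HOME/.globus
chmod 400 $HOME/.globus/userkey.pem
chmod 644 $HOME/.globus/usercert.pem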
Now, set up the Grid UI and check out your grid certificate:
ini gliteatlas
This sources the Grid UI and sets some ATLAS-specific variables, e.g. for the LFC.
voms-proxy-init -voms atlas:/atlas/de -valid 96:00
- -voms atlas is short for -voms atlas:/atlas
- -voms atlas:/atlas/de creates a proxy with the ATLAS-D role
- the maximum lifetime of the voms extension is 96 hours
voms-proxy-info -all
Look at the different attributes and then create a Grid proxy with the plain ATLAS role.
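One way to do the latter, keeping the long lifetime and the RFC format from above:

voms-proxy-init -voms atlas -rfc -valid 96:00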
AFS
As at CERN, your home directory is in the AFS filesystem, and hence you can see it from every computer in the world with an AFS client (with some restrictions). The root of the home directories is /afs/naf.desy.de/user, followed by the first letter of your username and then the username itself, e.g. /afs/naf.desy.de/user/e/efeld. Every user has a directory called public in the home directory, which is world readable. Try
ls /afs/naf.desy.de/user/e/efeld/public
from any AFS client, e. g. lxplus at CERN.
If you want to access other directories within your home directory, you can either change the AFS ACLs (be careful with them) or get a NAF AFS token. Because a Grid proxy is used for authentication at the NAF, a Grid proxy is also needed to get a NAF AFS token; remember, you do not have a password for your NAF account. If you have AFS access on your local computer, you can create a NAF AFS token with the following script:
/afs/naf.desy.de/products/scripts/naf_token USERNAME
Please try it out: go to your computer at your institute, get a grid proxy (version 4), get the NAF AFS token and list the contents of your home directory at the NAF.
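Put together, a minimal session on your institute machine could look like this (USERNAME stands for your NAF account name and u for its first letter; the tokens command should then show an entry for the naf.desy.de cell):

voms-proxy-init -voms atlas -rfc
/afs/naf.desy.de/products/scripts/naf_token USERNAME
tokens
ls /afs/naf.desy.de/user/u/USERNAME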
ATLAS Software
Please, start from a clean shell to avoid any contamination of your environment from previous work!
At the NAF a few scripts are provided in the directory /afs/naf.desy.de/group/atlas/software/scripts to ease your life. In addition, some ATLAS scripts are also installed there. Type
ini atlas
to add this directory to your $PATH variable.
If you are an experienced ATLAS user and have worked with kits, you can set up your own ATLAS environment. Everybody else (and the lazy experts) can run a script which will do it for them:
atlas_setup.py --create
This creates the cmthome directory and a requirements file and sets up cmt. In addition, it creates an atlas directory in your home directory. Check these directories and their AFS ACLs (fs listacl). Your testarea is set up to be $HOME/atlas/testarea. If you already have an old $HOME/cmthome directory with a requirements file, remove either the whole $HOME/cmthome directory or the requirements file, since the script will not overwrite existing files.
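For a quick check of what the script has created, something like:

ls $HOME/cmthome
ls $HOME/atlas/testarea
fs listacl $HOME/atlas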
The latest requirements file can be obtained with
atlas_create_requirements.py
From the requirements file you can see where the kits are installed, including versions and production caches.
By default, the ATLAS kits installed in AFS are used. For the tutorial, a local copy of the ATLAS software (releases 15.1.0 and 15.2.0) is also installed in /tmp/atlas on every work group server. You can and should switch to this installation using the additional tag local when setting up your ATLAS environment.
Now, check if athena is working:
source $HOME/cmthome/setup.sh -tag=15.1.0,local
athena.py AthExHelloWorld/HelloWorldOptions.py
If you want to check out a package from CERN, you need a Kerberos ticket for CERN:
kinit cernusername@CERN.CH
If your CERN username is different from your NAF username, tell ssh about it. An example is on the ATLAS Software wiki page.
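A sketch of such an entry in $HOME/.ssh/config, assuming the hypothetical CERN username jdoe (see the wiki page for the authoritative version):

Host *.cern.ch
    User jdoe
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes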
In order to test it you should check out the UserAnalysis package:
mkdir -p $HOME/atlas/testarea/15.1.0
cd $HOME/atlas/testarea/15.1.0
cmt co -r UserAnalysis-00-13-14 PhysicsAnalysis/AnalysisCommon/UserAnalysis
If you want to know the official tag of a package for a certain release, use the get_tag command (use ini atlas on the NAF if the command is not found).
Also remember that you can have only one Kerberos 5 ticket for authentication at a time: either your NAF one or your CERN one. If you want to log into another NAF machine, submit to the batch system or use the automatic Kerberos ticket, AFS token and Grid proxy renewal service, you need a new NAF ticket. Either start a new NAF session or use the naf_ticket command (use ini atlas if the command is not found).
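To see which ticket you currently hold, use klist; getting a fresh NAF ticket is then a single command (a sketch, assuming naf_ticket needs no further arguments):

klist
naf_ticket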
Compile the package:
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
cmt make
Sometimes you want to use an AtlasProduction cache, which is used for official MC production, e.g. if you want to run the job transform scripts for a private MC production. These caches are usually installed at the NAF. You can set them up in the following way, e.g. for 15.1.0.1:
source ~/cmthome/setup.sh -tag=15.1.0.1,AtlasProduction
Please note that your testarea now ends in 15.1.0.1; see the shell variable $TestArea for the full path.
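For example, after switching to the cache (you may have to create the matching testarea directory first):

mkdir -p $HOME/atlas/testarea/15.1.0.1
source ~/cmthome/setup.sh -tag=15.1.0.1,AtlasProduction
echo $TestArea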
AthenaROOTAccess
Please, start from a clean shell to avoid any contamination of your environment from previous work!
With AthenaROOTAccess (ARA) you can use ROOT to interactively analyse AODs or DPDs. Clearly, some information about the ATLAS Event Data Model (EDM) is needed; therefore you need the ATLAS software, which also contains the required ROOT installation.
Installation
For release 15.1.0 you need the following packages for AthenaROOTAccess:
- PhysicsAnalysis/AthenaROOTAccess AthenaROOTAccess-00-05-46-01
- PhysicsAnalysis/AthenaROOTAccessExamples AthenaROOTAccessExamples-00-00-25
Follow the instructions below to check them out and compile them. Start from a new shell:
mkdir -p $HOME/atlas/testarea/15.1.0
source $HOME/cmthome/setup.sh -tag=15.1.0,local
cd $TestArea
kinit cernusername@CERN.CH
cmt co -r AthenaROOTAccess-00-05-46-01 PhysicsAnalysis/AthenaROOTAccess
cmt co -r AthenaROOTAccessExamples-00-00-25 PhysicsAnalysis/AthenaROOTAccessExamples
setupWorkArea.py
cd WorkArea/cmt
cmt config
cmt broadcast cmt make
cd ../..
Getting Started
In the following you should work with the interactive features of ARA. Go into the AthenaROOTAccessExamples package and prepare your run directory:
cd $TestArea/PhysicsAnalysis/AthenaROOTAccessExamples
mkdir run
cd run
cp ../../AthenaROOTAccess/share/test.py .
ln -s /scratch/current/atlas/DUW09/intro/mc08.106050.PythiaZee_1Lepton.merge.AOD.e347_s462_r635_t53_tid064381/AOD.064381._00171.pool.root.1 AOD.pool.root
As the core of AthenaROOTAccess is written in python, you need to call the python script test.py from within ROOT to load the AOD content. By default the file AOD.pool.root will be loaded. Look at the ARA TWiki page to learn how to use TChainROOTAccess, the ARA equivalent of a ROOT TChain, to load more than one file.
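As an illustration, here is a minimal python sketch of such a chain, modelled on what test.py does. The helper names (TChainROOTAccess, makeTree) and the file names are assumptions here, so check the ARA TWiki for the exact interface of your release:

import ROOT
import PyCintex
import AthenaROOTAccess.transientTree

# hypothetical input files, adapt to your dataset
chain = ROOT.AthenaROOTAccess.TChainROOTAccess('CollectionTree')
chain.Add('AOD.file1.pool.root')
chain.Add('AOD.file2.pool.root')

# build the transient tree on top of the chain
CollectionTree_trans = AthenaROOTAccess.transientTree.makeTree(chain)

To load just the default single file, run test.py from within ROOT: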
root -l
root [0] TPython::Exec("execfile('test.py')")
root [1] CollectionTree_trans = (TTree *)gROOT->Get("CollectionTree_trans")
Now you are ready to go. You can work with the CollectionTree_trans tree like a normal ROOT tree.
# list all objects in the tree, the name of the object is the StoreGate key
CollectionTree_trans->Print()
# get the number of entries
CollectionTree_trans->GetEntries()
# draw a variable
CollectionTree_trans->Draw("PhotonAODCollection.e()")
# scan a variable
CollectionTree_trans->Scan("PhotonAODCollection.e()")
# get the size of a container, note the @ at the beginning of the variable name
CollectionTree_trans->Draw("@PhotonAODCollection.size()")
# start the browser
TBrowser b
When the transient collection tree is loaded, the dictionaries of all contained classes are automatically loaded as well and are available within the ROOT session. For example, tab completion can be used to browse class names, member functions and data members.
Let's try some out:
Particle properties like momentum, mass and angles are usually accessible via the I4Momentum interface [ doxygen ]. Type I4Momentum:: and press the tab key to get a list. You should see member functions such as px, py, pz, pt, eta and m.
Particle properties like charge and particle ID are usually accessible via the IParticle interface [ doxygen ]. Type IParticle:: and press the tab key to get a list. You should see member functions such as charge, hasCharge and pdgId. You should also see all the member functions of I4Momentum, since IParticle inherits from I4Momentum.
With tab completion you can also complete a member function name. Move the cursor one position back and press tab again, and you will get the exact signature of the function. Try this for the member function I4Momentum::hlv() and work out what it does, or look up its purpose in the doxygen documentation.
In the tutorial you will work with tracks, jets, electrons, photons, muons and taus. Some of the classes live in a namespace, which you have to prepend to the class name; namespace and class name are separated by ::, and the same separator is used between the class name and its data members or member functions. Examples are Analysis::Electron, Analysis::Photon, Analysis::Muon and Analysis::TauJet. Look at the member functions and data members of one of these classes.
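For instance, the electron container can be used just like the photon container in the examples above. This assumes the usual StoreGate key ElectronAODCollection; verify the key with Print() if in doubt:

CollectionTree_trans->Draw("ElectronAODCollection.pt()")
CollectionTree_trans->Scan("ElectronAODCollection.charge()")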