#acl EditorsGroup:read,write All:read
----
This is a basic page of documentation for ATLAS NAF users. The main NAF documentation can be found on [[https://confluence.desy.de/display/IS/NAF+-+National+Analysis+Facility|this]] confluence page.
== Accounts ==
=== Getting an account ===
The NAF working environment is fully integrated into the standard DESY computing infrastructure: there is no distinction between a "DESY account" and a "NAF account". There is however a dedicated unix group for ATLAS on the NAF, '''AF-ATLAS''', which is usually added to all new ATLAS accounts.
The ATLAS group has a set of working group servers, used as the NAF [[#Login|entry point]], which can be accessed by any member of the DESY group AF-ATLAS. If you are an existing DESY user with a full ATLAS account, you need to be added to the AF-ATLAS group in order to access NAF 2: send an email to naf-atlas-support@desy.de
If you previously had a DESY account (possibly with another group) that has since been deactivated, the old account should be reactivated rather than a new one created. Please send an email to naf-atlas-support@desy.de with your first name, last name and username, and specify that you would like to reactivate your account in order to get access to the NAF.
If you are a non-DESY user, you can request a NAF account [[https://naf2-request.desy.de|here]]. NAF registration by non-DESY users requires a Grid certificate imported into your browser for authentication purposes.
More information on NAF accounts is available [[https://confluence.desy.de/display/IS/Getting+a+NAF+account|here]], including how to reset passwords (required every 6 months).
=== Login ===
To log in to the NAF you must be a member of the group '''AF-ATLAS'''. Simply ssh to the ATLAS NAF working group servers: {{{ ssh -X -l yourusername naf-atlas.desy.de }}} In reality this is a load-balancing alias to a pool of eight working group servers, {{{naf-atlas11-18}}}, all running EL7.
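If you log in frequently, a minimal entry in your {{{~/.ssh/config}}} can shorten the command; this is only a sketch, the host alias "naf" is purely illustrative and the options are standard OpenSSH ones:
{{{
Host naf
    HostName naf-atlas.desy.de
    User yourusername
    ForwardX11 yes
}}}
With this in place, {{{ssh naf}}} is equivalent to the full command above.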
Up to date information on the NAF working group servers and login is [[https://confluence.desy.de/display/IS/NAF+Login%2C+WGS+and+remote+Desktop|here]].
=== Login to DESY and/or eduroam networks at DESY ===
You can log in to the [[https://it.desy.de/services/networks/wlan_at_desy/desy/index_eng.html|DESY]] or [[https://it.desy.de/services/networks/wlan_at_desy/eduroam/index_eng.html|eduroam]] !WiFi networks at DESY if your laptop is registered in {{{QIP}}} and you have the correct resources on your account, namely "{{{wlan_desy}}}" and "{{{eduroam}}}". To obtain these, send an email to naf-atlas-support@desy.de.
The login is via {{{ @desy.de }}}, using a password different from your account password. Once you have the relevant groups, you can set this password yourself at [[https://password.desy.de|password.desy.de]].
Finally, you also need to download the relevant "profiles" provided in the "Configuration of the !WiFi Networks" section [[https://it.desy.de/services/networks/wlan_at_desy/index_eng.html|here]].
=== NAF remote desktop login ===
You can use this browser-based login to start a shell within the DESY firewall, for example to access internal DESY pages:
## Point your browser at https://nafhh-x1.desy.de:3443 or https://nafhh-x2.desy.de:3443
Point your browser at https://naf-x-el.desy.de:3389 (currently a CentOS 7 system) or https://naf-x-ubuntu.desy.de:3389 (currently an Ubuntu 20.04 system).
More details can be found [[https://confluence.desy.de/display/IS/NAF+Login%2C+WGS+and+remote+Desktop#NAFLogin,WGSandremoteDesktop-NAFRemoteDesktop|here]].
=== AFS quota ===
The afs cell used in the NAF is the standard DESY afs cell {{{/afs/desy.de}}}. To learn about your current quota usage, run {{{ fs lq }}} either in your afs {{{$HOME}}} or in {{{$HOME/xxl}}} (see [[#Additional AFS space: xxl|here]]).
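The {{{fs lq}}} (listquota) command also accepts an explicit path, so you can check both areas from anywhere, for example (assuming the xxl directory already exists, see below):
{{{
fs lq $HOME
fs lq $HOME/xxl
}}}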
=== Account expiry ===
Accounts typically expire 3 years after creation, but may be extended if the owner continues to work on ATLAS with the same or a different ATLAS-D group. Expiration notification e-mails are regularly sent to the DESY email address, beginning 30 days before the expiration date.
If an extension is needed, please write to the namespace supervisors listed in the expiration notification e-mail, stating '''the username, the reason(s) for extension, and the additional length of time required'''.
== Grid certificates ==
Information on how to get a DESY grid certificate from GÉANT is available [[https://confluence.desy.de/display/grid/Grid+User+Certificates+New|here]]. Alternatively, a CERN grid certificate can be obtained [[https://ca.cern.ch/ca/user/Request.aspx?template=EE2User|here]].
=== Deploying the certificate ===
To use the grid and the corresponding software, make sure you have your {{{usercert.pem}}} and {{{userkey.pem}}} in your {{{$HOME/.globus/}}} directory at DESY and/or CERN, and that they have the right permissions. A script is available [[attachment:CertConvert.sh|here]] to extract the {{{.pem}}} files from your {{{grid.certificate.p12}}}. Copy the script, set the correct permissions for an executable using {{{chmod 751}}}, and then do: {{{ CertConvert.sh -pem grid.certificate.p12 }}}
Many ATLAS interfaces such as [[https://rucio-ui.cern.ch|Rucio]], [[https://ami.in2p3.fr/index.php/en/|AMI]] and [[https://atlas-tagservices.cern.ch/tagservices/RunBrowser/index.html|COMA]] use authentication via grid certificate, and it is necessary to upload the certificate to your browser. It is recommended to use the Firefox web browser, and old certificates should be removed to avoid possible conflicts.
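If you prefer not to use the script, the extraction can also be done by hand with standard {{{openssl}}} commands; this is only a sketch and assumes your exported certificate file is called {{{grid.certificate.p12}}}:
{{{
mkdir -p $HOME/.globus
openssl pkcs12 -in grid.certificate.p12 -clcerts -nokeys -out $HOME/.globus/usercert.pem
openssl pkcs12 -in grid.certificate.p12 -nocerts -out $HOME/.globus/userkey.pem
chmod 644 $HOME/.globus/usercert.pem
chmod 400 $HOME/.globus/userkey.pem
}}}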
=== ATLAS VO membership ===
You will need to be a member of the ATLAS VO; membership can be applied for [[https://voms2.cern.ch:8443/voms/atlas/user/home.action|here]]. This will allow you to access ATLAS data and get a proxy.
=== Setting up a proxy ===
To use the grid certificate, you first have to set up the grid environment: {{{ lsetup emi }}} This is also executed during the Rucio setup.
Now you can create a proxy for the ATLAS-D role: {{{ voms-proxy-init -voms atlas:/atlas/de }}} or get a proxy with the plain ATLAS role: {{{ voms-proxy-init -voms atlas }}}
To get information about your current proxy, for instance how long it will remain valid, use: {{{ voms-proxy-info -all }}}
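If you need a proxy that stays valid for a longer interactive session, you can request the lifetime explicitly; this is only a sketch, and the VOMS server caps the maximum lifetime of the VOMS attributes:
{{{
voms-proxy-init -voms atlas:/atlas/de -valid 96:00
}}}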
== Storage on the NAF ==
=== DUST ===
The main local storage for the NAF is DUST. Please note that this is "scratch" space, i.e. '''there is no backup'''. DUST space is mounted on all ATLAS NAF working group servers and is visible from the batch system.
An initial 1 TB of space can be created for each NAF user under: {{{ /nfs/dust/atlas/user/yourusername }}} Initial requests for DUST space and requests to increase the quota of existing DUST space should be made to naf-atlas-support@desy.de. Please specify '''your username''' and, in the case of an increase request, '''how much additional space you require'''.
==== Group spaces on DUST ====
There are also several group spaces in the following directory: {{{ /nfs/dust/atlas/group }}} Membership of a particular group can be applied for via naf-atlas-support@desy.de. This is usually needed when people are working in the same analysis area and want to share files, although it should be noted that by default all files on DUST are readable by all users.
'''Please note''': the group directories on DUST are set up to give read and write access for all members to all newly created sub-directories. As a member, you can therefore create, read '''and delete''' directories and files in this group directory.
The current groups in the ATLAS DUST space are:
|| '''Group''' || '''Directory''' || '''Responsible''' || '''Description''' ||
|| AF-ATLZEED || /nfs/dust/atlas/group/zeed || Sasha Glazov || Z to ee analyses ||
|| AF-ATLZPRIME || /nfs/dust/atlas/group/zprime || Eleanor Jones || zprime analyses ||
|| AF-ATLJETMET || /nfs/dust/atlas/group/jetmet || Pavel Starovoitov || Jet-MET analyses ||
|| AF-ATLTOP || /nfs/dust/atlas/group/top || Timothée Theveneaux-Pelzer || Top analyses ||
|| AF-ATLEXTH || /nfs/dust/atlas/group/extendedhiggs || Claudia Seitz || Extended Higgs analysis ||
|| AF-ATLTTPH || /nfs/dust/atlas/group/ttbarphoton || Carmen Diez || ttbar plus photon analysis ||
|| AF-ATLWAI || /nfs/dust/atlas/group/wai || Ludovica Aperio Bella || W-Ai analysis ||
|| AF-ATLZAI || /nfs/dust/atlas/group/zai || Ludovica Aperio Bella || Z-Ai analysis ||
|| AF-ATLHTT || /nfs/dust/atlas/group/htautau || Karsten Koeneke || Higgs to tau tau analysis ||
|| AF-ATLHVNU || /nfs/dust/atlas/group/heavynu || Krisztian Peters || Heavy nu analysis ||
|| AF-ATLHEAVYH || /nfs/dust/atlas/group/ahtottbar || Katharina Behr || Heavy Higgs searches ||
|| AF-ATLTYC || /nfs/dust/atlas/group/topyukawa || Thorsten Kuhl || Top yukawa coupling ||
|| AF-TTENTANGLE || /nfs/dust/atlas/group/ttEntangle || Eleanor Jones || ttEntangle analysis ||
You can view your DUST usage and quota '''[[https://amfora.desy.de/|here]]''' from inside the DESY firewall, as well as on the command line via: {{{ my-dust-quota -g }}} where the "-g" option also displays the usage and quota of any groups to which you belong.
Take note of the '''DUST lifecycle policy''' detailed [[https://confluence.desy.de/pages/viewpage.action?pageId=98078732|here]], which governs how and when data on DUST are automatically removed.
=== Additional AFS space: xxl ===
An additional, larger afs space can be created for each NAF user, in the directory {{{$HOME/xxl}}}. An initial 10 GB quota is applied, which may be increased up to a maximum of 30 GB if necessary.
To get a NAF afs {{{$HOME/xxl}}} directory, please contact naf-helpdesk@desy.de
=== Grid storage: DESY-HH_LOCALGROUPDISK ===
If you run on the grid, the files of your output dataset are usually placed on the SCRATCHDISK of the site where you were running (see [[#Grid_certificates|here]] for information on grid certificates).
A large dCache-based storage is provided for the NAF as a dedicated resource, '''DESY-HH_LOCALGROUPDISK''', which is visible from the NAF batch system and should be used to store large files. An initial quota of 10 TB is automatically assigned to all users with a German grid certificate via the /atlas/de role (see [[#ATLAS_VO_membership|here]]). You can also request the /atlas/de role [[https://voms2.cern.ch:8443/voms/atlas/user/home.action|here]].
Your current quota and usage on '''all''' Rucio Storage Elements (RSEs) can be checked [[https://rucio-ui.cern.ch/account_rse_usage|here]], and you can select DESY-HH_LOCALGROUPDISK (or any other) using the ''Search'' box. Requests for additional quota on DESY-HH_LOCALGROUPDISK should go to naf-atlas-support@desy.de; please don't forget to specify '''your CERN username''' and '''how much additional quota is needed'''.
Similarly, you can check your current list of Rucio rules on each RSE [[https://rucio-ui.cern.ch/r2d2|here]] and request transfers via [[https://rucio-ui.cern.ch/r2d2/request|this interface]]. The Rucio command line interface is documented [[https://rucio.readthedocs.io/en/latest/man/rucio.html|here]].
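As a sketch of the command-line equivalent (after {{{lsetup rucio}}} and with a valid VOMS proxy; the dataset name below is purely illustrative), a replication rule to DESY-HH_LOCALGROUPDISK can be created and inspected with:
{{{
rucio add-rule user.yourusername.mydataset 1 DESY-HH_LOCALGROUPDISK
rucio list-rules --account yourusername
rucio list-account-usage yourusername
}}}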
The overall use of LOCALGROUPDISKs in the German cloud can be checked using [[https://localgroupdisk.pleiades.uni-wuppertal.de/index.html|this monitoring]] (grid certificate required in browser). Here are some direct links for [[https://localgroupdisk.pleiades.uni-wuppertal.de/disks/DESY-HH_LOCALGROUPDISK/index.html|DESY-HH_LOCALGROUPDISK]] and [[https://localgroupdisk.pleiades.uni-wuppertal.de/disks/DESY-ZN_LOCALGROUPDISK/index.html|DESY-ZN_LOCALGROUPDISK]].
== Batch system on the NAF ==
The NAF batch system is part of the BIRD (Batch Infrastructure Resource at DESY) general purpose cluster, based on HTCondor. Full documentation, including details on OS versions and example configurations, can be found on [[https://confluence.desy.de/display/IS/BIRD|this page]].
In particular, there are pages on [[https://confluence.desy.de/display/IS/Submitting+Jobs|submitting jobs]] and [[https://confluence.desy.de/pages/viewpage.action?pageId=109165934|checking and managing jobs]] on the batch system.
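As a minimal sketch (the file and script names are only illustrative; see the BIRD pages above for the options actually supported and recommended on the NAF), a simple HTCondor submit description file could look like:
{{{
# myjob.submit - assumes an executable script myjob.sh in the current directory
executable     = myjob.sh
arguments      = $(ClusterId) $(ProcId)
output         = myjob.$(ClusterId).$(ProcId).out
error          = myjob.$(ClusterId).$(ProcId).err
log            = myjob.$(ClusterId).log
request_memory = 2 GB
queue
}}}
Submit it with {{{condor_submit myjob.submit}}} and follow its status with {{{condor_q}}}.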
Some monitoring is available [[http://bird.desy.de/stats/day.html|here]].
Problems with the batch system can be reported directly to bird.service@desy.de, including for GPU slots.
=== Access to NAF GPUs ===
There are a number of GPUs available within the NAF batch system; information on those is available [[https://confluence.desy.de/display/IS/GPU+on+NAF|here]]. Access is restricted to people with the {{{nafgpu}}} resource, which can be requested via naf-atlas-support@desy.de.
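In an HTCondor submit file, a GPU slot is then requested with the standard knob (a sketch; check the GPU page above for any NAF-specific requirements):
{{{
request_gpus = 1
}}}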
== ATLAS software ==
The NAF has the advantage for ATLAS that all software is taken from CERN via CVMFS. The following repositories are mounted on all ATLAS NAF working group servers, as well as on the batch system:
{{{
/cvmfs/atlas.cern.ch
/cvmfs/atlas-condb.cern.ch
/cvmfs/atlas-nightlies.cern.ch
/cvmfs/sft.cern.ch
/cvmfs/sft-nightlies.cern.ch
/cvmfs/unpacked.cern.ch
}}}
ATLAS has a very active analysis support team, DAST, which is documented [[https://twiki.cern.ch/twiki/bin/view/AtlasComputing/AtlasDAST|here]] and can be contacted via atlasdast@gmail.com.
=== Initialising the environment ===
To set up the ATLAS software, do:
{{{
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/user/atlasLocalSetup.sh
}}}
Note that adding these two lines to your {{{.zprofile}}}, in particular the source command, creates some setup interference and causes problems with global PATH variables. Therefore, it is advisable to execute them only once you have successfully logged in to the NAF; an alias-based alternative is sketched below the output.
From this command, you will get an output like this:
{{{
lsetup               lsetup <tool1> [ <tool2> ...] (see lsetup -h):
 lsetup agis         ATLAS Grid Information System
 lsetup asetup       (or asetup) to setup an Athena release
 lsetup atlantis     Atlantis: event display
 lsetup eiclient     Event Index
 lsetup emi          EMI: grid middleware user interface
 lsetup ganga        Ganga: job definition and management client
 lsetup lcgenv       lcgenv: setup tools from cvmfs SFT repository
 lsetup panda        Panda: Production ANd Distributed Analysis
 lsetup pod          Proof-on-Demand (obsolete)
 lsetup pyami        pyAMI: ATLAS Metadata Interface python client
 lsetup root         ROOT data processing framework
 lsetup rucio        distributed data management system client
 lsetup views        Set up a full LCG release
 lsetup xcache       XRootD local proxy cache
 lsetup xrootd       XRootD data access
advancedTools        advanced tools menu
diagnostics          diagnostic tools menu
helpMe               more help
printMenu            show this menu
showVersions         show versions of installed software

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 You are running on a RHEL7 compatible OS.
 Please refer to these pages for the status and open issues:
  For releases:
    https://twiki.cern.ch/twiki/bin/view/AtlasComputing/CentOS7Readiness#ATLAS_software_status
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
}}}
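A common pattern to avoid sourcing anything at login time while keeping the setup one command away is to define only an alias in your {{{.zprofile}}} (a sketch, not NAF-specific; the setup is then triggered by typing {{{setupATLAS}}}):
{{{
# define the environment variable and an alias only; nothing is sourced at login
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
}}}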
=== Diagnostics ===
To use some diagnostics tools provided by ATLAS, type {{{ diagnostics }}} which will return:
{{{
 checkOS        check the system OS of your desktop
 db-fnget       run fnget test for Frontier-squid access
 db-readReal    run readReal test for Frontier-squid access
 gridCert       check for user grid certificate issues
 rseCheck       check a Rucio RSE
 runKV          run Kit Validation of an Athena release
 setMeUp        check readiness for tutorial
 setMeUpData    download datafiles for tutorial
 supportInfo    dump information to send to user support
 toolTest       test one or more tools
}}}
For example, {{{gridCert}}} is a very useful tool that performs several tests on your grid certificate, in case you fear something is broken.
=== Some examples ===
Rucio is initialised as follows: {{{ lsetup rucio }}} This also issues the {{{lsetup emi}}} command to set up the grid tools, allowing you to get a VOMS proxy (see [[#Setting_up_a_proxy|here]]).
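A short sketch of a typical Rucio session (the dataset name is purely illustrative):
{{{
lsetup rucio
voms-proxy-init -voms atlas
rucio whoami
rucio download user.yourusername.mydataset
}}}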
The Athena software environment is initialised in the following way, for example for release 21.2: {{{ asetup AthAnalysis,21.2,latest }}}
If one needs a newer version of ROOT, the recommended way is to use LCG views (the full list of LCG releases is available [[http://lcginfo.cern.ch|here]]). One can choose the release, target OS and compiler, for example: {{{ lsetup "views LCG_100 x86_64-centos7-gcc10-opt" }}}
It is still possible to load a specific version of ROOT without the rest of the LCG stack, but this is no longer recommended and newer releases might not be available: {{{ lsetup "root 6.18.04-x86_64-centos7-gcc8-opt" }}}