Tier3 Monitoring

In 2011-2012, JINR staff members participated in an R&D project on the development and deployment of a monitoring system for Tier-3 sites, the computing centers of the third layer in the hierarchy of data-processing centers at the Large Hadron Collider (LHC). For detailed information see ATL-SOFT-PUB-2011-001, CERN, 2011.

LHC Tier-3 centers consist of non-pledged resources (that is, there are no strict requirements on the volume of resources) dedicated mostly to data analysis by geographically close or local groups of users. Tier-3 sites comprise a range of architectures, and many do not run Grid middleware, which makes Tier-2 monitoring systems inapplicable to them.

The T3 monitoring software suite enables both local monitoring of individual Tier-3 sites and a global view of the computing activities of the LHC virtual organizations at T3 sites.

Development was carried out in three directions:

  1. Local site monitoring (PROOF, ROOT, XRootD, PBS, and Condor) using Ganglia.
  2. Site integration into the global Grid monitoring, i.e. delivery of monitoring data from Tier-3 sites to the CERN Dashboard and data visualization in the Dashboard.
  3. File transfer monitoring in the XRootD federation (see the sketch after this list).
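
For the third direction, XRootD servers can be configured (via the xrd.report directive) to emit periodic summary statistics as XML datagrams over UDP, which a lightweight collector can receive and aggregate. The following is a minimal sketch of such a collector; the port number and the printed fields are assumptions for illustration, not the project's actual collector.

```python
# Minimal sketch of a collector for XRootD summary monitoring reports.
# Assumes each xrootd server has an "xrd.report" directive pointing at
# this host/port; the endpoint below is hypothetical.
import socket
import xml.etree.ElementTree as ET

LISTEN_ADDR = ("0.0.0.0", 9931)   # hypothetical collector endpoint

def serve():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    while True:
        data, (host, _port) = sock.recvfrom(65536)
        try:
            report = ET.fromstring(data)   # summary reports are XML
        except ET.ParseError:
            continue                       # skip malformed packets
        # Each <stats id="..."> block groups related counters.
        for block in report.findall("stats"):
            for counter in block:
                print(host, block.get("id"), counter.tag, counter.text)

if __name__ == "__main__":
    serve()
```

A real collector would aggregate these counters per site and forward derived transfer rates upstream rather than printing them.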

At the first stage, the software package was aimed at meeting the requirements of the ATLAS collaboration, but the solutions implemented within this project are expected to be generic, so that other Virtual Organizations (VOs), within or outside the LHC experiments, can use them.

The developed software suite allows monitoring of Tier-3 sites as local computing farms and also provides a global monitoring view of the services provided by Tier-3 centers.

The local monitoring system collects, aggregates, and displays information from the local Tier-3 monitoring sources:

  • detailed monitoring of the local fabric (monitoring of the overall cluster or clusters, of each individual node in the cluster, and of network usage);
  • monitoring of the batch system (distribution of tasks across nodes);
  • monitoring of job processing;
  • monitoring of the mass storage system (total and available space, number of connections, I/O performance);
  • monitoring of VO computing activities at the local site.
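
Beyond the metrics Ganglia's gmond gathers out of the box, site-specific quantities such as batch-queue depth can be injected with the standard gmetric tool. The sketch below assumes gmetric is installed and a local gmond is running; the metric name pbs_queued_jobs and the qstat output parsing are illustrative, not part of the actual suite.

```python
# Minimal sketch of feeding a custom batch-system metric into Ganglia.
import subprocess

def count_queued_jobs():
    """Count PBS jobs in the queued ('Q') state from qstat output."""
    out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
    queued = 0
    for line in out.splitlines():
        fields = line.split()
        # In default qstat output, the job state is the second-to-last column.
        if len(fields) >= 6 and fields[-2] == "Q":
            queued += 1
    return queued

def publish(value):
    """Push the metric to the local gmond via the gmetric CLI."""
    subprocess.run([
        "gmetric",
        "--name", "pbs_queued_jobs",   # illustrative metric name
        "--value", str(value),
        "--type", "uint32",
        "--units", "jobs",
    ], check=True)

if __name__ == "__main__":
    publish(count_queued_jobs())
```

Run periodically (e.g., from cron), such a script makes the queue depth appear alongside the standard Ganglia host metrics on the site's web frontend.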

The central Tier-3 monitoring aims to give a global view of the usage of Tier-3 resources by different VOs in terms of data transfers, job processing, and Grid service quality.
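
One common way to deliver such site-level records to a central service is through a message broker. As a rough illustration of that pattern, the sketch below publishes a site summary record over STOMP using the stomp.py library; the broker address, topic name, credentials, and record fields are all assumptions for illustration, not the actual Dashboard endpoints.

```python
# Minimal sketch of shipping a site summary record to a central
# monitoring service over STOMP. All endpoints below are hypothetical.
import json
import time
import stomp

BROKER = [("msg.example.org", 61613)]    # hypothetical broker host/port
TOPIC = "/topic/tier3.site.summary"      # hypothetical destination

def send_summary(site, queued, running, storage_free_tb):
    record = {
        "site": site,
        "timestamp": int(time.time()),
        "jobs_queued": queued,
        "jobs_running": running,
        "storage_free_tb": storage_free_tb,
    }
    conn = stomp.Connection(BROKER)
    conn.connect("user", "password", wait=True)  # hypothetical credentials
    conn.send(destination=TOPIC, body=json.dumps(record))
    conn.disconnect()

if __name__ == "__main__":
    send_summary("JINR-T3", queued=12, running=48, storage_free_tb=3.5)
```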

The current status of the project was reported, in particular, at the JINR Program Advisory Committee for Particle Physics (January 2012) and at the CHEP’2012 conference. Details…

After the successful completion of the project, each of the implemented components is used and further developed in other projects:

  • the modified RPM Package Manager is in use in the CERN Dashboard;
  • XRootD federation monitoring became part of the global monitoring and is under continuous development by the CERN Dashboard team;
  • the local monitoring system is used at many Tier-3 sites.
