
                                          FP6 Research Infrastructures
                                                   SEE-GRID
             South-Eastern European Grid-enabled eInfrastructure Development




                                             Deliverable D5.4
                                          Demonstration labs

                         Author(s):     SEE-GRID consortium
                   Status –Version:     Final
                               Date:    February 26th, 2006
               Distribution - Type:     Confidential
                              Code:     SEE-GRID-Deliverable-D5.4

Abstract: Deliverable D5.4 – The scope of the “Demonstration labs” document is to give an overview of the latest
information about the SEE sites. This deliverable deals with the SEE-GRID resources deployed at local, national and
regional levels. The document functions as a “log-file” of the steps required to deploy the LCG-2 m/w and evaluates
the current status of the SEE-GRID sites. It describes the results achieved, the challenges encountered and the
solutions to the problems that occurred during the deployment phases. Moreover, this deliverable includes the final
information about the P-GRADE portal, which is the web-based entry point to the SEE resources, and includes the latest
version of the Resource Centre snapshot as well.

                                          Copyright by the SEE-GRID Consortium
                                          The SEE-GRID Consortium consists of:
GRNET                                                  Coordinating Contractor           Greece
CERN                                                   Contractor                        Switzerland
CLPP-BAS                                               Contractor                        Bulgaria
ICI                                                    Contractor                        Romania
TUBITAK                                                Contractor                        Turkey
SZTAKI                                                 Contractor                        Hungary
ASA                                                    Contractor                        Albania
BIHARNET                                               Contractor                        Bosnia-Herzegovina
UKIM                                                   Contractor                        FYR of Macedonia
UOB                                                    Contractor                        Serbia-Montenegro
RBI                                                    Contractor                        Croatia




This document contains material which is the copyright of certain SEE-GRID contractors and the EC, and may not be
reproduced or copied without permission.
The commercial use of any information contained in this document may require a license from the proprietor of that
information.
The contractors do not warrant that the information contained in the report is capable of use, or that use of the
information is free from risk, and accept no liability for loss or damage suffered by any person using this information.







Document Revision History

    Date                Issue   Editor                Summary of main changes

    Feb. 1, 2006        a       Jozsef Patvarczki     Initial Draft
    Feb. 21, 2006       a       Antun Balaz           UOB contribution
    Feb. 21, 2006       a       N. Frasheri           INIMA contribution
    Feb. 21, 2006       a       Valentin Vidić        RBI contribution
    Feb. 22, 2006       a       Onur Temizsoylu       TUBITAK contribution
    Feb. 22, 2006       a       Nikos Vogiatis        GRNET contribution
    Feb. 23, 2006       a       Todor Gurov           IPP contribution
    Feb. 23, 2006       a       Gabriel Neagu         ICI contribution
    Feb. 23, 2006       a       Miklos Kozlovszky     SZTAKI contribution
    Feb. 27, 2006       a       Boro Jakimovski       MARNET contribution
    Feb. 27, 2006       a       Mihajlo Savic         BIHARNET contribution







Preface
SEE-GRID aims to provide specific support actions to assist the participation of the SE European states in the pan-
European and worldwide Grid initiatives, by establishing a seamless and interoperable pilot Grid infrastructure that will
expand and support the European Research Area (ERA) in the region. The interconnection of the regional infrastructure
with the pan-European and worldwide Grid initiatives will allow the smaller, less-resourced sites in SE Europe to access
computing power that would otherwise be unaffordable. In this perspective, the SEE-GRID initiative will ease the digital
divide, release the scientific and productive talents of the region and allow equal participation of the targeted countries
in pan-European Grid efforts in the immediate future. The project involves the National Grid Initiatives of Albania,
Bosnia-Herzegovina, Bulgaria, Croatia, FYR of Macedonia, Greece, Hungary, Romania, Serbia-Montenegro, and Turkey.


The main objectives of the SEE-GRID project are:
    1.   Create a human network in the area of Grids, eScience and eInfrastructures in SE Europe and promote
         awareness in the region regarding Grid developments through dissemination conferences, training material and
         demonstrations for hands-on experience.
    2.   Integrate incubating and existing National Grid infrastructures in all SEE-GRID countries. This will be
         accomplished by building upon and exploiting the infrastructure provided by the Gigabit Pan-European
         Research & Education Network (GEANT) and the South-East European Research and Education Networking
         (SEEREN) initiative in the region.
    3.   Ease the digital divide and bring SEE Grid communities closer to the rest of the continent.
    4.   Establish a dialogue at the level of policy developments for research and education networking and provide
         input to the agenda of national governments and funding bodies.
    5.   Promote awareness in the region regarding Grid developments through dissemination conferences, training
         material and demonstrations for hands-on experience, in coordination with EGEE, which will promote the
         project results to the private and public sector, ultimately reaching the general public.
    6.   Migrate and test Grid middleware components and APIs developed by pan-European and national Grid efforts
         in the regional infrastructure.
    7.   Deploy (adapt if necessary) and test Grid applications developed by EGEE (Large Hadron Collider Computing
         Grid, Biomedical Grids) in the regional infrastructure.
    8.   Demonstrate an additional Grid application of regional interest (e.g. earthquake prediction,
         meteorology/climate).
    9.   Integrate available pilot Resource Centres of Albania, Bosnia-Herzegovina, Croatia, FYR of Macedonia,
         Serbia-Montenegro and Turkey into the EGEE-compatible infrastructure.
    10. Expand the operations and support centre of the EGEE SE Europe Federation to cater for the operations in the
        above countries.


The expected results of the project are:
    1.   Study and analyze Grid requirements for the region; prepare relevant roadmaps and cookbooks that will be used to
         present the findings and guide the respective development of research or innovation strategies
    2.   Select and tune the appropriate middleware solutions with the existing underlying resources (storage,
         computing power, connectivity, clusters, etc) and background knowledge to ensure stability, secure operation
         and interoperability among the selected sites.
    3.   Deploy (and adapt if needed) the two EGEE applications: the "Large Hadron Collider Computing Grid (LCG)",
         targeting the storage and analysis of petabytes of real and simulated data from CERN high-energy physics
         experiments, and the "Biomedical Grids" addressing the need of several communities to cope with the flood of
         bioinformatic and healthcare data (a prime example being the Health Grid association). A third application of
         regional interest will be developed and demonstrated.




     4.   Evaluate and propose operation structures (operation centres, helpdesk and support centres) for running and
          deploying the Grid at local, national and regional tier levels, along with the necessary policy support,
          including legal structures that will govern the operation of the SEE-GRID infrastructure
     5.   Establish training and dissemination mechanisms such as proof of concept and demonstration labs, seminars
          and scientific presentations and workshops in cooperation with EGEE to promote the potential provided by the
          SEE-GRID project.
The SEE-GRID project kicked off in May 2004 and is planned to be completed by April 2006. It is coordinated by
GRNET with 10 contractors participating in the project: representatives of the NGIs of Albania, Bosnia-Herzegovina,
Bulgaria, Croatia, FYR of Macedonia, Hungary, Romania, Serbia- Montenegro, Turkey, and CERN in a consulting role
and as liaison to the pan-European grid effort EGEE. The total budget is 1,215,000 €. The project is funded by the
European Commission's Sixth Framework Programme/Research Infrastructures.
The project plans to issue the following deliverables:

   Del. no.  Deliverable name                                                             Nature         Security      Planned Delivery

  D1.1     Project management information system and contractual relationships              R, O               CO    M01
  D5.1a    Internal and external web site, docs repository and mailing lists                R, O          CO, PU     M02
  D5.2a    Promotional package                                                                R                PU    M03
  D5.3     Policy workshop                                                                  R, O               PU    M04
  D2.1     Requirements capture and analysis                                                  R                PU    M04
  D2.2     Technological roadmap for the SEE-GRID National Grid Initiatives                   R                PU    M06
  D3.1     Specifications of SEE-GRID additional application                                  R                PU    M08
  D4.1     Operational, organizational and policy schemes                                     R                PU    M10
  D1.2     SEE-GRID acceptable use policy                                                     R                PU    M10
  D3.2a    Migration of middleware components and APIs                                      R, O               PU    M10
  D1.3     First year progress report                                                         R                CO    M13
  D5.2b    Promotional package                                                                R                PU    M13
  D3.3a    Migration of Grid Applications                                                   R, O               PU    M14
  D3.2b    Migration of middleware components and APIs                                      R, O               PU    M19
  D5.4     Demonstration labs                                                               R, O               PU    M21
  D4.2     SEE-GRID one-stop-shopping for Grid resources                                    R, O               PU    M22
  D5.5     SEE-GRID dissemination conference                                                R, O               PU    M22
  D3.3b    Migration of Grid Applications                                                   R, O               PU    M22
  D5.6     Plan for using and disseminating knowledge                                         R                CO    M23
  D5.1b    Internal and external web site, docs repository and mailing lists                R, O          CO, PU     M23
  D4.3     SEE-GRID infrastructure evaluation results                                         R                PU    M24
  D1.4     Final progress report                                                              R                PU    M24
  D5.2c    Promotional package                                                                R                PU    M24
  D3.2c    Migration of middleware components and APIs                                      R, O               PU    M24
  D5.7     Report on raising public participation                                             R                PU    M24
Legend: R = Report, O = Other, PU = Public, CO = Confidential (only for members of the consortium incl. EC).






Table of contents
1.          General description of the SEE-GRID pilot sites ................................................................................8
     1.1.      Short description of the SEE-GRID Resource Centers (RCs) .............................................................8
     1.2.      The infrastructure of the RCs .............................................................................................................13
     1.3.      Other typical site parameters .............................................................................................................26
2.          LCG-2 installation cycles ....................................................................................................................28
     2.1. Final Phase I: Linux installation ..........................................................................................................30
       2.1.1. Known hardware issues ..............................................................................................................30
       2.1.2. Known software issues ...............................................................................................................31
     2.2. Final Phase II: LCG-2 installation and configuration ..........................................................................32
       2.2.1. Computing Element .....................................................................................................................32
            2.2.1.2      Occurrent problems and solutions ........................................................................................................ 34
            2.2.1.3      Solutions / workarounds for the problems ............................................................................................. 35
        2.2.2.        User Interface..............................................................................................................................36
            2.2.2.1.        Certificate issues ............................................................................................................................... 36
            2.2.2.2.        Occurrent problems ........................................................................................................................... 37
            2.2.2.3.        Solutions / workarounds for the problems ......................................................................................... 38
        2.2.3.        Storage Element .........................................................................................................................38
            2.2.3.1.        Occurred problems ............................................................................................................................ 39
            2.2.3.2.        Solutions / workarounds for the problems ......................................................................................... 40
        2.2.4.        Worker Nodes .............................................................................................................................40
            2.2.4.1.        Occurred problems ............................................................................................................................ 41
            2.2.4.2.        Solutions / workarounds for the problems ......................................................................................... 42
     2.3.      Achieved Results ................................................................................................................................42
3.          Certificate Authorities .........................................................................................................................44

4.          Final state of SEE-GRID sites .............................................................................................................45

5.          P-GRADE PORTAL: The WEB-based single access point of the SEE-GRID resources ...............46
     5.1.      Overview of the P-GRADE Portal .......................................................................................................46
     5.2.      Features of P-GRADE Grid Portal......................................................................................................46
     5.3.      Detailed description of the P-GRADE Portal ......................................................................................47
     5.4.      Grid Applications in the P-GRADE Portal ...........................................................................................47
     5.5.      P-GRADE Portal Development Roadmap ..........................................................................................48
6.          Future plans .........................................................................................................................................48






References

[1]   http://www.rcub.bg.ac.yu/
[2]   http://scl.phy.bg.ac.yu/




Executive summary
What is the focus of this Deliverable?
The focus of this deliverable is to provide up-to-date information about the current status of the SEE-GRID resources. It
deals with the different problems that the SEE-GRID partners encountered while deploying the Grid at local, national and
regional levels. The deliverable acts essentially as a “log-file” of the steps and issues encountered while deploying the
LCG-2 m/w and the Linux OS, and evaluates the current status of the SEE-GRID sites according to the final LCG-2
installation phases and cycles. Furthermore, the deliverable gives detailed information about the certificate authorities
that have been created and their functions. It deals with the P-GRADE portal, the official web-based access point to the
sites, and introduces its newest features for the SEE-GRID community. Moreover, it gives a final overview of the latest
Resource Centre snapshot of the SEE sites. Finally, the deliverable identifies future deployment plans and outlines the
expected evolution.
What is next in the process to deliver the SEE-GRID results?
The conclusions and recommendations from this deliverable will feed into the remaining SEE-GRID activities. In particular, the deliverable:
- presents the final status of the project infrastructure
- documents the single access point to the regional Grid resources
- feeds its results back to all other activities.
The complete deliverable and workflow progress is described in the project Annex I – Description of Work.
What are the deliverable contents?
The deliverable starts with an overview of the current status of the SEE sites, followed by a detailed infrastructure
scheme and the final LCG-2 installation and configuration cycles. Chapter 3 deals with the certificate authorities that
have been created and the certificates they issue. Chapter 4 describes the latest Resource Centre snapshot and Chapter 5
introduces the newest features of the single access point. A further section outlines the plans and the steps required in
the near future.
Conclusions
Deliverable D5.4 gives the final overview of SEE-GRID site operations and their updated status. This deliverable will be
used by the Grid operators in the region as a reference document/logbook during the lifetime of the project and,
hopefully, beyond.







1. General description of the SEE-GRID pilot sites

1.1. Short description of the SEE-GRID Resource Centers (RCs)
.gr
GRNET is the coordinating organization of SEE-GRID. GRNET participates in the EGEE infrastructure with a full-scale,
production-level national Grid infrastructure; therefore, for the purposes of SEE-GRID, a separate small-scale test
cluster/“resource centre” was set up from scratch. The RC “test-GRNET” is hosted at the Network Management and
Optimal Design Laboratory (NETMODE) of the National Technical University of Athens (NTUA). NETMODE is
actively involved in R&D programmes sponsored by Greek and European organisations. Its interests span several areas
of communication networks, with emphasis on the planning, design and management of distributed data
communications (e.g. evolution planning towards high-speed networks, interim communication solutions, models and
requirements for services and infrastructure, methods for flow/congestion control in multimedia environments,
asynchronous video transfer and traffic models, etc.).
NETMODE is active in projects of research, development, services and training in the following areas:
         Computer & telecommunication network specifications and design.
         Network management.
         Network analysis and simulation.
         Analysis & design of specialised distributed information systems.


.bg
The Institute for Parallel Processing (IPP) at the Bulgarian Academy of Sciences has a leading position among the
scientific institutions in Bulgaria in the fields of Computer Science, Networks and Distributed Systems, Scientific
Computations, Supercomputing, Linguistic Modeling and Communications and Control. It performs research,
consultations, projects and high quality education in its fields of interest. The activities of IPP are oriented mainly to the
creation and usage of advanced mathematical and computer technologies. During the last years, fundamental results in
different areas of the Theoretical Informatics and Scientific Computations have been obtained. The Institute’s staff
develops methods, algorithms and software, and introduces advanced information technology and computer facilities.
The scientific strategy of IPP consists of the development of new high-performance algorithms for parallel computers
with shared and distributed memory, including clusters of workstations, distributed systems, tools for network security
monitoring and management, etc. The results obtained by the staff of the Institute have been applied in different fields
such as ecology, engineering, computer technology, information systems, etc. IPP is a leading institution in the country
on the development of the foundations, software infrastructures and applications for GRID technologies. Annually the
scientists from the Institute publish about 140 papers, and about 100 are in refereed international journals and
proceedings of high quality international conferences in the fields of the main laboratory activities.
IPP has organized many international conferences and workshops. Some of them are periodical as “Parallel and
Distributed Processing”, “Network Information Processing Systems”, “Numerical Methods and Applications”, “Large-
Scale Scientific Computations”, etc. Scientific seminars on Parallel Algorithms, Computer Methods for Natural
Language Processing and Contemporary Methods and Algorithms for Information Processing in Sensor Environment
take place regularly.
The Institute is an active participant in a number of projects of the programs INCO/Copernicus, Tempus, Peco and
ACTS of the EU. It participates in the scientific programs of NATO.
In 2000 it became the national Centre of Excellence in informatics for the country, and it has recently become a centre
where Grid projects are run.
Many experts of the group have been on long-term specializations in the USA, UK, Denmark, France, Germany, etc. Members
of our staff work as researchers and professors in different universities and centres world-wide.
The total staff of IPP is 112 persons including two academicians, six full professors, 24 associate professors, 39 research
assistants and 20 university educated specialists. Every year students from Sofia University (Faculty of Mathematics and
Informatics) and Sofia Technical University prepare their MSc thesis in the Institute, having as supervisors Senior
Scientists from the staff. Most of the time, there are several PhD students in the fields of Numerical Mathematics,
Computer Science, and Computer and Network Technology and their applications. Most of the Senior Scientists of IPP
are affiliated with various national and international administrative organizations such as the National Research Fund,
the Higher Evaluation Commission, the Executive Council of the Bulgarian Academy of Sciences, the International
Federation for Information Processing (IFIP), the International Council for Computer Communications (ICCC),
the American Mathematical Society (AMS), the British Computer Society, the International Council of Scientific Unions (ICSU),
the International Association of Universities, the International Commission for Mathematical Education, Green Cross, etc.

.ro
The National Institute for R & D in Informatics - ICI Bucharest was created in 1970. Since then the institute has been
acting as a leading unit in Romanian IT Research and Development. Six Academy awards were granted to ICI people. In
1999, ICI was ranked first among R & D Romanian organizations, in the EC survey “Impact of the enlargement of the
European Union towards the associated central and eastern European countries on RTD - innovation and structural
policies” (p.194, Office of official publications of the EC, 1999). The institute has been ISO-9001 certified since 2000.
Currently the institute is coordinated by the Ministry for Communications and Information Technology.
The main fields of expertise covered by this staff include knowledge engineering, database systems, computer
networking, software engineering, quality assurance in IT, business process re-engineering, decision support systems,
computer interfaces, computer integrated manufacturing, mathematical modeling, simulation and optimization,
multimedia, web technologies. Two technical journals are issued by the institute: Studies in Computers and Informatics
(an international journal) and the Romanian Journal for Informatics and Automation (in Romanian).
ICI is the National Operator of the Romanian National R&D Computer Network (RNC), providing Internet services to
about 60 research institutes, 250 business companies, and thousands of individuals. Also, since 1993 ICI has been
playing the role of the National Registry for the .ro domain, working with more than 80 local registrars.
The IT infrastructure of ICI is based on an intranet connecting specialised servers and more than 200 workstations
(mainly Pentium IV). The infrastructure includes also a training platform with 10 workstations and a server. External
connectivity to the metropolitan area network is provided through several 100 Mbps lines, including the link to
RoEduNet NOC.
In the field of international cooperation, since 1993 ICI has been involved in 38 EU-funded projects (Fourth and Fifth
Framework Programmes) and 6 FP6 projects, including EGEE and SEE-GRID.
Starting from 2002, ICI has been the coordinator of the RoGrid consortium, which was set up as the national initiative in this
domain. In 2003 the first strategic document for Grid development in Romania was proposed by this consortium.
RoGrid participates in the EGEE and SEE-GRID projects, represented by ICI as contracting partner and by four
other member organizations as JRU Third Parties: the National Center for Information Technology of the University
“Politehnica” of Bucharest (UPB), the National Institute for Physics and Nuclear Engineering (NIPNE), the National
Institute for Aerospace Research (INCAS), and the University of Bucharest (UB). All these institutions are represented
in the Working Group for the National Grid Infrastructure, which was organized in July 2005 by the National Authority
for Scientific Research.


.mk
The Macedonian role in the SEE-GRID project is divided among three parties: MARNET, the Faculty of Natural
Sciences and Mathematics and the Faculty of Electrical Engineering. The three parties contribute their own
resources to building the national Grid initiative in conjunction with the SEE-GRID project.
The primary function of the Macedonian Academic and Research Network (MARNET) is to organize and manage the
sole academic and research network in the country. It also manages the '.mk' country code top-level domain; plans,
develops, implements and manages the communication infrastructure backbone in the country; and attains and maintains
international and Internet connectivity for its users. MARNET is also responsible for the provision of membership in
international networking organizations and associations and for participation in projects of interest to the academic
community. In the domestic environment it implements different information services and supports the local user
community by organizing educational activities and events such as workshops, help desks, etc.
The Faculty of Electrical Engineering (FEE) employs 62 professors and 45 teaching and research assistants who
carry out its two fundamental tasks, education and research. The FEE has its own Computer Centre and several
laboratories, of which the Telecommunications Laboratory and the Computer Science Laboratory are involved in
this project. The number of students currently studying and training for future careers is about 2000. The
teaching process at the FEE is realized through four programmes of study extending over nine semesters:
Electroenergetics, Industrial Electroenergetics and Automation, Electronics and Telecommunications, and
Computer Technology, Information Technology and Automatics. International collaboration plays a significant
part in the development of the Faculty. It is realized primarily in the fields of education and research and
is less emphasized in applied activity.
The Faculty of Natural Sciences and Mathematics is the oldest and leading Macedonian institution providing
research, education and applications in the natural, life and material sciences, mathematics and informatics. It
encompasses six institutes (mathematics, informatics, physics, chemistry, biology and geography), a
seismological observatory with a national telemetric network, a botanical garden, around 200 academic and
research staff and 2500 undergraduate and graduate students. The Institute of Informatics is its youngest
institute, established in 1985, with 30 academic staff and around 600 students. It provides research and
educational excellence in informatics and computer science through undergraduate and graduate studies and the
education of non-informatics students, participates in educational, research and application projects with
numerous domestic institutions, and has ongoing cooperation and projects with numerous institutions from
Europe. Its major scientific interests range from applied mathematics and the theoretical foundations of
computer science (algebra and discrete mathematics and their application in cryptography, coding theory
and communication processes, operational research, optimization, modelling and simulation, numerical
analysis, chaos theory, probability, statistics and queuing theory), through computer science (software
engineering, architectures and networks, parallel processing and distributed computing, artificial intelligence,
knowledge theory, database and information systems, Internet technologies, e-science and cooperation, distance
learning), to computational disciplines as a link to the natural, life, material and environmental sciences
and cooperative work with other Faculty institutes.


.hr
The Ruđer Bošković Institute (RBI) is the largest Croatian research centre in the sciences and their applications, an
interdisciplinary institute primarily oriented towards the natural and technical sciences. In the multidisciplinary
environment of the Institute, more than 500 academic staff and graduate students work in many different fields including
electronics, computer science, physics, chemistry, biology, biomedicine and marine and environmental research. Within
Croatia, the RBI is a national institution dedicated to research, higher education and the provision of support to the
academic community, to state and local governments and to technology-based industry. Within the European Union, the
RBI forms a part of the European Research Area. Worldwide, the RBI collaborates with many research institutions and
universities upholding the same values and vision.


.tr
TUBITAK ULAKBIM leads and coordinates the activities in the SEE-GRID project. Bilkent University, one of the three
third parties, is developing the SE4SEE application. The other third parties play a major role in the dissemination of
knowledge in their geographic areas.


As can be seen in the following chapters of this deliverable, the Turkish SEE-GRID infrastructure has been extended to a
wide region of the country after the first project year. The current infrastructure includes 6 sites with over 50 CPUs and
half a terabyte of storage, offering researchers an advanced testbed with enough resources to develop future Grid
applications. In the near future, most of the current sites will be supported by new clusters which will also be part of
the EGEE production infrastructure; this is discussed in the future plans section.




.yu
The Belgrade University Computer Centre (RCUB, [1]) was established in 1991 with the objective of providing information
technology related services to the academic community. Through the permanent development of the services provided to its
users and by establishing Internet connectivity to the academic institutions, RCUB has become the central
communication point of the Academic Network of Serbia and Montenegro, one of the largest computer networks in the
country.





Professors, researchers and students of the Belgrade University are entitled to use the computing resources of RCUB
servers and Internet access for free. The access to computer and communication resources at RCUB is possible over the
Academic Network of Serbia and Montenegro (i.e., the Internet) and over the modems connected to the public
telecommunication network. The importance of these services is illustrated by the number of active users, which is
currently over 10,000.
The Institute of Physics (IP) was co-founded in 1961 by the University of Belgrade and the Government of Serbia with
the following principal objectives in mind: to provide facilities for the conduct of original research to its faculty,
associates and visitors; to help in fostering the growth of advanced studies in and related to physics; to provide a forum
for scientific contacts between physicists from Serbia and their colleagues around the world; to form the nucleus of an
advanced graduate program in physics, and to work with the Department of Physics of the Belgrade University in
creating a top level physics graduate curriculum; to develop applied areas of physics and assist in the development of
related technologies.
IP is a leading research institution in Serbia and Montenegro, with more than 120 researchers. The principal activities of
IP are oriented towards scientific research in theoretical and experimental physics, and in new technologies. Theoretical
investigations in quantum field theory, gravitation and into the fundamental and methodological problems of quantum
mechanics have a long and successful history here. In the last several years substantial efforts have been devoted to the
research and development of numerical simulations, high performance computing and GRID technologies (SCL, [2]). At
the same time there have been important strides in understanding the physics of condensed matter systems, as well as in
the development of theoretical atomic and molecular physics. Plasma physics as well as the study of nonlinear dynamics
have also generated much interest and are being pursued by researchers here both from the theoretical and experimental
sides. Significant results have been obtained in the field of laser physics. There is a varied and extensive research
program in nuclear physics and in high energy physics. The ongoing research in applied physics forms a natural
complement and extension to the above mentioned fundamental research. Important results have been achieved by the
Institute’s faculty in environmental protection, in designing a wide range of sensing equipment, as well as in the
development and manufacturing of microwave and light sources.


.al
The Institute of Informatics and Applied Mathematics (INIMA) was founded as the Center of Mathematical Calculus in 1971 and
became an institute in 1985. INIMA belongs to the Academy of Sciences. Its work profile is the development and application of
computer science, computer infrastructure and mathematical methods in different fields of human activity, such as
economy, engineering and medicine, as well as the training of end-users. Under the statute of the Academy of Sciences,
INIMA has administrative, financial and operational autonomy. INIMA was charged with the realization of the UNDP
project for the creation of the metropolitan network of Tirana in the eighties. It has participated in several international
activities, including European projects whose common goal was the development of networking and distributed applications,
such as ETCETERA, HANNIBAL, ESATT+, and recently SEEREN.
The Faculty of Electrical Engineering (FEE) is one of the oldest units of the Polytechnic University. Since the beginning it
has been privileged by its high-level students and academic staff. The Faculty has two departments, the Electrical Department
and the Electronics Department, both of which have developed following the new trends of the labour market. The Electronics
Department graduates students in three fields: computer engineering, telecommunications, and automation. All three groups of
students have to be prepared for, and do research in, their respective fields as well as in computer networking and distributed
systems. The Faculty is one of the most important places where students are prepared to work in the design and management of
computer networks for the country. The Faculty is involved in distance learning in computer engineering and telecommunications,
and it has several projects for the preparation of PhD studies at a distance in collaboration with European universities. The
Faculty is involved in many national and international projects for the implementation of the Bologna Declaration and for
research in different areas, especially with Italian, German and Greek partners. The Faculty has 50 lecturers and researchers.
The education and research infrastructure is used for both industrial and academic projects. The FEE has been the beneficiary
of two JEP TEMPUS projects in the fields of telecommunications, computer science and the development of graduate and
post-graduate programmes.

The Faculty of Natural Sciences (FNS) of Tirana University is one of the oldest institutions of higher education in Albania. It
was founded together with Tirana University in 1957, on the basis of the former Tirana High Pedagogical Institute and the
reorganization of some sections of the Science Institute and the Polytechnic Institute. The FNS is the main centre for preparing
specialists in the fields of Mathematics, Physics, Chemistry, Biology, Informatics and Pharmacology in Albania, and one of the
principal institutions of the country for preparing secondary-school teachers in these fields. It has long experience in educating
generations of students who have worked, or continue to work, in important fields of social life such as education, research,
production and management, both inside the country and abroad. About 1500 students currently study at the faculty: about 380 in
Mathematics (about 20 of them in the 5-year orientation), about 210 in Physics (about 20 in the 5-year orientation), about 240 in
Biology (about 35 in the 5-year orientation), about 350 in Computer Science, about 310 in Chemistry and about 80 in Pharmacology.
The faculty offers 11 different kinds of diplomas: 2 in Mathematics, 2 in Physics, 2 in Biology, 3 in Chemistry, 1 in Informatics
and 1 in Pharmacology, as well as several postgraduate diplomas in these fields. Several laboratories at the faculty support
research activities and other users. Two study centres are attached to the faculty, the Botanic Garden and the Museum of Natural
Sciences, as well as centres of teacher qualification in the fields of Mathematics, Physics, Chemistry and Biology.


.hu
SZTAKI is the Computer and Automation Research Institute of the Hungarian Academy of Sciences. MTA SZTAKI is
the leading Grid centre in Hungary. SZTAKI’s mission is to carry out basic and application-oriented research in an
interdisciplinary setting in the fields of computer science, intelligent systems, process control, wide-area networking and
multimedia. Its activities cover the quadruple of computing, control, communication and intelligence. SZTAKI’s
mission includes the transfer of up-to-date results and research technology to university students, and the Institute runs
four external university departments. SZTAKI deals with Computer Science and Information Technology, Applied
Mathematics, Automated Control Systems, Artificial Intelligence, Analogical and Neural Computing, Parallel and
Distributed Computing, and Integrated Design and Control Systems. SZTAKI consists of different laboratories: the
Analogical and Neural Computing Systems Laboratory, Applied Mathematics Laboratory, Engineering and Management
Intelligence Laboratory, Computer Integrated Manufacturing Research Laboratory, Geometric Modelling Laboratory,
Informatics Laboratory, Laboratory of Operations Research and Decision Systems, Parallel and Distributed Systems
Laboratory, and Systems and Control Laboratory.
MTA SZTAKI is a European Union Centre of Excellence in Information Technology and has taken part in many
projects, not least the DataGrid and GridLab projects. In the EGEE project, MTA SZTAKI is responsible for
Information Dissemination and Outreach, User Training and Support, Application Identification and Support, and
European Grid Support, Operation and Management.


.ba
The contractor in Bosnia and Herzegovina is BIHARNET, the national academic and research network. BIHARNET itself
does not have a Grid-enabled site installation. Besides the contractor, there are four third parties in B&H: the
Faculty of Electrical Engineering (University of Banja Luka), the Faculty of Electrical Engineering (University of
East Sarajevo), the Faculty of Electrical Engineering (University of Sarajevo) and the Faculty of Natural Sciences
(University of Sarajevo). Since there are numerous problems regarding the functioning and financing of
BIHARNET, most of the activities are performed by the third parties. The aforementioned parties are also involved
in the SEEREN2 project.







1.2. The infrastructure of the RCs
.gr
The “test-GRNET” Grid node was built for experimental purposes within the SEE-GRID project. It currently consists of
two fully active nodes, while a new node has recently been added (with only the OS installed so far). The nodes, their
hardware/software specifications and the services they support are shown in the following table:
  Node Name                    m/w version     Services               Architecture   CPU                  RAM       Storage   OS
                               & tag number

  grid1.netmode.ece.ntua.gr    LCG 2.6.0       CE_torque, SE_torque   i586           Pentium III 1.4GHz   2048 MB   40 GB     Scientific Linux 3.0.5

  grid2.netmode.ece.ntua.gr    LCG 2.6.0       WN_torque, MON, UI     i586           Pentium III 1.4GHz   2048 MB   40 GB     Scientific Linux 3.0.5

                               Table – Nodes installed at the “test-GRNET” site

For practical purposes, in the remainder of the document, the different nodes will be referenced by their hostname and
not with their full DNS name.
Comments:
         All the nodes are under the domain “netmode.ece.ntua.gr”.
         The above setup has been chosen for experimental purposes and it may change in the future.
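
As an illustration of how the services listed in the table above can be inspected remotely, the sketch below queries the
Glue information that an LCG-2 site publishes over LDAP. This is only a sketch under stated assumptions: it presumes the
usual LCG-2 layout (a site GRIS answering on port 2135 under the base DN mds-vo-name=local,o=grid) and uses the
third-party Python library ldap3 purely for illustration; ports, base DNs and attribute names may differ between
middleware releases.

    #!/usr/bin/env python
    # Sketch: query the Glue information published by the test-GRNET CE.
    # Assumptions (not taken from this document): a GRIS on port 2135,
    # base DN "mds-vo-name=local,o=grid", and the ldap3 Python library.
    from ldap3 import ALL, Connection, Server

    server = Server("ldap://grid1.netmode.ece.ntua.gr:2135", get_info=ALL)
    conn = Connection(server, auto_bind=True)  # anonymous bind

    # Fetch the Computing Element entries and print a few Glue attributes.
    conn.search(
        search_base="mds-vo-name=local,o=grid",
        search_filter="(objectClass=GlueCE)",
        attributes=["GlueCEUniqueID", "GlueCEInfoTotalCPUs", "GlueCEStateStatus"],
    )
    for entry in conn.entries:
        print(entry.GlueCEUniqueID, entry.GlueCEInfoTotalCPUs, entry.GlueCEStateStatus)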
[Photos of the available nodes at the “test-GRNET” site]








.bg
The BG04-ACAD site provides computing resources based on a PC cluster with 12 worker nodes with Pentium IV
processors running at 2.8 GHz. The capacity of the storage resources (SE) is 80 GB. The CE of BG04-ACAD uses a dual
Intel Xeon processor running at 2.8 GHz with 512 MB memory. The BG04-ACAD site at ISTF supports and runs HEP,
Biomed, SEE-GRID and SEE VO applications.





Our students in the Networking and Communication course of study learn some basic techniques related to Internet
computing in general and some concepts of Grid computing and operation. The aim is to give the students an entry-level
background and experience with the various distributed systems in use today.


The course includes technical information on the installation of services in a Grid environment, with a focus on the LCG
middleware. It shows how to adapt applications to the Grid environment and test them on an existing Grid site. The students
interact with the User Interface machine in order to submit simple test jobs and retrieve the results, as sketched below.
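
The sketch below shows what such an exercise might look like on the User Interface: it writes a minimal JDL file, submits
it and retrieves the output. It is a hypothetical example rather than the actual course material; it assumes a valid Grid
proxy (grid-proxy-init) and the standard LCG-2 edg-job-* commands on the UI, and the parsing of the job identifier assumes
the usual edg-job-submit output format, which may vary between releases.

    #!/usr/bin/env python
    # Hypothetical student exercise: submit a trivial job from the LCG-2
    # User Interface and fetch its output. Assumes a valid Grid proxy and
    # the edg-job-submit/-status/-get-output commands on the PATH.
    import subprocess

    # Minimal job description: run /bin/hostname and return stdout/stderr.
    JDL = (
        'Executable    = "/bin/hostname";\n'
        'StdOutput     = "std.out";\n'
        'StdError      = "std.err";\n'
        'OutputSandbox = {"std.out", "std.err"};\n'
    )

    def run(cmd):
        """Run a UI command and return its standard output."""
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    with open("hello.jdl", "w") as f:
        f.write(JDL)

    # Submit to the seegrid VO; the resource broker picks a matching CE.
    out = run(["edg-job-submit", "--vo", "seegrid", "hello.jdl"])
    # The submit output contains the job identifier as an https:// URL.
    job_id = next(l.strip() for l in out.splitlines() if l.strip().startswith("https://"))

    print(run(["edg-job-status", job_id]))                    # poll the job state
    print(run(["edg-job-get-output", "--dir", ".", job_id]))  # fetch the output sandbox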


.ro
The RoGrid Consortium, which is formed by ICI, UPB, NIPNE, INCAS and UB, has provided the following sites to support
the SEEGRID VO: RO-01-ICI, RO-02-NIPNE and RO-03-UPB. Other Grid sites to be deployed by the RoGrid
partners in the coming period are planned to support the SEEGRID VO as well. Currently, there are 5 sites installed:
RO-01-ICI, RO-02-NIPNE, RO-03-UPB, RO-04-NIHAM-L and RO-05-INCAS, ranging from fully certified production
EGEE sites, like RO-01-ICI, to installed but not yet fully integrated sites, like RO-04-NIHAM-L and RO-05-INCAS.
All of them support various Virtual Organizations belonging to the HEP domain, specifically the LHC experiments,
but also Biomedicine and Earth Sciences.








                                           WNs farm at RO-01-ICI site

RO-01-ICI has been installed according to the EGEE and SEE-GRID project requirements. Currently it has 17 computing
nodes, installed as follows: 13 Worker Nodes with HT, 1 Computing Element, 1 Storage Element, 1 Monitoring Element,
and 1 User Interface. External connectivity to the RoEduNet NOC is going to be upgraded to 1 Gbps. From RoEduNet, a
622 Mbps link is used to reach the GEANT ring.




                                                RO-02-NIPNE site

RO-02-NIPNE is integrated into the EGEE infrastructure as an uncertified site. It has 12 nodes, specifically 8 dual 3 GHz
Xeons and one 2.8 GHz Pentium 4 as Worker Nodes, all of them having 4 GB RAM. The RO-03-UPB site has been upgraded to
9 nodes, each of them using a 1.7 GHz Pentium 4, and plans to further upgrade the site hardware configuration and get
integrated into the EGEE infrastructure.





RO-04-NIHAM-L has 3 Worker Nodes, which are dual Athlons, and 1 node acting as a Computing Element, collocated with
the Storage Element, Monitoring Element and LHC File Catalog.




                                               RO-05-INCAS site

RO-05-INCAS has 12 Worker Nodes using 3.2 GHz Athlon CPUs.
At the moment all of these sites are using the LCG-2.6.0 middleware, except RO-03-UPB, which has upgraded to the latest
version, LCG-2.7.0.


.mk
The parties are situated in Skopje, the capital of Macedonia, where major projects are under way to improve the
network infrastructure and enable gigabit connectivity between the clusters. Currently the clusters are connected
using gigabit optical technology that can provide up to 1 Gbps of bandwidth. The connection to the rest of the world
is provided by the SEEREN and SEEREN2 projects, currently 34 Mbps connectivity to GEANT through GRNET.
The MARNET and FNSM cluster (MK-01-UKIM_II) consists of 7 Pentium 4 PCs, out of which 5 are used as worker
nodes, one as the computing element and one as the storage element. The computing element is also the user interface
node. The nodes are spread between MARNET and FNSM, which are 2 km apart, using a gigabit fibre-optic
connection. The Computing and Storage Elements are situated in the FNSM building while the 5 worker nodes are
situated in the MARNET building. This has been done because neither of the two previous clusters had enough CPU
power on its own to join the EGEE-SEE production Grid, and the fast network connection overcomes the distance problem.
The FEE cluster (MK-02-ETF) consists of 3 Pentium 4 PCs, configured as follows: one node is the computing
element, one is the storage element, and one serves as worker node and user interface.
In addition to these two clusters, an Itanium-2 machine is installed as a storage element with its own R-GMA.
This node is mainly used for monitoring, since the SFT is installed on it.






Figure – MK-01-UKIM_II and MK-02-ETF Grid clusters

MK-01-UKIM_II: the MARNET part hosts the worker nodes grid-wn0.marnet.net.mk to grid-wn5.marnet.net.mk, and the PMF
part hosts grid-ce.ii.edu.mk, grid-ui.ii.edu.mk, grid-wn.ii.edu.mk and grid-se.ii.edu.mk; the two parts are interconnected
over a 1 Gb backbone. Hardware: an HP donation of 6x HP workstation PCs (P4, 3 GHz, 512 MB, 40 GB HDD) and 1x dual
Itanium-2 HP RX-2600 (2 GB RAM, 210 GB HDD).

MK-02-ETF: grid-ce.etf.ukim.edu.mk, grid-ui.etf.ukim.edu.mk, grid-wn.etf.ukim.edu.mk and grid-se.etf.ukim.edu.mk.
Hardware: 2x HP workstation PCs (P4 HT, 3 GHz, 512 MB, 160 GB HDD), 1x workstation PC (P3, 800 MHz, 256 MB, 40 GB HDD)
and 3x workstation PCs (P4 HT, 1.8 GHz, 512 MB, 40 GB HDD).



.hr


The Ruđer Bošković Institute (RBI) provides Grid resources at several levels of complexity. Our Grid effort is
presently used for scientific cluster computing and will in the future include scientific instruments on the Grid (e.g.
NMR, electron microscopy), scientific databases, distributed data acquisition (e.g. marine ecology) and scientific
visualization. The RBI internal networks consist of Gb optical Ethernet as a dynamically reconfigurable network with
predictable communication characteristics (latency, bandwidth, jitter), a 100 Mb atmospheric laser link, 10 Mb microwave
links, and connections to the Internet over 1 Gbps optical and ATM 155/622/1.2 Gb links through CARNET to GEANT.
Presently the clusters in the Campus Grid attain around 180 GHz of Linux PC processing power (heavily used). The
testbed research includes metacomputing technology, a distributed computing testbed, high-speed computing, high-throughput
computing, a virtual laboratory/e-science centre, etc. CRO-GRID, of which RBI is a part, provides Grid computing
throughout the research and educational network in Croatia and is intended to be OGSA compliant.
In addition to the above, the RBI campus local area network was overhauled in 2003. The new fibre-optic LAN stretching
between 26 buildings provides 3100 connections and satisfies the requirements for VPN and Grid applications.
.tr
TUBITAK ULAKBIM has three third parties in the SEE-GRID project. Taking into account that Bilkent University will develop one of the regional applications (SE4SEE) and that TUBITAK ULAKBIM will coordinate the technical activities, the operational part of the national SEE-GRID infrastructure was originally planned to be hosted at TUBITAK ULAKBIM.


At the start of the project, ULAKBIM deployed the first SEE-GRID cluster in Turkey with a small site configuration. The selected OS was SLC 3.0.3 and many installation stages had to be completed manually. With the recent YAIM software, middleware installation is now easy and there are fewer problems to report in this document.


The following table summarizes the testbed TR-Grid infrastructure:






 Site              Location    # CPUs   Storage     Supported VOs                   Type

 TR-01-ULAKBIM     Ankara      16       160 GByte   HEP, seegrid, national, dteam   EGEE, SEE-Grid

 TR-02-BILKENT     Ankara      8        180 GByte   seegrid, dteam                  SEE-Grid

 TR-03-METU        Ankara      12       30 GByte    HEP, seegrid, national, dteam   SEE-Grid

 TR-04-ERCIYES     Kayseri     6        90 GByte    HEP, seegrid, national, dteam   SEE-Grid

 TR-05-BOUN        İstanbul    2        20 GByte    HEP, seegrid, national, dteam   SEE-Grid

 TR-06-SELCUK      Konya       6        40 GByte    HEP, seegrid, national, dteam   SEE-Grid


                                 Table – Current TR-Grid Infrastructure





                                    Figure – Current TR-Grid Infrastructure



.yu
Network Infrastructure
Belgrade University Computer Centre (RCUB), as the central node of the AMREJ network, has one 155 Mbps and one 34 Mbps Internet connection. The first one is provided via the SEEREN project (http://www.seeren.org), connecting AMREJ to the GEANT network via the GRNET GEANT PoP. The second one is provided via the local ISP Telekom Srbija. The total current Internet bandwidth is 189 Mbps. Backbone technologies vary from 1 Gbps (Ethernet/SDH) to 155 Mbps (SDH) and 2 Mbps (PDH). The current state of the AMREJ national backbone is shown in the figure below.
An additional GEANT connection via the Hungarnet PoP, expected during 2006, will bring an additional 91 Mbps, with the total bandwidth approaching 280 Mbps. RCUB is currently connected with three regional backbone nodes via 1 Gbps, 2 Mbps, and 1 Mbps links, with most of the academic institutions within the metropolitan area, and with the major local ISPs. Each of the regional nodes connects the institutions based in its geographical area (usually a metropolitan area). The core of the metropolitan networks is built using dark fibres and Gigabit Ethernet technology, which extend into the access part of the network alongside traditional leased-line services (DSL, nx64).

In the last several years, AMREJ has taken the leading role in the country in research and implementation of optical networking, by defining the strategic goal of building a new NREN infrastructure using customer-empowered optical fibre technology. The new network backbone will consist of 19 nodes connected with 1 Gbps links using leased dark fibres. The new backbone should be operational during 2006.




        Figure – AMREJ National Backbone – current state. Details can be found on http://amrej.rcub.bg.ac.yu/





At the Institute of Physics (IP), the internal network is based on Gbps Ethernet over copper. The Scientific Computing Laboratory (SCL), responsible for Grid services, has an internal Gbps network with a 2 x 1 Gbps uplink to RCUB, the other UOB partner in SEE-GRID (the uplink is extensible up to 12 Gbps).


Grid Infrastructure
After the first experiences with a four-node LCG-2_1_0 testbed located at the Institute of Physics, which was used for exercises in LCG installation, usage and management of grid services, two sites were set up at the beginning of the SEE-GRID project, one at the Belgrade University Computer Centre and one at the Institute of Physics. In that phase we had valuable help and support from József Patvarczki. Currently there are three sites: AEGIS01-PHY-SCL at IP (LCG-2_7_0), AEGIS02-RCUB at RCUB (LCG-2_6_0), and AEGIS03-ELEF-LEDA at the Faculty of Electric Engineering of the University of Nis (LCG-2_7_0).

The IP site is a set of 29 nodes (2 x PIV Xeon at 2.8 GHz with 1 GB of RAM) fully dedicated to SEE-GRID. The OS is SL 3.0.5, while the configuration consists of the UI/CE node ce.phy.bg.ac.yu, the SE node se.phy.bg.ac.yu (with 160 GB of space on the /storage partition), the BDII/PX/MON node grid.phy.bg.ac.yu (with 2 GB of RAM), the RB node rb.phy.bg.ac.yu, and 25 WNs wn01.phy.bg.ac.yu – wn25.phy.bg.ac.yu. Thus, IP dedicates a computing power of 50 physical, i.e. 100 logical, processors (due to hyper-threading) in 25 worker nodes to SEE-GRID.




                  Figure – AEGIS01-PHY-SCL site at the Institute of Physics in Belgrade

At the Belgrade University Computer Centre (RCUB) we have a scavenger grid currently consisting of 14 nodes running Scientific Linux 3.0.4. The machines have 2.0 GHz AMD Sempron processors, 1 GB RAM and 80 GB of hard disk space. One machine is dedicated to CE/SE/MON services, one to UI/LFC, and twelve nodes are WNs. The Storage Area Network (SAN) cluster, which was donated by Sun Microsystems, is connected to the Storage Element via NFS. It consists of a Sun StorEdge 3510 Fibre Channel Array with 5 x 73 GB SCSI 10k RPM disks, a hardware RAID controller and 1 GB of cache, extendable up to 1.7 TB. The array is connected through 2 Gb PCI Dual FC network adapters (Sun StorEdge 2 Gbps PCI Dual Fibre Channel, 200 MBps channel with optical interface) to two Sun Fire SFV240 servers (1 GHz UltraSPARC III, 512 MB RAM, 36 GB SCSI HDD, 4 x 10/100/1000 Gigabit Ethernet). Despite a total disk capacity of 365 GB, the effective capacity of the SAN is 211 GB, since RAID 5 is used.








                 Figure – AEGIS02-RCUB site at the Belgrade University Computer Centre


Another SEE-GRID site in Serbia is AEGIS03-ELEF-LEDA, installed at the Faculty of Electric Engineering of the University of Nis, a future SEE-GRID-2 third party. The site consists of 5 nodes, all of them PIV at 2.4 GHz with 512 MB of RAM and 80 GB of disk space. One node serves as UI/CE/SE/MON, while four nodes are WNs. All nodes have Scientific Linux 3.0.5 installed.




      Figure – AEGIS03-ELEF-LEDA site at the Faculty of Electric Engineering of the University of Nis




.al
All participants from Albania have local networks of 100 Mbps.
Internet access is provided via private ISPs, with the following bandwidths:
     - INIMA: 1 Mbps





    -     FIE:      1 Mbps, upgradeable to 2 Mbps in the future
    -     FSN:      1 Mbps
INIMA: The GRID infrastructure is based on the training laboratory, situated in one classroom, networked and with Internet connectivity. This laboratory has 20 PCs that may be used as an LCG cluster. Only the central PC of the cluster has a real IP address, while the others use a NAT box to access the Internet. This infrastructure will serve as a GRID test-bed. It may also serve as pre-production GRID infrastructure, being available for distributed applications during the periods free from training activities. Currently only part of this infrastructure is used for the project's purposes, as shown in the following table.




 Node Name              MW version &   Services   Architecture   CPU               Memory     Storage   OS
                        Tag Number                                                 RAM (GB)   (TB)
 Prof.salla6.inima.al   LCG 2.4.0      CE/SE      i386           Intel P3 .08GHz   0.256      0.005     SL 3.0.4
 v101                   LCG 2.4.0      WN         i386           Intel P3 .08GHz   0.256      0.005     SL 3.0.4
 v102                   LCG 2.4.0      WN         i386           Intel P3 .08GHz   0.256      0.005     SL 3.0.4
 v119                   LCG 2.4.0      WN         i386           Intel P3 .08GHz   0.256      0.005     SL 3.0.4
 v120                   LCG 2.4.0      WN         i386           Intel P3 .08GHz   0.256      0.005     SL 3.0.4


FIE: The GRID infrastructure is based on the training laboratory for Linux systems and applications. Only part of the nodes have real IP addresses; the others use NAT to access the Internet.

 Node Name             MW version &   Services   Architecture   CPU                Memory     Storage   OS
                       Tag Number                                                  RAM (GB)   (TB)
 seegrid2.fie.upt.al   LCG 2.6.0      CE         i386           Intel P4 2.8 GHz   0.512                SL 3.0.5
 seegrid3.fie.upt.al   LCG 2.6.0      SE+MON     i386           Intel P4 2.8 GHz   0.512                SL 3.0.5
 8 nodes               LCG 2.6.0      WN         i386           Intel P4 2.8 GHz   0.512                SL 3.0.5






     .hu
The SZTAKI cluster is based on a Linux system and has 58 processors. The computing resources that joined the SEE-GRID test-bed consist of 3 computers and one virtual machine. The OS of these computers is Scientific Linux 3.0.3. The capacity of the computers is small and they may be used mainly for small-scale experiments and demonstrations.




                                       Figure – Dedicated nodes to SEE-GRID testbed



 n31 (ip 193.224.187.160, room 051)
     UID_domain: lcg.hpcc.sztaki.hu      domain: hpcc.sztaki.hu
     cert: SEE-GRID CA, Dec 23 07:36:17 2005 GMT
     hw: MSI MS-6580; P4 2.53G; 512M SDRAM; MAXTOR 6Y080L0 80G; Intel 82801BD PRO/100
     Role: LCG(2.7.0), SE, CE, Resource Broker, Replica Manager

 n27 (ip 193.224.187.156, room 043)
     UID_domain: lcg.hpcc.sztaki.hu      domain: hpcc.sztaki.hu
     hw: Dell Precision 410 M; 2xPIII; 500M, 2x128M DIMM; QUANTUM ATLAS I SCSI 9G; 3Com 3c905B
     Role: Worker Node (Default, #1)

 n28 (ip 193.224.187.157, room 043)
     UID_domain: lcg.hpcc.sztaki.hu      domain: hpcc.sztaki.hu
     hw: Dell Precision 410 M; 2xPIII; 500M, 2x128M DIMM; QUANTUM ATLAS SCSI 9G; 3Com 3c905B
     Role: Worker Node (#2)

 n45 (ip 193.224.187.174, room 043)
     UID_domain: lcg.hpcc.sztaki.hu      domain: hpcc.sztaki.hu
     hw: Virtual machine; 3Com 3c905B
     Role: User Interface, P-GRADE Portal v2.3 for SEE-GRID (port: 8080), LCG_UI(2.7.0), Condor(6.6.10)


                               Table - Detailed description of the SEE-GRID testbed at SZTAKI








Photos about the testbed and the SZTAKI cluster




The LCG-2 middleware was installed on these nodes manually. The three physical nodes are separated from the original SZTAKI cluster. The node n31 functions simultaneously as CE and SE. The fourth node (n45) is a virtual machine, which in reality runs on the n59 node. n45 works as the SEE-GRID User Interface and hosts the P-GRADE Portal (version 2.3), which is the official web-based single access point to the SEE-GRID resources. The WNs are n27 and n28; the default worker node is n28.


.ba
There are four GRID sites (clusters) in Bosnia and Herzegovina. They are hosted at the four third parties to the project:
         BA-01-ETFBL - Faculty of Electrical Engineering Banja Luka – 4 nodes– LCG 2.7.0
              o   CE/UI (P IV 3 GHz 512 MB RAM, 80 GB HDD)
              o   SE/MON (P IV 2.4 GHz, 1 GB RAM, 150 GB HDD RAID Stripping)
              o   2 x WN (P IV 3 GHz, 512 MB RAM, 80 GB HDD)
              o   100 Mbps switched ethernet
         BA-02-ETFIS - Faculty of Electrical Engineering East Sarajevo – 4 nodes– LCG 2.6.0
              o   CE/SE/MON (Itanium2, 1 GB RAM),
              o   UI/WN (Dual P IV 2.4 GHz, 256 MB RAM),
              o   2 x WN (Dual P IV 2.4 GHz, 256 MB RAM)
              o   100 Mbps switched ethernet
         BA-03-ETFSA – Faculty of Electrical Engineering Sarajevo – currently off-line
         BA-04-PMFSA – Faculty of Natural Sciences Sarajevo – 5 nodes – LCG 2.6.0





             o    CE/SE/MON/UI (Dual P IV 2.4 GHz, 1GB RAM, 80 GB HDD)
             o    4 x WN (Dual P IV 2.4 GHz, 1 GB RAM, 80 GB HDD; P IV 2.4 GHz, 512 MB RAM, 80 GB
                  HDD; 2 x Sempron 2600, 512 MB RAM, 80 GB HDD)
             o    100 Mbps switched Ethernet


1.3. Other typical site parameters
During the SEE-GRID project it became clear that there are some countries where a GRID initiative had already begun and that already had some kind of GRID infrastructure in place. These countries may have based their GRID on something other than LCG/CERN Linux, especially if the NGI started before the EGEE and SEE-GRID projects did. For them, reinstalling with CERN Linux and LCG would mean turning off the production clusters for an OS reinstallation. It could also lead to incompatibilities with the existing application software. This is because LCG does not run well on many popular Linux distributions; in fact, only RedHat/CERN Linux 7.3 is officially supported.


.bg
We came to the conclusion that HEP, biomedical and probably the regional applications require significant computing resources. We decided to have a homogeneous Worker Node configuration, with at least 1 GB RAM, a 2.8 GHz Pentium IV CPU with HT, a Gigabit NIC, and an 80 GB HDD. All Worker Nodes will be connected with Gigabit switches.


.ro
A change in the RO-01-ICI site configuration was the upgrade of the Storage Element from a disk-based solution to a disk-array solution. Specifically, RO-01-ICI is currently using a Promise VTrak 15100 disk array, which has 15 bays, an Ultra 160 SCSI host interface port, and a RAID controller. Using 400 GB SATA hard disks, the storage capacity is planned to reach a total of 6 TB. RO-01-ICI also started to build a rack of servers, using only rack-mountable computers. In order to accommodate the production site's need for highly available resources, a new UPS system is being deployed.
As the number of Worker Nodes has increased, a new Gigabit switch was added, so that all the cluster nodes are interconnected in a Gigabit network. The cluster has 5 rack servers running 3 GHz Pentium 4 CPUs with 2 GB RAM, 4 desktop-type Pentium 4 machines with 1 GB RAM, and 4 dual Xeon machines at 2.4 GHz with 1 GB RAM.
The Computing Element and Storage Element use a Celeron running at 2.4 GHz with 1 GB RAM. The monitoring box has been separated from the Storage Element because the RGMA service on the MON box started to use authenticated connectors, which used the same port as the Disk Pool Manager system deployed on the Storage Element.


.mk
The MARNET, FNSM and FEE sites use a 1 Gbit channel for interconnection. The local area network for cluster interconnection is a 100 Mbit switched network. The main disadvantage is the lack of SAN storage at any site.
The machines are fully dedicated to the project and hence are installed in the standard way. The only installation problem occurred while installing the Itanium-2 machine, where the LCG-2 installation is not supported by yaim (the installation part). Another problem was overcome while installing the SFT on the Itanium, where the standard installation of MySQL was not compliant with the installation of the Perl and Python libraries used by SFT. This was solved by recompiling MySQL using specific switches.
Services provided by the .mk sites are mainly monitoring. For this purpose the installed GridICE server has been updated several times, and several bugs have been found and removed from the software.






.hr
Croatia is a country where a National Grid Initiative (NGI) started several years ago. Therefore one of the key parameters that determined the LCG-2 installation at RBI is the fact that there already were working clusters, and it was necessary just to add the LCG-2 middleware (without reinstalling a production cluster from scratch). Over several years RBI developed the DCC (Debian Cluster Components) cluster distribution, primarily aimed at easy installation of new clusters and simple maintenance and upgrading of existing ones.
Due to the fact that RBI develops and uses the DCC/Debian cluster distribution (ref. [2]), the installation of LCG-2 was difficult, as Debian is not one of the supported LCG-2 platforms (it is not even RPM based).

Among the existing cluster distributions, Debian Cluster Components (DCC) is most similar to OSCAR. DCC is not a complete Linux OS distribution, but is instead installed as a collection of additional packages on top of a previously installed Debian system. This way it makes maximum use of the existing packaging system (apt) and of the packages from the standard Debian installation, thereby reducing the cost of DCC maintenance. Most of the DCC packages configure the standard Debian packages to work in a cluster (LDAP server, DHCP, TFTP etc.). Besides the configuration packages, DCC also contains a number of binary packages, since not all of the services necessary for cluster creation and maintenance come with the standard Debian distribution (Torque, C3, Ganglia etc.). The most important components of the DCC cluster are:
         TORQUE – job scheduler (heir to OpenPBS),
         SIS – simple creation of node images (debootstrap) and network installation of nodes (PXE),
         LDAP, CPU – centralised management of user accounts across the cluster,
         C3 – cluster maintenance tools,
         Ganglia – web-based cluster monitoring tool,
         Shorewall – firewall configuration.
Any additionally required Debian package (such as MPICH, LAM MPI, PVM, PovRay etc.) can be installed on a DCC
cluster in the usual Debian way (apt-get install package-name).


.tr
With the new STM-4 GEANT link and the Ankara-Istanbul link, the NREN provides enough capacity for communicating with the other SEE-Grid sites. In preparation for the new advanced infrastructure, which will include over 500 CPUs, all national WAN links of the universities have been increased to suitable levels. Sites are now connected at speeds between 34 Mbps and 1 Gbps.
At the LAN level all of the sites use gigabit Ethernet. With such an interconnection and fair CPU speeds these sites are also suitable for MPI computing.
At the start of the project, the original plan was to include a portion of the 128-node Debian cluster at ULAKBIM in the SEE-Grid testbed. With the help of UML (User Mode Linux), LCG-2.2.0 and LCG-2.4.0 were tested on this cluster. The CPUs of this cluster will be added to the TR-01-ULAKBIM site, so that the problems analysed during the test phase can easily be overcome.
Since TR-Grid now involves 6 sites and will involve more than 10 sites in the near future, infrastructure administration at the national level is an important issue. To solve problems quickly, some services were installed on the coordinator site (ULAKBIM): a trouble ticket system and monitoring systems (Ganglia, a Google-map view developed by ULAKBIM, a real-time cluster health monitoring tool also developed by ULAKBIM, and Yumit), among others.


.yu
All AEGIS sites support the seegrid and dteam VOs. In addition, AEGIS01-PHY-SCL, as the only site that is also in EGEE, supports the atlas, cms, esr, and see VOs. As of February 2006 the national aegis VO has been established, and all our sites support it. Only the AEGIS01-PHY-SCL site has a specific policy on fair shares of resources, implemented by the maui scheduler. According to the configuration, 5% of the resources are currently devoted to SEE-GRID as a fairshare target, and up to 16 processors can be used by SEE-GRID at a given moment.
Regarding the operating systems, all sites use Scientific Linux 3.0.4/3.0.5.
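To illustrate the fairshare policy mentioned above, a minimal maui.cfg sketch is given below; the group name and the exact parameter values are assumptions for illustration, not the actual AEGIS01-PHY-SCL configuration.

    # Hypothetical excerpt from /var/spool/maui/maui.cfg (values assumed)
    FSPOLICY          DEDICATEDPS     # fairshare based on dedicated processor-seconds
    FSDEPTH           7               # number of fairshare windows taken into account
    FSINTERVAL        24:00:00        # length of each fairshare window
    FSDECAY           0.80            # weight decay for older windows

    # give the group mapped to the seegrid VO a 5% fairshare target
    # and cap it at 16 processors at any given moment
    GROUPCFG[seegrid] FSTARGET=5.0 MAXPROC=16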




Firewall configuration is left to the site administrators, but we are planning to develop a suggested configuration.
.al
Participation in SEE-GRID offered the first concrete experience with grid technologies. At the beginning the attention was oriented towards installation of small experimental grid infrastructures composed of a few PCs in a local network and their integration with the SEE-GRID regional structures.
The second step was identification of potential applications that may be run over the grid, both at local and regional scale. Use of HEP applications is related to participation in CERN experiments, which is not the case for the local physics laboratories. Use of BioMed applications may be achieved by biological laboratories, but their concrete involvement depends on the development of the local NREN infrastructure and participation in international research projects; despite these difficulties it remains a promising direction. The search for other potential applications is in progress. Promising areas include Monte-Carlo simulations, geophysical modelling, and optimization.
Considering local requirements, deployment of the grid infrastructure remains at the experimental stage. Part of the computers involved are dual-boot machines that are partly used for other applications. Integration with the SEE-GRID regional structures is considered important in order to gain knowledge and exploit it, when appropriate, for experimentation with local and regional applications.
INIMA: the site is limited to 4 worker nodes and one computing/storage element. User interfaces are for the moment installed only on three individual workstations. In case of necessity, it is possible to activate up to 20 worker nodes. The computers used are relatively old, with limitations in processor clock, RAM and hard-disk space. The CE/SE node is dedicated to the grid infrastructure and runs Linux all the time. Worker nodes are switched periodically to MS Windows.
FIE: the site is composed of 10 new PCs running Linux and the grid middleware. There is one computing element, one storage element, a user interface and 8 worker nodes.
FNS: the site is composed of a few PCs, with one computing/storage element, one user interface and some worker nodes.




2. LCG-2 installation cycles
.gr
Scientific Linux 3.0.5 and LCG 2.6.0 have been installed on our nodes. For the installation of the grid middleware, the manual installation procedure was followed as described in the wiki pages; a sketch of the yaim-based configuration is given after the list below.
The installation process was:
                 Installation of the CE_torque in grid1
                 Installation of the SE_torque in grid1
                 Installation of the WN_torque in grid2
                 Installation of the MON in grid2
                 Installation of the UI in grid2
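As an illustration only, a minimal site-info.def excerpt and the corresponding yaim invocations might look as follows; the host names grid1 and grid2 come from the list above, while the domain, site name and file paths are assumed values rather than the actual Hellasgrid configuration.

    # Hypothetical excerpt from /opt/lcg/yaim/site-info.def (values assumed)
    MY_DOMAIN=example.gr
    CE_HOST=grid1.$MY_DOMAIN
    SE_HOST=grid1.$MY_DOMAIN
    MON_HOST=grid2.$MY_DOMAIN
    WN_LIST=/opt/lcg/yaim/wn-list.conf      # one worker node host name per line
    USERS_CONF=/opt/lcg/yaim/users.conf     # pool account definitions
    SITE_NAME=GR-DEMO-SITE
    VOS="seegrid dteam"
    QUEUES="seegrid dteam"

    # configure the CE and classic SE on grid1
    /opt/lcg/yaim/scripts/configure_node site-info.def CE_torque SE_classic
    # configure the WN, MON and UI on grid2
    /opt/lcg/yaim/scripts/configure_node site-info.def lcg-WN_torque lcg-MON lcg-UI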


.bg
Our initial attempt at manual installation revealed a lot of difficulties. Therefore we switched to
LCFGng based installation on Redhat 7.3, which was successful. Later on we upgraded to Scientific
Linux because of security concerns, and since LCFGng is no longer supported, we followed the
default yaim based installation. This installation was successfully certified on our clusters.






.ro
The ICI production site entered the TestZone in mid-December 2004, configured with RedHat 7.3 and LCG-2.3.0.
For the time being the ICI production site is in maintenance mode. Half of the nodes are already configured with Scientific Linux 3.0.3 and LCG-2.3.0. In mid-March it is expected to re-enter the TestZone for further testing, and in April the site should be ready to reach the full production phase.


.mk
The .mk sites started with LCG 2_1_0 and were gradually upgraded to LCG 2_6_0. The installation was mostly done by upgrading from the previous version of LCG to the new one, except on one occasion when a change of the operating system from RedHat 7.3 to Scientific Linux 3 was required. At that time the whole cluster was reinstalled and LCG 2_4_0 was put on the newly installed machines.
The installations of LCG 2_1_0, 2_2_0 and 2_3_0 were done manually using the manuals published by CERN. The installation was quite problematic. Later, with yaim, the installation became easier and faster.


.yu
Starting with LCG-2_1_0, we installed all LCG releases at AEGIS01-PHY-SCL (LCG_2_2_0, LCG-2_3_0, LCG-
2_3_1, LCG-2_4_0, LCG-2_6_0, LCG-2_7_0), while on AEGIS02-RCUB we installed LCG-2_3_1, LCG-2_4_0, and
LCG-2_6_0. As a newly installed site, AEGIS03-ELEF-LEDA has installed just LCG-2_7_0.
The upgrades were all done within the requested time, usually within 2-3 weeks after a release was publicly announced.


.al
INIMA
       Installation of CE with local certificates
       Installation of WN in four nodes
       Installation of UI
       Generation of certificates
       Configuration of CE
       Installation of SE
       Installation of MON
FIE
       Installation of UI
       Generation of certificates
       Installation of CE/SE
       Installation of WN
FSN
       Installation of CE and UI
       … work in progress


.hu




The testbed is now in a transition period until the end of February. All nodes are running Scientific Linux 3.0.3. The new LCG 2_7_0 installation is being finished on the nodes. One of the nodes had a serious hardware failure, and the procurement and installation of the new hardware components delayed the software installation by two weeks.
The installation process will be:
•        Installation on the CE/SE n31
•        Installation on the WNs (n27, n28)
•        Installation on the UI n45


2.1. Final Phase I: Linux installation
2.1.1. Known hardware issues
.gr
No significant hardware issue was encountered.


.ro
No significant hardware issue was encountered.


.mk
RedHat and Scientific Linux installed on our hardware without any hardware problems.


.hr
There were no hardware issues during the CERN Linux/Scientific Linux installation at RBI, because we use the kernel of the base OS (Debian), which supports all of our cluster hardware.


.tr
Most of the TR-Grid sites are composed of latest-generation off-the-shelf components. The TR-01-ULAKBIM and TR-02-BILKENT sites had some problems during the installation with the gigabit Ethernet and SATA drivers packaged in the stable kernel.
Monitoring the sites is an essential part of administering the infrastructure, including hardware health monitoring. Tools such as SMART for hard disks, and lm_sensors for reading temperatures, M/B and CPU voltages and fan speeds, were used across the whole infrastructure. Results from all sensors are collected at the ULAKBIM site and, in the case of an emergency, the local site administrator is alerted by e-mail.
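A minimal sketch of such a node-local health check, assuming smartctl and the lm_sensors tools are installed (the device name, alert address and log path are illustrative assumptions):

    #!/bin/sh
    # hypothetical health check, run periodically from cron on every node
    ADMIN=grid-admin@example.edu.tr        # assumed alert address
    DISK=/dev/hda                          # assumed disk device

    # SMART overall health self-assessment
    if ! smartctl -H $DISK | grep -q PASSED; then
        echo "SMART health check failed on $(hostname) for $DISK" | mail -s "disk alert" $ADMIN
    fi

    # dump temperatures, voltages and fan speeds for central collection
    sensors > /var/log/sensors-$(hostname).log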


.yu
No known hardware issues with either RH 7.3 or Scientific Linux 3.0.x.


.al
INIMA: Current release SL 3.0.4. The hardware capacity is at the lower limit for running Scientific Linux. RAM was upgraded to
     256 MB and additional 40 GB hard-disk drives were added (part of this space is used for MS Windows). Short
     power interruptions create problems for continuous running of the system.
FIE: Current release SL 3.0.5
FSN: Current release SL 3.0.5, work in progress







.hu
The Linux OS installation was completed successfully. All nodes are running Scientific Linux 3.0.3. No significant hardware issue was encountered.

2.1.2. Known software issues
.gr
Scientific Linux: No significant software issue was encountered during installation.


.bg
When Windows was installed before Linux, LCFG sometimes had problems performing the automatic installation. This was solved by using the dd utility in an appropriate manner.
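The exact dd invocation is not recorded here; one common way to clear a problematic boot sector and partition table before an automated install, assuming the target disk is /dev/hda, is:

    # zero out the master boot record and partition table of the first disk
    # (destructive; /dev/hda is an assumption about the target device)
    dd if=/dev/zero of=/dev/hda bs=512 count=1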


.ro
During the Linux installation, ICI encountered a few software problems, especially related to hardware drivers. Network and SATA modules were the main issues in the Linux installation phase.


.mk
The RedHat Linux (CERN Linux) came with some services installed that were not needed. Also, there are no instructions on which of the installed services are required and which should be disabled for security reasons.
The only software issue during the Linux installation concerned the Itanium version of Scientific Linux. The problem was with MySQL, which is needed by SFT: the required libraries could not be compiled against the standard binary deployment. The problem was solved by recompiling MySQL using a special switch.
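The exact switch is not recorded in this document; a rebuild from the source tarball would, under that assumption, look roughly like the following (version number, prefix and configure option are purely illustrative):

    # hypothetical rebuild of MySQL from source on the Itanium node
    tar xzf mysql-4.1.12.tar.gz            # assumed source version
    cd mysql-4.1.12
    ./configure --prefix=/usr/local/mysql --enable-thread-safe-client
    make
    make install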


.hr
Due to the abovementioned fact that the RBI production clusters run a home-developed and maintained cluster distribution (DCC), several issues had to be resolved for the installation of the LCG software. Here is a description of the solution we implemented at RBI.
It is, in fact, possible to install LCG-2 without reinstalling the whole cluster with an operating system from the supported LCG-2 distributions. We managed to install LCG-2 on Debian GNU/Linux 3.1. Since the LCG middleware does not run on plain Debian (we tried converting rpms to debs using alien), we decided to install CERN Linux into a separate directory (/lcg) inside our Debian installation. While this can be done manually from rpms, we used System Installer. System Installer installs the OS by unpacking the CERN Linux rpms, downloaded beforehand, into a separate directory. After that, root can chroot into the directory where CERN Linux is installed and configure it as usual (i.e. install the LCG middleware).
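In outline, and with the rpm paths given here as assumptions rather than the exact RBI procedure, the approach looks like this:

    # hypothetical outline of the chrooted CERN Linux installation on Debian
    mkdir /lcg                                   # target directory for CERN Linux
    # unpack the previously downloaded CERN Linux rpms into /lcg
    # (shown here directly with rpm; RBI used System Installer for this step)
    rpm --root /lcg --initdb
    rpm --root /lcg -ivh /downloads/cernlinux-rpms/*.rpm

    # enter the chrooted CERN Linux and continue as on a normal LCG-2 node
    chroot /lcg /bin/bash
    # ... install and configure the LCG middleware inside the chroot ...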


.tr
The latest middleware at the TR-Grid sites was LCG-2.6.0 at the time of writing. These are the issues that arose during the installation of the sites:
      -   The Java rpm is installed before the installation of the nodes.





      -    The seegrid VO certificate rpm is installed on the service nodes.
      -    The seegrid VO entries in the gridmapfile and groupmapfile files under the /opt/edg/etc/lcmaps directory were corrected; /opt/edg/etc/edg-mkgridmap.conf was also corrected.
      -    Certificate rpms of the national VOs were also installed on all clusters, and the same modifications were made to the VO configuration files.
      -    During the standard updates of the middleware some unexpected errors and warnings were handled; a GSSAPI warning on ssh clients and an apel error on mysql are some of these.


.yu
No known software issues with either RH 7.3 or Scientific Linux 3.0.x.


.al
INIMA: Minor problems with RGMA, MON and GRIDICE are still not solved.
           The site is being prepared for migration to SL3.0.5 & LCG2.7.0
FIE:
FSN:


.hu
No significant software issue was encountered.


2.2. Final Phase II: LCG-2 installation and configuration
2.2.1. Computing Element
.gr
The CE was installed by following the manual installation guide. It was installed on grid1. A certificate for it was issued by the HellasGrid CA, after the relevant request. The certificate, along with the corresponding private key, was copied to /etc/grid-security.
The steps below were common to the installation of all nodes:
      -   Configuration of the yaim conf file
      -   Execution of the command: /opt/lcg/yaim/scripts/configure_node site-info.def CE_torque SE_classic
      -   Installation of the certificates on the CE and SE. Only two certificates had to be used because the CE and SE reside on the same server (grid1).


.mk
MK-01-UKIM_II: The LCG-2 installation on the Computing Element initially overlapped with a Worker Node, but after some instructions we removed the Worker Node. The Computing Element is installed with torque. It supports the SEEGRID, DTEAM and SEE VOs.
The name of the MK-01-UKIM_II CE is grid-ce.ii.edu.mk.
MK-02-ETF: The node is only a Computing Element with torque. The supported VOs are SEEGRID and DTEAM.
The name of the MK-02-ETF CE is grid-ce.etf.ukim.edu.mk.






.tr
Besides the issues summarized in the “Known software issues” section, no other issues need to be included here. Software installation on the CE was as smooth as on the other service nodes. Most of the issues that occurred during the installation were caused by misconfiguration.


.yu
CE on AEGIS01-PHY-SCL is ce.phy.bg.ac.yu
CE on AEGIS02-RCUB is grid01.rcub.bg.ac.yu
CE on AEGIS03-ELEF-LEDA is grid01.elfak.ni.ac.yu


.al
The CE at all sites is configured using the standard site-info.def file from the SEE-GRID web site.
INIMA: LCG 2.4.0 completed one CE/SE
FIE: LCG 2.6.0 completed one CE
FNS: LCG 2.6.0 work in progress


.hu
The n31.hpcc.sztaki.hu node works as the Computing Element in the testbed. n31 had a hardware problem. It is running LCG 2_6. The installation of the new hardware will take place on 20-21 February, and the LCG 2_7_0 installation will follow by the end of February.


2.2.1.1. Available jobmanagers and the main parameters

.gr
PBS is installed on the CE. The configuration was completed through the yaim conf file.


.ro
The available jobmanagers at the ICI production site are fork (implicit) and torque with the maui job scheduler. The default configuration has been used until now. As support for VOs other than dteam will be provided from mid-March, different job queues and parameter tuning will then be considered.


.mk
The problems that were encountered during the installation of the computing elements were:
         The reverse DNS was not set resulting in the jobmanager not working properly
         The ntp service did not synchronize the time


.hr
The RBI cluster is using Torque jobmanager. At the moment scheduling is done using Torque C/FIFO
scheduler although we are planning to switch to Maui. Maui will allow us to set a strict policy for sharing our
cluster resources between the local and grid jobs.







.tr
PBS and maui are in service on all CE nodes of the TR-Grid sites. At the ULAKBIM site, CPU numbers for the VOs were limited in PBS. Also, one CPU was dedicated to dteam functional tests.
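The actual PBS configuration is not reproduced here; one way to express such a limit with Torque's qmgr, assuming a dedicated dteam queue capped at one running job, is sketched below:

    # hypothetical Torque/qmgr sketch for a dteam queue limited to one running job
    qmgr -c "create queue dteam queue_type=execution"
    qmgr -c "set queue dteam acl_group_enable = true"
    qmgr -c "set queue dteam acl_groups = dteam"
    qmgr -c "set queue dteam max_running = 1"
    qmgr -c "set queue dteam enabled = true"
    qmgr -c "set queue dteam started = true"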


.yu
Available jobmanager is pbs (shared home directories) on all sites, and MPICH is enabled on all sites.
Available queues on AEGIS01-PHY-SCL: atlas, cms, esr, see, dteam, seegrid, aegis.
Available queues on AEGIS02-RCUB: dteam, seegrid.
Available queues on AEGIS03-ELEF-LEDA: dteam, seegrid.


.al
Torque/Maui is used in all sites.


.hu
Torque is used as lcgpbs in order to support MPI. The jobmanager fully supports the SEE-GRID VO.



2.2.1.2 Occurred problems and solutions
.gr
No significant problems have been encountered.


.mk
The solution to the ntp service not synchronizing was to make the ntp service synchronize manually via a cron job every night.
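For example, a nightly root crontab entry could look like the following (the NTP server name is an assumption; any reachable time server would do):

    # hypothetical root crontab entry: force a time sync every night at 02:00
    0 2 * * * /usr/sbin/ntpdate -u ntp.marnet.net.mk >> /var/log/ntpdate.log 2>&1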


.hr
Our biggest concern was how (if at all) the LCG-2 middleware would work inside the chrooted installation of CERN Linux.


.tr
Since the sites other than TR-01-ULAKBIM are not at production level, these sites are not overloaded. One problem during heavy-load days at the TR-01-ULAKBIM site was giving priority to short jobs from the national and seegrid VOs.
For MPI support, read/write NFS shares are required.


.yu
In addition to the common LCG problems with CE installation and configuration, there were some SEE-GRID specific ones. The site-info.def configuration file template and the users.conf template for SEE-GRID were developed during the upgrade to LCG-2_6_0 and posted at http://lcg.phy.bg.ac.yu/LCG-2_6_0/




79163d0b-c558-4988-8845-d426662f0fd6.doc               SEE-GRID consortium
D5.4 – Demonstration Labs                                                                                   Page 35 of 50

On AEGIS01-PHY-SCL we had a problem in LCG-2_6_0 with the seegrid and see queues, which were confused by YAIM, and this had to be fixed manually. The problem is solved in LCG-2_7_0. Also, the PBS APEL log parser was broken in LCG-2_6_0 and is fixed in LCG-2_7_0. All experiences are published on the SEE-GRID Wiki in the Guides section, http://wiki.egee-see.org/index.php/SEE-GRID_Wiki
There were also problems with the information system that needed to be fixed manually. They are all listed on the SEE-GRID Wiki.


.al
Investigation of simple grid solutions that may be used in the local networks of different institutions, without the need for involvement in complex grid-oriented organizations, is being considered.
No particular problems have been identified so far, except the fact that the precompiled binaries were not certified for MS Windows XP but only for NT and 2000. It remains to test new releases on XP and Linux.


.hu
Official MPI support is missing from LCG. We have implemented some workarounds to support MPI.



2.2.1.3 Solutions / workarounds for the problems


.tr
Some of the nodes are dedicated to the national and seegrid VOs, as is done in the dteam PBS configuration.
Users of the MPI-supported national VOs use a shared NFS home.


.yu
The permissions on the queues in LCG-2_6_0 had to be corrected manually, so that the group corresponding to each VO has exclusive access to the corresponding queue. Lcmaps also needed to be changed, since SEE VO users were mapped to .see and SEEGRID VO users to .seegrid, and it happened that .see resolved to seegrid001. This was avoided by mapping SEE VO users to .see0. In LCG-2_7_0 this is not needed.
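As an illustration of that mapping change (the exact DN patterns are assumptions; the real entries live in the lcmaps gridmapfile and groupmapfile under /opt/edg/etc/lcmaps, as mentioned in the .tr section above), the relevant lines would look roughly like:

    # hypothetical lcmaps gridmapfile entries
    # before: .see could end up resolving to the seegrid001 pool account
    "/VO=see/GROUP=/see"          .see
    "/VO=seegrid/GROUP=/seegrid"  .seegrid
    # after: SEE VO users mapped to a distinct pool account prefix
    "/VO=see/GROUP=/see"          .see0
    "/VO=seegrid/GROUP=/seegrid"  .seegrid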
In the information system there were several misconfigured values in LCG-2_7_0, and several bugs, most notably https://savannah.cern.ch/bugs/index.php?func=detailitem&item_id=14946


.al
CONDOR as a parallel solution has been tested in the MS Windows environment only. No particular problems have been identified, except the fact that the precompiled binaries were not certified for MS Windows XP but only for NT and 2000. It remains to test new releases on XP and Linux.


.hu
An unofficial way of supporting MPI is deployed. We did the setup according to the “MPI Support with Torque” guide by Charles Loomis posted on the GOC Wiki. We made minor changes to the script, because we are using SMP nodes.






2.2.2. User Interface
.gr
The installation of the UI node followed the same pattern as the other nodes:
      -   Configuration of the yaim conf file
      -   Execution of the command: /opt/lcg/yaim/scripts/configure_node site-info.def lcg-WN_torque lcg-MON lcg-UI
      -   Installation of the certificates. Only two certificates had to be used because the services reside on the same server (grid2).


.mk
The User Interface is on the same computer as the Computing Element or the Worker Node. The installation of the User Interface did not cause any problems. The installation and configuration with yaim proved to work well once yaim was configured successfully.
The UI node for the MK-01-UKIM_II site is grid-ce.ii.edu.mk and for MK-02-ETF it is grid-wn.etf.ukim.edu.mk




.tr
Various UI installations were done at the sites to provide secure access to the SEE-Grid infrastructure. The use of user web portals is planned for the future.


.yu
UI on AEGIS01-PHY-SCL is ce.phy.bg.ac.yu
UI on AEGIS02-RCUB is grid02.rcub.bg.ac.yu
UI on AEGIS03-ELEF-LEDA is grid01.elfak.ni.ac.yu


.al
INIMA: The User Interface is installed on three separate individual workstations (two desktops and one notebook, all running
     dual operating systems)
FIE: User Interface installed on one of the PCs.
FSN: User Interface installed on one of the PCs.


.hu
Our User Interface node is the n45.hpcc.sztaki.hu and it is hosting the new P-GRADE Portal version 2.3 server as well.
It is a virtual machine with LCG 2_6, it is running on the n59 machine on the SZTAKI cluster. The LCG 2_7 installation
will occur with the other nodes until the end of February.



2.2.2.1.       Certificate issues
.gr
LCG user certificates were requested from the HellasGrid CA (the national grid CA), which is an accepted LCG2 certification authority (prior to the SEE-GRID CA being established). The certificates were issued in accordance with the process defined by HellasGrid. User certificates were acquired first, and then it was possible to issue certificates for the various grid nodes.


.ro
The Romanian RA, Sandu Ionut, received his certificate and the steps for a certificate request were described to the Romanian partners. The certification process is in progress for the third parties, as the ICI users already held valid certificates.


.mk
Support for the SEEGRID VO was enabled by installing the SEEGRID-CA rpm package, which configured the grid-security files. The users were then able to request new certificates from the SEEGRID CA.


.tr
There are 10 user certificates registered with the seegrid and national VOs. Most of the authentication issues were caused by faulty seegrid VOMS configuration files that were packaged with the LCG-2.6.0 distribution.


.yu
No problems.


.al
Certificates were requested immediately after installation of UIs and generation of requests.


.hu
No certificate issues. The previous certificate, signed by the SEE-GRID CA, has been replaced with a new certificate issued by the Hungarian NIIF Certification Authority.



2.2.2.2.     Occurred problems
.gr
No significant problems have been encountered.


.ro
ICI delivers the SE service on the same machine where the RGMA services are deployed. Although no significant issues were encountered with the SE, the RGMA services posed some minor problems related to the edg-tomcat service.


.mk
There were no problems regarding the UI node.






.yu
No problems.


.hu
No significant problems have been encountered.



2.2.2.3.       Solutions / workarounds for the problems


.ro
Minor changes in several scripts resolved all the encountered issues.




.yu
No problems.


.hu
No workarounds.



2.2.3.     Storage Element
.gr
The SE_torque was installed by following the manual installation guide. It was installed on grid1. A certificate for it was issued by the HellasGrid CA, after the relevant request. The certificate, along with the corresponding private key, was copied to /etc/grid-security.
The steps below were common to the installation of all nodes:
      -   Configuration of the yaim conf file
      -   Execution of the command: /opt/lcg/yaim/scripts/configure_node site-info.def CE_torque SE_classic
      -   Installation of the certificates on the CE and SE. Only two certificates had to be used because the CE and SE reside on the same server (grid1).


.mk
The Storage Element overlaps with the MON node. The Storage Element uses classic storage. When DPM was released with LCG 2_4_0, the MK-01-UKIM_II site tried to install a DPM SE, but after experiencing several problems with it, the site was switched back to the classic SE.
The Storage Element uses a local HDD for storage. There is no storage area network available.








.tr
There are six SE machines in the TR-Grid infrastructure. Besides their storage element functions, these machines provide a shared area for experiments.


.yu
SE on AEGIS01-PHY-SCL is se.phy.bg.ac.yu
SE on AEGIS02-RCUB is grid01.rcub.bg.ac.yu
SE on AEGIS03-ELEF-LEDA is grid01.elfak.ni.ac.yu


.al
All storage elements are installed using standard configuration files from SEE-GRID site
INIMA: Storage element was included in the CE node during upgrades of LCG, together with MON.
FIE: Storage element installed separately, in the same node as MON.
FSN: … work in progress


.hu
The n31.hpcc.sztaki.hu node works as the Storage Element in the testbed. n31 has had a hardware problem. It is running LCG 2_7.



2.2.3.1.     Occurred problems


.gr
No significant problems have been encountered.


.bg
A yaim bug sets wrong permissions on the experiment software information directories. Fixed by letting the sgm users read/write there.
Rfiod is wrongly configured by the OS. Fixed by editing the rfiod start-up script.
Globus-mds is even more unstable than on the CE. The same workaround is used.
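A sketch of the permission fix, assuming the seegrid software directory and sgm group names used below (the actual paths and group names on our sites are not given in this document):

    # hypothetical fix: give the seegrid sgm pool accounts write access
    # to the experiment software (tag) directory
    chgrp -R seegridsgm /opt/exp_soft/seegrid
    chmod -R g+rw       /opt/exp_soft/seegrid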


.mk
The main problem with the Storage Element was that RFIO was not working. This was the reason why VIVE could not be installed for quite some time.


.tr
One common problem for all the sites was choosing a scalable filesystem for adding extra storage.





The bundled NFS configuration in SL is not optimized for middle-size or large sites. This causes problems when more than 4 nodes access the NFS share at the same time.


.yu
Apart from hardware failures, no other problems detected.


.hu
n31.hpcc.sztaki.hu had a hard disk failure.



2.2.3.2.       Solutions / workarounds for the problems


.mk
The cause of this problem lies in the difference between the domain names of the WNs and the SE. The WNs have the marnet.net.mk DNS suffix and the SE has the ii.edu.mk DNS suffix. This caused these accesses to be treated as remote accesses, and they were rejected. The solution was to create a file named /etc/shift.localhosts and put in it the following lines:
grid-wn0.marnet.net.mk
grid-wn1.marnet.net.mk
grid-wn2.marnet.net.mk
grid-wn3.marnet.net.mk
grid-wn4.marnet.net.mk
grid-wn5.marnet.net.mk
Another problem was that the lcg-rm test sometimes failed because of timeouts.


.tr
LVM was used at some of the sites for adding and removing raw storage space.
NFS and TCP/IP kernel parameters were optimized on both the server and the client side.
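A rough sketch of both measures, with device names, sizes and parameter values chosen purely for illustration:

    # hypothetical LVM steps: add a new disk to the SE volume group and grow the data volume
    pvcreate /dev/sdb1
    vgextend vg_storage /dev/sdb1
    lvextend -L +200G /dev/vg_storage/lv_data

    # hypothetical NFS/TCP tuning
    # server side: raise the number of nfsd threads, e.g. RPCNFSDCOUNT=32 in /etc/sysconfig/nfs
    # client side: mount the share with larger block sizes
    mount -o rsize=32768,wsize=32768,hard,intr se.example.tr:/storage /storage
    # enlarge the kernel socket buffers
    sysctl -w net.core.rmem_max=1048576
    sysctl -w net.core.wmem_max=1048576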


.yu
After hardware failures, all data was recovered from backups.


.hu
Procurement of the new hardware elements, and hardware replacement on 20-21 February.



2.2.4.     Worker Nodes
.gr
The installation of the WN nodes followed the same pattern as the other nodes:
      -   Configuration of the yaim conf file
      -   Execution of the command: /opt/lcg/yaim/scripts/configure_node site-info.def lcg-WN_torque lcg-MON lcg-UI
      -   Installation of the certificates. Only two certificates had to be used because the services reside on the same server (grid2).


.mk
The worker nodes are dedicated machines at both sites. The torque system utilizes them with no problems. At MK-01-UKIM_II they are Pentium 4 machines with HT, and torque treats them as dual-processor machines.


.tr
As summarized in the “installation cycles” section, kickstart-based automatic OS installation was used to deploy the WN machines at all sites. The middleware installation was done with self-developed management scripts. The WN machines mount their experiment application directory from the SE machines, and some sites also provide a shared read/write home directory from the CE for some MPI user groups.
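A stripped-down kickstart file for such a WN installation might look like the following; the mirror URL, password hash and package list are placeholders, not the actual ULAKBIM configuration:

    # hypothetical ks.cfg for a Scientific Linux 3.0.x worker node
    install
    url --url http://mirror.example.tr/scientific/30x/i386   # assumed local SL mirror
    lang en_US
    keyboard us
    rootpw --iscrypted $1$changeme$xxxxxxxxxxxxxxxxxxxxxx     # placeholder hash
    timezone Europe/Istanbul
    bootloader --location=mbr
    clearpart --all --initlabel
    part /     --fstype ext3 --size 8192
    part swap  --size 1024
    part /home --fstype ext3 --size 1 --grow
    network --bootproto dhcp

    %packages
    @ Base
    ntp
    openssh-server

    %post
    # site-specific middleware bootstrap scripts would be called from here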


.yu
WNs on AEGIS01-PHY-SCL are wn01.phy.bg.ac.yu to wn25.phy.bg.ac.yu
WNs on AEGIS02-RCUB are grid03.rcub.bg.ac.yu to grid14.rcub.bg.ac.yu
WNs on AEGIS03-ELEF-LEDA are grid02.elfak.ni.ac.yu to grid05.elfak.ni.ac.yu


.al
Work-nodes are installed using standard configuration files from SEE-GRID site.
INIMA: 4 nodes, one of which is temporarily out of service, waiting for reconfiguration.
FIE: 8 nodes
FNS:


.hu
On Worker Nodes the LCG 2_7_0 upgrade will happen parallel with the SE, CE, UI nodes until the end of February.



2.2.4.1.       Occurred problems


.bg
Node synchronization is extremely important. It is difficult to maintain a homogeneous configuration with the increasing number of WNs.


.mk
There were no problems regarding the Worker Nodes.






.yu
No problems.


.hu
No significant problems.

2.2.4.2.     Solutions / workarounds for the problems


.bg
A MON box is needed for each site and is notoriously difficult to set up. We set our Storage Elements to also act as MON boxes. Some configuration files still had changeme.invalid instead of the correct entries after running the yaim configure_MON script.


.hu
No problems, no workarounds.



2.3.       Achieved Results


.bg
All our sites are certified to run LCG2_7.


.ro
We have installed the LCG2 middleware on top of the Red Hat 7.3 linux distribution. Because that
distribution is no longer officially supported by the Vendor we had to use the community project
FedoraLagacy in order to get security updates.
Our production site was deployed on four dual Xeon servers with SATA harddisks. Because of that
the LCFGng based installation posed some difficulties as we had to modify the standard
configuration and to compile new profiles. Also this tool proved to be an overhead as we had only
four machines. In respect to that the LCG2 manual installation was chosen.
We started with LCG2_2_0 configured manually for all the site services. The UI services were
available and we could submit test jobs using different RBs. Also our CE was up and could handle
the submitted jobs. After the release of the LCG2_3_0 we upgraded the site using the Yaim tool
which proved to be a more efficient manner of installing and configuring a site.
After the installation was finished we started to test the site using the tests provided for that purpose
and we registered the site on the CERN's BDII. Since then we have been tested daily by the Site
Functional Tests suite, GIIS Monitor and other tools used for monitoring. Apart of that the RGMA
service wasn't activated and so the accounting for the site was disabled. This issue will be fixed in
this “maintenance” period of time. In addition to that HEP and BIOMED VOs would be enabled as
well as genuine SEE-GRID applications. From that moment it will become a full production site




79163d0b-c558-4988-8845-d426662f0fd6.doc        SEE-GRID consortium
D5.4 – Demonstration Labs                                                                                Page 43 of 50


since the dteam VO was used only for testing and monitoring purposes. The middleware was then
updated according to the newest release.
Our computing cluster is based on Torque and Maui, as this was the recommendation for the current
version of LCG. We have noticed improvements in its reliability and also plan to explore the new
features of OpenPBS and the Maui scheduler. As we had an old ipchains-based firewall, the upgrade
to Maui produced some difficulties, because the new ports it uses were blocked by the firewall. This
issue was a good opportunity for a site security analysis and for considering a different security
approach.
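A quick way to confirm after such a firewall change that the batch-system ports are reachable again is a small connectivity probe run from a worker node. The sketch below is only an illustration: the host name is hypothetical, and the port numbers (15001 for pbs_server and 42559 for Maui are common defaults, but both are configurable) are assumptions that must be replaced with the values actually used at the site.

#!/usr/bin/env python
# Minimal sketch: probe TCP ports of the batch system from a worker node
# to confirm that the firewall lets the traffic through.
# Host name and port numbers are assumptions (common Torque/Maui defaults);
# replace them with the values actually configured at the site.

import socket

CE_HOST = "ce.example.org"                      # hypothetical CE host name
PORTS = {"pbs_server": 15001, "maui": 42559}    # assumed default ports

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for service, port in PORTS.items():
        state = "open" if port_open(CE_HOST, port) else "BLOCKED"
        print("%s (%s:%d): %s" % (service, CE_HOST, port, state))

Running the probe before and after changing the firewall rules gives a simple regression check that job-submission traffic is no longer blocked.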


.mk
The MK sites are running from the beginning of the project and have supported the seegrid VO. The usage of
the Grid infrastructure from the Macedonian research community is in its beginnings. The overall results of
the deployment of the clusters are that the Macedonian Grid Initiative has the know-how of the grid
deployment and grid operation. This will be crucial in adopting new sites and developing applications for the
grids, as well as giving support to the local research community.
The Macedonian team has great experience in deploying monitoring tools and their operation. The team has
deployed, adapted and debugged several monitoring tools that are actively used in the grid infrastructure for
SEEGRID.


.hr
We are very happy to have developed this alternative method of LCG-2 installation, as it enables us to use the
existing cluster as a part of SEE-GRID. A similar approach can be used to install LCG-2 as an add-on to a
previously installed/unsupported cluster distribution (e.g. Rocks, Oscar). This is also convenient for the UI: in this
way users can install the UI software on their own computers running their favourite distribution and use it for
submitting jobs.


.tr
In the first year of the project only ULAKBIM and Bilkent University were offering resources to the SEE-GRID
infrastructure. After dissemination events, four new sites were added to the infrastructure. One major challenge was to
solve problems at these sites in a timely manner; collecting all monitoring data at the ULAKBIM site allowed immediate
intervention when problems occurred. This has been a good exercise towards a reliable infrastructure, most of which will
become part of the EGEE production-level infrastructure.
Remote installation, carried out with conferencing tools, was also a good experiment for the future use of these tools. LCG
middleware installation has become less problematic in this period and, combined with the automatic SL
installation, the initial setup work for a medium-sized cluster has dropped to under half a day.
After the accreditation of the TR-Grid CA, ULAKBIM can supply certificates to the user community in the country, as
described in detail in the “Certificate Authorities” section.
The TR-Grid initiative has come a long way in the last year of the project. Together with the expansion of the
infrastructure, the human network was also broadened, which was the primary aim of the project.


.yu
Apart from providing smooth operation and timely upgrades of its three sites, UOB also contributed to the core services
by providing the SEE-GRID RB and BDII. In addition, we contributed to the documentation on the SEE-GRID Wiki and to
the GIM mailing list.






.al
INIMA: completion of a 4+1-node cluster with LCG 2.4.0,
         minor problems remain open,
         integration with SEE-GRID regional structures,
         preparation for migration to SL 3.0.5 & LCG 2.7.0
FIE:     completion of an 8+2-node cluster with LCG 2.6.0
FNS:     work in progress on 1 CE/SE and some WNs, release SL 3.0.5 & LCG 2.6.0


.hu
We have successfully finished a new version of the P-GRADE Portal (version 2.3) on time. On the testbed we have
changed our certificate: the old one was issued by the SEE-GRID CA and the new one is issued by the Hungarian NIIF CA. We
have successfully completed Final Phase I, and all our nodes are running Scientific Linux 3.0.3. We are now working
on the LCG-2 installation and configuration (Final Phase II), which will be finished at the end of February.




3. Certificate Authorities
.gr
LCG user certificates were requested from the HellasGrid CA (the national grid CA), which is an accepted LCG2 certification
authority (prior to the SEE-GRID CA being established). The certificates were issued in accordance with the process
defined by HellasGrid. Once user certificates had been acquired, it was possible to issue certificates for the various
grid nodes.


.mk
Macedonia still uses the SEE-GRID CA located in Greece. The regional authority is not yet able to certify a
national Grid CA, but will do so in the future, as the infrastructure is developed and the
community grows.


.tr
After an acceptance process of four months, the National Grid Certification Authority of Turkey (TR-Grid CA) was
accredited at the fifth EUGridPMA meeting in Poznan in September 2005. Before the accreditation of TR-Grid CA, the
necessary user and host certificates were obtained from the SEE-GRID CA. After registration in the repositories of the
PMA, TR-Grid CA has been providing user, host and service certificates to the grid community in Turkey since
October 2005. As of February 2006, 6 user, 15 host and 1 service certificates have been issued by TR-Grid CA, and
3 user and 6 host certificates signed by the SEE-GRID CA have not yet expired. Currently, among the SEE-GRID sites in
Turkey, TR-02-BILKENT and TR-03-METU are using SEE-GRID CA certificates, while TR-01-ULAKBIM, TR-04-
ERCIYES, TR-05-BOUN and TR-06-SELCUK are using TR-Grid CA certificates.


TR-Grid CA aims to provide efficient PKI services by means of its public repository at http://www.grid.org.tr/ca.
The policy document, the TR-Grid CA root certificate, the list of valid certificates, the certificate revocation list
(CRL) and a web-based, SSL-protected certificate request form can be found at this website. Moreover, the TR-Grid wiki
page (http://wiki.grid.org.tr) provides new Grid users and site administrators with useful and explanatory information
about grid security in the local language.
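Since reliable CRL distribution is part of these PKI services, site administrators may want to verify periodically that the CRL they fetch from the repository is still fresh. The following is a minimal sketch using the Python cryptography package; the CRL file name under the repository is a hypothetical example, and the real download URL should be taken from the repository itself.

#!/usr/bin/env python
# Minimal sketch: download a CA's CRL and report when it expires.
# Requires the "cryptography" package. The exact file name under the
# repository is a hypothetical example; look up the real URL in the
# CA's public repository.

import datetime
import urllib.request

from cryptography import x509

CRL_URL = "http://www.grid.org.tr/ca/crl.der"  # hypothetical path

def crl_next_update(url):
    """Fetch the CRL (DER or PEM) and return its nextUpdate timestamp."""
    data = urllib.request.urlopen(url).read()
    try:
        crl = x509.load_der_x509_crl(data)
    except ValueError:
        crl = x509.load_pem_x509_crl(data)
    return crl.next_update

if __name__ == "__main__":
    next_update = crl_next_update(CRL_URL)
    remaining = next_update - datetime.datetime.utcnow()
    days_left = remaining.days + remaining.seconds / 86400.0
    print("CRL valid until %s (%.1f days left)" % (next_update, days_left))

Such a check can be scheduled (for example from cron) so that an expired or soon-to-expire CRL is noticed before it starts causing authentication failures.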






.yu
Not yet.


.al
The certification authority function is currently handled as an individual role. Several certificates for people and hosts
have been requested and received from the SEE-GRID CA.


.hu
The NIIF Certificate Authority provides PKI services for the Hungarian academic community. Certificates are
issued to staff and students of NIIF member organizations (i.e. Hungarian universities, high schools,
research institutes of the Hungarian Academy of Sciences, public collections, etc.) and to any other organization
cooperating with these entities in research and educational activities. Certificates may also be
issued to network hosts and application services operated by the above organizations. The NIIF CA is
operated by the NIIF Office, located at Victor Hugo u. 18-22., H-1132 Budapest, Hungary. The web site
of the NIIF CA can be reached at the following link: http://www.ca.niif.hu/niif_ca_howtoget.html


4. Final state of SEE-GRID sites
The following figure presents an overview snapshot of the aggregated SEE-GRID resources.




                                        Figure – SEE-GRID Resources Overview






5. P-GRADE PORTAL: The WEB-based single access point of
   the SEE-GRID resources

5.1. Overview of the P-GRADE Portal
The P-GRADE Grid Portal was developed by the Laboratory of Parallel and Distributed Systems at MTA-SZTAKI,
Hungary. It is a workflow-oriented, Web-based Grid portal which offers a reliable single access point to all the SEE-
GRID resources. It enables the creation, execution and monitoring of workflows in the SEE-GRID infrastructure through
high-level, graphical Web interfaces. Components of the workflows can be sequential and parallel (MPI, PVM) jobs.
The P-GRADE Grid Portal hides the low-level details of the Grid access mechanisms by providing a high-level Grid user
interface, and it is able to cover the whole lifecycle of workflow-based grid applications.




                                 Figure - P-GRADE Portal welcome screen of SEE-GRID.



5.2. Features of P-GRADE Grid Portal
The P-GRADE Portal has been built on the Globus middleware, particularly on those GT-2 tools that are generally
accepted and widely used in production grids today: Globus GridFTP, GRAM, MDS and GSI have been chosen as the
basic underlying toolset for the Portal.
P-GRADE Portal has the following main features:
•   Built-in graphical Workflow Editor
•   Workflow manager to coordinate the execution of workflows in the Grid (including the coordination of the
    necessary file transfers)
•   Certificate management
•   Multi-Grid management
•   Resource management
•   Quota management
•   On-line workflow and parallel job monitoring
•   Built-in MDS- and LCG-2-based Information System management
•   Local and remote file handling
•   Storage Element management
•   JDL (Broker) support for resources of the LCG-2 Grid




•   Workflow fault tolerance by job-level rescuing
•   Workflow archive service




                          Figure - Certificate Management with MyProxy in the P-GRADE Portal



5.3. Detailed description of the P-GRADE Portal
The P-GRADE Grid Portal is an extension of the GridSphere portal framework and it offers portlet-based access to the
following services:
•   Creation of workflows from sequential (C, C++, Fortran, etc.) and parallel (MPI, PVM) programs
•   Execution of job workflows on Globus-based Grid resources
•   Exploitation of two-level parallelism: a. among jobs of a workflow; b. among processes inside a parallel job
•   Parallel execution of workflow components inside one Grid or in different Grids
•   Management of user certificates and proxy credentials to realize secure communication with grid participants
•   Collection of trace data from the running jobs by the Mercury monitor
•   On-line visualization of the execution of workflows, of the communication between workflow components and
    of the process communication inside a parallel job running on a remote Grid site




                              Figure - Visualization of process communication in P-GRADE


5.4. Grid Applications in the P-GRADE Portal
Workflow applications can be developed in the P-GRADE Portal with the graphical Workflow Editor. The Editor is
implemented as a Java Web Start application that can be downloaded and executed on the client machines on the fly.
The Editor communicates only with the portal server application and is completely independent of the portal
framework and the grid middleware the server application is built on. A P-GRADE Portal workflow is an acyclic
dependency graph that connects sequential and parallel programs into an interoperating set of jobs. The nodes of such a
graph are batch programs, while the arc connections define data relations among them. These arcs define the execution
order of the jobs and the dependencies between them, which must be resolved by the workflow manager during
execution. One of the demo workflows of the P-GRADE Portal is shown in the figure below as an example.
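As an illustration of this workflow model (not of the Portal's actual implementation), the following minimal sketch represents such a dependency graph in Python and derives an execution order in which every job runs only after the jobs it depends on, which is essentially the constraint the workflow manager has to enforce. The job names and dependencies are invented for the example.

# Minimal sketch of the workflow model described above: jobs are nodes,
# arcs are data dependencies, and a valid execution order is a topological
# sort of the graph. Job names and dependencies are invented for illustration.

from collections import deque

# arcs: job -> set of jobs it depends on (producers of its input data)
workflow = {
    "preprocess": set(),
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "visualize": {"simulate_a", "simulate_b"},
}

def execution_order(dependencies):
    """Return a job order in which every job comes after all of its dependencies."""
    remaining = {job: set(deps) for job, deps in dependencies.items()}
    ready = deque(sorted(job for job, deps in remaining.items() if not deps))
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for other, deps in remaining.items():
            if job in deps:
                deps.discard(job)
                if not deps and other not in order and other not in ready:
                    ready.append(other)
    if len(order) != len(dependencies):
        raise ValueError("workflow graph contains a cycle")
    return order

if __name__ == "__main__":
    print(execution_order(workflow))
    # ['preprocess', 'simulate_a', 'simulate_b', 'visualize']

In the real Portal the same ordering information also drives the necessary file transfers between jobs, since each arc corresponds to data produced by one job and consumed by another.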




                     Figure - Demo workflows of the P-GRADE Portal in the Workflow Editor



5.5. P-GRADE Portal Development Roadmap




                   Figure - Past and Future Roadmap of the P-GRADE Portal Development




6. Future plans
.mk
The future plan is to install additional clusters at other universities and to spread the know-how to the
scientific community. The small storage capacity is also a big problem, and we hope that the site will be
upgraded with network storage.
The plan for the current sites is to ensure the sustainability of the infrastructure by constant upgrading and
patching with the newest software.
Third parties are also interested in getting involved in Itanium-2 LCG deployment and development, in
order to utilize the Itanium machines that are currently not fully used.
.tr





As can be seen in the following figure, in the near future the TR-Grid infrastructure will include two main types of
clusters:
      -   EGEE clusters, which will work on both the EGEE and SEE-GRID infrastructures.
      -   SEE-GRID-only clusters, which are development clusters with an adequate number of CPUs. These clusters will
          provide a development and certification testbed for national and regional grid applications.
In the second quarter of 2006 the TR-Grid infrastructure is going to include more than 500 CPUs and 20 TB of
storage. Six new sites with 32-128 CPUs will be installed, and the TR-01-ULAKBIM site will be extended to over 250 CPUs.
National grid services such as VOMS and LFC are going to be distributed across the new sites with good UlakNet connectivity.
Grid projects have also triggered the use of new technologies on UlakNet. The interconnection of the ULAKBIM and Bilkent
University sites was provided by a gigabit Ethernet link over dark fibre. A 20 Gbps link between the Middle East Technical
University and ULAKBIM sites is going to be in service in the second quarter of 2006, which is also a start for using
10 Gbps Ethernet technology at the MAN level.




.yu





We will keep up with the latest LCG/gLite releases and upgrade our sites according to the decisions taken by the SEE-
GRID community.
We are also planning to add more sites to SEE-GRID, at least one during this year.


.al
INIMA: establish a dedicated cluster,
       follow up new releases,
       investigate and experiment with local software,
       give support for regional software

FIE:    …
        follow up new releases,
        investigate and experiment with local software,
        give support for regional software
FNS:    …
        follow up new releases,
        investigate and experiment with local software,
        give support for regional software


.hu
As a long-term plan, we have already envisioned new P-GRADE Portal release milestones in our Portal Development
Roadmap. We would like to further enhance the P-GRADE Portal and serve the SEE-GRID user community with
attractive, high-quality, reliable Grid portal software.



