


[pic]

International Committee for Future Accelerators (ICFA)

Standing Committee on Inter-Regional Connectivity (SCIC)

Chairperson: Professor Harvey Newman, Caltech

ICFA SCIC Digital Divide Executive Report

Prepared by the ICFA SCIC Digital Divide Working Group

Chairperson: Professor Alberto Santoro, UERJ, Rio de Janeiro, Brazil

Members: Heidi Alvarez heidi@fiu.edu;

Julio Ibarra julio@fiu.edu;

Slava Ilyin Ilyin@sinp.msu.ru;

Marcel Kunze Marcel.Kunze@hik.fzk.de;

Yukio Karita yukio.karita@kek.jp;

David O. Williams David.O.Williams@cern.ch;

Victoria White white@;

Harvey B. Newman newman@hep.caltech.edu

February 8, 2003

Table of Contents

I. Introduction and Overview 4

II. Methodology 6

III. Conclusions 7

IV. Recommendations 12

V. Remaining Actions: Path to a Solution - A Worldwide CyberInfrastructure for Physics 15

Appendix A – The Americas, with a Focus on South and Central America, and Caribbean NRN Information 16

The Americas – South and Central America, the Caribbean and Mexico 16

Research Networks in Latin America 17

Infrastructure in Latin America 19

Links to Regional High Energy Physics Centers and Networking Information 25

South and Central America 25

The Americas – North America 26

United States 26

Canada 28

Appendix B – Questionnaire / Responses 30

ICFA SCIC Questionnaire – Responses by Russian Sites 58

Appendix C - Case Studies 59

Rio de Janeiro, Brazil – Network Rio (REDE RIO) 59

Beijing, China 61

Table of Tables

Table 1: Characteristics of Research and Education Networks in Latin American Countries 19

Table 2: Submarine Fiber-Optic Cables to CALA Region with Total Bandwidth Capacity 20

Table 3: Respondents 33

Table 4: Connection Status 35

Table 5: Connection Details 39

Table 6: Other networking needs 43

Table 7: Most relevant networking related problems 46

Table 8: Presented ideas for prospective solutions 48

Table 9: Present Computing Facilities Dedicated to HEP 50

Table of Figures

Figure 1: Global Physics Network Cost Model 9

Figure 2: Penetration of Communication Technologies versus the Cost as a % of Per Capita Income 10

Figure 3: Non-Homogeneous bandwidth distribution in South America 17

Figure 4: Submarine and Terrestrial Optical Cable Systems connecting the Americas 20

Figure 5: RNP2, Brazil's National Research and Education Network 23

Figure 6: Rede Rio Network in Brazil, Rio de Janeiro 24

Figure 7: The Abilene Network 27

Figure 8: ESnet Backbone 28

Figure 9: Canada's CAnet4 29

I. Introduction and Overview

In an era of global collaborations, and data intensive Grids, advanced networks are required to interconnect physics groups seamlessly, enabling them to collaborate throughout the lifecycle of their work. For the major experiments, high performance networks are required to make possible Data Grids capable of processing and sharing massive datasets, rising from the Petabyte to the Exabyte scale within the next decade.
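As a rough illustration of the data volumes involved, the sketch below (a back-of-the-envelope estimate only; it ignores protocol overhead, storage speed and competing traffic) shows how long a Petabyte-scale transfer would take at link speeds typical of those discussed in this report:

    # Illustrative estimate only: ideal transfer time for a dataset over a
    # dedicated link, ignoring protocol overhead, disks and competing traffic.
    def transfer_days(dataset_bytes, link_gbps):
        bytes_per_second = link_gbps * 1e9 / 8   # convert Gbps to bytes per second
        return dataset_bytes / bytes_per_second / 86400

    petabyte = 1e15                              # 1 PB in bytes
    for gbps in (0.155, 0.622, 2.5, 10):         # link speeds mentioned in this report
        print(f"1 PB over {gbps:6.3f} Gbps: about {transfer_days(petabyte, gbps):6.1f} days")

Even at 10 Gbps, moving a single Petabyte takes on the order of ten days, which is why multi-Gbps end-to-end paths are treated here as a baseline requirement rather than a luxury.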

However, as the pace of network advances continues to accelerate, the gap between the technologically “favored” regions and the rest of the world is, if anything, in danger of widening. This gap in network capability and the associated gap in access to communications and Web-based information, e-learning and e-commerce that separates the wealthy regions of the world from the poorer regions is known as the Digital Divide. Since networks of sufficient capacity and capability in all regions are essential for the health of our major scientific programs, as well as our global collaborations, we must encourage the development and effective use of advanced networks in all world regions.

It is of particular concern that “Digital Divide” problems will delay, and in some cases prevent, physicists in less economically favored regions of the world from participating effectively, as equals, in their collaborations. Physicists in these regions have the right, as members of the collaboration who are providing their fair share of the cost and manpower to build and operate the experiment, to be full partners in the analysis, and in the process of search and discovery. Achieving these goals requires the experimental collaborations to make the transition to a new era of worldwide-distributed data analysis, and to a new “culture of collaboration”. As part of this culture of collaboration, the managements of the laboratories and the experiments would have to actively foster new working methods that include physicists from remote regions in the daily ongoing discussions and knowledge sharing that are essential for effective data analysis. This same culture is essential if the expert physicists and engineers from all regions who built detector components and subsystems are to participate effectively in the commissioning and operation of the experiment, through “Virtual Control Rooms”. For any of this to happen, high performance networks are required in all regions where the physicists and engineers are located.

The purpose of the Digital Divide sub-committee is to suggest ways to help alleviate and, where possible, eliminate inequitable and insufficient broadband connectivity. As part of this process this report documents the existing worldwide bandwidth asymmetry in research and education networks.

From the scientists’ point of view, what matters is the end-to-end performance of a given network path. This path involves the local networks at the laboratories and universities at the ends of the path; the metropolitan or regional networks within and between cities; and the national, continental and, in some cases, intercontinental network backbones along the path[1]. Compromises in bandwidth or link quality in any of these networks can limit overall performance as seen by other physicists. We can therefore decompose end-to-end performance problems into three tiers of possible bandwidth bottlenecks:

• The Long Range (wide area) Connection on national and international backbones

• The “Last Mile”[2] Connection across the metropolitan or regional network, and

• The Local Connection across the university or laboratory’s internal local area network

If any one of these components presents a bandwidth bottleneck, or has insufficient quality leading to poor throughput, it will not be possible to effectively exploit the next generation of Grid architectures that are being designed and implemented to enable high-energy physicists in all world regions to work effectively with the data from experiments such as the LHC.
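The decisive role of the weakest tier can be expressed very simply: the achievable end-to-end rate is bounded by the slowest segment on the path. The following minimal sketch (with purely hypothetical segment speeds) illustrates the point:

    # Minimal sketch: the end-to-end bandwidth of a path is bounded by its
    # slowest segment, whichever of the three tiers that segment belongs to.
    # The speeds below are hypothetical, for illustration only.
    path_segments_mbps = {
        "local area network": 100,    # university or laboratory LAN
        "last mile / metro":  10,     # connection to the regional point of presence
        "national backbone":  2500,
        "international link": 622,
    }
    bottleneck = min(path_segments_mbps, key=path_segments_mbps.get)
    print(f"End-to-end ceiling: {path_segments_mbps[bottleneck]} Mbps, "
          f"set by the '{bottleneck}' segment")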

In addition to the problem of bandwidth of the links themselves, there may be additional problems preventing the installation or development of adequate networks:

➢ Lack of consistent policies among organizations (e.g. the research universities themselves) in some world regions that would allow the evolution of bandwidth provisioning to be managed coherently. This lack can also prevent the establishment of reasonable policies for sharing the bandwidth where it is limited;

➢ Lack of Competition, due to Government policies (for example monopolies), or lack of network infrastructure in the region from competing vendors, leading to very high bandwidth costs;

➢ Government policies that inhibit, rather than encourage, international network connections with good performance;

➢ Lack of cooperation among carriers managing national, regional and metropolitan networks

➢ Lack of adequate peering among different academic and research networks

➢ Lack of arrangements for transit traffic across a national or continental infrastructure, to link remote collaborative research communities

This report identifies locations where adequate end-to-end network performance is unavailable for our high-energy physics community, and tries to look into the underlying reasons.

The work of the Digital Divide sub-committee was motivated, in addition to the need to solve some of HENP’s major problems, by the realization that solving the end-to-end network problems of HENP will have broad and profound implications for scientific research. The fields of Astronomy, Medicine, Meteorology, Biological Sciences, Engineering, and High Energy Physics are recognizing and employing data intensive applications that heavily and increasingly rely on broadband network connections. Many scientific and engineering disciplines are building a new world of computing applications based on Grid architectures that rely, in turn, on increasing levels of network performance.

The impact of solving the Digital Divide problem extends even beyond the bounds of the research community. In the twentieth century at least three basic infrastructures were needed for a modern city to be viable: Pervasive water distribution, drainage distribution, and electricity distribution through an electrical grid. At the beginning of the twenty-first century, a modern city also has to be equipped with pervasive optical network infrastructures that enable advanced technologies in computing and media communications. These four infrastructure conditions are now a necessary condition for a society’s advancement in the sciences, arts, education, and business. It is generally accepted that those countries making concerted, continuing efforts to provide pervasive access to the latest high speed “broadband” network technologies and services in residences, schools and the at-large community can look forward to the greatest progress, economically and culturally, in the 21st century.

The solutions to HENP’s leading-edge problems, providing networks and building new-generation systems enabling distributed organizations and worldwide collaboration, if developed and applied in a broad context, could therefore have an important impact on research, and society.

This Executive Report provides the Digital Divide sub-committee’s main conclusions, and recommendations. The appendices provide the detailed materials that we have gathered to support these conclusions and recommendations.

II. Methodology

The Digital Divide sub-committee carried out their work in the following manner:

• Regular phone meetings of the group to chart actions and discuss data

• Produced a questionnaire (see Appendix B) and disseminated it to the management of labs and major collaborations

• Analyzed and tabulated the responses to the questionnaire

• Gathered maps, network topologies and bandwidth measurements from web sources

• Gathered case studies (Appendix C)

o Rio de Janeiro, Brazil

o Beijing, China

o Russia, Digital Divide and Connectivity for HEP in Russia

There exists a small, but well defined and documented, community of research and education networking organizations throughout the world from which reference material for this report was drawn. TERENA’s[3] recent compendium provides important information about the current status of Trans-European networks and serves as a basic input for this evaluation. The US National Science Foundation CISE-ANIR Division AMPATH Valdivia Group Report[4] contains a recent evaluation and recommendation about Latin American and Caribbean network development from a collaborative scientific research and educational outreach perspective. In this report we use data from these sources[5] plus information gathered from the questionnaire.

In assembling this report, the sub-committee observed that there is a shortage of readily available documentation. While some material is available on the Internet, it is difficult to locate and present information oriented toward physics networking. The available maps, shown in the Appendix, display the present bandwidth availability to HEP collaboration points per country, and only at the major research centers.

III. Conclusions

1. The bandwidth of the major national and international networks used by the HENP community, as well as of the transoceanic links, is progressing rapidly and has reached the 2.5 – 10 Gbps range. This is encouraged by the continued rapid fall of prices per unit bandwidth for wide area networks, as well as the widespread and increasing affordability of Gigabit Ethernet.

This is important and encouraging because:

a. Academic and Commercial use of, and reliance on, networks is increasing rapidly as a result of new ways of working.

b. High energy physicists increasingly rely on networks in support of collaborative work with each other and with scientists in other disciplines, such as medicine, astronomy, astrophysics, biology and climate research. More and more, this inter-disciplinary interest is stimulated as these domains start to work in international collaborations using the research networks.

c. Universities around the world have been engaged in the use of the Internet (including advanced Internet private networks) for some time now, and students around the world attend courses across many scientific domains. Many of these courses are now being developed as interactive experiences and lectures provided at long distance using IP-based research networks or the Internet to varying degrees. Once the infrastructure is in place it becomes an economical alternative to expensive ISDN dial-up networking for lecture delivery by videoconference.

d. Grids offer an enabling technology that permits the transparent coupling of geographically-dispersed resources (machines, networks, data storage, visualization devices, and scientific instruments) for large-scale distributed applications. They provide several important benefits for users and applications: convenient interfaces to remote resources, resource coupling for resource intensive distributed applications and remote collaboration, and resource sharing. They embody the confluence of high performance parallel computing, distributed computing, and Internet computing, in support of “big science.” They require a high performance and reliable network infrastructure. For LHC experiments they will be essential.

2. A key issue for our field is to close the Digital Divide in HENP, so that scientists from all regions of the world have access to high performance networks and associated technologies that will allow them to collaborate as full partners: in experiment, theory and accelerator development.

This is not the case at present and several examples where inadequate network infrastructure adversely affects the ability of scientists to fully participate were noted through the work of the Digital Divide sub-committee. Also, some of the obstacles that cause this Digital Divide were identified through the sub-committee efforts.

a. In current collider experiments, such as the DZero experiment at the Fermilab Tevatron, potentially valuable computing (and human) resources that could be used for Monte Carlo data production, or for analysis of data, cannot be harnessed due to insufficient bandwidth to transfer data between, for example, sites in Brazil and Fermilab.

b. Among the problems pointed out by the community, the most important are:

1. Cost: The high cost of bandwidth makes upgrading the research networks prohibitively expensive in several countries[6].

The cost issue is complex, as illustrated in Figure 1. A typical physics group in a “remote region” may find itself faced with cost problems, as well as performance issues, in the local university network, last mile connection or metropolitan network, and national and international wide area network. The typical case at the present time, in regions such as South America, Southeast Europe, China, India, Pakistan, Russia and parts of Southeast Asia is that the total cost (Y in the figure) for an adequate network connection end-to-end, is prohibitive.

[pic]

Figure 1: Global Physics Network Cost Model
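The structure of the cost model in Figure 1 can be sketched as follows: the total cost Y seen by a physics group is, to a first approximation, the sum of what must be paid at each tier of the end-to-end path. The amounts below are hypothetical placeholders used only to illustrate the arithmetic, not values taken from this report.

    # Sketch of the cost structure behind Figure 1: the total annual cost Y of
    # an adequate end-to-end connection is roughly the sum of the per-segment
    # costs. All amounts below are hypothetical placeholders, not report data.
    annual_cost_usd = {
        "campus network upgrade":        20_000,
        "last mile / metropolitan link": 60_000,
        "national backbone access":      90_000,
        "international circuit share":  250_000,
    }
    Y = sum(annual_cost_usd.values())
    print(f"Total end-to-end cost Y = {Y:,} USD per year")
    for segment, cost in annual_cost_usd.items():
        print(f"  {segment:31s} {cost:9,} ({cost / Y:.0%} of Y)")

In the regions listed above, it is typically the last-mile and international terms that dominate Y and make the total prohibitive.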

2. Sharing the research networks with the university user communities: In many cases University networks are not properly engineered to support the high demands of HEP research traffic, or the organization supporting the network is not geared (or inclined) to support a “high performance” sub-community on the campus.

3. New technology penetration is slower in poor regions, such as South and Central America, Mexico, Southeast Europe, and large parts of Asia, than in the richer regions. It is generally accepted that once a technology is perceived as having broad utilitarian value, price as a percentage of per capita income is the main driver of penetration: a lower percentage corresponds to a much greater penetration (Figure 2).

The penetration of telecommunications technologies in low income countries is further inhibited by at least 3 factors:

• Low income per capita;

• Less competition, resulting in higher prices from monopolies;

• Fewer applications, as new applications are not perceived to have broad utilitarian value.

The above factors help explain why telecom monopolies, and some independent telecom operators as well, charge even higher prices in low income countries (this is the case in Russia, for example).

Speaking about networks in Latin America, a leader of network[7] development in Mexico said:

Broadband is [currently] about expanding the digital divide, not narrowing it. The ones who need [broadband connectivity] should get it. Let educators, researchers, businesses, hospitals and governments get it. In a second [round of development, the] increased income will drive [further] penetration.

[pic]

Figure 2: Penetration of Communication Technologies versus the Cost as a % of Per Capita Income

3. The rate of progress in the major networks has been faster than foreseen (even 1 to 2 years ago). The current generation of network backbones, representing a typical upgrade by a factor of four in speed, arrived in the last 15 months in the US, Europe and Japan. This rate of improvement is 2 to 3 times Moore’s Law. This rapid rate of progress, confined mostly to the US, Europe and Japan, threatens to open the Digital Divide further, unless we take action.
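The “2 to 3 times Moore’s Law” statement can be checked with a short calculation, taking Moore’s Law as a doubling every 18 months (the conventional figure, assumed here only for the comparison):

    import math
    # A factor-of-4 backbone upgrade arriving every 15 months, compared with
    # Moore's Law taken as a doubling every 18 months (assumption).
    network_rate = math.log(4) / 15    # growth exponent per month, backbones
    moore_rate   = math.log(2) / 18    # growth exponent per month, Moore's Law
    print(f"Backbone growth is about {network_rate / moore_rate:.1f} times Moore's Law")
    # prints about 2.4, consistent with the figure quoted above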

4. There are many end-to-end performance problems, particularly for networks in economically poor regions. For many regions achieving reliable high end-to-end performance is a great challenge and will take much more work in the following areas.

a. End-to-end monitoring of traffic flows extending to all regions serving our community. We must collect data and in some cases conduct simulations of all branches of the research networks in order to understand traffic flow and end-to-end performance. Tools such as PingER and IEPM have been developed by ICFA/SCIC members to provide clear and unambiguous information on where packet loss occurs. Results from PingER provide information to study the behavior of network segments and to discover bottlenecks.
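PingER and IEPM are far more complete than anything that could be shown here, but the elementary measurement they build upon (repeated pings to a remote collaborating site, recording packet loss and round-trip time) can be sketched in a few lines. The example below assumes a Unix-like system with the standard ping command available; the host name is a placeholder, not a real monitoring target.

    import re
    import subprocess

    # Minimal illustration of the kind of measurement that PingER/IEPM automate:
    # send a burst of pings to a collaborating site and record packet loss and
    # average round-trip time. Not the actual PingER code; host is a placeholder.
    def probe(host, count=10):
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max summary line
        return (float(loss.group(1)) if loss else None,
                float(rtt.group(1)) if rtt else None)

    loss_pct, avg_rtt_ms = probe("remote-collaborating-site.example.org")
    print(f"packet loss: {loss_pct}%, average RTT: {avg_rtt_ms} ms")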

b. Dissemination of a network engineering cookbook or “best practices” guidelines for university networks to properly support HEP research traffic.

It is incumbent on the HEP community to work with the network engineering communities to formulate this information and spread it widely, as some university (and city or provincial) networks are not properly set up to support HEP research traffic. As an example, the Southeastern Universities Research Association (SURA), in the United States, published an Optical Networking Cookbook[8], as a result of an NSF-funded Optical Technologies Workshop, to provide a practical resource that details the ingredients required for optical networking and offers "recipes" in the form of case studies that illustrate a variety of optical networking implementations.

c. Ensuring network architectures for the HEP community take into account Grid application requirements to prevent network bottlenecks from occurring. To prevent these bottlenecks, the end-to-end performance of the network path (taking into account the Local Connection, the Last Mile and the Long-Range Wide-Area connection) must be considered when designing the network architecture for HEP Grid applications. It has become essential that network architectures take into account the levels of requirements that are demanding distinctions in networks for experimentation, research and production uses. Network architectures at Tier centers must be sufficiently scalable and flexible to support a cyber-infrastructure[9] for experimental, research and production end-to-end Grid application requirements. A cyber-infrastructure[10] encompasses the emerging computational, visualization, data storage, instrumentation and networking technologies that support major science and engineering research facilities, enabling e-Science. This approach recommends an architecture that empowers the researchers (application developers) to have access to essential network and computational resources in a coordinated manner, through middleware. ICFA/SCIC should consider adoption of the cyber-infrastructure architecture to ease the Digital Divide conditions the HENP community is presently experiencing.

d. Systematizing and tracking information on network utilization as a function of time, and using this information, together with requirements estimates from several fields, to predict the network needs over the next decade. This is being done for the major backbones and international links, but we need to generalize this to cover all regions of the world.

Our questionnaire shows that congestion on research networks is being caused by the large traffic volume that occurs during the day’s peak usage hours. Even without using the data from the questionnaire, models can be created to forecast the network load in a 5 to 10 year window, based on the requirements and a range of new applications coming into use in the following disciplines: Medical Projects (there are several), Long Distance Education (Universities throughout the world are organizing courses using the Internet), Astrophysics and Astronomy, Biology, Genome Projects, Climate, Videoconferences in general, and so on. These applications will share the same network as HEP Grids.
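A forecast of the kind described above can be as simple as compounding today's peak load by an assumed annual growth factor for each community sharing the link. The starting loads and growth rates below are illustrative assumptions, not results from the questionnaire:

    # Toy forecasting model of the kind described above: compound the present
    # peak demand of each community sharing the network by an assumed annual
    # growth factor. All starting loads and growth rates are illustrative.
    current_peak_gbps = {
        "HEP Grid traffic":         0.4,
        "distance education":       0.1,
        "medical applications":     0.1,
        "astronomy / astrophysics": 0.2,
    }
    annual_growth = {
        "HEP Grid traffic":         2.0,
        "distance education":       1.5,
        "medical applications":     1.6,
        "astronomy / astrophysics": 1.7,
    }
    for years in (5, 10):
        total = sum(load * annual_growth[name] ** years
                    for name, load in current_peak_gbps.items())
        print(f"Projected aggregate peak after {years:2d} years: {total:8.1f} Gbps")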

e. Dealing with (many) “Last Mile” connection problems. It is insufficient to obtain terrestrial or submarine cable connections between regions. There must also be adequate broadband connectivity between the Points-of-Presence (PoPs) and the institutions where the Tier-N (N=1, 2, 3) data centers are located. In many regions, it is the local networks, city or state networks that are the problem. In many cases, the link coming into a country has a much greater bandwidth than links within or between cities in the country.

f. Dealing with (many) Long Range (wide area) connection problems, between long-distance regional network segments that interconnect the HEP centers and universities. These segments often serve to interconnect aggregation points of regional networks and to move large volumes of traffic between these points. Network congestion and bottlenecks can easily occur between segments if the network architecture cannot accommodate network traffic-volume variations and large traffic bursts.

Another problem is “peering” among different networks, where the routing policies interconnecting two or more networks, managed by separate organizations, might be poorly configured, because the optimal end-to-end path was not considered.

Scalable network architectures and pervasive network monitoring are essential to be able to prevent network congestion and bottlenecks in these cases.

IV. Recommendations

The world community will only reap the benefits of global collaborations in research and education, and of the development of advanced network and Grid systems, if we work to close the Digital Divide that separates the economically and technologically most-favored from the less-favored regions of the world. We recommend that some of the following steps be taken to help to close the divide:

• Identify and work on specific problems, country by country and region by region, to enable groups in all regions to be full partners in the process of search and discovery in science. We have seen that networks with adequate bandwidth tend to be too costly or otherwise hard to obtain in the economically poorest regions. Particular attention to China, India, Pakistan, Southeast Asia, Southeast Europe, Russia, South America and Africa is required.

Performance on existing national, metropolitan and local network infrastructures also may be limited, due to last mile problems, political problems, or a lack of coordination among different organizations[11].

• Create and encourage inter-regional programs to solve specific regional problems. Leading examples include the Virtual Silk Highway project () led by DESY, the support for links in Asia by the KEK High Energy Accelerator Organization in Japan (), and the support of network connections for research and education in South America by the AMPATH “Pathway to the Americas” ( ) at Florida International University.

• Make direct contacts, and help educate government officials on the needs and benefits to society of the development and deployment of advanced infrastructure and applications: for research, education, industry, commerce, and society as a whole.

• Share and systematize information on the Digital Divide. The SCIC is gathering information on these problems and developing a Web Site on the Digital Divide problems of research groups, universities and laboratories throughout its worldwide community. This will be coupled to general information on link bandwidths, quality, utilization and pricing. This will promote understanding the nature of the problems: from lack of backbone bandwidth, to last mile connectivity problems, to policy and pricing issues.

Specific aspects of information sharing that will help develop a general approach to solving the problem globally include:

o Sharing examples of how the Divide can be bridged, or has been bridged successfully in a city, country or region. One class of solutions is the installation of short-distance optical fibers leased or owned by a university or laboratory, to reach the “point of presence” of a network provider. Another is the activation of existing national or metropolitan fiber-optic infrastructures (typically owned by electric or gas utilities, or railroads) that have remained unused. A third class is the resolution of technical problems involving antiquated network equipment, or equipment-configuration, or network software settings, etc.

o Making comparative pricing information available. Since international network prices are falling rapidly along the major Transatlantic and Transpacific routes, sharing this information should help us set lower pricing targets in the economically poorer regions, by pressuring multinational network vendors to lower their prices in the region to bring them in line with their prices in larger markets.

o Identifying common themes in the nature of the problem, whether technical, political or financial, and the corresponding methods of solution.

• Create a “new culture of collaboration” in the major experiments and at the HENP laboratories. (This is discussed further in the main SCIC report to ICFA).

• Use (lightweight; non-disruptive) network monitoring to identify and track problems, and keep the research community (and the world community) informed on the evolving state of the Digital Divide. One leading example in the HEP community is the Internet End-to-end Performance Monitoring (IEPM) initiative () at SLAC.

It is vital that support for the IEPM activity in particular, which covers 79 countries with 80% of the world's population, be continued and strengthened, so that we can monitor and track progress in network performance in more countries and more sites within countries, around the globe. This is as important for the general mission of the SCIC in our community as it is for our work on the Digital Divide.

• Work with the Internet Educational Equal Access Foundation (IEEAF) (), and other organizations that aim to arrange for favorable network prices or outright bandwidth donations[12], where possible.

• Prepare for and take part in the World Summit of the Information Society (WSIS; ). The WSIS will take place in Geneva in December 2003 and Tunis in 2005. HENP has been involved in preparatory and regional meetings in Bucharest in November 2002 and in Tokyo in January 2003. The WSIS process aims to develop a society where

“highly-developed… networks, equitable and ubiquitous access to information, appropriate content in accessible formats and effective communication can help people achieve their potential…”.

These aims are clearly synergistic with the aims of our field: for worldwide collaboration, for effective sharing of the data analysis as well as the operation of experiments, and for the construction and operation of future accelerators as “global facilities”.

HENP has been recognized as having relevant experience in effective methods of initiating and promoting international collaboration, and harnessing or developing new technologies and applications to achieve these aims. It has been invited to run a session on The Role of New Technologies in the Development of an Information Society[13], and has been invited[14] to take part in the planning process for the Summit itself. CERN is planning a scientific event shortly before the Geneva Summit.

This recommendation therefore concerns a call for continuing work, and involvement by additional ICFA members in these activities to help prepare a statement to be presented at the WSIS, and other actions that will assist the WSIS process.

• Formulate or encourage bi-lateral proposals[15], through appropriate funding agency programs. Examples of programs are the US National Science Foundation’s ITR and International programs, the European Union’s Sixth Framework and @LIS programs, NATO’s Science for Peace program, and the FASTnet project linking Moscow to StarLight, funded by the NSF and the Russian Ministry of Industry, Science and Technologies (MoIST).

• Help start and support workshops on networks, Grids, and the associated advanced applications. These workshops could be associated with helping to solve Digital Divide problems in a particular country or region, where the workshop will be hosted. One outcome of such a workshop is to leave behind a better network, and/or better conditions for the acquisition, development and deployment of networks.

The SCIC is planning the first such workshop in Rio de Janeiro in February 2004.

An organizational meeting will be held in July 2003. ICFA members should participate in these meetings and in this process.

• Help form regional support and training groups for network and Grid system development, operations, monitoring and troubleshooting[16].

V. Remaining Actions: Path to a Solution - A Worldwide CyberInfrastructure for Physics

“The US Broadband Problem” by Charles H. Ferguson[17] describes the dynamics of the economy behind fiber optic terrestrial and submarine circuits. This report, which provides a preliminary survey of the present broadband connectivity status, points to a clear direction: to develop a project proposal with the unifying theme of an integrated worldwide Cyberinfrastructure[18], in which all HEP research institutes are connected at standard Gigabit/sec (Gbps) or higher end-to-end speeds, based on fiber-optic links supporting multiple wavelengths[19]. Without this assertive and cohesive direction, current and near-future HEP experiments will be severely limited in their ability to achieve their goals of global participation in the data analysis and physics; nor will these experiments be able to make effective use of emergent Data Grid technologies in support of these goals.

Appendix A – The Americas, with a Focus on South and Central America, and Caribbean NRN Information

Introduction

The appendix describes the research and education networks and infrastructure in the Americas: North America, South and Central America, the Caribbean and Mexico.

The Americas – South and Central America, the Caribbean and Mexico

Countries of South and Central America, the Caribbean and Mexico have a large Digital Divide problem. If this problem is not fixed, the small number of countries participating in the LHC experiments will have difficulty contributing to the analysis, and will not be able to take part in physics meetings, online shifts, and so on. These countries will be condemned to a very secondary role in these experiments. We face a sad contradiction: as digital technologies advance, the broadband problem[20] is becoming a major bottleneck for the world economy. The “broadband bottleneck”, exacerbating the Digital Divide problem, will increase the cultural distances between the economically developed countries and the countries that have people scientifically prepared to contribute to the development of science, but who cannot do so as a result of unequal access to Next Generation Infrastructure. From the AMPATH Valdivia workshop Report[21], we can see maps (Figure 3 below) and comments about the region, which show the present status of the network bandwidth distribution in the countries of South America.

[pic]

Figure 3: Non-Homogeneous bandwidth distribution in South America

The challenges associated with the AMPATH service area[22] are its immense size, the total number of secondary schools, colleges and universities, and the unknown number of scientific activities that are going on. Some of the individual countries are well advanced in their networking evolution, e.g. Chile and Brazil, while others are not so well connected. It is the opinion of the Valdivia Group that most of the countries in the service area have problems with their infrastructure. The figure shows the countries that have adequate networking infrastructure, those that are evolving to an adequate infrastructure, and those countries that are not well connected. This issue is what is referred to in the US as the “last mile” problem, where the last mile of copper wire into the home was considered the difficulty. The last mile, and in many cases the “last thousand miles” in many Latin American countries, is a challenge that needs to be addressed. The in-country infrastructure of some Latin American countries is severely underdeveloped.

Research Networks in Latin America

Table 1 summarizes the characteristics of the research and education networks in 17 Latin American countries[23]. There are many low-speed last-mile circuits in the Kbps range, and numerous backbone circuits in the single digit Mbps range – clearly unable to support HENP Grid requirements. There is little to no published information about the university network infrastructures. Nevertheless, one can infer that the on-campus network infrastructures, where HENP researchers are located, are also inadequate.

|Country |Organization |

|Submarine Cable System |Total Capacity (Gbps) |
|Americas 1 |0.560 |
|Americas II |2.5 |
|Global Crossing’s South American Crossing |1,280 |
|Columbus II |0.560 |
|Columbus III |2.5 |
|Telefonica’s Emergia |1,920 |
|ARCOS |960 |
|Maya-1 |60 |
|360 Americas |10 |

Table 2: Submarine Fiber-Optic Cables to the CALA Region with Total Bandwidth Capacity

The total aggregate bandwidth capacity into the Latin American region is estimated at 4,236 Gbps. The total bandwidth capacity for the research and education community to access global RENs via AMPATH is 225 Mbps. Approximately 71.2 Mbps is being utilized[25].
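Taking the Table 2 capacities in Gbps, the 4,236 figure is simply their sum, as the short check below illustrates:

    # The aggregate capacity quoted above is the sum of the Table 2 entries (Gbps).
    cables_gbps = {
        "Americas 1": 0.560, "Americas II": 2.5,
        "South American Crossing": 1280, "Columbus II": 0.560,
        "Columbus III": 2.5, "Emergia": 1920,
        "ARCOS": 960, "Maya-1": 60, "360 Americas": 10,
    }
    print(f"Total submarine capacity into the region: {sum(cables_gbps.values()):,.1f} Gbps")
    # prints 4,236.1 Gbps, matching the aggregate figure quoted above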

Other Research Networks in the Latin American Region

The AMPATH Valdivia Group Report[26] is a very important source of information about research networks in the Latin American region. AMPATH is certainly a very successful project to foster collaborative efforts in the Latin American region, where the Digital Divide is a serious problem. Since we concentrate our focus on HEP institutions, our summary below covers only countries having programs in HEP or potential collaborators. We also exclude information about other global e-Science instrumentation and programs, agreements and networks, such as the Gemini and ALMA observatories in Chile, NASA/INPE[27]-Brazil, weather, and astrobiology, to name but a few. Comments below derive in part from those found in the Valdivia Group Report, chaired by Robert Bradford of NASA and written by a group of US research scientists and the AMPATH staff. The Valdivia Group Report is the result of NSF Award #ANI-0220176 sponsoring the first AMPATH International conference in Valdivia, Chile in April, 2002.

Mexico - Corporacion Universitaria para el Desarrollo del Internet (CUDI)

Internet2 in Mexico. URL:

HEP groups in Mexico collaborate with FERMILAB and LHC experiments. ALICE has a Mexican group of physicists with a strong collaboration with Italy. Other Mexican groups plan to present a proposal to collaborate with CMS.

CUDI is comprised of nearly 50 member institutions.

CONACYT – the council for research and financial support in Mexico – intends to join CUDI. The network is provided by Telmex.

Network services include: Quality of Service (QoS), Multicast, IPv6, H.323, Routing, Topology, Security and a network operations center (NOC).

Advanced applications being run over the CUDI network include:

Distance education, digital libraries, telemedicine, astronomy, earth sciences, visualization, and robotics. The physics community in Mexico is preparing its participation in the LHC Grid effort. During the XX Mexican School of Particles and Fields (Playa del Carmen, Mexico, October 2002), a number of representative physicists from Latin America discussed the possibility of upgrading the existing links and their future collaboration with CERN and FERMILAB. A number of representatives were informally designated as contact persons for physics and networking purposes.

Colombia

ICFES – High Quality Research and Education in Virtual Environments. 119 universities have Internet access. There is a high energy physics group in Colombia collaborating with Fermilab. We do not have information about their activities in Grid computing or any other project, and will seek further information from them.

URL:

Puerto Rico

We have no relevant information for Networks and/or Computing. They have a proposal for a High Performance Computing facility and the creation of a Virtual Machine Room for Puerto Rican CISE education and research.

URL:

Venezuela

REACCIUN – the Academic Network of Research Centers and National Universities – is the network connected to the Centro Nacional de Tecnologias de Informacion (CNTI), created in March 2000. CNTI connects 316 institutions. Physicists and the CNTI are making efforts to upgrade their network and to create a HEP group that will, eventually, collaborate with the LHC. (Venezuela has responded to our questionnaire.)

URL:

Argentina

RETINA – REd TeleINformática Academica – is a project launched by Asociacion Civil Ciencia Hoy in 1990. Argentina has HEP groups in experiment and phenomenology, and has great potential to become a strong collaborator in HEP given its tradition in this area. The past and present economic situation, however, has stalled the development of the high energy physics groups.

URL: We believe that the physicists in Argentina will have strong participation in HEP. There is a major effort to build the Auger Observatory, an international high energy cosmic rays project.

Brazil

RNP – Brazilian National Research Network rnp.br

Networks in Brazil started a long time ago with Bitnet. They originated in Rio and São Paulo, then were slowly implemented in all the 27 states of the country via a national project supported by CNPq, and later by the MCT. RNP is the organization that was created to stimulate the creation of networks in the whole country. REDE RIO was created in Rio de Janeiro as a powerful organization, a mixed institutional backbone sponsored by TELEMAR, the telecommunications company of Rio de Janeiro. TELEMAR is part of a 155 Mbps ring around the city of Rio de Janeiro. A considerable part of the national budget for Science and Technology in Brazil is dedicated to network upgrades. There are many supercomputing and networking projects in São Paulo and Rio de Janeiro. RNP is the center of all activities involving Brazilian networks.

The ideas of Grids and advanced networks first appeared in the High Energy Physics community. Until recently there was no indication as to whether RNP or REDE RIO would truly support the needs of HEP in Brazil. In 2002, however, there was significant progress. UERJ in Rio, working together with AMPATH and Caltech, has developed a plan with RNP and Global Crossing (GX) to connect to RNP with a dark fiber at Global Crossing’s nearby Point of Presence. A plan was developed in late 2002 and early 2003 for an STM4 (622 Mbps) link between Rio and Miami, supported by RNP with contributions from the US National Science Foundation, and this link is expected to start in 2003. According to this plan, UERJ can expect to use approximately half of this international link in support of its Tier2 center and Grid developments. We also started discussions with the groups responsible for networks in São Paulo, along with RNP and GX, to share an STM16 (2.5 Gbps) link in 2004, if that turns out to be affordable.

We now show Brazil's best research and education network and its inhomogeneous bandwidth distribution within the country (Figure 5). This is the topology of the Brazilian National Network for Research and Education. It shows a widespread presence of the Digital Divide problem. Bringing the network to all regions of the country is a stimulus to research work, to integration within the country, and to Brazil's partnership with the rest of the world. A better homogeneity of bandwidth distribution has to be pursued, in order to open opportunities to the many research centers and universities situated far from the more developed southern region of the country.

[pic]

Figure 5: RNP2, Brazil's National Research and Education Network

The following map shows the Digital Divide in the city of Rio de Janeiro. The same problem is found in the city of São Paulo, and the other Brazilian states are in a worse situation with respect to their links. Examining only one of the important states of Brazil, Rio de Janeiro, we see a backbone of 155 Mbps with financial support from FAPERJ – the Financial Support Agency for Research of the State of Rio de Janeiro. UERJ, the biggest university supported by the government of the State of Rio de Janeiro, is not connected to this backbone.

[pic]

Figure 6: Rede Rio Network in Brazil, Rio de Janeiro

As the map above shows, there is a very clear Digital Divide problem: the differences in last-mile connection speeds are too great, causing a significant imbalance. This has halted HENP research programs at many institutions. UERJ has a strong group whose work is completely blocked by this situation. No group in the state has a greater demand for bandwidth than the UERJ HEP group, which collaborates on DZero and CMS and has substantial computing and videoconferencing needs. The projects of the Brazilian high energy physics collaborators include participation in the LHC/CERN Grid projects.

HEPGRID

This part of the report does not come from the Valdivia Report; it comes from Brazilian information about Grid projects involving networking.

The HEPGRID project was first presented in Rio in 1998 as an initiative of physicists from seven institutions, growing out of the earlier CLIENT/SERVER project, a farm of PCs used to produce Monte Carlo events for DZero. This farm operated for 3 years and was very useful for DZero students, who could remotely (through the web) submit and control their jobs, choosing the number of nodes according to their needs.

The present project proposes the establishment of 5 Tier-3 centers distributed among the following institutions: CBPF, UFRJ, USP, UFRGS and UERJ. The Tier-3 center to be set up at UERJ is designed to evolve into a Tier-1 before 2005/6. The current problem for UERJ is the last mile connection. The Institute of Physics now requires an upgrade of the internal network from 10/100 Mbps to Gigabit Ethernet. It is clear that the whole network system will have to evolve to rates of 2.5 to 10 Gbps to meet future needs.

One project has already been approved by FINEP – the Financial Support Agency for Research and Projects – and is awaiting the release of funds. Another project has been approved by FAPERJ – the Financial Support Agency of the State of Rio de Janeiro. HEPGRID has submitted two other projects to MCT/CNPq. The main purpose of the HEPGRID project is to create an opportunity for Brazilian groups to continue to work on the next generation of LHC experiments.

Coordinator: A. Santoro - santoro@uerj.br

Chile

REUNA2 – Connection from Chile to Internet2 via AMPATH

The REUNA National Backbone will be partially upgraded to 2.5 Gbps in 2003 and fully upgraded to 2.5 Gbps in 2004. As far as we know, there are no high energy physicists involved in LHC experiments in the country.

Other Latin American countries

During the X Mexican School of Particles and Fields in November 2002 there were discussions of problems related to networking in Latin America. Many Latin American physicists were not informed about the AMPATH initiative and its effort to upgrade the networks in Latin America. A group of physicists is organizing Master’s courses in physics in Central America and is discussing the possibility of creating a HEP group in the region. This initiative includes Puerto Rico, Guatemala, Honduras, San Salvador and Nicaragua.

Links to Regional High Energy Physics Centers and Networking Information

South and Central America

• Universidad de Panamá (in Panama City)

• Universidad de Costa Rica (in San Jose, Costa Rica)

• Universidad Nacional de Costa Rica (in San Jose, Costa Rica)

• Universidad Autónoma de Nicaragua, Managua (in Managua, Nicaragua)

• Universidad Nacional Autónoma de Honduras (in Tegucigalpa)

• Universidad de El Salvador (in San Salvador, El Salvador)

• Universidad del Valle de Guatemala (in Guatemala City)

• Universidad de San Carlos de Guatemala (in Guatemala City)

The Americas – North America

For North America, this appendix describes the major research and education networks in the United States and Canada. The research networks in North America typically follow a hierarchical network architecture: a national backbone, regional aggregation at core nodes, sub-regional aggregation typically at the State level, and local metropolitan aggregation from universities to the local PoP (Last Mile). In North America most, if not all, universities are connected at OC3c (155 Mbps) speeds to a regional aggregation point. The present trend is that universities are acquiring dark fiber for their last-mile connection.

United States

The United States has well-established research and education networks. Several research networks in the US are funded and operated by federal agencies (DREN, ESnet, NISN, NREN, the vBNS) or by the academic community (Abilene). These networks provide an excellent platform for collaboration and the development of advanced Internet applications that utilize protocols that are not available on the commercial Internet, such as QoS, IPv6 and multicast, and they also provide testbeds for testing new protocols. This section highlights two of the most successful of these networks: UCAID’s Abilene[28] network and the Department of Energy’s research network, ESnet[29].

[pic]

Figure 7: The Abilene Network

[pic]

Figure 8: ESnet Backbone

Canada

Canada’s CAnet network is an advanced optical platform for research and education networks, inter-connecting universities, schools and research organizations. Canada was one of the early adopters of dark fiber and a promoter of the concept of customer-empowered networks. In Canada, there is a growing trend for many universities, schools, and large businesses/institutions to acquire their own dark fiber as part of a condominium or municipal fiber build. As a result, the research institutions in Canada have access to a network infrastructure that can support very high-speed, wide-bandwidth connectivity in the country. Canada is very well connected to the US through Washington State, Chicago at StarLight, and New York City.

[pic]

Figure 9: Canada's CAnet4

Appendix B – Questionnaire / Responses

In order to get new information from institutions around the globe participating in HEP experiments, and to elicit their comments and suggestions for eventual solutions to network-related problems, we designed a minimal questionnaire that was distributed to the leaders of the HENP laboratories and the major experiments, as well as to individual physics groups in more than 35 countries.

The questionnaire was:

1. Your Name, Institution, Country, E-Mail. What are your areas of responsibility related to networks, computing or data analysis ?

General Questions on Your Network Situation, Plans, Problems and Solutions

2. Where are the bandwidth bottlenecks in the path between your university or laboratory and the laboratory (or laboratories) where your experiments are sited (please explain):

• In the local area network at my institution

• In the “last mile” or local loop

• The city or state or regional network (specify which)

• The national network

• International connections

• Is your LAN connected to the WAN by use of a firewall? If yes, indicate the possible maximum throughput.

• Other

3. For each of the above elements in the network path, please send information about the present situation, including the bandwidth, sponsoring organization (is it academic or commercial or both), who pays for the network, etc. If you are using a shared network infrastructure, mention how much is used, or allowed to be used by HEP, and whether it is sufficient.

4. Describe the upgrade plans in terms of bandwidth, technology and schedule, where known, for the

• Local network (LAN) infrastructure

• Regional networks

• National backbones

• Interconnection between the LAN and the outside.

5. Describe your network needs as they relate to the current and/or planned physics programs in which you are involved. Compare the needs to the current and planned networks. Describe any particular needs, apart from sufficient bandwidth, that you consider relevant.

6. Please describe any particular network-related problems you consider most relevant.

[Examples: the city/regional/national/international network hierarchy; high prices; restrictive policies; lack of network service; technical problems, etc.]

7. Please present your ideas for solving or improving any of the specific problems you face. Where possible, describe how these approaches to solutions are applied in other cases.

8. Which organizations or committees (other than the ICFA SCIC) are working on characterizing and solving the network problems that affect you? Do you work on any of these committees?

Some Specific Questions About Your Current Computing and Networking Situation

1. Describe the Computing Facility you currently have in your Institution. (If it is a shared facility, specify the resources dedicated to HEP.)

a. Do you have a Mainframe?

b. Clusters? (How many and what type of CPUs? Storage? Interconnection Technology?)

c. How are the CPUs connected to the local area network?

2. Describe your local area network. Specify which technologies (e.g. 10 Mbps Ethernet, 100 Mbps, Gigabit Ethernet, ATM, Gigabit trunking), the configuration (shared or switched), and physical media used (optical fiber, copper links, radio links, other).

3. How is your local network connected to the wide area network (WAN)?

Does HEP have any special links to the WAN that are separate from the general gateway to the WAN at your institution?

4. How many hops (number of internet routers) are there between the local and remote machines, over the network paths that are most important to your physics programs?

(Provide one or more traceroutes and point to any problem hops if known).

5. Does HEP have to pay for its connection to the WAN, or is it covered in the general expenditures of your university or laboratory? If there is a specific payment, is it for unlimited use or for a specified bandwidth?

The response to these questions by each country/institute/existing local network will serve as a basis for our further discussions, and will help us greatly in finding the best practical alternatives.

A summary of the responses received follows. We expect to receive more responses, in order to increase our database for this subject.

Table 3: Respondents

|Name |Institution |e-mail |Notes |
|Maria Teresa Dova |Univ. Nacional de La Plata, Argentina |dova@fisica.unlp.edu.ar |HEP physicist |
|Stefaan Tavernier AND Rosette Vandenbroucke |IIHE (VUB-ULB), Belgium |Tavernier@hep.iihe.ac.be AND vandenbroucke@helios.iihe.ac.be |HEP Physicist AND Technical Contact |
|Ghislain Gregoire AND Alain Ninane |University of Louvain, Belgium |Ghislain.Gregoire@cern.ch AND Alain.Ninane@slashdev.be |Contact Person AND Technical Contact |
|Philippe Herquet AND Joseph Hanton |University of Mons-Hainaut, Service PPE, Belgium |Philippe.Herquet@umh.ac.be AND Joseph.Hanton@umh.ac.be |Professor of Physics AND Technical Contact |
|Alexandre Sztajnberg |UERJ - Rio de Janeiro State University, Brazil |alexszt@uerj.br |Network support staff |
|Eduardo Gregores |IFT-UNESP, Brazil |gregores@ift.unesp.br |HEP Physicist |
|Panos Razis |University of Cyprus, Cyprus |razis@ucy.ac.cy |HEP Physicist |
|Thomas Muller AND Guenter Quast |Institut fuer Experimentelle Kernphysik, University of Karlsruhe, Germany |mullerth@ekp.physik.uni-karlsruhe.de AND Gunter.Quast@cern.ch |Group Leader - HEP AND Contact Person |
|P. V. Deshpande |Tata Institute, Bombay, India |pvd@tifr.res.in |Network Administrator |
|Lorne Levinson |Weizmann Institute of Science, Israel |Lorne.Levinson@weizmann.ac.il |HEP responsible |
|Paolo Capiluppi AND Paolo Mazzanti |Dept. of Physics and INFN, Bologna, Italy |Paolo.Capiluppi@bo.infn.it AND Paolo.Mazzanti@bo.infn.it |HEP Physicist AND Responsible for Computing |
|Pierluigi Paolucci |INFN Napoli, Italy |pierluigi.paolucci@na.infn.it |HEP Physicist |
|Heriberto Castilla Valdez |Cinvestav-IPN, Mexico |castilla@ |HEP physicist |
|Hafeez Hoorani |QAU - National Centre for Physics at Quaid-i-Azam University, Islamabad, Pakistan |Hafeez.Hoorani@cern.ch |Professor of Physics; group working for CMS |
|Serge D. Belov |Budker Institute of Nuclear Physics, Novosibirsk, Russia |belov@inp.nsk.su |HEP Physicist |
|Vadim Petukhov |Mathematics and Computing Division, IHEP-Protvino, Moscow Region, Russia |Petukhov@mx.ihep.su |Head of Math. and Computing Division |
|Victor Kolosov |ITEP, Moscow, Russia |Victor.Kolosov@itep.ru |HEP Physicist |
|Victor Abramovsky |Novgorod State University, Russia |ava@novsu.ac.ru |Area of responsibility is data analysis |
|Yuri Ryabov |Technology Division, St. Petersburg Nuclear Physics Institute, Russian Academy of Science (PNPI RAS), Russia |ryabov@pnpi.nw.ru AND Yuri.Ryabov@cern.ch |Chief of Information; network, computing and connectivity responsibility |
|Anton Jusko |Institute of Experimental Physics, Slovak Academy of Sciences, Kosice, Slovakia |jusko@saske.sk |Head of User Support Group for Computing |
|Andrej Filipcic |Jozef Stefan Institute, Slovenia |andrej.filipcic@ijs.si |Network administrator |
|Teresa Rodrigo AND Jesus Marco, Angel Camacho |Instituto de Física de Cantabria, University of Cantabria, Spain |Rodrigo@ifca.unican.es |HEP Physicist AND Contact Persons |
|Christoph Grab |Inst. of Particle Phys., ETH-Zurich, Switzerland |grab@phys.ethz.ch |On behalf of IPP ETHZ in CMS |
|Gulsen Onengut |University of Cukurova, Adana, Turkey |onengut@cu.edu.tr | |
|Isa Dumanoglu |University of Cukurova, Adana, Turkey |dumanogl@cu.edu.tr | |
|Ramazan Sever |Middle East Technical University, Turkey |sever@metu.edu.tr AND Ramazan.Sever@cern.ch | |
|Lap Leung |University of California, Santa Barbara, CA, USA |Lap@hep.ucsb.edu |Computer System Manager |
|James Branson AND Ian Fisk |University of California, San Diego (UCSD), USA |branson@ucsd.edu AND fisk@ucsd.edu |HEP Physicist AND Technical Contact |
|Mark Adams AND Michael Klawitter |Physics Department, University of Illinois at Chicago, USA |adams@uic.edu AND engineer@uic.edu |HEP Physicist AND Technical Contact |
|Petar Adzic AND Predrag Milenovic AND Dragoslav Jovanović |VINCA Institute of Nuclear Sciences, Belgrade, AND Faculty of Physics, Belgrade, Yugoslavia |Petar.Adzic@cern.ch AND Predrag.milenovic@cern.ch AND dragan@ff.bg.ac.yu |HEP Yu. Coordinator AND Responsible for HEP Networks (Lab 010 and CERN) on behalf of the CMS Belgrade Group AND Faculty of Physics LAN Administrator |
|Luis A. Núñez |Centro Nacional de Cálculo Científico, Universidad de Los Andes (CeCalCULA), Venezuela |nunez@ula.ve |High Performance & Scientific Computing |
|Anamaria Font |Departamento de Fisica, Facultad de Ciencias, Universidad Central de Venezuela (UCV), Venezuela |afont@fisica.ciens.ucv.ve |Particle Theoretician |

Table 4: Connection Status

|Institution |Bottleneck |Notes |

| |Internal (1) |Regional / National (2) |International (3) | |

|UNLP, |10 Mbps, shared, copper |Copper link to UNLP Computing |Private carriers have |UNLP pays for its link |

|Argentina |links |center, then optical link to |considerable resources| |

| | |Buenos Aires |for those who can pay | |

| | | |for it | |

|HHE(VUB-ULB), |The LAN will be |- |- |There are no bottlenecks in the network |

| |connected to the WAN via| | |connections. |

|Belgium |a firewall with a max. | | | |

| |throughput of 100 Mbps | | | |

|University of |There is a firewall at |- |- | |

|Louvain – |the entrance of the lab | | | |

|Belgium |which could do filtering| | | |

| |at 100 Mbit/s (at | | | |

| |least). | | | |

|University of |100Mbit, University |National provider : 2.5 Gbit ,|- |NO firewall |

|Mons-Hainaut – |internal to router : 1 | | | |

|Service PPE |Gbit , University router| | | |

|Belgium |to international | | | |

|UERJ, Brazil |10/100 Mbps (mostly 10) |10 Mbps ATM connection |45 Mbps via IMPSAT |RedeRIO, our regional network is supported by |

| | | | |the State Government |

|IFT-UNESP | LAN to WAN 10 Mbps and|4 Mbps to Regional Backbone |- |- |

|Brazil |100 Mbps | | | |

|University of |10 Mbps connection |Daily traffic |Slow traffic by GEANT |The Main Bottleneck is Internal and/or Last |

|Cyprus - |between the Comp. Center| |sometimes |Mile Connection. |

|Cyprus |and building | | | |

| | | | | |

|TATA Institute – |100 Mbps |2 Mbps link to Regional |155 Mbps | |

|India | |Backbone | | |

|Weizmann Inst. of|No information |35 Mbps |155 Mbps/GEANT |Firewall without limitation for Bandwidths |

|Sci | | | | |

|Israel | | | | |

| Dept. of Physics|The current bottleneck |- |- |Our LANs are connected via a firewall, indeed |

|and INFN, Bologna|is the “local loop” at | | |only a packet filter on the routers and |

|Italy |34 Mbps. | | |therefore the maximum throughput is at 34 |

| | | | |Mbps. |

Table 4: Connection Status- (Continuation)

|Institution |Bottleneck |Notes |

| |Internal (1) |Regional / National (2) |International (3) | |

|INFN Napoli |1Gbps LAN with 100 |INFN – Napoli to GARR link at |GARR to CERN link, at 1|Sponsored by INFN-Napoli and University of |

|Italy |Mbps connection to |64Mbps is the slowest line |Gbps |Naples – Physics Dept. |

| |local hosts. |along the path to CERN |Sponsored under | |

| | |GARR, at 155 Mbps |international |Sponsored by MIUR (Ministry for Research and |

| | | |agreements |University) |

|Cinvestav, |100 Mbps ethernet |2 x 2 Mbps, shared |Not mentioned |Cinvestav pays for the link |

|Mexico |switches | | | |

|QAU, | |128 kbps or dialup | |Actual connection and future upgrades |

|Pakistan | | | |supported by Ministry of Science and |

| | | | |Technology-100 users Internal Network. |

|Budker Inst. of | |bandwidth of all Siberian networks is |BINP->KEK 0.5 Mbps |Paid by KEK, SLAC and BINP. Upgrade cost |

|Nuc. Phys. – |No Information |estimated as 10-12 Mbps, |Currently the most |prohibitive. RBNet - the working |

|Novosibirsk | |including all kinds of |probable bottleneck |Infrastructure for Scientific and Educational|

|Russia | |access to European and US | |institutions. It is not clear what bandwidth |

| | |networks. | |could be made available for HEP activities |

|ITEP – Moscow – |No Bottleneck |No Bottleneck |No Bottleneck | |

|Russia |LAN-100-1000 Mbps |100-1000 Mbps Regional |155 Mbps to US & Europe| |

| | |2-1000 Mbps National | | |

|Mathematics and |Local: 100 Mbps LAN |Poor connection between |Low throughput is |Fast Ethernet |

|Computing Division |to WAN: 6 Mbps by a |Protvino and Moscow |devoted for Int. | |

|–IHE-Protvino – |CISCO7500 router | |connections for the | |

|Moscow Region |Possibility to go to| |scientific and Federal | |

|Russia |1 Gbps | |Organizations | |

|Novgorod State |Here is the main | And outside of the University-| |The LAN is connected to the WAN using a |

|University, |bottleneck. |Max of 30 Kbps | |firewall with a max of 2 Mbps. |

|Russia | | | | |

|Tech. Div. S. |10-100 Mbps Ethernet|256 Kbps |No clear information |Collaborate with ATLAS,CMS, LHCb and ALICE |

|Petersburg Nuc. | | | | |

|Phys. Inst. Russian| | | | |

|Academy of Science | | | | |

|(PNPI RAS) | | | | |

|Russia | | | | |

Table 4: Connection Status- (Continuation)

|Institution |Bottleneck |Notes |

| |Internal (1) |Regional / National (2) |International (3) | |

|Institute of |HEP community subnet|The national network (SANET2) |International |Possible bottlenecks between my institution |

|Experimental |is connected to this|is 1Gb/s. |connections (see |and CERN.  Our LAN connection is now |

|Physics, |LAN through firewall| |sanet.sk): |10Mbit/sec, upgrade is planned this year to |

|Slovak Academy of | | |GEANT 100Mb/s |100Mbit/s. (see for current traffic statistics|

|Sciences, |(2x100Mb ethernet | |GTS   100Mb/s |

|Kosice |cards). | |ACOnet 1Gb/s |.tuke.sk/tu-kosice-gw.tuke.sk_3.html ) |

|Slovakia | | | | |

|JSI, |Fast ethernet, |1 Gbps, switched, shared w/ |622 Mbps, shared, not | |

|Slovenia |switched, high load,|~1000 PCs |limited but not | |

| |no QoS | |sufficient | |

|Inst. de Fís.de |- |- |- |Bandwidth: 155 Mbps (expected in few months) |

|Cantabria –Univ. of| | | |– free access, not defined for HEP |

|Cantabria | | | |-Sponsor: RedIris(academic) |

|Spain | | | | |

|Inst. Of Particle |100 Mbit/s within |~ 1 Gbit/s for connection to |We have connections to |Part of our physicists are located within |

|Phys. –ETH-Zurich |buildings |the regional network , i.e. to |GEANT/Internet2 from |CERN, so they suffer only from the conditions|

|Switzerland |1 Gbit/s for the |SWITCH |the CERN / Geneva |at CERN itself. |

| |local loop (within |The national network is |national peering CIXP |The real bottlenecks between the desktop in |

| |ETH) |operated by SWITCH (Swiss |point at 2.5 Gbit/s, |Zuerich and the Experiment at CERN is at |

| | |Education and research network:|and a further 622 |present the local network within CERN |

| | |see switch.ch/network), |Mbit/s to GBLX . see | |

| | |and provides now up to 10 | |

| | |Gbit/s connectivity between |blic/services/cixp/ | |

| | |Zuerich and Geneva/CERN. |In addition there is | |

| | | |direct peering from ZH | |

| | | |to the public Internet | |

| | | |exchange point ZH, | |

| | | |which provides about | |

| | | |200 Mbit/s. | |

|University of |Our main problem is | | |The Turkish Acad. Net. (UlakNet), provide |

|Cukurova, Adana |low bandwidth and | | |access to all Univ. and research |

| |there are no | | |organizations in Turkey. More than 120 |

|University of |bottlenecks till the| | |connections for a total bandwidth of 75 Mbps,|

|Cukurova, Adana |Turkish Academic | | |to Internet. This access uses ATM backbone |

|Turkey |Network (UlakNet). | | |installed between three (PoP) in Ankara, |

| | | | |Istanbul & İzmir. The bandwidth of UlakNet, |

| | | | |to Internet, is 10/4 Mbps. Will be increased |

| | | | |to over 34 Mbps in a short while |

Table 4: Connection Status- (Continuation)

|Institution |Bottleneck |Notes |

| |Internal (1) |Regional / National (2) |International (3) | |

|Middle East |Local: 10 Mbps |The “last mile” or local loop: |: International | No Firewall |

|Technical | |2 Mb |connections: 8 Gb? | |

|University | |The regional network Academic | | |

|Turkey | |network | | |

|University of | |No Info | No Info |No Info |

|California – Santa |No Info. | | | |

|Barbara CA USA | | | | |

|University of |From our local area |The network between our |A secondary bottleneck |Our primary bottleneck between UCSD and the |

|California, San |network at our |computing cluster and the wide |is |experiments we are |

|Diego (UCSD) |site. |area network is currently |working on, currently Babar and CMS |

|USA | |restricted to 100 Base-T, which|relatively high speed | |

| | |is slower than |national and | |

| | |most of the wide area links we |intercontinental | |

| | |use |WAN links. | |

|Physics Department |The bandwidth |Recently installed 100Base-T | |UIC has a very good connection to national |

|University of |limitation for HEP |Ethernet connections in our | |backbones. Available connectivity for HEP |

|Illinois at Chicago|is our local |offices and have access to a | |upgrades in the expanding STARLIGHT layout |

| |connection to the |Gigabit switch, that is | | |

|USA |backbone |optically connected to the | | |

| | |backbone. | | |

|Centro Nac. de |The main bottleneck |. Typically each institution |The Universidad de Los |The academic network is (I should say was) a |

|Cálculo Científico |is at the |has 1 to 4 Mbps, and the whole |Andes has 4 Mbps and |cooperative governmental organization paid by|

|Universidad de Los |institution network.|academic network has 15 to 20 |CeCalCULA has 1 Mbps. |each institution. Now, because the objectives|

|Andes (CeCalCULA) |Most of them are |Mbps of international link for | |of this organization have been changed, each |

| |poorly administered.|15 institutions. | |institution is looking for their |

|Departamento de |It can be checked | | | |

|Fisica, Facultad de|through | | | |

|Ciencias, | | | |

|Universidad Central|a.ve/estadisticas/co| | | |

|de Venezuela (UCV) |nexion/ | | | |

| |Bandwidth bottlenecks| | | |

|Venezuela |occur in the local | | |our LAN does not have a firewall connectivity|

| |loop, the national &| | |solution |

| |international | | | |

| |networks. | | | |

|Vinca – Inst. For |No |128Kbps Vinca LAN( Yu. Academic|2 x 2 Mbps |Internal Network is Gbps |

|Nuc, Sc. | |Network | | |

|Yugoslavia | | | | |

(1) Internal problems. Obsolete technology, router problems, networking equipment, intercampus connection

(2) Regional and National problems. Last / first mile problem. Last 1000 miles problem. Network /POP Hierarchy connection / bandwidth problems

(3) International connection / bandwidth

Table 5: Connection Details

|Institution |Providers |Firewall |

| |Regional / National (2) |International (3) | |

|UNLP, |Commercial optical fiber |Not mentioned |No firewall |

|Argentina | | | |

|HHE(VUB-ULB), |(1) LAN: own infrastructure of the |(1) Connection to Géant at 2.5 Gbps |Yes |

|Belgium |IIHE(VUB-ULB) |(international research) | |

| |(2) Connection to the University network:|(2) Connection to Teleglobe at 155 Mbps and| |

| |100 Mbps |to Internet2 at 155 Mbps (international | |

| |(3) Connection of the University network |research) | |

| |to the National Research Network: will go|(3) Connection to Opentransit at 622 Mbps | |

| |up to 2.5 Gbps (national) – connection |(international commercial) – use paid by | |

| |and use sponsored by government |university | |

|University of |1. Laboratory LAN 100 Mbit/s. |5. Link with GEANT - unknown speed - |Yes |

|Louvain – |2. Link to the university LAN at 10 |managed by GEANT | |

|Belgium |Mbit/s | | |

| |3. University LAN 100 Mbit/s between |For all these links, access is free (no | |

| |routers |costs) for the reseach community | |

| |4. Link to BELNET at 155 Mbit/s - managed| | |

| |by BELNET. | | |

|University of |- |- |No |

|Mons-Hainaut - | | | |

|Service PPE | | | |

|Belgium | | | |

|UERJ, |RedeRio |RedeRio to IMPSAT |CISCO ACLs. No BW limitations. |

|Brazil | | | |

|IFT-UNESP |4 Mbps via FAPESP |155 Mbps via TERREMARK |NAT and Firewall 10 Mbps maximum |

|Brazil | | | |

|University of |34 Mbps via CYNET |GEANT | |

|Cyprus – | | | |

|CYPRUS | | | |

|Institut fuer |- |Networking is funded by the university; |All our systems are safeguarded by |

|Experimentelle | |link to the FZK shared with other |firewalls (the GridKa centre, the |

|Kernphysik – | |Faculties, but supposed to be sufficient |university, our work station cluster |

|Karlsruhe – | | |in the institute). |

|Karlsruhe – | | | |

|Germany | | | |

|TATA, |2 Mbps via National Carrier VSNL | | |

|India | | | |

|Weizmann Institute |Israel Inter-University Computing Center |GEANT 155 Mbps |Yes but without limitation for |

|of Science – | | |bandwidths |

|Israel | | | |

Table 5: Connection details (Continuation)

|Institution |Providers |Firewall |

| |Regional / National (2) |International (3) | |

|Dept. of Physics and|• LANs are shared between INFN Section |• Regional, national and international |Yes |

|INFN, Bologna |and Physics Department, HEP can use all |connections are managed by GARR and INFN | |

|Italy |resources. There are areas on the LAN |pays for the sharing of bandwidth in | |

| |where the bandwidth is insufficient. |proportion of the local connections (34 | |

| |Costs are shared. |Mbps for Bologna INFN). Current national | |

| |• WAN connection is dedicated to HEP |backbone capacity is more than 2.5 Gbps, as| |

| |(The Physics Dept. has its own WAN |well as the international (EU and US) | |

| |connection) and is paid by INFN via the|connections | |

| |National Academic & Research Network | | |

| |GARR. | | |

|INFN Napoli |Local: No upgrade planned in the short |Don’t know |No |

|Italy |term Regional networks: No upgrade | | |

| |planned in the short term National | | |

| |backbones: Don’t know. | | |

|Cinvestav, |Not mentioned |Not mentioned |Not mentioned |

|Mexico | | | |

|QAU, | |ISP-Most via dialup 54 kbps $0.5/hour – | |

|Pakistan | |Need more bandwidth-Problems with Funds | |

|ITEP Moscow– |No Information |No Information |No information |

|Russia | | | |

|Math. & Comp. Div. |Fast Ethernet |Ministry of Science and Technology |No - |

|IHEP-Protvino Moscow |Radio-MSU NET owned by IHEP | | |

|Region | | | |

|Russia | | | |

|Novgorod State |All elements are paid for by the University | |Yes |

|University, | | | |

|Russia | | | |

|Budker Institute of |National network, which is provided by |Connection to National Network is 34M link |Yes |

|Nuclear Physics – |34M trunk operated by Transtelecom. |Novosibirsk-Moscow shared by numerous | |

|Russia |CANET (city network), SANET2 (national |organizations of Siberian Branch of Russian| |

| |network). I am in touch with CANET, but |Academy of Science, universities; Used | |

| |I don't work in any SANET2-committee. |60-75%; saturation is expected soon. | |

|Tech. Div. S. |- |- |External channel is covered by |

|Petersburg Nuc. | | |institution’s budget and RAS |

|Phys. Inst. Russian | | |foundation for telecommunication |

|Academy of Science | | |development |

|(PNPI RAS) | | | |

|Russia | | | |

Table 5: Connection details (Continuation)

|Institution |Providers |Firewall |

| |Regional / National (2) |International (3) | |

|Institute of |SANET2 is academic network, paid by |International connectivity seems to |YES |

|Experimental |government. For now, the HEP community |be sufficient (see | |

|Physics, Slovak |shares the network with academic and | -| |

| |Academy of Sciences,|governmental institutions, there is no |Our traffic is routed through the | |

|Kosice |special agreement about the bandwidth |GEANT connection). | |

|SLOVAKIA |for HEP community. | | |

|JSI, |Not mentioned |Not mentioned |Yes, 1Gbps max throughput |

|Slovenia | | | |

|Instituto de Física |In steps up to 2.5 Gbps . Schedule | - |- |

|de Cantabria |unknown | | |

|–University of | | | |

|Cantabria | | | |

|Spain | | | |

|University of |Turkish Academic Network (UlakNet): | (Regional) + London Level 3 |Yes |

|Cukurova, Adana |Local network has single mode fiber |(Satellite Connection) | |

| |optics connection through 10Mbps/100Mbps| | |

|University of |and 155Mbps ATMs. 2 Mbps ULAKNET | | |

|Cukurova, Adana, |(National) + 1Mbps Adana-AnkaraERE | | |

|Turkey | | | |

|Middle East |Local network (LAN) infrastructure: To |Interconnection between the LAN and |Yes |

|Technical University|100 Mb in the near future |the outside. To 140 Gb | |

|Turkey |Regional networks: To 4 Mb | | |

| |National backbones | | |

|University of |LAN not connected to WAN. We may upgrade |UCSB HEP has unlimited usage of the |NO |

|California – Santa |part of the LAN to gigabit soon |academic/research network funded by | |

|Barbara | |both state and federal government. | |

|CA | |We have sufficient bandwidth now. | |

|USA | | | |

|University of |The local area network is primarily Fast|For our connections to CMS and Babar |NO. We use two national networks for |

|California, San |Ethernet supported by the University of |we are routed through the |connections out of state, ESnet and |

|Diego (UCSD) |California. The "last mile" is Gigabit |called Calren2, which is primarily |Abilene, both supported by the NSF. ESnet |

|USA |Ethernet over fiber also supported by |called Calren2, which is primarily |is currently 155Mbps ATM and heavily used |

| |the UC. This is shared with the rest of |622 Mbps ATM. This network is shared|between Fermilab and UCSD. We have been |

| |the University but usage is usually less|between all the California |able to successfully achieve 100Mbps for |

| |than 30% and is sufficient for our |universities and is often very |long periods of time. The Abilene network|

| |current needs. Our LAN is not connected |congested. It is difficult even with|is shared by a large number of |

| |through a firewall and we have been able|network tuning and multiple streams |universities and is frequently extremely |

| |to achieve the full performance of the |to achieve better than 250 Mbps. |congested, even late at night 70% usage is|

| |available links. | |not uncommon. The Abilene network connects|

| | | |UCSD with Chicago and the trans-Atlantic |

| | | |link to CERN |

Table 5: Connection details (Continuation)

|Institution |Providers |Firewall |

| |Regional / National (2) |International (3) | |

|Physics Department |All networks are supported by UIC |Our connections are sufficient for |No Info |

|University of |(academic). HEP independently paid for |our current usage to FNAL. We have | |

|Illinois at Chicago |and installed cable to connect our PCs |local linux boxes and windows pcs at | |

|USA |to a local Gigabit switch via 100Base-T.|desks (total around 12). We do not | |

| | |have a large analysis cluster at UIC.| |

| | |We do not use the UIC mainframe. | |

|Centro Nac. de Cálc.|We are planning to have 1 - 2 Mbps link |Universidad de Los Andes has 4 Mbps |No |

|Científico |between Caracas and Merida. This link |and CeCalCULA has 1 Mbps. We are | |

|Universidad de Los |will be used to have IP telephony and |looking to have 8 Mbps next year | |

|Andes (CeCalCULA) |direct logging from this institute and | | |

| |our center. | | |

|Dep. de Fisica, Fac.|LAN: 100 Mbps (sponsored by UCV) | | |

|de Ciencias, Univ. |National Network: 512 + E1 (sponsored by| | |

|Central de |Reacciun which is part of the Ministry | | |

|Venezuela(UCV) |of Science and Technology) | | |

|Venezuela | | | |

|Vinca – Inst. of |Vinca LAN (RCUB) Yu. academic network |Beotel (Mbps) link + GRnet-Geant |No Info. |

|Nuc. Science- | |(2Mbps) | |

|Yugoslavia | | | |

Table 6: Other networking needs

|Institution |Computing / networking needs related to HEP |Other |

|UNLP, |LAN upgrade to 100 Mbps |- |

|Argentina |LAN-to-WAN upgrade to 4 Mbps | |

|HHE(VUB,ULB), |Network needs do not seem to be a problem for the participation in LHC (CMS).|- |

|Belgium |The WAN speed is Ok and the LAN access to data servers can easily be | |

| |upgraded. | |

|University of Louvain – |1. Network needs are fulfilled for our current activities but will |- |

|Belgium |be very insufficient for our future tasks: Grids, CMS analysis, | |

| |2. Apart from bandwidth, we need reliable 24x7x365 availability. | |

| |3. We have been suffering from network breakdowns (hours, days) without help or | |

| |technical desk available. | |

|University of Mons-Hainaut|- |- |

|– Service PPE - | | |

|Belgium | | |

|UERJ, |HEPGRID PROJECT presented for financial support to work on CMS |Waiting for delivery of the approved |

|Brazil | |budget to build a T3 and later (2005/6)|

| | |a T1 |

|IFT-UNESP Brazil |Will maintain a farm for Monte Carlo studies and a Tier 3 Grid node |- |

|University of Cyprus - |The HEP group intends to have responsibilities on Monte Carlo Production and |The Bandwidth of 34 Mbps is sponsored|

|CYPRUS |build a Grid Unity type T2 or T3. Need to upgrade Network to Gigabit. In |by Cyprus Telecommunications Agency |

| |principle there are no limits to use the Network. But the daily traffic is the|via a research Program and GEANT. The|

| |real limitation. |University pays for the Network |

|Institut fuer |The most important upgrade is the increase in bandwidth to the FZK from |- |

|Experimentelle Kernphysik |155Mbit to 1 Gbit/sec. | |

|– University of Karlsruhe |Needs: good connection to the German Tier1 must always be ensured; good | |

|– |connections of the German Tier1 to the other LHC Tier1 centers and to the | |

|Germany |Tevatron are important | |

|TATA, |Will have a Tier 3 Grid node |- |

|India | | |

|Weizmann Institute of |No needs for now. Don’t know how the present network will behave when |- |

|Science – |they need to transfer more data | |

|Israel | | |

|Dept. of Physics and INFN,|• LANs upgrade is already occurring, moving to a LANs “backbone” of Gbit |- |

|Bologna |ethernet and bringing the Gbit to key Farms and Servers. | |

|Italy |• Regional and National networks are upgrading to GARR-G (Italian Gbit | |

| |academic-research infrastructure) | |

| |• Interconnection between the LANs and the outside is going to be upgraded to| |

| |100 Mbps and we’ll have an option to “grow” up to Gbit. | |

Table 6: Other networking needs (Continuation)

|Institution |Computing / networking needs related to HEP |Other |

|INFN Napoli |No info. |No Info. |

|Italy | | |

|Cinvestav, |Dedicated 2 Mbps link for HEP group |- |

|Mexico | | |

|QAU, |In terms of Hardware declared that they have what they really need. In terms |- |

|Pakistan |of bandwidth need to upgrade but no last mile connection problem. | |

|Budker Institute of Nuclear |Upgrade everything. Scarce resources, equal situation for other communities; |SB RAS with the funds reserved by |

|Physics – |it is a very difficult situation for upgrades or getting a dedicated network. |the Ministry of Science and |

|Russia | |Technology |

|ITEP – Moscow – |Routing equipment for LAN, and to increase the capacity of external |- |

|Russia |connections. | |

|Mathematics and Computing |Local area network: our plan is to reach 1 Gbps within the next 3 years |Connection to WAN: 600 Mbps to be |

|Division –IHEP-Protvino – |Link to Moscow will grow to 30 Mbps by the next year and up to 600 Mbps by|supported by the budget for |

|Moscow Region |2007-2008 ( optical link) |Institute |

|Russia | | |

|Novgorod State University, |First half of 2003 increase existing external channel throughput to no less | |

|Russia |than 150 Mbps. Reconstruct local area network to Gigabit Ethernet. | |

|Institute of Experimental |Upgrade schedule: |Network-related problems (i.e. |

|Physics, Slovak Academy of |-LAN: 10Mb and 100Mbit (majority of computers are connected with 100Mb) |what I expect to be the bottleneck|

|Sciences, Kosice |-LAN-City network connection: 10Mb, upgrade to 100Mb this year, upgrade to |in next years): manpower, |

|SLOVAKIA |1Gb planned in 2003 |bandwidth |

| |-National backbone:   1Gb, I don't think it will be more in 2003   |Other: the lack of raw CPU power |

| |International connection: 1Gb + 2x100Mb (see above), I don't know what is |and disk space |

| |planned in the near future. | |

|JSI, |Additional bandwidth should be reserved for HEP |- |

|Slovenia | | |

|Instituto de Física de |CMS/CERN – MC production center |- |

|Cantabria –University of |CDF/FERMILAB – Data analysis | |

|Cantabria |GRID Computing | |

|Spain | | |

|Inst. Of Particle Phys. |As member of the LCG-GDB and in the process of setting up a national Tier-2 | |

|–ETH-Zurich |regional centre at SCSC in Manno (scsc.ch); As such Switzerland is | |

|Switzerland |participating in the LCG-phase1 data challenges for the various experiments, | |

| |and we plan to use this our local RC for that purpose; | |

| |For this the required networking infrastructure between the RC (in Manno), | |

| |the experiment at CERN (CMS) and the home-institute (ETHZ) has to be of order| |

| |1 GByte/s in 2003, and a gradual increase to reach about 10 GByte/s by end | |

| |2004. This bandwidth is anticipated to be available between Zuerich and | |

| |CERN, but it may be a bit more difficult to obtain that bandwidth between | |

| |Zuerich and Manno. By the time of LHC-startup (2007) we'll need a bandwidth | |

| |well beyond 10 GByte/s. | |

| |The present planning of the Swiss national network is expected to roughly | |

| |match our needs, with the possible exception of the above-mentioned | |

| |connection ETHZH->Manno. | |

Table 6 : Other networking needs (Continuation)

|University of Cukurova, |Sponsoring organizations are ULAKBIM and University of Çukurova (both | |

|Adana |academic) There is no special allocation for HEP. We have one Sun Sparc | |

|University of Cukurova, |station and 3 Intel Pentium PCs. Our infrastructure is not sufficient. | |

|Adana, | | |

|Turkey | | |

|Middle East Technical |Restructuring local area network is necessary. Switch based infrastructure is| |

|University |also necessary. | |

|Turkey |Present bandwidth is not enough at the user end. | |

|University of California – |We have sufficient bandwidth for now. |- |

|Santa Barbara CA | | |

|USA | | |

|University of California, San|UCSD has become a Monte Carlo production center for CMS and is expected to |- |

|Diego (UCSD) |join the large scale simulation effort in Babar as well. We are also | |

|USA |attempting to become more involved in remote analysis. We estimate that, to meet | |

| |our production and analysis goals, we need to routinely achieve 100 Mbps | |

| |between UCSD and the experiments we | |

| |participate in. | |

|Physics Department University|No specific need |- |

|of Illinois at Chicago | | |

|USA | | |

|Centro Nacional de Cálculo |We are not aware of any upgrade plans. | |

|Científico Universidad de Los|Our network needs better security | |

|Andes (CeCalCULA) | | |

| | | |

|Dep. de Fisica, Fac. de | | |

|Ciencias, Univ. Central de | | |

|Venezuela(UCV) | | |

|Venezuela | | |

|Vinca – Inst. of Nuc. |Higher Bandwidth to outer links. Both Regional and International are low |- |

|Science- |bandwidth. | |

|Yugoslavia | | |

Table 7: Most relevant networking related problems

|Institution |Most relevant networking related problems |

|UNLP, |2 Mbps optical link, shared with well over 1000 people, is not sufficient |

|Argentina | |

|HHE(VUB-ULB), |- |

|Belgium | |

|University of Louvain – |Technical manpower assistance |

|Belgium | |

|University of Mons-Hainaut - |- |

|Service PPE - | |

| | |

|Belgium | |

|UERJ, |UERJ is outside the Rede RIO’s 155 Mbps ATM ring, connected by a 10 Mbps link. Rede RIO has a private |

|Brazil |International 45 Mbps connection, which is saturated. RNP, the national backbone research network has a |

| |155+42 Mbps International link that could be used by our institution given that the proper configuration was |

| |imposed. |

|IFT-UNESP |(2/3) speed and bandwidth |

|Brazil | |

|University of Cyprus – |Last-mile connections and heavy traffic on the network limit the present work. Need to buy more nodes to |

|CYPRUS |work for CMS Monte Carlo. Lack of people trained in network technologies. |

|Institut fuer Experimentelle |- |

|Kernphysik – University of | |

|Karlsruhe – | |

|Germany | |

|TATA, |High prices |

|India | |

|Weizmann Institute of Science – |No info |

|Israel | |

|Dept. of Physics and INFN, |We are involved in the LHC Program as well as other HEP Experiments (CDF, Hera-B, etc.). Plans for upgrading the |

|Bologna |network follow mainly the LHC projects and experiments, so that our local “Tier2s” can be part of the |

|Italy |LHC Computing Systems. Another driving activity for the network upgrades is related to our participation in |

| |Grid Projects. |

| |Apart from the bandwidth, we consider important to have a “global” HENP policy for “security”. Firewalls, at |

| |least, must be coordinated. |

|INFN Napoli | |

|Italy | |

|Cinvestav, |2 x 2Mbps links for the whole community is not sufficient |

|Mexico | |

|QAU, |(2/3) Internet connection bandwidth. In processes of deploying a 64 Kbps leased line (by October) - limited |

|Pakistan |by funding |

|Budker Institute of Nuclear | 0.5M dedicated channel BINP-KEK * 100M connection to regional network |

|Physics- |34M shared connection to National backbone (up to 10% really available) |

|Russia |10M shared connection to Global Internet (up to 10% really available) |

| |Future needs:-2M dedicated channel BINP-KEK + (2-3) Gbps connection to regional network + (4-5)*155 M |

| |shared connection to National backbone or 155M dedicated connection to National backbone + 155M dedicated|

| |connection to Global Internet |

|ITEP – Moscow | No information |

|Russia | |

Table 7: Most relevant networking related problems(Continuation)

|Institution |Most relevant networking related problems |

|Mathematics and Computing |International connections: low bandwidth caused by high prices |

|Division –IHEP-Protvino – Moscow | |

|Region | |

|Russia | |

|Novgorod State University, |We have only financial, rather than technical problems |

|Russia | |

|Tech. Div. S. Petersburg Nuc. |Next requirements are for better international networks |

|Phys. Inst. Russian Academy of | |

|Science (PNPI RAS | |

|Russia | |

|Institute of Experimental |-- |

|Physics, Slovak Academy of | |

|Sciences, Kosice | |

|SLOVAKIA | |

|JSI, Slovenia |LAN-WAN interconnection: high load, no QoS, new infrastructure needed |

|Instituto de Física de Cantabria|Network national hierarchy is the main problem now. |

|–University of Cantabria | |

|Spain | |

|Inst. Of Particle Phys. |- |

|–ETH-Zurich | |

|Switzerland | |

|University of Cukurova, Adana |- |

|Turkey | |

|Middle East Technical University|- |

|Turkey | |

|University of California – Santa|No info |

|Barbara | |

|CA | |

|USA | |

|Physics Dept University of |No Info |

|Illinois at Chicago | |

|USA | |

|Centro Nacional de Cálculo |High prices, and the lack of National policies have forced us to look for individual connectivity solutions |

|Científico Universidad de Los |instead of a cooperative organization that we planned |

|Andes (CeCalCULA) |Perhaps the biggest problem we face is the lack of support and improvement policies, both at the faculty and|

|Dep. de Fisica, Fac. de |university level |

|Ciencias, Univ. Central de |A local firewall to improve security could be useful. |

|Venezuela(UCV |To our knowledge there are no committees trying to solve our problems. |

|Venezuela | |

|Vinca – Inst. of Nuc. Science- |No Funds to do upgrades |

|Yugoslavia | |

Table 8: Presented ideas for prospective solutions

|Institution |Presented ideas toward a solution for the detected problems |

|UNLP, |Better economical support |

|Argentina | |

|HHE(VUB,ULB), |- |

|Belgium | |

|University of Louvain – |A solution will be to have a dedicated line and high bandwidth from the laboratory directly to the BELNET |

|Belgium |research network. |

|University of Mons-Hainaut - |- |

|Service PPE – | |

|Belgium | |

|UERJ, |(1) Upgrade internal network as soon as the approved budget is delivered. |

|Brazil |(2) Upgrade the last mile link with an “almost”-blind fiber and up to 155 Mbps ATM. |

| |(3) Upgrade the Rede RIO’s international link or reconfigure our routing tables to forward packets through |

| |RNP’s international link (a Rede RIO and RNP agreement is necessary). Most of our solutions require |

| |non-immediately-available funding. |

|IFT-UNESP |University should provide funds to increase speed and bandwidth |

|Brazil | |

|University of Cyprus – |More effort is needed to include the smaller Institutes and Laboratories in the GRID Projects and not only |

|CYPRUS |the well-established Research Centers. GRIDs should concentrate on HEP programs and not waste limited resources. |

|Institut fuer Experimentelle |- |

|Kernphysik – University of | |

|Karlsruhe – | |

|Germany | |

|TATA, |Proposals for progressive upgrades |

|India | |

|Weizmann Institute of Science – |Next upgrade to 45 Mbps – implement QoS (Quality of Service) classes to get a discount on bulk traffic |

|Israel | |

|Dept. of Physics and INFN, |There is an increasing need of “QoS” and it seems difficult to get it via the network providers. Real “QoS” |

|Bologna |services can only be obtained if the “quality” is all along the path between end-nodes. |

|Italy | |

|INFN Napoli |No Info. |

|Italy | |

|Cinvestav, |More money would allow them to buy additional links 2 Mbps each. |

|Mexico | |

|QAU, |(2/3) Upgrade Internet connection to 512 Kbps |

|Pakistan | |

|Budker Institute of Nuclear |No specific technical ideas/inventions - all technologies are well proven and the only thing necessary in |

|Physics – |order to implement them is additional funding. |

|Russia | |

|ITEP – Moscow – |No specific ideas |

|Russia | |

|Mathematics and Computing |In many cases we need access only to some particular Scientific Centers (CERN, FNAL, DESY, …) but not|

|Division –IHEP-Protvino – Moscow |to all Internet resources. So maybe there are possibilities to organize tunnels between these points in the|

|Region |Internet at a lower tariff. |

|Russia |This would allow the HEP community to exchange Petabytes of data at reasonable cost. |

Table 8: Presented ideas for prospective solutions (Continuation)

|Institution |Presented ideas toward a solution for the detected problems |

|Novgorod State University, |We have just begun; thus the only problem we have met is a lack of facilities, due to the lack of funds allocated|

|Russia |for this program. |

|Tech. Div. S. Petersburg |No information |

|Nuc. Phys. Inst. Russian Academy| |

|of Science (PNPI RAS) | |

|Russia | |

|Institute of Experimental | |

|Physics,Slovak Academy of | |

|Sciences,   Kosice | |

|SLOVAKIA | |

|JSI, |Better infrastructure: 1 Gbps Giga-Ethernet for LAN and WAN, 10 Gbps SDH for national network and 1 Gbps SDH |

|Slovenia |for international links. Reserved bandwidth for HEP |

|Instituto de Física de Cantabria|- |

|–University of Cantabria | |

|Spain | |

|Inst. Of Particle Phys. |- |

|–ETH-Zurich | |

|Switzerland | |

|University of Cukurova, Adana |-- |

|Turkey | |

|Middle East Technical University|- |

|Turkey | |

|University of California – Santa|No Info |

|Barbara CA | |

|USA | |

|University of California, San |No Info |

|Diego (UCSD) | |

|USA | |

|Physics Department University of|No Info |

|Illinois at Chicago | |

|USA | |

|Centro Nacional de Cálculo |Establish a cooperative organization that would provide services (IP telephony, Digital Libraries, Distributed |

|Científico Universidad de Los |Computing, Storage, and training) to the institutions |

|Andes (CeCalCULA) | |

| | |

|Dep. de Fisica, Fac. de | |

|Ciencias, Univ. Central de | |

|Venezuela(UCV | |

|Venezuela | |

|Vinca – Inst. of Nuc. Science- |Dedicated Line to HEP |

|Yugoslavia | |

Table 9: Present Computing Facilities Dedicated to HEP

A-Mainframe, Clusters, LAN, Networked CPUs

|Institution |Mainframe |Clusters |Networked CPUs |LAN Technology |

|UNLP, |No |No |Isolated PCs in a LAN |10 Mbps, shared copper links |

|Argentina | | | | |

|HHE(VUB,ULB) |No |1 Beowulf cluster, CPU type P4,|Via fast Ethernet |LAN: switched Ethernet at 100 |

|Belgium | |480 Mbytes storage, | |Mbps, essentially copper links |

| | |interconnection via fast | | |

| | |Ethernet | | |

|University of Louvain – |No |No clusters. |Connection is a mix of 100/1000|Our LAN is connected to the WAN|

|Belgium |. |We have 5 servers: |Mbit/s Ethernet between |though the University network. |

| | |1 - management |servers. Offices are connected|For the moment, there is no |

| | |3 - computing (1 HEP, 2 shared |at 10 Mbit/s. |dedicated network/gateway for |

| | |with other research projects) | |HEP community |

| | |1 storage (HEP) | | |

| | |4. We have ± 15 of desktop PCs | | |

| | |for HEP. | | |

|University of Mons-Hainaut|- |Dedicated cluster of 4 |via 1GBit Ethernet |Outside connection is via |

|- Service PPE – | |BiProcessors Intel PIII-1.2GHz | |100MBit Ethernet network |

|Belgium | |each with 1 | |copper cable |

| | |GBytes of memory, 2 Mono | | |

| | |Processor PIV , 512MBytes of | | |

| | |memory | | |

| | |all running Linux RedHat 7.X | | |

| | |and a 2.5 TB RAID-5 disk array, | | |

|UERJ, |No |to be deployed |35 |10/100 Ethernet hubs/switches |

|Brazil | | | | |

|IFT-UNESP |No |No | |100 Mbps |

|Brazil | | | | |

|University of Cyprus |No |Linux Clusters with “many” |“few” computers and | |

|CYPRUS | |nodes |Workstations | |

|Institut fuer |- |- |- |- |

|Experimentelle Kernphysik | | | | |

|– University of Karlsruhe | | | | |

|-Germany | | | | |

|TATA, |No |No |32 |100 Mbps |

|India | | | | |

|Weizmann Institute of |No |64 Pentium-farm |No Info. |100 Mbps |

|Science – | | | | |

|Israel | | | | |

|QAU, |No |6 PC Linux |10 | |

|Pakistan | | | | |

A-Mainframe, Clusters, LAN, Networked CPUs (Continued)

|Institution |Mainframe |Clusters |Networked CPUs |LAN Technology |

|Dept. of Physics and INFN,| |Yes we have Clusters and Farms, |Storage is served by “disk |The LANs are all 100Mbps |

|Bologna | |about 10. Most of them are |servers” (some TB on a few |Ethernet, interconnected via |

|Italy |No |Linux. They range from a couple |servers) via NAS or SAN (Gbit |Gbit Ethernet trunking. Some |

| | |of nodes to 30 dual-CPU boxes |Ethernet and/or FC). |portion of LANs are already |

| | |rack mounted. Typical CPUs are | |full Gbit Ethernet also for end|

| | |PIII at 1GHz (but we are rapidly| |nodes. All the nodes (with very|

| | |changing to the new CPUs). | |few exceptions) are switched. |

| | | | |Physical media is twisted pairs|

| | | | |(cat 5 and above) copper. Gbit |

| | | | |Ethernet for trunking is on |

| | | | |fiber, local Gbit connections |

| | | | |are on copper. We have a backup|

| | | | |connection between our two |

| | | | |major buildings via radio link |

| | | | |(35 Mbps) HEP has a special |

| | | | |link (INFN Section of Bologna) |

| | | | |to WAN (GARR) currently via ATM|

| | | | |at 34 Mbps. The Dept. of |

| | | | |Physics of the University of |

| | | | |Bologna has a connection via |

| | | | |ATM at 622 Mbps to University |

| | | | |MAN which is in turn connected |

| | | | |to GARR at 100 Mbps (fast |

| | | | |Ethernet). |

|INFN Napoli |No |local cluster for CMS |100 Mbps Ethernet link |Naples is a new CMS group, |

|Italy | |6 CPU of 2.4GHz | |working on the construction of |

| | |360 GB storage, connected at 100| |a Farm in order to begin to |

| | |Mbps planned to be upgraded at | |work on the CMS core software |

| | |1Gbps | |development. |

|Cinvestav, Mexico |No |25 2-CPUs + File Server | |100 Mbps NICs w/ twisted pair |

|Budker Institute of |No |(4-5) clusters working for |Intracluster interconnections |Usually 100M connection via |

|Nuclear Physics – | |different projects, each of |are based on switched 100M/1G |Ethernet switch. |

|Russia | |10-15 middle-range CPU |Ethernet. | |

| | |(P3/500MHz, 1GB Memory, 20GB/CPU| | |

| | |storage). | | |

A-Mainframe, Clusters, LAN, Networked CPUs (Continued)

|Institution |Mainframe |Clusters |Networked CPUs |LAN Technology |

|ITEP Moscow |No |50 CPUs 2 TB Storage |100-1000 Mbps Ethernet |local connections: 100 Mbps |

|Russia | | | |Ethernet, and 1 GBit Ethernet |

| | | | |backbone optical link |

|Mathematics and Computing |AlphaServers: |(1)10 CPUs Pentium III 833 MHz; |Fast Ethernet; |10 Mbps Ethernet; |

|Division –IHE-Protvino – |(1)8200 (6CPUs) |(2)30 CPUs Pentium III 930 MHz; |FDDI; |100 Mbps Ethernet; |

|Moscow Region |5/300 MHz; 1GB |(3)2 TB discs; |Ethernet |FDDI |

|Russia |RAM; (2) 8200 |(4)Fast Ethernet. | | |

| |(6CPUs)5/440 | | | |

| |MHz;1GB RAM; | | | |

| |(3) 200 (4CPUs) | | | |

| |5/625 MHz; 1GB | | | |

| |RAM. | | | |

|Novgorod State University,|No |No |One CPU connected to the LAN, |We have only one computer |

| | | |it is dedicated to HEP |connected to the University’s |

|Russia | | | |switch using Fast Ethernet (100|

| | | | |Mbps). |

|Tech. Div. S. Petersburg |No |Our LAN consists of more than |The segments of LAN belong to |Technologies using copper |

|Nuc.Phys. Inst. Russian | |500 PCs (~200 PCs dedicated to |different departments built on |twisted pairs (cat 5) and |

|Academy of Science (PNPI | |HEP). Cluster dedicated to HEP: |the 10Mbps Ethernet and Fast |communication equipment HUB and|

|RAS) | |6 dual PC (Intel p-III-700 Mhz, |Ethernet |Switch (Cisco, 3COM, Intel and |

|Russia | |RAM-256 Mb, HDD 30 GB). | |others). The segments are |

| | | | |combined into Institution’s LAN|

| | | | |using optical Fiber (the Length|

| | | | |~ 5 km). |

|Institute of Experimental |No mainframe |We have Linux PC Cluster (10 |Nodes are interconnected |Computing facility is shared, |

|Physics,Slovak Academy of | |nodes, 1 node is based on SMP MB|through 100Mbit ethernet |(but now it is used mainly by |

|Sciences, Kosice | |with |switch, which is connected to |HEP group). |

|SLOVAKIA | |2 processors. Average node |LAN. | |

| | |consists of: 2x733MHz Pentium |Common RAID-5 disk array | |

| | |III processor, 512MB RAM, 40GB |(140GB) is used by all the | |

| | |HD). |nodes, this year to be | |

| | | |upgraded to 1TB RAID-5 disk | |

| | | |array. | |

| | | |1c. 100Mb (through ethernet | |

| | | |switch) | |

|JSI, |No |~40 CPUs, ~25 Athlon 1500XP | |100 Mbps ethernet, switched |

|Slovenia | |equiv + Alpha server, 1 TB disk| |copper links |

|Inst. de Física –Univ. of |No |80 Linux 1.26 GHz 640 Mb memory,| |Through the univ. network |

|Cantabria | |storage: 7 Tb disk, 7Tb tapes | | |

|Spain | | | | |

A-Mainframe, Clusters, LAN, Networked CPUs (Continuation)

|Institution |Mainframe |Clusters |Networked CPUs |LAN Technology |

|Inst. Of Particle |No |For CMS computing ETH provided : | |Network connection: |

|Phys. –ETH-Zurich | |CPU : | |all connections are with |

|Switzerland | |a) 25%of a Beowulf cluster | |Ethernet; Mostly 100 Mbit/s, |

| | |(ASGARD), that corresponds to 128| |apart from the fileservers on |

| | |CPUs(Intel 500 MHz each), located | |the ASGARD cluster, which is 1 |

| | |in Zuerich. | |Gbit/s. |

| | |This is HEP facility, is sharing | |Most of the LAN and WAN |

| | |with others (theory, etc); HEP | |questions have already been |

| | |receives 25% of an overall of 512| |answered above. |

| | |CPU's. | |Ethernet everywhere. Optical |

| | |b) about 5 PC (Intel 500MHz | |fibers for the long range links|

| | |each) , located at Zuerich | |(Zuerich -> CERN) |

| | |c) about 10 PC (Intel 1 GHz each)| | |

| | |, located at CERN | | |

| | |All Linux OS; | | |

| | |Disk : | | |

| | |a) about 1 TB, located at | | |

| | |Zuerich, attached to ASGARD | | |

| | |b) about 1 TB, located at CERN, | | |

| | |available to the PCs | | |

| | |c) about 0.5 TB, located at | | |

| | |Zuerich, available to the PCs | | |

|University of |NO |NO |Through ethernet cards |We have 10/100 Mbps Ethernet |

|Cukurova, Adana | | | |and ATMs. We have copper links |

|Turkey | | | |connect to main router with |

| | | | |single mode fiber optic. |

| | | | |Local WAN has single mode |

| | | | |optical fiber. HEP does not have |

| | | | |any special link. |

|Middle East |Yes |No for HEP Mostly Pentium IV | | |

|Technical | | | | |

|University | | | | |

|Turkey | | | | |

|University of | | |Most CPUs use 100BaseT switched|100 Mbps Ethernet |

|California – Santa |NO |NO |network |LAN->WAN: Gigabit Ethernet. |

|Barbara CA | | | | |

|USA | | | | |

A-Mainframe, Clusters, LAN, Networked CPUS (Continuation)

|Institution |Mainframe |Clusters |Networked CPUs |LAN Technology |

|University of |NO |We currently have a computing |The data servers are connected |Our local area network is |

|California, San | |cluster of 20 dual CPU P3 800 MHz |with gigabit Ethernet and the |100Mbps Ethernet over copper. We|

|Diego (UCSD) | |nodes. These are connected to 4.5TB |computational |are currently connected to the |

|USA | |of IDE and SCSI RAID arrays. These |nodes have 100Mbps fast |wide area network over 100Mbps |

| | |systems will be connected with |Ethernet. . We are in the |copper Ethernet. |

| | |gigabit. We hope to upgrade to |process of procuring an | |

| | |gigabit Ethernet over fiber soon. |additional 20 computational | |

| | | |nodes with 2.4GHz dual P4 | |

| | | |Xeons. | |

|Physics Department | | | | |

|University of | | | | |

|Illinois at Chicago| | | | |

|USA | | | | |

|Centro Nacional de |Not at all, we have |~60 dual processors ~900 MHz 1GB |Gigabit Ethernet |Gigabits Trunking through Fiber|

|Cálculo Científico |several multiprocessor|RAM/2proc ~3 terabytes storage | |optics, 100 Mbps local area |

|Universidad de Los |servers (SUN, IBM and |capacity | |through twisted pair and radio |

|Andes (CeCalCULA) |SGI) | | |links (Broadband Delivery |

| | | |System and Spread Spectrum, 2.4|

|Dep. de Fisica, |ve/recursos/hardware/ | | |GHz, 11 |

|Fac. deCiencias, | | | |Router CISCO 7500 with two |

|Univ. Central de | | | |links (512 KB/2 GB) |

|Venezuela(UCV) | |We just have several CPU's |The LAN uses 100 MBps switched |Mbps) |

|Venezuela | |connected to the LAN with TCP/IP. |on copper links. | |

|Vinca – Inst.of |No |No |400 |100 Mbps |

|Nuc.Science- | | | | |

|Yugoslavia | | | | |

B-Firewall, WAN, Hops, Financial Support

|Institution |Firewall |WAN connection |Hops |Who pays for the connection ? |

|UNLP, Argentina |No |Commercial optical fiber |23 to CERN 22 to |UNLP and/or Conicet |

| | | |Fermilab | |

|HHE(VUB,UL) |Yes |University LAN is then |About 10 hops |HEP does not have to pay for the WAN connection |

|Belgium | |connected to the WAN at 2.5 | | |

| | |Gbps. No separate HEP | | |

| | |connections to the WAN | | |

|University of |Yes |Our LAN is connected to the | |HEP do not pay for network infrastructure. |

|Louvain – | |WAN through the University | | |

|Belgium | |network. There is no | | |

| | |dedicated network/gateway | | |

| | |for HEP community. | | |

|University of |No |We are using the internal |One |No |

|Mons-Hainaut - | |100 MBit network of our | | |

|Service PPE | |university | | |

|Belgium | | | | |

|UERJ, |No |University’s 10 Mbps ATM |20 |Rio de Janeiro State Government |

|Brazil | |over fiber | | |

|IFT-UNESP |Yes |NO Info |20 to FNAL |University |

|Brazil | | | | |

|University of | |Local Network to a WAN by |Only 1 internet |University of Cyprus |

|Cyprus – | |CYNET (National Level) and |Router. The Second | |

|CYPRUS | |GEANT for international) |Line is 128Kbps | |

|Institut fuer |- |- |- |- |

|Experimentelle | | | | |

|Kernphysik – | | | | |

|University of | | | | |

|Karlsruhe – | | | | |

|Germany | | | | |

|TATA, | | | | |

|India | | | | |

|Weizmann Institute |Yes |ATM/Gb internal backbone/ 35|15 to CERN |Weizmann Institute |

|of Science – | |Mbps to Regional Network | | |

|Israel | |over fiber. | | |

|Dept. of Physics |Yes |See Above Table A |10. |HEP has its own WAN connection and therefore pays |

|and INFN, Bologna | | | |for that. There is agreement between HEP (INFN) and|

|Italy | | | |the Department (University) for use of the |

| | | | |reciprocal WAN connections in case of failures |

| | | | |(backups). |

B-Firewall, WAN, Hops, Financial Support (Continuation)

|Institution |Firewall |WAN connection |Hops |Who pays for the connection ? |

|INFN Napoli |No |1Gbps no trunking no ATM | |WAN connection is covered in the general |

|Italy | |switched LAN with connection| |expenditures by INFN |

| | |to single hosts at 100 Mbps | | |

| | |(copper- 64 Mbps connection | | |

| | |to GARR dedicated to INFN | | |

| | |and University of Naples | | |

| | |Physics Department Ethernet | | |

| | |cabling) | | |

|Cinvestav, | |Not mentioned |22 to Fermilab |Cinvestav and/or Conacyt |

|Mexico | | | | |

|QAU, | | | | |

|Pakistan | | | | |

|Budker Inst. of |No |- |15 |The Institute |

|Nuc.Phys | | | | |

|Russia | | | | |

|ITEP –Moscow - |No info. |No Info. |Regional 5-10 |No info. |

|Russia | | |Internet. 10-20 | |

|Mathematics & Comp.|No Info. |No Info |10 |IHEP |

|Div. –IHEP-Protvino | | | | |

|– Moscow Region | | | | |

|Russia | | | | |

|Novgorod State |Yes. |No info. |30 |The University |

|University, | | | | |

|Russia | | | | |

|Tech. Div. S. |No Info. |Our LAN is connected to the |No Info. |Russian Academy of Science |

|Petersburg Nuc. | |commercial WAN using optical| | |

|Phys. Inst. Russian| |fiber. HEP community of our | | |

|Academy of Science | |institute hasn’t any special| | |

|(PNPI RAS) | |lines to the WAN. The | | |

|Russia | |bandwidth PNPI-Gatchina ~ | | |

| | |256 Kbps, Gatchina - | | |

| | |Saint-Petersburg - 2048 Kbps| | |

| | |(current status). The | | |

| | |bandwidth PNPI - Gatchina | | |

| | |restricted by financial | | |

| | |reason. | | |

B-Firewall, WAN, Hops, Financial Support (Continuation)

|Institution |Firewall |WAN connection |Hops |Who pays for the connection ? |

|Institute of |YES |Ethernet 100Mbit is used in |13 hops |LAN is connected by 10Mb/s fibre, to be upgraded to |

|Experimental | |LAN | |100Mbit this year. |

|Physics,Slovak | | | |There is no specific payment for bandwidth paid by |

|Academy of | | | |HEP community. |

|Sciences, Kosice | | | |   Internet connection is shared, no limit (for |

|SLOVAKIA | | | |now) is known to me. |

|JSI, | |1 Gbps optical fiber, no | | |

|Slovenia |Yes |special links for HEP |12 to CERN |Not mentioned |

|Inst. de Física de |No | |CERN – 4 |Cover by the University |

|Cantabria –Univ. | | |FERMILAB – 6 | |

|Cantabria | | | | |

|Spain | | | | |

|Inst. of Particle |No |- |30 |ETH |

|Phys.–ETHZurich | | | | |

|Switzerland | | | | |

|University of | | | | |

|Cukurova, Adana & | | | | |

|University of | | | | |

|Cukurova, Adana, | | | | |

|Turkey | | | | |

|Middle East | |2 Gbps Through a switch, HEP|3 |TUBITAK and our University |

|Technical | |has no special link to | | |

|University | |WAN. The main entry is ATM and| | |

|Turkey | |star structure is used for | | |

| | |local distribution. | | |

|University of |No |Gigabit Ethernet. |30 |Every IP host at UCSB pays $11 per IP address per |

|California – Santa | | | |year for the IP network usage |

|Barbara CA | | | | |

|USA | | | | |

|University of |No |- |Between 10 and 15 hops|The network connection is covered by the general |

|California, San | | | |expenditures of the university |

|Diego (UCSD) | | | | |

| | | | | |

|USA | | | | |

|Physics Department |No |- |7 |Connection to WAN is paid for by university |

|University of | | | | |

|Illinois at Chicago| | | | |

|USA | | | | |

B-Firewall, WAN, Hops, Financial Support (Continuation)

|Institution |Firewall |WAN connection |Hops |Who pays for the connection ? |

|Centro Nacional de |No Info. | |For physics programs |Our university |

|Cálculo Científico | | |involving national | |

|Universidad de Los | | |institutions. And 5 or| |

|Andes (CeCalCULA) | | |6 to international | |

|Venezuela | | |institutions in USA. | |

|Vinca – Inst. of |No info. |Gb internal backbone – |20 to CERN |Serbian Government |

|Nuc. Science- | |128Kbps serial line to | | |

|Yugoslavia | |Regional Network | | |

ICFA SCIC Questionnaire – Responses by Russian Sites

These are contained in the accompanying report, on the Digital Divide in Russia

Appendix C - Case Studies

Rio de Janeiro, Brazil – Network Rio ( REDE RIO)

We have shown several maps of the networking situation in Brazil in Appendix A. Brazil is part of South America, a region experiencing a strong Digital Divide. Within the region, Brazil and Chile have the best infrastructure to support Grid-based networks for HEP, followed by Mexico. We will examine the status of the Rio de Janeiro State Network (REDE-RIO), which is part of a larger national network, the Rede Nacional de Ensino e Pesquisa (RNP).

The States of Rio de Janeiro and Sao Paulo host the most important State science and technology funding agencies: FAPERJ and FAPESP, respectively. Both have as a fundamental goal supporting and funding State Universities and State Research Centers (there are also Federal funding agencies, whose main goal is supporting Federal institutions). FAPERJ and FAPESP each run a regional network connecting primarily the universities in their States. As a result, the policies of these networks are defined by these agencies.

It is common knowledge that Grid projects need very high-speed networking infrastructures. The people responsible for the development and operation of the Brazilian research networks have been systematically warned of the urgency and necessity to upgrade the network infrastructures to properly support advanced Grid-based physics projects. Unfortunately, there has been little to no reaction to our pleas for help.

Recently, RNP, Brazil’s National Research Network operator, received approval for a project to deploy a Gigabit-network infrastructure[30] connecting the research networks in Rio de Janeiro and Sao Paulo. RNP is aware of the fact that the Brazilian High Energy Physics groups have been waiting for an adequate network infrastructure to begin running experiments using a Grid-based architecture, supporting high-volume data exchanges. We are optimistic that RNP’s GIGA Project will provide significant improvements in bandwidth availability and access to network resources to advance Brazil’s HENP community’s Grid-physics requirements.

However, it is uncertain if Brazil’s national funding agencies will recognize our Grid-based research and collaboration as a science application. Ultimately, Brazil’s scientific community must succeed in establishing national policy that will contain a long-term strategy to provide funding for the advancement of network and computational technologies to support our scientific disciplines. In Brazil, the scientific community has traditionally not paid for network infrastructure. This is generally supported by the agencies mentioned above.

There are many Brazilian institutions collaborating with all LHC experiments: UERJ, UFRJ, UFBA, USP, UNESP and CBPF. If we extend this to the experiments at Fermilab and Auger, the number of institutions increases (UNICAMP, for example). Considering also the collaboration of Engineers, phenomenologists and Computer Science Researchers working in HEP experiments, the number of institutions increases even more, (UFRGS, for example). If we count the collaborators for LHC distributed by the four experiments, we have about 60 researchers (physicists, engineers and computer scientists) from several Universities and Research Centers (UERJ, UFRJ, UFBA, UFRGS, UNICAMP, CBPF, USP, and UNESP).

The academic and scientific networks mentioned above each have technical committees; however, major science users are, unfortunately, not represented. This lack of representation causes many difficulties for the science and research users of these networks and has greatly contributed to the Digital Divide conditions faced by research communities such as that at the State University of Rio de Janeiro (UERJ).

In the State of Rio de Janeiro, the main backbone used for research networking was established by an agreement between FAPERJ, Rede-RIO and Telemar (a telecommunications provider). The initial agreement favored five institutions that would be connected to the backbone, without fees. The criteria for selecting the five institutions were not clearly defined, resulting in UERJ not being considered in this initial agreement. Brazil’s HENP community, led by UERJ, informed the management of RNP and Rede-Rio that the selection criteria had resulted in the community’s universities and laboratories not being connected to Rede-Rio. Regrettably, no change in policy occurred in this initial offering, and the HEP community at these universities did not receive any improvement in its network connectivity.

Four years ago, a second agreement with Telemar was established to upgrade the Rede-Rio backbone to 155 Mbps. Again, important research universities such as UERJ, the most important university funded by the State Government of Rio de Janeiro, were not included in the network upgrade plan. Until 2001, the connection between UERJ and Rede-Rio was 2 Mbps. After considerable effort, UERJ’s connection was finally upgraded to a meager 8 Mbps. This rate still does not allow UERJ’s HEP group to carry out important work with its international collaborators. Moreover, this link is now saturated by the university’s general network traffic, even before any traffic originating from our HEP experiments is taken into account. Many of the other universities listed above face the same conditions as UERJ. Clearly, if this situation does not improve, the consequences for science in Brazil, and for High Energy Physics in particular, will be disastrous. Beyond the HEP projects at UERJ and other Brazilian universities, many projects in other domains are halted due to inadequate access to network resources.
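To illustrate the scale of the problem, consider a rough worked example (the 1 Terabyte figure is an assumed, round-number dataset size used only for illustration, not a measured UERJ requirement): moving 1 TB of event data over a fully dedicated 8 Mbps link would take about (8 x 10^12 bits) / (8 x 10^6 bits/s) = 10^6 seconds, or roughly 11.5 days of continuous transfer. On a link already saturated by general campus traffic the effective time would be far longer, while the same transfer over a 1 Gbps GIGA-class path would complete in a little over two hours.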

Beijing, China

The first data links to China were established in the mid-1980s with the assistance of Caltech (with DOE support) and CERN, in association with the L3 experiment at LEP. These links used the X.25 protocol, then widely used in Europe and the US alongside the TCP/IP links that make up today’s Internet.

The first "always on" Internet link to mainland China became operational in 1994 between SLAC and IHEP/Beijing to meet the needs of the IHEP/SLAC Beijing Electron Spectrometer (BES) collaboration. Wolfgang Panofsky of SLAC and T. D. Lee of Columbia were instrumental in starting the project. Part of the funding for the link was provided through SLAC by the DoE, and SLAC provided much expertise, project leadership and visits by Les Cottrell to Beijing to assist and train.

-----------------------

[1] There are also issues of configuration and performance of the network interfaces in the computers at the end of the path, and the network switches and routers, which are the concern of the Advanced Technologies Working Group.

[2] This connection may also be 10 or 100 miles in practice, depending on the size of the metropolitan or regional network traversed.

[3] The TERENA (Trans-European Research and Education Networking Association) 2002 Compendium, edited by Bert van Pinxteren,

[4] NSF ANI-0220176

[5] Additional information that just appeared at the time of this report, focused on Latin America, may be found at the AMPATH January 2003 Miami Workshop: “Fostering Collaborations and Next Generation Infrastructure”,

[6] This high cost may be the direct result of a lack of competition, and of government policies that do not encourage open competition and/or the installation and deployment of new optical infrastructures. Pricing structures imposed by governments (in Latin America, for example) may penalize high-bandwidth users to subsidize the cost of low-bandwidth connections based on older technology (such as modems connected over telephone lines). See the presentation by C. Casasus at the AMPATH Miami Workshop referenced in note [7].

[7] Observations provided by Carlos Casasus, CEO of the Corporacion Universitaria Para el Desarrollo de Internet (CUDI), the non-profit corporation in charge of Mexico's Internet2 project, at the AMPATH Miami Workshop: Fostering Collaborations and Next Generation Infrastructure, January 2003.

[8]

[9]

[10]

[11] These problems tend to be most prevalent in the poorer regions, but examples of poor performance on existing network infrastructures due to lack of coordination and policy may be found in all regions.

[12] The IEEAF successfully arranged a bandwidth donation of a 10 Gbps research link and a 622 Mbps production service in September 2002. It is expected to announce a donation between California and the Asia Pacific region by mid-2003.

[13] At the WSIS Pan-European Ministerial meeting in Bucharest in November 2002.

See and the US State Department site

[14] By the WSIS Preparatory Committee and the US State Department.

[15] A recent example is a proposal to the NSF from Florida International University, AMPATH, other universities in Florida, Caltech, and UERJ in Brazil for a center for HEP Research, Education and Outreach, which includes partial funding for a network link between North and South America for HENP.

[16] One example is the Internet2 HENP Working Group in the US. See

[17] The U.S. Broadband Problem, July 2002.

[18] One example of such a Cyberinfrastructure is the Global Biomedical Research Exchange, currently under development by the IEEAF. During 2002, the SCIC discussed and began work with the IEEAF on a possible Global Research Exchange for Physics.

[19] "Wavelength(s)" here refers to wavelength division multiplexing (WDM), a type of multiplexing developed for use on optical fiber. WDM modulates each of several data streams onto a different part of the light spectrum.

[20]

[21] AMPATH Valdivia Workshop Report,

[22] South and Central America, Mexico and the Caribbean

[23] CAESAR Report, Review of Developments in Latin America, Cathrin Stöver, DANTE, June 2002

[24]

[25]

[26] AMPATH is a project of Florida International University, with support from the National Science Foundation, to connect Research & Education Networks and major US e-Science projects in South and Central America, Mexico and the Caribbean to US and Global Research & Education networks.

[27] Instituto Nacional de Pesquisas Espaciais

[28]

[29]

[30] What will Optical Networking for the Americas look like?, Michael Stanton, Rede Nacional de Ensino e Pesquisa do Brasil – RNP, AMPATH Workshop: Fostering Collaboration and Next-Generation Infrastructure, FIU, Miami, FL, USA, January 2003.
