Overview of the Munich Scientific Network (MWN)

The Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities (LRZ) operates the Munich Scientific Network (MWN). Connected to it are the sites of the Munich universities, many student residence halls and a number of institutions outside the universities, such as the Bayerische Staatsbibliothek (Bavarian State Library), museums, and institutes of the Max Planck Society and the Fraunhofer Society. More than 100,000 terminals are attached to the network. The MWN consists of a backbone network with routers and switches that connect the networks of the institutions at the various locations. The routers and switches are linked to one another via Ethernet (100 Mbit/s to 100 Gbit/s), depending on each location's bandwidth requirements. As a rule, the physical medium consists of fiber optic lines leased long-term from Deutsche Telekom AG or from the company M-net.

In addition to operating the university network, the LRZ, as the competence center for data communication, is also active in research into new network technologies. Projects in this area include Customer Network Management for the X-WiN, DGRID, Géant3 End-to-End Monitoring, Géant3 I-SHARe and Saser-Siegfried (Safe and Secure European Routing).

University networks

In particular, the MWN connects locations of the Ludwig-Maximilians-Universität München (LMU), the Technische Universität München (TUM), the Bayerische Akademie der Wissenschaften (Bavarian Academy of Sciences and Humanities - BAdW), the Hochschule München (Munich University of Applied Sciences - HM) and the Hochschule Weihenstephan-Triesdorf (Weihenstephan-Triesdorf University of Applied Sciences - HSWT). These locations are spread throughout the entire Munich region (primarily the Munich metropolitan area, Garching and Weihenstephan) and are joined by a few more remote locations in Bavaria, such as Straubing, Triesdorf, Iffeldorf and the top of the Zugspitze, Germany's highest mountain.

The LRZ is responsible for the entire backbone network and for the connected building networks. Exceptions are the internal networks of the medical departments of the Munich universities (Rechts der Isar (right of the Isar) - TUM; Großhadern and the inner-city clinics - LMU), of the Munich University of Applied Sciences and of the TUM computer science department. These are maintained by local operating groups, but the LRZ is responsible for connecting these networks to the MWN and onwards to the Internet.

The LRZ requires that a so-called "Netzverantwortlicher" (person responsible for the network) be appointed as the contact at each institution. These people administer the local address spaces and help clarify network problems or cases of improper network use.

The period from 7:00 a.m. to 9:00 a.m. on Tuesdays has been set aside for maintenance work (such as replacing switches). The scope and duration of the work are always announced the day before in the current LRZ news on the LRZ web server and additionally by e-mail to the people responsible for the affected networks.

Network Wiring

Locations

The MWN locations are distributed throughout the entire Munich region (primarily the Munich municipal area, Garching, and Weihenstephan).

(Map of the MWN locations)

Various technologies and bandwidths are used to provide the connections, depending on the size and traffic volume of the individual locations (see the overview below). 39 fiber optic lines are rented from Deutsche Telekom and 35 from M-net under long-term leasing agreements. Fiber optic lines have also been laid within individual campus areas (Garching, TUM main campus, LMU main campus, etc.), partly in the framework of the universities' NIP (Network Investment Program) and partly independently. Smaller locations are connected via DSL connections from Telekom or via SDSL from M-net (2 Mbit/s - 25 Mbit/s); where possible, M-net Fibre SDSL is used.

The lines are operated with the following technologies according to demand and requirements:

100 Gigabit Ethernet (100,000 Mbit/s): Large locations, high volume of communication (router at the site)
10 Gigabit Ethernet (10,000 Mbit/s): Large locations, high volume of communication (router or switch at the site)
Gigabit Ethernet (1,000 Mbit/s): Larger to medium-sized locations (switch at the site)
Fast Ethernet (100 Mbit/s): Medium-sized or smaller locations (switch at the site)
xDSL / Fibre DSL (2 - 50 Mbit/s): Smaller locations, low to medium volume of communication

Transported protocols

Only TCP/IP is routed across the router ports. Some ports are filtered at the transition to the Internet. More information on this is documented on the page Restrictions and monitoring in the Munich Scientific Network.

IPv6 is supported in addition to IPv4. The LRZ has its own globally valid IPv6 address block (2001:4ca0::/32). Each administrative unit is being assigned a /48 network, either successively or immediately upon request. Both native and tunneled connections are possible. All important servers already support IPv6.
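
As a small worked example (a sketch using Python's standard ipaddress module; the subnet picked at the end is an arbitrary illustration, not an actual assignment), the /32 block accommodates 2^16 = 65,536 such /48 networks:

    # How many /48 networks fit into the LRZ block 2001:4ca0::/32?
    import ipaddress

    block = ipaddress.ip_network("2001:4ca0::/32")
    print(2 ** (48 - block.prefixlen))      # 65536 possible /48 assignments

    # Pick one /48 purely for illustration (not a real assignment):
    example = next(block.subnets(new_prefix=48))
    print(example)                          # 2001:4ca0::/48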

Network components in use

For support reasons (management, configuration, logistics), only one family or one particular type of device is used for each application area in the MWN where possible.

Router Nexus C7010: Manufactured by Cisco

The new routers in the backbone are of the type Cisco Nexus C7010. The devices are fully redundant, and software updates are possible without interruptions. They support Ethernet with bandwidths up to 100 Gbit/s, which is in use on the connection from the LRZ to the Garching campus router. Forwarding of data packets is implemented in hardware on the line cards. Switching is done at full line speed even when the chassis is fully equipped ("non-blocking"). (Manufacturer's information)

Catalyst 6509 router: Manufactured by Cisco

The older backbone routers are of the type Cisco Catalyst 6509. The devices support Ethernet (10/100/1,000/10,000 Mbit/s). Forwarding of data packets is implemented in hardware on the line cards; this is often referred to as layer-3 switching. The central servers in the LRZ are connected via a virtual router (VSS - Virtual Switching System) built from two 6509 chassis. Policy-based routing is supported for the routing of institutions outside the universities (such as Max Planck institutes). (Manufacturer's information)

Routers: Cisco 800, 1700, 1800, 1900

We operate several devices of the model series Cisco 800, 1700, 1800 and 1900 for buildings connected via DSL. (Manufacturer's information)

Switch: Cisco Nexus 7000

A Cisco Nexus 7000 with 256 10GE ports is used for connecting the Linux cluster. It offers a high density of 10GE ports (up to 512) with a high switching throughput (max. 8 Tbit/s, 960 million packets per second). (Manufacturer's information)

Switches: Manufactured by HP

Switches of the HP ProCurve series are used in the building networks. These devices have a modular design and can handle up to 192 connection ports. The uplink to the backbone is 10 Gigabit Ethernet for some central switches in larger locations and Gigabit Ethernet for the rest. The devices support VLAN tagging and SNMP management. In some newer buildings the ProCurve 5400 is used; this device can supply power over the data cabling (Power over Ethernet). (Manufacturer's information)

Layer 4/7 switches (service load balancer): Manufactured by F5

In order to distribute the load and ensure the system stability of important servers, two layer-4/layer-7 switches (service load balancers) of the type F5 BigIP 8900 are used. Each device monitors the other; if one malfunctions, the remaining switch takes over the function of the failed one. WWW (external, internal and virtual WWW servers), LDAP, RADIUS, PAC and DHCP servers are among the servers currently connected. (Manufacturer's information)
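
The following sketch merely illustrates the principle of such health monitoring for a server pool; the host names and the plain HTTP probe are hypothetical and do not reflect the actual F5 BigIP configuration:

    # Conceptual sketch of an HTTP health monitor for a pool of web servers,
    # as a service load balancer might run it. Host names and probe details
    # are hypothetical, not the configuration used in the MWN.
    import urllib.request

    POOL = ["www1.example.mwn.de", "www2.example.mwn.de"]   # hypothetical members

    def is_healthy(host: str, timeout: float = 2.0) -> bool:
        """A member counts as healthy only if it answers the probe with HTTP 200."""
        try:
            with urllib.request.urlopen(f"http://{host}/", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    # Only healthy members remain in the rotation; incoming requests are then
    # distributed over this list, e.g. round-robin.
    active = [h for h in POOL if is_healthy(h)]
    print("members receiving traffic:", active)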

Numbers:

(not including the medical networks, HM, and TUM computer science)

Routers (not counting DSL routers): 16
Switches: 1,400+
LANs (sub-networks): IPv4: 5,000+, IPv6: 900+

Backbone

Most of the Telekom fiber optic lines terminate at the TUM main campus and most of the M-net lines at the LMU main campus. The ring structure LRZ - Garching campus - LMU main campus - Großhadern campus - TUM main campus - LRZ gives the large campus areas a redundant connection.

The type of connection that individual locations have to the MWN backbone depends on the volumes of data transferred and the size of the respective location (number of terminals connected). The type of connection is adjusted as required to suit the respective conditions (bandwidth demand) as a part of the network management and in coordination with the users.

LANs

Most of the building LANs are supervised centrally by the LRZ. The network outlet is generally regarded as the transfer point. Management of an institution's LAN is handled in coordination with the people responsible for the network, whose names must be reported to the LRZ. In defined cases (in which the institution has the necessary know-how), the interface on the router can also be defined as the transfer point; in these cases, the LRZ is responsible only for the connectivity to the MWN.

WLAN

Wireless access to the MWN (IEEE 802.11a/b/g/n/ac) is provided at many locations when requested by the universities. There are currently (February 2016) 3,056 access points in operation, and new access points are continually being added. The standard 802.11g (up to 54 Mbit/s) is supported at all locations, 802.11a and 802.11n at many locations, and 802.11ac in some buildings. The LRZ uses AP-135, AP-215, AP-275 and AP-325 access points from Alcatel-Lucent (controller-based) as well as HP MSM-460, MSM-422, MSM-320 and MSM-310 access points. More information is available on the page WLAN in the MWN.

Management systems

Tivoli/Netcool from IBM is the network management platform for all network components. HP ProCurve Manager Plus is also used for managing the HP switches. A so-called Customer Network Management application (http://www.cnm.mwn.de) is operated so that customers of the Munich Scientific Network can be supplied with information on the status of the backbone at all times. It provides information on backbone availability, throughput and utilization, as well as on the interconnection points to the individual building and institution networks. InfoVista is used as the reporting tool.
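
As a simplified illustration of how throughput and utilization figures can be collected from SNMP-capable components, the following sketch polls the 64-bit counter ifHCInOctets (IF-MIB) twice via pysnmp and derives an average inbound rate. Host name, community string and interface index are placeholders; the production tools listed above are naturally far more comprehensive.

    # Minimal sketch: average inbound rate of one interface from two SNMP polls
    # of the 64-bit counter ifHCInOctets. Host, community and ifIndex are
    # placeholders, not actual MWN values.
    import time
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    HOST, COMMUNITY, IF_INDEX = "switch.example.mwn.de", "public", 1

    def if_hc_in_octets() -> int:
        error_indication, error_status, _, var_binds = next(
            getCmd(SnmpEngine(),
                   CommunityData(COMMUNITY, mpModel=1),          # SNMPv2c
                   UdpTransportTarget((HOST, 161)),
                   ContextData(),
                   ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", IF_INDEX)))
        )
        if error_indication or error_status:
            raise RuntimeError(error_indication or error_status.prettyPrint())
        return int(var_binds[0][1])

    INTERVAL = 60                           # seconds between the two polls
    first = if_hc_in_octets()
    time.sleep(INTERVAL)
    second = if_hc_in_octets()
    print(f"average inbound rate: {(second - first) * 8 / INTERVAL / 1e6:.1f} Mbit/s")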

Medical/scientific network

The networks at the locations of the medical departments Rechts der Isar (right of the Isar - RdI), Großhadern and the inner-city clinics are connected to the MWN via Gigabit Ethernet. The respective computer center is responsible for the structure and operation of these networks.

Student residence halls

The LRZ allows residence halls to have a permanent connection to the MWN, and consequently to the Internet, via a dedicated line, DSL technology or a wireless link. The organization responsible for a residence hall bears the costs of the connection, but no fees are charged for network use. At this time, 49 residence halls are connected to the MWN: 33 of them via fiber optic cable at 100 Mbit/s or 1 Gbit/s, 2 via Fibre DSL, 10 via wireless links, 1 via laser, 1 via microwave and 2 via DSL.

Integration of voice communication

To save costs, the coupling of local telephone systems to the central systems of the TUM or LMU is implemented over the existing fiber optic lines where possible. The connection is made either directly via IP (for example, between the LMU main building and Martiusstr. 4) or by means of S2m IP adapters from RAD (for example, for the Großhadern telecommunication system). The following paths of the cross-connection network are based on the same technology:

Board of Building and Public Works - TU switchboards
Board of Building and Public Works - LMU switchboards
Board of Building and Public Works - Weihenstephan
Board of Building and Public Works - Klinikum Rechts der Isar (Hospital to the right of the Isar)

LRZ - LMU switchboards
Großhadern - Oberschleißheim

External connection

The MWN is connected to the German Science Network (X-WiN) via two trunks of two 10GE interfaces each, of which 11 Gbit/s are usable in each case. One trunk is routed via Erlangen, the other via Frankfurt. All traffic of the Munich Scientific Network, both inbound and outbound, nationally into the X-WiN and internationally into the global Internet, is handled over these connections. Currently around 1,000 TByte of inbound data and 600 TByte of outbound data are transferred per month.
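
For a rough sense of scale, these monthly volumes correspond to average rates of only about 3 Gbit/s inbound and 2 Gbit/s outbound (a back-of-the-envelope estimate assuming decimal terabytes and a 30-day month; peak rates are considerably higher):

    # Back-of-the-envelope conversion of the monthly volumes into average bit rates.
    # Assumes 1 TByte = 10**12 bytes and a 30-day month; peaks are not captured.
    SECONDS_PER_MONTH = 30 * 24 * 3600

    def avg_gbit_per_s(tbyte_per_month: float) -> float:
        return tbyte_per_month * 10**12 * 8 / SECONDS_PER_MONTH / 10**9

    print(f"inbound:  ~{avg_gbit_per_s(1000):.1f} Gbit/s")    # ~3.1 Gbit/s
    print(f"outbound: ~{avg_gbit_per_s(600):.1f} Gbit/s")     # ~1.9 Gbit/s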

As a backup for the Internet connection, there is additionally a connection with a bandwidth of 10 Gbit/s through the local provider M-net. This connection is used only when there is a disruption in the X-WiN access. The routing is switched automatically.

The diagram on the page http://www.lrz.de/services/netz/statistik/diagmhn/ shows the development of the data volume.

Access via VPN

Eight VPN servers (IPsec, SSLVPN) are operated in a cluster for LRZ customers with Internet access from external providers (for example, with a DSL connection). This allows these users to utilize services internal to the MWN (such as access to online media in the university libraries).

Costs

Statutory users or institutions (such as universities) are currently not charged for the utilization of the Munich university network and the external connection (Internet access). Other institutions in the field of science and research that are not considered to be statutory users can use the Munich Scientific Network (MWN) but must pay a share of the costs.

Operation of the MWN backbone costs roughly 1.6 million € a year. This figure includes the rental costs for the wire paths and the ongoing maintenance costs for the fiber optic lines leased from Telekom or M-net, as well as the maintenance costs for the network components and a prorated share of the investment costs for new network components. Personnel costs for operating the infrastructure are not included.

An additional 660,000 € must be expended annually for the external connection (X-WiN connection).

Planned expansions

NIP (Network Investment Program) Phase V

Unfortunately, not all of the existing cabling in the university buildings satisfies current requirements. Coaxial cables are still found in a very few TUM buildings; plans call for these to be replaced with structured cabling as soon as possible. Other buildings have only four-wire wiring, and an application for funding to replace it has been approved. Work is in progress in the TUM mechanical engineering building; at the LMU, the pharmacy and chemistry building will be next. However, these tasks will extend over a longer period because of the large number of data links.

The individual universities, not the LRZ, are in overall charge of the cabling. The measures are financed in the framework of the Bavarian Network Investment Program.